\section{Introduction} The previous version of this article is published as \cite{CQE}. \bigskip Let $G$ be a simple graph on $n$ vertices, $E_G$ the edge set of $G$ and $V_G$ the vertex set of $G$. Let $R=k[x_1, \ldots, x_n]$ be the polynomial ring over a field $k$. The {\em edge ideal} of $G$ is the quadratic squarefree monomial ideal ${\mathcal I}(G)=\langle\{ x_ix_j\}\;\vert\; \{ x_i, x_j\}\in E_G\rangle\subset R$. Then we define the {\em squarefree Alexander dual} of ${\mathcal I}(G)$ as ${\mathcal I}(G)^\vee=\cap_{\{ x_i,x_j\}\in E_G} \langle x_i, x_j\rangle$. Calling ${\mathcal I}(G)^\vee$ the squarefree Alexander dual of ${\mathcal I}(G)$ is natural since ${\mathcal I}(G)^\vee$ is the Stanley--Reisner ideal of the simplicial complex $\Delta^\vee$, that is, the Alexander dual simplicial complex of $\Delta$. Here $\Delta$ is the simplicial complex whose Stanley--Reisner ideal is ${\mathcal I}(G)$. In \cite{HH} Herzog and Hibi give the following definition. Given a graded ideal $I\subset R$, we denote by $I_{\langle d\rangle}$ the ideal generated by the elements of degree $d$ that belong to $I$. Then we say that a (graded) ideal $I\subset R$ is {\em componentwise linear} if $I_{\langle d\rangle}$ has a linear resolution for all $d$. If the graph $G$ is chordal, that is, every cycle of length $m\geq 4$ in $G$ has a chord, then Francisco and Van Tuyl \cite{FvT1} proved that ${\mathcal I}(G)^\vee$ is componentwise linear. (The authors then use the result to show that all chordal graphs are sequentially Cohen-Macaulay.) In this report we examine componentwise linearity of ideals of the form $\bigcap_{\{ x_i,x_j\}\in E_G}\langle x_i, x_j\rangle^{t}$, in particular those arising from complete graphs. \section{Intersections for complete graphs}\label{iocg} Let $K_n$ be the complete graph on $n$ vertices, that is, $\{ x_i,x_j\}\in E_{K_n}$ for all $1\leq i\ne j\leq n$. We write $K_n^{(t)}=\bigcap_{\{ x_i,x_j\}\in E_{K_n}}\langle x_i, x_j\rangle^{t}$. We will show that the ideal $K_n^{(t)}$ is componentwise linear for all $n\geq 3$ and $t\geq 1$. Recall that a {\em vertex cover} of a graph $G$ is a subset $A\subset V_G$ such that every edge of $G$ is incident to at least one vertex of $A$. One can show that $\mathcal I(G)^\vee=\langle x_{i_1}\cdots x_{i_k}\;\vert\; \{x_{i_1},\ldots ,x_{i_k}\}\; {\rm a\; vertex\; cover\;of}\; G\rangle$. A {\em $t$-vertex cover} (or a {\em vertex cover of order $t$}) of $G$ is a vector ${\rm\bf a} =(a_1,\ldots, a_n)$ with $a_i\in\mathbb N$ such that $a_i+a_j\geq t$ for all $\{ x_i,x_j\}\in E_G$. In the proof of our main result, Theorem 2.3, we use the following definition and proposition. \begin{definition}\label{lq} A monomial ideal $I$ is said to have {\em linear quotients} if for some degree ordering of the minimal generators $f_1,\ldots ,f_r$ and all $k>1$, the colon ideals $\langle f_1,\ldots ,f_{k-1}\rangle :f_k$ are generated by a subset of $\{x_1,\ldots ,x_n \}$. \end{definition} \begin{prop}[Proposition 2.6 in \cite{FvT2} and Lemma 4.1 in \cite{CH}]\label{lqcl} If $I$ is a homogeneous ideal with linear quotients, then $I$ is componentwise linear. \end{prop} \begin{thm} The ideal $K_n^{(t)}$ is componentwise linear for all $n\geq 3$ and $t\geq 1$. \begin{proof} For calculating an explicit generating system of $K_n^{(t)}$ we will use $t$-vertex covers. Pick any monomial $m$ in the generating set of $K_n^{(t)}$ and, for indices $k\ne l$, consider the greatest exponents $t_k$ and $t_l$ such that $x_k^{t_k}x_l^{t_l}$ divides $m$.
As $m$ is contained in $\langle x_k,x_l\rangle^{t}$ we must have $t_k+t_l\geq t$. Hence, $K_n^{(t)}$ is generated by the monomials of the form $\rm\bf x^a$, where $\rm\bf a$ is a $t$-vertex cover of $K_n$. That is, the sum of the two lowest exponents in every (monomial) generator of $K_n^{(t)}$ is at least $t$. First we assume that $t=2m+1$ is odd. Using the degree lexicographic ordering $x_1\prec x_2\prec\cdots \prec x_n$ on the minimal generators we get \begin{displaymath} \begin{array}{lrcr} K_n^{(t)}=K_n^{(2m+1)}= & \big\langle x_1^{m}\prod_{i\ne 1} x_i^{m+1}, & \ldots & ,x_n^{m}\prod_{i\ne n} x_i^{m+1}, \\ &&&\\ & x_1^{m-1}\prod_{i\ne 1} x_i^{m+2}, &\ldots & ,x_n^{m-1}\prod_{i\ne n} x_i^{m+2},\\ & & \vdots & \\ & \prod_{i\ne 1} x_i^{2m+1}, &\ldots & ,\prod_{i\ne n} x_i^{2m+1}\big\rangle. \end{array} \end{displaymath} This ordering of the minimal generators satisfies the condition in Definition~\ref{lq}. Hence, $K_n^{(t)}$ has linear quotients and is componentwise linear by Proposition~\ref{lqcl}. If $t=2m$ is even, then the degree lexicographic ordering yields the sequence \begin{displaymath} \begin{array}{lrcr} K_n^{(t)}=K_n^{(2m)}= & \big\langle \prod_{i=1}^{n} x_i^m,\quad x_1^{m-1}\prod_{i\ne 1} x_i^{m+1}, & \ldots & ,x_n^{m-1}\prod_{i\ne n} x_i^{m+1}, \\ &&&\\ & x_1^{m-2}\prod_{i\ne 1} x_i^{m+2}, &\ldots & ,x_n^{m-2}\prod_{i\ne n} x_i^{m+2},\\ & & \vdots & \\ & \prod_{i\ne 1} x_i^{2m}, &\ldots & ,\prod_{i\ne n} x_i^{2m}\big\rangle, \end{array} \end{displaymath} which also satisfies the condition in Definition~\ref{lq}, and the same result follows. \end{proof} \end{thm} \begin{exa} \[ K_{12}^{(5)}=\big\langle \{ x_j^2\prod_{i\ne j} x_i^3\}_{1\leq j\leq 12},\; \{ x_j\prod_{i\ne j} x_i^4\}_{1\leq j\leq 12},\; \{\prod_{i\ne j} x_i^5\}_{1\leq j\leq 12} \big\rangle \] and \[ K_5^{(6)}=\big\langle \prod_{i=1}^5 x_i^3,\;\{ x_j^2\prod_{i\ne j} x_i^4\}_{1\leq j\leq 5},\;\{ x_j\prod_{i\ne j} x_i^5\}_{1\leq j\leq 5},\;\{\prod_{i\ne j} x_i^6\}_{1\leq j\leq 5} \big\rangle. \] \end{exa} \begin{remark} A monomial ideal is called {\em polymatroidal} if it is generated in one degree and its minimal generators satisfy a certain ``exchange condition''. In \cite{HT} Herzog and Takayama show that polymatroidal ideals have linear resolutions. Later Francisco and van Tuyl \cite{FvT2} proved that certain families of ideals $I$ are componentwise linear by showing in their Theorem~3.1 that $I_{\langle d\rangle}$ is polymatroidal for all $d$. The ideals $K_n^{(t)}$ are also polymatroidal, but the proof using the same techniques as in the proof of Theorem~3.1 in \cite{FvT2} is rather tedious and takes a few pages. \end{remark} \section{A counterexample} {\em There exists a chordal graph $G$ such that $\bigcap_{\{ x_i,x_j\}\in E_G}\langle x_i, x_j\rangle^t$ is not componentwise linear for any $t>1$.} \begin{proof} Let $G$ be the chordal graph \[ \xymatrix{&b\ar@{-}[dl]\ar@{-}[dd]\ar@{-}[dr]&\\ a\ar@{-}[dr]&&d\ar@{-}[dl]\\ &c&} \] and denote the intersection $\langle a,b\rangle^t\cap\langle a,c\rangle^t\cap\langle b,c\rangle^t\cap\langle b,d\rangle^t\cap\langle c,d\rangle^t $ by $I_4^{(t)}$. We have \[ I_4^{(1)}=\bigcap_{\{ i,j \}\in E_G}\langle i,j\rangle=\langle bc \rangle + \langle abd,acd\rangle \] and \[ I_4^{(2)}=\bigcap_{\{ i,j \}\in E_G}\langle i,j\rangle^2 =\langle b^2c^2,abcd\rangle + \langle a^2b^2d^2,a^2c^2d^2\rangle.
\] Arguing in the same way as for $K_n^{(t)}$ we see that the minimal generating set consists of generators of exactly degree $2t$ and generators of higher degrees: \begin{itemize} \item If $t_a\leq\lfloor\frac{t}{2} \rfloor$ then $t_b=t-t_a=t_c$ (the sum $t_b+t_c\geq t$ holds automatically) and $t_d=t-t_b=t-t_c=t_a$. We get the set of minimal generators of degree $2t$: \[ \big\{ a^{i}(bc)^{t-i}d^{i} \big\}_{0\leq i\leq\lfloor\frac{t}{2} \rfloor }. \] \item If $t_a> \lfloor\frac{t}{2} \rfloor$, then either $t_b=t-t_a$ and $t_c=t-t_b=t_a$, or $t_c=t-t_a$ and $t_b=t_a$. Further $t_d=t_a$. The set of minimal generators we get in this way is equal to \[ \big\{ (acd)^{i}b^{t-i} \big\}_{\lfloor\frac{t}{2} \rfloor < i\leq t} \cup \big\{ (abd)^{i}c^{t-i} \big\}_{\lfloor\frac{t}{2} \rfloor < i\leq t }. \] The generators in these sets are of degree at least $(2t+1)$ for odd $t$ and of degree at least $(2t+2)$ for even $t$. \end{itemize} \bigskip Now consider the minimal free resolution $\mathcal{F}.$ of $(I_4^{(t)})_{\langle 2t\rangle}$. Since $\mathcal{F}.$ is contained in any free resolution $\mathcal{G}.$ of $(I_4^{(t)})_{\langle 2t\rangle}$ we have that if $F_1$ (the component of $\mathcal{F}.$ in homological degree 1) has a non-zero component in a certain degree, then so does $G_1$. Let $\mathcal{G}.$ be the Taylor resolution of $(I_4^{(t)})_{\langle 2t\rangle}$. The degrees in which $G_1$ has nonzero components come from least common multiples of pairs of minimal generators of $(I_4^{(t)})_{\langle 2t\rangle}$. By considering the above description of the minimal generators in degree $2t$, one sees that $G_1$ has non-zero components only in degrees strictly larger than $2t+1$. Thus $\mathcal{F}.$ cannot be a linear resolution and, hence, $I_4^{(t)}$ is not componentwise linear. \end{proof} \section*{Acknowledgements} First of all we would like to thank the Universit$\rm\grave{a}$ di Catania and the organizers of the PRAGMATIC summer school 2008, especially Alfio Ragusa and Giuseppe Zappal$\rm\grave{a}$. We are deeply grateful to J\"urgen Herzog and Volkmar Welker for their excellent lectures, interesting problems and thorough guidance. \bibliographystyle{amsalpha}
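\bigskip \noindent {\em A computational aside (added here; not part of the original article).} For small values of $n$ and $t$ the statements above are easy to check by machine: one enumerates the minimal monomial generators of $K_n^{(t)}$ (or of $I_4^{(t)}$) as minimal $t$-vertex covers and tests the linear-quotients condition of Definition~\ref{lq} for a degree-lexicographic ordering of the generators. The following Python sketch does this; the function names are ours, and the ordering used is one natural degree-lexicographic order, which agrees with the listing in the proof above for the complete graphs.
\begin{verbatim}
from itertools import product

def minimal_generators(edges, n, t):
    # exponent vectors a = (a_1,...,a_n), 0 <= a_i <= t, with a_i + a_j >= t
    # for every edge {i,j}, kept only if minimal w.r.t. componentwise <=
    covers = [a for a in product(range(t + 1), repeat=n)
              if all(a[i] + a[j] >= t for (i, j) in edges)]
    return [a for a in covers
            if not any(b != a and all(b[i] <= a[i] for i in range(n))
                       for b in covers)]

def has_linear_quotients(gens):
    # <f_1,...,f_{k-1}> : f_k is generated by the monomials lcm(f_i,f_k)/f_k;
    # it is generated by variables iff every such quotient is divisible by a
    # quotient of degree one
    for k in range(1, len(gens)):
        fk = gens[k]
        qs = [tuple(max(fi[j] - fk[j], 0) for j in range(len(fk)))
              for fi in gens[:k]]
        variables = [q for q in qs if sum(q) == 1]
        for q in qs:
            if not any(all(v[j] <= q[j] for j in range(len(q)))
                       for v in variables):
                return False
    return True

if __name__ == "__main__":
    for n, t in [(3, 2), (3, 3), (4, 3), (4, 4)]:
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)]   # K_n
        gens = sorted(minimal_generators(edges, n, t),
                      key=lambda a: (sum(a), a))
        print("K_%d^(%d):" % (n, t), len(gens), "generators,",
              "linear quotients:", has_linear_quotients(gens))
    # the chordal graph G of the last section, vertices a,b,c,d = 0,1,2,3
    edges_G = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    for t in (2, 3):
        degs = sorted({sum(g) for g in minimal_generators(edges_G, 4, t)})
        print("I_4^(%d): generator degrees" % t, degs)
\end{verbatim}
For the complete graphs the check confirms the linear-quotients condition for the listed parameters, while for the chordal graph $G$ it lists the degrees of the minimal generators and exhibits the gap above degree $2t$ used in the argument above.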
\section{Introduction and Motivation} \label{secintro} In order to give the negative-energy solutions of the Dirac equation a meaningful physical interpretation, Dirac proposed that in the vacuum all states of negative energy should be occupied by particles forming the so-called {\em{Dirac sea}}~\cite{dirac2, dirac3}. His idea was that the homogeneous and isotropic Dirac sea configuration of the vacuum should not be accessible to measurements, but deviations from this uniform configuration should be observable. Thus particles are described by occupying additional states having positive energy, whereas ``holes'' in the Dirac sea can be observed as anti-particles. Moreover, Dirac noticed in~\cite{dirac3} that deviations from the uniform sea configuration may also be caused by the interaction with an electromagnetic field. In order to analyze this effect, he first considered a formal sum over all vacuum sea states \begin{equation} \label{seasum} R(t, \vec{x}; t', \vec{x}') = \sum_{l \text{ occupied}} \Psi_l(t, \vec{x}) \:\overline{\Psi_l(t', \vec{x}')} \:. \end{equation} He found that this sum diverges if the space-time point~$(t, \vec{x})$ lies on the light cone centered at~$(t', \vec{x}')$ (i.e.\ if~$(t-t')^2 = |\vec{x} - \vec{x}'|^2$). Next, he inserted an electromagnetic potential into the Dirac equation, \[ \big( i \mbox{$\partial$ \hspace{-1.2 em} $/$} + e \mbox{ $\!\!A$ \hspace{-1.2 em} $/$}(t, \vec{x}) - m \big) \Psi_l(t, \vec{x}) = 0 \:. \] He proceeded by decomposing the resulting sum~\eqref{seasum} as \begin{equation} \label{Rab} R = R_a + R_b \:, \end{equation} where~$R_a$ is again singular on the light cone, whereas~$R_b$ is a regular function. The dependence of~$R_a$ and~$R_b$ on the electromagnetic potential can be interpreted as describing a ``polarization of the Dirac sea'' caused by the non-uniform motion of the sea particles in the electromagnetic field. When setting up an interacting theory, one faces the problem that the total charge density of the sea states is given by the divergent expression \[ \sum_{l \text{ occupied}} e\: \overline{\Psi_l(t, \vec{x})} \gamma^0 \Psi_l(t, \vec{x}) \:. \] Thus the Dirac sea has an infinite charge density, making it impossible to couple it to a Maxwell field. Similarly, the Dirac sea has an infinite negative energy density, leading to divergences in Einstein's equations. Thus before formulating the field equations, one must get rid of the infinite contribution of the Dirac sea to the current and the energy-momentum tensor. In the standard perturbative description of quantum field theory (QFT), this is accomplished by subtracting infinite counter terms (for a more detailed discussion also in connection to renormalization see Section~\ref{secvac} below). Then in the resulting theory, the Dirac sea is no longer apparent. Therefore, it is a common view that the Dirac sea is merely a historical relic which is no longer needed in modern QFT. However, this view is too simple because removing the Dirac sea by infinite counter terms entails conceptual problems. The basic shortcoming can already be understood from the representation~\eqref{Rab} of the Dirac sea in an electromagnetic field. Since the singular term~$R_a$ involves~$\mbox{ $\!\!A$ \hspace{-1.2 em} $/$}$, the counter term needed to compensate the infinite charge density of the Dirac sea must depend on the electromagnetic potential. But then it is no longer clear how precisely this counter term is to be chosen. 
In particular, should the counter term include~$R_b$, or should~$R_b$ not be compensated and instead enter the Maxwell equations? More generally, in a given external field, the counter terms involve the background field, giving a lot of freedom in choosing the counter terms. In curved space-time, the situation is even more problematic because the counter terms depend on the choice of coordinates. Taking the resulting arbitrariness seriously, one concludes that the procedure of subtracting infinite charge or energy densities is not a fully convincing concept. Similarly, infinite counter terms are also needed in order to treat the divergences of the Feynman loop diagrams. Dirac himself was uneasy about these infinities, as he expressed later in his life in a lecture on quantum electrodynamics~\cite[Chapter~2]{dirac4}: \begin{quote} ``I must say that I am very dissatisfied with the situation, because this so-called good theory does involve neglecting infinities which appear in its equations \ldots in an arbitrary way. This is not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small -- not neglecting it just because it is infinitely great and you do not want it!'' \end{quote} The dissatisfaction about the treatment of the Dirac sea in perturbative QFT was my original motivation for trying to set up a QFT where the Dirac sea is not handled by infinite counter terms, but where the states of the Dirac sea are treated on the same footing as the particle states all the way, thus making Dirac's idea of a ``sea of interacting particles'' manifest. The key step for realizing this program is to describe the interaction by a new type of action principle, which has the desirable property that the divergent terms in~\eqref{seasum} drop out of the equations, making it unnecessary to subtract any counter terms. This action principle was first introduced in~\cite{PFP}. More recently, in~\cite{sector} it was analyzed in detail for a system of Dirac seas in the simplest possible configuration referred to as a single sector. Furthermore, the connection to entanglement and second quantization was clarified in~\cite{entangle}. Putting these results together, we obtain a consistent formulation of QFT which is satisfying conceptually and reproduces the results of perturbative QFT. Moreover, our approach gives surprising results which go beyond standard QFT, like a mechanism for the generation of boson masses and small corrections to the field equations which violate causality. The aim of the present paper is to explain a few ideas behind the fermionic projector approach and to review the present status of the theory. \section{Perturbative Quantum Field Theory and its Shortcomings} \label{secvac} Let us revisit the divergences in~\eqref{seasum} in the context of modern QFT. Historically, Dirac's considerations were continued by Heisenberg~\cite{heisenberg2}, who analyzed the singularities of~$R_a$ in more detail and used physical arguments involving conservation laws and the requirement of gauge invariance to deduce a canonical form of the counter terms in Minkowski space. This result was then taken up by Uehling and Serber~\cite{uehling, serber} to deduce corrections to the Maxwell equations which are now known as the one-loop vacuum polarization. A more systematic analysis became possible by {\em{covariant perturbation theory}} as developed following the pioneering work of Schwinger, Feynman and Dyson (see for example~\cite{schwinger, feynman, dyson2}). 
In the resulting formulation of the interaction in terms of Feynman diagrams, one can compute the loop corrections and the $S$-matrix of a scattering process, in excellent agreement with experiments. Moreover, the procedure of subtracting infinite counter terms was replaced by the {\em{renormalization program}}, which can be outlined as follows (for details cf.~\cite{peskin+schroeder} or~\cite{collins}): In order to get rid of the divergences of the Feynman diagrams, one first regularizes the theory. Then one shows that the regularization can be removed if at the same time the coupling constants and masses in the theory are suitably rescaled. Typically, the coupling constants and the masses diverge as the ultraviolet regularization is removed, but in such a way that the effective theory obtained in the limit has finite effective coupling constants and finite effective masses. The renormalization program is carried out order by order in perturbation theory. Clearly, the procedure is not unique as there is a lot of freedom in choosing the regularization. A theory is called {\em{renormalizable}} if this freedom can be described to all orders in perturbation theory by a finite number of empirical constants. Despite its overwhelming success, the present formulation of QFT suffers from serious shortcomings. A major technical problem is that, despite considerable effort (see for example~\cite{glimm+jaffe}), one has not succeeded in rigorously constructing an interacting QFT in Minkowski space. In particular, the renormalized perturbation series of quantum electrodynamics makes sense only as a formal power expansion in the coupling constant. A more conceptual difficulty is that the covariant perturbation expansion makes statements only on the scattering matrix. This makes it possible to compute the asymptotic in- and out-states in a scattering process. But it remains unclear what the quantum field is at intermediate times, while the interaction takes place. Moreover, one needs free asymptotic states to begin with. But under realistic conditions, the system interacts at all times, so that there are no asymptotic states. What does the quantum field mean in this situation? For example, if one tries to formulate the theory in a fixed time-dependent background field, then there are no plane-wave solutions to perturb from, so that standard perturbation theory fails. If one tries to include gravity, the equivalence principle demands that the theory should be covariant under general coordinate transformations. But the notion of free states distinguishes specific coordinate systems, in which the free states are represented by plane waves. A related difficulty is entailed in the notion of the ``Feynman propagator'', defined by the conditions that positive and negative frequencies should propagate to the future and past, respectively. Again the notion ``frequency'' refers to an observer, explaining why Feynman's frequency conditions are not invariant under general coordinate transformations. To summarize, present QFT involves serious conceptual difficulties if one wants to go beyond the computation of the scattering matrix and tries to understand the dynamics of the quantum field at intermediate times or considers systems which for large times do not go over to a free field theory in Minkowski space. 
In order to understand these conceptual difficulties in more detail, it is a good starting point to disregard the divergences caused by the interaction and to consider {\em{free quantum fields in an external field}}. In this considerably simpler setting, there are several approaches to construct quantum fields, as we now outline. Historically, quantum fields in an external field were first analyzed in the Fock space formalism. Klaus and Scharf~\cite{klaus+scharf1, klaus+scharf2} considered the Fock representation of the electron-positron field in the presence of a static external field. They noticed that the Hamiltonian needs to be regularized by suitable counter terms which depend on the external field. Thus the simple method of the renormalization program of removing the regularization while adjusting the bare masses and coupling constants no longer works. Similar to the explanation in Section~\ref{secintro}, one needs to subtract infinite counter terms which necessarily involve the external field. Klaus and Scharf also realized that the Fock representation in the external field is in general inequivalent to the standard Fock representation of free fields in Minkowski space (see also~\cite{nenciu+scharf, klaus}). This result shows that a perturbation expansion about the standard Fock vacuum necessarily fails. In the time-dependent setting, Fierz and Scharf~\cite{fierz+scharf} proposed that the Fock representation should be adapted to the external field as measured by a local observer. Then the Fock representation becomes time and observer dependent. This implies that the distinction between particles and anti-particles no longer has an invariant meaning, but it depends on the choice of an observer. In this formulation, the usual particle interpretation of quantum states only makes sense for the in- and outgoing scattering states, but it has no invariant meaning for intermediate times. For a related approach which allows for the construction of quantum fields in the presence of an external magnetic field see~\cite{merkl}. In all the above approaches, the Dirac sea leads to divergences, which must be treated by an ultraviolet regularization and suitable counter terms. As an alternative to working with Fock spaces, one can use the so-called {\em{point splitting renormalization method}}, which is particularly useful for renormalizing the expectation value of the energy-momentum tensor~\cite{christensen}. Similar to the above procedure of Dirac and Heisenberg for treating the charge density of the Dirac sea, the idea is to replace a function of one variable~$T(x)$ by a two-point distribution~$T(x,y)$, and to take the limit~$y \rightarrow x$ after subtracting suitable singular distributions which take the role of counter terms. Analyzing the singular structure of the counter terms leads to the so-called {\em{Hadamard condition}} (see for example~\cite{fulling+sweeny+wald}). Reformulating the Hadamard condition for the two-point function as a local spectral condition for the wave front set~\cite{radzikowski} turns out to be very useful for the axiomatic formulation of free quantum fields in curved space-time. As in the Fock space formalism, in the point splitting approach the particle interpretation depends on the observer. This is reflected mathematically by the fact that the Hadamard condition specifies the two-point distribution only up to smooth contributions. For a good introduction to free quantum fields in curved space-time we refer to the recent book~\cite{baer+fredenhagen}. 
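To summarize the point-splitting idea in a formula (our schematic summary, not taken from the cited references), the prescription reads
\[
T^{\text{ren}}(x) \;=\; \lim_{y \rightarrow x} \Big( T(x,y) - H(x,y) \Big) \:,
\]
where~$H(x,y)$ stands for the singular counter terms of Hadamard form.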
We again point out that in all the above papers on quantum fields in an external field or in curved space-time, only free fields are considered. The theories are not set up in a way where it would be clear how to describe an additional interaction in terms of Feynman diagrams. Thus it is fair to say that the formulation of a background independent interacting perturbative QFT is an open and apparently very difficult problem. All the methods so far suffer from the conceptual difficulty that to avoid divergences, one must introduce infinite counter terms ad-hoc. \section{An Action Principle for the Fermionic Projector in Space-Time} In order to introduce the fermionic projector approach, we now define our action principle on a formal level (for the analytic justification and more details see~\cite[Chapter~2]{sector}). Similar to~\eqref{seasum}, we describe our fermion system for any points~$x$ and~$y$ in Minkowski space by the so-called {\em{kernel of the fermionic projector}} \begin{equation} \label{Pkernel} P(x,y) = - \!\!\!\sum_{l \text{ occupied}} \Psi_l(x) \:\overline{\Psi_l(y)} \:, \end{equation} where by the occupied states we mean the sea states except for the anti-particle states plus the particle states. For any~$x$ and~$y$, we introduce the {\em{closed chain}}~$A_{xy}$ by \begin{equation} \label{cchain} A_{xy} = P(x,y)\, P(y,x)\:. \end{equation} It is a $4 \times 4$-matrix which can be considered as a linear operator on the Dirac wave functions at~$x$. For such a linear operator~$A$ we define the {\em{spectral weight}} $|A|$ by \[ |A| = \sum_{i=1}^4 |\lambda_i|\:, \] where~$\lambda_1, \ldots, \lambda_4$ are the eigenvalues of~$A$ counted with algebraic multiplicities. We define the {\em{Lagrangian}} $\L$ by \begin{equation} \label{Ldef} \L_{xy}[P] = |A_{xy}^2| - \frac{1}{4}\: |A_{xy}|^2 \:. \end{equation} Integrating over space-time, we introduce the functionals \begin{equation} \label{STdef} {\mathcal{S}}[P] \;=\; \iint \L_{xy}[P] \:d^4 x\: d^4y \qquad \text{and} \qquad {\mathcal{T}}[P] \;=\; \iint |A_{xy}|^2 \:d^4 x\: d^4y\:. \end{equation} Our action principle is to \begin{equation} \label{actprinciple} \text{minimize ${\mathcal{S}}$ for fixed~${\mathcal{T}}$}\:, \end{equation} under variations of the wave functions~$\Psi_l$ which preserve the normalization with respect to the space-time inner product \begin{equation} \label{stip} \,<\!\! \Psi | \Phi \!\!>\, = \int \overline{\Psi(x)} \Phi(x)\: d^4x\:. \end{equation} The action principle~\eqref{actprinciple} is the result of many thoughts and extensive calculations carried out over several years. The considerations which eventually led to this action principle are summarized in~\cite[Chapter~5]{PFP}. Here we only make a few comments. We first note that the factor~$1/4$ in~\eqref{Ldef} is merely a convention, as the value of this factor can be arbitrarily changed by adding to~${\mathcal{S}}$ a multiple of the constraint~${\mathcal{T}}$. Our convention has the advantage that for the systems under consideration here, the Lagrange multiplier of the constraint vanishes, making it possible to disregard the constraint in the following discussion. Next, we point out that taking the absolute value of an eigenvalue of the closed chain is a non-linear (and not even analytic) operation, so that our Lagrangian is not quadratic. As a consequence, the corresponding Euler-Lagrange equations are {\em{nonlinear}}. Our Lagrangian has the property that it vanishes if~$A$ is a multiple of the identity matrix. 
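As a simple illustration (this short check is ours and not part of the original text): if all four eigenvalues of the closed chain are equal to one complex number~$\lambda$, as is the case when~$A_{xy}$ is a multiple of the identity, then
\[
|A_{xy}^2| = 4\, |\lambda|^2 \qquad \text{and} \qquad \frac{1}{4}\: |A_{xy}|^2 = \frac{1}{4}\: \big( 4\, |\lambda| \big)^2 = 4\, |\lambda|^2 \:,
\]
so that indeed~$\L_{xy}[P]=0$.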
Furthermore, it vanishes if the eigenvalues of~$A$ form a complex conjugate pair. These properties are responsible for the fact that the singularities on the light cone discussed in the introduction drop out of the Euler-Lagrange equations. Moreover, it is worth noting that the action involves only the fermionic wave functions, but {\em{no bosonic fields}} appear at this stage. The interaction may be interpreted as a direct particle-particle interaction of all the fermions, taking into account the sea states. We finally emphasize that our action involves neither coupling constants nor any other free parameters. Clearly, our setting is very different from the conventional formulation of physics. We have neither a fermionic Fock space nor any bosonic fields. Although the expression~\eqref{Pkernel} resembles the two-point function, the $n$-point functions are not defined in our setting. More generally, it seems inappropriate and might even be confusing to use notions from QFT, which have no direct correspondence here. Thus one should be willing to accept that we are in a new mathematical framework where we describe the physical system on the fundamental level by the fermionic projector with kernel~\eqref{Pkernel}. The connection to QFT is not obvious at this stage, but will be established in what follows. We finally remark that our approach of working with a nonlinear functional on the fermionic states has some similarity to the ``non-linear spinor theory'' by Heisenberg et al~\cite{heisenberg}, which was controversially discussed in the 1950s, but did not get much attention after the invention of renormalization. We point out that our action~\eqref{actprinciple} is completely different from the equation~$\mbox{$\partial$ \hspace{-1.2 em} $/$} \Psi \pm l^2 \gamma^5 \gamma^j \Psi\: (\overline{\Psi} \gamma_j \gamma^5 \Psi) =0$ considered in~\cite{heisenberg}. Thus there does not seem to be a connection between these approaches. \section{Intrinsic Formulation in a Discrete Space-Time} \label{secdst} Our action principle has the nice feature that it does not involve the differentiable, topological or causal structure of the underlying Minkowski space. This makes it possible to drop these structures, and to formulate our action principle intrinsically in a discrete space-time. To this end, we simply replace Minkowski space by a finite point set~$M$. To every space-time point we associate the {\em{spinor space}} as a four-dimensional complex vector space endowed with an inner product of signature~$(2,2)$, again denoted by~$\overline{\Psi} \Phi$. A {\em{wave function}}~$\Psi$ is defined as a function which maps every space-time point~$x \in M$ to a vector~$\Psi(x)$ in the corresponding spinor space. For a (suitably orthonormalized) finite family of wave functions~$\Psi_1, \ldots, \Psi_f$ we then define the kernel of the fermionic projector in analogy to~\eqref{Pkernel} by \[ P(x,y) = -\sum_{l=1}^f \Psi_l(x) \overline{\Psi_l(y)} \:. \] Now the action principle can be introduced again by~\eqref{STdef}--\eqref{stip} if we only replace the space-time integrals by sums over~$M$. The formulation in discrete space-time is a possible approach for physics on the Planck scale. The basic idea is that the causal and metric structure should be induced on the space-time points by the fermionic projector as a consequence of a spontaneous symmetry breaking effect. 
In non-technical terms, this {\em{structure formation}} can be understood by a self-organization of the wave functions as described by our action principle. More specifically, a discrete notion of causality is introduced as follows: \begin{Def} {\bf{(causal structure)}} \label{defcausal} Two space-time points~$x,y \in M$ are called {\bf{timelike}} separated if the spectrum of the product~$P(x,y) P(y,x)$ is real. Likewise, the points are {\bf{spacelike}} separated if the spectrum of~$P(x,y) P(y,x)$ forms two complex conjugate pairs having the same absolute value. \end{Def} \noindent We refer the reader interested in the spontaneous structure formation and the connection between discrete and continuum space-times to the survey paper~\cite{lrev} and the references therein. The only point of relevance for what follows is that in the discrete formulation, our action principle is finite and minimizers exist. Thus there is a fundamental setting where the physical equations are intrinsically defined and have regular solutions without any divergences. \section{Bosonic Currents Arising from a Sea of Interacting Dirac Particles} In preparation for analyzing our action principle, we need a systematic method for describing the kernel of the fermionic projector in position space. In the vacuum, the formal sum in~\eqref{Pkernel} is made precise as the Fourier integral of a distribution supported on the lower mass shell, \begin{equation} \label{Psea} P^\text{sea}(x,y) \;=\; \int \frac{d^4k}{(2 \pi)^4}\: (k\mbox{ \hspace{-1.13 em} $/$}+m)\: \delta(k^2-m^2)\: \Theta(-k^0)\: e^{-ik(x-y)} \end{equation} (where~$\Theta$ is the Heaviside function). In order to introduce particles and anti-particles, one occupies (suitably normalized) positive-energy states or removes states of the sea, \begin{equation} \label{particles} P(x,y) = P^\text{sea}(x,y) -\frac{1}{2 \pi} \sum_{k=1}^{n_f} \Psi_k(x) \overline{\Psi_k(y)} +\frac{1}{2 \pi} \sum_{l=1}^{n_a} \Phi_l(x) \overline{\Phi_l(y)} \:. \end{equation} Next we want to modify the physical system so as to describe a general interaction. To this end, it is useful to regard~$P(x,y)$ as the integral kernel of an operator~$P$ on the wave functions, i.e. \[ (P \Psi)(x) := \int P(x,y)\: \Psi(y)\: d^4y \:. \] Since we want to preserve the normalization of the fermionic states with respect to the inner product~\eqref{stip}, the interacting fermionic projector~$\tilde{P}$ can be obtained from the vacuum fermionic projector~$P$ by the transformation \[ \tilde{P} = U P U^{-1} \] with an operator~$U$ which is unitary with respect to the inner product~\eqref{stip}. The calculation \[ 0 = U (i \mbox{$\partial$ \hspace{-1.2 em} $/$} - m) P U^{-1} = U (i \mbox{$\partial$ \hspace{-1.2 em} $/$} - m) U^{-1} \tilde{P} \] shows that~$\tilde{P}$ is a solution of the Dirac equation \[ (i \mbox{$\partial$ \hspace{-1.2 em} $/$} + {\mathscr{B}} - m) \tilde{P} = 0 \qquad \text{where} \qquad {\mathscr{B}} := i U \mbox{$\partial$ \hspace{-1.2 em} $/$} U^{-1} - i \mbox{$\partial$ \hspace{-1.2 em} $/$} \:. \] This consideration shows that we can describe a general interaction by a potential~${\mathscr{B}}$ in the Dirac equation, provided that~${\mathscr{B}}$ is an operator of a sufficiently general form. It can be a multiplication or differential operator, but it could even be a nonlocal operator. The usual bosonic potentials correspond to special choices of~${\mathscr{B}}$. This point of view is helpful because then the bosonic potentials no longer need to be considered as fundamental physical objects. 
They merely become a technical device for describing specific variations of the Dirac sea. In order to clarify the structure of~$\tilde{P}$ near the light cone, one performs the so-called {\em{causal perturbation expansion}} and the {\em{light-cone expansion}}. For convenience omitting the tilde, one gets in analogy to~\eqref{Rab} a decomposition of the form \begin{equation} \label{Pdecomp} P^\text{sea}(x,y) = P^{\text{sing}}(x,y) + P^{\text{reg}}(x,y) \:, \end{equation} where~$P^{\text{sing}}(x,y)$ is a distribution which is singular on the light cone and can be expressed explicitly by a series of terms involving line integrals of~${\mathscr{B}}$ and its partial derivatives along the line segment~$\overline{xy}$. The contribution~$P^{\text{reg}}$, on the other hand, is a smooth function which is noncausal in the sense that it depends on the global behavior of~${\mathscr{B}}$ in space-time. It can be decomposed further into so-called low-energy and high-energy contributions which have a different internal structure. For simplicity, we here omit all details and only mention two points which are important for the physical understanding. First, one should keep in mind that the distribution~$P^\text{sea}$ as defined by the causal perturbation expansion distinguishes a unique reference state, even if~${\mathscr{B}}$ is time dependent. Thus the decomposition~\eqref{particles} yields a globally defined picture of particles and anti-particles, independent of a local observer. Second, it is crucial for the following constructions that the line integrals appearing in~$P^\text{sing}$ also involve partial derivatives of~${\mathscr{B}}$. In the case when~${\mathscr{B}}=\mbox{ $\!\!A$ \hspace{-1.2 em} $/$}$ is an electromagnetic potential (or similarly a general gauge field), one finds that~$P^\text{sing}$ involves the electromagnetic field tensor and the electromagnetic current. More specifically, the contribution to~$P^\text{sing}$ involving the electromagnetic current takes the form \begin{equation} \label{current} -\frac{e}{16 \pi^3} \int_0^1 (\alpha-\alpha^2) \gamma_k \, (\partial^k_{\;\:l} A^l - \Box A^k) \big|_{\alpha y + (1-\alpha) x} \;\lim_{\varepsilon \searrow 0} \log \Big( (y-x)^2 + i \varepsilon \:(y^0-x^0) \Big) \:. \end{equation} The appearance of this contribution to the fermionic projector can be understood, similarly to the ``polarization of the Dirac sea'' mentioned in the introduction, as a result of the non-uniform motion of the sea particles in the electromagnetic field. This contribution influences the closed chain~\eqref{cchain} and thus has an effect on our action principle~\eqref{actprinciple}. In this way, the electromagnetic current also enters the corresponding Euler-Lagrange equations. In general terms, one can say that in our formulation, the bosonic currents arise in the physical equations only as a consequence of the collective dynamics of the particles of the Dirac sea. \section{The Continuum Limit, the Field Equations} \label{secfield} We now outline the method for analyzing our action principle for the fermionic projector~\eqref{Pdecomp}. Since~$P^\text{sing}$ is a distribution which is singular on the light cone, the pointwise product~$P(x,y) \,P(y,x)$ is ill-defined. Thus in order to make mathematical sense of the Euler-Lagrange equations corresponding to our action principle, we need to introduce an ultraviolet regularization.
Such a regularization is not a conceptual problem because the setting in discrete space-time in Section~\ref{secdst} can be regarded as a special regularization. Thus in our approach, a specific, albeit unknown regularization should have a fundamental significance. Fortunately, the details of this regularization are not needed for our analysis. Namely, for a general class of regularizations of the vacuum Dirac sea (for details see~\cite[Chapter~3]{sector} or~\cite[Chapter~4]{PFP}), the Euler-Lagrange equations have a well-defined asymptotic behavior when the regularization is removed. In this limit, the Euler-Lagrange equations give rise to differential equations involving the particle and anti-particle wave functions as well as the bosonic potentials and currents, whereas the Dirac sea disappears. This construction is subsumed under the notion {\em{continuum limit}}. In the recent paper~\cite{sector}, the continuum limit was analyzed in detail for systems which in the vacuum are described in generalization of~\eqref{Psea} by a sum of Dirac seas, \begin{equation} \label{seavac} P^\text{sea}(x,y) \;=\; \sum_{\beta=1}^g \int \frac{d^4k}{(2 \pi)^4}\: (k\mbox{ \hspace{-1.13 em} $/$}+m_\beta)\: \delta(k^2-m_\beta^2)\: \Theta(-k^0)\: e^{-ik(x-y)} \:. \end{equation} Such a configuration is referred to as a {\em{single sector}}. The parameter~$g$ can be interpreted as the number of generations of elementary particles. It turns out that in the case~$g=1$ of one Dirac sea, the continuum limit gives equations which are only satisfied in the vacuum, in simple terms because the logarithm in current terms like~\eqref{current} causes problems. In order to get non-trivial differential equations, one must assume that there are exactly {\em{three generations}} of elementary particles. In this case, the logarithms in the current terms of the three Dirac seas can compensate each other, as is made precise by a uniquely determined so-called local axial transformation. Analyzing the possible operators~${\mathscr{B}}$ in the corresponding Dirac equation in an exhaustive way (including differential and nonlocal operators), one finds that the dynamics is described completely by an {\em{axial potential}} $A_\text{a}$ coupled to the Dirac spinors. We thus obtain the coupled system \begin{equation} \label{daeq} (i \mbox{$\partial$ \hspace{-1.2 em} $/$} + \gamma^5 \mbox{ $\!\!A$ \hspace{-1.2 em} $/$}_\text{\rm{a}} - m) \Psi = 0 \:,\qquad C_0 \,j^k_\text{\rm{a}} - C_2\, A^k_\text{\rm{a}} = 12 \pi^2\, J^k_\text{\rm{a}}\:, \end{equation} where~$j_\text{\rm{a}}$ and~$J_\text{\rm{a}}$ are the axial currents of the gauge field and the Dirac particles, \begin{align} j^k_\text{\rm{a}} &= \partial^k_{\;\:l} A^l_\text{\rm{a}} - \Box A^k_\text{\rm{a}} \\ J^i_\text{\rm{a}} &= \sum_{k=1}^{n_f} \overline{\Psi_k} \gamma^5 \gamma^i \Psi_k - \sum_{l=1}^{n_a} \overline{\Phi_l} \gamma^5 \gamma^i \Phi_l\:. \label{JDirac} \end{align} As in~\eqref{particles}, the wave functions~$\Psi_k$ and~$\Phi_l$ denote the occupied particle and anti-particle states, respectively. The constants~$C_0$ and~$C_2$ in~\eqref{daeq} are empirical parameters which take into account the unknown microscopic structure of space-time. For a given regularization method, these constants can be computed as functions of the fermion masses. For clarity, we point out that the Dirac current~\eqref{JDirac} involves only the particle and anti-particle states of the system, but not the states forming the Dirac sea. 
The reason is that the contributions by the sea states cancel each other in our action principle. As a consequence, only the deviations from the completely filled sea configuration contribute to the Dirac current. In the continuum limit, pair creation is described following Dirac's original idea by removing a sea state and occupying instead a particle state. To avoid confusion, we mention that the wave functions~$\Psi_k$ and~$\Phi_l$ need to be suitably orthonormalized. Taking this into account, the sum of the one-particle currents in~\eqref{JDirac} is indeed the same as the expectation value of the current operator computed for the Hartree-Fock state obtained by taking the wedge product of the wave functions~$\Psi_k$ and~$\Phi_l$. We finally remark that more realistic models are obtained if one describes the vacuum instead of~\eqref{seavac} by a direct sum of several sectors. The larger freedom in perturbing the resulting Dirac operator gives rise to several effective gauge fields, which couple to the fermions in a specific way. As shown in~\cite[Chapters~6-8]{PFP}, this makes it possible to realize the gauge groups and couplings of the standard model. The derivation of the corresponding field equations is work in progress. \section{A New Mechanism for the Generation of Boson Masses} The term~$C_2\, A^k_\text{\rm{a}}$ in~\eqref{daeq} gives the axial field a rest mass~$M = \sqrt{C_2/C_0}$. This bosonic mass term is surprising, because in standard gauge theories a boson can be given a mass only by the Higgs mechanism of spontaneous symmetry breaking. We now explain how the appearance of the mass term in~\eqref{daeq} can be understood on a non-technical level (for more details see~\cite[\S6.2 and~\S8.5]{sector}). In order to see the connection to gauge theories, it is helpful to consider the behavior of the Dirac operator and the fermionic projector under gauge transformations. We begin with the familiar gauge transformations of electrodynamics, for simplicity in the case~$m=0$ of massless fermions. Thus assume that we have a pure gauge potential $A = \partial \Lambda$ with a real function~$\Lambda(x)$. This potential can be inserted into the Dirac operator by the transformation \[ i \mbox{$\partial$ \hspace{-1.2 em} $/$} \;\rightarrow\; e^{i \Lambda(x)} i \mbox{$\partial$ \hspace{-1.2 em} $/$}\, e^{-i \Lambda(x)} = i \mbox{$\partial$ \hspace{-1.2 em} $/$} + (\mbox{$\partial$ \hspace{-1.2 em} $/$} \Lambda) \:, \] showing that the electromagnetic potential simply describes the phase transformation $\Psi(x) \rightarrow e^{i \Lambda(x)} \Psi(x)$ of the wave functions. Since the multiplication operator~$U=e^{i \Lambda}$ is unitary with respect to the inner product~\eqref{stip}, it preserves the normalization of the fermionic states. Thus in view of~\eqref{Pkernel}, the kernel of the fermionic projector simply transforms according to \[ P(x,y) \;\rightarrow\; e^{i \Lambda(x)} P(x,y)\, e^{-i \Lambda(y)} \:. \] When forming the closed chain~\eqref{cchain}, the phase factors drop out. This shows that our action principle is {\em{gauge invariant}} under the local $U(1)$-transformations of electrodynamics. We next consider an axial potential~$A_\text{a}$ as appearing in~\eqref{daeq}. 
A pure gauge potential~$A_\text{a}= \partial \Lambda$ can be generated by the transformation \[ i \mbox{$\partial$ \hspace{-1.2 em} $/$} \;\rightarrow\; e^{i \gamma^5 \Lambda(x)} i \mbox{$\partial$ \hspace{-1.2 em} $/$}\, e^{i \gamma^5 \Lambda(x)} = i \mbox{$\partial$ \hspace{-1.2 em} $/$} + \gamma^5 (\mbox{$\partial$ \hspace{-1.2 em} $/$} \Lambda) \:, \] suggesting that the kernel of the fermionic projector should be transformed according to \[ P(x,y) \;\rightarrow\; e^{-i \gamma^5 \Lambda(x)} P(x,y)\, e^{-i \gamma^5 \Lambda(x)} \:. \] The main difference compared to the electromagnetic case is that now the transformation operator~$U=e^{-i \gamma^5 \Lambda(x)}$ is {\em{not}} unitary with respect to the inner product~\eqref{stip}. This leads to the technical complication that we need to be concerned about the normalization of the fermionic states. More importantly, the phases no longer drop out of the closed chain, because \begin{align*} A_{xy} \rightarrow & \left( e^{-i \gamma^5 \Lambda(x)} P(x,y)\, e^{-i \gamma^5 \Lambda(x)} \right) \left( e^{-i \gamma^5 \Lambda(y)} P(y,x)\, e^{-i \gamma^5 \Lambda(x)} \right) \\ &= e^{-i \gamma^5 \Lambda(x)} P(x,y)\, e^{-2 i \gamma^5 \Lambda(y)} P(y,x)\, e^{-i \gamma^5 \Lambda(x)} \:. \end{align*} This shows that in general, our action is not invariant under axial gauge transformations. As a consequence, the appearance of the axial potential in the field equations does not contradict gauge invariance. A more detailed analysis shows that the above axial transformation indeed changes only the phases of the eigenvalues~$\lambda_i$ of the closed chain, and these phases drop out when taking their absolute values as appearing in the closed chain. But repeating the above argument in the case~$m>0$ of massive fermions, one finds additional contributions proportional to~$m^2 A_a$ which even affect the absolute values~$|\lambda_i|$. These contributions are responsible for the bosonic mass term in the field equations. In simple terms, the bosonic mass arises because the corresponding potential does not describe a local symmetry of our system. More specifically, an axial gauge transformation changes the relative phase of the left- and right-handed components of the fermionic projector. This relative phase does change the physical system and is thus allowed to enter the physical equations. In order to get a closer connection to the Higgs mechanism, one can say that the axial gauge symmetry is spontaneously broken by the states of the Dirac sea, because they distinguish the relative phase of the left- and right-handed components of the fermionic projector. \section{The Vacuum Polarization} \label{secpolarize} We now describe how the one-loop vacuum polarization arises in the fermionic projector approach and compare the situation with perturbative QFT. For the derivation of the field equations in Section~\ref{secfield}, we considered the singular contribution~$P^{\text{sing}}(x,y)$ in~\eqref{Pdecomp}, but we disregarded the noncausal contribution~$P^\text{reg}$. Analyzing the latter contribution in the continuum limit gives rise to correction terms to the field equations~\eqref{daeq} of the form \begin{equation} \label{correction} - f_{[0]}* j^k_\text{\rm{a}} + 6 f_{[2]}* A^k_\text{\rm{a}} \:, \end{equation} where~$f_{[p]}$ are explicit Lorentz invariant distributions and the star denotes convolution (see~\cite[Theorem~8.2]{sector}). 
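For clarity we note (this remark is ours) that the star in~\eqref{correction} denotes the space-time convolution
\[
\big( f_{[p]} * A^k_\text{\rm{a}} \big)(x) \;=\; \int f_{[p]}(x-y)\: A^k_\text{\rm{a}}(y)\: d^4y \:,
\]
so that the correction at a space-time point involves the potential and the current in a whole region around that point.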
These corrections can already be understood in Dirac's decomposition~\eqref{Rab} as the ``polarization effect'' described by the regular function~$R_b$. In the static situation, the term~$- f_{[0]}* j^k_\text{\rm{a}}$ reduces to the axial analogue of the well-known Uehling potential~\cite{uehling} (see~\cite[\S8.2]{sector}), whereas the term~$6 f_{[2]}* A^k_\text{\rm{a}}$ can be regarded as a correction to the bosonic mass term. We have thus reproduced the standard vacuum polarization, which is described in more modern language by the Feynman diagram involving one fermion loop in Figure~\ref{figfeynman} (left). \begin{figure} \includegraphics{srevfig}% \caption{A fermionic loop diagram (left) and a bosonic loop diagram (right).} \label{figfeynman} \end{figure} The connection to the Uehling correction in standard QFT can be understood most easily by going back to the original papers~\cite{dirac3, heisenberg2, uehling}. Heisenberg starts from Dirac's decomposition~\eqref{Rab}. Motivated by symmetry considerations and physical arguments, he gives a procedure for disregarding the singularities, so that only a regular contribution remains. This regular contribution gives rise to the Uehling potential. Similarly, the starting point in~\cite{sector} is the decomposition of the fermionic projector~\eqref{Pdecomp}. The main difference is that now the singular terms are not disregarded or removed, but they are carried along all the way. However, the singular terms drop out of the Euler-Lagrange equations corresponding to our action principle~\eqref{actprinciple}. In this way, all divergences disappear. The remaining finite contributions to~$P^\text{sing}$ give rise to the bosonic current and mass terms in the resulting field equations~\eqref{daeq}, whereas~$P^\text{reg}$ describes the vacuum polarization. The main advantage of the fermionic projector approach is that no counter terms are needed. The back-reaction of the Dirac sea on the electromagnetic field is finite; no divergences occur. Moreover, as we do not need counter terms, the setting immediately becomes background independent. It is to be expected (although it has not yet been worked out in detail) that the singularities of the fermionic projector will also drop out of the Euler-Lagrange equations if one sets up the theory in curved space-time. In modern QFT, the vacuum polarization is still described as in the original papers, with the only difference that the singularities are now removed more systematically by a normal ordering of the field operators. In the interacting situation, the subtle point is to choose the correct ``dressing'' of the electrons. This means that one must distinguish a subspace of the Fock space as describing the Dirac sea; then the normal ordering is performed with respect to this subspace. In~\cite{bach} a quantized Dirac field is considered which interacts with a Coulomb field and a magnetic field. It is shown that the resulting Hamiltonian is positive, provided that the atomic numbers and the fine structure constant are not too large. However, the chosen dressing has the shortcoming that polarization effects are suppressed. A more careful analysis is given in the series of papers~\cite{hainzl+sere2, hainzl+sere1, hainzl+solovej2, gravejat+lewin+sere}, where the vacuum state is constructed for a system of Dirac particles with electrostatic interaction in the Bogoliubov-Dirac-Fock approximation, and the question of renormalization is addressed.
The conclusion of this analysis is that for mathematical consistency, one must take into account all the states forming the Dirac sea. Furthermore, the interaction ``mixes'' the states in such a way that it becomes impossible to distinguish between the particle states and the states of the Dirac sea. Thus, despite the use of a very different mathematical framework, the physical picture in these papers is quite similar to that of the fermionic projector approach. \section{General Loop Diagrams} So far, we have only considered a Feynman diagram involving a fermion loop. Let us now consider how to obtain Feynman diagrams which involve bosonic loops: In the continuum limit, the system is described by the partial differential equations~\eqref{daeq}. Here the bosonic potential~$A_\text{\rm{a}}$ is not quantized; it is simply a classical field. But the system~\eqref{daeq} is nonlinear, and as shown in~\cite[\S8.4]{sector}, treating this nonlinearity perturbatively gives rise to the bosonic loop diagram in Figure~\ref{figfeynman} (right), as well as higher order bosonic loop diagrams. Taking the corrections~\eqref{correction} into account, one also gets the diagrams with fermion loops. In this way, one gets all the usual Feynman diagrams. But there are also differences. Since the analysis of the diagrams has not yet been carried out systematically, we merely state the potential effects as open problems: \begin{itemize} \item It is not clear whether the usual divergences of the bosonic loop diagram in Figure~\ref{figfeynman} (right) can be associated with a singularity of the fermionic projector which drops out of our action principle (similar to the explanation for the fermionic loop diagram in Section~\ref{secpolarize}). More generally, it is an open problem whether the bosonic loop diagrams necessarily diverge. In particular, it seems promising to try to avoid the divergences completely by a suitable choice of the bosonic Green's function. This analysis might reveal a connection to the ``causal approach'' by Epstein and Glaser~\cite{epstein+glaser} and Scharf~\cite{scharf}. \item The main difference of the perturbation expansion in the fermionic projector approach is that instead of working with the Feynman propagator, the normalization conditions for the sea states enforce a non-trivial combinatorics of operator products involving different types of Green's functions and fundamental solutions (for details see~\cite{grotz}). This difference has no influence on the singularities of the resulting Feynman diagrams, and thus we expect that the renormalizability of the theory is not affected. But the higher-loop radiative corrections should depend on the detailed combinatorics, giving hope of obtaining small deviations from standard QFT which might be tested experimentally. \end{itemize} \section{Violation of Causality} As explained in Section~\ref{secpolarize}, the correction terms in~\eqref{correction} can also be understood in the framework of standard QFT via fermionic loop diagrams (as in Figure~\ref{figfeynman} (left)). However, the detailed analysis of the correction terms in position space as carried out in~\cite[Chapter~8 and Appendix~D]{sector} reveals an underlying structure which is not apparent in the usual description in momentum space. Namely, the correction term~\eqref{correction} violates causality in the sense that the future can influence the past! To higher order in the bosonic potential, even space-time points with spacelike separation can influence each other.
At first sight, a violation of causality seems worrisome because it contradicts experience and seems to imply logical inconsistencies. However, these non-causal correction terms are only apparent on the Compton scale, and furthermore they are too small to give obvious contradictions with physical observations. But they might open the possibility for future experimental tests. For a detailed discussion of the causality violation we refer to~\cite[\S8.2 and~\S8.3]{sector}. In order to understand how the violation of causality comes about, it is helpful to briefly discuss the general role of causality in the fermionic projector approach. We first point out that in discrete space-time, causality does not arise on the fundamental level. But for a given minimizer of our action principle, Definition~\ref{defcausal} gives us the notion of a ``discrete causal structure.'' This notion is compatible with our action principle in the sense that space-time points~$x$ and~$y$ with spacelike separation do not influence each other via the Euler-Lagrange equations. This can be seen as follows: According to our definition, for such space-time points the eigenvalues of the closed chain all have the same absolute value. Using the specific form of the Lagrangian~\eqref{Ldef}, this implies that the Lagrangian and its first variation vanish. This in turn implies that~$A_{xy}$ drops out of the Euler-Lagrange equations. We conclude that our action principle is ``causal'' in the sense that no spacelike influences are possible. But at this stage, no time direction is distinguished, and therefore there is no reason why the future should not influence the past. The system of hyperbolic equations~\eqref{daeq} obtained in the continuum limit is causal in the sense that given initial data has a unique time evolution. Moreover, we have finite propagation speed, meaning that no information can travel faster than the speed of light. Thus in the continuum limit we recover the usual notion of causality. However, the fermionic projector~$P^\text{sea}$ is not defined via an initial value problem, but it is a global object in space-time (see~\cite[Chapter~2]{PFP}). As a consequence, the contribution~$P^\text{reg}$ in~\eqref{Pdecomp} is noncausal in the sense that the future influences the past. Moreover, to higher order in the bosonic potential the normalization conditions for the fermions give rise to nonlocal constraints. As a consequence, the bosonic potential may influence $P(x,y)$ even for spacelike distances. \section{Entanglement and Second Quantization} Taking the wedge product of the one-particle wave functions, \[ \Psi_1 \wedge \cdots \wedge \Psi_f \:, \] and considering the continuum limit, we obtain a system of classical bosonic fields coupled to a fermionic Hartree-Fock state. Although this setting gives rise to the Feynman diagrams, it is too restrictive for describing all quantum effects observed in nature. However, as shown in~\cite{entangle}, the framework of the fermionic projector also allows for the description of general second quantized fermionic and bosonic fields. In particular, it is possible to describe entanglement. The derivation of these results is based on the assumption that space-time should have a non-trivial microstructure. In view of our concept of discrete space-time, this assumption seems natural. Homogenizing the microstructure, one obtains an effective description of the system by a vector in the fermionic or bosonic Fock space.
This concept, referred to as the {\em{microscopic mixing of decoherent subsystems}}, is worked out in detail in~\cite{entangle}. In~\cite{dice2010}, the methods and results are discussed with regard to decoherence phenomena and the measurement problem. \section{Conclusions and Outlook} Combining our results, we obtain a formulation of QFT which is consistent with perturbative QFT but has surprising additional features. First, we find a new mechanism for the generation of masses of gauge bosons and obtain new types of corrections to the field equations which violate causality. Moreover, our model involves fewer free parameters, and the structure of the interaction is completely determined by our action principle. Before one can think of experimental tests, one clearly needs to work out a more realistic model which involves all elementary particles and includes all interactions observed in nature. As shown in~\cite[Chapters~6--8]{PFP}, a model involving 24 Dirac seas is promising because the resulting gauge fields have striking similarity to the standard model. Furthermore, the underlying diffeomorphism invariance gives agreement with the equivalence principle of general relativity. Thus working out the continuum limit of this model in detail will lead to a formulation of QFT which is satisfying conceptually and makes quantitative predictions to be tested in future experiments. \vspace*{.6em} \noindent \thanks{{\em{Acknowledgments:}} I would like to thank Bertfried Fauser, Christian Hainzl and the referees for helpful comments on the manuscript.}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0048.json.gz" }
\section{Introduction} \label{intro} Since the experimental realization of a single monolayer of graphite \cite{novoselov}, named graphene, a two dimensional crystal made of carbon atoms hexagonally packed, many efforts have been made to study the properties of electrons sitting on a honeycomb lattice \cite{castroneto}. The problem of magnetic impurities in such a system has also become a topic of investigation in the last few years \cite{saremi,hentschel,sengpunta,uchoa,cornaglia}, although a detailed derivation of the effective model is still lacking. The main motivation of the present work is, therefore, that of deriving, from the lattice Hamiltonian, the corresponding continuum model for the Kondo-like impurity, writing the effective couplings in terms of the lattice parameters. From an angular mode expansion we get an effective one-dimensional Kondo model which has, in general, four flavors and is peculiar to graphene-like sublattice systems. Strikingly, we find that there is an angular momentum mixing only in the presence of internode scattering processes, with the valleys and the momenta locked in pairs, in each sublattice sector. The complete model has six couplings in the spin-isotropic case; however, thanks to the lattice symmetry, for some particular positions of the impurity the number of couplings can be reduced to one, obtaining a multichannel pseudogap Kondo model which shares many similarities with other gapless fermionic systems \cite{withoff,ingersent,ingersent2}, such as some semiconductors \cite{withoff}, $d$-wave superconductors \cite{cassanello2} and flux phases \cite{cassanello1}. A second issue worth addressing is related to interactions. In real systems logarithmic corrections in the density of states may appear, as a result of many-body effects. In order to include correlation effects to some extent, we allow the Fermi velocity to be energy dependent. Indeed for a system of electrons in the half-filled honeycomb lattice, like graphene, an effect of the Coulomb interaction is that of renormalizing the Fermi velocity \cite{gonzalez94}, which grows in the infrared limit. This behavior induces subleading logarithmic corrections in the density of states. We plan therefore to analyze the effect of these corrections on the Kondo effect in order to see how the finite coupling constant transition, obtained within the large-$N$ expansion technique \cite{read} and the renormalization group approach \cite{anderson,hewson}, can be affected by deviations from a power law. We find that the critical Kondo coupling becomes non-universal and is enhanced in the ultraviolet by a quantity directly related to the Coulomb screening. Moreover, we find that the impurity contribution to the magnetic susceptibility and the specific heat vanish faster, by a factor of log$^3$, than in the free case as zero magnetic field or zero temperature is approached. \section{The model} In this section we will derive the continuum one-dimensional effective model from the microscopic lattice Hamiltonian. \subsection{Lattice Hamiltonian} \label{sec2} Let us consider a honeycomb lattice which can be divided into two sublattices, A and B. The tight-binding vectors can be chosen as follows \begin{eqnarray} &&{\delta}_1=\frac{a}{2}(1,\sqrt{3}),\\ &&{\delta}_2=\frac{a}{2}(1,-\sqrt{3}),\\ &&{\delta}_3=a(-1,0), \end{eqnarray} where $a$ is the smallest distance between two sites. These vectors link sites belonging to two different triangular sublattices.
Each sublattice is defined by linear combinations of two other vectors, $\frac{a}{2}(3,\sqrt{3})$ and $\frac{a}{2}(3,-\sqrt{3})$. From these values one can derive the reciprocal-lattice vectors in momentum space and draw the Brillouin zone, which has a hexagonal shape, i.e. with six corners. We choose two inequivalent corners (the others are obtained by a shift of a reciprocal-lattice vector) at the positions \begin{eqnarray} \label{K1} &&{\bf K}=\frac{4\pi}{3\sqrt{3} a}(0,1),\\ \label{K2} &&{\bf K}^\prime=-{\bf K}. \end{eqnarray} These points are actually the Fermi surface, reduced to two points as the chemical potential approaches zero, i.e. at half-filling. \\ We will consider the following Hamiltonian defined on this honeycomb lattice \begin{equation} H=H_0+H_{K}. \end{equation} The first contribution is given by the tight-binding Hamiltonian \begin{equation} H_0=-t\sum_{{\bf r}\,{\delta}\,\sigma}c_{A\sigma }^\dagger({\bf r}) c_{B\sigma}({\bf r}+{\delta})+h.c., \end{equation} where $t$ is the nearest neighbour hopping parameter, $c_{A\sigma }^\dagger({\bf r})$ ($c_{A\sigma }({\bf r})$) is the creation (annihilation) operator for electrons with spin $\sigma$ localized on the site ${\bf r}$, a vector belonging to the sublattice $A$, while $c_{B\sigma}^\dagger({\bf r}+{\delta})$ ($c_{B\sigma}({\bf r}+{\delta})$) is the creation (annihilation) operator for electrons on the site ${\bf r}+\delta$, belonging to the sublattice $B$. The second contribution to $H$ is the Kondo-like impurity term \begin{equation} H_{K}=\sum_{{{\bf v}}}\left\{\lambda_{{{\bf v}} \perp}\left(S_+ c_\uparrow^{\dagger}({\bf v}) c_\downarrow({\bf v})+ S_- c_\downarrow^{\dagger}({\bf v})c_\uparrow({\bf v})\right) +\lambda_{{{\bf v}} z}S_z \left(c_\uparrow^{\dagger}({\bf v})c_\uparrow({\bf v})-c_\downarrow^{\dagger}({\bf v})c_\downarrow({\bf v})\right)\right\}, \end{equation} where $\lambda_{{\bf v}\perp}$ and $\lambda_{{\bf v}z}$ are the short-range Kondo couplings, $\vec S$ (with $S_\pm=S_x\pm iS_y$) is the spin of the impurity sitting at the reference position $(0,0)$, and $\vec \sigma$ is the spin operator of the electrons located at ${\bf v}$ from the impurity. ${\bf v}$ can belong to $A$ or $B$ and we sum over all these vectors. \subsection{Derivation of 1D effective model} \label{sec3} We now rewrite the fields $c$ in the following way \begin{eqnarray} &&c_{A\sigma}({\bf r})\simeq e^{i{\bf K}\cdot {\bf r}}\psi^{{1}}_{A \sigma}{({\bf r})}+e^{-i{\bf K}\cdot {\bf r}}\psi^{{2}}_{A \sigma}{({\bf r})},\\ &&c_{B\sigma}({\bf r}+{\delta})\simeq e^{i{\bf K}\cdot ({\bf r}+{\delta})}\psi^{{1}}_{B\sigma}{({\bf r}+{\delta})}+e^{-i{\bf K}\cdot ({\bf r}+{\delta})}\psi^{{2}}_{B\sigma}{({\bf r}+{\delta})},\\ &&c_{\sigma}({\bf v})\simeq e^{i{\bf K}\cdot {{\bf v}}}\psi^{{1}}_{L\sigma}{({\bf v})}+e^{-i{\bf K}\cdot {{\bf v}}}\psi^{{2}}_{L\sigma}{({\bf v})} \end{eqnarray} with $L=A$ if ${\bf v} \in A$, or $L=B$ if ${\bf v} \in B$. The upper indices, $1$ and $2$, label the Fermi points Eqs. (\ref{K1}), (\ref{K2}). At these particular points we get the following equalities \begin{eqnarray} &&\sum_{{\delta}} e^{\pm i{\bf K}\cdot {\delta}}= 0,\\ &&\sum_{{\delta}}{\delta}\, e^{\pm i{\bf K}\cdot {\delta}}= -\frac{3a}{2}(1,\mp i). 
\end{eqnarray} Expanding the slow fields $\psi^i_{L\,\sigma}({\bf r}+{\delta})$ around ${\bf r}$, introducing the multispinor \begin{equation} \label{spinor_spin} \psi=\left(\psi^{{1}}_{A\,\uparrow},\psi^{{1}}_{B\,\uparrow},\psi^{{2}}_{A\,\uparrow},\psi^{{2}}_{B\,\uparrow},\psi^{{1}}_{A\,\downarrow},\psi^{{1}}_{B\,\downarrow},\psi^{{2}}_{A\,\downarrow},\psi^{{2}}_{B\,\downarrow}\right)^t, \end{equation} the identity matrices $\sigma_0$, $\tau_0$, $\gamma_0$ and the Pauli matrices $\sigma_i$, $\tau_{i}$ and $\gamma_{i}$, $i=1,2,3$, acting respectively on the spin space, $\uparrow \downarrow$, valley space, $1, 2$, and sublattice space, $A, B$, we get, in the continuum limit, \begin{eqnarray} \label{H0} &&H_0=-i v_F \int\, {d\bf r}\,\psi^\dagger({\bf r})\sigma_0\left(\tau_3 \gamma_1 \partial_y - \tau_0 \gamma_2 \partial_x\right)\psi({\bf r}),\\ &&H_{K}=\psi^\dagger(0)\left(\frac{1}{2}\hat J_{\perp}\left(S_+ \sigma_- + S_- \sigma_+\right)+\hat J_{z}S_z\sigma_z\right)\psi(0), \end{eqnarray} where $v_F=\frac{3at}{2}$ is the Fermi velocity and \begin{equation} \label{J_sig} \hat J_{\sigma}=\frac{1}{2}\left\{\left(J^A_{0\sigma}\tau_0+J^A_{1\sigma}\tau_1+J^A_{2\sigma}\tau_2\right)(\gamma_0+\gamma_3)+\left(J^B_{0\sigma}\tau_0+J^B_{1\sigma}\tau_1+J^B_{2\sigma}\tau_2\right)(\gamma_0-\gamma_3)\right\} \end{equation} is a Kondo coupling matrix with the following components, containing the lattice details, \begin{eqnarray} \label{def1} &&J^L_{0\,\sigma}= \sum_{{{\bf v}}\in L}\lambda_{{{\bf v}}\,\sigma} \,,\\ \label{def2} &&J^L_{1\,\sigma}=\sum_{{{\bf v}}\in L}\cos(2{\bf K}\cdot {{\bf v}})\lambda_{{{\bf v}}\,\sigma} \,,\\ \label{def3} &&J^L_{2\,\sigma}=\sum_{{{\bf v}}\in L}\sin(2{\bf K}\cdot {{\bf v}})\lambda_{{{\bf v}}\,\sigma} \,, \end{eqnarray} where $\sigma=\perp, z$ is the spin index and $L=A,B$ is the sublattice index. Eq.~(\ref{H0}) is a Dirac-Weyl Hamiltonian, constant in spin-space, which, after defining $\bar \psi=\psi^{\dagger}\gamma_3$, can be written as $v_F \int\, {d\bf r}\,\bar \psi ({\bf r})\left(\tau_0 \gamma_1 \partial_x + \tau_3 \gamma_2 \partial_y\right)\psi({\bf r})$ to make Lorentz invariance manifest. The spectrum consists of two Dirac cones departing from the two Fermi points, and the density of states vanishes linearly approaching zero energy, $\rho(\epsilon)=\nu |\epsilon|$, where $\nu\propto v_F^{-2}$. This property plays a fundamental role in the scaling behavior of the Kondo impurity, as we are going to see. Let us rewrite the full effective Hamiltonian in momentum space, \begin{eqnarray} \nonumber H&=&v_F\int\frac{d{\bf p}}{(2\pi)^2}\, \psi^\dagger({\bf p})\,p\,\sigma_0\left(\tau_3 \gamma_1 \sin\theta_p - \tau_0 \gamma_2 \cos\theta_p\right)\psi({\bf p})\\ &&+\int\frac{d{\bf p}}{(2\pi)^2}\int\frac{d{\bf q}}{(2\pi)^2}\, \psi^\dagger({\bf q})\left(\frac{1}{2}\hat J_{\perp}\left(S_+ \sigma_- + S_- \sigma_+\right)+\hat J_{z}S_z\sigma_z\right)\psi({{\bf p}}), \label{H_moment} \end{eqnarray} where we have parametrized the momenta as follows \begin{eqnarray} p_x=p\,\cos\theta_p ,\\ p_y=p\,\sin\theta_p . \end{eqnarray} For the benefit of the forthcoming discussion we first notice that the orbital angular momentum operator \begin{equation} {\cal L}=-i(x\partial_y-y\partial_x) \end{equation} does not commute with the Hamiltonian $H_0$ in Eq.~(\ref{H0}). 
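As a quick numerical sanity check of the Fermi-point sums used in this derivation, the short Python sketch below (with the lattice constant set to $a=1$; the vectors ${\delta}_i$ and ${\bf K}$ are those defined above) verifies that $\sum_{\delta} e^{\pm i{\bf K}\cdot{\delta}}=0$ and $\sum_{\delta}{\delta}\,e^{\pm i{\bf K}\cdot{\delta}}=-\frac{3a}{2}(1,\mp i)$.
\begin{verbatim}
import numpy as np

a = 1.0                                    # lattice constant, set to 1
deltas = np.array([[ 0.5,  0.5*np.sqrt(3)],
                   [ 0.5, -0.5*np.sqrt(3)],
                   [-1.0,  0.0]]) * a      # delta_1, delta_2, delta_3
K = (4*np.pi/(3*np.sqrt(3)*a)) * np.array([0.0, 1.0])

for sign in (+1, -1):
    phases = np.exp(sign*1j*(deltas @ K))             # e^{+-i K.delta}
    print(np.round(phases.sum(), 12))                 # 0
    print(np.round((deltas*phases[:, None]).sum(axis=0), 12))
    # -(3a/2)*(1, -i) for the upper sign and -(3a/2)*(1, +i) for the lower sign
\end{verbatim}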
On the other hand, in order to define proper total angular momenta we introduce the operator \begin{equation} {\cal J}={\cal L}+\frac{1}{2}\tau_3\gamma_3, \end{equation} which does commute with $H_0$, \begin{equation} \left[{\cal J},H_0\right]=0, \end{equation} and also with the $\tau_0$-components of $H_K$. In particular, given some amplitudes $\psi^{{i}}_{L\sigma m}(p)$, an eigenstate of ${\cal J}$ with eigenvalue $j=\left(m+\frac{1}{2}\right)$ can be written as \begin{equation} \label{eigenJ} \left(\begin{array}{r} e^{im\theta_p} \psi^{{1}}_{A\,\sigma\, m}(p)\\ e^{i(m+1)\theta_p} \psi^{{1}}_{B\,\sigma\, m}(p)\\ e^{i(m+1)\theta_p} \psi^{{2}}_{A\,\sigma\, m}(p)\\ e^{im\theta_p}\psi^{{2}}_{B\,\sigma\, m}(p) \end{array}\right). \end{equation} Performing the following unitary transformation \begin{equation} \label{U} U_p= \frac{e^{\frac{i}{2}\theta_p\tau_3}}{2\sqrt{2}}\tau_0\left[(1+ie^{-i\theta_p\tau_3})(\gamma_0-i\gamma_2)+(1-ie^{-i\theta_p\tau_3})(\gamma_1-\gamma_3)\right] \end{equation} to the fields \begin{equation} \label{Uphi} \psi({\bf p})=U_p \,\phi({\bf p}), \end{equation} the Hamiltonian Eq.~(\ref{H_moment}) becomes \begin{eqnarray} \label{H_rot} H&=&v_F\int\frac{d{\bf p}}{(2\pi)^2}\, \phi^\dagger({\bf p})\,p\,\sigma_0\tau_0\gamma_3\,\phi({\bf p})\\ \nonumber &&+\int\frac{d{\bf p}}{(2\pi)^2}\int\frac{d{\bf q}}{(2\pi)^2}\, \phi^\dagger({\bf q})\left(\frac{1}{2}\hat K_{\perp}(\theta_q,\theta_p)\left(S_+ \sigma_- + S_- \sigma_+\right)+\hat K_{z}(\theta_q,\theta_p)S_z\sigma_z\right)\phi({{\bf p}}), \end{eqnarray} namely, $H_0$ becomes diagonal; the price to pay is that the Kondo couplings depend on the angular part of the momenta, \begin{eqnarray} \label{K_rot} \nonumber \hat K_{\sigma}(\theta_q,\theta_p)&\equiv& \frac{1}{2}\Big\{ \left(J_{0\sigma}^{A} e^{\frac{i}{2}(\theta_q-\theta_p)\tau_3}\tau_0 +J_{1\sigma}^{A}e^{\frac{i}{2}(\theta_q+\theta_p)\tau_3}\tau_1+J_{2\sigma}^{A}e^{\frac{i}{2}(\theta_q+\theta_p)\tau_3}\tau_2\right)(\gamma_0-\gamma_1)\\ &&+ \left(J_{0\sigma}^{B} e^{\frac{i}{2}(\theta_p-\theta_q)\tau_3}\tau_0+J_{1\sigma}^{B} e^{-\frac{i}{2}(\theta_q+\theta_p)\tau_3}\tau_1+J_{2\sigma}^{B} e^{-\frac{i}{2}(\theta_q+\theta_p)\tau_3}\tau_2\right)(\gamma_0+\gamma_1) \Big\}. \end{eqnarray} Notice that the angular dependence of $\hat K$ does not prevent the model from being renormalizable. As one can see by the poor man's scaling procedure \cite{anderson,hewson}, with ${\bf q}$ the intermediate momentum in the edge bands and dropping for the moment the spin indices, the contributions which renormalize, for instance, $J_{0}^{A}$ in the particle channel are \begin{eqnarray} && J_{0}^{A}J_{0}^{A} e^{\frac{i}{2}(\theta_{p_1}-\theta_{q})\tau_3}e^{\frac{i}{2}(\theta_{q}-\theta_{p_2})\tau_3}[...]=J_{0}^{A}J_{0}^{A} e^{\frac{i}{2}(\theta_{p_1}-\theta_{p_2})\tau_3}[...],\\ &&J_{i}^{A}J_{i}^{A} e^{\frac{i}{2}(\theta_{p_1}+\theta_{q})\tau_3}\tau_i e^{\frac{i}{2}(\theta_{p_2}+\theta_{q})\tau_3}\tau_i[...]=J_{i}^{A}J_{i}^{A} e^{\frac{i}{2}(\theta_{p_1}-\theta_{p_2})\tau_3}[...] \end{eqnarray} with $i=1,2$. 
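The cancellation of $\theta_q$ in the second relation above rests on the matrix identity $\tau_i\, e^{i\alpha\tau_3}\,\tau_i = e^{-i\alpha\tau_3}$ for $i=1,2$. A minimal numerical check (a sketch; the random angles are placeholders):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
t1, t2, tq = rng.uniform(0, 2*np.pi, 3)    # theta_{p_1}, theta_{p_2}, theta_q

for tau_i in (tau1, tau2):
    lhs = expm(0.5j*(t1 + tq)*tau3) @ tau_i @ expm(0.5j*(t2 + tq)*tau3) @ tau_i
    rhs = expm(0.5j*(t1 - t2)*tau3)
    print(np.allclose(lhs, rhs))           # True: theta_q drops out
\end{verbatim}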
In the same way we can check that the corrections to $J_{i}^{A}$, with $i=1,2$, are \begin{eqnarray} J_{0}^{A}J_{i}^{A} e^{\frac{i}{2}(\theta_{p_1}-\theta_{q})\tau_3}e^{\frac{i}{2}(\theta_{q}+\theta_{p_2})\tau_3}\tau_i[...]= J_{0}^{A}J_{i}^{A} e^{\frac{i}{2}(\theta_{p_1}+\theta_{p_2})\tau_3}\tau_i[...],\\ J_{i}^{A}J_{0}^{A} e^{\frac{i}{2}(\theta_{p_1}+\theta_{q})\tau_3}\tau_i e^{\frac{i}{2}(\theta_{q}-\theta_{p_2})\tau_3}[...]=J_{i}^{A}J_{0}^{A} e^{\frac{i}{2}(\theta_{p_1}+\theta_{p_2})\tau_3}\tau_i[...]. \end{eqnarray} Analogous corrections can be verified in the hole channel. In all these corrections $\theta_q$ always cancels out, recovering the right momentum dependence for the slow modes.\\ From Eqs.~(\ref{def1}-\ref{def3}) we actually get access to the renormalization of linear combinations of the original lattice parameters $\lambda_{{\bf v}}$. In order to reduce the problem to one dimension we proceed by expanding the fields $\phi({\bf p})$ in angular momentum eigenmodes as follows \begin{equation} \phi({\bf p})=\sum_{m=-\infty}^\infty e^{i (m+\frac{1}{2})\theta_p}\phi_m(p), \end{equation} with $m\in \mathbb{Z}$. Indeed, due to the gauge in Eq.~(\ref{U}), all the spinor components have the same angular phase. Actually from Eq.~(\ref{Uphi}), one verifies that $e^{i (m+\frac{1}{2})\theta_p}\phi_m(p)$ is the eigenvector of ${\cal J}$, Eq.~(\ref{eigenJ}), with eigenvalue $j=m+\frac{1}{2}$, transformed by $U_p^{-1}$, and with amplitudes \begin{equation} \phi^{{i}}_{\pm\,\sigma \,m}(p) =\frac{1}{\sqrt{2}}\left(\psi^{{i}}_{B\,\sigma\, m}(p)\mp i\psi^{{i}}_{A\,\sigma\, m}(p)\right), \end{equation} where the subscript $\pm$ replaces the sublattice index and refers to the sign of the energy, $v_Fp\gamma_3$, appearing in Eq.~(\ref{H_rot}). The original field at position ${\bf r}=(r,\varphi)$ can be written as \begin{equation} \psi(r,\varphi)=\int \frac{dp\,p}{4\sqrt{2}\pi}\sum_{m=-\infty}^{\infty}i^me^{im\varphi} \left(\begin{array}{c} i\left[{\sf J}_{m+1}(pr)e^{i\varphi}\gamma_d+ {\sf J}_m(pr)\gamma_u\right]\phi^{{1}}_{m}(p)\\ \left[{\sf J}_{m}(pr)\gamma_d- {\sf J}_{m+1}(pr)e^{i\varphi}\gamma_u\right]\phi^{{2}}_{m}(p) \end{array} \right), \end{equation} where ${\sf J}_m(z)$ are the Bessel functions of the first kind, $\gamma_d\equiv\gamma_0+\gamma_1-i\gamma_2-\gamma_3$ and $\gamma_u\equiv\gamma_0-\gamma_1-i\gamma_2+\gamma_3$. At $r=0$ the only terms which survive are those with $m=0,-1$, corresponding to $j=\pm \frac{1}{2}$, in terms of eigenvalues of ${\cal J}$. 
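The survival of only the $m=0,-1$ terms at $r=0$ follows from ${\sf J}_m(0)=\delta_{m,0}$, which can be checked in a couple of lines (a sketch):
\begin{verbatim}
from scipy.special import jv    # Bessel functions of the first kind
# J_m(0) = 1 for m = 0 and vanishes otherwise, so at r = 0 the J_m(pr) terms
# keep only m = 0 while the J_{m+1}(pr) terms keep only m = -1.
print([jv(m, 0.0) for m in range(-2, 3)])   # [0.0, 0.0, 1.0, 0.0, 0.0]
\end{verbatim}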
After integrating Eq.~(\ref{H_rot}) over the angles, indeed, we get in $H_K$ only contributions with $m=0,-1$, in the following combinations \begin{eqnarray} \nonumber H &=&\int_0^{\infty} dp\, \frac{v_Fp^2}{2\pi} \sum_{i=1,2} \sum_{m=-\infty}^{\infty} \phi^{{i}\dagger}_{m}(p)\,\sigma_0\gamma_3\,\phi^{{i}}_{m}(p) +\frac{1}{2} \int_0^{\infty}\frac{dp}{2\pi}\,p\int_0^{\infty}\frac{dq}{2\pi}\, q\\ \nonumber&& \Big\{ J^{A}_{0}\left[\phi^{{1}\dagger}_{0}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0-\gamma_1)\phi^{{1}}_{0}(p) +\phi^{{2}\dagger}_{-1}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0-\gamma_1)\phi^{{2}}_{-1}(p)\right]\\ \nonumber &&+J^{A}_{1}\left[\phi^{{1}\dagger}_{0}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0-\gamma_1)\phi^{{2}}_{-1}(p) +\phi^{{2}\dagger}_{-1}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0-\gamma_1)\phi^{{1}}_{0}(p)\right]\\ \nonumber &&-i J^{A}_{2}\left[\phi^{{1}\dagger}_{0}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0-\gamma_1)\phi^{{2}}_{-1}(p) -\phi^{{2}\dagger}_{-1}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0-\gamma_1)\phi^{{1}}_{0}(p)\right] \\ \nonumber&& +J^{B}_{0}\left[\phi^{{1}\dagger}_{-1}(q) ({\vec S}\cdot{\vec \sigma})(\gamma_0+\gamma_1)\phi^{{1}}_{-1}(p)+\phi^{{2}\dagger}_{0}(q) ({\vec S}\cdot{\vec \sigma})(\gamma_0+\gamma_1)\phi^{{2}}_{0}(p) \right]\\ \nonumber &&+J^{B}_{1}\left[\phi^{{1}\dagger}_{-1}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0+\gamma_1)\phi^{{2}}_{0}(p) +\phi^{{2}\dagger}_{0}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0+\gamma_1)\phi^{{1}}_{-1}(p) \right]\\ &&-i J^{B}_{2}\left[\phi^{{1}\dagger}_{-1}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0+\gamma_1)\phi^{{2}}_{0}(p) -\phi^{{2}\dagger}_{0}(q) ({\vec S}\cdot{\vec \sigma}) (\gamma_0+\gamma_1)\phi^{{1}}_{-1}(p)\right] \Big\}, \end{eqnarray} where now $\phi^{{i}}_{m}(p)$ are spinors only in spin and energy spaces. Here we are considering the spin-isotropic case, with $J^L_{i}\equiv J^L_{i\perp}=J^L_{i z}$, to simplify the notation. In the spin-anisotropic case one simply has to replace $J^L_{i}({\vec S}\cdot{\vec \sigma})$ with $J^{L}_{i\perp}({S}_x {\sigma}_x+{S}_y {\sigma}_y)+J^{L}_{i z}({S}_z{\sigma}_z)$. In the free part of the effective model, $H_0$, we keep only the contributions from the particles with $m=0,-1$, the only ones which can scatter with the impurity. We now unfold the momenta from $[0,+\infty)$ to $(-\infty,+\infty)$ by redefining the fields in the following way \begin{equation} \xi^{{i}}_{s\,\sigma}(p)\equiv \left[{\textrm {sign}}(p)\right]^{m+i}\sqrt{|p|}\phi^{{i}}_{{\textrm {sign}}(p)\,\sigma\, m}(|p|), \end{equation} where, in order to label the fermions, we choose the index $i$ in valley space and the index $s={\textrm {sign}}\left(m+\frac{1}{2}\right)$, the sign of the total angular momenta, eigenvalues of ${\cal J}$, which are good quantum numbers as long as there is no internode scattering, i.e. $J^{A}_1=J^{B}_1=J^{A}_2=J^{B}_2=0$. 
Introducing for simplicity \begin{equation} J^{L}_{\pm}\equiv J^{L}_{1}\pm i J^{L}_{2} =\sum_{{{\bf v}}\in {L}}e^{\pm i 2 {\bf K}\cdot {{\bf v}}}\lambda_{{{\bf v}}} \,, \end{equation} we finally end up with the following one-dimensional effective Hamiltonian \begin{eqnarray} \label{H_full} H&=&\int_{-\infty}^{\infty} \frac{dp}{2\pi}\,E(p)\sum_{s,\sigma,i}\xi^{{i}\dagger}_{s\sigma}(p)\,\xi^{{i}}_{s\sigma}(p) +\frac{1}{2}\int_{-\infty}^{\infty} \frac{dq}{2\pi}\,\int_{-\infty}^{\infty} \frac{dp}{2\pi}\sqrt{|q|}\sqrt{|p|}\\ \nonumber&&{\vec S}\cdot\Big\{ J^{A}_{0}\left(\xi^{{1}\dagger}_{{+}}(q) {\vec \sigma} \xi^{{1}}_{{+}}(p) +\xi^{{2}\dagger}_{{-}}(q) {\vec \sigma} \xi^{{2}}_{{-}}(p)\right) +J^{A}_{-}\,\xi^{{1}\dagger}_{{+}}(q) {\vec \sigma} \xi^{{2}}_{{-}}(p) +J^{A}_{+}\,\xi^{{2}\dagger}_{{-}}(q) {\vec \sigma} \xi^{{1}}_{{+}}(p)\\ \nonumber &&\phantom {S}+J^{B}_{0}\left(\xi^{{1}\dagger}_{{-}}(q) {\vec \sigma}\xi^{{1}}_{{-}}(p) +\xi^{{2}\dagger}_{{+}}(q) {\vec \sigma}\xi^{{2}}_{{+}}(p)\right) +J^{B}_{-} \, \xi^{{1}\dagger}_{{-}}(q) {\vec \sigma} \xi^{{2}}_{{+}}(p) +J^{B}_{+} \, \xi^{{2}\dagger}_{{+}}(q) {\vec \sigma} \xi^{{1}}_{{-}}(p) \Big\}, \end{eqnarray} where, in the first term, the indices $s=\pm$, $\,i=1,2$ and $\sigma=\uparrow,\downarrow$, are summed, and the dispersion relation is $E(p)=v_F p$. The full model, Eq.~(\ref{H_full}), has six Kondo couplings, in the spin-isotropic case, which are independent for a generic position of the magnetic impurity on the lattice. Moreover Eq.~(\ref{H_full}) exhibits an angular momentum mixing in the presence of internode scattering amplitudes $J^A_\pm$ and $J^B_\pm$, namely, when also the nodes are mixed. We are not going to analyze the complete model in full generality but we shall consider only particular, physically relevant cases. \subsection{Some particular examples} \paragraph{Impurity on a site.} If we consider an impurity on top of a site of the honeycomb lattice, belonging to the sublattice $A$, for instance, and consider only the nearest neighbour coupling between the impurity and the electrons located on this site, we have $\lambda_{{\bf v}}\neq 0$ if ${\bf v}=(0,0)$ and assume $\lambda_{{\bf v}}=0$ for ${\bf v}\neq (0,0)$. In this case we get \begin{eqnarray} &&J^A_{0}=J^A_{1}=\lambda_{(0,0)}, \\ &&J^A_{2}=J^B_{0}=J^B_{1}=J^B_{2}=0. \end{eqnarray} Introducing the symmetric combination for the fields \begin{equation} \zeta=\xi^{{1}}_{+}+\xi^{{2}}_{-} \,, \end{equation} the effective Hamiltonian Eq.~(\ref{H_full}) becomes simply \begin{equation} H=\int_{-\infty}^{\infty} \frac{dp}{2\pi}\,E(p)\sum_{\sigma}\zeta^{\dagger}_{\sigma}(p)\zeta_{\sigma}(p) +\frac{J^{A}_{0}}{2}\iint_{-\infty}^{\infty} \frac{dq}{2\pi} \frac{dp}{2\pi}{\sqrt{|qp|}}\,{\vec S}\cdot \zeta^{\dagger}(q) {\vec \sigma}\zeta(p) , \label{H_os} \end{equation} which is a single channel Kondo model. \paragraph{Impurity by substitution.} If we now consider an impurity sitting on a site of the honeycomb lattice, let us say, belonging to the sublattice $A$, and consider only nearest neighbour couplings between the impurity and the electrons, we have $\lambda_{{\bf v}}=0$ if ${\bf v} \in A$ while $\lambda_{{\bf v}}=\lambda_{{\delta}_1}=\lambda_{{\delta}_2}=\lambda_{{\delta}_3}$, if ${\bf v} = {\delta}_i$, $i=1,2,3$, and $\lambda_{{\bf v}}=0$ for $|{\bf v}|>a$. 
Noticing that \begin{equation} \label{key} \sum_{i=1}^3\cos(2{\bf K}\cdot {\delta}_i)=\sum_{i=1}^3 \sin(2{\bf K}\cdot {\delta}_i)=0, \end{equation} we get a remarkable reduction of the number of couplings given by Eqs.~(\ref{def1}-\ref{def3}) \begin{equation} J^A_{0}=J^A_{1}=J^A_{2}=J^B_{1}=J^B_{2}=0. \end{equation} Renaming the fields as follows \begin{equation} \zeta_{1}=\xi^{{1}}_{-}\,,\;\;\; \zeta_{2}=\xi^{{2}}_{+} \,, \end{equation} the effective Hamiltonian Eq.~(\ref{H_full}) reduces to \begin{equation} H=\int_{-\infty}^{\infty} \frac{dp}{2\pi}\,E(p)\sum_{\sigma,\,i}\zeta^{\dagger}_{i\sigma}(p)\zeta_{i\sigma}(p)+\frac{J^{B}_{0}}{2}\iint_{-\infty}^{\infty} \frac{dq}{2\pi} \frac{dp}{2\pi}{\sqrt{|qp|}}\sum_{i=1}^{N_f}{\vec S}\cdot\zeta^{\dagger}_{i}(q) {\vec \sigma}\zeta_{i}(p) , \label{H_sub} \end{equation} where the $N_f=2$ flavors (the valleys and the momenta are locked in pairs) are decoupled and we realize a two-channel Kondo model. The reduced model Eq.~(\ref{H_sub}) is the same as that found for flux phases \cite{cassanello1}. \paragraph{Impurity at the center of the cell.} Finally, let us consider an impurity at the center of the honeycomb cell. In this case, using Eqs.~(\ref{def1}-\ref{def3}) and Eq.~(\ref{key}), we have \begin{eqnarray} &&J^A_{0}=J^B_{0},\\ &&J^A_{1}=J^A_{2}=J^B_{1}=J^B_{2}=0. \end{eqnarray} Enumerating the fields as follows, for instance, \begin{equation} \zeta_{1}=\xi^{{1}}_{+}\,,\;\;\; \zeta_{2}=\xi^{{2}}_{-}\,,\;\;\; \zeta_{3}=\xi^{{1}}_{-}\,,\;\;\; \zeta_{4}=\xi^{{2}}_{+}\,, \end{equation} we get the same Hamiltonian as in Eq.~(\ref{H_sub}) with, now, $N_f=4$ flavors, realizing, therefore, a four-channel Kondo model, as in the case of $d$-wave superconductors \cite{cassanello2}. \section{Large-$N$ expansion and the role of Coulomb interaction} In this section we solve the model Eq.~(\ref{H_sub}) in the large-$N$ approximation, where $N$ is the rank of the symmetry group of the impurity, which actually is equal to $2$ for spin one-half. Following the standard procedure \cite{read, cassanello2}, within a path integral formalism, we write $\vec S=f_\alpha^{\dagger}\vec \sigma_{\alpha\beta} f_\beta$, introducing additional fermionic fields $f$, with the constraint $Q=f_\alpha^{\dagger} f_\alpha$, the charge occupancy at the impurity site. In the Lagrangian, therefore, a Lagrange multiplier $\epsilon_0$, which is actually the impurity Fermi level, is included to enforce this constraint. To decouple the quartic fermionic term one introduces the Hubbard-Stratonovich fields $\Phi_i$, where $i=1,\dots, N_f$, with $N_f$ the number of flavors. For an impurity by substitution, $N_f=2$, as seen before. After integrating over the fermionic fields, $\zeta$ and $f$, we end up with the following effective free energy, \begin{equation} F=\frac{N}{\pi} \int d\epsilon\, f(\epsilon)\,\delta(\epsilon)+\int d\tau \left(\frac{N}{J^B_0}\sum_i^{N_f} |\Phi_i(\tau)|^2-Q\,\epsilon_0\right), \label{F} \end{equation} where $f(\epsilon)$ is the Fermi function and \begin{equation} \delta(\epsilon)=\arctan\left(\frac{\pi|\epsilon|\Delta/2}{(\epsilon-\epsilon_0)v_F^2+\epsilon\Delta \ln(\Lambda/|\epsilon|)}\right) \end{equation} is the phase shift, with $\Delta=\sum_i |\Phi_i(\epsilon)|^2/\pi$, and $\Lambda$ a positive ultraviolet cut-off which dictates the limit of validity of the continuum Dirac-like model for the free Hamiltonian. For graphene the typical value is $\Lambda\sim 2$eV. 
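As an aside, the coupling reductions quoted for the first two impurity configurations follow directly from Eqs.~(\ref{def1}-\ref{def3}) together with Eq.~(\ref{key}), and can be checked numerically; the sketch below (with the lattice constant $a$ and the nearest-neighbour coupling $\lambda$ both set to $1$) reproduces $J^A_0=J^A_1=\lambda$, $J^A_2=0$ for the impurity on a site and a single nonzero coupling $J^B_0=3\lambda$ for the impurity by substitution.
\begin{verbatim}
import numpy as np

a, lam = 1.0, 1.0                           # lattice constant and bare coupling, set to 1
K = (4*np.pi/(3*np.sqrt(3)*a)) * np.array([0.0, 1.0])
deltas = [np.array([0.5,  0.5*np.sqrt(3)])*a,
          np.array([0.5, -0.5*np.sqrt(3)])*a,
          np.array([-1.0, 0.0])*a]

def couplings(sites):
    """J_0, J_1, J_2 of Eqs. (def1)-(def3) for equal couplings lam on 'sites'."""
    J0 = sum(lam for _ in sites)
    J1 = sum(lam*np.cos(2*np.dot(K, v)) for v in sites)
    J2 = sum(lam*np.sin(2*np.dot(K, v)) for v in sites)
    return np.round([J0, J1, J2], 12)

# Eq. (key): the sums of cos and sin over the three delta_i vanish.
print(np.round([sum(np.cos(2*np.dot(K, d)) for d in deltas),
                sum(np.sin(2*np.dot(K, d)) for d in deltas)], 12))   # [0, 0]
# impurity on top of an A site: only v = (0,0) couples
print("on site      :", couplings([np.zeros(2)]))     # [1, 1, 0]
# impurity by substitution: only the three B neighbours delta_i couple
print("substitution :", couplings(deltas))             # [3, 0, 0]
\end{verbatim}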
So far we have considered a model of free fermions hopping on a lattice and possibly scattering off a magnetic impurity, but in order to get more realistic predictions we should consider, to some extent, interaction effects. In order to do that, we let the Fermi velocity be energy dependent, i.e. $v_F\equiv v_F(\epsilon)$. This is not unrealistic since it has been shown \cite{gonzalez94} that, due to Coulomb screening in an electronic system defined on the half-filled honeycomb lattice, as in the case of a monolayer of graphene \cite{gonzalez99,polini}, the effective Fermi velocity is renormalized in such a way that $v_F$ flows to higher values in the infrared, and consequently the density of states around the Fermi energy decreases. The low energy behavior for the renormalized velocity is $v_F\sim \ln(\epsilon^{-1})$ and so the density of states should behave naively as $\rho\sim \epsilon\, v_F^{-2}\sim \epsilon/\ln(\epsilon^{-1})^2$. The aim of the following section is then to study the role of such corrections on the Kondo effect, neglecting, however, possible renormalization of the Kondo coupling due to Coulomb interaction. The idea is to consider an uncharged magnetic impurity embedded in a cloud of charges dressed by Coulomb interaction. The realistic expression for the Fermi velocity is the following \cite{polini} \begin{equation} \label{vF} v_F(\epsilon)= v\left(1+\eta \ln(\Lambda/|\epsilon|)\right), \end{equation} where $\eta$ is related to the fine structure constant (for Thomas-Fermi screening it is $\eta=e^2/4\varepsilon \hbar v$, with $\varepsilon$ the dielectric constant) and $v$ is an energy independent velocity. \subsection{Saddle point equations} From Eq.~(\ref{F}), the extremal values of $\epsilon_0$ and $\Delta$, evaluated at zero energy in the static approximation, satisfy the saddle point equations $\frac{\partial F}{\partial \epsilon_0}=0$ and $\frac{\partial F}{\partial \Delta}=0$, which can be written as follows \cite{cassanello1,cassanello2} \begin{eqnarray} \label{Qsp} Q&=&\frac{1}{\pi}\int^{D}_{-D}d\epsilon \, f(\epsilon) \frac{\partial\delta}{\partial \epsilon_0}(\epsilon),\\ -\frac{1}{J^B_0}&=&\frac{1}{\pi^2}\int^{D}_{-D} d\epsilon \, f(\epsilon) \frac{\partial\delta}{\partial \Delta}(\epsilon), \label{Jsp} \end{eqnarray} where $D\le \Lambda$ is the bandwidth. Eq.~(\ref{Qsp}) dictates the relation between the singlet amplitude $\Delta\sim\sum_i\langle|\sum_{\sigma}\zeta_{i\sigma}^\dagger f_{\sigma}|^2\rangle$ and the impurity level $\epsilon_0$, at fixed occupation charge $Q$, and reads \begin{equation} \label{Q} Q=\int_{-D}^D \frac{d\epsilon}{\pi}\, f(\epsilon) \frac{2\pi v_F(\epsilon)^2|\epsilon|\Delta}{(\pi|\epsilon|\Delta)^2+4(v_F(\epsilon)^2(\epsilon-\epsilon_0)+\epsilon\Delta\ln(\Lambda/|\epsilon|))^2}. \end{equation} For $T=0$ and for a generic value of $Q$, we get the following behavior for the impurity level, $\epsilon_0\sim \Lambda\, e^{\frac{1}{\eta}\left(1+\frac{\Delta (1+\eta\ln(\Lambda/D))}{2\eta Qv^2(1+\eta\ln(\Lambda/D))-\Delta}\right)}$. In the non-interacting limit, formally, when $\eta\rightarrow 0$, it reduces to $\epsilon_0\sim D\, e^{-{2v^2 Q}/{\Delta}} $, in agreement with Ref.~\cite{cassanello2}. Strikingly, the limit $\Delta\rightarrow 0$ is finite and equal to $\Lambda\,e^{1/\eta}$, i.e. the two limits do not commute. This means that, in the presence of Coulomb interaction, the occupation charge for an impurity level within the bandwidth is finite only if the singlet is formed and $Q\sim \Delta/2\eta v v_F(D)$. 
The energy scale $\epsilon_0$ in Eq.~(\ref{Q}) does not play the role of an infrared cut-off for $\Delta\rightarrow 0$, and as a result, in that limit, $Q$ goes to zero for any value of $\epsilon_0$. This result is different from that found in the free case \cite{cassanello2} where the Fermi velocity is constant, $v_F(\epsilon)=v$. In the latter case $\epsilon_0$ vanishes as the singlet amplitude approaches zero, for any value of the occupation charge. The second equation, Eq.~(\ref{Jsp}), dropping the indices for simplicity, reads \begin{equation} \frac{1}{J}=\int_{-D}^D \frac{d\epsilon}{\pi}\, f(\epsilon) \frac{2v_F(\epsilon)^2|\epsilon|(\epsilon_0-\epsilon)}{(\pi|\epsilon|\Delta)^2+4(v_F(\epsilon)^2(\epsilon-\epsilon_0)+\epsilon\Delta\ln(\Lambda/|\epsilon|))^2}. \end{equation} Setting $\epsilon_0=0$ and $\Delta=0$ at $T=0$, we get the following critical coupling \begin{equation} \label{Jc} \frac{1}{J_c}=\int_{0}^D\frac{d\epsilon}{2\pi} \frac{1}{v_F(\epsilon)^2}=\frac{D}{2\pi\eta^2v^2}\left\{\frac{\eta }{1+\eta \ln(\Lambda/D)}-\frac{\Lambda}{D} e^{1/\eta}\Gamma[0,1/\eta+\ln(\Lambda/D)]\right\}, \end{equation} where $\Gamma[a,x]\equiv \int^\infty_x t^{a-1}e^{-t}dt$ is the Incomplete Gamma function. Sending $\eta\rightarrow 0$ we recover the standard result $\frac{1}{J_c}=\frac{D}{2\pi v^2}$ \cite{withoff}.\\ At this point it is worthwhile making a digression. Contrary to the free case, where the limit $\lim_{D\rightarrow 0}\frac{v^2}{DJ_c}$ is trivially finite and equal to $\frac{1}{2\pi}$, in the interacting case, using Eq.~(\ref{Jc}), this limit is zero. On the other hand, if we replace $v$ with the renormalized velocity $v_F(D)$, the limit \begin{equation} \lim_{D\rightarrow 0}\frac{v_F(D)^2}{DJ_c}=\frac{1}{2\pi} \end{equation} is finite and equal to the standard case. This is consistent with the fact that the dimensionless parameter relevant in the Kondo effect is not the bare coupling $J$ but the product $\rho J$ and that, in the presence of a renormalized Fermi velocity, Eq.~(\ref{vF}), the density of states is modified as $\rho(\epsilon)\sim\epsilon/v_F(\epsilon)^2$. In order to validate this result and to get more insight one can address the problem from a renormalization group perspective, as we do in the Appendix.\\ From Eq.~(\ref{Jc}) we find that the critical coupling $J_c\rho(D)$ is not universal, being an increasing function of the ratio $D/\Lambda$, and is larger than the corresponding mean field result in the non-interacting case for any positive $D\le \Lambda$.\\To go beyond the tree level, one should consider quantum fluctuations, i.e. higher orders in the large-$N$ expansion, which might spoil the critical point obtained at the mean field level, as in the case of strictly power-law pseudogap Kondo systems \cite{ingersent2}, if the particle-hole symmetry is preserved. In order to break particle-hole symmetry, however, one can straightforwardly include a gate voltage in the model \cite{sengpunta}. In any case the role of fluctuations, in the presence of a logarithmic deviation from power-law behavior in the density of states, is still an open issue which we are not going to address here. \subsection{Magnetic susceptibility and specific heat} \paragraph{Magnetic susceptibility.} The magnetic field can be easily included in our final model by introducing a Zeeman term $H\sigma_3$. This term modifies the phase shift in the free energy, Eq.~(\ref{F}), as $\delta(\epsilon)\rightarrow \frac{1}{2}(\delta(\epsilon+H)+\delta(\epsilon-H))$. 
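The closed form in Eq.~(\ref{Jc}) can be cross-checked by direct numerical integration; the sketch below uses hypothetical parameter values, evaluates $\Gamma[0,x]$ through the exponential integral $E_1(x)$, and also prints the non-interacting value $D/(2\pi v^2)$ for comparison.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1            # exp1(x) = Gamma[0, x]

v, eta, Lam, D = 1.0, 0.3, 2.0, 1.5       # hypothetical values (Lambda ~ 2 eV, v set to 1)
vF = lambda e: v*(1.0 + eta*np.log(Lam/abs(e)))

lhs, _ = quad(lambda e: 1.0/(2*np.pi*vF(e)**2), 0.0, D)
rhs = D/(2*np.pi*eta**2*v**2) * (eta/(1 + eta*np.log(Lam/D))
        - (Lam/D)*np.exp(1/eta)*exp1(1/eta + np.log(Lam/D)))
print(lhs, rhs)                            # the two expressions agree
print(D/(2*np.pi*v**2))                    # eta -> 0 (non-interacting) value, for comparison
\end{verbatim}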
We can, therefore, calculate the magnetization \begin{equation} M(T,H)=-\frac{\partial F}{\partial H}, \end{equation} and the magnetic susceptibility \begin{equation} \chi(T,H)=-\frac{\partial^2 F}{\partial H^2}. \end{equation} For $T\rightarrow 0$ and $H\ll \epsilon_0$, we have the following magnetization \begin{equation} M(0,H)=\frac{N}{2\pi}\left[\delta(-H)-\delta(H)\right]\simeq \frac{N\Delta H^2}{2\epsilon_0^2 v_F(H)^4}\left[v_F(H)^2+\Delta\ln(\Lambda/H)\right]\simeq \frac{N\Delta}{2\epsilon_0^2 v^2\eta^2}\frac{H^2}{\ln(\Lambda/H)^2}. \end{equation} The final result for the magnetization is valid only if $H\ll \Lambda e^{-\Delta/(v\eta)^2}$. In the same limit the asymptotic behavior of the magnetic susceptibility is, then, given by \begin{equation} \label{chi(0,H)} \chi(0,H)\simeq \frac{N\Delta}{\epsilon_0^2 v^2\eta^2} \frac{H}{\ln(\Lambda/H)^2}. \end{equation} For $\Lambda e^{-\Delta/(\eta v)^2}\ll H\ll\epsilon_0$, instead, one gets $\chi(0,H)\simeq \frac{N\Delta^2 H}{\epsilon_0^2 v_F(H)^4}\ln({\Lambda}/{H})$, and for $v_F(H)\rightarrow v$, one recovers the result for the non-interacting case \cite{cassanello2}.\\ For $H\rightarrow 0$ and $T\ll \epsilon_0,\,\Lambda e^{-\Delta/(\eta v)^2}$, we have, instead, the following magnetic susceptibility \begin{equation} \chi(T,0)=\frac{N}{\pi}\int_{-\infty}^{\infty} d\epsilon \frac{\partial f}{\partial\epsilon}\frac{\partial\delta}{\partial\epsilon}\simeq \frac{N\Delta T}{2\epsilon_0^2}\int_{-\infty}^{\infty} dx \frac{e^{x}(e^x-1)}{(1+e^x)^3}\frac{|x|x}{v_F(T|x|)^2} \simeq \frac{2\ln(2)N\Delta}{\epsilon_0^2 v^2\eta^2}\frac{T}{\ln(\Lambda/T)^2}, \end{equation} which crosses over to $\chi(T,0)=\frac{2\ln(2)N\Delta^2}{\epsilon_0^2 v_F(T)^4} T\,{\ln(\Lambda/T)}$, for $\Lambda e^{-\Delta/(\eta v)^2}\ll T\ll \epsilon_0$. In the presence of Coulomb interaction, therefore, the magnetization and the susceptibility vanish logarithmically faster than in the free pseudogap case as zero magnetic field or zero temperature is approached. The overscreening effect pointed out in Ref.~\cite{cassanello2} is, then, enhanced by a factor of order $\frac{\Delta\eta^2}{v^2}[\ln(\frac{\Lambda}{{\textrm{min}}(T,H)})]^3$, in the presence of a renormalized Fermi velocity induced by the Coulomb screening. \paragraph{Specific heat.} Let us now calculate the impurity contribution to the specific heat, defined as \begin{equation} C(T,H)=-T\frac{\partial^2 F}{\partial T^2}. \end{equation} For $H\rightarrow 0$ and $T\ll \epsilon_0$, we have \begin{eqnarray} \nonumber C(T,0)=\frac{N}{T\pi}\int_{-\infty}^{\infty} d\epsilon \,\epsilon^2\frac{\partial f}{\partial\epsilon}\frac{\partial\delta}{\partial\epsilon} \simeq \frac{N\Delta T^2}{2\epsilon_0^2}\int_{-\infty}^{\infty} dx \left(\frac{x^2e^{x}(e^x-1)}{(1+e^x)^3}-\frac{2xe^x}{(1+e^x)^2}\right)\frac{|x|x}{v_F(T|x|)^2}\\ \simeq \frac{9\zeta(3)N\Delta}{\epsilon_0^2 v^2\eta^2}\frac{T^2}{\ln(\Lambda/T)^2},\; \label{C(T,0)} \end{eqnarray} where $\zeta(3)\approx 1.2$ is the Riemann zeta function at $3$. Also in this case we assume $T\ll \Lambda e^{-\Delta/(\eta v)^2}$. 
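The dimensionless integrals behind the $2\ln 2$ and $9\zeta(3)$ prefactors can be verified numerically; in the sketch below the combinations appearing in the integrands are written out in terms of the derivatives of the Fermi function $f(x)=1/(e^x+1)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

fp  = lambda x: -np.exp(x)/(1 + np.exp(x))**2                   # df/dx
fpp = lambda x: np.exp(x)*(np.exp(x) - 1)/(1 + np.exp(x))**3    # d^2 f/dx^2

I_chi, _ = quad(lambda x: fpp(x)*x*abs(x), -40, 40)
I_C,   _ = quad(lambda x: (x**2*fpp(x) + 2*x*fp(x))*x*abs(x), -40, 40)

print(I_chi/2, 2*np.log(2))        # prefactor of chi(T,0):  2 ln 2
print(I_C/2, 9*zeta(3))            # prefactor of C(T,0):    9 zeta(3)
\end{verbatim}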
For $T$ larger than that energy scale, instead, $C(T,0)\simeq \frac{9\zeta(3)N\Delta^2}{\epsilon_0^2 v_F(T)^4}{T^2}{\ln(\Lambda/T)}$, and for $\eta\rightarrow 0$ one recovers the non-interacting result \cite{cassanello2}.\\ For $T\rightarrow 0$ and $H\ll \epsilon_0$ we have, instead, the following behavior \begin{eqnarray} \nonumber C(T,H)=\frac{N}{2\pi T}\int_{-\infty}^{\infty} d\epsilon \,\epsilon^2 \frac{\partial f}{\partial\epsilon}\left(\frac{\partial \delta}{\partial\epsilon}(\epsilon+H)+\frac{\partial \delta}{\partial\epsilon}(\epsilon-H)\right)\simeq -T\chi(0,H)\int_{-\infty}^{\infty} dx x^2\frac{\partial f}{\partial x}\\ =\frac{\pi^2}{3}T \chi(0,H), \label{C(0,H)} \end{eqnarray} with $\chi(0,H)$ calculated before. Eq.~(\ref{C(0,H)}) corresponds also to the asymptotic behavior of the impurity entropy, $-\partial F/\partial T$, for $T\ll H$. The specific heat, like the magnetic susceptibility, vanishes logarithmically faster than in the case with constant Fermi velocity as zero magnetic field and zero temperature are approached. However, the Wilson ratios $C/(T \chi)$, both for $T\ll H$ and for $H\ll T$, are exactly the same as those found in Ref.~\cite{cassanello2}. \section{Summary and conclusions} We have derived the low-energy continuum limit of a Kondo-like impurity model defined on a honeycomb lattice at half-filling. By an angular momentum eigenmode expansion we have obtained an effective one-dimensional model with two colors and four flavors, two for each sublattice sector, Eq.~(\ref{H_full}). The impurity effective Hamiltonian involves two angular momenta which are linked with the two nodes. We have found, therefore, that the internode scattering contributions correspond also to the angular momentum mixing terms. Quite generally, we have to deal with six couplings in the spin-isotropic case, which are linear combinations of the original lattice parameters, Eqs.~(\ref{def1}-\ref{def3}). However, due to the underlying lattice symmetry, in the tight-binding approximation, the number of Kondo couplings can be reduced to one, for particular impurity configurations. We have finally calculated, both within the large-$N$ expansion technique and the renormalization group approach, the mean field critical Kondo coupling, which is increased by the presence of a renormalized Fermi velocity driven by the Coulomb interaction. From the calculation of some thermodynamic quantities, we find, however, that even though the Kondo phase is suppressed, at least at the mean field level, once the singlet is formed, Kondo screening effects are reinforced by the Coulomb charge screening. \acknowledgments I would like to thank A. De Martino, D.M. Basko and L. De Leo for useful discussions.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0048.json.gz" }
\section{Introduction} Facility location problems are widely studied in Operations Research, Economics, Mathematics, Theoretical Computer Science, and Artificial Intelligence. In essence, in these problems facilities must be placed in some underlying space to serve a set of clients that also live in that space. Famous applications of this are the placement of hospitals in rural areas to minimize the emergency response time or the deployment of wireless Internet access points to maximize the offered bandwidth to users. These problems are purely combinatorial optimization problems and can be solved via a rich set of methods. Much more intricate are facility location problems that involve competition, i.e., where the facilities compete for the clients. These settings can no longer be solved via combinatorial optimization and instead, methods from Game Theory are used for modeling and analyzing them. The first model on competitive facility location is the famous \emph{Hotelling-Downs model}, first introduced by \citet{hotelling} and later refined by \citet{downs}. Their original interpretations are selling a commodity in the main street of a town, and parties placing themselves in a political left-to-right spectrum, respectively. They assume a one-dimensional market on which clients are uniformly distributed and there are $k$ facility agents that each want to place a single facility on the market. Each facility agent gets the clients to which her facility is closest. \citet{voronoigames} introduced Voronoi games on networks, which move the problem onto a graph and assume discrete clients on each node. The models mentioned above are one-sided, i.e., only the facility agents face a strategic choice while the clients simply patronize their closest facility independently of the choices of other clients. Obviously, realistic client behavior can be more complex than this. For example, a client might choose not to patronize any facility if there is no facility sufficiently close to her. This setting was recently studied by \citet{feldman-hotelling}, \citet{hotelling-limited-attraction} and \citet{hotelling-line-bubble}, albeit with continuous clients on a line. In their model with limited attraction ranges, clients split their spending capacity uniformly among all facilities that are within a certain distance. In contrast to the Hotelling-Downs model, pure Nash equilibria always exist. In another related variant by \citet{fournier2020spatial}, clients that have multiple facilities in their range choose the nearest facilities. Another natural client behavior is that they might avoid crowded facilities to reduce waiting times. This notion was introduced to the Hotelling-Downs model by \citet{load-balancing}, also on a line. Clients consider a linear combination of both distance and waiting time, as they want to minimize the total time spent visiting a facility. This models clients that perform load balancing between different facilities. \citet{Peters2018} prove the existence of subgame perfect equilibria for certain trade-offs of distance and waiting time for two, four and six facilities and they conjecture that equilibria exist for all cases with an even number of facilities for client utility functions that are heavily tilted towards minimizing waiting times. \citet{hotelling-load-balancing} investigated the existence of approximate pure subgame perfect equilibria for Kohlberg's model and their results indicate that $1.08$-approximate equilibria exist. 
The most notable aspect of Kohlberg's model is that it is two-sided, i.e., both facility and client agents act strategically. This implies that the facility agents have to anticipate the client behavior, in particular the client equilibrium. For Kohlberg's model \citet{hotelling-load-balancing} show that this entails the highly non-trivial problem of solving a complex system of equations. In this paper we present a very general two-sided competitive facility location model that is essentially a combination of the models discussed above. Our model has an underlying host graph with discrete weighted clients on each vertex. The host graph is directed, which allows us to model limited attraction ranges, and we have facilities and clients that both face strategic decisions. Most notably, in contrast to Kohlberg's model and despite our model's generality, we provide an efficient algorithm for computing the facilities' loads in a client equilibrium. Hence, facility agents can efficiently anticipate the client behavior and check if a game state is in equilibrium. \subsection{Further Related Work} Voronoi games were introduced by \citet{voronoi1d} on a line. For the version on networks by \citet{voronoigames}, the authors show that equilibria may not exist and that existence is NP-hard to decide. Also, they investigate the ratio between the social cost of the best and the worst equilibrium state, where the social cost is measured by the total distance of all clients to their selected facilities. With $n$ the number of clients and $k$ the number of facilities, they prove bounds of $\Omega(\sqrt{n/k})$ and $\mathcal{O}(\sqrt{kn})$. While we are not aware of other results on general graphs, there is work for specific graph classes: \citet{voronoi-cycle} limit their investigation to cycle graphs and characterize the existence of equilibria and bound the Price of Anarchy (PoA)~\cite{poa} and the Price of Stability (PoS)~\cite{pos} by $\frac94$ and $1$, respectively. Additionally, there are many closely related variants with two agents: restaurant location games \cite{restaurant-location}, a variant by~\citet{duopoly-voronoi}, and a multi-round version~\cite{voronoi-multi-round}. Moreover, there are variants played in $k$-dimensional space: \citet{voronoi-kd-voters}, \citet{voronoi1d}, \citet{voronoi-choice}. To the best of our knowledge, there is no variant with strategic clients aiming at minimizing their maximum waiting time. A concept related to our model is that of utility systems, as introduced by \citet{vetta-utility-system}. Agents gain utility by selecting a set of acts, which they choose from a collection of subsets of a groundset. Utility is assigned by a function that takes the selected acts of all agents as an input. Two special types are considered: basic and valid utility systems. For the former, it is shown that pure Nash equilibria (NE) exist. For the latter, no NE existence is shown but the PoA is upper bounded by 2. We show in the supplementary material that our model with load balancing clients is a valid but not a basic utility system. Covering games~\cite{gairing-covering-games} correspond to a one-sided version of our model, i.e., where clients simply distribute their weight uniformly among all facilities in their shopping range. There, pure NE exist and the PoA is upper bounded by~$2$. More general versions are investigated by \citet{3g-market-sharing} and \citet{alex-market-sharing} in the form of market sharing games. In these models, $k$ agents choose to serve a subset of $n$ markets. 
Each market then equally distributes its utility among all agents who serve it. \citet{alex-market-sharing} show a PoA of $2-\frac1k$ for their game. Recently \citet{alex-network-investment} introduced a model which considers an inherent load balancing problem, however, each facility agent can create and choose multiple facilities and each client agent chooses multiple facilities. For further related models we refer to the excellent surveys by \citet{ELT93} and \citet{RE05}. \subsection{Model and Preliminaries} We consider a game-theoretic model for non-cooperative facility location, called the \emph{Two-Sided Facility Location Game (\limtmpmodel{})}, where two types of agents, $k$ \emph{facilities} and $n$ \emph{clients}, strategically interact on a given vertex-weighted directed host graph $H = (V,E,w)$, with $V = \{v_1,\dots,v_n\}$, where $w:V \to \mathbb{N}$ denotes the vertex weight. Every vertex $v_i \in V$ corresponds to a client with weight $w(v_i)$, that can be understood as her spending capacity, and at the same time each vertex is a possible location for setting up a facility for any of the $k$ facility agents $\mathcal{F} = \{f_1,\dots,f_k\}$. Any client $v_i \in V$ considers visiting a facility in her \emph{shopping range} $N(v_i)$, i.e., her direct closed neighborhood $N(v_i) = \{v_i\} \cup \{z \mid (v_i,z) \in E\}$. Moreover, let $w(X) = \sum_{v_i \in X}w(v_i)$, for any $X \subseteq V$, denote the total spending capacity of the client subset $X$. In our setting the strategic behavior of the facility and the client agents influences each other. Facility agents select a location to attract as much client weight as possible, whereas clients strategically decide how to distribute their spending capacity among the facilities in their shopping range. More precisely, each facility agent $f_j \in \mathcal{F}$ selects a single location vertex $s_j \in V$ for setting up her facility, i.e., the strategy space of any facility agent $f_j\in \mathcal{F}$ is $V$. Let $\mathbf{s} = (s_1,\dots,s_k)$ denote the \emph{facility placement profile}. And let $\mathcal{S} = V^k$ denote the set of all possible facility placement profiles. We will sometimes use the notation $\mathbf{s} = (s_j,s_{-j})$, where $s_{-j}$ is the vector of strategies of all facilities agents except $f_j$. Given $\mathbf{s}$, we define the \emph{attraction range} for a facility $f_j$ on location $s_j \in V$ as $A_\mathbf{s}(f_j) = \{s_j\} \cup \{v_i \mid (v_i,s_j) \in E\}$. We extend this to sets of facilities $F \subseteq \mathcal{F}$ in the natural way, i.e., $A_\mathbf{s}(F) = \{s_j \mid f_j \in F\} \cup \{v_i \mid (v_i,s_j) \in E, f_j\in F\}$. Moreover, let $w_\mathbf{s}(\mathcal{F}) = \sum_{v_i \in A_\mathbf{s}(\mathcal{F})} w(v_i)$. We assume that all facilities provide the same service for the same price and arbitrarily many facilities may be co-located on the same location. Each client $v_i\in V$ strategically decides how to distribute her spending capacity $w(v_i)$ among the opened facilities in her shopping range~$N(v_i)$. For this, let $N_{\mathbf{s}}(v_i) = \{f_j \mid s_j \in N(v_i)\}$ denote the set of facilities in the shopping range of client $v_i$ under $\mathbf{s}$. Let $\sigma: \mathcal{S} \times V \to \mathbb{R}_+^k$ denote the \emph{client weight distribution function}, where $\sigma(\mathbf{s},v_i)$ is the weight distribution of client $v_i$ and $\sigma(\mathbf{s},v_i)_{j}$ is the weight distributed by $v_i$ to facility~$f_j$. 
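To fix the notation, the following minimal sketch (a hypothetical toy instance, not taken from the paper) stores the directed host graph as an adjacency list and computes the shopping ranges $N(v_i)$, the sets $N_\mathbf{s}(v_i)$ of facilities within reach, and the attraction ranges $A_\mathbf{s}(f_j)$ directly from the definitions above.
\begin{verbatim}
# Hypothetical toy instance: 4 clients on a directed host graph, 2 facility agents.
edges = {"v1": ["v2"], "v2": ["v1", "v3"], "v3": ["v2"], "v4": ["v3"]}   # out-neighbours
weight = {"v1": 2, "v2": 1, "v3": 1, "v4": 3}                            # spending capacities
placement = {"f1": "v2", "f2": "v3"}                                     # placement profile s

def shopping_range(v):
    """N(v): the closed out-neighbourhood of client v."""
    return {v} | set(edges[v])

def facilities_in_range(v, s):
    """N_s(v): facilities that client v may patronize under placement s."""
    return {f for f, loc in s.items() if loc in shopping_range(v)}

def attraction_range(f, s):
    """A_s(f): clients that have facility f in their shopping range."""
    return {v for v in edges if s[f] in shopping_range(v)}

for v in edges:
    print(v, sorted(facilities_in_range(v, placement)))
for f in placement:
    print(f, sorted(attraction_range(f, placement)))
\end{verbatim}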
We say that $\sigma$ is \emph{feasible} for $\mathbf{s}$, if all clients having at least one facility within their shopping range distribute all their weight to the respective facilities and all other clients distribute nothing. Formally, $\sigma$ is feasible for $\mathbf{s}$, if for all $v_i\in V$ we have $\sum_{f_j \in N_\mathbf{s}(v_i)} \sigma(\mathbf{s},v_i)_j = w(v_i)$, if $N_\mathbf{s}(v_i) \neq \emptyset$, and $\sigma(\mathbf{s},v_i)_j = 0$, for all $1\leq j \leq k$, if $N_\mathbf{s}(v_i) = \emptyset$. We use the notation $\sigma = (\sigma_i,\sigma_{-i})$ and $(\sigma_i',\sigma_{-i})$ denotes the changed client weight distribution function that is identical to $\sigma$ except for client $v_i$, who plays $\sigma'(\mathbf{s},v_i)$ instead of $\sigma(\mathbf{s},v_i)$. Any state $(\mathbf{s},\sigma)$ of the \limtmpmodel{} is determined by a facility placement profile~$\mathbf{s}$ and a feasible client weight distribution function $\sigma$. A state $(\mathbf{s},\sigma)$ then yields a \emph{facility load} $\ell_j(\mathbf{s},\sigma)$ with $\ell_j(\mathbf{s},\sigma) = \sum_{i=1}^n \sigma(\mathbf{s},v_i)_j$ for facility agent $f_j$. Hence, $\ell_j(\mathbf{s},\sigma)$ naturally models the total congestion for the service offered by the facility of agent $f_j$ induced by~$\sigma$. A facility agent $f_j$ strategically selects a location $s_j$ to maximize her induced facility load $\ell_j(\mathbf{s},\sigma)$. We assume that the service quality of facilities, e.g. the waiting time, deteriorates with increasing congestion. Hence, for a client the facility load corresponds to the waiting time at the respective facility. There are many ways in which clients could distribute their spending capacity. As a proof of concept we consider the \emph{load balancing \limtmpmodel{}} with \emph{load balancing clients}, i.e., a natural strategic behavior where client $v_i$ strategically selects $\sigma(\mathbf{s},v_i)$ to minimize her maximum waiting time. More precisely, client $v_i$ tries to minimize her \emph{incurred maximum facility load} over all her patronized facilities (if any). More formally, let $P_i(\mathbf{s},\sigma) = \{j \mid \sigma(\mathbf{s},v_i)_j > 0\}$ denote the set of facilities patronized by client $v_i$ in state $(\mathbf{s},\sigma)$. Then client $v_i$'s incurred maximum facility load in state $(\mathbf{s},\sigma)$ is defined as $L_i(\mathbf{s},\sigma) = \max_{j \in P_i(\mathbf{s},\sigma)} \ell_j(\mathbf{s},\sigma)$. We say that $\sigma^*$ is a \emph{client equilibrium weight distribution}, or simply a \emph{client equilibrium}, if for all $v_i \in V$ we have that $L_i(\mathbf{s},(\sigma_{i}^*,\sigma_{-i}^*)) \leq L_i(\mathbf{s},(\sigma_{i}',\sigma_{-i}^*))$ for all possible weight distributions $\sigma'(\mathbf{s},v_i)$ of client $v_i$. See Figure~\ref{fig:figure1} for an illustration of the load balancing \limtmpmodel{}. \begin{figure}[t] \centering \includegraphics[width=8.0cm]{example_instance} \caption{ Example of the load balancing \limtmpmodel{}. The clients (vertices) split their weight (shown by numbers) among the facilities (colored dots) in their shopping range. The client distributions are shown by colored pie charts. Left: The blue facility receives a load of $2$ while all other facilities get a load of $\frac43$. The left client with weight $2$ distributes weight $\frac{4}{3}$ to the yellow facility and $\frac{1}{3}$ to both the green and the red facility. The state is not in SPE as the red facility can improve her load to $\frac32$ by co-locating with the blue facility. 
Right: A SPE for this instance, all facilities have a load of $\frac32$.} \label{fig:figure1} \end{figure} We define the \emph{stable states} of the \limtmpmodel{} as \emph{subgame perfect equilibria (SPE)}, since we inherently have a two-stage game. First, the facility agents select locations for their facilities and then, given this facility placement, the clients strategically distribute their spending capacity among the facilities in their shopping range. A state $(\mathbf{s},\sigma)$ is in SPE, or \emph{stable}, if \begin{itemize} \item[(1)] $\forall f_j \in \mathcal{F}, \forall s_j' \in V$: $\ell_j(\mathbf{s},\sigma) \geq \ell_j((s_j',s_{-j}),\sigma)$ and \item[(2)] $\forall \mathbf{s} \in \mathcal{S}, \forall v_i \in V: L_i(\mathbf{s},\sigma) \leq L_i(\mathbf{s},(\sigma_{i}',\sigma_{-i}))$ for all feasible weight distributions $\sigma'(\mathbf{s},v_i)$ of client $v_i$. \end{itemize} We say that client $v_i$ is \emph{covered by $\mathbf{s}$}, if $N_\mathbf{s}(v_i) \neq \emptyset$, and \emph{uncovered by $\mathbf{s}$}, otherwise. Let $C(\mathbf{s}) = \{v_i \mid v_i \in V, N_\mathbf{s}(v_i) \neq \emptyset\}$ denote the set of covered clients under facility placement~$\mathbf{s}$. We will compare states of the \limtmpmodel{} by measuring their \emph{social welfare} that is defined as the \emph{weighted participation rate} $W(\mathbf{s}) = w(C(\mathbf{s})) = \sum_{v_i \in C(\mathbf{s})}w(v_i)$, i.e., the total spending capacity of all covered clients. For a host graph $H$ and a number of facility agents $k$, let $\text{\emph{OPT}}(H,k)$ denote the facility placement profile that maximizes the weighted participation rate $W(\text{\emph{OPT}}(H,k))$ among all facility placement profiles with $k$ facilities on host graph~$H$. We measure the inefficiency due to the selfishness of the agents via the Price of Anarchy (PoA) and the Price of Stability (PoS). Let $\text{bestSPE}(H,k)$ (resp. $\text{worstSPE}(H,k)$) denote the SPE with the highest (resp. lowest) social welfare among all SPEs for a given host graph $H$ and a facility number $k$. Moreover, let $\mathcal{H}$ be the set of all possible host graphs $H$. Then the PoA is defined as $$PoA :=\max_{H\in \mathcal{H},k} W(\text{\emph{OPT}}(H,k))/W(\text{worstSPE}(H,k)),$$ whereas the PoS is defined as $$PoS:=\max_{H\in \mathcal{H},k} W(\text{\emph{OPT}}(H,k))/W(\text{bestSPE}(H,k)).$$ We study dynamic properties of the \limtmpmodel{}. Let an \emph{improving move} by some (facility or client) agent be a strategy change that improves the agent's utility. A game has the \emph{finite improvement property (FIP)} if all sequences of improving moves are finite. The FIP is equivalent to the existence of an \emph{ordinal potential function}~\cite{MS96}. \subsection{Our Contribution} We introduce and analyze the \limtmpmodel{}, a general model for competitive facility location games, where facility agents and also client agents act strategically. We focus on the load balancing \limtmpmodel{}, where clients selfishly try to minimize their maximum waiting times that not only depend on the placement of the facilities but also on the behavior of all other client agents. We show that client equilibria always exist and that all client equilibria are equivalent from the facility agents' point-of-view. Additionally, we provide an efficient algorithm for computing the facility loads in a client equilibrium that enables facility agents to efficiently anticipate the clients' behavior. This is crucial in a two-stage game-theoretic setting. 
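To illustrate how a facility agent could anticipate the client behavior numerically, the following sketch computes a client equilibrium for a hypothetical toy placement by minimizing the sum of squared facility loads, anticipating the optimization problem (EQ) used in the proof of Theorem~\ref{theo:existence} below; the instance, the helper names and the solver choice are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance: client -> (weight, facilities in shopping range).
clients = {"v1": (1.0, ["f1"]), "v2": (1.0, ["f1", "f2"]), "v3": (2.0, ["f2"])}
facilities = ["f1", "f2"]
pairs = [(v, f) for v, (_, fs) in clients.items() for f in fs]   # variables x_{v,f}

def loads(x):
    """Facility loads induced by the weight distribution x."""
    return np.array([sum(x[i] for i, (_, f) in enumerate(pairs) if f == g)
                     for g in facilities])

objective = lambda x: np.sum(loads(x)**2)   # a minimizer is a client equilibrium
cons = [{"type": "eq",
         "fun": lambda x, v=v, w=w: sum(x[i] for i, (u, _) in enumerate(pairs) if u == v) - w}
        for v, (w, _) in clients.items()]
x0 = np.array([w / len(fs) for _, (w, fs) in clients.items() for _ in fs])  # uniform split
res = minimize(objective, x0, bounds=[(0, None)] * len(pairs),
               constraints=cons, method="SLSQP")
print(dict(zip(facilities, np.round(loads(res.x), 6))))   # expected: {'f1': 2.0, 'f2': 2.0}
\end{verbatim}
In this toy placement the shared client v2 shifts all her weight to f1, equalizing the two loads at $2$, in line with the equal-load property of shared facilities shown below.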
Moreover, since there are only $n$ possible locations for facilities, we can efficiently check if a given state of the load balancing \limtmpmodel{} is in SPE. Using a potential function argument, we can show that a SPE always exists. Finally, we consider the \limtmpmodel{} with an arbitrary feasible client weight distribution function. For this broad class of games, we prove that the PoA is upper bounded by $2$ and we give an almost tight lower bound of $2 - \frac{1}{k}$ on the PoA and~PoS. This implies an almost tight PoA lower bound for the load balancing \limtmpmodel{}. Furthermore, we show that computing a social optimum state for the \limtmpmodel{} with an arbitrary feasible client weight distribution function $\sigma$ is NP-hard for every feasible $\sigma$, hence also for the load balancing \limtmpmodel{}. \section{Load Balancing Clients} In this section we analyze the load balancing \limtmpmodel{} in which we consider not only strategic facilities that try to get patronized by as many clients as possible but we also have selfish clients that strategically distribute their spending capacity to minimize their maximum waiting time for getting serviced. We start with a crucial statement that enables the facility agents to anticipate the clients' behavior. \begin{theorem} \label{theo:existence} For a facility placement profile $\mathbf{s}$, a client equilibrium $\sigma$ exists and every client equilibrium induces the same facility loads $(\ell_1(\mathbf{s},\sigma),\ldots,\ell_k(\mathbf{s},\sigma))$. \end{theorem} \begin{proof} We consider the following optimization problem (EQ): \begin{align*} &\min_{\sigma} \sum_{j=1}^k \ell_j(\mathbf{s},\sigma)^2\\ \text{subject to}\\ \sigma(\mathbf{s},v_i)_j &\ge 0 &\text{ for all } v_i \in V,\ f_j \in N_\mathbf{s}(v_i)\\ \sigma(\mathbf{s},v_i)_j &= 0 &\text{ for all } v_i \in V,\ f_j \notin N_\mathbf{s}(v_i)\\ \sum_{f_j \in N_\mathbf{s}(v_i)} \sigma(\mathbf{s},v_i)_j &= w(v_i) &\text{ for all } v_i \in V \text{ with } N_\mathbf{s}(v_i) \ne \emptyset\\ \end{align*} It is easy to see that an optimal solution $\sigma$ of EQ is a client equilibrium. For the sake of contradiction, assume that there exists a client $v_i$ and two facility agents $f_p, f_q \in N_\mathbf{s}(v_i)$ with $\ell_q(\mathbf{s},\sigma) > \ell_p(\mathbf{s},\sigma)$ and $\sigma(\mathbf{s},v_i)_q > 0$. However, this contradicts the optimality of $\sigma$ as the KKT conditions~\citep{KKT} demand that $\ell_q(\mathbf{s},\sigma) \le \ell_p(\mathbf{s},\sigma)$ for all $f_p,f_q \in N_\mathbf{s}(v_i)$ with $\sigma(\mathbf{s},v_i)_q > 0$. Moreover, the KKT conditions are precisely the conditions of a client equilibrium, hence every client equilibrium is an optimal solution of EQ. Observe that the objective of EQ is strictly convex in the facilities' loads $\ell_1(\mathbf{s},\sigma),\ldots,\ell_k(\mathbf{s},\sigma)$ and the set of feasible solutions is compact and convex. Suppose there are two global optima $\sigma$ and $\sigma'$ of EQ. By convexity of the objective function, we must have $\ell_j(\mathbf{s},\sigma) = \ell_j(\mathbf{s},\sigma')$ for all facility agents $f_j$ as otherwise a convex combination of $\sigma$ and $\sigma'$ would yield a feasible solution for EQ with smaller objective function value.
\end{proof} \noindent Two facility agents sharing a client have equal load if the shared client puts weight on both of them: \begin{lemma} \label{lemma:shared-client-equal-load-balancing} In the load balancing \limtmpmodel{}, for a facility placement $\mathbf{s}$, in a client equilibrium $\sigma$, if there are two facility agents $f_p$ and $f_q$ and a client $v_i$ with $p, q \in P_i(\mathbf{s}, \sigma)$, then $\ell_p(\mathbf{s},\sigma)=\ell_q(\mathbf{s},\sigma)$. \end{lemma} \begin{proof} Let $v_i$ be a client and let $f_p$ be an agent with the highest load among $P_i(\mathbf{s}, \sigma)$. Assume that there is an agent $q \in P_i(\mathbf{s}, \sigma)$ with $\ell_p(\mathbf{s},\sigma)>\ell_q(\mathbf{s},\sigma)$. In this case, client $v_i$ can decrease her weight on $f_p$ (and on all facility agents in $P_i(\mathbf{s}, \sigma)$ with the same load) and increase her weight on~$f_q$, which decreases her incurred maximum facility load. This contradicts $\sigma$ being a client equilibrium. \end{proof} \noindent Next, we define a \emph{shared client set}, which represents a set of facility agents who share weight of the same clients. \begin{definition} \label{def:shared-client-set} For a facility placement profile $\mathbf{s}$, let $f_p$ be an agent and $\sigma$ be a client equilibrium. We define the shared client set $S_\sigma(f_p)$ of facility agent $f_p$ as the smallest set of facility agents such that (1) $f_p \in S_{\sigma}(f_p)$ and (2) for any two facility agents $f_q, f_r$: if $f_q\in S_\sigma(f_p)$ and there is a client $v_i$ with $q,r \in P_i(\mathbf{s}, \sigma)$, then $f_r\in S_{\sigma}(f_p)$. \end{definition} \noindent We prove two properties of such a shared client set: First, all facility agents in a shared client set have the same load, and second, a client's weight is either completely inside or completely outside a shared client set in a client equilibrium. \begin{lemma} \label{lemma:shared-set-equal-load} For a facility placement $\mathbf{s}$ in a client equilibrium $\sigma$, for every $f_q, f_r \in S_{\sigma}(f_p)$ we have $\ell_q(\mathbf{s},\sigma) = \ell_r(\mathbf{s},\sigma)$. \end{lemma} \begin{proof} As $f_q$ and $f_r$ are both members of $S_{\sigma}(f_p)$, there exists a sequence of facility agents $F = (f_q, f_{i_1}, f_{i_2}, \dots, f_r)$ in which any two adjacent facility agents share a client that puts positive weight on both of them. By \Cref{lemma:shared-client-equal-load-balancing}, each pair of neighbors in $F$ has identical loads. Thus, $\ell_q(\mathbf{s},\sigma) = \ell_r(\mathbf{s},\sigma)$. \end{proof} \noindent The next lemma follows from \Cref{def:shared-client-set}: \begin{lemma} \label{lemma:shared-set-whole-client} For a facility placement $\mathbf{s}$, in a client equilibrium $\sigma$, for every client $v_i$ and facility agent $f_p$ with $p \in P_i(\mathbf{s}, \sigma)$, we have that for every facility agent $f_r \notin S_{\sigma}(f_p)$ it holds that $r \notin P_i(\mathbf{s}, \sigma)$. \end{lemma} \noindent Additionally, we show that each facility agent's load can only take a limited number of values. \begin{lemma} \label{lemma:limited-values-load-balancing} For a facility placement profile $\mathbf{s}$, in a client equilibrium $\sigma$ a facility agent's load can only take a value of the form $\frac{x}{y}$ for $x \leq w_\mathbf{s}(\mathcal{F})$ and $y \leq k$ with $x, y \in \mathbb{N}$. \end{lemma} \begin{proof} If a client is shared between two facilities, these two facilities must, by \Cref{lemma:shared-client-equal-load-balancing}, have the same load. We consider an arbitrary facility agent $f_j$ and her shared client set $S_{\sigma}(f_j)$.
All facility agents in $S_{\sigma}(f_j)$ have the same load by \Cref{lemma:shared-set-equal-load} and all clients which have weight on a facility agent in $S_{\sigma}(f_j)$ have their complete weight inside $S_{\sigma}(f_j)$ by \Cref{lemma:shared-set-whole-client}. Therefore, the sum of loads of the facility agents in $S_{\sigma}(f_j)$ must be an integer $i \leq w_\mathbf{s}(\mathcal{F})$. Thus, the load of $f_j$ is $\frac{i}{|S_{\sigma}(f_j)|}$. Since $i \leq w_\mathbf{s}(\mathcal{F})$ (the sum of the client weights) and $|S_\sigma(f_j)| \leq k$ (the number of facility agents) with $i, |S_\sigma(f_j)| \in \mathbb{N}$, the lemma follows. \end{proof} \begin{definition} For a facility placement profile $\mathbf{s}$, a set of facility agents $\emptyset \subset M \subseteq \mathcal{F}$ is a \emph{minimum neighborhood set} (MNS) if for all $\emptyset \subset T \subseteq \mathcal{F}$ it holds that $\frac{w(A_\mathbf{s}(M))}{|M|} \leq \frac{w(A_\mathbf{s}(T))}{|T|}$. We define the \emph{minimum neighborhood ratio} (MNR) as $\rho_\mathbf{s} := \frac{w(A_\mathbf{s}(M))}{|M|}$, where $M$ is a \emph{MNS}. \end{definition} \noindent We show that a \emph{MNS} receives the entire weight of all clients within its range and this weight is equally distributed. \begin{lemma} \label{lemma:minimum-neighborhood-set} For a facility placement profile $\mathbf{s}$, in a client equilibrium $\sigma$, each facility $f_j \in M$ of a minimum neighborhood set $M$ has a load of exactly $\ell_j(\mathbf{s},\sigma)=\rho_\mathbf{s}$. \end{lemma} \begin{proof} Let $M$ be a MNS and $\sigma$ be an arbitrary client equilibrium. Let $T = \argmin_{f_j \in \mathcal{F}} {\left(\ell_j(\mathbf{s},\sigma)\right)}$ be the set of facility agents who share the lowest load in $\sigma$. Let $\ell_T$ be the load of each facility agent in $T$, hence for each $f_j \in T$ we have $\ell_j(\mathbf{s},\sigma) = \ell_T$. Assume for the sake of contradiction that $\ell_T < \frac{w(A_\mathbf{s}(M))}{|M|}$. Since $M$ is a MNS, we have $$\frac{w(A_\mathbf{s}(T))}{|T|} \geq \frac{w(A_\mathbf{s}(M))}{|M|}\text.$$ Thus, \begin{align*} \sum_{f_j\in T}{\ell_j(\mathbf{s},\sigma)} = |T| \cdot \ell_T & < |T| \frac{w(A_\mathbf{s}(M))}{|M|} \\ & \leq |T| \frac{w(A_\mathbf{s}(T))}{|T|} = w(A_\mathbf{s}(T))\text. \end{align*} Hence, there is at least one client agent $v_i$ in the range of at least one facility agent $f_a \in T$ who does not put her complete weight on the facility agents in $T$. Therefore, there is a facility agent $f_b \notin T$ with $\ell_b(\mathbf{s},\sigma) > \ell_T$ and $\sigma(\mathbf{s},v_i)_b > 0$. However, since $\ell_b(\mathbf{s},\sigma) > \ell_T$, $v_i$ would prefer to move weight away from $f_b$ to $f_a$. Thus, we arrive at a contradiction and in all client equilibria we have for each facility agent $f_j \in \mathcal{F}$ that $\ell_j(\mathbf{s},\sigma) \geq \frac{w(A_\mathbf{s}(M))}{|M|}$. The facility agents in $M$ only have access to the clients in $A_\mathbf{s}(M)$. Thus, if for any facility agent $f_c \in M$ the load is $\ell_c(\mathbf{s},\sigma) > \frac{w(A_\mathbf{s}(M))}{|M|}$, there must be another facility agent $f_d \in M$ where $\ell_d(\mathbf{s},\sigma) < \frac{w(A_\mathbf{s}(M))}{|M|}$ holds. Since this is not possible, we get for each facility agent $f_j \in M$ that $\ell_j(\mathbf{s},\sigma) = \frac{w(A_\mathbf{s}(M))}{|M|}$.
\end{proof} \subsection{Facility Loads in Polynomial Time} \label{sec:poly-alg-load-balancing} We present a polynomial-time combinatorial algorithm to compute the loads of the facility agents in a client equilibrium for a given facility placement profile $\mathbf{s}$. As each facility agent only has $n$ possible strategies, this implies that the best responses of facility agents are computable in polynomial time. Algorithm~\ref{alg:load-balancing-utilities} iteratively determines a MNS $M$, assigns to each facility in $M$ the MNR and removes the facilities and all client agents in their range from the instance. See \Cref{fig:load-balancing-algorithm} for an example of a run of the algorithm. \begin{algorithm} \caption{computeUtilities($H=(V,E,w), \mathcal{F}$, $\mathbf{s}$)} \label{alg:load-balancing-utilities} \lIf{$\mathcal{F} = \emptyset$}{\KwRet} $M \gets$ computeMNS($H, \mathcal{F}, \mathbf{s}$)\; \For{$f_j \in M$}{ $\ell_j(\mathbf{s},\sigma) \gets \frac{w(A_\mathbf{s}(M))}{|M|}$\; } $H' \gets (V,E,w')$ \textbf{with} $ w'(v_i)=0$ \textbf{if} $v_i \in A_\mathbf{s}(M)$ \textbf{else} $w'(v_i)=w(v_i)$\; computeUtilities($H', \mathcal{F} \setminus M, \mathbf{s}$)\; \end{algorithm} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.8, transform shape] \begin{scope} [ every node/.style = {circle, thick, draw, inner sep = 0 pt, minimum size = 15 pt} ] \node (1) [label=above left:$f_1$] at (0, 0) {}; \node (2) at (1*1.03923, 0) {}; \node (3) at (2*1.03923, 0) {}; \node (4) at (3*1.03923, 0) {}; \node (5) at (4*1.03923, 0) {}; \node (6) [label=above right:$f_4$] at (5*1.03923, 0) {}; \node (7) [label=right:$f_2$] at (2.5*1.03923, 0.95*1.03923) {}; \node (8) [label=right:$f_3$] at (2.5*1.03923, -0.95*1.03923) {}; \node (9) at (5*1.03923, -0.95*1.03923) {}; \node (10) at (5*1.03923, 0.95*1.03923) {}; \end{scope} \begin{scope} [ every node/.style = {circle,fill,inner sep=2pt} ] \node (1a) at (1) {}; \node (6a) at (6) {}; \node (7a) at (7) {}; \node (8a) at (8) {}; \end{scope} \coordinate(box1a) at (1.4*1.03923,1.3*1.03923); \coordinate(box1b) at (-0.7*1.03923,-1.3*1.03923); \coordinate(box2a) at (4.4*1.03923,1.3*1.03923); \coordinate(box2b) at (1.6*1.03923,-1.3*1.03923); \coordinate(box3a) at (5.9*1.03923,1.3*1.03923); \coordinate(box3b) at (4.6*1.03923,-1.3*1.03923); \begin{scope} [ every node/.style = {inner sep = 0 pt, minimum size = 5 pt} ] \node [label=below left:$S_1$] at (box1a) {}; \node [label=below left:$S_2$] at (box2a) {}; \node [label=below left:$S_3$] at (box3a) {}; \end{scope} \begin{scope} [ every path/.style = {thick, {Latex[length=2mm]}-} ] \draw (1) edge (2) (6) edge (5) edge (9) edge (10) (7) edge (2) edge (3) edge (4) edge (5) (8) edge (2) edge (3) edge (4) edge (5) ; \end{scope} \begin{scope} [ every path/.style = {thick, dotted} ] \draw (box1a) rectangle (box1b); \draw (box2a) rectangle (box2b); \draw (box3a) rectangle (box3b); \end{scope} \end{tikzpicture} \caption[Example of algorithm computeUtilities]{ An instance of the load balancing \limtmpmodel{} with a facility placement profile marked by dots and $10$ clients with weight $1$ each. Algorithm~\ref{alg:load-balancing-utilities} successively finds and removes the minimum neighborhood sets $S_1=\{f_1\}$, $S_2=\{f_2, f_3\}$ and $S_3=\{f_4\}$.} \label{fig:load-balancing-algorithm} \end{figure} \noindent The key ingredient of Algorithm~\ref{alg:load-balancing-utilities} is the computation of a MNS in Algorithm~\ref{alg:minimum-neighborhood-set}. 
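\noindent To make the recursive structure of Algorithm~\ref{alg:load-balancing-utilities} concrete, the following Python sketch computes the equilibrium loads by repeatedly extracting a minimum neighborhood set. It is only an illustration, not the implementation used in this paper: client shopping ranges are abstracted into a list \texttt{ranges}, where \texttt{ranges[i]} holds the indices of the facilities in client $v_i$'s shopping range, and the MNS is found by brute force over all subsets of the remaining facilities, which is exponential in $k$. Algorithm~\ref{alg:minimum-neighborhood-set} below replaces exactly this brute-force step by a polynomial-time maximum-flow computation.
\begin{verbatim}
from fractions import Fraction
from itertools import combinations

def compute_loads(w, ranges, k):
    """Equilibrium loads of facilities 0..k-1 (brute-force MNS extraction)."""
    loads = [Fraction(0)] * k
    weight = list(w)              # remaining (not yet removed) client weights
    remaining = set(range(k))     # facilities not yet assigned a load
    while remaining:
        best_set, best_ratio = None, None
        for size in range(1, len(remaining) + 1):
            for T in combinations(sorted(remaining), size):
                covered = [i for i, R in enumerate(ranges)
                           if any(j in T for j in R)]
                ratio = Fraction(sum(weight[i] for i in covered), len(T))
                if best_ratio is None or ratio < best_ratio:
                    best_set, best_ratio = set(T), ratio
        for j in best_set:        # every MNS member gets the MNR as its load
            loads[j] = best_ratio
        for i, R in enumerate(ranges):
            if any(j in best_set for j in R):
                weight[i] = 0     # remove all clients in range of the MNS
        remaining -= best_set     # remove the MNS facilities
    return loads

# Small check: unit-weight clients v1, v2, v3; facility 0 reachable from v1, v2
# and facility 1 reachable from v2, v3.
print(compute_loads([1, 1, 1], [[0], [0, 1], [1]], 2))
# -> [Fraction(3, 2), Fraction(3, 2)]
\end{verbatim}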
\begin{algorithm} \caption{computeMNS($H=(V,E,w), \mathcal{F}, \mathbf{s}$)} \label{alg:minimum-neighborhood-set} construct directed graph $G=(V', E_{st} \cup E_\text{Range})$\; $V' \gets \{s, t\} \cup V \cup \mathcal{F}$\; $E_{st} \gets \{(s, v_i, w(v_i)) \mid v_i \in V\} \cup \{(f_j, t, 0) \mid f_j \in \mathcal{F}\}$\; $E_\text{Range} \gets \{(v_i, f_j, w(v_i)) \mid v_i \in V, f_j \in N_\mathbf{s}(v_i)\}$\; possibleUtilities $\gets$ sorted$(\{x/y \mid x, y \in \mathbb{N}, 0 \leq x \leq w_\mathbf{s}(\mathcal{F}), 1 \leq y \leq k\})$\; \For{binary search over $i \in$ possibleUtilities}{ $\forall{f_j \in \mathcal{F}}:$ capacity$((f_j, t)) \gets i$\; $h \gets$ maximum $s$-$t$-flow in $G$\; \leIf{$\text{value}(h) = i\cdot k$}{$i$ too small}{$i$ too large} } $T \gets \emptyset, \rho \gets$ highest $i \in$ possibleUtilities below threshold\; \For{$f_j \in \mathcal{F}$} { $\forall{f_p \in \mathcal{F}}:$ capacity$((f_p, t)) \gets \rho$\; capacity$((f_j, t)) \gets \infty$\; start with flow from binary search for $i=\rho$\; \If{$\nexists$ augmenting path in $G$}{ $T \gets T \cup \{f_j\}$\; } } \KwRet{T}\; \end{algorithm} Here, we first identify the MNR by a reduction to a maximum flow problem. To this end, we construct a graph in which demand flows from a common source vertex $s$ through the clients to the facility agents in their respective shopping ranges and then to a common sink $t$. See \Cref{fig:smallest-neighborhood-flow-graph} for an example of such a reduction. By using binary search, we find the highest capacity value of the edges from the facility agents to the sink such that the flow can fully utilize all these edges. This capacity value is the value of the MNR~$\rho_\mathbf{s}$. Note that by \Cref{lemma:limited-values-load-balancing} the MNR can only attain a limited number of values. After determining the MNR, we identify the facility agents belonging to a MNS $M$ by individually increasing the capacity of the edge to the sink $t$ for each facility agent. A facility agent belongs to $M$ only if this does not increase the maximum flow. By reusing the flow computed for $\rho_\mathbf{s}$, a single search for an augmenting path with the increased capacity suffices to determine whether the flow value increases.
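\noindent The following Python sketch illustrates the binary-search part of this reduction. It is a simplified stand-in rather than the implementation described above: it assumes integral client weights, represents the shopping ranges by a list \texttt{ranges} as in the previous sketch, and uses the off-the-shelf maximum-flow routine of the \texttt{networkx} library instead of Orlin's algorithm. It returns the MNR $\rho_\mathbf{s}$ as the largest candidate value from \Cref{lemma:limited-values-load-balancing} for which all $k$ facility-to-sink edges of capacity $\rho$ can be saturated simultaneously.
\begin{verbatim}
from fractions import Fraction
import networkx as nx

def minimum_neighborhood_ratio(w, ranges, k):
    # candidate values x/y from the lemma on limited load values
    total = sum(w[i] for i, R in enumerate(ranges) if R)
    candidates = sorted({Fraction(x, y) for x in range(total + 1)
                         for y in range(1, k + 1)})

    def saturates(rho):
        # build the flow network: s -> clients -> facilities in range -> t
        G = nx.DiGraph()
        for i, R in enumerate(ranges):
            G.add_edge("s", ("v", i), capacity=float(w[i]))
            for j in R:
                G.add_edge(("v", i), ("f", j), capacity=float(w[i]))
        for j in range(k):
            G.add_edge(("f", j), "t", capacity=float(rho))
        value, _ = nx.maximum_flow(G, "s", "t")
        return value >= float(rho) * k - 1e-9   # all sink edges saturated?

    lo, hi = 0, len(candidates) - 1     # candidates[0] == 0 always saturates
    while lo < hi:                      # largest candidate that saturates
        mid = (lo + hi + 1) // 2
        if saturates(candidates[mid]):
            lo = mid
        else:
            hi = mid - 1
    return candidates[lo]

# Same small instance as before: the MNS is {f_0, f_1} with MNR 3/2.
print(minimum_neighborhood_ratio([1, 1, 1], [[0], [0, 1], [1]], 2))
# -> Fraction(3, 2)
\end{verbatim}
The facility agents of a MNS can then be identified as in the second loop of Algorithm~\ref{alg:minimum-neighborhood-set}, by raising one sink-edge capacity at a time and testing whether the maximum flow value grows beyond $k\rho_\mathbf{s}$.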
\begin{figure}[h] \Large \centering \begin{tikzpicture}[scale= 0.75, transform shape] \begin{scope} [ every node/.style = {circle, thick, draw, inner sep = 0 pt, minimum size = 15 pt} ] \node (v1) [label=above:$v_1$] [label=below:$f_p$] at (-4.5*1.2, 0) {}; \node (v2) [label=above:$v_2$] at (-3*1.2, 0) {}; \node (v3) [label=above:$v_3$] [label=below:$f_q$] at (-1.5*1.2, 0) {}; \node (s) at (0, 0) {$s$}; \node (v1a) at (1.5*1.2, 0.9*1.2) {$v_1$}; \node (v2a) at (1.5*1.2, 0) {$v_2$}; \node (v3a) at (1.5*1.2, -0.9*1.2) {$v_3$}; \node (p) at (3*1.2, 0.45*1.2) {$f_p$}; \node (q) at (3*1.2, -0.45*1.2) {$f_q$}; \node (t) at (4.5*1.2, 0) {$t$}; \end{scope} \begin{scope} [ every node/.style = {circle,fill,inner sep=2pt} ] \node (1a) at (v1) {}; \node (3a) at (v3) {}; \end{scope} \begin{scope} [ every node/.style = {above, midway, fill=none, draw=none}, every path/.style = {thick, -{Latex[length=2mm]}} ] \draw (s) edge node[xshift=-.25cm] {$w(v_1)$} (v1a) (s) edge node[xshift=.25cm] {$w(v_2)$} (v2a) (s) edge node[below, xshift=-.35cm, yshift=.05cm] {$w(v_3)$} (v3a) (v1a) edge node[xshift=.2cm, yshift=-.1cm] {$w(v_1)$} (p) (v2a) edge node[xshift=-.3cm, yshift=-.05cm] {$w(v_2)$} (p) (v3a) edge node[xshift=-.3cm, yshift=-.05cm] {$w(v_3)$} (q) (p) edge node {$i$} (t) (q) edge node {$i$} (t) (v2) edge[bend right=15] (v1) (v1) edge[bend right=15] (v2) (v3) edge (v2) ; \end{scope} \end{tikzpicture} \caption[Example of construction for computing nodesWithSmallestNeighborhoodPerFacility]{Left: An instance of the load balancing \limtmpmodel{} with the graph~$H$ and the facility placement profile $\mathbf{s}$ marked by dots. Right: The maximum flow instance constructed by Algorithm \ref{alg:minimum-neighborhood-set}. } \label{fig:smallest-neighborhood-flow-graph} \end{figure} \noindent We first prove the correctness of Algorithm \ref{alg:minimum-neighborhood-set}: \begin{theorem} \label{lemma:smallest-neighborhood-algorithm-correctness} For an instance of the load balancing \limtmpmodel{}, a facility placement profile $\mathbf{s}$, Algorithm \ref{alg:minimum-neighborhood-set} computes a MNS. \end{theorem} \begin{proof} We show that the MNR $\rho$ computed by the algorithm is correct by proving that $\rho$ is a lower and upper bound for $\rho_\mathbf{s}$. We show that for each set of facility agents $T$, we get $\rho \leq \frac{w(A_\mathbf{s}(T))}{|T|}$. To this end, consider the maximum flow for $i=\rho$. The value of this flow must be $\text{value}(h)=k \rho$, since $\rho$ is below the threshold found by the binary search. As the total capacity of the edges leaving the source $s$ towards vertices $v_i \in A_\mathbf{s}(T)$ is upper bounded by $w(A_\mathbf{s}(T))$ and every vertex $f_p$ with $f_p \in T$ is only reachable via vertices $v_i \in A_\mathbf{s}(T)$, the total inflow to the vertices $f_p \in T$ is $w(A_\mathbf{s}(T))$. Furthermore, the capacity of each edge from a facility vertex to the sink vertex $t$ is exactly $\rho$, hence each of these edges carries a flow of exactly $\rho$. Thus, we get $|T| \rho \le w(A_\mathbf{s}(T))$ for every set of facility agents $T$. For the upper bound, we show that there is a set $T$ for which $\rho \geq \frac{w(A_\mathbf{s}(T))}{|T|}$. We consider the flow at $i=\rho+\delta$, the value immediately above $\rho$ in \emph{possibleUtilities}. We assume that for each set $T$, $\rho+\delta \leq \frac{w(A_\mathbf{s}(T))}{|T|}$. By \Cref{lemma:minimum-neighborhood-set}, there must be a weight distribution~$\sigma$, such that every facility agent receives $\rho+\delta$ load. 
Thus, setting the flow of every edge $(v_i,f_j)$ in $h$ to $\sigma(\mathbf{s},v_i)_j$ for each $v_i \in V, f_j \in \mathcal{F}$ results in a flow of $(\rho+\delta)k$. This leads to $\rho+\delta$ being below the threshold and, hence, we have a contradiction. Therefore, there must be a set of facility agents $T$ with $\rho+\delta > \frac{w(A_\mathbf{s}(T))}{|T|}$. By \Cref{lemma:limited-values-load-balancing}, there is no value strictly between $\rho$ and $\rho+\delta$ that $\frac{w(A_\mathbf{s}(T))}{|T|}$ can attain. Thus, there must be a set~$T$ with $\rho \geq \frac{w(A_\mathbf{s}(T))}{|T|}$. It remains to show that the set of facility agents $M$ computed by the algorithm is indeed a MNS. By the feasibility of the total flow of $k\cdot \rho$ for the instance with capacity bounds of $\rho$, we have for every set of facility agents~$T$ that $\frac{w(A_\mathbf{s}(T))}{|T|}\ge \rho$. For every $f_j \notin M$, there exists an augmenting path where the edge $(f_j,t)$ has capacity~$\infty$. Hence, there is a total flow strictly larger than $k\cdot \rho$ with flow of exactly $\rho$ through all $f_q \ne f_j$. As the total flow entering any set of facility agents $T$ is bounded by $w(A_\mathbf{s}(T))$, we get for every $T$ with $f_j \in T$ that $\frac{w(A_\mathbf{s}(T))}{|T|} > \rho$. Therefore, $f_j$ does not belong to any MNS. For every $f_j \in M$, the absence of an augmenting path certifies that the flow is constrained by the capacities representing the clients' spending capacities. Hence, $\frac{w(A_\mathbf{s}(M))}{|M|} = \rho$, i.e., the returned set $M$ is indeed a MNS. \end{proof} \noindent With that, we bound the runtime of Algorithm \ref{alg:minimum-neighborhood-set}. \begin{lemma} \label{lemma:smallest-neighborhood-algorithm-runtime} Algorithm \ref{alg:minimum-neighborhood-set} runs in $\mathcal{O}(\log(w_\mathbf{s}(\mathcal{F})k)nk(n+k))$. \end{lemma} \begin{proof} Since $|\text{possibleUtilities}| \leq w_\mathbf{s}(\mathcal{F})k$, the binary search needs $\mathcal{O}(\log(w_\mathbf{s}(\mathcal{F})k))$ steps. In each iteration, the dominant part is the computation of the flow, since all other operations are executable in constant time or are linear iterations through $G$. Therefore, the runtime of the binary search is the runtime of $\mathcal{O}(\log(w_\mathbf{s}(\mathcal{F})k))$ flow computations in $G$. For the loop, we need $k$ breadth-first searches to determine the existence of augmenting paths. The graph $G$ we create has $|V'| = n+k+2$ vertices and at most $|E'| \leq n+k+nk$ edges. These values are not changed throughout the algorithm. Thus, we can use Orlin's algorithm~\cite{orlin-algo} to compute each maximum flow in $\mathcal{O}(nk(n+k))$, which dominates the complexity of the loop and its augmenting path searches. Therefore, the algorithm runs in $\mathcal{O}(\log(w_\mathbf{s}(\mathcal{F})k)nk(n+k))$. \end{proof} \noindent We return to Algorithm \ref{alg:load-balancing-utilities} and prove correctness and runtime: \begin{theorem} Given a facility placement profile $\mathbf{s}$, Algorithm~\ref{alg:load-balancing-utilities} computes the agent loads for an instance of the load balancing \limtmpmodel{} in $\mathcal{O}(\log(w_\mathbf{s}(\mathcal{F})k)nk^2(n+k))$. \end{theorem} \begin{proof} \emph{Correctness:} By \Cref{lemma:minimum-neighborhood-set} the loads determined for the facility agents in the MNS $M$ are correct for the given instance. Also by \Cref{lemma:minimum-neighborhood-set}, the client equilibria of $\mathcal{F} \setminus M$ are independent of the facility agents in $M$ and the clients in $A_\mathbf{s}(M)$.
Therefore, we can remove $M$, set the weight of each client $v_i \in A_\mathbf{s}(M)$ to $w(v_i)=0$ and proceed recursively. \emph{Runtime:} The recursive function is called at most $k$ times because the instance size is decreased by at least one facility agent in each iteration. Apart from the call to Algorithm \ref{alg:minimum-neighborhood-set}, all computations can be done in constant or linear time. Therefore, the algorithm runs in $\mathcal{O}(\log(w_\mathbf{s}(\mathcal{F})k)nk^2(n+k))$. \end{proof} \noindent Algorithm \ref{alg:minimum-neighborhood-set} implicitly computes a client equilibrium. \begin{corollary} \label{cor:construct-client-eq} A client equilibrium can be constructed by using the flow values on the edges between a client and the facility agents of the MNSs computed during the binary search in \Cref{alg:minimum-neighborhood-set} as the corresponding client weight distribution. \end{corollary} \begin{proof} Let $\mathbf{s}$ be a facility placement profile and for each facility $f_j$ let $h_j$ be the maximum $s$-$t$-flow found by the binary search during the run of \Cref{alg:minimum-neighborhood-set} in which $f_j$ is determined to be part of a MNS. We construct a client weight distribution $\sigma$ in the following way: For each pair $v_i, f_j$, we set $\sigma(\mathbf{s}, v_i)_j = h_j(v_i, f_j)$, i.e., equal to the flow between $v_i$ and $f_j$ in $h_j$. We now show that $\sigma$ is indeed a client equilibrium: Let~$v_i$ be an arbitrary client. The algorithm removes her from the instance (i.e., sets her weight to 0) in the first round of \Cref{alg:load-balancing-utilities} in which some facility $f_p$ of the MNS $M$ found in that round lies in her shopping range. Thus, all facilities $f_j$ with $\sigma(\mathbf{s}, v_i)_j > 0$ are part of $M$. By the limit on the outgoing capacity of these facilities in the binary search in \Cref{alg:minimum-neighborhood-set}, all facilities in $M$ have equal load in $\sigma$. Since the MNR is nondecreasing throughout the run of the algorithm, all facilities that are part of a MNS found in a later iteration have an equal or higher load in $\sigma$ than the facilities in $M$. Therefore, client $v_i$ cannot improve by moving her weight. \end{proof} \subsection{Existence of Subgame Perfect Equilibria} \label{sec:load-eq} We show that the load balancing \limtmpmodel{} always possesses a SPE using a lexicographical potential function. For that, we show that when a facility agent $f_p$ changes her strategy, no other facility agent $f_q$'s load decreases below $f_p$'s new load. \begin{lemma} \label{lemma:no-lower-load-balancing} Let $\mathbf{s}$ be a facility placement profile and $f_p$ a facility agent with an improving move $s'_p$ such that $\ell_p((s'_p,s_{-p}),\sigma')>\ell_p(\mathbf{s},\sigma)$, where $\sigma, \sigma'$ are client equilibria. For every facility agent $f_q$ with $\ell_q((s'_p,s_{-p}),\sigma') <\ell_q(\mathbf{s},\sigma)$, we have that $\ell_q((s'_p,s_{-p}),\sigma') \ge \ell_p((s'_p,s_{-p}),\sigma')$. \end{lemma} \begin{proof} Let $Q$ be the set of facility agents $f_q$ with $\ell_q((s'_p,s_{-p}),\sigma') <\ell_q(\mathbf{s},\sigma)$. Let $Q_\text{min} = \argmin_{q \in Q}\{\ell_q((s'_p,s_{-p}),\sigma')\}$. Now, we distinguish two cases for $f_p$: \emph{Case 1: } $f_p \in Q_\text{min}$. The statement is trivially true. \emph{Case 2: } $f_p \notin Q_\text{min}$. All facility agents in $Q_\text{min}$ have the same clients in their ranges as before.
Thus, there must be a client $v_i$ who has decreased her weight on a facility agent $f_r \in Q_\text{min}$ and increased her weight on a facility agent $f_s \notin Q_\text{min}$. Hence, we have $\ell_s((s'_p,s_{-p}),\sigma') \leq \ell_r(\mathbf{s},\sigma)$ as otherwise the client $v_i$ would not put weight on $f_s$. We assume $f_p \not= f_s$. As $\sigma$ is a client equilibrium, we have that $\ell_r(\mathbf{s},\sigma) \leq \ell_s(\mathbf{s},\sigma)$. This implies $\ell_s((s'_p,s_{-p}),\sigma') <\ell_s(\mathbf{s},\sigma)$, which contradicts $f_s \notin Q_\text{min}$. Therefore, $f_p = f_s$ and $\ell_r((s'_p,s_{-p}),\sigma') \geq \ell_p((s'_p,s_{-p}),\sigma')$, which means that for each facility agent $f_q \in Q$, it holds that $\ell_q((s'_p,s_{-p}),\sigma') \geq \ell_p((s'_p,s_{-p}),\sigma')$. \end{proof} \noindent With this lemma, we prove the FIP and, hence, existence of a SPE by a lexicographic potential function argument. \begin{theorem} The load balancing \limtmpmodel{} has the FIP. \end{theorem} \begin{proof} Let $\Phi(\mathbf{s}) \in \mathbb{R}^k$ be the vector that lists the loads $\ell_1(\mathbf{s},\sigma), \ell_2(\mathbf{s},\sigma), \dots, \ell_k(\mathbf{s},\sigma)$ in increasing order; by \Cref{theo:existence}, $\Phi$ is well defined since all client equilibria induce the same loads. Let $\mathbf{s}$ be a facility placement profile and $f_p$ a facility agent with an improving move $s'_p$ such that $\ell_p((s'_p,s_{-p}),\sigma')>\ell_p(\mathbf{s},\sigma)$, where $\sigma, \sigma'$ are client equilibria. We show that $\Phi(s'_p, s_{-p}) >_\text{lex} \Phi(\mathbf{s})$. Let $\Phi(\mathbf{s})$ be of the form $\Phi(\mathbf{s})=(\phi_1,\ldots,\phi_{\alpha}, \ell_p(\mathbf{s},\sigma), \phi_{\alpha+1},\ldots,\phi_{\beta},\phi_{\beta+1},\ldots,\phi_{k-1})$, for some $\alpha\le \beta \le k-1$, such that for every $1\le j \le \beta$: $\phi_j < \ell_p((s'_p,s_{-p}),\sigma')$ and for every $j \ge \beta+1$: $\phi_j \ge \ell_p((s'_p,s_{-p}),\sigma')$. By \Cref{lemma:no-lower-load-balancing}, we have for all facility agents $f_q$ with a load $\ell_q(\mathbf{s},\sigma) \in \{\phi_1,\ldots,\phi_\beta\}$ that their loads did not decrease, and for agents $f_q$ with $\ell_q(\mathbf{s},\sigma) \in \{\phi_{\beta+1},\ldots,\phi_{k-1}\}$ we have $\ell_q((s'_p,s_{-p}),\sigma') \ge \ell_p((s'_p,s_{-p}),\sigma')$. With the improvement of $f_p$, $\Phi(s'_p,s_{-p}) >_{\text{lex}} \Phi(\mathbf{s})$ holds. By \Cref{lemma:limited-values-load-balancing}, there is a finite set of values that the loads can attain, thus $\Phi$ is an ordinal potential function and the game has the FIP. \end{proof} \section{Comparison with Utility Systems} A \emph{utility system} (US)~\cite{vetta-utility-system} is a game in which agents gain utility by selecting a set of actions, which they choose from a collection of subsets of a groundset available to them. Utility is assigned to the agents by a function of the set of selected actions of all agents. \begin{definition}[Utility Systems (US)~\cite{vetta-utility-system}] A utility system consists of a set of $k$ agents, a groundset $V_p$ for each agent $p$, a strategy set of feasible action sets $\mathcal{A}_p \subseteq 2^{V_p}$, a social welfare function $\gamma : 2^{V^*} \to \mathbb{R}$ and a utility function $\alpha_p : 2^{V^*} \to \mathbb{R}$ for each player~$p$, where $V^*=\cup_{p \in P}{V_p}$. For a strategy vector $(a_1, \dots, a_k)$, let $A=a_1 \cup \dots \cup a_k$, and let $A \oplus a_p'$ denote the set of actions obtained if player $p$ changes her action set from $a_p$ to $a_p'$.
A game is a utility system if $\alpha_p(A) \geq \gamma(A) - \gamma(A \oplus \emptyset)$, i.e., every player's utility is at least the loss in social welfare that would be caused by removing her actions. The utility system is \emph{basic} if $\alpha_p(A) = \gamma(A) - \gamma(A \oplus \emptyset)$ and it is \emph{valid} if, in addition, $\sum_{p \in P}{\alpha_p(A)} \leq \gamma(A)$. \end{definition} \noindent We show that the load balancing \limtmpmodel{} is not a basic but a valid US, hence we can apply the corresponding bounds for the PoA but not the existence results for stable states. \begin{lemma} The load balancing \limtmpmodel{} is a US. \end{lemma} \begin{proof} Each facility agent $f_p$ corresponds to a player $p$ in the US with the groundset $V_p = \{v^p \mid v \in V\}$ and the action set $\mathcal{A}_p = \{\{v^p\} \cup \{w^p \mid (v,w) \in E\} \mid v \in V\}$. We define $\gamma(X) = \sum_{v \in V \mid \exists p : v^p \in X}{w(v)}$, which corresponds to the sum of weights of the covered clients, and we let $\alpha_p(A)$ be the load of $f_p$, which can be expressed as a function of the sets of clients in range of each facility. To show the US condition $\alpha_p(A) \geq \gamma(A) - \gamma(A \oplus \emptyset)$, suppose the social welfare decreases by a value of $x$ when player~$p$ is removed from the strategy profile, i.e., $\gamma(A) - \gamma(A \oplus \emptyset) = x$. Then clients with a total weight of $x$ were covered only by $p$. Thus, player $p$ must receive at least $x$ utility, and the condition is fulfilled. \end{proof} \noindent As $\gamma$ merely depends on the covered clients, for every $X, Y \subset V^*$ with $X \subseteq Y$ and any $v^p \in V^*\setminus Y$ we have that $\gamma(X \cup \{v^p\}) -\gamma(X) \ge \gamma(Y \cup \{v^p\}) -\gamma(Y)$. Hence, the following lemma is immediate. \begin{lemma} The function $\gamma$ is non-decreasing and submodular. \end{lemma} \iffalse Since adding an action does not decrease the number of clients in the range of any player, $\gamma$ is non-decreasing. Hence, it remains to show, that $\gamma$ is submodular as well. Let $X, Y \subseteq V^*$ be two arbitrary sets of action sets. To prove submodularity, we need to show that $\gamma(X) + \gamma(Y) \geq \gamma(X\cap Y) + \gamma(X \cup Y)$ holds. Note, that in $\gamma$ the parts contributed by each single client to the function are independent of each other. Thus, we denote by $\gamma_v(X)$ the part of $\gamma$ contributed by client $v$, which is equal to $1$ if at least one action is present for $v$ and $0$ otherwise. Therefore, $\gamma(X) = \sum_{v\in V}{\gamma_v(X)}$. Thus, it is sufficient to show submodularity for $\gamma_v$ of an arbitrary client $v$. We distinguish between the three following cases of the representation of $v$ in $X$ and $Y$: \emph{Case 1:} Neither $X$ nor $Y$ contain an act for client $v$. Then since all terms are equal to $0$ it holds that $\gamma_v(X) + \gamma_v(Y) \geq \gamma_v(X\cap Y) + \gamma_v(X \cup Y)$. \emph{Case 2:} Either $X$ or $Y$ contain at least one action for client $v$. Wlog, let $X$ contain an action for client $v$. Then, it holds that $\gamma_v(X) + \gamma_v(Y) = 1 \geq \gamma_v(X\cap Y) + \gamma_v(X \cup Y) = 1$. \emph{Case 3:} Both $X$ and $Y$ contain at least one act for client $v$. Then it holds that $\gamma_v(X) + \gamma_v(Y) = 2$ and $\gamma_v(X\cap Y) + \gamma_v(X \cup Y) \leq 2$. Since the inequality holds for each client $v \in V$, it holds for the entire function $\gamma$, and thus $\gamma$ is submodular. \fi \noindent We now show that the load balancing \limtmpmodel{} is a valid but not a basic US.
\begin{theorem} The load balancing \limtmpmodel{} is a valid, but not a basic US. \end{theorem} \begin{proof} \iffalse We start by proving that the load balancing \limtmpmodel{} is not a basic US via an example. \begin{figure} \centering \begin{tikzpicture} \begin{scope} [ every node/.style = {circle, thick, draw, inner sep = 0 pt, minimum size = 15 pt} ] \node (1) [label=below left:$v_1$][label=above right:$f_p$] at (0, 0) {}; \node (2) [label=below left:$v_2$][label=above right:$f_q$] at (1.2, 0) {}; \end{scope} \begin{scope} [ every node/.style = {circle,fill,inner sep=2pt} ] \node at (1) {}; \node at (2) {}; \end{scope} \begin{scope} [ every path/.style = {thick, -} ] \draw (1) edge (2); \end{scope} \end{tikzpicture} \caption{An example that the load balancing \limtmpmodel{} is not a basic US. } \label{fig:not-basic} \end{figure} Consider \Cref{fig:not-basic}. \fi The following example proves that the load balancing \limtmpmodel{} is not a basic US . Let $H = (V,E,w)$ with $V = (v_1,v_2)$, $w(v_1) = w(v_2) = 1$ and $E = \{(v_1,v_2),(v_2,v_1)\}$. Furthermore, we have two facility players $f_p$ and $f_q$ with $\mathbf{s} = (v_1,v_2)$. Removing player $f_p$ does not change the weighted participation rate $W(\mathbf{s})$ since all clients are still covered. However, the utility of the removed facility player $f_p$ is equal to $1$. Hence, equality in the utility system condition does not hold and the US is not basic. To show that the load balancing \limtmpmodel{} is a valid US, note that each client~$v$ who is in the attraction range of at least one facility player distributes her total weight $w(v)$ among the players. All other clients are uncovered and hence, their distributed weight is equal to $0$. Thus, the total weight $\sum_{v_i \in C(\mathbf{s})}w(v_i)$ distributed by clients, which is equal to the sum of the facility players' loads, is equal to the value of the welfare function $W(\mathbf{s})$. \end{proof} \noindent We are now able to apply the PoA bound of \cite{vetta-utility-system} to our model. \begin{corollary} The PoA of the load balancing \limtmpmodel{} is at most~2. \end{corollary} \section{Arbitrary Client Behavior} In the following, we investigate the quality of stable states of the \limtmpmodel{} with arbitrary client behavior, i.e., the client costs are arbitrarily defined, and provide an upper and lower bound for the PoA as well as a lower bound for the PoS. Additionally, we show that computing the social optimum is NP-hard. \begin{theorem} \label{thm:lar-poa-2} The PoA of the \limtmpmodel{} is at most $2$. \end{theorem} \begin{proof} Fix a \limtmpmodel{} with $k$ facility players. Let OPT be a facility placement profile that maximizes social welfare and let $(SPE,\sigma_{\text{SPE}})$ be a SPE. Let $C(\text{SPE})$ be the set of clients $v_i$ which are covered in SPE and $C(\text{OPT})$ be the set of clients~$v_i$ which are covered in OPT, respectively. Let UNCOV $= C(\text{OPT}) \setminus C(\text{SPE})$ be the set of clients which are covered in OPT but uncovered in SPE. Assume that $W(\text{OPT}) > 2 W(\text{SPE})$ and hence, $\sum_{v \in \text{UNCOV}}w(v) > W(\text{SPE})$. Then, there exists a facility player $f_p$ that receives in OPT more than $\frac{W(\text{SPE})}{k}$ load from the clients in UNCOV. Now consider a facility agent $f_q$ with load $\ell_q \left( \text{SPE},\sigma_{\text{SPE}} \right) \leq \frac{W(\text{SPE})}{k} $. 
By changing her strategy and selecting the position of facility agent $f_p$ in OPT, agent $f_q$ receives the weight of all clients in UNCOV which are covered by $f_p$ in OPT since they are currently uncovered and therefore, obtains more than $\frac{W(\text{SPE})}{k}$ load. As this contradicts the assumption of $\left( \text{SPE},\sigma_{\text{SPE}} \right)$ being a SPE, we have that $W(\text{OPT}) \le 2 W(\text{SPE})$. \end{proof} \noindent We contrast the upper bound of the PoA with a lower bound for the PoA and PoS. \begin{theorem} The PoA and PoS of the \limtmpmodel{} is at least $2-\frac1k$. \end{theorem} \begin{proof} We prove the statement by providing an example of an instance~\emph{I} which has a unique equilibrium. Let~$x \geq 4$, $x \in \mathbb{N}$. We construct a \limtmpmodel{} with $k$ facility players, a host~graph $H(V,E,w)$ with $V = \{v_1, \ldots , v_k,$ $v_{1,1}, \ldots, v_{1,x-1},$ $ v_{2,1},$ $\ldots, v_{k-1,x-1}, v_{k,1}, \ldots, v_{k,k x}\}$, for all $v \in V$, $w(v) = 1$ and $E = \{(v_i, v_{i,j}) \mid i \in [1,k-1], j \in [1,x-1] \}\ \cup \{(v_k, v_{k,i}) \mid i \in [1,k x]\}\ \cup \{(v_{k,i}, v_{i,1}) \mid i \in [1,k-1]\}$. See \Cref{fig:poa}. \begin{figure} \centering \begin{tikzpicture}[scale= 0.8, transform shape] \begin{scope} [ every node/.style = {circle, thick, draw, inner sep = 0 pt, minimum size = 12 pt} ] \node (c) [label=10:$v_{k}$]at (0, -0.25*1.2) {}; \node (1) [label={[label distance=-2]above:$v_{k,1}$}] at (-4*1.2, -1.2) {}; \node (3) [label=left:$v_{k, k-1}$] at (0, -1.2) {}; \node (4) [label=left:$v_{k, k}$] at (1*1.2, -1.2) {}; \node (6) [label={[label distance=-2]above:$v_{k, kx}$}] at (2.5*1.2, -1.2) {}; \node (1a) [label=right:$v_{1,1}$]at (-4 *1.2, -1.8*1.2) {}; \node (1b) [label=right:$v_1$]at (-4 *1.2, -2.55*1.2) {}; \node (1c) [label={left:$v_{1,2}$}]at (-4 *1.2-0.6*1.03923, -2.6*1.2 -0.6*1.03923) {}; \node (1e) [label={right:$v_{1,x-1}$}]at (-4 *1.2+0.6*1.03923, -2.6*1.2 -0.6*1.03923) {}; \node (3a) [label=right:$v_{k-1,1}$]at (0, -1.8*1.2) {}; \node (3b) [label=right:$v_{k-1}$]at (0, -2.55*1.2) {}; \node (3c) [label={left:$v_{k-1,2}$}]at (0-0.6*1.03923, -2.6*1.2 -0.6*1.03923) {}; \node (3e) [label={right:$v_{k-1,x-1}$}]at (0+0.6*1.03923, -2.6*1.2 -0.6*1.03923) {}; \end{scope} \coordinate(box1a) at (-4.4*1.2,0.15*1.2); \coordinate(box1b) at (2.9*1.2,-1.3*1.2); \coordinate(box2a) at (-4.8*1.2-0.6*1.03923,-1.5*1.2); \coordinate(box2b) at (-2.9*1.2+0.6*1.03923,-3.4*1.2); \coordinate(box3a) at (-1.1*1.2-0.6*1.03923,-1.5*1.2); \coordinate(box3b) at (1.4*1.2+0.6*1.03923,-3.4*1.2); \begin{scope} [ every node/.style = {inner sep = 0 pt, minimum size = 5 pt} ] \node (2) at (-2*1.2, -1.2) {\dots}; \node (5) at (1.75*1.2, -1.2) {\dots}; \node (1d) at (-4*1.2, -2.6*1.2 -0.6*1.03923) {\dots}; \node (3d) at (0, -2.6*1.2 -0.6*1.03923) {\dots}; \node at (-2*1.2, -1.9*1.2) {\dots}; \node at (-2*1.2, -2.55*1.2) {\dots}; \node at (-2*1.2, -2.6*1.2 -0.6*1.03923) {\dots}; \node (b1) [label=below right:$S_k$] at (box1a) {}; \node (b2) [label=below right:$S_1$] at (box2a) {}; \node (b3) [label=below right:$S_{k-1}$] at (box3a) {}; \end{scope} \begin{scope} [ every path/.style = {thick, {Latex[length=2mm]}-} ] \draw (c) edge (1) edge (3) edge (4) edge (6) (1) edge (1a) (1b) edge (1a) edge (1c) edge (1e) (3) edge (3a) (3b) edge (3a) edge (3c) edge (3e) ; \end{scope} \begin{scope} [ every path/.style = {thick, dotted} ] \draw (box1a) rectangle (box1b); \draw (box2a) rectangle (box2b); \draw (box3a) rectangle (box3b); \end{scope} \end{tikzpicture} \caption{The host 
graph $H$ of an instance $I$ of the \limtmpmodel{} with arbitrary client behavior with a unique SPE.} \label{fig:poa} \end{figure} We note that $H$ consists of a large star $S_k$ with central vertex $v_k$, leaf vertices $(v_{k,1}, \ldots, v_{k,k x})$ and $k-1$ small stars $S_i$ for $i \in [1,k-1]$ with central vertices $v_i$ and leaf vertices $(v_{i,1}, \ldots, v_{i,x-1})$. Each star $S_i$ is connected to $S_k$ via an edge between a leaf vertex of $S_k$ and $S_i$, i.e., $(v_{k,i}, v_{i,1})$. If the $k$ facility players are placed on $\mathbf{s}_{\text{OPT}} = (v_1, \ldots, v_k)$, all clients are covered by exactly one facility. Hence, $W(OPT(H,k)) = |V| = kx + k +(k-1)(x-1)$. In any equilibrium, a facility $f_j$ for $j \in [1,k]$ must receive a load of at least $\frac{kx+1}{k} = x +\frac1k$ as otherwise switching to vertex $v_k$ with $kx+1$ adjacent vertices yields an improvement. However, any other vertex in $H$ has at most $x-1$ adjacent vertices, hence, every facility player gets a load of at most~$x$. Therefore, the unique $SPE$ is $\mathbf{s}_{\text{SPE}} = (v_k, \ldots, v_k)$ with $W(\mathbf{s}_\text{SPE}) = k x+1$ and $PoA = PoS = \frac{kx + k +(k-1)(x-1)}{k x+1} = \frac{(2k-1)x+1}{kx + 1}$. We get $\lim\limits_{x \rightarrow \infty} \left( \frac{(2k-1)x+1}{kx + 1}\right) = \frac{2k-1}{k} = 2 - \frac1k.$ \end{proof} \noindent By a reduction from \textsc{3SAT}, we show that computing \emph{OPT}$(H,k)$ is an NP-hard problem. \begin{theorem} \label{thm:nphard} Given a host graph $H$ and a number of $k$ facilities, computing the facility placement maximizing the weighted participation rate \emph{OPT}$(H,k)$, is NP-hard. \end{theorem} \begin{proof} We prove the theorem by giving a polynomial time reduction from the NP-hard \textsc{3SAT} problem. For a \textsc{3SAT} instance $\phi$ with a set of clauses $C$ and a set of variables $X$, we create a \limtmpmodel{} instance with $k = |X|$ facility players where the host graph $H(V_X \cup V_C,E_X \cup E_C,w)$ is defined as follows: \begin{align*} w(v)&=\{1 \mid v \in V_X \cup V_C\} \\ V_X&=\{v_x, v_{\neg x} \mid x \in X\}\\ V_C&=\{v_c \mid c \in C\}\}\\ E_X&=\{(v_x, v_{\neg x}), (v_{\neg x}, v_x) \mid x \in X\}\\ E_C&=\{(v_c, v_l) \mid c \in C, \text{ literal } l \in c\}, \end{align*} where $v_l=v_x$ if the contained variable $x$ is used as a true literal in~$c$, and $v_l=v_{\neg x}$, otherwise. See Figure~\ref{fig:3sat} for an example. \begin{figure} \centering \begin{tikzpicture} \begin{scope} [ every node/.style = {circle, thick, draw, inner sep = 0 pt, minimum size = 17 pt} ] \node (c) at (-1.2, 1.3*1.2) {$v_{c_1}$}; \node (d) at (1.2, 1.3*1.2) {$v_{c_2}$}; \node (x1) at (-3*1.2, 0) {$v_x$}; \node (x2) at (-2*1.2, 0) {$v_{\neg x}$}; \node (y1) at (-0.5*1.2, 0) {$v_y$}; \node (y2) at (0.5*1.2, 0) {$v_{\neg y}$}; \node (z1) at (2*1.2, 0) {$v_z$}; \node (z2) at (3*1.2, 0) {$v_{\neg z}$}; \end{scope} \begin{scope} [ every path/.style = {thick, -Latex} ] \draw (x1) edge[bend left=15] (x2) (x2) edge[bend left=15] (x1) (y1) edge[bend left=15] (y2) (y2) edge[bend left=15] (y1) (z1) edge[bend left=15] (z2) (z2) edge[bend left=15] (z1) (c) edge (x1) edge (y2) edge (z1) (d) edge (x2) edge (y1) edge (z1) ; \end{scope} \end{tikzpicture} \caption{An example of a corresponding host graph $H$ to the \textsc{3SAT} instance ${(x\vee \neg y \vee z)} \wedge {(\neg x\vee y \vee z)}$.} \label{fig:3sat} \end{figure} Let $\phi$ be satisfiable and $\alpha$ be an assignment of the variables satisfying $\phi$. 
We set $\mathbf{s} = (s_1,\ldots, s_k)$ such that for $i \in [1,k]$, $x_i \in X$, $s_i = v_{x_i}$ if $x_i$ is true in $\alpha$ and $s_i = v_{\neg x_i}$ otherwise. By $E_X$, $v_{x_i}$ and $v_{\neg x_i}$ are covered by a facility player located on either $v_{x_i}$ or $v_{\neg x_i}$. To show that each client $v_c \in V_C$ is covered as well, consider the corresponding clause $c = l_1 \vee l_2 \vee l_3$. Since $\phi$ is satisfied, at least one of the literals is true, which means that at least one of $v_{l_1}$, $v_{l_2}$ and $v_{l_3}$ must be occupied by a facility in $\mathbf{s}$. Thus, if $\phi$ is satisfiable, we get a placement where all clients are covered, which is optimal. Let $\mathbf{s}$ be a facility placement profile where all clients are covered. Note that this implies that for each $x \in X$ either $v_{x}$ or $v_{\neg x}$ is occupied by a facility player; since there are only $k=|X|$ facility players, exactly one vertex of each pair $\{v_x, v_{\neg x}\}$ is occupied. Hence, all facilities are placed on vertices in $V_X$. We construct an assignment of the variables $\alpha$ as follows: $x = $ true, if $v_{x} \in \mathbf{s}$ and $x = $ false, if $v_{\neg x} \in \mathbf{s}$. Let $c \in C$ be an arbitrary clause in $\phi$. The corresponding vertex $v_c$ is covered by a facility player which is placed on an adjacent vertex, $v_{l_1}$, $v_{l_2}$, or $v_{l_3}$. This implies that at least one of the literals $l_1$, $l_2$, and $l_3$ is true in $\alpha$ and therefore $c$ is satisfied. Hence, $\phi$ is satisfiable. \end{proof} \section{Conclusion and Future Work} We provide a general model for non-cooperative facility location with both strategic facilities and clients. Our load balancing \limtmpmodel{} is a proof-of-concept that even in this more intricate setting it is possible to efficiently compute and check client equilibria. Also, in contrast to classical one-sided models and to Kohlberg's two-sided model, the load balancing \limtmpmodel{} has the favorable property that stable states always exist and that they can be found via improving response dynamics. Moreover, our bounds on the PoA and the PoS show that the broad class of 2-FLGs is very well-behaved since the societal impact of selfishness is limited. The load balancing \limtmpmodel{} is only one possible realistic instance of a competitive facility location model with strategic clients; other objective functions are conceivable, e.g., depending on the distance and the load of all facilities in their shopping range. Also, besides the weighted participation rate other natural choices for the social welfare function are possible, e.g., the \emph{total facility variety} of the clients, i.e., for each client, we count the facilities in her shopping range. This measures how many shopping options the clients have. Moreover, we are not aware of any other competitive facility location model for which the total facility variety has been considered. \bibliographystyle{named}
\section{Introduction} \label{intro} Understanding the nature and formation of cosmic dust is crucial to our understanding of the cosmos. Over its 50-year history, infrared (IR) astronomy has shown that dust contributes to the physical processes inherent in star formation and mass-loss from evolved stars, as well as to several interstellar processes such as gas heating and the formation of molecules \citep[e.g.][]{vk02,draine03,krishna05,krugel08}. In particular, silicate grains dominate dust emission in many astrophysical environments. The ``amorphous'' $\sim10\mu$m and $\sim18\mu$m silicate spectral features have been observed in almost every direction and to almost any distance, but the precise nature of this silicate dust remains a mystery. Here we present a laboratory investigation of amorphous silicates, the type of dust grains most frequently inferred to exist from observational data. \subsection{A brief history of the ``amorphous'' silicate spectral features} \label{history} The classic ``10\,$\mu$m'' silicate feature was first observed in the late sixties in the IR spectra of several M-type giants and red supergiants \citep[RSGs;][]{gillett68}. Shortly thereafter a 10\,$\mu$m absorption feature was discovered in the interstellar medium \citep[ISM;][]{knacke69,hackwell70}. Since then, it has been found to be almost ubiquitous, occurring in many astrophysical environments including the solar system and extrasolar planetary systems \citep[e.g.,][and references therein]{mann06}, the circumstellar regions of both young stellar objects and evolved intermediate mass stars \citep[asymptotic giant branch; AGB stars, and planetary nebulae; e.g.,][]{speck00,casassus01}; many lines of sight through the interstellar medium in our own galaxy \citep[e.g.,][]{chiar07}; and in nearby and distant galaxies \citep[e.g.,][]{hao05}. Initially this feature was attributed to silicate minerals \citep{woolf69}, based on mixtures of spectra of crystalline silicate species predicted to form by theoretical models \citep{gaustad63,gilman69}. However, laboratory spectra of crystalline silicate minerals showed more structure within the feature than observed in the astronomical spectra \citep[see e.g.,][]{woolf73,huffman73}. Subsequent comparison with natural glasses \citep[obsidian and basaltic glass; from e.g.,][]{pollack73} and with artificially disordered silicates \citep{day79,kh79} showed that ``amorphous'' silicate was a better candidate for the 10\,$\mu$m feature than any individual crystalline silicate mineral. Since then, it has been commonly assumed that description of silicate as disordered or ``amorphous'' is synonymous with glassy silicate. However, this is an oversimplification. The term ``glass'' has a specific definition, i.e. the solid has no long-range order beyond nearest-neighbor atoms. ``Crystalline'' is often taken to mean single crystals, but it is possible to form poly-nanocrystalline agglomerates, with a continuum which essentially extends from a true glass to a single crystal grain. Furthermore, natural and synthetic ``glasses'' often contain microlites.\footnote{Micro- or nano-crystalline inclusions within a glassy matrix \citep[see e.g.,][]{pollack73,jager94}. Prior to \citet{zachariasen} the difference between nanocrystalline (i.e. ceramic) and glassy solids was not understood.} In addition, one might expect agglomerated particles to be polymineralic, and possibly to contain both crystalline and glassy constituents.
\citet{NuthHecht} introduced the idea of ``chaotic silicate'' in which the level of disorder is even greater than for glass. A chaotic silicate does not have to be stoichiometric, can contain different compositional zones within a single grain, and may be porous, and therefore much lower density than a glass. This range of possible grain types is demonstrated schematically in Fig.~\ref{fig:XtalC}. In astrophysical environments, whether a solid is glassy, crystalline or some combination of the two has implications for its formation, and subsequent processing, evolution and destruction, and thus it is important to have tools to distinguish between such grain types. For example, true glasses with no inclusions will not transform into crystals below their glass transition temperature ($T_g$; see \S~\ref{silglass}), whereas a glass already containing microlites can continue to crystallize at slightly lower temperatures. In the case of terrestrial obsidians, with 900\,K $ < T_g < $ 1000\,K, elemental diffusion profiles suggest that crystal growth can continue down to $\sim$700\,K \citep[e.g.,][]{watkins09}. If poly-nano-crystalline grains can be distinguished spectroscopically in the lab from truly glassy grains, we can test for their presence in astrophysical environments. For instance \citet{SH04} showed that there is a difference between single crystal and polynanocrystalline silicon carbide (SiC), while \citet{STH05} showed that glassy SiC looks different than various crystalline samples. These laboratory data were invoked to explain changes in SiC grains formed as carbon stars evolve. For silicates, laboratory data on poly-nano-crystalline samples are lacking. Whether grains are glassy or poly-nano-crystalline is an indicator of whether grains form or are processed above $T_g$. If the two forms cannot be distinguished in laboratory spectra, then the ``amorphous'' nature of silicates in space would no longer necessarily imply a truly glassy structure, allowing the possibility of higher dust formation temperatures. Many observations have shown that the 10\,$\mu$m silicate feature varies from object to object and even within a single object both temporally \citep[e.g.,][]{monnier98} and spatially (e.g. in $\eta$ Car, N. Smith, Pers. Comm). Within a single type of astrophysical object, the feature shows huge variations in terms of its peak position, width and its ratio to the $\sim 18\mu$m feature \citep*[e.g.][]{speck98,ohm92}. Variations in feature shape from star to star cannot be explained in terms of optical depth or grain temperature effects. Several interpretations of these observations have been suggested including: grain size effect \citep[e.g.,][]{papoular83}; Mg/Fe ratio and (Mg+Fe)/Si ratio \citep[e.g.][]{dorschner95}; inclusion of oxide grains \citep[e.g.,][]{speck00}; increasing crystallinity \citep[e.g.,][]{sylvester98,bouwman01}; grain shape \citep[e.g.,][]{min07}; and grain porosity \citep[e.g.,][]{vh08,henning93}. However, all models of these effects utilize laboratory spectra, and are only as reliable as the data that goes into them. The near ubiquity of ``amorphous'' silicate features and their variations in strength, width, peak position and the ratio of the 10\,$\mu$m/18\,$\mu$m features potentially provide the diagnostic tools to understand the detailed mechanisms by which dust is formed, processed and destroyed. However, existing laboratory and synthetic spectra are not sufficiently well understood to achieve this goal. 
\subsection{A brief synopsis of existing laboratory and synthetic spectra for disordered silicates} \label{prevlab} Since the discovery of the 10\,$\mu$m feature, there have been many laboratory studies producing IR spectra and optical functions\footnote{Usually these are called optical ``constants'', but they are wavelength-dependent quantities, so we prefer ``functions''.} of various samples for comparison with and modeling of observational data. In addition, synthetic optical functions have been derived from observational spectra, often combined with laboratory mineral data, in order to match the observed features \citep*[e.g.,][]{dl84,vk88,ohm92}. The first laboratory spectra used in astronomical silicate studies were of crystalline silicates and natural glasses \citep[obsidian and basaltic glass, which contain some microlites; e.g.,][]{pollack73}. Various studies produced ``amorphous'' samples through chemical vapor deposition \citep[e.g.,][]{day79}, smokes \citep[e.g.,][]{nd82}, ion-irradiation of crystalline samples \citep[e.g.,][]{kh79}, laser ablation of crystalline samples \citep{sd96}, and quenching melts to glass \citep[e.g.,][]{dorschner95,jager03}. However, these techniques yield different results. For instance, the peak positions, full widths at half maximum (FWHM) and ratios of the strengths of the 10 and 18\,$\mu$m features vary between datasets even though the materials investigated are ostensibly of the same composition. Spectra from samples with the same reported composition vary, which should not occur if the samples had the same structures, and if all spectra were obtained under optically thin conditions. A detailed comparison between existing and new laboratory data is given in \S~\ref{comparison}. The sample preparation techniques vary widely and lead to a range of disordered structures. Unfortunately we do not have sufficient information on the physical structure or chemical characteristics of previously studied samples to determine the effects quantitatively, but a qualitative analysis presented here highlights the need to make such sample information available. The data used most commonly for modeling astronomical environments are synthetic optical functions such as those of \citet*[hereafter DL]{dl84} and \citet*[hereafter OHM]{ohm92}. These data are favored over laboratory spectra because they have broad wavelength coverage, which is not true of individual laboratory datasets. However, these synthetic functions were produced using compilations of laboratory spectral data and observed astronomical dust opacities from which new optical functions were calculated. The derived optical functions were then modified specifically to match the astronomical observations. For instance, in the \citeauthor{dl84} data, the 9.7\,$\mu$m feature is entirely derived from observational opacities, while the NIR-NUV section of their optical function uses laboratory data from crystalline olivine studies, and the FUV/X-ray region uses laboratory data for crystalline alumina (Al$_2$O$_3$). Both \citeauthor{dl84} and \citeauthor{ohm92} blend laboratory and astronomical data and their optical functions will match some spectral features, and can be used for comparison of optical depths between different dusty environments. However, they do not represent real solids and thus cannot be used to determine the true nature of dust in space, how it varies spatially or temporally, or why.
\subsection{A brief guide to the structure of silicate minerals, glasses and the glass transition} \label{silglass} The basic building block of silicates is the SiO$_{4}^{4-}$ tetrahedron. These can be linked in a framework, with each oxygen shared between two tetrahedra (e.g. SiO$_2$ minerals and feldspars), or they can be linked in chains (e.g. pyroxenes such as diopside [Di; CaMgSi$_2$O$_6$] and enstatite [En; MgSiO$_3$]), or they can occur as isolated tetrahedra (e.g. the olivine series: forsterite [Fo; Mg$_2$SiO$_4$] to fayalite [Fa; Fe$_2$SiO$_4$]). In all cases, the non-shared oxygens (known as non-bridging oxygens or NBOs) are charge-balanced by other cations (e.g. Mg$^{2+}$, Fe$^{2+}$, Ca$^{2+}$, Na$^+$, etc). The Ca-Mg-Al silicates that are expected to form in circumstellar environments are dominantly of pyroxene and olivine composition. Within each mineral group, solid solution allows compositions to vary between end-members. In the case of similarly-sized cations the solution may be continuous, e.g., enstatite to ferrosilite (Fs; FeSiO$_3$) and the olivine series. Given the availability of other cations, end-member pyroxenes rarely occur in terrestrial or meteoritic samples. Where the substituting cation is a different size, solid solution is more limited, for example between enstatite and diopside. In such cases, solid solution becomes less extensive at lower temperatures, so that cooling may lead to ``exsolution'' of two different pyroxene compositions from an initially homogeneous high-temperature solid. Minerals occur as crystals that possess long-range order, with very narrow distributions of bond angles and lengths. This leads both to anisotropy (properties varying with crystal orientation) and to narrow spectral features. Silicate glasses are the ``frozen'' structural equivalents of liquids, possessing short-range order (so that local charge balance is conserved, for example) but lacking the long-range order that gives rise to symmetry and anisotropy in crystals. Glasses are therefore isotropic and have broad spectral features. The basic structural unit of silicate glasses and melts is the SiO$_{4}^{4-}$ tetrahedron, as is the case in crystalline silicate minerals. Oxygens linking tetrahedra are known as bridging oxygens (BO), while non-bridging oxygens (NBO) are coordinated by metal cations, which are termed network-modifiers in this role. Tetrahedral cations (T) include not only Si$^{4+}$, but also trivalent cations such as Al$^{3+}$ and Fe$^{3+}$; these must be charge-balanced by other cations (usually alkalis or alkaline earths; Fig.~\ref{cartoon}). The degree of polymerization of a melt or glass can be summarized by the ratio NBO/T, which can range from 0 (fully polymerized, e.g. SiO$_2$) to 4 (fully depolymerized, e.g. Mg$_2$SiO$_4$). In general, more polymerized melts are more viscous and have higher glass transition temperatures. On quenching a melt, its structure is ``frozen in'' at the glass transition if cooling is rapid enough to prevent crystallization. The glass transition is actually an interval, often approximated by the glass transition temperature ($T_g$), which is usually taken to be the temperature at which the viscosity is 10$^{12}$\,Pa\,s ($T_{12}$). Rapid cooling from a given temperature preserves the network present in the liquid at that particular temperature. Because of this behavior, glasses of any given composition can have subtle differences in structure that depend on cooling rates. 
The temperature at which the glass has the same structure as the melt is called the fictive temperature ($T_f$). See \citet{mysen05} for a comprehensive review of melt structure and properties. Two issues whose importance will be discussed in the current work are the role of water and the oxidation state of iron. At low water contents (less than about 1\,wt.\% total H$_2$O), water dissolves in silicate glasses almost exclusively as hydroxyl (OH$^-$) ions \citep{stolper82}, and acts as a network modifier (Fig.~\ref{cartoon}). Compared to other modifier oxides such as Na$_2$O and MgO, water has a more dramatic effect in reducing melt viscosity and glass transition temperatures \citep{dingwell96}. Iron can play the role of network modifier (octahedral Fe$^{2+}$ or Fe$^{3+}$) or network-forming cation (tetrahedral Fe$^{3+}$). Consequently, the oxidation state of an iron-bearing glass or melt has a significant effect on its structure and properties. From the perspective of cosmic dust formation, the glass transition temperature is essentially the temperature above which a given composition should form as or convert to crystalline solids, whereas solids formed below this temperature will be glassy if they cool sufficiently rapidly \citep{richet1993,swt08}. More depolymerized melts are more difficult to quench, and melts less polymerized than pyroxenes (NBO/T $>$ 2) typically require extreme quench rates (100s of K\,s$^{-1}$) using methods such as containerless laser processing to achieve truly glassy samples \citep{tangeman01a}. For a given composition, faster cooling rates result in a higher $T_g$. This dependence can be determined by differential scanning calorimetry using different heating and cooling rates, and is used to determine the cooling rate of natural lava samples \citep{wilding95}. For depolymerized silicates (e.g. olivines and pyroxenes), if glassy grains form they must do so below their $T_g$, because the cooling rate required for quenching to a glass is extremely rapid. Highly polymerized silicates (e.g. silica, obsidian) can be cooled more slowly, over hours or days, and not crystallize \citep{bowen}. However, the cooling timescales (months) determined by \citet{swt08} for AGB star circumstellar shells are sufficiently long as to preclude the preservation of glassy/chaotic solids that form above $T_g$, because annealing timescales are shorter than those for cooling for all but the most polymerized silicates. \subsection{The need for new data} Modeling of silicate dust in space has been limited by the available laboratory data. The influence of various model parameters was investigated by \citet{jm76}, who found that using so-called ``clean'' (i.e. pure magnesium) silicate grains to model the observed 9.7\,$\mu$m features did not yield a good fit, due to the lack of absorption by these grains in the visible and near-IR. They also found that just mixing in more absorbing grains did not solve the problem. This led to the suggestion that the grains responsible for the 9.7\,$\mu$m feature are ``dirty'' silicates, i.e. Mg-silicates with impurities introduced into the matrix giving more opacity in the optical and near-IR. It is known that NBO/T (polymerization) affects the spectra of amorphous silicates such that the peak position of the 10\,$\mu$m feature shifts redwards as NBO/T increases (e.g. \citeauthor{ohm92}). Aluminium (Al) is a network former and consequently Al content strongly affects NBO/T. 
\citet{mutschke98} suggested that Al may be an important component of silicates in space that could explain why previous laboratory spectroscopic studies failed to match observational data. Other cations may be equally important. Ca$^{2+}$ and Fe$^{2+}$ both substitute for Mg, while Fe$^{3+}$ will substitute for tetrahedral cations (e.g. Si$^{4+}$ or Al$^{3+}$). Therefore, the oxidation state is another important variable in addition to elemental substitutions. Existing laboratory spectral data for ``amorphous'' silicates were produced using samples that are not sufficiently well-characterized to allow astronomers to interpret their observations without ambiguity. Here we present new laboratory spectra of several silicate glasses of astronomical relevance, and discuss compositional factors that influence their spectral features. We compare these new data with those previously available for ``amorphous'' silicates and discuss how these samples compare to successfully-applied synthetic optical functions. We find that the synthetic spectra cannot be well matched by the conventionally assumed glassy silicate compositions and discuss whether astrophysical silicates need to be truly glassy. \section{Experimental Methods} \label{exp} \subsection{Sample selection} The bulk composition of silicate stardust lies somewhere between pyroxene (M$_2$Si$_2$O$_6$) and olivine (M$_2$SiO$_4$), where M indicates metal cations, with Mg and Fe being the most abundant. The cosmic Mg/Si ratio is predicted to be $\sim$1.02, while Fe/Si $\sim$0.84 \citep[e.g.,][]{lf99}. However, most Fe is expected to combine with S, Ni, Cr, Mn and other siderophile elements into metallic grains \citep{gs99,lf99}. This partitioning is reflected in the Earth, where most iron resides in the metallic core while the mantle is dominated by magnesium-rich silicate with Mg/Fe $\sim$9. The predicted bulk cosmic silicate would then be close to MgSiO$_3$, with minor amounts of iron leading to an atomic (Mg+Fe)/Si ratio slightly greater than 1. Determining the spectra of various olivine and pyroxene glasses is therefore of critical importance for identifying the silicate mineralogy. The focus of this study is glass compositions for which data already exist in the astronomy literature, i.e. predominantly olivines and pyroxenes (see Table~\ref{litdata} and references therein). Their Mg-rich endmembers are forsterite and enstatite, respectively. Another mineral that is commonly discussed in astromineralogy is diopside, which is also a pyroxene. Diopside has been invoked to explain observed crystalline silicate features \citep[e.g.][]{Demyk2000,Kemper2002,Hony09,OO03} and appears in the classic condensation sequence for dust formation \citep[e.g.,][see Fig.~\ref{condseqfig}]{tielens90}. Furthermore, aluminous diopside formed in the experimental condensation study by \citet{Toppani2006}, and \citet{Demyk2004} showed that crystalline diopside grains can easily be amorphized by heavy ion irradiation. To complement the olivines and pyroxenes (Table~\ref{litdata}) we include four samples from the melilite series, whose endmembers are gehlenite (Ge; Ca$_2$Al$_2$SiO$_7$) and \r{a}kermanite (\r{A}k; Ca$_2$MgSi$_2$O$_7$). Gehlenite is predicted to be among the first-formed silicate grains \citep[e.g.,][see Figure~\ref{condseqfig}]{tielens90,lf99}, and the major repository for calcium and aluminum in dust, whereas pyroxenes are predicted to be among the most abundant grains, and the major repository for Mg and Si \citep[e.g.,][]{tielens90,lf99,gs99}. 
Furthermore, aluminium and calcium are highly depleted from the gas phase and are assumed to be included in dust \citep{Whittet1992}. Aluminum-rich silicates like gehlenite and other melilites are major constituents of Calcium-Aluminum-rich Inclusions (CAIs), and gehlenite has been identified in red supergiants \citep{speck00}, in Active Galactic Nuclei (AGN) environments \citep{jaffe04} and in meteorites \citep{stroud08,vollmer07}. Furthermore, a recent Type Ib supernova (SN2005E) has been shown to contain more calcium than expected \citep{perets}. Indeed, almost 50\% of its ejecta mass (or $>$0.1M$_\odot$) is attributed to calcium. \citet{chihara07} reported laboratory spectra of crystalline melilites every 10\% along the solid-solution join between \r{a}kermanite and gehlenite. \citet{mutschke98} reported laboratory spectra of two glasses: end-member gehlenite and \r{A}k50Ge50. Given that the cosmic Mg/Ca ratio is $\sim$16, it seems likely that melilites could have substantial \r{a}kermanite contents. The difference in the structure of the glasses is profound: NBO/T is 0.67 for gehlenite and 3.0 for \r{a}kermanite, while Al/Si is 2 for gehlenite and 0 for \r{a}kermanite. Both theoretical models \citep{lili} and observations \citep{jm76} suggest that some iron is incorporated into silicates. \citet{jager94} presented spectra for two silicate glasses containing the most abundant dust-forming elements (i.e. Mg, Si, Fe, Ca, Al, Na). However, iron in their samples was partially oxidized (FeO/Fe$_2$O$_3 \sim$1), which leads to problems in interpreting the spectrum (see \S~\ref{silglass}). With this in mind, we present a sample we call ``Basalt'' ($\rm Na_{0.09}Mg_{0.62}Ca_{0.69}Fe_{0.39}Ti_{0.10}Al_{0.06}Si_{2.16}O_6$), which contains ferrous iron (Fe$^{2+}$). While iron has been invoked to explain opacity problems, most iron is expected to combine with other siderophile elements to form metal or metal sulfide grains rather than silicate \citep{gs99,lf99}. Consequently, we have synthesized an iron-free silicate glass using cosmic abundance ratios for Mg, Si, Al, Ca, and Na, yielding $\rm (Na_{0.10}Ca_{0.12}Mg_{1.86})(Al_{0.18}Si_{1.84})O_6 $. This sample does not include volatile elements or iron and is quite close to enstatite in composition. This ``cosmic silicate'' is designed to test the \citet{stencel90} hypothesis that dust forms as chaotic solids with the elemental abundances of the gas. \citet{pollack73} provided spectra of obsidian, a naturally occurring glassy silicate. Consequently, we include obsidian glass in our sample, in part because this is the origin of the attribution of the $\sim$10\,$\mu$m feature to amorphous silicates. It is also useful for studying the effect of silicate structure and composition on spectral parameters because it is different from the more commonly assumed olivines and pyroxenes. Finally, we include silica (SiO$_2$) glass in our sample. Like obsidian, the structure of SiO$_2$ is significantly different from olivines and pyroxenes and thus provides potential insight into the effects of structure on silicate spectral features. Furthermore, silica dust grains have been invoked to explain observed astronomical spectral features in both evolved stars \citep{speck00} and young stellar objects \citep[e.g.][]{sargent06}. The samples investigated here are listed in Table~\ref{tab:samples}. 
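The NBO/T values quoted above follow from simple stoichiometric bookkeeping. The short Python sketch below is purely illustrative (the function and composition labels are ours, not part of our data reduction) and assumes that Si and Al (plus any tetrahedral Fe$^{3+}$) are the only network-forming cations; it reproduces the values of 0.67 for gehlenite and 3.0 for \r{a}kermanite quoted above.
\begin{verbatim}
# NBO/T from a formula unit, counting Si, Al (and tetrahedral Fe3+, if any)
# as network-forming T cations; every oxygen beyond the 4 per T cation needed
# for a fully polymerized network is non-bridging.
def nbo_per_t(formula):
    t = formula.get('Si', 0) + formula.get('Al', 0) + formula.get('Fe3+', 0)
    return (2 * formula['O'] - 4 * t) / t

compositions = {
    'silica     SiO2'      : {'Si': 1, 'O': 2},
    'enstatite  MgSiO3'    : {'Si': 1, 'Mg': 1, 'O': 3},
    'forsterite Mg2SiO4'   : {'Si': 1, 'Mg': 2, 'O': 4},
    'akermanite Ca2MgSi2O7': {'Si': 2, 'Mg': 1, 'Ca': 2, 'O': 7},
    'gehlenite  Ca2Al2SiO7': {'Si': 1, 'Al': 2, 'Ca': 2, 'O': 7},
}
for name, f in compositions.items():
    print(name, round(nbo_per_t(f), 2))
# prints 0.0, 2.0, 4.0, 3.0 and 0.67 respectively
\end{verbatim}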
\subsection{Sample synthesis and preparation} \label{synth} Samples designated with ``synthetic'' in their name are generated from mixtures of reagent-grade oxides and carbonates, providing glasses with compositions like those of end-member minerals. The ``cosmic'' silicate was also synthesized in this way. In contrast, samples designated as ``remelt'' were generated by melting natural mineral samples. Consequently the ``remelt'' samples have compositions whose additional components reflect the impurities\footnote{Unlike minerals, glasses do not have to have well-defined formulae. Some studies describe such non-stoichiometric glasses as having large quantities of impurities, but these non-mineral-end-member compositions are simply what the glass is made of.} found in natural crystalline samples of the relevant minerals. Using both synthetic and remelted samples helps to demonstrate the effect and importance of small compositional variations in silicates. Synthesis of silicate glasses from oxide and carbonate starting materials was undertaken at the MU experimental petrology facility and the procedures are described in detail by \citet{getson07}. Melilite glasses were prepared by fusion in Pt crucibles and quenched by pouring into graphite molds (slow cooling), or onto a copper plate (faster cooling for less polymerized compositions). Glasses in the pyroxene series (including ``cosmic'' compositions) vary in their quenchability; diopside (CaMgSi$_2$O$_6$) is an excellent glass-former, while enstatite (MgSiO$_3$) crystallizes extremely rapidly. Glass of forsterite (Fo, Mg$_2$SiO$_4$) or olivine (Mg$_{2x}$Fe$_{2-2x}$SiO$_4$) composition can only be formed in the laboratory under special conditions. Specifically, Fo glass in the form of 50 to 200\,$\mu$m diameter beads was first produced by \citet{tangeman01a} at Containerless Research by suspending small particles in argon gas and melting/quenching by pulsing with a laser. In addition to providing rapid cooling of $\sim$700\,K\,s$^{-1}$, the lack of a container promotes crystal-free glass formation. Commercially prepared samples of Fo were purchased from Containerless Research, Inc. Iron-bearing olivine samples were not available. We were able to prepare ``basalt glass'' at Washington University by flash melting rock on a glassy carbon substrate in a vacuum chamber with a CO$_2$ laser, followed by rapid cooling. When applied to iron-bearing olivine or fayalite (Fe$_2$SiO$_4$) crystals this approach failed to produce glass. \subsection{Chemical Analyses for Sample Composition Determination} Samples were characterized by wavelength dispersive analysis (WDS) using standard procedures on the JEOL-733 and JXA-8200 electron microprobes at Washington University, using ``Probe for Windows'' for data reduction (see http://www.probesoftware.com/). The measured data were corrected using CITZAF after \citet{Armstrong1995}. Oxide and silicate standards were used for calibration (e.g., Amelia albite for Na, Si; microcline for K; Gates wollastonite for Ca; Alaska anorthite for Al; synthetic fayalite for Fe; synthetic forsterite for Mg; synthetic TiO$_2$ for Ti; synthetic Mn-olivine for Mn; synthetic Cr$_2$O$_3$ for Cr). Microprobe analyses are given in Table~\ref{microprobe}, and the resulting compositions are given as chemical formulae in Table~\ref{tab:samples}. Table~\ref{microprobe} also lists water contents, which we determined from near-IR spectra using the method of \citet{hofmeister09}. 
\subsection{Viscosimetric Determination of Glass Transition Temperature} As discussed in \S~\ref{silglass}, the glass transition temperature ($T_g$) is essentially the temperature above which a given composition should form as or convert to crystalline solids, while solids formed below this temperature should be glassy. We determine $T_{12}$ values as a proxy for $T_g$ for our silicate samples to provide an upper limit on temperature for models of glass formation in space. $T_g$ depends on composition and cooling rate. Viscosity was measured over a range of temperatures using a Theta Instruments Rheotronic III parallel plate viscometer, following procedures described by \citet{whittington09}. Viscosity is calculated from the measured longitudinal strain rate, the known load and the calculated instantaneous surface area, assuming perfect slip between sample and plates. The measurements are interpolated to find $T_{12}$ with an uncertainty of less than 2\,K. \subsection{Spectroscopy} Room temperature (18--19$^{\circ}$C) IR absorption spectra were acquired using an evacuated Bomem DA 3.02 Fourier transform spectrometer (FTIR) at 1\,cm$^{-1}$ resolution. The accuracy of the instrument is $\sim$0.01\,cm$^{-1}$. Far-IR data ($\nu < 650\,{\rm cm}^{-1}$) were collected for five samples using a SiC globar source, a liquid-helium-cooled bolometer, and a coated mylar beam-splitter. Mid-IR data ($\nu =$~450--4000\,cm$^{-1}$) were collected for all samples using a SiC globar source, a liquid-nitrogen-cooled HgCdTe detector, and a KBr beam-splitter. The spectra were collected from powdered samples pressed in a diamond anvil cell (DAC). The empty DAC was used as the reference spectrum, which allows reflections to be removed. The methodology is described by \citet{hofm03}. Interference fringes exist in many of our spectra because the diamond faces are parallel and are separated by a distance within the wavelength range studied. The spacing is larger than the film thickness. Fringes are associated with a sideburst in the interferogram. Due to mathematical properties of Fourier transforms, the fringes are convolved with the spectrum and therefore do not affect peak shape or the parameters used to describe the peak (position, width and height). \section{Comparison of new laboratory spectra with previous laboratory and synthetic spectra} \label{comparison} \subsection{New laboratory spectra} \label{newlabdata} The new laboratory spectra, shown in Figure~\ref{newlabdatafig}, are the highest resolution spectra of silicate glasses to date. These data are available online from http://galena.wustl.edu/$\sim$dustspec/idals.html. The main observable parameters of astronomical spectra are the peak positions of the absorption/emission features at $\sim$10\,$\mu$m and $\sim$18\,$\mu$m, their FWHMa and the ratio of their strengths (see e.g., \citeauthor{ohm92} and references therein). For instance, the ratio of band strengths between the 10 and 18\,$\mu$m features in observations varies markedly, as do the peak positions. Therefore we have extracted the most important spectral parameters from our data using the NOAO onedspec package within the Image Reduction and Analysis Facility (IRAF). The spectral parameters (peak position, barycentric position, full width at half maximum [FWHM], and equivalent width) were determined for the $\sim$10 and $\sim$18\,$\mu$m features. 
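For readers who wish to apply the same characterization to other datasets, the following Python sketch shows one way to recover these four quantities from a single continuum-subtracted feature. It is illustrative only (the values in this paper were measured with the IRAF onedspec tasks), and the function and array names are ours.
\begin{verbatim}
import numpy as np

def feature_parameters(wavelength, profile):
    # wavelength in microns, profile = continuum-subtracted absorbance
    peak = wavelength[np.argmax(profile)]
    # intensity-weighted (barycentric) position
    barycenter = np.trapz(wavelength * profile, wavelength) \
                 / np.trapz(profile, wavelength)
    # full width at half maximum from the outermost half-power crossings
    half = 0.5 * profile.max()
    above = np.where(profile >= half)[0]
    fwhm = wavelength[above[-1]] - wavelength[above[0]]
    # integrated area of the profile, used here as a proxy for the
    # equivalent width measured against a unit continuum
    area = np.trapz(profile, wavelength)
    return peak, barycenter, fwhm, area
\end{verbatim}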
Figure~\ref{newlabdatafig} shows the barycentric position of the $\sim$10\,$\mu$m feature for all samples, while Figure~\ref{newlabdatafig20um} shows the barycentric positions of both the $\sim$10 and $\sim$18\,$\mu$m features for the five compositions for which far-IR data were collected. The barycentric positions, along with the peak positions and FWHMa from absorbance measurements, are listed in Table~\ref{tab:samples3}. There are multiple terms and symbols for the various ways in which absorption of light by solids is described. In order to prevent confusion, we will explain precisely how each term we use is defined. This is particularly important for applying laboratory spectra to astrophysical studies because our spectra are initially in the form of absorbance ($a$), but we typically use either absorption efficiency ($Q$-values), optical depth or extinction in astronomy. Transmittance, $T$ \citep[which is also referred to as {\em transmissivity} in][]{fox02}, is defined as the ratio of the intensity of transmitted ($I_{\rm trans}$) and incident ($I_0$) light, i.e. $I_{\rm trans}/I_0$. Similarly, reflectivity, $R$, and absorptivity, $A$, are the ratios of reflected to incident light and absorbed to incident light, respectively\footnote{Note that {\em absorbance} is the negative logarithm of the {\em transmittance}, in contrast to the absorptivity defined here; see also \citet{STH05}}. \[ I_0 = I_{\rm abs} + I_{\rm trans} + I_{\rm refl} \] \[ \frac{I_{\rm trans}}{I_0} = 1 - \frac{I_{\rm abs}}{I_0} - \frac{I_{\rm refl}}{I_0} \] \[ T = 1 - R - A \] \noindent For simplicity we will assume the reflectivity is negligible or has been accounted for \citep[see][for how we can account for reflections]{hofmeister09}. Then, \begin{equation} T = 1 - A \end{equation} \noindent When light passes through a solid the absorption can be expressed as: \begin{equation} \label{tau1} I_x = I_0 e^{-\alpha L} \end{equation} \noindent where $L$ is the pathlength or thickness of the solid sample and $\alpha$ is usually called the absorption coefficient, but is sometimes called opacity. In addition, $\alpha = \kappa \rho$, where $\kappa$ is the mass absorption coefficient and $\rho$ is the mass density. \begin{equation} \label{tau2} T = e^{-\alpha L}= e^{-a} \end{equation} \noindent Absorbance, $a$, is the exponent in the decay of light due to absorption: $a = \alpha L = \kappa \rho L$. Optical depth $\tau_\lambda$ of an absorbing material is defined by: \begin{equation} \label{tau3} I_x = I_0 e^{-\tau_\lambda} \end{equation} \noindent \citep[from, e.g.][]{glass99}. From equations~\ref{tau1}, \ref{tau2} and \ref{tau3} we see that the absorbance $a$ is equivalent to the optical depth $\tau_\lambda$. In order to compare with astronomical data in which we have a wavelength-dependent optical depth, we can use absorbance (which is how we present our transmission spectra in Fig.~\ref{newlabdatafig} and Fig.~\ref{newlabdatafig20um}, after accounting for surface reflections). Now we can relate the absorbance ($a$) and absorptivity ($A$) via equation~\ref{tau2}: \[ A = 1 - e^{-a} \] \noindent To compare with some astronomical observations we still need to extract a version of the laboratory data that is comparable to the absorption efficiency, $Q$-values. To get this we need to consider how $Q$-values are defined. 
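Before turning to the definition of $Q$-values, the relations above can be illustrated numerically. The short sketch below uses purely illustrative values for $\kappa$, $\rho$ and $L$ (they do not correspond to any of our samples) and neglects reflection, as assumed above.
\begin{verbatim}
import numpy as np

kappa = 2.0e3    # mass absorption coefficient [cm^2/g]   (illustrative)
rho   = 3.0      # density [g/cm^3]                       (illustrative)
L     = 1.0e-4   # film thickness, 1 micron in cm         (illustrative)

a = kappa * rho * L      # absorbance, equal to the optical depth tau here
T = np.exp(-a)           # transmittance, T = exp(-a)
A = 1.0 - T              # absorptivity, from T = 1 - A with R = 0
print(a, T, A)           # -> 0.6  0.549  0.451
\end{verbatim}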
For a non-blackbody dust grain we define an absorption cross-section $C_{\rm abs}$ as the effective geometrical cross-section of the particle once we account for it not being a blackbody: \[C_{\rm abs} = Q_{\rm abs} \times \Upsilon\] \noindent where $\Upsilon$ is the geometrical cross-sectional area of a dust grain. Now if we consider how the absorption cross-section gives rise to absorption we get: \begin{equation} \label{sigma1} \frac{I_{\rm abs}}{I_0} = A = C_{\rm abs} n L \end{equation} \noindent where $n$ is the number density of absorbing particles and $L$ is the pathlength or thickness of the absorbing zone. \begin{equation} \label{rho} n = \frac{\rho}{M_{\rm mol} \times m_H} \end{equation} \noindent where $M_{\rm mol}$ is the molar mass of the solid and $m_H$ is the mass of a hydrogen atom. Combining equations~\ref{sigma1} and \ref{rho} we get: \begin{equation} \label{Qeq} Q_{\rm abs} = \frac{A \times M_{\rm mol} \times m_H}{ \Upsilon \rho L} \end{equation} \noindent It is clear from equation~\ref{Qeq} that $Q$-values are proportional to the absorptivity, such that: \[ Q_{\rm abs} = \zeta A \] \noindent where \[ \zeta = \frac{M_{\rm mol} \times m_H}{ \Upsilon \rho L} \] \noindent Consequently, while the shape, peak position and FWHM of spectral features shown in $Q$-values will be identical to those for $A$, the absolute values depend on the pathlength and on the cross-sectional areas of a given grain distribution. The uncertainty in the thickness of our samples and in potential grain size makes the absolute $Q$-values uncertain. Consequently we normalize our $Q$-value spectra to peak at unity. These $Q$-value spectra are stacked in Fig.~\ref{newlabdatafig2} to demonstrate the shift in both barycentric position and FWHM with composition. Table~\ref{tab:samples3} also includes the FWHM in $Q$-value. \subsection{How to compare disparate data sets} \label{how2comp} Here we compare the new laboratory spectra with those previously published to distinguish which factors are most important in determining spectral feature parameters. Previously published data are available as complex refractive indices ($n$ and $k$). To make a fair comparison between the many available datasets we converted our absorbance data to the wavelength-dependent imaginary part of the complex refractive index ($k$) for each sample. This absorption index $k$ is chosen as the best comparison of different datasets, as it does not depend on grain sizes or shapes and can be extracted equally well from transmission or reflectance data \citep[see e.g.,][]{hofmeister09,fox02}. Comparing absorption efficiencies $Q_{\rm abs}$ requires assumptions about grain shape, which have been shown to affect the shapes and positions of spectral features \citep[e.g.][]{Min03,DePew06}. The conversion of the absorbance spectrum to $k$-values uses: \[ k = \frac{2.303\, a}{4 \pi \nu d} \] \noindent where $a$ is the absorbance, $d$ is the sample thickness (in cm) and $\nu$ is the frequency (wavenumber) in cm$^{-1}$. There is some uncertainty in the measurement of the sample thickness, which is estimated to be 0.5--1.5\,$\mu$m \citep{HB06}. Consequently, for comparison our data have been normalized to peak at a $k$-value of 1. In Figure~\ref{compare1} we compare spectra of forsterite, enstatite, gehlenite and \r{a}kermanite composition ``amorphous'' silicates. 
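A minimal Python sketch of these two conversions is given below; the file and variable names are hypothetical. The factor 2.303 assumes the measured absorbance is decadic, and the peak normalization reflects the fact that only relative $k$- and $Q$-values are meaningful given the thickness and grain-size uncertainties discussed above.
\begin{verbatim}
import numpy as np

def absorbance_to_k(nu, a, d):
    # nu: wavenumber [cm^-1], a: absorbance, d: sample thickness [cm]
    # k = 2.303 a / (4 pi nu d); 2.303 converts a decadic absorbance
    # into a natural-log exponent
    return 2.303 * a / (4.0 * np.pi * nu * d)

def normalized_q(a):
    # Q_abs is proportional to the absorptivity A = 1 - exp(-a);
    # peak-normalizing removes the unknown pathlength/cross-section factor
    A = 1.0 - np.exp(-a)
    return A / A.max()

# usage with a hypothetical measured spectrum:
# nu, a = np.loadtxt('spectrum.txt', unpack=True)
# k = absorbance_to_k(nu, a, 1.0e-4); k /= k.max()
# q = normalized_q(a)
\end{verbatim}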
\subsection{Comparison of new glass spectra with existing laboratory data} \label{comparesect} Various synthesis techniques are associated with samples studied in the laboratory, as described in \S~\ref{prevlab} and listed in Table~\ref{litdata}. Considering, for example, the forsterite composition samples shown in the upper left panel of Figure~\ref{compare1}, it is clear that the spectral parameters vary even for ostensibly the same composition. The ion irradiation technique used by \citet{kh79} apparently does not produce a fully amorphized sample, as this spectrum is closer to that of crystalline forsterite, which was their starting material. The sample from \citet{day79} was produced via chemical vapor deposition; the \citet{sd96} sample was produced by laser ablation of crystalline samples; and the sol-gel method was used by \citet{jager03}. The spectra of the three different samples produced by chemical vapor deposition, laser ablation and sol-gel techniques are similar. However, although these samples may be ``amorphous'' they are not necessarily glassy. The samples generated by these three techniques may represent chaotic silicates rather than the glassy silicates investigated herein. The difference between previous samples and those presented here is most likely a combination of density and porosity. Glasses should be less porous and denser than chaotic silicates. Among the enstatite composition samples, those from \citet{day79} and \citet{sd96} are, again, similar to each other. Samples produced by melting and quenching \citep[e.g. ours and those of][]{dorschner95} show some variability in the spectral features. However, the peak positions and FWHMa are similar (see Tables~\ref{litdata} and \ref{tab:samples}). The difference in breadth may result from different cooling rates and fictive temperatures. \citet{jager03} investigated amorphous MgO-SiO$_2$ solids prepared using the sol-gel method. The spectra vary markedly with changes in SiO$_2$ content, and hence with polymerization (Fig.~\ref{compare2}). However, these samples also contained a reported 0.8 to 1.2\,wt.\% H$_2$O, which equates to $\sim$3\,mol.\% H$_2$O, all dissolved as network-modifying hydroxyl (OH$^-$) ions (see \S~\ref{silglass}). Glasses quenched from melts at atmospheric pressure \citep[e.g.\ those presented here and in][]{dorschner95} contain much less water, typically 0.02--0.1\,wt.\%. In previous laboratory studies it has been suggested that the changes in peak positions are due to differing silica contents, and hence polymerization states (NBO/T), i.e. forsterite has a redder peak position than enstatite, which in turn is redder than silica \citep[see e.g.,][and references therein]{ohm92}. This is demonstrated in Fig.~\ref{compare2}. Thus, even the modest levels of water remaining in samples prepared using the sol-gel method will affect the structure of the silicate and thus its spectrum. This makes precise interpretation of spectra from sol-gel samples difficult, especially for the more silica-rich compositions whose structure will be the most affected by the incorporation of water; the sample/material structure is therefore not well known. Spectral differences between the sol-gel MgSiO$_3$ of \citet{jager03} and the quenched MgSiO$_3$ glass of \citet{dorschner95} emphasize the point, especially around the 10\,$\mu$m feature (Fig.~\ref{compare1}). The bottom left panel of Fig.~\ref{compare1} compares spectra for gehlenite, the Al-rich endmember of the melilite series. All data were produced by the melt-quench method. 
Gehlenite from this study closely matches spectra of the \citet{mutschke98} sample, while the sample we generated by melting a natural crystal of gehlenite (designated gehlenite remelt) shows significant differences, which can be attributed to the deviation from stoichiometry that results in large shifts in both Al/(Al$+$Si) and NBO/T (see Table~\ref{tab:samples}). The bottom right panel compares the mid-composition melilite (\r{A}k50Ge50) from \citet{mutschke98} with our end-member synthetic melilites. The peak position of the \citet{mutschke98} melilite is similar to that of our Al-rich endmember, while the feature shape is closer to that of our Mg-rich endmember. \citet{mutschke98} discussed the low contrast feature in the 12--16\,$\mu$m range in various aluminous silicates. This feature moves from $\sim$14.5\,$\mu$m in gehlenite towards shorter wavelengths, with the \r{a}kermanite feature peaking closer to 13.5\,$\mu$m (Fig.~\ref{compare1}). \citet{mutschke98} did not investigate the \r{a}kermanite-rich melilites because their focus was on the effect of aluminium, but their findings, and those seen here, may pertain to the carrier of the observed ``$\sim$13\,$\mu$m'' feature \citep[e.g.][and references therein]{sloan03}. Figure~\ref{compare3} shows a comparison between our ``cosmic silicate'', ``basalt'' and the ``dirty silicate'' produced by \citet{jager94}. The 10\,$\mu$m features are very similar in peak position and FWHM, but the ratio of the $\sim$10 and $\sim$18\,$\mu$m features varies. This comparison shows that samples containing several elements give rise to very similar 10\,$\mu$m spectral features even though they differ in Fe/(Mg+Fe), oxidation state, NBO/T and other compositional parameters. In single crystal silicates, the Mg/Fe ratio affects the spectral features \citep[e.g.,][]{koike03,hp07}. \citet{dorschner95} investigated the effect of the Mg/Fe ratio on the spectra of (Mg,Fe)SiO$_3$ glasses produced by quenching melts in air. The peak positions shift slightly, and the ratio of the 10\,$\mu$m and 18\,$\mu$m feature heights varies markedly (Fig.~\ref{compare2}). Substitution of Ca for Mg produces substantial broadening of the 10\,$\mu$m feature; consequently, the peak (and barycenter) shift redward. The viscosity and glass transition data in Table~\ref{tab:samples} show that small impurities can have important effects on melt properties (and structure): compare, for example, the synthetic and remelted \r{a}kermanites and gehlenites. This is why it is important to consider Ca in pyroxenes, in addition to the En-Fs series. Indeed, based on the spectra presented here, the effect of calcium substitution is larger than that of iron. \citet{dorschner95} reported that the FeO/Fe$_2$O$_3$ ratios of their samples were $\approx$1. Whether this is a molar or weight ratio (unspecified in their paper), these samples do not have pyroxene stoichiometry, and thus have a structure differing from that of our pyroxene glasses. Using their reported compositions, and assuming all Fe$^{3+}$ acts as a network former, calculated NBO/T values range from 2.1 for En95 to $\sim$0.8 for En50 and En40 glasses, considerably lower than the value of 2.0 for a true (Mg,Fe)SiO$_3$ composition. Furthermore, the effect of tetrahedral Fe$^{3+}$ on neighboring Si--O bonds, which give rise to the 10\,$\mu$m feature, may be sufficient to produce marked changes in the shape and peak position of the feature even if Mg-Fe substitution does not. 
Therefore existing laboratory data on glasses do not allow astronomical spectra to be interpreted reliably in terms of either the M$^{2+}$/Si or the Mg/Fe ratio, which are important tools for discriminating between competing dust formation models. We will address the roles of oxidation state and Al content in silicates in future papers. \subsection{Comparison with Synthetic Spectra} As discussed in \S~\ref{prevlab}, the most popular silicate spectral data used in astronomy are the synthetic optical functions (complex refractive indices/complex dielectric functions) from \citeauthor{dl84} and \citeauthor{ohm92}. In the spectral regions considered here, both groups derived dielectric functions from astronomical observations. \citeauthor{dl84} used observations of the ISM, while \citeauthor{ohm92} produced two sets of optical constants, designated as warm O-deficient and cold O-rich. The warm O-deficient silicate is intended to match circumstellar dust features, for which \citet{NuthHecht} predicted that non-stoichiometric silicates form in the outflows. The cold O-rich silicate is meant to represent dust formed in molecular clouds, where the cool temperatures and slow dust formation lead to stoichiometric compositions. For the circumstellar (warm O-deficient) case, opacities were derived by \citet{vk88} based on averaging approximately 500 IRAS LRS spectra of evolved stars. However, this approach is less than ideal. It is well known that the observed ``silicate'' features vary markedly from source to source \citep[see e.g.][]{NuthHecht,mutschke98,speck00}, and thus averaging can smear out such differences and give rise to an opacity that is close to matching many objects and actually matches none. To provide a fair comparison, parameters for these synthetic silicate spectra have been included in Table~\ref{litdata}. Whereas the \citeauthor{dl84} spectra provide a reasonable match to the features observed in the ISM, the spectra from \citeauthor{ohm92} provide a narrower $\sim$10\,$\mu$m feature which matches the circumstellar silicate features more closely \citep[e.g.][and references therein]{Sargent2010}. However, neither \citeauthor{dl84} nor \citeauthor{ohm92} provide perfect matches to the observed astronomical mid-IR features; nor do they reflect the diversity of features seen in astronomical environments. Although synthetic spectra do not match astronomical data perfectly, they are widely used and come close to matching many astronomical observations. Consequently, comparing our new laboratory data to these synthetic spectra allows us to assess whether any true glasses show promise as carriers of observed silicate features. Figures~\ref{synthcomp1} to \ref{synthcomp4} show how the spectral properties of the new laboratory samples compare to those of the synthetic silicates from \citeauthor{dl84} and \citeauthor{ohm92}. In all cases the comparison is made with the absorption index $k$, which is the imaginary part of the complex refractive index, and was chosen to avoid complications arising from grain shape and grain size effects (see \S~\ref{how2comp}). None of the samples presented here precisely match the spectral features of the synthetic silicates. In particular, the 10\,$\mu$m feature in the spectra from \citeauthor{dl84} is too broad to be matched by any of the laboratory samples, with the possible exception of gehlenite (both ``synthetic'' and ``remelt''; see Fig.~\ref{synthcomp2}, bottom row). However, the relative cosmic abundances of calcium and aluminium make this an unlikely attribution for the ISM dust. 
It is usually assumed that astronomical silicates are largely composed of ``amorphous'' Mg-rich olivine and pyroxene compositions. Therefore, we compare the interstellar feature as represented by \citeauthor{dl84} with such compositions in Figs.~\ref{synthcomp5}--\ref{synthcomp3}. The four Mg-rich pyroxene-like glasses (Synthetic Enstatite, Enstatite Remelt, Cosmic Silicate and Basalt) have grossly similar spectra. In all four cases, the laboratory data match the blue side of the \citeauthor{dl84} feature, but are too narrow and fail to match the red side. This is also true for the broader diopside feature (Fig.~\ref{synthcomp5}). While forsterite composition glass has a redder feature than the pyroxenes, it is still too narrow and still not red enough to match the \citeauthor{dl84} 10\,$\mu$m feature. If glasses do produce the astronomical 10\,$\mu$m band, the excess breadth needs to be accounted for and may suggest processing of dust in the ISM \citep{NuthHecht}. The physical changes to the dust that give rise to the broader interstellar feature remain unknown, but there are numerous interpretations. In circumstellar environments the red-side broadening of the 10\,$\mu$m feature has been interpreted as being due to oxide inclusions \citep[e.g.,][]{speck00}; increasing crystallinity \citep[e.g.,][]{sylvester98,bouwman01}; changes in grain shape \citep[e.g.,][]{min07}; or grain porosity \citep[e.g.,][]{henning93,vh08}. The silicate feature is fairly constant and broad for the diffuse ISM, but varies more and is narrower for molecular clouds \citep[see e.g.][and references therein]{vanbreeman}. These variations have been attributed to a combination of grain agglomeration and ice mantle formation \citep[see][]{chiar06,chiar07,mcclure08,vanbreeman}, but the results of those studies depend on the optical properties input into their models. The spectra of \citeauthor{ohm92} have been used somewhat successfully in modeling circumstellar silicate features \citep[e.g.][and references therein]{Sargent2010}. Of the new laboratory glasses, only forsterite comes close to matching the \citeauthor{ohm92} synthetic spectra (Fig.~\ref{synthcomp4}), but there is still a problem with the breadth and redness of the feature. Most other compositions have 10\,$\mu$m features that are too blue. However, the 10\,$\mu$m silicate emission features seen in AGB star spectra observed by ISO and IRAS are consistently slightly bluer than those in the synthetic spectra \citep{wheeler07,wheeler09}. Consequently our samples may provide better matches than the OHM synthetic spectra. \section{Discussion} \subsection{Amorphousness at different scales: Glasses vs. Nanocrystalline solids} We have shown that differences exist between spectra of amorphous samples of ostensibly the same composition, due to structural differences arising from synthesis methods. In this section we discuss the range of structures that may be considered amorphous, and other factors, such as composite or polycrystalline grains, that may produce spectral features different from those obtained from studies of single crystals. The range of structures is shown schematically in Fig.~\ref{fig:XtalC}. Disordered (but not glassy) silicates may form at temperatures above $T_g$ by ion bombardment of initially crystalline materials. Depending on the extent of damage done by ion radiation, an initially crystalline sample may be amorphized and could be indistinguishable spectroscopically from a truly glassy sample \citep[see e.g.][]{Demyk2004}. 
As discussed in \S~\ref{prevlab}, limited heavy ion irradiation may not completely amorphize a crystalline sample \citep[e.g.][]{kh79}. Consequently, ion-irradiation of initially crystalline material should lead to a continuum of structures ranging from perfect crystals to completely disordered solids, with the accompanying range of spectral features. Furthermore, \citet{Demyk2004} showed that ion irradiation leads to more porous samples than simple splat-quenched glasses. While lower densities should not directly affect the spectra, voids give rise to extra reflections, which increase extinction and feature widths through increased scattering even though absorption is unchanged. The structures of ion-irradiated crystals may be more similar to smokes than to glasses because of ion damage (see Fig.~\ref{fig:XtalC}). Another potential carrier of the observed silicate spectral features is polycrystalline silicate. Spectra of single composition (monomineralic) polycrystals will be affected by multiple scattering, which should broaden and smear out the features, as demonstrated for hematite in \citet[][]{icarus}. During annealing and crystallization of an initially glassy grain, the final structure would depend strongly on composition. For a glass of pure enstatite composition, one may expect either a single crystal of enstatite or a monomineralic polycrystalline agglomeration (if crystallization starts at more than one point in the grain). However, since glassy grains are expected to form so as to include all the atoms in the outflowing gas, fractional crystallization is possible. For instance, our ``cosmic'' silicate has a bulk composition close to enstatite, but contains a significant amount of other elements. Consequently we might expect an annealed sample of this composition to be a polymineralic polycrystalline aggregate with a large abundance of enstatite crystals. Depending on the spectral contributions from the other constituents of the polymineralic grain, we may not perceive sharp crystalline features in the spectrum. This is beyond the scope of the present work but will be investigated in the future. Given that there are multiple mechanisms for the formation of both amorphous and crystalline silicate grains, and that processing can lead from one to the other and vice versa, it is possible that many of the types of grains represented in Fig.~\ref{fig:XtalC} are found in space. \subsection{Potential application to astronomy} \label{astroapp} AGB stars present an interesting environment in which to study dust formation because they are relatively benign and the stability of CO molecules simplifies the chemistry. Whereas many AGB stars exhibit the $\sim$10\,$\mu$m feature, this feature varies in peak position and shape from object to object, and even temporally within a single object \citep{speck00,sloan03,monnier98}. \citet{speck00} showed that several Galactic objects show silicate features peaking at wavelengths as short as 9.2\,$\mu$m, while \citet{swt08} found a very red 10\,$\mu$m absorption feature in the spectrum of an obscured AGB star. While the synthetic spectra are commonly used to model the observed features, they rarely match the details of peak position and width. The new laboratory data presented here provide a framework to interpret the observed variations in AGB star spectral features. For example, the red feature seen by \citet{swt08} can be best matched by something forsteritic, while the observed 9.2\,$\mu$m feature needs very silica-rich dust. 
Recent studies by \citet{chiar06,chiar07}, \citet{mcclure08} and \citet{vanbreeman} have shown that the shape and peak position of the classic 10\,$\mu$m interstellar silicate absorption feature vary depending on the line of sight. Whereas the $\sim$10\,$\mu$m feature remains the same for all diffuse lines of sight, its shape and position vary once the line of sight includes a molecular cloud. This has been attributed to a combination of grain agglomeration and ice mantle formation within the molecular clouds ({\em op.\ cit.}). The diffuse ISM is well characterized by \citeauthor{dl84}, but for molecular clouds this $\sim$10\,$\mu$m feature is too broad \citep{vanbreeman}. The competing hypotheses explaining the molecular cloud spectra could be tested by comparison with the glass spectra presented herein. The importance of dust to astrophysical processes cannot be overstated. For example, observations of high redshift ($z>7$) galaxies and quasars demonstrate that there was copious dust produced by the time the Universe was $\sim$700 million years old \citep[e.g.,][]{sugerman06,dwek07}. Furthermore, in Active Galactic Nuclei (AGN), the observed silicate absorption feature is shifted to peak at a longer wavelength, and is broader than that observed in our own Galaxy. This spectral shift has been attributed to calcium-aluminum-rich silicates \citep{jaffe04} or porous particles \citep{li08}. Moreover, data from Spitzer (SAGE-IRS Legacy program) show a number of both AGB and YSO sources with remarkably blue silicate features. Given that Mg, Fe and Si are formed in different nucleosynthetic processes, the abundance ratios of these elements do not necessarily scale with metallicity. Disentangling dust formation mechanisms through observations of dust requires the optical properties of a range of silicate samples of varying Mg-, Fe- and Si-contents and other components, as provided here. The work presented here is a subset of a larger study, and further sample compositions and structures will be presented in the near future. \section{Conclusions} We have presented new laboratory spectra of astrophysically relevant silicate glasses and compared them to existing data in the literature. We have shown that: (1) ``disordered'' is not synonymous with glassy; in addition to structural disorder, porosity also affects spectral features. (2) Sample preparation and characterization are important. (3) We confirm the general trend of decreasing peak wavelength with increasing polymerization for the $\sim$10\,$\mu$m feature. However, the scatter about this overall trend indicates that other compositional factors must be important. (4) Nothing quite matches the diffuse ISM in peak position and breadth. Spectral parameters of disordered silicates are sensitive to composition and sample synthesis techniques, which reflect degree of disorder, porosity, oxidation states and water content. To understand dust formation we must disentangle these parameters through further systematic study of major compositional series using high resolution spectroscopy on thoroughly characterized samples. Further studies of these parameters will follow. These new data can be used for the interpretation of more esoteric environments, e.g. high redshift galaxies and novae. \acknowledgements This work is supported by NSF AST-0908302 (A.K.S. and A.G.W.) and NSF AST-0908309 (A.M.H.) and by NSF CAREER AST-0642991 (for A.K.S.) and NSF CAREER EAR-0748411 (for A.G.W.). We would like to thank Bryson Zullig and Josh Tartar for their help with this work.
\section{General Picture} \label{intro} Most elliptical galaxies are poor in interstellar gas. Also, elliptical galaxies invariably contain central massive black holes (BHs), and there exists a tight relationship between the characteristic stellar velocity dispersion $\sigma$ and the BH mass $M_\mathrm{BH}$ \cite{fm00,tgb+02}, and between $M_\mathrm{BH}$ and the host spheroid mass in stars, $M_*$ \cite{mtr+98}. Are these two facts related? Here we focus on a scenario in which the mass of the central BH grows within gas-rich elliptical progenitors until star formation has reduced the gas fraction in the central regions to of order 1\% of the stellar mass. Then radiative feedback during episodes when the BH luminosity approaches its Eddington limit drives much of the gas out of the galaxy, limiting both future growth of the BH and future star formation to low levels. Many previous works have recognized the importance of feedback as a key ingredient of the mutual BH and galaxy evolution \cite{bt95,co97,sr98,f99,co01,bs01,cv01,cv02,k03,wl03,hco04,gds+04,mqt04}. What is new about this work is the stress on one component of the problem that has had relatively little attention: the radiative output of the central BH is not conjectural -- it must have occurred -- and the high energy component of that radiative output will have a dramatic and calculable effect in heating the gas in ellipticals. Using the average quasar spectral output derived in \cite{sos04}, we show below and in more detail in \cite{soc+04} that the limit on the central BH mass produced by the above argument coincides accurately with the observed $M_\mathrm{BH}$--$\sigma$ relation. Not only the slope, but also the small normalization factor is nicely reproduced. The present work is complementary to \cite{co97,co01} in that, while it does not attempt to model the complex flaring behavior of an accreting BH with an efficient hydrodynamical code, it does do a far more precise job of specifying the input spectrum and the detailed atomic physics required to understand the interaction between that spectrum and the ambient interstellar gas in elliptical galaxies. \section{Radiative Heating of ISM in Spheroids} \label{heatfar} Below we assess the conditions required for the central BH radiation to significantly heat the ISM over a substantial volume of the galaxy. In this section we shall {\sl assume} that the central BH has a mass as given by the observed $M_\mathrm{BH}$--$\sigma$ relation for local ellipticals and bulges \cite{tgb+02}: \begin{equation} M_\mathrm{BH}=1.5\times 10^8M_\odot\left(\frac{\sigma}{200~{\rm km}~{\rm s}^{-1}}\right)^4. \label{sigma_mbh} \end{equation} This assumption will be dropped in \S\ref{origin}, where we {\sl predict} the $M_\mathrm{BH}$--$\sigma$ relation. In \cite{sos04} we computed, for the average quasar spectrum, the equilibrium temperature $T_\mathrm{eq}$ (at which heating due to Compton scattering and photoionization balances cooling due to line and continuum emission) of gas of cosmic chemical composition as a function of the ionization parameter $\xi\equiv L/nr^2$, where $L$ is the BH bolometric luminosity. In the range $3\times 10^4$--$10^7$\,K, \begin{equation} T_\mathrm{eq}(\xi)\simeq 2\times 10^2\xi\,{\rm K}, \label{tstat_eq} \end{equation} while at $\xi\ll 10^2$ and $\xi\gg 10^5$, $T_\mathrm{eq}\approx 10^4$\,K and $2\times 10^7$\,K, respectively. 
On the other hand, the galactic virial temperature is given by \begin{equation} T_\mathrm{vir}\simeq 3.0\times 10^6\,{\rm K} \left(\frac{\sigma}{200~{\rm km}~{\rm s}^{-1}}\right)^2. \label{tvir} \end{equation} We can then find the critical density $n_{\rm crit}$ defined by \begin{equation} T_\mathrm{eq}(L/n_{\rm crit} r^2)=T_\mathrm{vir} \label{ncrit} \end{equation} as a function of distance $r$ from the BH. Gas with $n<n_{\rm crit}(r)$ will be heated above $T_\mathrm{vir}$ and expelled from the galaxy. We show in Fig.~\ref{nr_tvir} the resulting $(r,n)$ diagrams for a small and a large BH/galaxy. \begin{figure} \centering \includegraphics[bb=10 170 570 680, width=0.68\columnwidth]{sazonovF1.ps} \caption{The $(r,n)$ plane for a galaxy with $\sigma=180$\,km\,s$^{-1}$ ($T_\mathrm{vir}=2.4\times 10^6$\,K, $M_\mathrm{BH}=10^8M_\odot$, upper panel), and with $\sigma=320$\,km\,s$^{-1}$ ($T_\mathrm{vir}=7.7\times 10^6$\,K, $M_\mathrm{BH}=10^9M_\odot$, lower panel). In the dashed area, gas can be heated above $T_\mathrm{vir}$ by radiation from the central BH emitting at the Eddington luminosity. The upper boundary of this area scales linearly with luminosity. Vertical boundaries are $R_\mathrm{C}$, $R_1$, $R_2$ and $R_{\rm e}$. } \label{nr_tvir} \end{figure} In reality, provided that $T_\mathrm{eq}>T_\mathrm{vir}$, significant heating will take place only out to a certain distance that depends on the luminosity and duration of the quasar outburst. Since the BH releases via accretion a finite total amount of energy, $\epsilon M_\mathrm{BH} c^2$, there is a characteristic limiting distance: \begin{equation} R_\mathrm{C}=\left(\frac{\sigma_\mathrm{T}\epsilon M_\mathrm{BH}}{3\pi m_\mathrm{e}}\right)^{1/2} = 400\,{\rm pc}\left(\frac{\epsilon}{0.1}\right)^{1/2} \left(\frac{M_\mathrm{BH}}{10^8M_\odot}\right)^{1/2}. \label{rc} \end{equation} Inside this radius, a low density, fully photoionized gas will be heated to the Compton temperature $T_\mathrm{C}\approx 2$\,keV characteristic of the quasar spectral output. More relevant for the problem at hand is the distance out to which low density gas will be Compton heated to $T\gtrsim T_\mathrm{vir}$: \begin{equation} R_1=R_\mathrm{C}\left(\frac{T_\mathrm{C}}{T_\mathrm{vir}}\right)^{1/2}= 1,300\,{\rm pc}\left(\frac{\epsilon}{0.1}\right)^{1/2} \frac{\sigma}{200~{\rm km}~{\rm s}^{-1}}. \label{r1} \end{equation} Yet another characteristic radius is that out to which gas of critical density $n_{\rm crit}$ will be heated to $T\gtrsim T_\mathrm{vir}$ via photoionization and Compton scattering: \begin{equation} R_2=R_1 [\Gamma(n_{\rm crit})/\Gamma_{\rm C}]^{1/2}, \label{r2} \end{equation} where $\Gamma_{\rm C}$ and $\Gamma$ are the Compton and total heating rates, respectively. Depending on the gas density ($0<n<n_{\rm crit}$), the outer boundary of the ``blowout region'' will be located somewhere between $R_1$ and $R_2$. The size of the heating zone can be compared with the galaxy effective radius \begin{equation} R_{\rm e}\sim 4,000\,{\rm pc} \left(\frac{\sigma}{200~{\rm km}~{\rm s}^{-1}}\right)^2. \label{reff} \end{equation} The different characteristic distances defined above are shown as a function of $M_\mathrm{BH}$ in Fig.~\ref{mbh_radii}. One can see that a BH of mass $<10^7M_\odot$ should be able to unbind the ISM out to several $R_{\rm e}$. In the case of more massive BHs/galaxies with $M_\mathrm{BH}\sim 10^8$--$10^9M_\odot$, the heating will be localized to the innermost $\sim$0.3--0.5$R_{\rm e}$. 
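The following short Python sketch simply evaluates the scaling relations above (with $\sigma$ in km\,s$^{-1}$, masses in $M_\odot$ and radii in pc) for the two cases shown in Fig.~\ref{nr_tvir}; it introduces no new physics, and the function names are ours.
\begin{verbatim}
# Characteristic scales from the fitting formulae above
def m_bh(sigma):              # observed M_BH-sigma relation
    return 1.5e8 * (sigma / 200.0)**4

def t_vir(sigma):             # galactic virial temperature [K]
    return 3.0e6 * (sigma / 200.0)**2

def r_c(m, eps=0.1):          # Compton-heating radius R_C [pc]
    return 400.0 * (eps / 0.1)**0.5 * (m / 1e8)**0.5

def r_1(sigma, eps=0.1):      # radius of Compton heating to T_vir [pc]
    return 1300.0 * (eps / 0.1)**0.5 * (sigma / 200.0)

def r_e(sigma):               # galaxy effective radius [pc]
    return 4000.0 * (sigma / 200.0)**2

for sigma in (180.0, 320.0):  # the two cases of Fig. 1
    m = m_bh(sigma)
    print(sigma, '%.1e' % m, '%.1e' % t_vir(sigma),
          round(r_c(m)), round(r_1(sigma)), round(r_e(sigma)))
\end{verbatim}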
\begin{figure} \centering \includegraphics[bb=10 170 570 680, width=0.65\columnwidth]{sazonovF2.ps} \caption{Different heating radii: $R_\mathrm{C}$ (dotted line), $R_1$ (short-dashed line), and $R_2$ (long-dashed line), and the galactic effective radius (solid line), as a function of $M_\mathrm{BH}$. } \label{mbh_radii} \end{figure} \section{Possible Origin of the $M_\mathrm{BH}$--$\sigma$ Relation} \label{origin} We now consider the following general idea. Before the BH grows to a certain critical mass, $M_\mathrm{\rm BH, crit}$, its radiation will be unable to efficiently heat the ambient gas, and the BH will accrete gas efficiently. Once the BH has grown to $M_\mathrm{\rm BH, crit}$, its radiation will heat and expel a substantial amount of gas from the central regions. Feeding of the BH will then become self-regulated on a time scale of order the cooling time of the low density gas. Subsequent central activity will be characterized by a very small duty cycle ($\sim$0.001), as predicted by hydrodynamical simulations \cite{co97,co01} and suggested by observations \cite{hco04}. BH growth will be essentially terminated. Suppose that the galaxy density distribution is that of a singular isothermal sphere, with the gas density following the total density: \begin{equation} \rho_{\rm gas}(r)=\frac{M_\mathrm{gas}}{M}\frac{\sigma^2}{2\pi Gr^2}. \label{rho_gas} \end{equation} Here $M_\mathrm{gas}$ and $M$ are the gas mass and total mass within the region affected by radiative heating. The size of the latter is uncertain but is less than a few kpc (see \S\ref{heatfar}), so that $M$ is dominated by stars rather than by dark matter. Radiation from the central BH can heat the ambient gas up to \begin{equation} T_\mathrm{eq}\approx 6.5\times 10^3\,{\rm K} \frac{L}{L_\mathrm{Edd}}\left(\frac{M_\mathrm{gas}}{M}\right)^{-1}\frac{M_\mathrm{BH}}{10^8M_\odot} \left(\frac{200\,{\rm km~s}^{-1}}{\sigma}\right)^2, \label{tstat} \end{equation} this approximate relation being valid in the range $3\times 10^4$--$10^7$\,K. Remarkably, $T_\mathrm{eq}$ does not depend on distance for the adopted $r^{-2}$ density distribution. We then associate the transition from rapid BH growth to slow, feedback-limited BH growth with the critical condition \begin{equation} T_\mathrm{eq}=\eta_{\rm esc}T_\mathrm{vir}, \label{tstat_tvir} \end{equation} where $\eta_{\rm esc}\gtrsim 1$ and $T_\mathrm{vir}$ is given by (\ref{tvir}). Once heated to $T_\mathrm{eq}\gtrsim T_\mathrm{vir}$, the gas will stop feeding the BH. The condition (\ref{tstat_tvir}) will be met for \begin{equation} M_\mathrm{\rm BH, crit}=4.6\times 10^{10}M_\odot\eta_{\rm esc}\left(\frac{\sigma} {200\,{\rm km~s}^{-1}}\right)^4\frac{L_\mathrm{Edd}}{L}\frac{M_\mathrm{gas}}{M}. \label{mcrit} \end{equation} Therefore, for fixed values of $\eta_{\rm esc}$, $L/L_\mathrm{Edd}$ and $M_\mathrm{gas}/M$ we expect $M_\mathrm{\rm BH, crit}\propto\sigma^4$, similarly to the observed $M_\mathrm{BH}$--$\sigma$ relation. Equally important is the normalization of the $M_\mathrm{BH}$--$\sigma$ relation. By comparing (\ref{mcrit}) with (\ref{sigma_mbh}) we find that the observed correlation will be established if \begin{equation} M_\mathrm{gas}/M=3\times 10^{-3}\eta_{\rm esc}^{-1}L/L_\mathrm{Edd}. \label{mgas_crit} \end{equation} The gas-to-stars ratio is thus required to be low and approximately constant for spheroids of different mass at a certain stage of their evolution. 
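To make the normalization argument explicit, the sketch below (Python; the parameter defaults are simply the fiducial choices quoted in the text) evaluates expression (\ref{mcrit}) and compares it with the observed relation (\ref{sigma_mbh}).
\begin{verbatim}
# Critical BH mass from T_eq = eta_esc * T_vir; sigma in km/s, masses in Msun
def m_bh_crit(sigma, eta_esc=1.0, l_edd_ratio=1.0, mgas_over_m=3.0e-3):
    return 4.6e10 * eta_esc * (sigma / 200.0)**4 \
           * mgas_over_m / l_edd_ratio

def m_bh_observed(sigma):
    return 1.5e8 * (sigma / 200.0)**4

for sigma in (100.0, 200.0, 300.0):
    print(sigma, '%.2e' % m_bh_crit(sigma), '%.2e' % m_bh_observed(sigma))
# With M_gas/M = 3e-3, eta_esc = 1 and L = L_Edd, the predicted normalization
# (1.4e8 Msun at sigma = 200 km/s) is close to the observed 1.5e8 Msun.
\end{verbatim}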
As for the Eddington ratio, it is reasonable to expect $L/L_\mathrm{Edd}\sim$0.1--1 during quasar outbursts. \begin{figure} \centering \includegraphics[bb=10 170 570 680, width=0.65\columnwidth]{sazonovF3.ps} \caption{Thick solid line shows the predicted $M_\mathrm{BH}$--$\sigma$ correlation resulting from heating of the ISM by the radiation from the central BH assuming $M_\mathrm{gas}(R_{\rm e})/M=0.003$ and $\eta_{\rm esc}=1$. Thin solid line corresponds to $M_\mathrm{gas}(R_{\rm e})/M=0.0015$ and $\eta_{\rm esc}=2$. Dashed line is the observed $M_\mathrm{BH}\propto\sigma^4$ correlation in the range $10^6$--a few $10^9M_\odot$, extrapolated to lower and higher BH masses. Dotted lines are $M_\mathrm{BH}\propto\sigma^3$ and $M_\mathrm{BH}\propto\sigma^5$ laws. } \label{s_mbh} \end{figure} The approximately linear $T_\mathrm{eq}(\xi)$ dependence [see (\ref{tstat_eq})] was crucial to the above argument leading to the $M_\mathrm{\rm BH, crit}\propto\sigma^4$ result. However, the $T_\mathrm{eq}(\xi)$ function becomes nonlinear outside the range $3\times 10^4\,{\rm K}<T_\mathrm{eq}<10^7\,{\rm K}$ \cite{sos04}. In Fig.~\ref{s_mbh} we show the predicted correlation between $M_\mathrm{\rm BH, crit}$ and $\sigma$ for $L/L_\mathrm{Edd}=1$ and $M_\mathrm{gas}/M=3\times 10^{-3}$. It can be seen that the $M_\mathrm{BH}\propto\sigma^4$ behavior is expected to break down for $M_\mathrm{BH}<10^4M_\odot$ and for $M_\mathrm{BH}\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$~a few $10^9M_\odot$. It is perhaps interesting that the range of masses shown in Fig.~\ref{s_mbh} for which $M_\mathrm{BH}\propto\sigma^4$ is obtained from considerations of atomic physics (and the observed AGN spectra) corresponds closely with the range of masses for which this power law provides a good fit to the observations. Exploring the $M_\mathrm{BH}$--$\sigma$ relation observationally near $10^9M_\odot$ would be a sensitive test of the importance of radiative feedback. \section{Detailed Modelling of the BH-Galaxy Co-evolution} In \cite{soc+04,cos+04,oc04} we addressed in a more quantitative way the BH growth in the context of the parent galaxy evolution. We adopted a physically-motivated one-zone model, taking into account the mass and energy return from the evolving stellar population. This model predicts that, after an initial ``cold'' phase dominated by gas infall, once the gas density becomes sufficiently low the gas heating dominates and the galaxy switches to a ``hot'' solution. The gas mass/stellar mass ratio at that epoch ($\sim$0.003) is remarkably close to the value inferred above from the argument leading to the right $M_\mathrm{BH}$--$\sigma$ relation. Other predictions of the toy model are also in satisfactory agreement with observations. The ``cold'' phase would probably be identified observationally with the Lyman Break and SCUBA galaxies, while the ``hot'' phase with normal, local ellipticals. A proper investigation of the importance of radiative heating on the BH/galaxy co-evolution, based on hydrodynamical numerical simulations, is now in progress.
\section{Introduction} Studies involving human subjects are becoming increasingly common, and are often a required element in many types of research, from medical to social and everything in between. Best practices in designing and running such studies dictate that subject selection must be fair (in terms of sharing the benefits and burdens of the research), and researchers should not exclude subjects based on characteristics such as age, gender, race, religion, sexual orientation, etc.\cite{miser2005educational,pech2007understanding}. The recent \emph{"Black Lives Matter"} movement has demonstrated vividly the difference between \emph{not being racist} and \emph{being anti-racist}. In other words, the need to go from not excluding people deliberately (e.g., based on race) to being \emph{actively inclusive}. One early example of unintended consequences when inclusivity is not a design principle in a data-driven project is the Boston pothole detection mobile app. The app automatically identified potholes by detecting the bumps of cars driving over potholes (while the app was running on a smartphone in the car). The resulting maps showed entire neighborhoods virtually pothole-free; unsurprisingly, these were the more impoverished neighborhoods, where not enough people had smartphones at the time~\cite{harvardbigdata}. The driving principle of this paper is \emph{how to guarantee inclusiveness for subject selection in user studies}. Traditionally, subject selection involves randomized sampling techniques. However, such techniques assume that it is perfectly acceptable for the characteristics of the selected population to align with the ones of the "input" population. For small-scale studies, that is resolved by manual intervention by the researchers running the study. For larger-scale studies, this is often unresolved and leads to instances of the too-many-college-kids-as-subjects problem, where studies performed at or near university campus settings rely primarily on subjects not representative of the broader population. This phenomenon, which is due to misrepresentation in data, is known as \emph{Selection Bias} and can lead to unreliable analyses and unfair decisions~\cite{ntoutsi2020bias,cortes2008sample}. \smallskip \noindent\textbf{Motivating Example \#1:} \input{2-fig_tex/motivating_example1.tex} Let us assume we want to select subjects for a field experiment to study the impact of incentives in changing people's public transportation habits. This example is the actual motivation behind this work and a need we have in the PittSmartLiving project (\url{https://PittSmartLiving.org}), whose goal is to encourage pro-social transportation behavior by providing real-time information and incentives to bus riders. For example, a field experiment subject will see a notification that says: \emph{"the bus you are waiting for will be full, if you take the following one, the coffee shop around the corner will give you a \$2 discount on coffee"}. Obviously, for such a study to be impactful, we need to make sure we have a representative sample population when compared to that of the entire city. When we compare the characteristics of the participants of a prior transportation survey we performed to that of the population of Pittsburgh (as summarized in the census), we saw a big discrepancy. The two pie charts in Figure \ref{fig:motiv1} demonstrate the breakdown by gender in the Pittsburgh census\cite{censusreporter} vs. in our survey (Section \ref{lab:data}). 
While, in the census data, the percentage of female subjects accounted for over half the population, in the survey data, the number of females is more than twice the number of males, and there is also a tiny proportion (0.75\%) listed as "other gender." Furthermore, bar charts in Figure \ref{fig:motiv2} compare the age distribution in the Pittsburgh census vs. our survey data. The survey data is entirely biased and not representative of the whole population. \input{2-fig_tex/motivating_example2.tex} So if we are to create a sample out of the people surveyed that would be representative of the Pittsburgh population, \emph{we would need to match the distribution of characteristics of the selected subjects to that of the general population. } And this is the exact point where traditional sampling-based subject selection techniques would fail, and we need new methods. \smallskip \noindent\textbf{Motivating Example \#2:} As a second motivating example, consider a human subjects study for Type II Diabetes. The researchers running the study want to include people who have the disease and create a cohort with a balanced mix of male, female, plus people that do not identify as male or female, i.e., "other gender." Additionally, they would like to primarily have people in their 30s and 40s participate in the study, plus a few in their 20s and 50s. If we try to cast the above requirements into a set of desired/ideal distributions for the different characteristics, we run into the problem that it is difficult to determine specific percentages (like one would get from the census). For example, how much should the researchers allocate to the different age groups? 35\% to each of the 30s and 40s vs. 15\% to the 20s and 50s? How about 40\% to the first group and only 10\% to the second one? A fitting solution here should support \textbf{defining ranges of ideal percentages} so that the researchers, for example, can specify 30-40\% for the first group and 10-20\% for the second group. And such ranges could start at 0\%, to account, for example, for the possibility of not having a subject identifying as "other gender." Another interesting desideratum is the ability to specify generalized constraints on the subject characteristics. For example, for our fictional diabetes study, the researchers may also want to make sure all neighborhoods from a city are equally included in the subject pool. Although this constraint can be trivially specified with a function, it cannot be part of any known sampling technique. \smallskip Traditional subject selection methods such as Stratified Random Sampling either do not support the situations from our two motivating examples or provide expensive solutions, especially when dealing with multiple data characteristics~\cite{taherdoost2016sampling}. There are some biased sampling techniques such as Disproportionate Stratified Random Sampling by which some of the groups will be over-represented, and some will be under-represented~\cite{haque2010sampling}. However, these will not work for our second motivating example. \smallskip \noindent \textbf{Contributions:} In this work, we propose a novel approach that not only resolves the problem of multi-characteristic subject selection from biased datasets but also supports different variations of the problem, including range-based and constraint-based (generalized). To the best of our knowledge, our work is unique, and no work has ever addressed the same problem or proposed a solution for it. 
In particular, we make the following contributions: \begin{enumerate} \item We formulate the problem of multi-characteristic subject selection from biased datasets in the form of an optimization problem that minimizes the distance between the ideal and the final sampling distributions. Our proposed method works with different variations of the problem, including fixed, range-based, and generalized (with more constraints and objective functions). \item We propose an algorithm that computes the joint ideal percentage in multi-characteristic problems. \item We perform several experimental evaluations using data from real-world applications and compare our proposed method to three different baselines. Our results show that our proposed method outperforms the baselines by up to 90\% for some experiments. \end{enumerate} \section{Problem Statement} The main objective of this work is to develop a subject selection method that allocates the best possible sampling percentage to each subgroup of the population who shares similar characteristics. What we mean by "the best possible sampling percentage" is that the percentage of \emph{final selection} in each subgroup should be as close as possible to the percentage of \emph{ideal selection} in the same subgroup while satisfying a set of given constraints. In this work, we assume that the researcher knows the total sample size or has already obtained it based on the required precision, confidence levels, and population variability. The first step before subject selection is dividing the population into homogeneous subgroups called \emph{"strata"} which are collectively exhaustive and mutually exclusive\cite{wiki2}. For example, one stratum could be (male, 30-39) and another stratum could be (female, 60-69). Since we want to include several data characteristics in the subject selection problem, the ideal percentage given for characteristic groups (e.g. 40\% male and 60\% female) cannot be used for the strata. Therefore, we propose an algorithm (see section \ref{lable:jointideal}) that calculates the \emph{"joint ideal percentage"} for each stratum using the provided ideal percentages for the characteristic groups that are involved in that stratum. We address three variations of the stated problem as follows:\\ \begin{itemize} \item \textbf{Fixed} (e.g., Motivating Example \#1) In this variation, we are provided with a fixed ideal percentage for each group of data characteristics. For example, male: 48\%\ and female: 52\%\ . \\ \item \textbf{Range-based} (e.g., Motivating Example \#2) In this variation, some or all of the ideal percentages for characteristic groups are range-based. For instance, the percentage of people between 18-21 years old can be between 10\%\ and 20\%. \\ \item \textbf{Generalized} (e.g., Motivating Example \#2) This variation is, in fact, an extension of the range-based variation where a set of constraints or objective functions, on the characteristic groups, need to be satisfied. For example, the number of female subjects must be twice as many as the number of male subjects or the payments made to subjects must be evenly distributed across all states. \end{itemize} \section{Baseline Algorithms} In this section, we present the evaluation metric, our proposed algorithm for computing the joint ideal percentage, the variations of the baselines and the baseline methods. 
\subsection{Evaluation Metric} As mentioned earlier, the purpose of this work is to allocate a percentage to each stratum such that the distance between the joint ideal selection/distribution and the allocated final selection/distribution becomes as small as possible. We considered different distance functions, which included \emph{Cosine distance}, \emph{Euclidean distance}, and \emph{Kullback–Leibler divergence}\cite{kldivergence}. We settled on using the \emph{Cosine Distance} as our evaluation metric to measure the distance between these two vectors in a multi-dimensional space. The Cosine Distance is easier and faster to implement; it is also easier to understand compared to the other metrics while their results are quite similar. The Cosine Distance (equation \ref{eq:distance}) can be computed using Cosine Similarity (equation \ref{eq:cosine}) that captures the cosine of the angle between the two vectors\cite{wiki1}. The space dimension D (or the number of strata) for these two vectors can be computed by equation \ref{eq:dimension}. \input{5-equations/cosdistance.tex} \input{5-equations/cosSim.tex} where $\overrightarrow{V_{F}} = (F_{1}, F_{2}, ...\;, F_{D})$ and $\overrightarrow{V_{JI}} = (JI_{1}, JI_{2}, ...\;, JI_{D})$ are the final and joint ideal percentage vectors respectively. The components of vector $\overrightarrow{V_{JI}}$ are joint ideal percentages for the strata that come from algorithm \ref{alg:jointideal}. \input{5-equations/dimension.tex} where $\overrightarrow{V_{F}}$ and $\overrightarrow{V_{JI}}$ are the final and joint ideal vectors, $G_k$ is the number of groups in data characteristic k and C is the total number of data characteristics. \subsection{Joint Ideal Percentage} \label{lable:jointideal} In this section, we introduce Algorithm \ref{alg:jointideal} that calculates the \emph{"joint ideal percentage"} for each stratum. In this algorithm, $I_{g}^{c}$ represents the given ideal percentage for group g in characteristic c (e.g. 52\%\ female) and $JI_h$ is the joint ideal percentage in stratum h that is computed by multiplying the ideal percentages of all the characteristic groups involved in stratum h. In this paper, we assume that the data characteristics are independent. \input{5-equations/jointideal.tex} \subsection{Baseline Methods}\label{lab:baselines} In this section, we describe the three different baseline methods that we compare with our proposed method in the experimental evaluation.\\ \noindent\textbf{Stratified Random Sampling}: Stratified Random Sampling (SRS) is a probability sampling technique in which the researcher partitions the entire population into strata, then randomly selects the final subjects from the strata\cite{taherdoost2016sampling,etikan2017sampling}. There are two types of stratified sampling techniques: Proportionate and Disproportionate. In \emph{Proportionate Stratified Random Sampling (PSRS)} the sample size used per stratum is proportional to the size of that stratum. The sample size in each stratum $n_{h}$ is determined by the following equation: \input{5-equations/propSRS.tex} where $N_{h}$ is the population size for stratum h, N is the total population size and n is the total sample size\cite{wiki2,haque2010sampling}. If we remove n from this equation and multiply the remaining fraction by 100, we get the final percentage in each stratum obtained from this baseline. With the disproportionate stratification, there is an intentional over-sampling of certain strata. 
In practice, disproportionate stratified sampling will only minimize sampling variance if the sampling fraction of each stratum is proportional to the standard deviation within that stratum. In other words, we should sample more from the more variable strata. This technique is called \emph{Optimum Stratified Random Sampling (OSRS)}\cite{wiki2}. We used a well-known method of optimum allocation called \emph{Neyman Allocation}\cite{neyman1992two,lavrakas2008encyclopedia} in our experiments with the following formula: \input{5-equations/neyman.tex} where $N_{h}$ is the population size for stratum h, $\sigma_{h}$ is the standard deviation of stratum h and n is the total sample size. In practice, we do not know $\sigma_{h}$ in advance, so we randomly draw a set of samples and use them to derive an estimate of $\sigma_{h}$ for each stratum\cite{mathew2013efficiency}. The percentages of the final selection for this baseline can be computed by dividing the Neyman formula by n and multiplying it by 100. \noindent\textbf{Rank Aggregation}: \emph{Rank Aggregation (RA)} is the process of merging multiple rankings of a set of subjects into a single ranking. There are two variations of rank aggregation: score-based and order-based. In the first category, the subjects in each ranking list are assigned scores and the rank aggregation function uses these scores to create a single ranking. In the second category, only the orders of the subjects are known and used by the rank aggregation function\cite{fox1994combination}. However, if we have the scores we can either use score-based algorithms (e.g. Fagin’s algorithm\cite{fagin2003optimal}) or we can convert the scores to the orders and then apply order-based algorithms. According to our experiments, order-based algorithms return better results for our problem. In order to score the subjects, first, we define a weight function (equation \ref{eq:weight}) which is the ratio of the ideal percentage to the initial percentage of the characteristic group that each subject belongs to. As a result, a list of weights corresponding to each data characteristic is created and can be converted to the orders (the higher the weight, the lower the order). The list of orders is then used as input to an order-based rank aggregation method which produces a single ranked list with minimum total disagreement among the input lists. \input{5-equations/weight.tex} In this function, $W_{s}^c$ is the weight of subject s for data characteristic c, $I_{g}^c$ is the given ideal percentage for group g in characteristic c, $init_{g}^c$ is the initial percentage of group g in characteristic c which is obtained based on the ratio of the number of subjects in group g to the total population size. To measure the disagreement between the input lists, the rank aggregation method can use either the \emph{Kendall Tau Distance} or the \emph{Spearman Footrule Distance}. The aggregation obtained by optimizing the Kendall distance is called \emph{Kemeny optimal aggregation} which is NP-Hard and the aggregation obtained by optimizing the Footrule distance is called \emph{Footrule optimal aggregation} which is a polynomial-time algorithm\cite{dwork2001rank,sculley2007rank}. The second algorithm is more efficient and so was selected as our rank aggregation method. After applying the Footrule optimal aggregation on the lists of the ordered subjects, we select the top-n (n = total sample size) subjects from the final ranked list and then compute the percentage of the final selection in each stratum. 
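To make the quantities defined in this section concrete, the following is a minimal Python sketch (illustrative only; the group names and percentages are hypothetical and this is not our actual code-base) of the joint ideal percentages of algorithm \ref{alg:jointideal}, the cosine distance of equations \ref{eq:distance} and \ref{eq:cosine}, and the per-characteristic weights of equation \ref{eq:weight}.

\begin{verbatim}
from itertools import product
import numpy as np

def joint_ideal(ideal_per_char):
    # ideal_per_char: one dict per characteristic, mapping group -> ideal fraction.
    # Assumes independent characteristics, so the joint ideal percentage of a
    # stratum is the product of the ideal percentages of its groups.
    strata = {}
    for combo in product(*(d.items() for d in ideal_per_char)):
        stratum = tuple(g for g, _ in combo)
        strata[stratum] = float(np.prod([p for _, p in combo]))
    return strata

def cosine_distance(v_final, v_joint_ideal):
    f, ji = np.asarray(v_final, float), np.asarray(v_joint_ideal, float)
    return 1.0 - f.dot(ji) / (np.linalg.norm(f) * np.linalg.norm(ji))

def weights(ideal, initial):
    # per-characteristic weight: ideal percentage / initial percentage of a group
    return {g: ideal[g] / initial[g] for g in ideal}

ideal = [{"male": 0.48, "female": 0.52},
         {"18-39": 0.40, "40-64": 0.40, "65+": 0.20}]
ji = joint_ideal(ideal)            # 6 strata, fractions sum to 1
print(ji[("female", "18-39")])     # 0.52 * 0.40 = 0.208
\end{verbatim}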
\noindent\textbf{Weighted Random Sampling}: \emph{Random Sampling (RS)} without replacement is the selection of m distinct random subjects out of a population of size n. If the probability of selection of each subject is the same as the others', the problem is called \emph{Uniform Random Sampling}. However, in our problem we want each subject to have a different probability of being selected (based on the stratum it belongs to), which is not the case for uniform random sampling. Instead, we should use \emph{Weighted Random Sampling (WRS)}, in which subjects are weighted and the probability of selecting each subject is defined by its relative weight\cite{efraimidis2006weighted}. If in the subject selection problem we only address one data characteristic, the weight of each subject can be defined by the function we proposed in equation \ref{eq:weight}. However, in most real applications we have to deal with several characteristics involved in each stratum, so we need to define a \emph{"joint weight"} function for each stratum (equation \ref{eq:jointweight}). Since each subject belongs to only one stratum, the joint weight of each subject will be equal to the weight of the stratum that subject belongs to. \input{5-equations/jointweight.tex} In this equation, $JI_{h}$ is the joint ideal percentage for stratum h (obtained by algorithm \ref{alg:jointideal}) and $JInit_{h}$ is the joint initial percentage obtained by the "joint frequency" concept. In other words, $JInit_{h}$ is the number of subjects belonging to stratum h divided by the total population size. After calculating the joint weight, the probability of each subject being selected can be defined as the product of its weight and its uniform probability: \input{5-equations/WRSprob.tex} where $JW_s$ is the joint weight assigned to subject s in stratum h and 1/N is the probability of selecting a subject randomly from a population of size N where all subjects have an equal opportunity to be selected. Having the assigned probabilities, a random sampling function can select n subjects (n = total sample size) from the population, from which the final percentage for each stratum is computed.
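For illustration, the following is a minimal sketch of the WRS baseline just described (the inputs are hypothetical and the snippet is not our actual implementation): subjects are drawn without replacement with probabilities proportional to the joint weight of their stratum.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def weighted_random_sample(stratum_of_subject, joint_ideal, joint_initial, n):
    # joint weight of each subject: JI_h / JInit_h for its stratum h
    w = np.array([joint_ideal[h] / joint_initial[h] for h in stratum_of_subject])
    p = w / w.sum()          # normalized selection probabilities
    return rng.choice(len(stratum_of_subject), size=n, replace=False, p=p)

# toy population of 6 subjects spread over two strata
strata = ["A", "A", "A", "A", "B", "B"]
ji = {"A": 0.5, "B": 0.5}                # joint ideal fractions
jinit = {"A": 4 / 6, "B": 2 / 6}         # joint initial fractions from the data
print(weighted_random_sample(strata, ji, jinit, 4))
\end{verbatim}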
However, considering a set of constraints or adding new objective functions to the problem adds more complexity to the baseline methods, so they will not be able to handle this variation very well. \end{itemize} \section{Proposed Method} As mentioned earlier, our goal is to develop an approach that finds a final distribution that is as close as possible to the joint ideal distribution. The baseline methods we explained in the previous section introduce different ways to select the required subjects from a population but do not offer any mechanism for minimizing the distance between the final and the joint ideal percentage vectors. Furthermore, in range-based variations, baselines might end up with too many enumerations, even if there are only a few data characteristics or a few groups involved in the subject selection problem. For example, suppose a researcher wants to consider only three characteristics, namely age, gender, and race, in a subject selection problem such that each data characteristic consists of three groups and each group is provided with a range of length five. As previously mentioned, in the range-based problems, our baseline methods have to generate all the possible enumerations for all characteristics. For our particular example, this means we will end up with at least $5.12\times10^{10}$ combinations to try out (if we step through each range in increments of 0.25 points). In reality, the number of characteristics or characteristic groups can be even higher than what is given in this motivating example.\\ Moreover, in some real applications, a set of constraints or objective functions needs to be satisfied within the subject selection, but they add more complexity to our baseline methods, and so the baselines will not be able to meet all the given constraints in a simple way.\\ Considering all the above issues, we came up with an optimization-based approach that finds an optimal solution for minimizing the cosine distance between the joint ideal and final distributions, in the presence of a set of constraints. This approach is considerably faster and less expensive than the baseline methods, especially for the range-based and generalized variations. Our proposed method supports all the problem variations we introduced as follows: \begin{itemize} \item \textbf{Fixed}: in this variation of the optimization problem, the objective function is defined to minimize the cosine distance between the final percentage vector $\overrightarrow{V_{F}} = (F_{1}, F_{2}, ...\;, F_{D})$ and the joint ideal percentage vector $\overrightarrow{V_{JI}} = (JI_{1}, JI_{2}, ...\;, JI_{D})$. In formulation \ref{eq:optfixed}, the cosine distance is defined based on equations \ref{eq:distance} and \ref{eq:cosine}, $F_h$ (for h=1,2,...,D) represents the components of $\overrightarrow{V_{F}}$ that are defined as variables and $JI_h$ (for h=1,2,...,D) represents the components of $\overrightarrow{V_{JI}}$ that are defined as constants. The objective function needs to be optimized subject to a constraint on the sum of the final percentages (the sum of final percentages is equal to 1, not 100, because we converted each percentage to a fraction) and a boundary where the final number of selected subjects in each stratum ($F_h * n$: n is the total sample size) must be smaller than the initial number of subjects ($init_{h}$) in that stratum.
\smallskip \input{5-equations/optFixed.tex} \\ \item \textbf{Range-based}: since in this variation some or all of the provided ideal percentages are range-based, we consider the range-based ideal percentages as variables and the fixed ones as constants. However, in this optimization problem, we assumed that all of the ideal percentages are range-based, so they are all determined as variables and their given ranges (e.g. $[a_k,b_k]$) are specified as the boundaries of these variables (one boundary per group). There is also one constraint defined per characteristic in which the ideal percentages of its groups must add up to 1. The remainder of this formulation, including the variables for the final percentages, their constraint, and boundaries, is similar to the fixed variation in formulation \ref{eq:optfixed}. Formulation \ref{eq:optrange} represents the optimization problem for a range-based variation with C characteristics including $\{c_1, c_2, ..., c_C\}$ whose numbers of groups are $\{G_1, G_2, ..., G_C\}$. In this optimization problem, the objective function is constructed using equations \ref{eq:distance} and \ref{eq:cosine}, $F_h$ is the final percentage for stratum h, $JI_h$ is the joint ideal percentage for stratum h that is formulated in algorithm \ref{alg:jointideal} where $I_{g}^{c}$ is the ideal percentage for group g in characteristic c, $init_{h}$ is the initial number of subjects in stratum h and n is the total sample size. The optimal solution to this optimization model consists of the components of $\overrightarrow{V_{F}}$ and $\overrightarrow{V_{JI}}$, which are real numbers between 0 and 1, and the optimal objective function value is the cosine distance between the two vectors. \bigskip \input{5-equations/optRange.tex} \item\textbf{Generalized}: we can simply generalize the range-based variation in formulation \ref{eq:optrange} by adding the required equality and inequality constraints on the specific characteristic groups. For example, if the required constraint is to select twice as many female subjects as male subjects, we should add $I_{{female}}^{gender} - 2*I_{{male}}^{gender} = 0$ as a constraint to the specific version of formulation \ref{eq:optrange}. In terms of adding new objectives (e.g. motivating example \#2), the existing problem should address more than one objective function simultaneously and transform into a multi-objective optimization problem.\\ \end{itemize} \input{3-table_tex/data_info.tex} \noindent \textbf{Complexity:} \\ If we have k data characteristics and each characteristic has n groups, then the number of variables in the optimization-based approach (all variations) will be $O(n^k)$ and the number of constraints will be $O(1)$. In terms of time complexity, running the proposed optimization-based approach (for all experiments) on a modern laptop takes less than a second, while running the baseline methods takes a few seconds for the fixed variation and a few minutes for the range-based variations. \section{Experiments} \label{sec:experiments} For the experimental evaluation of this work, our code-base was written in Python 3. We used cosine distance as the evaluation metric and four different datasets to evaluate the baseline methods and our proposed approach. In each experiment, we compare the optimal objective function value achieved by the optimization-based method with the cosine distance obtained from the baselines.
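For concreteness, the following is a minimal sketch of the fixed variation of formulation \ref{eq:optfixed} (the stratum data are hypothetical and the snippet is a simplified stand-in rather than a copy of our implementation): the cosine distance is minimized subject to the final fractions summing to one and to the availability bound $F_h * n \leq init_{h}$.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

ji = np.array([0.25, 0.25, 0.25, 0.25])   # joint ideal fractions (hypothetical)
init = np.array([40, 10, 30, 20])         # available subjects per stratum (hypothetical)
n = 60                                    # total sample size

def cosine_distance(f):
    return 1.0 - f.dot(ji) / (np.linalg.norm(f) * np.linalg.norm(ji))

cons = [{"type": "eq", "fun": lambda f: f.sum() - 1.0}]   # fractions sum to 1
bnds = [(0.0, min(1.0, a / n)) for a in init]             # F_h * n <= init_h

res = minimize(cosine_distance, x0=init / init.sum(),
               method="SLSQP", bounds=bnds, constraints=cons)
print(res.x, res.fun)
\end{verbatim}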
It should be pointed out that we examined and compared several of the available solvers that were applicable to our optimization problems (constrained nonlinear programming with boundaries), including gradient-based (e.g., SLSQP) and gradient-free (e.g., COBYLA and Ant Colony) algorithms. Among these algorithms, SLSQP\cite{kraft1988software} (supported by the Python scipy.optimize.minimize package) was the fastest solver with the most desirable results. \subsection{Datasets (Table \ref{tab:datainfo}, Figure \ref{fig:kloan_initial})} \label{lab:data} We used four different datasets for evaluating the baselines and the optimization-based method. Pittsburgh survey data was collected through an online survey of transportation preferences of adults from a diverse population (in terms of age, income, etc.) in the broader Pittsburgh area. Kaggle Loan data\cite{kaggleloan} and Kaggle Titanic data\cite{kaggletitanic} are open datasets obtained from the Kaggle website, and Github Loan data was collected from Github. More information about these datasets and the configurations is presented in Table \ref{tab:datainfo}. \\ Furthermore, Figure \ref{fig:kloan_initial} illustrates the joint initial distribution for all the strata in Kaggle loan data (only the top 10 strata with the highest initial percentages are labeled). As one can see in this figure, the data is biased and not representative of the general population, since it is male-dominated. \input{2-fig_tex/initial_kaggle_loan} \input{2-fig_tex/merged_figures} \subsection{Baselines vs Optimization-based Method - One Characteristic (Figure~\ref{fig:oneVar_results})} {\noindent\bf Setup:} in this experiment, we evaluate our baselines and the optimization-based method using one of our datasets called "Kaggle Titanic", considering "age" as the data characteristic, along with a custom fixed ideal distribution given for the age groups. Since we only have one characteristic, the age groups themselves can be considered as the strata. The age groups include \{-10, 10-20, 20-30, 30-40, 40-50, 50-60, 60+\} and their assigned ideal distribution is \{40\%, 30\%, 20\%, 10\%, 0\%, 0\%, 0\%\} where young people have higher priority to be selected.\\ {\noindent\bf Results:} as one can see in Figure \ref{fig:oneVar_results}, the cosine distances obtained from the Proportionate Stratified Random Sampling (PSRS) and the Optimal Stratified Random Sampling (OSRS) are considerably higher than those of the other methods. We exclude the stratified sampling baselines from the subsequent experiments due to this large difference. According to this figure, the proposed method with the optimization-based (OP) approach improved the cosine distance by 97\% and 85\% with respect to the RA and WRS methods.\\ {\noindent\bf Take-away:} the optimization-based method performed better (up to 97\%) than the baselines in the experiment that was conducted with one data characteristic and a custom fixed ideal distribution. As a result, the stratified random sampling methods were not considered in the following experiments due to their unacceptable performance. \subsection{Baselines vs Optimization-based Method - Multi-characteristics (Figures~\ref{fig:pitt_fixed_range}, \ref{fig:pitt_panel} and Tables~\ref{tab:allsurveycomp}, \ref{tab:alldatacomp})} \noindent\textbf{1- Experiments with Pittsburgh Survey Data (Figures~\ref{fig:pitt_fixed_range}, \ref{fig:pitt_panel}):} {\noindent\bf Setup:} this experiment is done using the Pittsburgh survey data, focusing on two data characteristics: age and gender.
In the fixed variation, the ideal distributions for the age and gender groups are collected from the Pittsburgh census reporter\cite{censusreporter}. In the range-based variation, the ideal distribution for the age groups is fixed and taken from the census, while the gender groups are provided with a range-based ideal distribution. \input{2-fig_tex/pitt_panel.tex} {\noindent\bf Results:} Figure \ref{fig:pitt_fixed_range} shows the comparison between the two baselines (RA and WRS) and the optimization-based (OP) method for the two variations of the problem: fixed and range-based. The optimization-based method improves the distance by 18\% and 40\% in comparison to WRS, and by 71\% and 74\% in comparison to RA, for the fixed and range-based variations respectively. On the other hand, the experiments in Figure \ref{fig:pitt_panel} were conducted for the fixed variation only. The x-axis represents the age groups and the y-axis (the scale is 0\% to 35\%) shows the percentage of male/female in each gender group. As indicated, Figures \ref{fig:pitt_ideal_male} and \ref{fig:pitt_ideal_female} show the joint ideal distributions for the age groups separated by male and female. These values were computed by algorithm \ref{alg:jointideal} using the age and gender distributions from the Pittsburgh census. Figures \ref{fig:pitt_initial_male} and \ref{fig:pitt_initial_female} illustrate the joint initial distributions extracted from the survey data, whose distance from the joint ideal distribution (shown in Figures \ref{fig:pitt_ideal_male} and \ref{fig:pitt_ideal_female}) is 0.38. The total sum of the joint initial percentages in the two figures is not equal to 100\%. The reason is that there are other gender groups (neither male nor female) in the Pittsburgh survey data which are excluded from this distribution. In the next three rows of Figure \ref{fig:pitt_panel}, namely \ref{fig:pitt_RA_male} to \ref{fig:pitt_OP_female}, we present the final distributions as the outputs of our two baseline methods (RA and WRS) and the proposed method (OP). The smaller the cosine distance between a final distribution and the joint ideal distribution, the more similar the final distribution is to the joint ideal distribution. It is worth mentioning that we have about 20\% improvement (from 0.11 to 0.09) in the distance when using the optimization-based method compared to the WRS method and about 70\% improvement (from 0.31 to 0.09) in comparison with the RA method. \input{3-table_tex/all_survey_comparisons.tex} \input{3-table_tex/all_datasets_comparisons.tex} {\noindent\bf Take-away:} our proposed method outperformed the baseline methods (by up to 70\%) in the experiments conducted with the Pittsburgh survey data, with both fixed and range-based variations.\\ \noindent\textbf{2- Experiments with All Datasets (Tables~\ref{tab:allsurveycomp}, \ref{tab:alldatacomp}):} \\ {\noindent\bf Setup:} these experiments are done using all the datasets introduced in Table \ref{tab:datainfo}, considering the outlined configurations. For the fixed variations, the ideal distributions are census-based, uniform (all groups in a data characteristic have the same ideal percentage) or custom (defined by us).
For the range-based variations, we set the ideal distribution of the gender groups to be range-based and that of the other characteristic groups to be fixed.\\ {\noindent\bf Results:} Table \ref{tab:allsurveycomp} reports the cosine distances obtained from the baselines and the proposed method using the Pittsburgh survey data with fixed and range-based variations. For the fixed variation, we used three different census distributions from three different cities in the US, including Pittsburgh, New York, NY and College Park, MD\cite{censusreporter}. As can be observed, our optimization-based method always outperforms its competitors for both variations. In particular, our proposed method improves the cosine distance by up to 40\% compared to WRS and by up to 76\% compared to RA. Table \ref{tab:alldatacomp} presents the comparison between the cosine distances computed by the baseline methods and our proposed approach on three more datasets. The fixed ideal distributions that we used for these experiments are either uniform or customized. As the numbers in the table indicate, the optimization-based method performs better than the baselines and improves the cosine distance by up to 75\% compared to WRS and by up to 90\% compared to RA. {\noindent\bf Take-away:} our proposed method outperformed the baseline methods (by up to 90\%) for the experiments conducted with all the datasets for both fixed and range-based variations over different numbers of data characteristics. \subsection{Discussion} \input{3-table_tex/methods_comparison.tex} As previously stated, the baselines and the proposed method can all handle the fixed variation of the problem very well. However, the baseline methods face more complexity in the range-based variation, especially in a high dimensional space where the number of characteristics or characteristic groups is large. In the generalized variation, the complexity can get even worse, or the baselines might not be able to handle the required constraints or objective functions at all. Table \ref{tab:methods_comp} summarizes our comparison between the baseline methods and the optimization-based method.\\ A different use case is one where the researcher running the study does not have ideal percentages, but instead would like to \emph{select by example}. That would mean that the researcher could simply label the subjects as "desired" and "not-desired". Having the labeled data, we can count the number of desired subjects in each group for every data characteristic and divide it by the total number of the desired subjects. This way, we effectively extract the hidden ideal percentage for each group and simply transform this case to a fixed variation. \section{Conclusion} In this paper, we proposed an optimization-based method for multi-characteristic subject selection from biased datasets. Our proposed method supports fixed and range-based variations along with any required constraints or objective functions on the characteristic groups. We compared our method's performance with three baseline methods including stratified random sampling, rank aggregation, and weighted random sampling. Our experimental evaluation showed that our proposed method outperforms the baseline methods in all variations of the problem by up to 90\%.
\section{\textbf{Introduction}} The Poincar\'{e} algebra and Poincar\'{e} group describe the symmetries of empty Minkowski spacetime. It has been known since $1970$ \cite{bacry} that the presence of a constant electromagnetic field in Minkowski spacetime leads to the modification of the Poincar\'{e} symmetries. The presence of a constant classical electromagnetic field in Minkowski spacetime modifies the Poincar\'{e} algebra into the so-called Maxwell algebra \cite{schrader},\cite{beckers},\cite{soroka},\cite{bonanos}, \cite{bonanos1}, \cite{bonanos2}, \cite{gomis}. This algebra can also be obtained from the anti-de Sitter (AdS) algebra and a particular semigroup $S$ by means of the $S$-expansion procedure introduced in Refs. \cite{hat}, \cite{salg2}, \cite{salg3}, \cite{azcarr}. Using this method it is possible to obtain more general modifications to the Poincar\'{e} algebra (see, for example, \cite{irs}, \cite{salg4}). An interesting modification to the Poincar\'{e} symmetries, obtained by the aforementioned expansion procedure, is given by the so-called Lie $\mathfrak{B}$ algebra, also known as the generalized Poincar\'{e} algebra, whose generators satisfy the commutation relation shown in Eq. (7) of Ref. \cite{gomez}. The Einstein-Chern-Simons (EChS) gravity \cite{irs} is a gauge theory whose Lagrangian density is given by a $5$-dimensional Chern-Simons form for the $\mathfrak{B}$ algebra. The field content induced by the $\mathfrak{B}$ algebra includes the vielbein $e^{a}$, the spin connection $\omega ^{ab}$, and two extra bosonic fields $h^{a}$ and $k^{ab}.$ The EChS gravity has the interesting property that the $5$-dimensional Chern-Simons Lagrangian for the $\mathfrak{B}$ algebra, given by \cite{irs} \begin{eqnarray} L_{ChS}^{(5)}[e,\omega ,h,k] &=&\alpha _{1}l^{2}\varepsilon _{abcde}R^{ab}R^{cd}e^{e} \label{1t} \\ &&+\alpha _{3}\varepsilon _{abcde}\left( \frac{2}{3}R^{ab}e^{c}e^{d}e^{e}+2l^{2}k^{ab}R^{cd}T^{\text{ }e}+l^{2}R^{ab}R^{cd}h^{e}\right) , \notag \end{eqnarray} where $R^{ab}=\mathrm{d}\omega ^{ab}+\omega _{\text{ }c}^{a}\omega ^{cb}$ and $T^{a}=\mathrm{d}e^{a}+\omega _{\text{ }c}^{a}e^{c}$, leads to standard general relativity without a cosmological constant in the limit where the coupling constant $l$ tends to zero while keeping Newton's constant fixed. It should be noted that there is an absence of kinetic terms for the fields $h^{a}$ and $k^{ab}$ in the Lagrangian $L_{ChS}^{(5)}$ (for details see Ref. \cite{salg5}). It was recently shown in Ref. \cite{bra1} that the $5$-dimensional EChS gravity can be consistent with the idea of a $4$-dimensional spacetime. In that reference, a Randall-Sundrum type metric \cite{randall}, \cite{randall1} was substituted into the EChS gravity Lagrangian (\ref{1t}) to obtain (see Appendix) \begin{eqnarray} \tilde{S}[\tilde{e},\tilde{h}] &=&\int_{\Sigma _{4}}\tilde{\varepsilon}_{mnpq}\left( -\frac{1}{2}\tilde{R}^{mn}\tilde{e}^{p}\tilde{e}^{q}\right. + \notag \\ &&\left.
+C\tilde{R}^{mn}\tilde{e}^{p}\tilde{h}^{q}-\frac{C}{4r_{c}^{2}}\tilde{e}^{m}\tilde{e}^{n}\tilde{e}^{p}\tilde{h}^{q}\right) , \label{s3} \end{eqnarray} which is a gravity action with a cosmological constant for a $4$-dimensional brane embedded in the $5$-dimensional spacetime of the EChS theory of gravity. Here $\tilde{\varepsilon}_{mnpq}$, $\tilde{e}^{m}$, $\tilde{R}^{mn}$ and $\tilde{h}^{m}$ represent, respectively, the $4$-dimensional versions of the Levi-Civita symbol, the vielbein, the curvature form and the matter field. It is of interest to note that the field $h^{a}$, a bosonic gauge field from the Chern-Simons gravity action, which gives rise to a form of positive cosmological constant, appears as a consequence of the modification of the Poincar\'{e} symmetries carried out through the expansion procedure. On the other hand, $C$ and $r_{c}$ (the "compactification radius") are constants. The corresponding version in tensor language (see Appendix) is given by \begin{equation} \tilde{S}[\tilde{g},\tilde{h}]=\int d^{4}\tilde{x}\sqrt{-\tilde{g}}\left[ \tilde{R}+2C\left( \tilde{R}\tilde{h}-2\tilde{R}_{\text{ }\nu }^{\mu }\tilde{h}_{\text{ }\mu }^{\nu }\right) -\frac{3C}{2r_{c}^{2}}\tilde{h}\right] , \label{31t'} \end{equation} where we can see that when $l\rightarrow 0$ then $C\rightarrow 0$, and hence (\ref{31t'}) becomes the $4$-dimensional Einstein-Hilbert action. In this paper we introduce the geometric framework obtained by gauging the so-called $\mathfrak{B}$ algebra. Besides the vierbein $e_{\mu }^{a}$ and the spin connection $\omega _{\mu }^{ab}$, our scheme includes the fields $k_{\mu }^{ab}$ and $h_{\mu }^{a}$, whose dynamics are described by the field equations obtained from the corresponding actions. The application of the cosmological principle shows that the field $h^{a}$ behaves similarly to a cosmological constant, which leads to the conjecture that the equations of motion and their accelerated solutions are compatible with the era of dark energy. It might be of interest to note that, according to standard GR (the Einstein framework in a FLRW background), a simple way to describe dark energy (also dark matter) is through an equation of state that relates the density ($\rho $) of a fluid and its pressure ($p$) through the equation $p=\omega \rho $, where $\omega $ is the parameter of the equation of state. Dark energy is characterized by $-1\leq \omega <-1/3$; $\omega =-1$ represents the cosmological constant and $\omega <-1$ corresponds to the so-called phantom dark energy. This means that in the context of general relativity the parameter $\omega $ is "set by hand" and then contrasted with observational information. In the present work, the cosmological constant is not "set by hand" but rather arises from the framework that we present. An example is shown where a quintessence-type evolution as well as a phantom evolution are equally possible. This means that a possible geometric origin of dark energy can be conjectured in the context of the so-called Einstein-Chern-Simons gravity. The article is organized as follows: in Section $II$, we rewrite the action (\ref{31t'}) by introducing a scalar field associated to the field $\tilde{h}_{\mu \nu }$, we find the corresponding equations of motion, and then we discuss the cosmological consequences of this scheme.
In Section $III$, a kinetic term is added to the action and its effects on cosmology are studied. Finally, Concluding Remarks are presented in Section $IV$. An Appendix is also included where we review the derivation of the action (\ref{31t'}). \section{\textbf{Cosmological consequences}} In this Section we will study the cosmological consequences associated with the action (\ref{31t'}). If we consider a maximally symmetric spacetime (for instance, de Sitter space), equation 13.4.6 of Ref. \cite{weinberg} allows us to write the field $\tilde{h}_{\mu \nu }$ as \begin{equation} \tilde{h}_{\mu \nu }=\frac{\tilde{F}(\tilde{\varphi})}{4}\tilde{g}_{\mu \nu }, \label{66t} \end{equation} where $\tilde{F}$ is an arbitrary function of a $4$-scalar field $\tilde{\varphi}=\tilde{\varphi}(\tilde{x})$. This means \begin{equation} \tilde{R}_{\text{ }\nu }^{\mu }\tilde{h}_{\text{ }\mu }^{\nu }=\frac{\tilde{F}(\tilde{\varphi})}{4}\tilde{R}\text{ \ \ },\text{\ \ }\tilde{h}=\tilde{h}_{\mu \nu }\tilde{g}^{\mu \nu }=\tilde{F}(\tilde{\varphi}), \end{equation} so that the action (\ref{31t'}) takes the form (see Appendix) \begin{equation} \tilde{S}[\tilde{g},\tilde{\varphi}]=\int d^{4}\tilde{x}\sqrt{-\tilde{g}}\left[ \tilde{R}+C\tilde{R}\tilde{F}(\tilde{\varphi})-\frac{3C}{2r_{c}^{2}}\tilde{F}(\tilde{\varphi})\right] , \label{32t} \end{equation} which corresponds to an action for $4$-dimensional gravity non-minimally coupled to a scalar field. Note that this action has the form \begin{equation} \tilde{S}_{B}=\tilde{S}_{g}+\tilde{S}_{g\varphi }+\tilde{S}_{\varphi }, \end{equation} where $\tilde{S}_{g}$ is a pure gravitational action term, $\tilde{S}_{g\varphi }$ is a non-minimal interaction term between gravity and a scalar field, and $\tilde{S}_{\varphi }$ represents a kind of scalar field potential. In order to write down the action in the usual way, we define the constant $\varepsilon $ and the potential $V(\varphi )$ (removing the tildes of (\ref{32t})) as \begin{equation} \varepsilon =\frac{4\kappa r_{c}^{2}}{3}\text{ \ \ },\text{\ \ }V(\varphi )=\frac{3C}{4\kappa r_{c}^{2}}F(\varphi ), \label{100t} \end{equation} where $\kappa $ is the gravitational constant. This permits us to rewrite the action for a $4$-dimensional brane non-minimally coupled to a scalar field, immersed in a $5$-dimensional space-time, as \begin{equation} S[g,\varphi ]=\int d^{4}x\sqrt{-g}\left[ R+\varepsilon RV(\varphi )-2\kappa V(\varphi )\right] . \label{33t} \end{equation} The corresponding field equations describing the behavior of the $4$-dimensional brane in the presence of the scalar field $\varphi $ are given by \begin{equation} G_{\mu \nu }\left( 1+\varepsilon V\right) +\varepsilon H_{\mu \nu }=-\kappa g_{\mu \nu }V, \label{z1} \end{equation} \begin{equation} \frac{\partial V}{\partial \varphi }\left( 1-\frac{\varepsilon R}{2\kappa }\right) =0, \label{z2} \end{equation} where \begin{equation} H_{\mu \nu }=g_{\mu \nu }\nabla ^{\lambda }\nabla _{\lambda }V-\nabla _{\mu }\nabla _{\nu }V. \end{equation} In order to construct a model of the universe based on Eqs.
(\ref{z1}-\ref{z2}), we consider the Friedmann-Lema\^{\i}tre-Robertson-Walker metric \begin{equation} ds^{2}=-dt^{2}+a^{2}(t)\left( \frac{dr^{2}}{1-kr^{2}}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\psi ^{2}\right) \right) , \end{equation} where $a(t)$ is the so-called "cosmic scale factor" and $k=0,+1,-1$ describes flat, spherical and hyperbolic spatial geometries, respectively. Following the usual procedure, we find the following field equations \begin{eqnarray} 3\left( H^{2}+\frac{k}{a^{2}}\right) \left( 1+\varepsilon V\right) +3\varepsilon H\dot{\varphi}\frac{\partial V}{\partial \varphi } &=&V, \label{iii1} \\ \left( 2\dot{H}+3H^{2}+\frac{k}{a^{2}}\right) \left( 1+\varepsilon V\right) +\varepsilon \left( \dot{\varphi}^{2}\frac{\partial ^{2}V}{\partial \varphi ^{2}}+\left( \ddot{\varphi}+2H\dot{\varphi}\right) \frac{\partial V}{\partial \varphi }\right) &=&V, \label{iii2} \\ \frac{\partial V}{\partial \varphi }\left[ 1-3\varepsilon \left( \dot{H}+2H^{2}+\frac{k}{a^{2}}\right) \right] &=&0, \label{iii3} \end{eqnarray} where $H=\dot{a}/a$ is the Hubble parameter and we have used natural units $\kappa =8\pi G=c=1$. A dot denotes a derivative with respect to time. From (\ref{iii1}, \ref{iii2}, \ref{iii3}) we see that when $\varepsilon =0$ and $V=const.$, we have a de Sitter behavior governed by $H=\sqrt{V/3}.$ On the other hand, from equation (\ref{100t}) we can see that $\varepsilon $ is a positive quantity. This fact allows us to define an effective cosmological constant as \begin{equation} \Lambda _{eff}=\frac{1}{2\varepsilon }, \end{equation} which will play an important role in the cosmological consequences that we will show below. In the flat case, the Eqs. (\ref{iii1}, \ref{iii2}, \ref{iii3}) are given by \begin{eqnarray} 3H^{2} &=&-\frac{1}{V+2\Lambda _{eff}}\left( 3H\frac{dV}{dt}-2\Lambda _{eff}V\right) , \label{iii4} \\ 2\dot{H}+3H^{2} &=&-\frac{1}{V+2\Lambda _{eff}}\left( \frac{d^{2}V}{dt^{2}}+2H\frac{dV}{dt}-2\Lambda _{eff}V\right) , \label{iii5} \end{eqnarray} where equation (\ref{iii3}) was not considered because it is not an independent equation. In fact, subtracting (\ref{iii5}) from (\ref{iii4}) we obtain \begin{equation} 2\dot{H}=-\frac{1}{V+2\Lambda _{eff}}\left( \frac{d^{2}V}{dt^{2}}-H\frac{dV}{dt}\right) . \label{iii5'} \end{equation} Differentiating (\ref{iii4}) with respect to time and using (\ref{iii5'}) we recover (\ref{iii3}) when $k=0$. Bear in mind that, at the end of this Section, we will study an interesting consequence derived from this equation. We now write Eqs.
(\ref{iii4}, \ref{iii5}) in the "standard" form \begin{eqnarray} 3H^{2} &=&\rho \text{ \ \ },\text{ \ }\rho =-\frac{1}{V+2\Lambda _{eff}}\left( 3H\frac{dV}{dt}-2\Lambda _{eff}V\right) , \label{iii6} \\ \dot{H}+H^{2} &=&-qH^{2}=-\frac{1}{6}\left( \rho +3p\right) , \notag \\ \frac{1}{6}\left( \rho +3p\right) &=&\frac{1}{4\Lambda _{eff}}\frac{1}{\left( 1+V/2\Lambda _{eff}\right) }\left( \frac{d^{2}V}{dt^{2}}+H\frac{dV}{dt}-\frac{4\Lambda _{eff}}{3}V\right) , \label{iii7} \end{eqnarray} where $q$ is the deceleration parameter defined by $q=-1-\dot{H}/H^{2}$ and $p$ is the pressure associated to $\rho $, given by \begin{equation} p=\frac{1}{V+2\Lambda _{eff}}\left( \frac{d^{2}V}{dt^{2}}+2H\frac{dV}{dt}-2\Lambda _{eff}V\right) , \label{iii8} \end{equation} which allows us to write the barotropic equation $p=\omega \rho $, where \begin{equation} \omega =-\left( \frac{2\Lambda _{eff}V-2HdV/dt-d^{2}V/dt^{2}}{2\Lambda _{eff}V-3HdV/dt}\right) , \label{iii9} \end{equation} and we note that $V=const.$ leads to $\omega =-1$, i.e., a de Sitter evolution. Considering again Eqs. (\ref{iii4}, \ref{iii5}) and defining $x=V/2\Lambda _{eff}$, we write the field equations in the form \begin{eqnarray} 3\left( H^{2}+H\frac{d}{dt}\ln \left( 1+x\right) \right) &=&2\Lambda _{eff}\left( \frac{x}{1+x}\right) , \label{iii11} \\ 3\left( qH^{2}-H\frac{d}{dt}\ln \left( 1+x\right) \right) &=&-2\Lambda _{eff}\left( \frac{x}{1+x}\right) +\frac{3}{2}\left( \frac{1}{1+x}\right) \frac{d^{2}x}{dt^{2}}, \label{iii12} \end{eqnarray} and we discuss some examples: \textbf{(a)} $x=x_{0}=const$. If $x$ behaves as a constant, the solution for the Hubble parameter is given by \begin{equation} H=\sqrt{\frac{2\Lambda _{eff}}{3}\frac{x_{0}}{1+x_{0}}}, \label{iii14} \end{equation} i.e., a de Sitter evolution for all time. \textbf{(b)} $x=t/t_{0}$. In this case, the solution is \begin{equation} H\left( t\right) =\frac{1}{2t_{0}\left( 1+t/t_{0}\right) }\left( \sqrt{1+\frac{8\Lambda _{eff}t_{0}}{3}\left( 1+t/t_{0}\right) t}-1\right) , \label{iii15} \end{equation} where we can see that \begin{equation} H\left( t\rightarrow \infty \right) \rightarrow \sqrt{\frac{2\Lambda _{eff}}{3}}\text{ \ \ },\text{\ \ }\rho \left( t\rightarrow \infty \right) \rightarrow 2\Lambda _{eff}\text{ \ \ and \ \ }q\left( t\rightarrow \infty \right) =-1, \label{iii16} \end{equation} which means that we have a late de Sitter evolution. \textbf{(c)} $x=\exp \left( t/t_{0}\right) $. Here, the Hubble parameter turns out to be \begin{equation} H\left( t\right) =\frac{1}{2t_{0}\left[ 1+\exp \left( t/t_{0}\right) \right] }\left( \sqrt{1+\frac{8\Lambda _{eff}t_{0}^{2}}{3}\exp \left( t/t_{0}\right) \left[ 1+\exp \left( t/t_{0}\right) \right] }-1\right) , \end{equation} and \begin{equation} H\left( t\rightarrow \infty \right) \rightarrow \sqrt{\frac{2\Lambda _{eff}}{3}}\text{ \ \ },\text{ \ }\rho \left( t\rightarrow \infty \right) \rightarrow 2\Lambda _{eff}\text{ \ \ and \ \ }q\left( t\rightarrow \infty \right) \rightarrow -1, \end{equation} and, as in the previous case, we have a late de Sitter evolution.
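As a brief check of the claimed late-time behavior (a verification added here for completeness, not part of the original derivation), expanding (\ref{iii15}) for $t\gg t_{0}$ gives \begin{equation} H\left( t\right) \simeq \frac{1}{2t}\sqrt{\frac{8\Lambda _{eff}}{3}t^{2}}=\sqrt{\frac{2\Lambda _{eff}}{3}}, \end{equation} so that $\rho =3H^{2}\rightarrow 2\Lambda _{eff}$ and $q=-1-\dot{H}/H^{2}\rightarrow -1$; the same expansion applies to the solution of case (c) for $t\gg t_{0}$.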
From (\ref{iii9}) the parameter of the equation of state takes the form \begin{equation} \omega \left( t\right) =-\left( \frac{2\Lambda _{eff}x-2Hdx/dt-d^{2}x/dt^{2}}{2\Lambda _{eff}x-3Hdx/dt}\right) , \label{iii19} \end{equation} and if $x=t/t_{0}$ one finds \begin{equation} \omega \left( t\right) =-\left( \frac{t-H\left( t\right) /\Lambda _{eff}}{t-3H\left( t\right) /2\Lambda _{eff}}\right) , \end{equation} so that if we identify $t_{0}$ as the current time, then \begin{equation} \omega \left( t_{0}\right) =-\left( \frac{t_{0}-H\left( t_{0}\right) /\Lambda _{eff}}{t_{0}-3H\left( t_{0}\right) /2\Lambda _{eff}}\right) <-1\text{ \ \ and \ \ }\omega \left( t\rightarrow \infty \right) =-1, \end{equation} and we have a transient phantom evolution (not ruled out by the current observational data). Theoretical frameworks where this type of evolution is discussed can be seen in \cite{lepe1}, \cite{zar}, and \cite{lepe2}. The examples shown above have a common characteristic, namely a late de Sitter evolution, like, for instance, $\Lambda CDM$ at late times, but we do not know if this characteristic comes from the formalism that we are inspecting or from the choice (Ansatz) that we make for $V\left( t\right) $. Since we do not have something to guide us towards a form for $V\left( t\right) $ from first principles, we are tied to playing with different Ans\"{a}tze for that potential. At least those shown here give us interesting results, in particular the transient phantom evolution obtained from the Ansatz given in ($b$). Previously, we have seen that equation (\ref{iii3}) is not independent, and therefore it was not analyzed in the first instance. However, it reveals an interesting feature of our scheme: the presence of a cosmological bounce. In the case $k=0$, $\Lambda _{eff}=1/2\varepsilon $ and $\partial V/\partial \varphi \neq 0$, equation (\ref{iii3}) takes the form \begin{equation} \frac{2}{3}\Lambda _{eff}-\left( \dot{H}+2H^{2}\right) =0, \label{v1} \end{equation} which leads to the following solution for the Hubble parameter \begin{equation} H\left( t\right) =\sqrt{\Lambda _{eff}/3}\left( \frac{\exp \left[ 4\sqrt{\Lambda _{eff}/3}\left( t-t_{0}\right) \right] -1/\Delta \left( t_{0}\right) }{\exp \left[ 4\sqrt{\Lambda _{eff}/3}\left( t-t_{0}\right) \right] +1/\Delta \left( t_{0}\right) }\right) , \label{v2} \end{equation} where \begin{equation} \Delta \left( t_{0}\right) =\frac{\sqrt{\Lambda _{eff}/3}+H_{0}}{\sqrt{\Lambda _{eff}/3}-H_{0}}. \end{equation} Note that equation (\ref{v2}) can be written in terms of a hyperbolic tangent as \begin{equation} H\left( t\right) =\sqrt{\frac{\Lambda _{eff}}{3}}\tanh \left( 2\sqrt{\frac{\Lambda _{eff}}{3}}\left( t-t_{0}\right) +\frac{1}{2}\ln \left[ \Delta \left( t_{0}\right) \right] \right) , \label{v3} \end{equation} which reveals a cosmological bounce at \begin{equation} t_{b}=t_{0}-\frac{1}{4}\sqrt{\frac{3}{\Lambda _{eff}}}\ln \Delta \left( t_{0}\right) \Longrightarrow H\left( t_{b}\right) =0. \label{v4} \end{equation} Moreover, expression (\ref{v3}) also shows that $H<0$ for $t<t_{b}$, and $H>0$ for $t>t_{b}$; i.e., there is a contraction for $t<t_{b}$ and an expansion for $t>t_{b}$.
In fact, from equation (\ref{v3}) the cosmic scale factor is obtained as
\begin{equation}
a(t)=a_{b}\cosh ^{1/2}\left( 2\sqrt{\frac{\Lambda _{eff}}{3}}\left( t-t_{b}\right) \right) ,  \label{v5}
\end{equation}
where $a_{b}$ is the minimum of the scale factor, which occurs at $t_{b}$, and it is given by
\begin{equation}
a_{b}=a(t_{b})=a\left( t_{0}\right) \cosh ^{-1/2}\left( 2\sqrt{\frac{\Lambda _{eff}}{3}}\left( t_{0}-t_{b}\right) \right) .  \label{v6}
\end{equation}
Behaviors of this kind have been frequently studied in the literature in the context of cosmological bounces; see e.g., \cite{Biswas1}, \cite{Biswas}, \cite{Bamba}. Finally, we note that
\begin{equation}
H(t\rightarrow \infty )\rightarrow \sqrt{\frac{\Lambda _{eff}}{3}},  \label{v7}
\end{equation}
i.e., a late de Sitter evolution.

\section{\textbf{Introduction of the kinetic term }$\left( 1/2\right) \dot{\protect\varphi}^{2}$}

In Ref. \cite{salg4} it was found that the surface term $B_{\text{EChS}}^{(4)}$ in the Lagrangian (\ref{1t}) is given by
\begin{align}
B_{EChS}^{(4)}& =\alpha _{1}l^{2}\epsilon _{abcde}e^{a}\omega ^{bc}\left( \frac{2}{3}d\omega ^{de}+\frac{1}{2}\omega _{\text{ \ }f}^{d}\omega ^{fe}\right)  \notag \\
& \quad +\alpha _{3}\epsilon _{abcde}\left[ l^{2}\left( h^{a}\omega ^{bc}+k^{ab}e^{c}\right) \left( \frac{2}{3}d\omega ^{de}+\frac{1}{2}\omega _{\text{ \ }f}^{d}\omega ^{fe}\right) \right.  \notag \\
& \qquad \qquad \qquad \left. +l^{2}k^{ab}\omega ^{cd}\left( \frac{2}{3}de^{e}+\frac{1}{2}\omega _{\text{ \ }f}^{d}e^{e}\right) +\frac{1}{6}e^{a}e^{b}e^{c}\omega ^{de}\right] .  \label{8'}
\end{align}
From (\ref{1t}) and (\ref{8'}) we can see that kinetic terms corresponding to the fields $h^{a}$ and $k^{ab}$, absent in the Lagrangian, are present in the surface term. This situation is common to all Chern-Simons theories. As a consequence, the action (\ref{33t}) does not contain a kinetic term for the scalar field $\varphi $. It could be interesting to add a kinetic term to the $4$-dimensional brane action. In this case, the action (\ref{33t}) takes the form
\begin{equation}
S[g,\varphi ]=\int d^{4}x\sqrt{-g}\left[ R+\varepsilon RV(\varphi )-2\kappa \left[ \frac{1}{2}\left( \nabla _{\mu }\varphi \right) \left( \nabla ^{\mu }\varphi \right) +V(\varphi )\right] \right] .  \label{34t}
\end{equation}
The corresponding field equations are given by
\begin{equation}
G_{\mu \nu }\left( 1+\varepsilon V\right) +\varepsilon H_{\mu \nu }=\kappa T_{\mu \nu }^{\varphi },  \label{40t}
\end{equation}
\begin{equation}
\nabla _{\mu }\nabla ^{\mu }\varphi -\frac{\partial V}{\partial \varphi }\left( 1-\frac{\varepsilon R}{2\kappa }\right) =0,  \label{41t}
\end{equation}
where $T_{\mu \nu }^{\varphi }$ is the energy-momentum tensor of the scalar field
\begin{equation}
T_{\mu \nu }^{\varphi }=\nabla _{\mu }\varphi \nabla _{\nu }\varphi -g_{\mu \nu }\left( \frac{1}{2}\nabla ^{\lambda }\varphi \nabla _{\lambda }\varphi +V\right) ,  \label{37t}
\end{equation}
and the rank-2 tensor $H_{\mu \nu }$ is defined as
\begin{equation}
H_{\mu \nu }=g_{\mu \nu }\nabla ^{\lambda }\nabla _{\lambda }V-\nabla _{\mu }\nabla _{\nu }V.
\label{38t}
\end{equation}
Following the usual procedure, we find that the FLRW type equations are given by
\begin{equation}
3\left( \frac{\dot{a}^{2}+k}{a^{2}}\right) \left( 1+\varepsilon V\right) +3\varepsilon \frac{\dot{a}}{a}\dot{\varphi}\frac{\partial V}{\partial \varphi }=\kappa \left( \frac{1}{2}\dot{\varphi}^{2}+V\right) ,  \label{48x}
\end{equation}
\begin{equation}
\left( 2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}+k}{a^{2}}\right) \left( 1+\varepsilon V\right) +\varepsilon \left[ \dot{\varphi}^{2}\frac{\partial ^{2}V}{\partial \varphi ^{2}}+\left( \ddot{\varphi}+2\frac{\dot{a}}{a}\dot{\varphi}\right) \frac{\partial V}{\partial \varphi }\right] =-\kappa \left( \frac{1}{2}\dot{\varphi}^{2}-V\right) ,  \label{49x}
\end{equation}
\begin{equation}
\ddot{\varphi}+3\frac{\dot{a}}{a}\dot{\varphi}+\frac{\partial V}{\partial \varphi }\left[ 1-\frac{3\varepsilon }{\kappa }\left( \frac{\ddot{a}}{a}+\frac{\dot{a}^{2}+k}{a^{2}}\right) \right] =0.  \label{50x}
\end{equation}
In the case $k=0$, and using $\kappa =1$, Eqs. (\ref{48x}, \ref{49x}, \ref{50x}) take the form
\begin{eqnarray}
3H^{2}\left( 1+\epsilon V\right) +3\epsilon H\frac{dV}{dt} &=&\frac{1}{2}\dot{\varphi}^{2}+V,  \label{iv1} \\
\left( 2\dot{H}+3H^{2}\right) \left( 1+\epsilon V\right) +\epsilon \left( \frac{d^{2}V}{dt^{2}}+2H\frac{dV}{dt}\right) &=&-\left( \frac{1}{2}\dot{\varphi}^{2}-V\right) ,  \label{iv2} \\
\left( \ddot{\varphi}+3H\dot{\varphi}\right) \dot{\varphi}+\frac{dV}{dt}\left[ 1-3\epsilon \left( \dot{H}+2H^{2}\right) \right] &=&0,  \label{iv3}
\end{eqnarray}
and here equation (\ref{iv3}) is not an independent equation. In fact, subtracting equation (\ref{iv2}) from equation (\ref{iv1}) we obtain
\begin{equation}
2\dot{H}\left( 1+\varepsilon V\right) =\varepsilon H\frac{dV}{dt}-\varepsilon \frac{d^{2}V}{dt^{2}}-\dot{\varphi}^{2}.  \label{iv3'}
\end{equation}
Differentiating equation (\ref{iv1}) with respect to time and using equation (\ref{iv3'}) we recover equation (\ref{iv3}).

The combination $\left( 1/2\right) \dot{\varphi}^{2}+V\left( \varphi \right) $, together with the combination $\left( 1/2\right) \dot{\varphi}^{2}-V\left( \varphi \right) $, reminds us that in a standard scalar field theory
\begin{equation}
\rho =\frac{1}{2}\dot{\varphi}^{2}+V\,,\qquad p=\frac{1}{2}\dot{\varphi}^{2}-V,  \label{iv4}
\end{equation}
which is recovered in the limit $\varepsilon \rightarrow 0$.
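The fact that (\ref{iv3}) is a consequence of (\ref{iv1}) and (\ref{iv2}) can also be checked symbolically. The following is a minimal sketch (Python/SymPy, with $k=0$ and $\kappa =1$ as above); the particular combination used below is our own bookkeeping, equivalent to differentiating (\ref{iv1}) and using (\ref{iv3'}):
\begin{verbatim}
import sympy as sp

t, eps = sp.symbols('t epsilon')
a, phi, V = (sp.Function(n)(t) for n in ('a', 'phi', 'V'))
H = sp.diff(a, t) / a

# Eqs. (iv1)-(iv3) written as "LHS - RHS".
iv1 = 3*H**2*(1 + eps*V) + 3*eps*H*sp.diff(V, t) \
      - (sp.Rational(1, 2)*sp.diff(phi, t)**2 + V)
iv2 = (2*sp.diff(H, t) + 3*H**2)*(1 + eps*V) \
      + eps*(sp.diff(V, t, 2) + 2*H*sp.diff(V, t)) \
      + (sp.Rational(1, 2)*sp.diff(phi, t)**2 - V)
iv3 = (sp.diff(phi, t, 2) + 3*H*sp.diff(phi, t))*sp.diff(phi, t) \
      + sp.diff(V, t)*(1 - 3*eps*(sp.diff(H, t) + 2*H**2))

# (iv3) is not independent: 3H*[(iv2)-(iv1)] - d(iv1)/dt reproduces it.
residual = sp.simplify(3*H*(iv2 - iv1) - sp.diff(iv1, t) - iv3)
print(residual)   # -> 0
\end{verbatim}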
In fact, when $\varepsilon \rightarrow 0$ the equations (\ref{iv1}, \ref{iv2}, \ref{iv3}) take the form
\begin{eqnarray}
3H^{2} &=&\rho ,  \label{iv5} \\
2\dot{H}+3H^{2} &=&-p, \\
\dot{\rho}+3H\left( p+\rho \right) &=&0,  \label{iv7}
\end{eqnarray}
where we have used (\ref{iv4}) to obtain (\ref{iv7}) from (\ref{iv3}) when $\varepsilon \rightarrow 0$. Equation (\ref{iv4}) allows us to write (\ref{iv1}, \ref{iv2}) in the form
\begin{eqnarray}
3H^{2} &=&\frac{1}{1+\varepsilon V}\left( \rho -3\varepsilon H\dot{\varphi}\frac{\partial V}{\partial \varphi }\right) ,  \label{iv17} \\
2\dot{H}+3H^{2} &=&-\frac{1}{1+\varepsilon V}\left( p+\varepsilon \left[ \left( \rho +p\right) \frac{\partial ^{2}V}{\partial \varphi ^{2}}+\left( \ddot{\varphi}+2H\dot{\varphi}\right) \frac{\partial V}{\partial \varphi }\right] \right) ,  \label{iv18}
\end{eqnarray}
and if we choose $p=-\rho $, with a de Sitter evolution in mind, we obtain
\begin{equation}
3H^{2}=\frac{\rho }{1+\varepsilon V}-3H\frac{d}{dt}\ln \left( 1+\varepsilon V\right) ,  \label{(a)}
\end{equation}
\begin{equation}
\dot{H}=\frac{1}{2}\left( \frac{\ddot{\varphi}}{\dot{\varphi}}+5H\right) \frac{d}{dt}\ln \left( 1+\varepsilon V\right) .  \label{(b)}
\end{equation}
According to (\ref{(b)}), $\dot{H}=0$ tells us that
\begin{equation}
\frac{\ddot{\varphi}}{\dot{\varphi}}+5H=0\qquad \text{or}\qquad V\left( t\right) =const.,
\end{equation}
and, according to (\ref{(a)}), $V\left( t\right) =const.$ implies $\rho =const.$, i.e., $H=const.$, i.e., a usual de Sitter evolution. But if $\ddot{\varphi}/\dot{\varphi}+5H=0$ and $V\left( t\right) \neq const.$, then from (\ref{(a)}) we have, with $H=H_{0}=const.$,
\begin{equation}
\rho \left( t\right) =3H_{0}\left[ H_{0}+\frac{d}{dt}\ln \left( 1+\varepsilon V\left( t\right) \right) \right] \left( 1+\varepsilon V\left( t\right) \right) ,
\end{equation}
and we have a de Sitter evolution although $\rho \neq const.$ One more detail: the equation $\ddot{\varphi}/\dot{\varphi}+5H=0$ has the solution $\dot{\varphi}\sim a^{-5}$, and so the kinetic term $\left( 1/2\right) \dot{\varphi}^{2}$ dies away very quickly with the evolution, leading us to $\rho \sim V\left( t\right) $ and $p\sim -V\left( t\right) $ at late times, i.e., a de Sitter evolution.

Writing (\ref{iv17}, \ref{iv18}) in the form
\begin{eqnarray}
3H^{2} &=&\frac{1}{1+\varepsilon V}\left( \rho -3\varepsilon H\frac{dV}{dt}\right) ,  \label{iv25} \\
\dot{H}+H^{2} &=&-\frac{1}{6\left( 1+\varepsilon V\right) }\left[ \left( \rho +3p\right) +3\varepsilon \left( \frac{d^{2}V}{dt^{2}}+H\frac{dV}{dt}\right) \right] ,  \label{iv26}
\end{eqnarray}
we see that when $\varepsilon =0$ we recover the results of General Relativity, i.e., $3H^{2}=\rho $ and $\dot{H}+H^{2}=-\left( 1/6\right) \left( 1+3\omega \right) \rho $.
By following this reminder, we write (\ref{iv25}, \ref{iv26}) in the standard form
\begin{eqnarray}
3H^{2} &=&\rho _{tot}, \\
\dot{H}+H^{2} &=&-\frac{1}{6}\left( \rho _{tot}+3p_{tot}\right) , \\
\rho _{tot} &=&\frac{1}{1+\varepsilon V}\left( \rho -3H\frac{d\left( \varepsilon V\right) }{dt}\right) , \\
p_{tot} &=&\frac{1}{1+\varepsilon V}\left( p+2H\frac{d\left( \varepsilon V\right) }{dt}+\frac{d^{2}\left( \varepsilon V\right) }{dt^{2}}\right) ,  \label{iv28}
\end{eqnarray}
and we can see that we can build a barotropic equation $p_{tot}=\omega _{tot}\rho _{tot}$, where
\begin{equation}
\omega _{tot}=\frac{p+2H\,d\left( \varepsilon V\right) /dt+d^{2}\left( \varepsilon V\right) /dt^{2}}{\rho -3H\,d\left( \varepsilon V\right) /dt}\qquad \text{and}\qquad q=\frac{1}{2}\left( 1+3\omega _{tot}\right) .  \label{iv29}
\end{equation}
Setting $p=\omega \rho $, we can write (\ref{iv29}) as
\begin{equation}
\omega _{tot}=\omega +\frac{\left( 2+3\omega \right) H\,d\left( \varepsilon V\right) /dt+d^{2}\left( \varepsilon V\right) /dt^{2}}{\rho -3H\,d\left( \varepsilon V\right) /dt}.  \label{iv30}
\end{equation}
Here we can see that if $d^{2}\left( \varepsilon V\right) /dt^{2}=0$ and $d\left( \varepsilon V\right) /dt\neq 0$, then for $\omega =-2/3$ we obtain $\omega _{tot}=-2/3$ and $q=-1/2$. This means that $\omega _{tot}$ belongs to the quintessence zone. So, with $\left( \alpha ,\beta \right) $ constants, $\varepsilon V\left( t\right) =\alpha \left( t/t_{0}\right) +\beta $ is an obvious choice for $\varepsilon V\left( t\right) $. On the other hand, it is straightforward to show that
\begin{equation}
\dot{\rho}_{tot}+3H\left( 1+\omega _{tot}\right) \rho _{tot}=0,  \label{iv32}
\end{equation}
so that
\begin{eqnarray}
\rho _{tot}\left( a\right) &=&\rho _{tot}\left( a_{0}\right) \left( \frac{a_{0}}{a}\right) ^{3}\exp \left( -3\int_{t_{0}}^{t}\omega _{tot}\left( t\right) d\ln a\right) ,  \label{iv33} \\
\omega _{tot} &=&-\frac{2}{3}\;\rightarrow\; \rho _{tot}\left( a\right) =\rho _{tot}\left( a_{0}\right) \left( \frac{a_{0}}{a}\right) ,  \notag
\end{eqnarray}
and the same is true for $\rho \left( a\right) $, that is, $\rho \left( a\right) =\rho \left( a_{0}\right) \left( a_{0}/a\right) $. Note that, if $V\left( t\right) =V_{0}=const.$,
\begin{equation}
\rho _{tot}=\frac{1}{1+\epsilon V_{0}}\rho \,,\qquad p_{tot}=\frac{1}{1+\epsilon V_{0}}p\qquad \text{and}\qquad \omega _{tot}=\omega ,
\end{equation}
and if $\omega =0$ then $\omega _{tot}=0$ and then $p_{tot}=0$. This means that $\omega _{tot}=0$ plays the role of the usual dark matter ($\omega =0$), although $\rho _{tot}\neq \rho $.

Finally, we have been using the quantity $\Lambda _{eff}=1/2\varepsilon $, where $\varepsilon =4\kappa r_{c}^{2}/3=const.$ is a parameter derived from the mechanism of dimensional reduction under consideration, which depends on the gravitational constant $\kappa $ and the compactification radius $r_{c}$. This parameter (through its inverse) plays the role of an effective cosmological constant, recalling that in the action $S[g,\varphi ]=\int d^{4}x\sqrt{-g}\left[ R+\left( \varepsilon R-2\kappa \right) V(\varphi )\right] $ there is no "bare" cosmological constant. This fact could lead us to conjecture that the $h$-field (or $\tilde{h}$-field), in some way, manifests itself as dark energy.
If so, the next step will be to submit the present outline to the verdict of observation.

\section{\textbf{Concluding remarks}}

We have considered a modification of the Poincar\'{e} symmetries known as the $\mathfrak{B}$ Lie algebra, also known as the generalized Poincar\'{e} algebra, whose generators satisfy the commutation relations shown in Eq. (7) of Ref. \cite{gomez}. Besides the vierbein $e_{\mu }^{a}$ and the spin connection $\omega _{\mu }^{ab}$, our scheme includes the fields $k_{\mu }^{ab}$ and $h_{\mu }^{a}$, whose dynamics is described by the field equations obtained from the corresponding actions. We have used the field equations for a $4$-dimensional brane embedded in the $5$-dimensional spacetime of \cite{bra1} to study their cosmological consequences. The corresponding FLRW equations are found by means of the usual procedure, and cosmological solutions are shown and discussed. We highlight two solutions: by choosing $\partial V/\partial t=const.$, a transient phantom evolution (not ruled out by the current observational data) is obtained, and if $\partial V/\partial t\neq const.$ we obtain a bouncing solution.

Since the kinetic terms corresponding to the fields $h^{a}$ and $k^{ab}$ are present in the surface term (see (\ref{1t}) and (\ref{8'})), it was necessary to introduce a kinetic term to the $4$-dimensional action. As a consequence of this, in the corresponding cosmological framework we highlight a de Sitter evolution even when the energy density involved is not constant.

Whatever the case, and since we do not have anything to guide us towards a form for $V\left( t\right) $ from first principles, we are tied to playing with different Ans\"{a}tze for that potential. At least the cases that were considered give us interesting results. But, we must insist, we are completely dependent on the Ans\"{a}tze for $V\left( t\right) $. If we are thinking about cosmology, the results shown here suffer from this "slavery". The hope, a common feeling, is that what is shown can be a contribution that guides us towards a better understanding of the present formalism and its chance of being a possible alternative to General Relativity. It is evident that the observational information will be key when it comes to discriminating between both models. The challenge to face is to extract information that leads us to $V\left( t\right) $, in order to see whether the scalar-field philosophy has a viable chance of being realistic when it comes to doing cosmology.

\section{\textbf{Appendix. Derivation of the action for a }$4$\textbf{-dimensional brane embedded in the }$5$\textbf{-dimensional spacetime}}

In this Appendix we briefly review the derivation of the action (\ref{31t'}). In order to find it, we will first consider the following $5$-dimensional Randall-Sundrum \cite{randall}, \cite{randall1} type metric
\begin{eqnarray}
ds^{2} &=&e^{2f(\phi )}\tilde{g}_{\mu \nu }(\tilde{x})d\tilde{x}^{\mu }d\tilde{x}^{\nu }+r_{c}^{2}d\phi ^{2},  \notag \\
&=&\eta _{ab}e^{a}e^{b},  \notag \\
&=&e^{2f(\phi )}\tilde{\eta}_{mn}\tilde{e}^{m}\tilde{e}^{n}+r_{c}^{2}d\phi ^{2},  \label{5t2}
\end{eqnarray}
where $e^{2f(\phi )}$ is the so-called "warp factor", and $r_{c}$ is the so-called "compactification radius" of the extra dimension, which is associated with the coordinate $0\leqslant \phi <2\pi $.
The symbol $\sim $ denotes $4$-dimensional quantities. We will use the usual notation
\begin{eqnarray}
x^{\alpha } &=&\left( \tilde{x}^{\mu },\phi \right) ;\qquad \alpha ,\beta =0,...,4;\qquad a,b=0,...,4;  \notag \\
\mu ,\nu &=&0,...,3;\qquad m,n=0,...,3;  \notag \\
\eta _{ab} &=&diag(-1,1,1,1,1);\qquad \tilde{\eta}_{mn}=diag(-1,1,1,1),  \label{6t2}
\end{eqnarray}
which allows us to write the vielbein
\begin{equation}
e^{m}(\phi ,\tilde{x})=e^{f(\phi )}\tilde{e}^{m}(\tilde{x})=e^{f(\phi )}\tilde{e}_{\text{ }\mu }^{m}(\tilde{x})d\tilde{x}^{\mu };\qquad e^{4}(\phi )=r_{c}d\phi ,  \label{s22}
\end{equation}
where $\tilde{e}^{m}$ is the vierbein. From the vanishing torsion condition
\begin{equation}
T^{a}=de^{a}+\omega _{\text{ }b}^{a}e^{b}=0,  \label{2t}
\end{equation}
we obtain the connections
\begin{equation}
\omega _{\text{ }b\alpha }^{a}=-e_{\text{ }b}^{\beta }\left( \partial _{\alpha }e_{\text{ }\beta }^{a}-\Gamma _{\text{ }\alpha \beta }^{\gamma }e_{\text{ }\gamma }^{a}\right) ,  \label{3t}
\end{equation}
where $\Gamma _{\text{ }\alpha \beta }^{\gamma }$ is the Christoffel symbol. From Eqs. (\ref{s22}) and (\ref{2t}) we find
\begin{equation}
\omega _{\text{ }4}^{m}=\frac{e^{f}f^{\prime }}{r_{c}}\tilde{e}^{m},  \label{102t}
\end{equation}
and the $4$-dimensional vanishing torsion condition
\begin{equation}
\tilde{T}^{m}=\tilde{d}\tilde{e}^{m}+\tilde{\omega}_{\text{ }n}^{m}\tilde{e}^{n}=0,  \label{1030t}
\end{equation}
where $f^{\prime }=\frac{\partial f}{\partial \phi }$, $\tilde{\omega}_{\text{ }n}^{m}=\omega _{\text{ }n}^{m}$ and $\tilde{d}=d\tilde{x}^{\mu }\frac{\partial }{\partial \tilde{x}^{\mu }}$. From (\ref{102t}), (\ref{1030t}) and Cartan's second structural equation, $R^{ab}=d\omega ^{ab}+\omega _{\text{ }c}^{a}\omega ^{cb}$, we obtain the components of the $2$-form curvature
\begin{equation}
R^{m4}=\frac{e^{f}}{r_{c}}\left( f^{\prime 2}+f^{\prime \prime }\right) d\phi \,\tilde{e}^{m},\qquad R^{mn}=\tilde{R}^{mn}-\left( \frac{e^{f}f^{\prime }}{r_{c}}\right) ^{2}\tilde{e}^{m}\tilde{e}^{n},  \label{105t}
\end{equation}
where the $4$-dimensional $2$-form curvature is given by
\begin{equation}
\tilde{R}^{mn}=\tilde{d}\tilde{\omega}^{mn}+\tilde{\omega}_{\text{ }p}^{m}\tilde{\omega}^{pn}.
\end{equation}
The torsion-free condition implies that the third term in the EChS action, given in equation (\ref{1t}), vanishes. This means that the corresponding Lagrangian is no longer dependent on the field $k^{ab}$. So, the Lagrangian (\ref{1t}) has now two independent fields, $e^{a}$ and $h^{a}$, and it is given by
\begin{equation}
L_{ChS}^{(5)}[e,h]=\alpha _{1}l^{2}\varepsilon _{abcde}R^{ab}R^{cd}e^{e}+\alpha _{3}\varepsilon _{abcde}\left( \frac{2}{3}R^{ab}e^{c}e^{d}e^{e}+l^{2}R^{ab}R^{cd}h^{e}\right) .  \label{4t}
\end{equation}
From Eq. (\ref{4t}) we can see that the Lagrangian contains the Gauss-Bonnet term $L_{GB}$, the Einstein-Hilbert term $L_{EH}$ and a term $L_{H}$ which couples geometry and matter. In fact, replacing (\ref{s22}) and (\ref{105t}) in (\ref{4t}) and using $\tilde{\varepsilon}_{mnpq}=\varepsilon _{mnpq4}$, we obtain
\begin{eqnarray}
\tilde{S}[\tilde{e},\tilde{h}] &=&\int_{\Sigma _{4}}\tilde{\varepsilon}_{mnpq}\left( A\tilde{R}^{mn}\tilde{e}^{p}\tilde{e}^{q}+B\,\tilde{e}^{m}\tilde{e}^{n}\tilde{e}^{p}\tilde{e}^{q}+\right.  \notag \\
&&\left.
+C\tilde{R}^{mn}\tilde{e}^{p}\tilde{h}^{q}+E\tilde{e}^{m}\tilde{e}^{n}\tilde{e}^{p}\tilde{h}^{q}\right) ,  \label{999}
\end{eqnarray}
where
\begin{equation}
h^{m}(\phi ,\tilde{x})=e^{g(\phi )}\tilde{h}^{m}(\tilde{x}),\qquad h^{4}=0,  \label{o}
\end{equation}
and
\begin{equation}
A=2r_{c}\int_{0}^{2\pi }d\phi \,e^{2f}\left[ \alpha _{3}-\frac{\alpha _{1}l^{2}}{r_{c}^{2}}\left( 3f^{\prime 2}+2f^{\prime \prime }\right) \right] ,  \label{12t}
\end{equation}
\begin{equation}
B=-\frac{1}{r_{c}}\int_{0}^{2\pi }d\phi \,e^{4f}\left[ \frac{2\alpha _{3}}{3}\left( 5f^{\prime 2}+2f^{\prime \prime }\right) -\frac{\alpha _{1}l^{2}}{r_{c}^{2}}f^{\prime 2}\left( 5f^{\prime 2}+4f^{\prime \prime }\right) \right] ,  \label{13t}
\end{equation}
\begin{equation}
C=-\frac{4\alpha _{3}l^{2}}{r_{c}}\int_{0}^{2\pi }d\phi \,e^{f}e^{g}\left( f^{\prime 2}+f^{\prime \prime }\right) ,  \label{14t}
\end{equation}
\begin{equation}
E=\frac{4\alpha _{3}l^{2}}{r_{c}^{3}}\int_{0}^{2\pi }d\phi \,e^{3f}e^{g}f^{\prime 2}\left( f^{\prime 2}+f^{\prime \prime }\right) ,  \label{15t}
\end{equation}
with $f(\phi )$ and $g(\phi )$ representing functions that can be chosen (non-unique choice) as $f(\phi )=g(\phi )=\ln (K+\sin \phi )$ with $K=constant>1$; and therefore we have
\begin{equation}
A=\frac{2\pi }{r_{c}}\left[ \alpha _{3}r_{c}^{2}\left( 2K^{2}+1\right) +\alpha _{1}l^{2}\right] ,  \label{201t}
\end{equation}
\begin{equation}
B=\frac{\pi }{2r_{c}}\left[ \alpha _{3}\left( 4K^{2}+1\right) -\frac{\alpha _{1}l^{2}}{2r_{c}^{2}}\right] ,  \label{21t}
\end{equation}
\begin{equation}
C=-4r_{c}^{2}E=\frac{4\pi \alpha _{3}l^{2}}{r_{c}}.  \label{75t}
\end{equation}
Taking into account that $L_{ChS}^{(5)}[e,h]$ flows into $L_{EH}^{(5)}$ when $l\longrightarrow 0$ \cite{irs}, we have that the action (\ref{999}) should lead to the Einstein-Hilbert action when $l\longrightarrow 0$. From (\ref{999}) it is direct to see that this occurs when $A=-1/2$ and $B=0$. In this case, from Eqs. (\ref{201t}), (\ref{21t}) and (\ref{75t}), we can see that
\begin{equation}
\alpha _{1}=-\frac{r_{c}\left( 4K^{2}+1\right) }{2\pi l^{2}\left( 10K^{2}+3\right) },  \label{24t}
\end{equation}
\begin{equation}
\alpha _{3}=-\frac{1}{4\pi r_{c}\left( 10K^{2}+3\right) },  \label{25t}
\end{equation}
\begin{equation}
C=-4r_{c}^{2}E=-\frac{l^{2}}{r_{c}^{2}\left( 10K^{2}+3\right) },
\end{equation}
and therefore the action (\ref{999}) takes the form
\begin{eqnarray}
\tilde{S}[\tilde{e},\tilde{h}] &=&\int_{\Sigma _{4}}\tilde{\varepsilon}_{mnpq}\left( -\frac{1}{2}\tilde{R}^{mn}\tilde{e}^{p}\tilde{e}^{q}\right. +  \notag \\
&&\left. +C\tilde{R}^{mn}\tilde{e}^{p}\tilde{h}^{q}-\frac{C}{4r_{c}^{2}}\tilde{e}^{m}\tilde{e}^{n}\tilde{e}^{p}\tilde{h}^{q}\right) ,  \label{a}
\end{eqnarray}
corresponding to a $4$-dimensional brane embedded in the $5$-dimensional spacetime of the EChS gravity. We can see that when $l\rightarrow 0$ then $C\rightarrow 0$ and hence (\ref{a}) becomes the $4$-dimensional Einstein-Hilbert action. Finally, it is convenient to express the action (\ref{a}) in tensorial language.
To achieve this, we write $\tilde{e}^{m}(\tilde{x})=\tilde{e}_{\text{ }\mu }^{m}(\tilde{x})d\tilde{x}^{\mu }$ and $\tilde{h}^{m}=\tilde{h}_{\text{ }\mu }^{m}d\tilde{x}^{\mu }$, and then we compute the individual terms in (\ref{a}) as
\begin{eqnarray}
\tilde{\varepsilon}_{mnpq}\tilde{R}^{mn}\tilde{e}^{p}\tilde{e}^{q} &=&-2\sqrt{-\tilde{g}}\,\tilde{R}\,d^{4}\tilde{x}, \\
\tilde{\varepsilon}_{mnpq}\tilde{R}^{mn}\tilde{e}^{p}\tilde{h}^{q} &=&2\sqrt{-\tilde{g}}\left( \tilde{R}\tilde{h}-2\tilde{R}_{\text{ }\nu }^{\mu }\tilde{h}_{\text{ }\mu }^{\nu }\right) d^{4}\tilde{x}, \\
\tilde{\varepsilon}_{mnpq}\tilde{e}^{m}\tilde{e}^{n}\tilde{e}^{p}\tilde{h}^{q} &=&6\sqrt{-\tilde{g}}\,\tilde{h}\,d^{4}\tilde{x},
\end{eqnarray}
where we have defined $\tilde{h}\equiv \tilde{h}_{\text{ }\mu }^{\mu }$. So, the $4$-dimensional action for the brane immersed in the $5$-dimensional space-time of the EChS gravitational theory is given by
\begin{equation}
\tilde{S}[\tilde{g},\tilde{h}]=\int d^{4}\tilde{x}\sqrt{-\tilde{g}}\left[ \tilde{R}+2C\left( \tilde{R}\tilde{h}-2\tilde{R}_{\text{ }\nu }^{\mu }\tilde{h}_{\text{ }\mu }^{\nu }\right) -\frac{3C}{2r_{c}^{2}}\tilde{h}\right] .
\end{equation}

\textbf{Acknowledgements}

This work was supported in part by FONDECYT Grant No. 1180681 from the Government of Chile. One of the authors (FG) was supported by Grant \# R12/18 from Direcci\'{o}n de Investigaci\'{o}n, Universidad de Los Lagos.
\section{Introduction} \label{sec:Intro} The Standard Model (SM) of Particle Physics is successful in describing particles and their electroweak and strong interactions, still, several aspects are problematic. In this paper, we concentrate on the Flavour Problem. The introduction of additional symmetries beyond the SM gauge group acting on the three fermion generations can produce realistic mass hierarchies and mixing textures. The Lagrangian is invariant under the gauge group of the SM and under the additional flavour symmetry at an energy scale equal or higher than the electroweak one. Fermion masses and mixings arise once these symmetries are broken, spontaneously or explicitly. Such flavour models differ from each other in the nature of the symmetries and the symmetry breaking mechanism. On the other hand, they all share the same top-down approach: the main goal is the explanation of fermion masses and mixings by the introduction of flavour symmetries; only as a second step their phenomenological consistency with FCNC processes (sometimes) is investigated (see Refs. \cite{Feruglio:2009hu,Merlo:2011hw} and references therein). A bottom-up approach consists in first identifying a low-energy effective scheme in which the contributions to FCNC observables are under control and subsequently in constructing high-energy models from which the effective description can be derived. The so-called Minimal Flavour Violation (MFV) \cite{Chivukula:1987py,Hall:1990ac,Ciuchini:1998xy,Buras:2000dm} follows this second approach. The fact that so far no evident deviations from the SM predictions have been found in any flavour process observed in the hadronic sector \cite{Isidori:2010kg}, from rare decays in the kaon and pion sector to $B$ decays at super$B$--factories, can be a sign that any physics beyond the SM does not introduce significant new sources of flavour and CP violation with respect to the SM. In Refs.~\cite{D'Ambrosio:2002ex,Cirigliano:2005ck,Davidson:2006bd,Alonso:2011jd}, this criterion has been rigorously defined in terms of flavour symmetries, considering an effective operator description within the SM. More in detail, restricted to the quark sector, the flavour symmetry coincides with the symmetry of the SM Lagrangian in the limit of vanishing Yukawa couplings. This symmetry can be written as the product of non-Abelian $SU(3)$ terms, \begin{equation} G_f=SU(3)_{Q_L}\times SU(3)_{U_R}\times SU(3)_{D_R}\,, \label{FSMFV} \end{equation} and three additional $U(1)$ factors, that can be arranged to correspond to the Baryon number, the Hypercharge and a phase transformation only on the right-handed (RH) down-type quarks. Interestingly, only the non-Abelian terms of $G_f$ control the flavour structures of the quark mass-matrices, while the $U(1)$ factors can only be responsible for overall suppressions \cite{Alonso:2011jd}. The $SU(2)_L$-doublet $Q_L$ and the $SU(2)_L$-singlets $U_R$ and $D_R$ transform under $G_f$ as \begin{equation} Q_L\sim({\bf3},{\bf1},{\bf1})\,,\qquad\qquad U_R\sim({\bf1},{\bf3},{\bf1})\,,\qquad\qquad D_R\sim({\bf1},{\bf1},{\bf3})\,. 
\end{equation} In order to write the usual SM Yukawa terms, \begin{equation} \mathcal{L}_Y=\ov{Q}_L {\cal Y}_dD_RH+\ov{Q}_L {\cal Y}_uU_R\tilde{H}+\text{h.c.}\,, \end{equation} where $\tilde{H}=i\tau_2H^*$, manifestly invariant under $G_f$, the Yukawa couplings are promoted to dimensionless fields -- called spurions -- with non-trivial transformation properties under $G_f$: \begin{equation} {\cal Y}_u\sim({\bf3},{\bf \ov{3}},{\bf1})\;,\qquad\qquad {\cal Y}_d\sim({\bf3},{\bf1},{\bf\ov{3}})\;. \label{YuYd} \end{equation} Following the MFV ansatz, quark masses and mixings arise once the electroweak symmetry is spontaneously broken by the Higgs VEV, $\mean{H}=v/\sqrt2$ with $v=246$ GeV, and the spurion fields obtain the values, \begin{equation} {\cal Y}_d=\dfrac{\sqrt2}{v}\left( \begin{array}{ccc} m_d & 0 & 0 \\ 0 & m_s & 0 \\ 0 & 0 & m_b \\ \end{array} \right)\quad\text{and}\quad {\cal Y}_u=\dfrac{\sqrt2}{v}{\cal V}^\dag\left( \begin{array}{ccc} m_u & 0 & 0 \\ 0 & m_c & 0 \\ 0 & 0 & m_t \\ \end{array}\right)\,, \label{SpurionVEVsMFV} \end{equation} where ${\cal V}$ is the unitary CKM matrix. Recently, several papers \cite{Grinstein:2010ve,Feldmann:2010yp,Guadagnoli:2011id} appeared where a MFV-like ansatz is implemented in the context of maximal gauge flavour (MGF) symmetries: in the limit of vanishing Yukawa interactions these gauge symmetries are the largest non-Abelian ones allowed by the Lagrangian of the model. The particle spectrum is enriched by new heavy gauge bosons, carrying neither colour nor electric charges, and exotic fermions, to cancel anomalies. Furthermore, the new exotic fermions give rise to the SM fermion masses through a See-Saw mechanism, in a way similar to how the light left-handed (LH) neutrinos obtain masses by the heavy RH ones. Moreover, the MFV spurions are promoted to scalar fields -- called flavons -- invariant under the gauge group of the SM, but transforming as bi-fundamental representations of the non-Abelian part of the flavour symmetry. Once the flavons develop suitable VEVs, the SM fermion masses and mixings are correctly described. Still, Refs.~\cite{Grinstein:2010ve,Feldmann:2010yp,Guadagnoli:2011id} do not provide a natural mechanism for the specific structure of the flavon VEVs. This mechanism is highly model dependent, as discussed in Refs.~\cite{Feldmann:2009dc,Alonso:2011yg}, in contrast to the fermion and gauge sectors. Such scalar fields may have a phenomenological impact, but it is above the scope of the present analysis to provide a realistic explanation for the flavon VEV alignment and we will therefore not include these scalar contributions. Even if this approach has some similarities to the usual MFV description, the presence of flavour-violating neutral gauge bosons and exotic fermions introduces modifications of the SM couplings and tends to lead to dangerous contributions to FCNC processes mediated by the new heavy particles. Consequently, the MGF framework goes beyond the standard MFV and a full phenomenological analysis of this NP scenario is mandatory to judge whether it is consistent with all available data. In this paper we focus on the specific MGF realisation presented in Ref.~\cite{Grinstein:2010ve}, even if our analysis can be easily applied to other models with gauge flavour symmetries. In particular, we extend the study performed in Ref.~\cite{Grinstein:2010ve} and point out that the parameter space of such a model can be further constrained performing a full analysis on meson oscillations. 
The number of parameters is much smaller than in other popular extensions of the SM and therefore it is not obvious that the present tensions on the flavour data can be removed or at least softened. Indeed, we observe that the model, while solving the $\varepsilon_K-S_{\psi K_S}$ tension, cannot simultaneously remove other SM flavour anomalies, which in some cases become even more pronounced. Relative to Ref.~\cite{Grinstein:2010ve} the new aspects of our analysis are: \begin{itemize} \item[-] In addition to new box-diagram contributions to $\Delta F=2$ processes, considered already in Ref.~\cite{Grinstein:2010ve}, we perform a detailed analysis including the tree-level exchanges of new heavy flavour gauge bosons. These diagrams generate LR operators that are strongly enhanced, by the renormalisation group (RG) QCD running, relatively to the standard LL operators and could a priori be very important. \item[-] The impact of the new neutral current-current operators, arising from integrating out the heavy flavour gauge bosons, to the $\bar B\to X_s\gamma$ has been studied in Ref.~\cite{Buras:2011zb} and we apply those results to the model. \item[-] We point out that for a value of $|V_{ub}|$ close to its determination from exclusive decays, i.e. in the ballpark of $3.5\times 10^{-3}$, the model can solve the present tension between $\varepsilon_K$ and $S_{\psi K_S}$. For slightly larger values of $|V_{ub}|$, the model can still accommodate the considered observables within the errors, but for the inclusive determination of $|V_{ub}|$ it suffers from tensions similar to the SM. \item[-] We scan over all NP parameters and present a correlated analysis of $\varepsilon_K$, the mass differences $\Delta M_{B_{d,s}}$ the $B^+\to\tau^+\nu$ and $\bar B\to X_s\gamma$ decays, the ratio $\Delta M_{B_d}/\Delta M_{B_s}$, the mixing-induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$, and the $b$ semileptonic CP-asymmetry $A^b_{sl}$. \item[-] We find that large corrections to the CP observables in the meson oscillations, $\varepsilon_K$, $S_{\psi K_s}$ and $S_{\psi\phi}$, are allowed. However, requiring $\varepsilon_K$ to stay inside its $3\sigma$ error range, only small deviations from the SM values of $S_{\psi K_s}$ and $S_{\psi\phi}$ are allowed. \item[-] We find that requiring $\varepsilon_K$--$S_{\psi K_S}$ tension to be removed in this model implies the values of $\Delta M_{B_d}$ and $\Delta M_{B_s}$ to be significantly larger than the data. While the inclusion of theoretical and parametric uncertainties and in particular the decrease of the weak decay constants could soften this problem, it appears from the present perspective that the model suffers from a serious $\varepsilon_K-\Delta M_{B{s,d}}$ tension. \item[-] We also investigate the correlation among two theoretically cleaner observables, $\Delta M_{B_d}/\Delta M_{B_s}$ and $BR(B^+\to\tau^+\nu)/\Delta M_{B_d}$. In this way, we strongly constrain the parameter space of the model and conclude that the tension in $BR(B^+\to\tau^+\nu)$, present already within the SM, is even increased. \item[-] We compare the patterns of flavour violation in this model with those found in the original MFV, the MFV with the addition of flavour blind phases and MFV in the left-right asymmetric framework. \item[-] As a by-product of our work we present a rather complete list of Feynman rules relevant for processes in the quark sector. \end{itemize} The structure of the paper is shown in the table of contents. 
\section{The Model} \label{sec:MGF} In this section we summarise the relevant features of the MGF construction presented in Ref.~\cite{Grinstein:2010ve}, dealing only with the quark sector. The flavour symmetry is that of eq.~(\ref{FSMFV}), but it is gauged. The spectrum is enriched by the corresponding flavour gauge bosons and by new exotic quarks, necessary to cancel the anomalies: in particular the new quarks are two coloured RH $SU(3)_{Q_L}$-triplets, one LH $SU(3)_{U_R}$-triplet and one LH $SU(3)_{D_R}$-triplet. In table \ref{tab.GRV}, we list all the fields present in the theory and their transformation properties under the gauge groups. \begin{table}[ht] \begin{center} \begin{tabular}{|c||cccc||cccc|cc|} \hline &&&&&&&&&&\\[-4mm] & $Q_L$ & $U_R$ & $D_R$ & $H$ & $\Psi_{u_R}$ & $\Psi_{d_R}$ & $\Psi_{u_L}$ & $\Psi_{d_L}$ & $Y_u$ & $Y_d$ \\[2mm] \hline \hline &&&&&&&&&&\\[-4mm] $SU(3)_c$ & $\bf3$ & $\bf3$ & $\bf3$ & $\bf1$ & $\bf3 $ & $\bf3 $ & $\bf3 $ & $\bf3 $ & $\bf1$ & $\bf1$\\[2mm] $SU(2)_L$ & $\bf2$ & $\bf1$ & $\bf1$ & $\bf2$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf1$ \\[2mm] $U(1)_Y$ & $+^1/_6$ & $+^2/_3$ & $-^1/_3$ & $+^1/_2$ & $+^2/_3$ & $-^1/_3$ & $+^2/_3$ & $-^1/_3$ & $0$ & $0$ \\[2mm] \hline &&&&&&&&&&\\[-4mm] $SU(3)_{Q_L}$ & $\bf3$ & $\bf1$ & $\bf1$ & $\bf1 $ & $\bf3$ & $\bf3$ & $\bf1 $ & $\bf1$ & $\bf\ov3$ & $\bf\ov3$ \\[2mm] $SU(3)_{U_R}$ & $\bf1$ & $\bf3$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf3$ & $\bf1$ & $\bf3$ & $\bf1$ \\[2mm] $SU(3)_{D_R}$ & $\bf1$ & $\bf1$ & $\bf3$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf1$ & $\bf3$ & $\bf1$ & $\bf3$\\[2mm] \hline \end{tabular} \end{center} \caption{\it The transformation properties of the fields under the SM and flavour gauge symmetries.} \label{tab.GRV} \end{table} With this matter content, the most general renormalisable Lagrangian invariant under the SM and flavour gauge groups can be divided into three parts: \begin{equation} \mathcal{L}=\mathcal{L}_{kin} + \mathcal{L}_{int} - V\left[H, Y_u, Y_d \right] \,. \label{Lagrangian} \end{equation} The first one, $\mathcal{L}_{kin}$, contains the kinetic terms of all the fields and the couplings of fermions and scalar bosons to the gauge bosons. The covariant derivative entering $\mathcal{L}_{kin}$ accounts for SM gauge boson-fermion interactions and additional flavour interactions involving new gauge bosons and fermions: \begin{equation}\label{eq:covariantderivatives} D_{\mu} \supset \sum_{f=Q,U,D} i\, g_f\, N_f\, (A_f)_{\mu}\,, \qquad\qquad\qquad (A_f)_{\mu}\equiv\sum_{a=1}^8 (A_f^a)_\mu\dfrac{\lambda_{SU(3)}^a}{2}\,, \end{equation} where $g_f$ are the flavour gauge coupling constants, $N_f$ the quantum numbers, $A_f^a$ the flavour gauge bosons and $\lambda_{SU(3)}^a$ the Gell-Mann matrices. The second term in eq.~(\ref{Lagrangian}), $\mathcal{L}_{int}$, contains the quark mass terms and the quark-scalar interactions: \begin{equation}\label{eq:lagrangian} \begin{split} \mathcal{L}_{\text{int}} =&\;\; \lambda_u\, \ov{Q}_L \tilde H\, \Psi_{u_R}+\lambda_u'\ov{\Psi}_{u_L} Y_u\, \Psi_{u_R} + M_u\, \ov{\Psi}_{u_L} U_R+ \\[2mm] & + \lambda_d\, \ov{Q}_L H\, \Psi_{d_R}+\lambda_u'\ov{\Psi}_{d_L} Y_d\, \Psi_{d_R} + M_d\, \ov{\Psi}_{d_L} D_R + \text{h.c.}\,, \end{split} \end{equation} where $M_{u,d}$ are universal mass parameters and $\lambda^{(\prime)}_{u,d}$ are universal coupling constants that can be chosen real, through a redefinition of the fields. 
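Before moving to the mass eigenbasis, it may help to see the see-saw structure implied by eq.~(\ref{eq:lagrangian}) at work numerically. The following is a minimal sketch (with purely illustrative values for $\lambda_u$, $\lambda'_u$, $M_u$ and for one diagonal entry of $\langle Y_u\rangle$; these are assumptions, not fitted parameters) of the per-generation $2\times2$ mass matrix in the up sector, exhibiting the product rule derived below:
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) inputs for one up-type generation, in GeV.
v = 246.0
lam_u, lamp_u = 1.0, 1.0     # universal couplings of eq. (eq:lagrangian)
M_u, Y_i = 1000.0, 5000.0    # universal mass and one diagonal flavon VEV entry

# Per-generation mass matrix mixing (U_R, Psi_uR) with (u_L, Psi_uL):
M = np.array([[0.0, lam_u * v / np.sqrt(2.0)],
              [M_u, lamp_u * Y_i]])

m_light, m_heavy = sorted(np.linalg.svd(M, compute_uv=False))
print(m_light, m_heavy)
# The product of the singular values equals |det M| = M_u * lam_u * v / sqrt(2),
# i.e. the inverse proportionality between light and heavy masses quoted below.
print(m_light * m_heavy, M_u * lam_u * v / np.sqrt(2.0))
\end{verbatim}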
The last term in eq.~(\ref{Lagrangian}), $V \left[H, Y_u, Y_d \right]$, is the scalar potential of the model, containing the SM Higgs and the flavons $Y_{u,d}$. The mechanisms of both electroweak and flavour symmetry breaking arise from the minimisation of this scalar potential. It has not been explicitly constructed in Ref.~\cite{Grinstein:2010ve} and it is beyond the scope of the present paper to provide such a scalar potential (see Ref.~\cite{Alonso:2011yg} for a recent analysis). Therefore, we assume that the spontaneous breaking of the electroweak symmetry proceeds as in the SM through the Higgs mechanism and that the spontaneous flavour symmetry breaking is driven by the flavon fields $Y_{u,d}$ which develop the following VEVs: \begin{equation} \mean{Y_d}=\hat{Y}_d\,,\qquad\qquad\mean{Y_u}=\hat{Y}_u\,V\,. \label{eq:flavonvev} \end{equation} Here $\hat{Y}_{u,d}$ are diagonal $3\times3$ matrices and $V$ is a unitary matrix. We emphasise that, despite the similarity to eq.~(\ref{SpurionVEVsMFV}) of MFV, the matrix $V$ is not the CKM matrix and the vacuum expectation values $\left\langle Y_{u,d} \right\rangle$ do not coincide with the SM Yukawa matrices. This is illustrated by moving to the fermion-mass eigenbasis. In what follows we focus on the up-quark sector, but analogous formulae can also be written for the down-quark sector. The LH and RH up-quarks mix separately giving rise to SM up-quarks $u^i_{R,L}$ and exotic up-quarks $u^{\prime i}_{R,L}$: \begin{equation} \begin{pmatrix} u^i_{R,L} \\[2mm] u^{\prime i}_{R,L} \end{pmatrix} = \begin{pmatrix} c_{u_{(R,L)i}} & -s_{u_{(R,L)i}} \\[2mm] s_{u_{(R,L)i}} & c_{u_{(R,L)i}} \end{pmatrix} \begin{pmatrix} U^i_{R,L} \\[2mm] \Psi^i_{u_{R,L}} \end{pmatrix}\,, \end{equation} where $c_{u_{(R,L)i}}$ and $s_{u_{(R,L)i}}$ are cosines and sines, respectively. Denoting with $m_{f^i}$ the mass of the up-type $f^i=\{u^i,\,u^{\prime i}\}$ quark, what follows is a direct inverse proportionality between $m_{u^i}$ and $m_{u^{\prime i}}$: \begin{equation} m_{u^i}\,m_{u^{\prime i}}=M_u\,\lambda_u\,\dfrac{v}{\sqrt2}\,. \label{SeeSawMasses} \end{equation} We can express these masses in terms of the flavour symmetry breaking parameters: \begin{equation} m_{u^i}=\dfrac{s_{u_{Ri}}\,s_{u_{Li}}}{c^2_{u_{Ri}}-s^2_{u_{Li}}}\lambda'_u(\hat{Y}_{u})_i\,,\qquad\qquad m_{u^{\prime i}}=\dfrac{c_{u_{Ri}}\,c_{u_{Li}}}{c^2_{u_{Ri}}-s^2_{u_{Li}}}\lambda'_u(\hat{Y}_{u})_i\,, \end{equation} where a straightforward calculation gives \begin{equation} s_{u_{Li}} =\sqrt{\dfrac{m_{u^i}}{M_u}\left\vert\dfrac{\lambda_u\,v\,m_{u^{\prime i}}-\sqrt2\,M_u\, m_{u^i}}{\sqrt2\,\left(m^{2}_{u^{\prime i}}-m^2_{u^i}\right)}\right\vert} \,,\quad\quad s_{u_{Ri}} =\sqrt{\dfrac{m_{u^i}}{\lambda_u\,v}\left\vert\dfrac{\sqrt2\,M_u\,m_{u^{\prime i}}-\lambda_u\,v\, m_{u^i}}{m^{2}_{u^{\prime i}}-m^2_{u^i}}\right\vert}\,. \label{FormulaSin1} \end{equation} These results are exact and valid for all quark generations. However, taking the limit $m_{u^{\prime i}}\gg m_{u^i}$, we find simple formulae that transparently expose the behaviour of the previous expressions. 
In this limit we find \begin{align} &m_{u^i} \approx \dfrac{v}{\sqrt{2}} \dfrac{\lambda_u\, M_u}{\lambda'_u\, (\hat{Y}_u)_i}\,,\qquad\qquad &&m_{u^{\prime i}}\approx \lambda'_u\, (\hat{Y}_u)_i\,,\\ &s_{u_{Li}}\approx \sqrt{\dfrac{m_{u^i}}{m_{u^{\prime i}}}\dfrac{\lambda_u\,v}{\sqrt2\,M_u}}\,,\qquad\qquad &&s_{u_{Ri}}\approx \sqrt{\dfrac{m_{u^i}}{m_{u^{\prime i}}}\dfrac{\sqrt2\,M_u}{\lambda_u\,v}}\,, \label{FormulaSin2} \end{align} as it is in the usual see-saw scheme in the limit of $(\hat{Y}_u)_i\gg M_u\,,v$. These simplified relations are valid for all the fermions, apart from the top-quark for which the condition $m_{t'}\gg m_t$ is not satisfied and large corrections to eq.~(\ref{FormulaSin2}) are expected. From eq.~(\ref{FormulaSin2}) we see that to reproduce the correct SM quark spectrum, $\hat{Y}_u$ must have an inverted hierarchy with respect to the SM Yukawas.\\ The presence of new exotic quarks has a relevant impact on the SM couplings. Indeed, the charged current-current interactions including SM and heavy quarks are governed by a $6\times6$ matrix which is constructed from the unitary $3\times 3$ matrix $V$ of eq.~(\ref{eq:flavonvev}) and the $c_{u_{Li}}$, $c_{d_{Li}}$, $s_{u_{Li}}$ and $s_{d_{Li}}$ with ($i=1,2,3$) introduced above. Adopting a matrix notation, the non-unitary $3\times3$ matrices \begin{equation}\label{CKM1} c_{u_L}\,V\,c_{d_L}\,,\quad\quad s_{u_L}\,V\,s_{d_L} \end{equation} describe the charged ($W^+$) current-current interactions within the light and heavy systems, respectively. The analogous matrices \begin{equation}\label{CKM2} c_{u_L}\,V\,s_{d_L}\,,\quad\quad s_{u_L}\,V\,c_{d_L} \end{equation} describe the charged current-current interactions between light and heavy fermions. In this notation, $c_{{u,d}_L}$ and $s_{{u,d}_L}$ are diagonal matrices, whose entries are $c_{{u,d}_{Li}}$ and $s_{{u,d}_{Li}}$, respectively. Moreover, we point out that in the no-mixing limit, $c_{{u,d}_{Li}}\to1$ and $s_{{u,d}_{Li}}\to0$, the (non-unitary) $6\times6$ matrix reduces to \begin{equation} \begin{pmatrix} V & 0 \\[2mm] 0 & 0 \end{pmatrix}\,. \end{equation} In this case the CKM matrix coincides with the unitary matrix $V$. As soon as the mixing is switched on, the CKM is modified to include $c_{{u,d}_L}$, which breaks unitarity. However, these deviations from unitarity are quite small (see sec.~\ref{sec:ckmmatrix}). Moreover no new CP violating phases appear in the resulting CKM matrix. At first sight their absence implies no impact of new contributions to the CP-violating observables $S_{\psi K_s}$ and $S_{\psi\phi}$. However, this is not the case due to the modification of the CKM matrix and the presence of flavour gauge bosons. In this respect, this framework does differ from the original MFV of Ref.~\cite{D'Ambrosio:2002ex}. A consequence of the modification of the CKM matrix is the breaking of the GIM mechanism if only SM quarks are considered in loop-induced processes. However, once also the exotic quarks are included the GIM mechanism is recovered. We return to this issue in sec.~\ref{sec:deltaF2properties}. The interactions with the $Z$ boson and the Higgs field are modified too. Their effects have been already discussed in Ref.~\cite{Grinstein:2010ve} and it turned out that the largest constraint comes from the modified $Z\,b\,\bar b$ coupling.\\ Once the flavour symmetry is spontaneously broken by the flavon VEVs, the flavour gauge bosons acquire masses and mix among themselves. 
Using the vector notation for the flavour gauge bosons, \begin{equation} \chi = \left( A_Q^1, \ldots, A_Q^8, A_U^1, \ldots, A_U^8, A_D^1, \ldots, A_D^8 \right)^T\,, \end{equation} the corresponding mass Lagrangian reads \begin{equation} \mathcal{L}_{\text{mass}} = \dfrac{1}{2}\, \chi^T\,{\cal M}_A^2\,\chi\,\qquad\text{with}\qquad {\cal M}_A^2 = \begin{pmatrix} M^2_{QQ} & M^2_{QU} & M^2_{QD} \\[1mm] M^2_{UQ} & M^2_{UU} & 0 \\[1mm] M^2_{DQ} & 0 & M^2_{DD} \end{pmatrix}\,, \label{eq:massmatrix} \end{equation} and \begin{equation} \begin{aligned} \left( M^2_{QQ} \right)_{ab} = & \frac{1}{4}\, g_Q^2\, \text{Tr} \left[ \left\langle Y_u \right\rangle \left\{ \lambda_{SU(3)}^a,\lambda_{SU(3)}^b \right\} \left\langle Y_u \right\rangle^{\dag} + \left\langle Y_d \right\rangle \left\{ \lambda_{SU(3)}^a,\lambda_{SU(3)}^b \right\} \left\langle Y_d \right\rangle^{\dag} \right] \\ \left( M^2_{UU} \right)_{ab} = & \frac{1}{4}\, g_U^2\, \text{Tr} \left[ \left\langle Y_u \right\rangle \left\{ \lambda_{SU(3)}^a,\lambda_{SU(3)}^b \right\} \left\langle Y_u \right\rangle^{\dag} \right] \\ \left( M^2_{DD} \right)_{ab} = & \frac{1}{4}\, g_D^2\, \text{Tr} \left[ \left\langle Y_d \right\rangle \left\{ \lambda_{SU(3)}^a,\lambda_{SU(3)}^b \right\} \left\langle Y_d \right\rangle^{\dag} \right] \\ \left( M^2_{QU} \right)_{ab} = & \left( M^2_{UQ} \right)_{ba} = - \frac{1}{2}\, g_Q\, g_U\, \text{Tr} \left[ \lambda_{SU(3)}^a \left\langle Y_u \right\rangle^{\dag} \lambda_{SU(3)}^b \left\langle Y_u \right\rangle \right] \\ \left( M^2_{QD} \right)_{ab} = & \left( M^2_{DQ} \right)_{ba} = - \frac{1}{2}\, g_Q\, g_D\, \text{Tr} \left[ \lambda_{SU(3)}^a \left\langle Y_d \right\rangle^{\dag} \lambda_{SU(3)}^b \left\langle Y_d \right\rangle \right]\,. \end{aligned} \label{FGBmasses} \end{equation} In general, the diagonalisation of this mass-matrix is only numerically possible; for the rest of the paper we shall indicate with $\hat{{\cal M}}^2_A$ the diagonal matrix of the gauge boson mass eigenstates $\hat{A}^m$, where $m=1,\ldots,24$, and with ${\cal W}(\hat A^m,\,A_f^a)$, where $f=\{Q,\,U,\,D\}$ and $a=1,\ldots,8$, the transformation to move from the flavour-basis to the mass-basis (see App.~\ref{app:coup2flavour}). \section{\texorpdfstring{\mathversion{bold}$\Delta F=2$\mathversion{normal} Transitions}{Delta F=2 Transitions}} \label{sec:DeltaF2} \subsection{Effective Hamiltonian} In the model in question the effective Hamiltonian for $\Delta F=2$ observables with external down-type quarks consists at the leading order in weak and flavour-gauge interactions of two parts: \begin{itemize} \item[-] Box-diagrams with SM $W$-boson and up-type quark exchanges. Due to the mixing among light and heavy quarks, there are three different types of such diagrams: with light quarks only, with heavy quarks only or with both light and heavy quarks running in the box, as shown in Fig.~\ref{fig:deltaF2box}. \begin{figure}[h!] \begin{center} \includegraphics{deltaF2box} \end{center} \caption{\it The box-diagrams contributing to $K^0-\bar{K}^0$ mixing. Similarly for $B^0_q-\bar{B}^0_q$ mixing.} \label{fig:deltaF2box} \end{figure} If only exchanges of SM quarks are considered, the GIM mechanism is broken in these contributions. It is recovered when also the exchanges of heavy quarks are taken into account. \item[-] The tree-level contributions from heavy gauge boson exchanges of Fig.~\ref{fig:deltaF2tree}, that generate new neutral current-current operators, which violate flavour. \begin{figure}[h!] 
\begin{center} \includegraphics{deltaF2tree} \end{center} \caption{\it The tree-diagrams contributing to $K^0-\bar{K}^0$ mixing. Similarly, for $B^0_q-\bar{B}^0_q$ mixing. $\hat{A}^m$ is a flavour gauge boson mass eigenstate.} \label{fig:deltaF2tree} \end{figure} \end{itemize} In principle one could consider box-diagrams with flavour-violating neutral heavy boson exchanges but they are negligible with respect to the tree-level contributions. The effective Hamiltonian for $\Delta F=2$ transitions can then be written in a general form as \begin{equation}\label{Heff-general} {\cal H}_\text{ eff}^{\Delta F=2} =\frac{G_F^2\,M^2_{W}}{4\pi^2} \sum_{u^i} C_i(\mu)Q_i, \end{equation} where $M_W$ is the mass of the $W$-boson, $Q_i$ are the relevant operators for the transitions, that we list below, and $C_i(\mu)$ their Wilson coefficients evaluated at a scale $\mu$, which will be specified in the next section. While in the SM only one operator contributes to each $\Delta F=2$ transition, i.e. $Q_1^{\rm VLL}(M)$ in the list of eq.~(\ref{normalM}), in the model in question there are more dimension-six operators. In the absence of flavon exchanges, the relevant operators for the $M^0$--$\bar{M}^0$ ($M=K,B_d,B_s$) systems are \cite{Buras:2000if}: \begin{equation} \begin{aligned} Q_1^{\rm VLL}(K) &= (\bar{s}^{\alpha} \gamma_{\mu} P_L d^{\alpha}) (\bar{s}^{ \beta} \gamma^{\mu} P_L d^{ \beta})\,,\qquad\quad &Q_1^{\rm VLL}(B_q) &= (\bar{b}^{\alpha} \gamma_{\mu} P_L q^{\alpha}) (\bar{b}^{ \beta} \gamma^{\mu} P_L q^{ \beta})\,,\\[2mm] Q_1^{\rm VRR}(K) &= (\bar{s}^{\alpha} \gamma_{\mu} P_R d^{\alpha}) (\bar{s}^{ \beta} \gamma^{\mu} P_R d^{ \beta})\,,\qquad\quad &Q_1^{\rm VRR}(B_q) &= (\bar{b}^{\alpha} \gamma_{\mu} P_R q^{\alpha}) (\bar{b}^{ \beta} \gamma^{\mu} P_R q^{ \beta})\,,\\[2mm] Q_1^{\rm LR}(K) &= (\bar{s}^{\alpha} \gamma_{\mu} P_L d^{\alpha}) (\bar{s}^{ \beta} \gamma^{\mu} P_R d^{ \beta})\,,\qquad\quad &Q_1^{\rm LR}(B_q) &= (\bar{b}^{\alpha} \gamma_{\mu} P_L q^{\alpha}) (\bar{b}^{ \beta} \gamma^{\mu} P_R q^{ \beta})\,,\\[2mm] Q_2^{\rm LR}(K) &= (\bar{s}^{\alpha} P_L d^{\alpha}) (\bar{s}^{ \beta} P_R d^{ \beta})\,,\qquad\quad &Q_2^{\rm LR}(B_q) &= (\bar{b}^{\alpha} P_L q^{\alpha}) (\bar{b}^{ \beta} P_R q^{ \beta})\,. \end{aligned} \label{normalM} \end{equation} where $P_{L,R} = (1\mp \gamma_5)/2$. In the next section, we collect the Wilson coefficients of these operators separating the contributions from box-diagrams and from the tree-level heavy gauge boson exchanges so that \begin{equation} C^{(M)}_i=\Delta^{(M)}_{\rm Box}\,C_i+\Delta^{(M)}_{\rm A}C_i\,, \end{equation} where $M=K,\,B_d,\,B_s$. \subsection{Wilson Coefficients from Box-Diagrams} Keeping in mind the discussion around eqs.~(\ref{CKM1}) and (\ref{CKM2}) we introduce the mixing parameters: \begin{equation} \lambda_i(K)=V_{is}^{*}V_{id},\qquad \lambda_i(B_q)=V_{ib}^{*}V_{iq}, \end{equation} where $q=d,s$ and $V$ is not the CKM matrix but the unitary matrix of eq.~\eqref{eq:flavonvev}. 
Calculating the usual box-diagrams but including also contributions from heavy fermions (see Fig.~\ref{fig:deltaF2box}) and corrections to $W$-quark vertices according to the Feynman rules in App.~\ref{app:coup2smgauge} we find the following contributions to the Wilson coefficients relevant for the $K^0-\bar K^0$ system at the matching scale $\mu_t$ in the ballpark of the top quark mass\footnote{We explain this choice in the context of QCD corrections below.}: \begin{equation} \Delta^{(K)}_{\rm Box}C_1^{VLL}(\mu_t)=\Delta_1(\mu_t,K)+\Delta_2(\mu_t,K)+\Delta_3(\mu_t,K)\,, \end{equation} where \begin{align} \label{VLLK1} &\Delta_1(\mu_t,K)=(c_{d_{L1}}\,c_{d_{L2}})^2\,\sum_{i,j=1,2,3}\lambda_i(K)\,\lambda_j(K)\, c^2_{u_{Li}}\,c^2_{u_{Lj}}\, F(x_i,x_j)\,,\\ \label{VLLK2} &\Delta_2(\mu_t,K)=(c_{d_{L1}}\,c_{d_{L2}})^2\,\sum_{i,j=1,2,3}\lambda_i(K)\,\lambda_j(K)\, s^2_{u_{Li}}\,s^2_{u_{Lj}}\, F(x^\prime_i,x^\prime_j)\,,\\ \label{VLLK3} &\Delta_3(\mu_t,K)=(c_{d_{L1}}\,c_{d_{L2}})^2\sum_{i,j=1,2,3}\lambda_i(K)\,\lambda_j(K)\, \left[ c^2_{u_{Li}}\,s^2_{u_{Lj}}\, F(x_i,x^\prime_j)+ s^2_{u_{Li}}\,c^2_{u_{Lj}}\, F(x^\prime_i,x_j) \right]\,. \end{align} The arguments of the box-functions $F$ are \begin{equation} x_i=\left(\frac{m_{u^i}}{M_{W}}\right)^2\,, \qquad x^\prime_j=\left(\frac{m_{u^{\prime j}}}{M_{W}}\right)^2\,, \end{equation} where both $i$ and $j$ run over $1,2,3$. The loop-function $F(x_i,x_j)$ is \begin{equation} F(x_i,x_j) = \dfrac{1}{4}\Big[ \left(4+x_i\,x_j\right)\,I_2\left(x_i,\,x_j\right) - 8\, x_i\, x_j\, I_1\left(x_i,\, x_j\right)\Big] \end{equation} with \begin{equation} \begin{aligned} I_1(x_i,\,x_j) & = & \dfrac{1}{(1-x_i)(1-x_j)} + \left[\frac{x_i\, \ln(x_i)}{(1-x_i)^2 (x_i-x_j)} + (i\leftrightarrow j)\right]\,,\\ I_2(x_i,\,x_j) & = & \dfrac{1}{(1-x_i)(1-x_j)} + \left[ \frac{x_i^2\, \ln(x_i)}{(1-x_i)^2 (x_i-x_j)} + (i\leftrightarrow j)\right]. \end{aligned} \end{equation} For the $B_{q}^0-\bar B_{q}^0$ mixing we have to replace $K$ by $B_q$ and $c_{d_{L1}}\,c_{d_{L2}}$ by $c_{d_{L1}}\,c_{d_{L3}}$ ($c_{d_{L2}}\,c_{d_{L3}}$) in the case of $q=d$ ($q=s$). There are no contributions to other coefficients from box-diagrams. \subsection{Wilson Coefficients from Tree-Diagrams} Calculating the tree-level diagrams in Fig.~\ref{fig:deltaF2tree} with the exchange of neutral gauge boson mass-eigenstates $\hat{A}^m$ ($m=1,\ldots,24$) we find the following contributions to the Wilson coefficient at the high scale $\mu_H$, which is of the order of the mass of the corresponding neutral gauge boson: for the $K$ system we have \begin{align} \label{AVLL} &\Delta^{(K)}_AC_1^{VLL}(\mu_H)=\frac{4\pi^2}{G_F^2M_W^2}\sum_{m=1}^{24}\frac{1}{2\, \hat{M}^2_{A^m}} \left[\left(\hat {\cal G}^d_L\right)_{ds,m}\right]^2\\ \label{AVRR} &\Delta^{(K)}_AC_1^{VRR}(\mu_H)=\frac{4\pi^2}{G_F^2M_W^2}\sum_{m=1}^{24}\frac{1}{2\, \hat{M}^2_{A^m}} \left[\left(\hat {\cal G}^d_R\right)_{ds,m}\right]^2\\ \label{ALR} &\Delta^{(K)}_AC_1^{LR}(\mu_H)=\frac{4\pi^2}{G_F^2M_W^2}\sum_{m=1}^{24}\frac{1}{2\, \hat{M}^2_{A^m}} \left[2\,\left(\hat {\cal G}^d_L\right)_{ds,m}\, \left(\hat {\cal G}^d_R\right)_{ds,m} \right] \end{align} where the indices $d$ and $s$ stand for the external quarks $d$ and $s$, while the index $m$ refers to the $\hat{A}^m$ gauge boson mass-eigenstate. The corresponding expressions for the $B_d$ ($B_s$) system are easily derived from the previous ones by substituting $ds$ with $db$ ($sb$) in the indices of the couplings. 
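A compact numerical sketch of the loop functions just defined may be useful. The $\lambda_i$ values below are illustrative placeholders (they only satisfy $\sum_i\lambda_i=0$, as unitarity of $V$ would impose) and serve to exhibit the GIM-like cancellation discussed in the next subsection:
\begin{verbatim}
import numpy as np

def I1(xi, xj):
    # Basic box integrals entering F(x_i, x_j); valid for x_i != x_j, x != 1.
    return (1.0 / ((1 - xi) * (1 - xj))
            + xi * np.log(xi) / ((1 - xi) ** 2 * (xi - xj))
            + xj * np.log(xj) / ((1 - xj) ** 2 * (xj - xi)))

def I2(xi, xj):
    return (1.0 / ((1 - xi) * (1 - xj))
            + xi ** 2 * np.log(xi) / ((1 - xi) ** 2 * (xi - xj))
            + xj ** 2 * np.log(xj) / ((1 - xj) ** 2 * (xj - xi)))

def F(xi, xj):
    if xi == xj:                      # nudge the degenerate point; F has a finite limit
        xj = xj * (1.0 + 1e-7)
    return 0.25 * ((4.0 + xi * xj) * I2(xi, xj) - 8.0 * xi * xj * I1(xi, xj))

# Toy GIM check: since the lambda_i sum to zero, nearly degenerate masses give a
# tiny double sum, while hierarchical masses break the cancellation.
lam = np.array([0.2, 0.3, -0.5])            # illustrative, sum(lam) = 0
x_degenerate = np.array([2.0, 2.001, 2.002])
x_split = np.array([1e-6, 0.3, 4.0])

for x in (x_degenerate, x_split):
    S = sum(lam[i] * lam[j] * F(x[i], x[j]) for i in range(3) for j in range(3))
    print(S)
\end{verbatim}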
The explicit expression for the couplings $\left(\hat {\cal G}^d_{L,R}\right)_{ij,m}$ are given in App.~\ref{app:coup2flavour}. \subsection{Properties}\label{sec:deltaF2properties} We note a few properties: \begin{itemize} \item[-] Focussing on eqs.~(\ref{VLLK1})--(\ref{VLLK3}) and the corresponding expressions in the $B_{d,s}$ systems, for a fixed $\lambda_i\lambda_j$, we have in the box-diagram contributions the combination \begin{equation} {\cal F}_{ij}\equiv c^2_{u_{Li}}c^2_{u_{Lj}} F(x_i,x_j)+s^2_{u_{Li}}s^2_{u_{Lj}} F(x^\prime_i,x^\prime_j)+ c^2_{u_{Li}}s^2_{u_{Lj}} F(x_i,x^\prime_j)+ s^2_{u_{Li}}c^2_{u_{Lj}} F(x^\prime_i,x_j). \label{CalliF} \end{equation} If all fermion masses were degenerate, this combination would be independent of $i,j$ and the unitarity of the matrix $V$ would assure the vanishing of FCNC currents. This is precisely what one expects from the GIM mechanism. \item[-] It is possible to arrange the function ${\cal F}$ in order to match with the usual notation: for the $K$ system we write \begin{equation} \label{S0} \begin{aligned} S_0(x_t)&\longrightarrow S_t^{(K)}\equiv(c_{d_{L1}}\,c_{d_{L2}})^2 \left({\cal F}_{33}+{\cal F}_{11}-2{\cal F}_{13}\right)\,,\\ S_0(x_c)&\longrightarrow S_c^{(K)}\equiv(c_{d_{L1}}\,c_{d_{L2}})^2 \left({\cal F}_{22}+{\cal F}_{11}-2{\cal F}_{12}\right)\,,\\ S_0(x_c,x_t)&\longrightarrow S_{ct}^{(K)}\equiv(c_{d_{L1}}\,c_{d_{L2}})^2 \left({\cal F}_{23}+{\cal F}_{11}-{\cal F}_{13}-{\cal F}_{12}\right)\,. \end{aligned} \end{equation} For the $B_q$ systems we define similar functions $S_i^{(B_q)}$ that can be simply derived from the previous ones by substituting $c_{d_{L1}}\,c_{d_{L2}}$ with $c_{d_{L1}}\,c_{d_{L3}}$ ($c_{d_{L2}}\,c_{d_{L3}}$) in the case of $q=d$ ($q=s$). In particular the combination of the ${\cal F}_{ij}$ factors are universal. In order to recover the $S_0$ functions from the $S^{(M)}_i$ expressions it is necessary to take the limit in which all the cosines are equal to $1$ and all the sines are zero. \item[-] The appearance of $c_i$ and $s_j$ factors introduces in general new flavour dependence, implying violation of certain MFV relations even in the absence of new CP-violating phases. \item[-] There are no purely new CP-violating phases in this model, but the CP-odd phase of the CKM matrix induces sizeable new effects through new contributions to the mixing induced CP-asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$, in the $B_d^0-\bar B_d$ and the $B_s^0-\bar B_s^0$ systems, respectively. Moreover, similarly to the mass differences $\Delta M_{B_{d,s}}$, new flavour-violating contributions affect the parameter $\varepsilon_K$ and there are correlations between the new physics contributions to all these observables as we shall see below. \item[-] The heavy flavour gauge bosons show flavour-violating couplings that can be strongly hierarchical: looking at the largest values of these couplings we find \begin{equation} \left(\hat {\cal G}^d_{L,R}\right)_{sb}\gg\left(\hat {\cal G}^d_{L,R}\right)_{db}\gg\left(\hat {\cal G}^d_{L,R}\right)_{ds}\,,\quad\quad \left(\hat {\cal G}^u_{L,R}\right)_{ct}\gg\left(\hat {\cal G}^u_{L,R}\right)_{ut}\gg\left(\hat {\cal G}^u_{L,R}\right)_{uc}\,. \end{equation} An example is presented in App.~\ref{app:coup2flavour} for the lightest gauge boson. This hierarchy is due to both the mixings among SM and exotic quarks and the sequential breaking of the flavour symmetry encoded in the flavon VEVs, as seen from Eqs.~(\ref{FGBmasses}), (\ref{DefGunhat}) and (\ref{DefGhat}). 
\end{itemize} \subsection{QCD Corrections and Hadronic Matrix Elements} The complete analysis requires the inclusion of the renormalisation group QCD evolution from the high scales, at which the initial effective Hamiltonians given above are constructed, down to low energy scales, at which the hadronic matrix elements are evaluated by lattice methods. A complication arises in the model in question as several rather different high scales are involved, such as the mass of the $W$-boson $M_{W}$, the masses of the neutral gauge bosons $\hat{M}_{A^m}$ and the masses of the heavy quarks $m_{q^{\prime i}}$. Before addressing this problem we recall a very efficient method for the inclusion of all these QCD effects in the presence of a single high scale, which we denote by $\mu_H$. Instead of evaluating the hadronic matrix elements at the low-energy scale, we can evaluate them at $\mu_H$, corresponding to the scale at which heavy particles are integrated out. The amplitude for $M-\bar{M}$ mixing ($M= K^0, B^0_d,B^0_s$) at the scale $\mu_H$ is then simply given by \begin{equation} \label{amp6} {\cal A}(M\to \bar{M})=\dfrac{G_F^2\,M^2_{W}}{4\pi^2}\sum_{i,a} C^a_i(\mu_H)\langle \bar{M} |Q^a_i(\mu_H)|M\rangle\,, \end{equation} where the sum runs over all the operators listed in eq.~(\ref{normalM}). The matrix element for $M-\bar{M}$ mixing is given by \begin{equation} \label{eq:matrix} \langle \bar M|Q_i^a(\mu_H)|M\rangle = \dfrac{2}{3}\,m_{M}^2\, F_{M}^2\, P_i^a(M), \end{equation} where the coefficients $P_i^a(M)$ collect compactly all RG effects from scales below $\mu_H$ as well as hadronic matrix elements obtained by lattice methods at low energy scales. Analytic formulae for all these coefficients, $P_i^a(B_q)$ and $P_i^a(K)$, are given in Ref.~\cite{Buras:2001ra}, while the corresponding numerical values will be given below for some interesting values of $\mu_H$. The question then is how to generalise this method to the case at hand, which involves several rather different high scales. There are three types of contributions for which the relevant high energy scales attributed to the coefficients quoted above will differ from each other: \begin{enumerate} \item The SM box-diagrams involving $W$-bosons and the SM quarks. Here the scale is chosen to be $\mu_t={\cal O}(m_t)$. \item Tree-level diagrams mediated by neutral heavy gauge bosons, $\hat A^m$. Since we are taking into consideration the contributions from all such gauge bosons, we shall take as the initial scale for the RG evolution in each case exactly the mass of the involved gauge boson. \item The only case that appears problematic at first sight is that of box-diagrams involving simultaneously heavy and light particles. Here the correct procedure would be to integrate out first the heavy fermions and construct an effective field theory not involving them as dynamical degrees of freedom. However, as the only relevant contribution comes from the lightest exotic fermion\footnote{This is strictly true only for the $B_q$ systems, because in the $K$ system, due to the CKM suppressions, the contribution from $c'$ may be non-negligible; this is accounted for in our numerical analysis. Still, the most relevant contribution comes from $t'$.}, that is $t^\prime$, whose mass is relatively close to $m_t$, we can set the matching scale to be $\mu_t$ also in this case. As the dominant effects from RG evolution, included here, come from scales below $M_{W}$, this procedure should approximate the exact one sufficiently well.
\end{enumerate} Having the initial conditions for the Wilson coefficients at a given high scale $\mu_H$, and provided the corresponding hadronic matrix elements at this scale are known, we can calculate the relevant $M-\bar{M}$ amplitude by means of eq.~(\ref{amp6}). As seen in eq.~(\ref{eq:matrix}) these matrix elements are directly given in terms of the parameters $P_i^a(K)$, $P_i^a(B_d)$ and $P_i^a(B_s)$, for which explicit expressions in terms of RG QCD factors and the non-perturbative parameters $B_i^a(\mu_L)$ are given in eqs.~(7.28)--(7.34) of Ref.~\cite{Buras:2001ra}; here $\mu_L$ denotes the low-energy scale and takes the value $2\, {\rm GeV}$ ($4.6\, {\rm GeV}$) for the $K$ system ($B_q$ systems). The $B_i^a(\mu_L)$ parameters are subject to considerable uncertainties. An exception are the $B_1^{VLL}$ parameters, for which significant progress has been made in recent years by lattice simulations. In the SM analysis, the RG invariant parameters $\hat B_1^{VLL}$ are usually considered and denoted by $\hat B_K$ and $\hat B_{B_q}$. We report their values in tab.~\ref{tab:input}. For completeness we recall the values of $B_1^{VLL}$ that we extracted from the most recent lattice simulations: \begin{equation} \begin{aligned} &B_1^{VLL} = 0.515(14)\,,\qquad\qquad &&\text{for $K$ system}\\ &B_1^{VLL} = 0.825(72)\,,\qquad\qquad &&\text{for $B_d$ system}\\ &B_1^{VLL} = 0.871(39)\,,\qquad\qquad &&\text{for $B_s$ system}\,. \end{aligned} \end{equation} As these parameters are the same for the VRR contributions, we will combine them with the VLL contributions in the final formulae at the end of this section. Neglecting the unknown ${\cal O}(\alpha_s)$ contributions to the Wilson coefficients of the remaining operators at the high energy scale, our NLO RG analysis involves only the values of the coefficients $P_1^{LR}(K)$, $P_1^{LR}(B_d)$ and $P_1^{LR}(B_s)$ calculated at $\mu_H$. We are not considering the intermediate thresholds of exotic quarks, since the smallness of $\alpha_s$ and the absence of flavour dependence in the LO anomalous dimensions of the contributing operators render the corresponding effects negligible. To obtain these values we need only the values of $B_1^{LR}$ and $B_2^{LR}$, which we report below in the NDR scheme\footnote{These values can be found in Refs.~\cite{Babich:2006bh,Becirevic:2001xt}, where $B_1^{LR}$ ($B_2^{LR}$) is called $B_5$ ($B_4$).} \cite{Babich:2006bh,Becirevic:2001xt}: \begin{equation} \begin{aligned} &B_1^{LR} = 0.562(39)(46)\,,\qquad &&B_2^{LR} = 0.810(41)(31)\,,\qquad &&\text{for $K$ system}\\ &B_1^{LR} = 1.72(4)({}^{+20}_{-6})\,,\qquad &&B_2^{LR} = 1.15(3)({}^{+5}_{-7})\,,\qquad &&\text{for $B_d$ system}\\ &B_1^{LR} = 1.75(3)({}^{+21}_{-6})\,,\qquad &&B_2^{LR} = 1.16(2)({}^{+5}_{-7})\,,\qquad &&\text{for $B_s$ system}\,. \end{aligned} \end{equation} In tab.~\ref{tab:PiFactors}, we show the resulting $P_i$ factors for some relevant values of $\mu_H$.
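As a simple numerical illustration of eqs.~(\ref{amp6}) and (\ref{eq:matrix}), the following sketch (Python, added here purely for illustration) assembles one term of the $K^0-\bar K^0$ amplitude from a $P_1^{LR}$ factor, using $P_1^{LR}(1\, {\rm TeV},K)=-39.3$ from tab.~\ref{tab:PiFactors}; the Wilson coefficient value is a placeholder, not a prediction of the model:
\begin{verbatim}
import math

GF = 1.16637e-5   # Fermi constant in GeV^-2
MW = 80.399       # W mass in GeV
mK = 0.497614     # K^0 mass in GeV
FK = 0.1560       # K decay constant in GeV

def matrix_element(P, mM, FM):
    # <Mbar| Q_i^a(mu_H) |M> = (2/3) m_M^2 F_M^2 P_i^a(M), in GeV^3
    return (2.0 / 3.0) * mM**2 * FM**2 * P

def amplitude_term(C, P, mM, FM):
    # one term of the M -> Mbar amplitude: (G_F^2 M_W^2 / 4 pi^2) C_i^a <Q_i^a>
    return GF**2 * MW**2 / (4.0 * math.pi**2) * C * matrix_element(P, mM, FM)

# LR piece at mu_H = 1 TeV; C = 1e-8 is a purely illustrative coefficient
print(amplitude_term(1.0e-8, -39.3, mK, FK))
\end{verbatim}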
\begin{table}[ht] \begin{center} \begin{tabular}{|l||c|c|c|c|} \hline &&&&\\[-3mm] $~\qquad\mu_H$ & $500\, {\rm GeV}$ & $1\, {\rm TeV}$ & $3\, {\rm TeV}$ & $10\, {\rm TeV}$ \\[2mm] \hline &&&&\\[-3mm] $P_1^{VLL}(\mu_H,\,K)$ & 0.392 & 0.384 & 0.373 & 0.363 \\[2mm] $P_1^{LR}(\mu_H,\,K)$ & -35.7 & -39.3 & -45.0 & -51.4 \\[2mm] $P_1^{VLL}(\mu_H,\,B_d)$ & 0.675 & 0.662 & 0.643 & 0.624 \\[2mm] $P_1^{LR}(\mu_H,\,B_d)$ & -2.76 & -2.97 & -3.31 & -3.69 \\[2mm] $P_1^{VLL}(\mu_H,\,B_s)$ & 0.713 & 0.698 & 0.678 & 0.659 \\[2mm] $P_1^{LR}(\mu_H,\,B_s)$ & -2.76 & -2.97 & -3.31 & -3.69\\[2mm] \hline \end{tabular} \end{center} \caption{\it Central values of $P_i$ factors for \mbox{$\mu_H=\{0.5,\,1,\,3,\,10\}\, {\rm TeV}$.}} \label{tab:PiFactors} \end{table} Notice that the LR operators, arising from integrating out the heavy flavour gauge bosons, are strongly enhanced by the RG QCD running, as can be deduced from the values of the $P_1^{LR}$ factors. A priori, such contributions could be very important. \subsection[\texorpdfstring{Final Formulae for $\Delta F=2$ Observables} {Final Formulae for Delta F=2 Observables}]{ \mathversion{bold}Final Formulae for $\Delta F=2$ Observables\mathversion{normal} } We collect here the formulae we shall use in our numerical analysis. The mixing amplitude $M^i_{12}$ $(i=K,d,s)$ is related to the relevant effective Hamiltonian through \begin{equation} 2\,m_K\left(M_{12}^K\right)^\ast= \langle\bar K^0|{\cal H}_\text{ eff}^{\Delta S=2}|K^0\rangle\,,\qquad\qquad 2\,m_{B_q}\left(M_{12}^q\right)^\ast= \langle\bar B_q^0|{\cal H}_\text{ eff}^{\Delta B=2}|B_q^0\rangle \label{MixingAmplitudeHamiltonian} \end{equation} with $q=d,s$. The $K_L-K_S$ mass difference and the CP-violating parameter $\varepsilon_K$ are then given by \begin{equation} \Delta M_K=2\RE\left(M_{12}^K\right)\,,\qquad\qquad \varepsilon_K=\dfrac{\kappa_\epsilon\, e^{i\,\varphi_\epsilon}}{\sqrt{2}(\Delta M_K)_\text{exp}}\IM\left(M_{12}^K\right)\,, \label{DeltaMKepsilonK} \end{equation} where $\varphi_\epsilon = (43.51\pm0.05)^\circ$ and $\kappa_\epsilon=0.923\pm0.006$ takes into account that $\varphi_\epsilon\ne \pi/4$ and includes long distance effects in $\IM \Gamma_{12}$ \cite{Buras:2008nn}\footnote{This value has been confirmed by lattice and presented with a smaller error in Ref.~\cite{Blum:2011ng}.} and $\IM M_{12}$ \cite{Buras:2010pza}. The mixing amplitude entering the previous expressions can be decomposed into two parts, one containing the $LL$ and $RR$ contributions and the second only the $LR$ ones: \begin{equation} M^K_{12}=(M^K_{12})_1+(M^K_{12})_2, \end{equation} where \begin{equation} \begin{aligned} &\begin{split} (M_{12}^K)_1=\dfrac{G_F^2\,M_W^2}{12\pi^2}\,F_K^2\,m_K\Big[&\hat{B}_K\,\eta_1\,\lambda^2_2(K)\,S^{(K)}_c+\hat{B}_K\,\eta_2\,\lambda^2_3(K)\,S^{(K)}_t+\\ &+2\,\hat{B}_K\,\eta_3\,\lambda_2(K)\,\lambda_3(K)\,S_{ct}^{(K)}+\\ &+P_1^{VLL}(\mu_H,K)\,\left(\Delta^{(K)}_A\,C_1^{VLL}(\mu_H)+\Delta^{(K)}_A\,C_1^{VRR}(\mu_H)\right)\, \Big]^*\,, \end{split}\\ &(M_{12}^K)_2=\dfrac{G_F^2\,M_W^2}{12\pi^2}\,F_K^2\,m_K\,P_1^{LR}(\mu_H,K)\,\Delta^{(K)}_A\,C_1^{LR*}(\mu_H)\,.
\end{aligned} \end{equation} Analogously, for the $B_{d,s}^0-\bar B_{d,s}^0$ systems the two parts of the mixing amplitude are given by: \begin{equation} \begin{aligned} &\begin{split} (M_{12}^{q})_1=\dfrac{G_F^2\,M^2_W}{12\pi^2}\,F^2_{B_q}\,m_{B_q}\, \Big[&\eta_B\,\hat{B}_{B_q}\,\lambda^2_3(B_q)\,\,S_t^{(B_q)}+\\ &+P_1^{VLL}(\mu_H,B_q)\,\left(\Delta^{(B_q)}_A C_1^{VLL}(\mu_H)+\Delta^{(B_q)}_A C_1^{VRR}(\mu_H)\right)\Big]^*\,, \end{split}\\ &(M_{12}^{q})_2=\frac{G_F^2\,M^2_W}{12\pi^2}\,F^2_{B_q}\,m_{B_q} \, P_1^{LR}(\mu_H,B_q)\,\Delta^{(B_q)}_A C_1^{LR*}(\mu_H)\,. \end{aligned} \end{equation} Here $\eta_{1,2,3,B}$ are known SM QCD corrections given in tab.~\ref{tab:input} and $P_i^a(\mu_H,M)$ describe the QCD evolution from $\mu_H$ down to $\mu_L$ for the considered system. For the $B_q$ systems, it is useful to rearrange the definition of the mixing amplitude $M_{12}^q$ as follows \cite{Bona:2005eu} \begin{equation} M_{12}^q=\left(M_{12}^q\right)_\text{SM}C_{B_q}e^{2\,i\,\varphi_{B_q}}\,, \label{NewMixingAmplitudeBds} \end{equation} where $C_{B_{d,s}}$ and $\varphi_{B_{d,s}}$ account for deviations from the SM contributions. Therefore, the mass differences turn out to be \begin{equation} \Delta M_{B_q}=2\left|M_{12}^q\right|=(\Delta M_{B_q})_\text{SM}C_{B_q}\qquad (q=d,s)\,, \label{DeltaMB} \end{equation} where \begin{equation} \bigl(M_{12}^d\bigr)_\text{SM}= \bigl|\bigl(M_{12}^d\bigr)_\text{SM}\bigr|e^{2\,i\,\beta}\,,\qquad\qquad \bigl(M_{12}^s\bigr)_\text{SM}= \bigl|\bigl(M_{12}^s\bigr)_\text{SM}\bigr|e^{2\,i\,\beta_s}\,. \label{MB12SM} \end{equation} Here the phases $\beta\approx 22^\circ$ and $\beta_s\simeq -1^\circ$ are defined through \begin{equation} V^{SM}_{td}=|V_{td}^{SM}|e^{-i\beta} \qquad\qquad\textrm{and}\qquad\qquad V^{SM}_{ts}=-|V^{SM}_{ts}|e^{-i\beta_s}\,. \label{eq:3.40} \end{equation} The coefficients of $\sin(\Delta M_{B_d}\, t)$ and $\sin(\Delta M_{B_s}\, t)$ in the time dependent asymmetries in $B_d^0\to\psi K_S$ and $B_s^0\to\psi\phi$ are then given, respectively, by: \begin{equation} S_{\psi K_S} =\sin(2\beta+2\varphi_{B_d})\,,\qquad\qquad S_{\psi\phi} = \sin(2|\beta_s|-2\varphi_{B_s})\,. \label{Sobservables} \end{equation} Notice that in the presence of non-vanishing $\varphi_{B_d}$ and $\varphi_{B_s}$ these two asymmetries do not measure $\beta$ and $\beta_s$ but $(\beta+\varphi_{B_d})$ and $(|\beta_s|-\varphi_{B_s})$, respectively. \mathversion{bold} \subsection{\texorpdfstring {The Ratio $\Delta M_{B_d}/\Delta M_{B_s}$ and the $B^+\to\tau^+\nu$ Decay} {The Ratio Delta MBd/Delta MBs and the B^+ -> tau+ nu Decay}} \mathversion{normal} The expressions for the mass-differences derived in the previous section are affected by large uncertainties, driven by the decay constants $F_{B_{d,s}}$. To soften the dependence of our analysis on these theoretical errors, we consider the ratio between $\Delta M_{B_d}$ and $\Delta M_{B_s}$, which we call $R_{\Delta M_B}$, and the ratio between the branching ratio of the $B^+\to\tau^+\nu$ decay and $\Delta M_{B_d}$, which we call $R_{BR/\Delta M}$. Indeed, when considering $R_{\Delta M_B}$, we notice that the SM theoretical errors are encoded in the parameter $\xi=1.237\pm0.032$, which is much less affected by uncertainties than the mass differences themselves. When considering the NP effects, we obtain \begin{equation} R_{\Delta M_B}=\dfrac{(\Delta M_{B_d})_\text{SM}}{(\Delta M_{B_s})_\text{SM}}\dfrac{C_{B_d}}{C_{B_s}}\,.
\end{equation} On the other hand, in the SM, the $B^+\to\tau^+\nu$ decay occurs at the tree-level through the exchange of the $W$-boson. Therefore, the expression for its branching ratio is only slightly modified in our model: \begin{equation} BR(B^+\to\tau^+\nu)=\dfrac{G_F^2\,m_{B^+}\,m_\tau^2}{8\pi}\left(1-\dfrac{m_\tau^2}{m^2_{B^+}}\right)^2\,F^2_{B^+}\,|c_{u_{L1}} \,V_{ub}\,c_{d_{L3}}|^2\,\tau_{B^+}\,, \end{equation} where the NP effects are represented by the cosines. Notice that heavy flavour gauge bosons could contribute only at the loop-level and can be safely neglected, since they would compete with a tree-level process. Furthermore, in the previous expression we have assumed the SM couplings of the leptons to the $W$-boson: even though we are not considering the lepton sector in our analysis, it is reasonable to assume that any NP modification is safely negligible, as these couplings are strongly constrained by the SM electroweak analysis. In the ratio $R_{BR/\Delta M}$, the dependence on $F_{B_d}$, which is indeed the main source of the theoretical error on $\Delta M_{B_d}$, is cancelled \cite{Ikado:2006un,Isidori:2006pk}: \begin{equation} R_{BR/\Delta M}=\dfrac{3\,\pi\,\tau_{B^+}}{4\,\eta_B\,\hat B_{B_d}\,S_0(x_t)}\dfrac{c^2_{u_{L1}}\,c^2_{d_{L3}}}{C_{B_d}}\dfrac{m^2_\tau}{M_W^2}\dfrac{\left|V_{ub}\right|^2}{\left|V^*_{tb}\,V_{td}\right|^2}\left(1-\dfrac{m_\tau^2}{m_{B_d}^2}\right)^2\,, \end{equation} where the second fraction contains all NP contributions and we took $m_{B^+}\approx m_{B_d}$, which is well justified considering the errors on the other quantities. The SM prediction for this observable, reported in tab.~\ref{tab:predictionsexperiment}, should be compared with the measured value \begin{equation} R_{BR/\Delta M}=(3.25\pm0.67)\times 10^{-4}\, {\rm ps}\,. \end{equation} \mathversion{bold} \section[\mathversion{bold} The $b$ semileptonic CP-asymmetry \mathversion{normal}]{The $b$ semileptonic CP-asymmetry} \mathversion{normal} \label{sec:ASL} In the $B_q$ systems, apart from $\Delta M_{B_q}$, $S_{\psi K_S} $ and $S_{\psi\phi}$, a further quantity providing information on the meson mixings is the $b$ semileptonic CP-asymmetry $A^b_{sl}$ \cite{Abazov:2011yk,Lenz:2011ww}: \begin{equation} A^b_{sl}=(0.594\pm0.022)\,a^d_{sl}+(0.406\pm0.022)\,a^s_{sl}\,, \end{equation} where \begin{equation} \begin{aligned} &a^d_{sl}=\left|\dfrac{\left(\Gamma_{12}^d\right)_{SM}}{\left(M_{12}^d\right)_{SM}}\right|\sin\phi_d=(5.4\pm1.0)\times10^{-3}\,\sin\phi_d\,,\\ &a^s_{sl}=\left|\dfrac{\left(\Gamma_{12}^s\right)_{SM}}{\left(M_{12}^s\right)_{SM}}\right|\sin\phi_s=(5.0\pm1.1)\times10^{-3}\,\sin\phi_s\,, \end{aligned} \end{equation} with \begin{equation} \begin{aligned} &\phi_d=\arg\Big(-\left(M_{12}^d\right)_{SM}/\left(\Gamma_{12}^d\right)_{SM}\Big)=-4.3^\circ\pm 1.4^\circ\,,\\ &\phi_s=\arg\Big(-\left(M_{12}^s\right)_{SM}/\left(\Gamma_{12}^s\right)_{SM}\Big)=0.22^\circ\pm 0.06^\circ\,. \end{aligned} \end{equation} In the presence of NP, these expressions are modified. Since we have already discussed the NP effects on $M_{12}^q$ in the previous sections, we focus now only on $\Gamma_{12}^q$. It is useful to adopt a notation for $\Gamma_{12}^q$ similar to the one in eq.~(\ref{NewMixingAmplitudeBds}) for $M_{12}^q$: \begin{equation} \Gamma_{12}^q=(\Gamma_{12}^q)_\text{SM}\,\tilde C_{B_q}\,e^{-2\,i\,\tilde\varphi_{B_q}}\,, \label{NotationGamma12withNP} \end{equation} where $\tilde C_{B_q}$ is a real parameter.
With such a notation we get \begin{equation} a^q_{sl}=\left|\dfrac{\left(\Gamma_{12}^q\right)_{SM}}{\left(M_{12}^q\right)_{SM}}\right|\dfrac{\tilde C_{B_q}}{C_{B_q}}\sin\left(\phi_q+2\varphi_{B_q}+2\tilde \varphi_{B_q}\right)\,. \end{equation} Notice that in the MGF context we are considering, the phase $\tilde\varphi_{B_q}$ vanishes, while $\tilde C_{B_q}$ is mainly given by $c^2_{u_{L2}}\,c_{d_{Lb}}\,c_{d_{Lq}}\approx1$. As a result, the only NP modifications are provided by the NP contributions to $M^q_{12}$. \mathversion{bold} \section[\texorpdfstring{ \mathversion{bold}The $\bar B\to X_s \gamma$ Decay\mathversion{normal}} {The B -> Xs gamma Decay}]{The $\bar B \to X_s \gamma$ Decay} \mathversion{normal} \label{sec:BSG} \subsection{Effective Hamiltonian} The decay $\bar B\to X_s\gamma$ is mediated by the photonic dipole operators $Q_{7\gamma}$ and $Q_{7\gamma}^{\prime}$ and, through mixing, also by the gluonic dipole operators $Q_{8G}$ and $Q_{8G}^{\prime}$. In our conventions they read \begin{equation} \label{O6B} \begin{aligned} Q_{7\gamma} &= \dfrac{e}{16\pi^2}\, m_b\, \bar{s}_\alpha\, \sigma^{\mu\nu}\, P_R\, b_\alpha\, F_{\mu\nu}\,,\\[2mm] Q_{8G} &= \dfrac{g_s}{16\pi^2}\, m_b\, \bar{s}_\alpha\, \sigma^{\mu\nu}\, P_R\, T^a_{\alpha\beta}\, b_\beta\, G^a_{\mu\nu} \end{aligned} \end{equation} and the corresponding primed dipole operators are obtained by substituting $P_R$ with $P_L$. The effective Hamiltonian for $b\to s\gamma$ at a scale $\mu$, in the SM normalisation and keeping only the dipole operators, reads \begin{equation} \label{HeffW_at_mu} \begin{split} {\cal H}_{\text{eff}}^{b\to s\gamma} = - \dfrac{4 G_{\rm F}}{\sqrt{2}} V_{ts}^* V_{tb} \Big[& \Delta C_{7\gamma}(\mu) Q_{7\gamma} + \Delta C_{8G}(\mu) Q_{8G}+\\ &+\Delta C^{\prime}_{7\gamma}(\mu) Q^{\prime}_{7\gamma} + \Delta C^{\prime}_{8G}(\mu) Q^{\prime}_{8G} \Big]\,. \end{split} \end{equation} We have kept the contributions of the primed dipole operators $Q_{7\gamma}^{\prime}$ and $Q_{8G}^{\prime}$ even though their Wilson coefficients are suppressed by $m_s/m_b$ with respect to the unprimed ones: as shown in Ref.~\cite{Buras:2011zb}, the mixing of neutral current-current operators into $Q_{7\gamma}^{\prime}$ and $Q_{8G}^{\prime}$ can nevertheless affect $\Delta C^{\prime}_{7\gamma}(\mu_b)$. As for the $\Delta F=2$ transitions, the Wilson coefficients can be separated into two parts: \begin{itemize} \item[-] The SM-like contribution from diagrams with $W$-bosons with modified couplings to both SM and exotic quarks of charge $+2/3$, denoted below by $u$ and $u'$, respectively: \begin{center} \includegraphics{bsgamma_W} \end{center} \item[-] The contribution of heavy neutral gauge boson exchanges with virtual SM and exotic quarks of charge $-1/3$, denoted below by $d$ and $d'$, respectively: \begin{center} \includegraphics{bsgamma_AH} \end{center} \end{itemize} The first contribution has already been considered in Ref.~\cite{Grinstein:2010ve}, while the second, the impact of the heavy neutral gauge bosons on $b\rightarrow s\gamma$, has been recently pointed out in Ref.~\cite{Buras:2011zb}. In particular it has been found that the QCD renormalisation group effects in the neutral gauge boson contributions can strongly affect the branching ratio of $\bar B\to X_s\gamma$ and cannot be neglected a priori.
\mathversion{bold} \subsection{Contributions of $W$-exchanges} \label{subsec:HeffBsgammaW} \mathversion{normal} For the $W$-exchange the matching is performed at the EW scale, $\mu_W$. The Wilson coefficients are the sum of the $t$ and $t^\prime$ contributions, since the $c^{\prime}$ and $u^{\prime}$ contributions are suppressed by their small couplings to the $b$ and $s$ quarks. Hence, the Wilson coefficients of $Q_{7\gamma}$ and $Q_{8G}$ are \begin{align} \Delta_W C_{7\gamma}(\mu_W)&= c_{d_{L2}}\,c_{d_{L3}} \left( c^2_{u_{L3}} C_{7\gamma}^{SM}(x_t) + s^2_{u_{L3}} C_{7\gamma}^{SM}(x_t^\prime)\right)\,,\\ \Delta_W C_{8G}(\mu_W)&= c_{d_{L2}}\,c_{d_{L3}} \left( c^2_{u_{L3}} C_{8G}^{SM}(x_t) + s^2_{u_{L3}} C_{8G}^{SM}(x_t^\prime)\right)\,, \end{align} with \begin{align} C^{SM}_{7\gamma} (x) &= \dfrac{3 x^3-2 x^2}{4(x-1)^4}\ln x - \frac{8 x^3 + 5 x^2 - 7 x}{24(x-1)^3}\,,\\ C^{SM}_{8G}(x) &= \dfrac{-3 x^2}{4(x-1)^4}\ln x - \frac{x^3 - 5 x^2 - 2 x}{8(x-1)^3} \end{align} being the SM Inami-Lim functions \cite{Inami:1980fz}. Finally, we need to evolve $\Delta_W C(\mu_W)$ down to $\mu_b$ to obtain the contribution of $W$ exchanges to the branching ratio of $\bar B\to X_s\gamma$. The QCD analysis, which involves the SM charged current-current operators $Q_1$ and $Q_2$ as well as the QCD-penguins $Q_3$ to $Q_6$, which mix with $Q_{7\gamma}$ and $Q_{8G}$ below $\mu_t$, is the same as in the SM and we proceed as in Ref.~\cite{Buras:2011zb}. \mathversion{bold} \subsection{\texorpdfstring {Contributions of $\hat{A}^m$-exchanges} {Contributions of A-exchanges}} \label{subsec:HeffBsgammaAH} \mathversion{normal} The contribution of a neutral gauge boson to the effective Hamiltonian in eq.~(\ref{HeffW_at_mu}) derives from integrating out the mass-eigenstate of the heavy flavour gauge boson $\hat{A}^m$ at its mass-scale $\mu_H$. The Wilson coefficients $\Delta_A C^{(\prime)}_{7\gamma}(\mu_H)$ and $\Delta_A C^{(\prime)}_{8G}(\mu_H)$ have been calculated within a generic framework in Ref.~\cite{Buras:2011zb}. The results for the special MGF case we are discussing are fixed by the couplings of the flavour gauge bosons to both SM and exotic fermions. Applying the general formulae of Ref.~\cite{Buras:2011zb} to the present case, we find that these contributions are below $1\%$ and can be safely neglected. As discussed in Ref.~\cite{Buras:2011zb}, the reasons for this suppression are the see-saw-like couplings of the flavour gauge bosons to both SM and exotic fermions and the large masses of the heavy neutral gauge bosons. \section{Numerical Analysis} Having at hand the analytic expressions derived in the previous sections, we are ready to perform a numerical analysis of the MGF model in question. The first question we ask is whether the model is able to remove various anomalies in the flavour data hinting at the presence of NP. Since the number of parameters is much smaller than in other popular extensions of the SM, such as SUSY models, the LHT model, the RS scenario and models with left-right symmetry, it is indeed not obvious that these anomalies can be removed or at least softened. We briefly review the flavour anomalies as seen from the SM point of view.
\subsection{Anomalies in the Flavour Data} \boldmath \subsubsection{\texorpdfstring {The $\varepsilon_K-S_{\psi K_S}$ Anomaly} {The e_K - SpsiKs Anomaly}} \unboldmath It has been pointed out in Refs.~\cite{Lunghi:2008aa,Buras:2008nn,Buras:2009pj,Lenz:2010gu} that the SM prediction for $\varepsilon_K$ implied by the measured value of $S_{\psi K_S}=\sin 2\beta$, the ratio $R_{\Delta M_B}$, and the value of $|V_{cb}|$ is too small to agree well with the experiment. We obtain the SM $\varepsilon_K$ value by taking the experimental value of $S_{\psi K_S}$, the latest $|V_{cb}|$ determination, the most recent value of the non-perturbative parameter $\hat B_K$ \cite{Antonio:2007pb,Aubin:2009jh,Laiho:2009eu,Bae:2010ki,Constantinou:2010qv,Aoki:2010pe}, and by including long-distance effects in ${\rm Im}\Gamma_{12}$ \cite{Buras:2008nn} and ${\rm Im}M_{12}$ \cite{Buras:2010pza} as well as recently calculated NNLO QCD corrections to $\varepsilon_K$ \cite{Brod:2010mj,Brod:2011ty}. We find\footnote{The small discrepancy with respect to the value of Ref.~\cite{Brod:2011ty} comes solely from updated input values.} $|\varepsilon_K|=(1.82\pm0.28)\times 10^{-3}$, visibly below the experimental value. On the other hand, $\sin 2\beta=0.85\pm 0.05$ from SM fits of the Unitarity Triangle is significantly larger than the experimental value of $S_{\psi K_S}$. This discrepancy is to some extent caused by the desire to fit both $\varepsilon_K$ \cite{Lunghi:2008aa,Buras:2008nn,Buras:2009pj} and $BR(B^+\to\tau^+\nu)$ \cite{Lunghi:2010gv}. As demonstrated in \cite{Buras:2008nn,Buras:2009pj}, whether NP is required in $\varepsilon_K$ or $S_{\psi K_S}$ depends on the values of $\gamma$, $|V_{ub}|$ and $|V_{cb}|$. The phase $\gamma$ should be measured precisely by LHCb in the coming years, while $|V_{ub}|$ and $|V_{cb}|$ should be precisely determined by Belle II and Super-$B$, provided the hadronic uncertainties are brought under better control. \boldmath \subsubsection{The $|V_{ub}|$-Problem} \unboldmath There is a tension between the inclusive and exclusive determinations of $|V_{ub}|$. This means that if we take the unitarity of the CKM matrix for granted and also consider the good agreement of the ratio $R_{\Delta M_B}$ with the data, the inclusive and exclusive determinations imply different patterns of NP in CP-violating observables. Indeed one is led to consider two limiting scenarios: \begin{description} \item[{\boldmath Scenario 1: Small $|V_{ub}|$.}] Here $|V_{ub}|$ is given by the exclusive determination, \begin{equation} |V_{ub}|=(3.38\pm0.36)\times 10^{-3}\,. \end{equation} Within the SM, when the $\Delta M_{B_s}/\Delta M_{B_d}$ constraint is taken into account, one finds $S_{\psi K_S}\approx 0.67$, in agreement with the data, but $\varepsilon_K\approx 1.8\times 10^{-3}$, visibly below the data. As discussed in Refs.~\cite{Buras:2008nn,Buras:2009pj}, a sizeable constructive NP contribution to $\varepsilon_K$ would not require an increased value of $\sin 2\beta$ relative to the experimental value of $S_{\psi K_S}$. NP of this type would then remove the $\varepsilon_K-S_{\psi K_S}$ anomaly in the presence of the exclusive value of $|V_{ub}|$. \item[{\boldmath Scenario 2: Large $|V_{ub}|$.}] In this case $|V_{ub}|$ corresponds to its inclusive determination, \begin{equation} |V_{ub}|=(4.27\pm0.38)\times 10^{-3}\,. \end{equation} In this scenario the SM predicts $\varepsilon_K\approx 2.2\times 10^{-3}$, in agreement with the data, while $S_{\psi K_S}\approx 0.81$ is significantly above the data.
As discussed in Refs.~\cite{Lunghi:2008aa,Buras:2008nn}, a negative NP phase $\varphi_{B_d}$ in $B^0_d-\bar B^0_d$ mixing would solve the $\varepsilon_K-S_{\psi K_S}$ anomaly in this case (see eq.~(\ref{Sobservables})), provided such a phase is phenomenologically allowed by other constraints. With a negative $\varphi_{B_d}$, $\sin 2\beta$ is larger than $S_{\psi K_S}$, implying a higher value of $|\varepsilon_K|$, in reasonable agreement with the data, and a better Unitarity Triangle fit. \end{description} In both scenarios, new physics contributions to other observables, such as $\Delta M_{B_{d,s}}$, are expected and a dedicated analysis is necessary. In fact, as we will see below, the correlations between $\varepsilon_K$ and $\Delta M_{B_{d,s}}$ are powerful tests of the ability of MGF to properly describe all data on $\Delta F=2$ observables. \subsection{Input Parameters and the Parameter Space of the Model} Before proceeding with our numerical analysis, it is necessary to fix the input parameters and to define the parameter space of the model. \subsubsection{Input Parameters} \begin{table}[ht] \renewcommand{\arraystretch}{1}\setlength{\arraycolsep}{1pt} \center{\begin{tabular}{|l|l|} \hline $G_F = 1.16637(1)\times 10^{-5}\, {\rm GeV}^{-2}$\hfill\cite{Nakamura:2010zzi} & $m_{B_d}= 5279.5(3)\, {\rm MeV}$\hfill\cite{Nakamura:2010zzi}\\ $M_W = 80.399(23) \, {\rm GeV}$\hfill\cite{Nakamura:2010zzi} & $m_{B_s} = 5366.3(6)\, {\rm MeV}$\hfill\cite{Nakamura:2010zzi}\\ $\sin^2\theta_W = 0.23116(13)$\hfill\cite{Nakamura:2010zzi} & $F_{B_d} = 205(12)\, {\rm MeV}$\hfill\cite{Laiho:2009eu}\\ $\alpha(M_Z) = 1/127.9$\hfill\cite{Nakamura:2010zzi} & $F_{B_s} = 250(12)\, {\rm MeV}$\hfill\cite{Laiho:2009eu}\\ $\alpha_s(M_Z)= 0.1184(7) $\hfill\cite{Nakamura:2010zzi} & $\hat B_{B_d} = 1.26(11)$\hfill\cite{Laiho:2009eu}\\\cline{1-1} $m_u(2\, {\rm GeV})=1.7\div3.1\, {\rm MeV} $ \hfill\cite{Nakamura:2010zzi} & $\hat B_{B_s} = 1.33(6)$\hfill\cite{Laiho:2009eu}\\ $m_d(2\, {\rm GeV})=4.1\div5.7\, {\rm MeV}$ \hfill\cite{Nakamura:2010zzi} & $F_{B_d} \sqrt{\hat B_{B_d}} = 233(14)\, {\rm MeV}$\hfill\cite{Laiho:2009eu}\\ $m_s(2\, {\rm GeV})=100^{+30}_{-20} \, {\rm MeV}$ \hfill\cite{Nakamura:2010zzi} & $F_{B_s} \sqrt{\hat B_{B_s}} = 288(15)\, {\rm MeV}$\hfill\cite{Laiho:2009eu}\\ $m_c(m_c) = (1.279\pm 0.013) \, {\rm GeV}$ \hfill\cite{Chetyrkin:2009fv} & $\xi = 1.237(32)$\hfill\cite{Laiho:2009eu}\\ $m_b(m_b)=4.19^{+0.18}_{-0.06}\, {\rm GeV}$\hfill\cite{Nakamura:2010zzi} & $\eta_B=0.55(1)$\hfill\cite{Buras:1990fn,Urban:1997gw}\\ $M_t=172.9\pm0.6\pm0.9 \, {\rm GeV}$\hfill\cite{Nakamura:2010zzi} & $\tau_{B^\pm}=(1641\pm8)\times10^{-3}\, {\rm ps}$\hfill\cite{Nakamura:2010zzi}\\\hline $m_K= 497.614(24)\, {\rm MeV}$ \hfill\cite{Nakamura:2010zzi} &$|V_{us}|=0.2252(9)$\hfill\cite{Nakamura:2010zzi}\\ $F_K = 156.0(11)\, {\rm MeV}$\hfill\cite{Laiho:2009eu} &$|V_{cb}|=(40.6\pm1.3)\times 10^{-3}$\hfill\cite{Nakamura:2010zzi}\\ $\hat B_K= 0.737(20)$\hfill\cite{Laiho:2009eu} &$|V^\text{incl.}_{ub}|=(4.27\pm0.38)\times10^{-3}$\hfill\cite{Nakamura:2010zzi}\\ $\kappa_\epsilon=0.94(2)$\hfill\cite{Buras:2010pza} &$|V^\text{excl.}_{ub}|=(3.38\pm0.36)\times10^{-3}$\hfill\cite{Nakamura:2010zzi}\\ $\varphi_\epsilon=(43.51\pm0.05)^\circ$\hfill\cite{Buras:2008nn} &$\gamma=(73^{+22}_{-25})^\circ$\hfill\cite{Nakamura:2010zzi}\\ $\eta_1=1.87(76)$\hfill\cite{Brod:2011ty} & \\ $\eta_2=0.5765(65)$\hfill\cite{Buras:1990fn} & \\ $\eta_3= 0.496(47)$\hfill\cite{Brod:2010mj} & \\\hline \end{tabular} } \caption{Values of experimental and theoretical quantities used
throughout our numerical analysis. Notice that $m_i(m_i)$ are the masses $m_i$ at the scale $m_i$ in the $\ov{MS}$ scheme. $M_t$ is the pole top-quark mass. \label{tab:input}} \renewcommand{\arraystretch}{1.0} \end{table} In Table~\ref{tab:input} we list the nominal values of the input parameters that we will use for the numerical analysis, unless otherwise stated. At this stage it is important to recall the theoretical and the experimental uncertainties on some relevant parameters and on the observables we shall study. Considering the $K^0-\bar K^0$ mixing, remarkable improvements have been made in the case of the CP-violating parameter $\varepsilon_K$, where the decay constant $F_K$ is known within $1\%$ accuracy. Moreover, the parameter $\hat{B}_K$ is known within $3\%$ accuracy from lattice calculations with dynamical fermions \cite{Antonio:2007pb}, and an improved estimate of {\it long distance} contributions to $\varepsilon_K$ reduced the associated uncertainty down to $2\%$ \cite{Buras:2008nn,Buras:2010pza}. The NNLO QCD corrections to $\eta_1$ and $\eta_3$ \cite{Brod:2010mj,Brod:2011ty} allowed an assessment of the remaining scale uncertainties, which amount to roughly $6\%$ according to \cite{Brod:2011ty}, dominantly due to the uncertainty in $\eta_1$. Including also parametric uncertainties, dominated by the value of $|V_{cb}|$, Brod and Gorbahn conservatively estimate the present error in $\varepsilon_K$ to amount to roughly $15\%$ \cite{Brod:2011ty}. The reduction of this total error down to $7\%$ in the coming years appears to be realistic. Further reduction will require progress both in the evaluation of long distance contributions and in $\eta_1$. $\Delta M_K$ is very accurately measured, but is subject to poorly known long distance contributions. Regarding the $B_q^0-\bar B_q^0$ mixings, lattice calculations have improved considerably in recent years, reducing the uncertainties in $F_{B_s}$\footnote{Recently a remarkably precise value for $F_{B_s}$ was reported in Ref.~\cite{McNeile:2011ng}: $F_{B_s}=(225\pm4)\, {\rm MeV}$. Still, we shall adopt a conservative approach and use the value of $F_{B_s}=(250\pm12)\, {\rm MeV}$ in our analysis except when explicitly stated otherwise.} and $F_{B_d}$, and also in $\sqrt{\hat B_{B_s}}F_{B_s}$ and $\sqrt{\hat B_{B_d}}F_{B_d}$, down to $5\%$. This implies an uncertainty of $10\%$ in $\Delta M_{B_d}$ and $\Delta M_{B_s}$ within the SM. On the other hand, the mixing induced CP-asymmetries $S_{\psi\phi}$ and $S_{\psi K_S}$ have much smaller hadronic uncertainties. The hadronic uncertainties in the ratio $R_{\Delta M_B}$ are roughly at the $3\%$ level; the theoretical error on the $b$ semileptonic CP-asymmetry $A^b_{sl}$ is around the $20\%$ level; the theoretical uncertainties in the rate of the $\bar B\to X_s\gamma$ decay are below $10\%$; given the $5\%$ uncertainty on the decay constants $F_{B_q}$, the branching ratio for $B^+\to\tau^+\nu$ has a theoretical error around the $10\%$ level and a large parametric error due to $|V_{ub}|$. We stress that the situation with other $B_i$ parameters, describing the hadronic matrix elements of $\Delta F=2$ operators absent in the SM, is much worse. Here significant progress is desired.\\ On the experimental side, $\varepsilon_K$, $\Delta M_{B_d}$, $\Delta M_{B_s}$ and the ratio $R_{\Delta M_B}$ are very precisely measured with errors below the $1\%$ level. $S_{\psi K_S}$ is known with an uncertainty of $\pm 3\%$ and the branching ratio of the $\bar B\to X_s\gamma$ decay is known within $10\%$.
On the contrary, larger experimental uncertainties affect the measurements of $S_{\psi\phi}$ by D0 and LHCb, which differ from one another by an order of magnitude but are still compatible at the $2\sigma$-level, due to the large errors of the individual determinations. $A^b_{sl}$ has only been measured by D0 and its experimental error is around the $20\%$ level. Similarly, $BR(B^+\to\tau^+\nu)$ is plagued by an uncertainty of the same size. \subsubsection{The CKM Matrix}\label{sec:ckmmatrix} To evaluate the observables we need to specify the values of the CKM elements. As already stated in sec.~\ref{sec:MGF}, the CKM matrix in this model is not unitary and is defined by \begin{equation} \tilde V = c_{u_L}\,V\,c_{d_L}\,, \end{equation} where $V$ is by construction a unitary $3\times3$ matrix and $c_{(u,d)_L}$ are the cosines encoding the mixing between SM and exotic fermions. From eqs.~(\ref{FormulaSin1}) and (\ref{FormulaSin2}), we deduce that $c_{(u,d)_L}\approx1$, except for $t$ and $t'$. As a result, to an excellent accuracy the CKM matrix reads \begin{equation} \tilde V \simeq \left( \begin{array}{ccc} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ c_{u_{L3}}\,V_{td} & c_{u_{L3}}\,V_{ts} & c_{u_{L3}}\,V_{tb}\\ \end{array} \right)\,. \label{Vtilde} \end{equation} In this approximation, the deviation from the unitarity of the CKM matrix is \begin{equation} \big(\tilde V^\dagger\,\tilde V\big)_{ij}=\delta_{ij}-s^2_{u_{L3}}\,V^*_{ti}\,V_{tj}\,,\qquad\qquad \big(\tilde V\,\tilde V^\dagger\big)_{ij}=\delta_{ij}-s^2_{u_{L3}}\,\delta_{it}\,\delta_{jt}\,. \end{equation} The deviations are present only when the top-quark entries are considered and are proportional to $s^2_{u_{L3}}$. All other entries of the CKM matrix coincide with the corresponding entries of the unitary matrix $V$ up to negligible corrections. The important implication of the latter finding is that the angle $\gamma$ in the unitarity triangle is unaffected by such deviations. In the approximation of eq.~(\ref{Vtilde}), \begin{equation} \tilde \gamma\equiv \arg\left(-\dfrac{\tilde V_{ud}\,\tilde V_{ub}^*}{\tilde V_{cd}\,\tilde V_{cb}^*}\right)= \arg\left(-\dfrac{V_{ud}\,V_{ub}^*}{V_{cd}\,V_{cb}^*}\right)\,, \end{equation} and thus $\gamma$ does not depend on $c_{u_{L3}}$ or $s_{u_{L3}}$. We now state how we fix the values of the CKM elements. From the tree-level experimental determinations of $|V_{us}|$, $|V_{cb}|$, $|V_{ub}|$ and $\gamma$, we fix the corresponding parameters of $\tilde V$. In this way the corresponding parameters of the $V$ matrix are also unambiguously fixed and, using the unitarity of $V$, we evaluate all its other entries. With all entries of $V$ fixed we compute the masses and mixings of all fermions and flavour gauge bosons by means of eqs.~(\ref{SeeSawMasses})--(\ref{FGBmasses}). Finally, knowing $c_{u_{L3}}$, we also determine the elements of the third row of $\tilde V$. \subsubsection{The Parameter Space of the Model} Having determined $V$, what remains is the calculation of the spectrum and the couplings of the NP particles. In principle they are fixed once, in addition to the SM parameters, we fix the {\it seven} NP couplings $\lambda_{u,d}^{(\prime)}\,,g_Q\,,g_U\,,g_D$ and the {\it two} mass parameters $M_u$ and $M_d$ in eqs.~\eqref{eq:covariantderivatives} and \eqref{eq:lagrangian}. Still, their actual determination is subtle since the energy scale at which the see-saw relations of eqs.~\eqref{SeeSawMasses} hold is a priori not known.
We identify this scale with the mass of the lightest flavour gauge boson. We fix the spectrum and the see-saw scale iteratively using the condition that all exotic masses are above $m_t(m_t)$. As a first step we evaluate the see-saw relation at $m_t(m_t)$ to obtain a rough estimate of the masses of exotic fermions and lightest gauge boson. With this initial spectrum we run the masses of the SM fermions to the newly defined see-saw scale including all intermediate exotic fermion thresholds. The evaluation of the see-saw relation corrects the NP spectrum. We repeat the procedure until the values of exotic fermion masses and see-saw scale no longer change. Lastly, we evolve the exotic fermion masses down to the EW scale. For the numerical analysis it is necessary to scan the parameter space of the model. We choose $\lambda_{u,d}\in(\, 0,\,1.5\,]$ and all other couplings $\{\lambda^\prime_{u,d},\,g_Q,\,g_U,\,g_D\}\in(\, 0,\,1.1\,]$ to stay in the perturbative regime of the theory. The two mass parameters are varied between $M_u\in[\,100\, {\rm GeV}\,,\,1\, {\rm TeV}\,]$ and $M_d\in[\,30\, {\rm GeV}\,,250\, {\rm GeV}\,]$ following the discussion in Ref.~\cite{Grinstein:2010ve}. Unphysical points of the parameter space, namely cases with $s_{u,d}$ or $c_{u,d}$ larger than $1$, are not considered. Larger $M_u$ and $M_d$ values decouple the NP from the SM and are therefore phenomenologically irrelevant. With respect to the analysis of Ref.~\cite{Grinstein:2010ve} we are scanning over all NP parameters, including $\lambda^\prime_{u,d}$ and $g_Q,\,g_U,\,g_D$. \subsection{Results} To present the features of the MGF model we are discussing, we use $|V_{us}|$ and $|V_{cb}|$ at their central values in tab.~\ref{tab:input} and \begin{equation} |V_{ub}|=3.38\times 10^{-3}\, \qquad\text{and}\qquad \gamma=68^\circ\,, \end{equation} which are among the favoured values within the SM when the experimental values of both $S_{\psi K_S}$ and $R_{\Delta M_B}$ are taken into account. With this CKM matrix we list in tab.~\ref{tab:predictionsexperiment} the central values for the SM predictions of the observables under consideration together with their experimental determinations. \begin{table}[h!] 
\renewcommand{\arraystretch}{1}\setlength{\arraycolsep}{1pt} \center{ \begin{tabular}{|l||l|} \hline SM predictions for exclusive $|V_{ub}|$ & Experimental values\\ \hline\hline $\Delta M_{B_d} = 0.592 \,\text{ps}^{-1}$ & $\Delta M_{B_d} = 0.507(4) \,\text{ps}^{-1}$\hfill\cite{Nakamura:2010zzi}\\ $\Delta M_{B_s} = 20.28 \,\text{ps}^{-1}$ & $\Delta M_{B_s} = 17.77(12) \,\text{ps}^{-1}$\hfill\cite{Nakamura:2010zzi}\\ $R_{\Delta M_B}= 2.92\times 10^{-2}$ & $R_{\Delta M_B}=(2.85\pm0.03)\times 10^{-2}$\hfill\cite{Nakamura:2010zzi}\\ $S_{\psi K_S}= 0.671$ & $S_{\psi K_S}= 0.673(23)$\hfill\cite{Nakamura:2010zzi}\\ $S_{\psi\phi}= 0.0354 $ & $\phi_s^{\psi\phi}= 0.55^{+0.38}_{-0.36}$\hfill\cite{Giurgiu:2010is,Abazov:2011ry}\\ & $\phi_s^{\psi\phi}= 0.03\pm0.16\pm0.07 $\hfill\cite{Koppenburg:PC}\\ $\Delta M_K = 0.4627 \times 10^{-2}\,\text{ps}^{-1}$ & $\Delta M_K= 0.5292(9)\times 10^{-2} \,\text{ps}^{-1}$\hfill\cite{Nakamura:2010zzi} \\ $|\epsilon_K|= 1.791\times 10^{-3}$ & $|\epsilon_K|= 2.228(11)\times 10^{-3}$\hfill\cite{Nakamura:2010zzi} \\ $A^b_{sl}=-0.0233\times10^{-2}$ & $A^b_{sl}=(-0.787\pm0.172\pm0.093)\times10^{-2}$\hfill\cite{Abazov:2011yk}\\ $BR(b\to s\gamma)=3.15 \times10^{-4}$ & $BR(b\to s\gamma)=(3.55\pm0.24\pm0.09) \times10^{-4}$\hfill\cite{Nakamura:2010zzi}\\ $BR(B^+\to\tau^+\nu)=0.849\times10^{-4}$ & $BR(B^+\to\tau^+\nu)=(1.65\pm0.34)\times10^{-4}$\hfill\cite{Nakamura:2010zzi}\\ $R_{BR/\Delta M}=1.43\times 10^{-4}\, {\rm ps}$ & $R_{BR/\Delta M}=(3.25\pm0.67)\times 10^{-4}\, {\rm ps}$\\ \hline \end{tabular}} \caption{ The SM predictions for the observables we shall consider, using the exclusive determination of $|V_{ub}|$, and the corresponding experimental values. \label{tab:predictionsexperiment}} \end{table} Comparing these results with the data, we make the following observations: \begin{itemize} \item[-] $|\varepsilon_K|$ is smaller than its experimental determination, while $S_{\psi K_S}$ is very close to the central experimental value, as it should be for the chosen $|V_{ub}|$ and $\gamma$. \item[-] The mass differences $\Delta M_{B_d}$ and $\Delta M_{B_s}$ are visibly above the data; also their ratio $R_{\Delta M_B}$ is above the experimental determination, but in agreement at the $3\sigma$ level. \item[-] $BR(B^+\to\tau^+\nu)$ is well below the data and consequently also the ratio $R_{BR/\Delta M}$ turns out to be below the measured central value by more than a factor of two. Even if the experimental error in $BR(B^+\to\tau^+\nu)$ is large, the parameter space of the model is strongly constrained. Furthermore, from the correlation between $R_{\Delta M_B}$ and $R_{BR/\Delta M}$ it is evident that the model can only worsen the SM tension in these observables. \item[-] Concerning $S_{\psi\phi}$, the predicted value is consistent with the most recent data from CDF, D0 and LHCb. \item[-] $A^b_{sl}$ is well below the D0 data. \item[-] Finally, the predicted central value for $BR(\bar B\to X_s\gamma)$ is smaller than the central experimental value but consistent with it within the $2\sigma$ error range. \end{itemize} Any NP model that aims to remove or soften the anomalies listed above should simultaneously: \begin{enumerate} \item Enhance $|\varepsilon_K|$ by roughly $20\%$ without significantly affecting $S_{\psi K_S}$. \item Suppress $\Delta M_{B_d}$ and $\Delta M_{B_s}$ by roughly $15\%$ and $10\%$, respectively. \item Slightly suppress $R_{\Delta M_{B}}$ by $3\%$. \item Strongly enhance $R_{BR/\Delta M}$ by $130\%$.
\item Moderately enhance the value of $BR(\bar B\to X_s\gamma)$ by $5-10\%$. \end{enumerate} As we shall see below, the model naturally satisfies requirements 1., 3. and 5. On the other hand, it fails in 2. and 4.: indeed the mass differences $\Delta M_{B_{d,s}}$ can only be enhanced with respect to the corresponding would-be SM values; this enhancement is predicted to be significant if one requires the $|\varepsilon_K|$-$S_{\psi K_S}$ anomaly to be solved. Furthermore, the predicted value of $R_{BR/\Delta M}$ can only be decreased, resulting in a more serious tension in this observable than in the SM. In what follows we will take a closer look at the pattern of flavour violation in MGF, still keeping the input parameters at their central values. Subsequently we will comment on how some of our statements are softened when hadronic uncertainties in the input parameters are taken into account. Considering now the NP contributions within MGF, we find the following pattern of effects: \begin{itemize} \item[-] If we neglect the contributions of flavour gauge bosons and of exotic quarks, i.e. considering only the would-be SM contributions, the mixing amplitudes $M^i_{12}$ and $BR(\bar B\to X_s\gamma)$ are reduced with respect to the SM ones due to the modification of the CKM matrix, encoded in the mixings $c_{{u,d}_L}$. As a result, once the third-row entries of the CKM matrix are involved, the would-be SM values of the considered observables are smaller than the values reported in tab.~\ref{tab:predictionsexperiment}. \item[-] The RR flavour gauge boson contributions are negligible for all observables in the whole parameter space. We shall not consider such contributions in the following description. \item[-] $|\varepsilon_K|$ can only be enhanced by the new box-diagram contributions involving exotic quarks, while it can only be suppressed by the heavy flavour gauge boson contributions. Among the latter, the $LR$ contributions are the dominant ones, while the $LL$ ones are safely negligible. \item[-] $\Delta M_{B_{d,s}}$ can also only be enhanced by the new box-diagram contributions, but are mostly unaffected by heavy flavour gauge boson contributions. This is in particular true for $\Delta M_{B_d}$, while for $\Delta M_{B_s}$ the latter contributions can be non-negligible, either enhancing or suppressing it. This is best appreciated when considering the ratio $R_{\Delta M_B}$: this observable does not show any dependence on the new box-diagram contributions, since in MGF the operator structure in box-diagram contributions does not change with respect to the SM and the NP effects are the same in the $B_d$ and $B_s$ systems. As a result any NP effect in this ratio should be attributed to the heavy flavour gauge boson contributions, both $LL$ and $LR$. \item[-] The mixing induced CP-asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$ are unaffected by the new box-diagram contributions. Similarly to $R_{\Delta M_B}$, this allows one to see the heavy flavour gauge boson contributions transparently, which was much harder in the case of $\Delta M_{B_{d,s}}$. We find that $S_{\psi K_S}$ is only affected by $LL$ contributions and can only be suppressed. $S_{\psi\phi}$ depends on both $LL$ and $LR$ contributions. Interestingly, the NP contributions interfere destructively with the SM contribution such that the sign of $S_{\psi\phi}$ can in principle be reversed in this model.
Similar conclusions hold for $A^b_{sl}$: it is not affected by box-diagram contributions, the $LR$ contributions are almost completely negligible and the $LL$ ones are the only relevant contributions, enhancing $|A^b_{sl}|$ towards the central value of the experimental determination. \item[-] Finally, the branching ratio of $\bar B\to X_s\gamma$ can be significantly affected by the modifications of the SM magnetic penguin contributions, which can only enhance this observable, as already pointed out in Ref.~\cite{Grinstein:2010ve}. The heavy flavour gauge boson contributions are negligible, as discussed in Ref.~\cite{Buras:2011zb}. \end{itemize} Having listed the basic characteristics of the NP contributions in this model, we now present our numerical results in more detail, stressing the important role of the correlations among various observables, identified here for the first time. \subsubsection{Correlations Among the Observables} In this section we discuss correlations among the observables. They will allow us to constrain the parameter space of the model and to see whether or not this model is able to soften, or even solve, the anomalies in the flavour data. \begin{figure}[h!] \begin{center} \subfloat[Exclusive $V_{ub}$]{\label{fig:eK_SpsiKs_exclusive}\includegraphics[height=5cm]{SpsiKs_eK_EXCL_BT-crop}} ~ \subfloat[Inclusive $V_{ub}$]{\label{fig:eK_SpsiKs_inclusive}\includegraphics[height=5cm]{SpsiKs_eK_INCL_BT-crop}} \end{center} \caption{\it The correlation of $\varepsilon_K$ and $S_{\psi K_S}$. The shaded grey regions are the experimental $1\sigma$-$3\sigma$ error ranges, while the cross marks the central SM values reported in tab.~\ref{tab:predictionsexperiment}. The colour of the points represents the percentage of the box-diagram contributions (purple) and of the flavour gauge boson ones (red) in $\varepsilon_K$. For the NP points the theoretical error on $\varepsilon_K$ is included. \label{fig:eK_SpsiKs}} \end{figure} In fig.~\ref{fig:eK_SpsiKs_exclusive}, we show the correlation between $\varepsilon_K$ and $S_{\psi K_S}$. The plot confirms that the exclusive value of $|V_{ub}|$ is favoured in this model. Indeed the NP contributions are able to solve the $\varepsilon_K$--$S_{\psi K_S}$ anomaly in a reasonably large region of the parameter space. This happens when the $\varepsilon_K$ prediction approaches the data due to box-diagram contributions (purple points), while $S_{\psi K_S}$ is mostly unaffected. This is in particular possible when the flavour gauge boson contributions are negligible. When the flavour gauge boson contributions are significant (red points), $S_{\psi K_S}$ is always suppressed relative to the SM value. However, as seen in the figure, a combination of large box contributions and flavour gauge boson contributions (yellow points) allows one to bring $\varepsilon_K$ into agreement with the data while keeping $S_{\psi K_S}$ within the $2\sigma$ experimental error range. On the other hand, points for which the box contributions are negligible and instead the flavour gauge boson contributions dominate in $\varepsilon_K$ (purely red points) cannot explain the observed value of $\varepsilon_K$. However, this kind of contribution is best suited to the case of the inclusive determination of $|V_{ub}|$, reported in fig.~\ref{fig:eK_SpsiKs_inclusive}. Still, there exist no points which simultaneously bring $\varepsilon_K$ and $S_{\psi K_S}$ into agreement with the data at the $3\sigma$ level. The flavour gauge boson contributions are not large enough to suitably correct $S_{\psi K_S}$.
We conclude that the inclusive determination of $|V_{ub}|$ is disfavoured in this model and we shall not further pursue this case. \begin{figure}[h!] \begin{center} \subfloat[$F_{B_d}=205\, {\rm MeV}$ ]{\label{fig:eK_DeltaMB_a}\includegraphics[height=5cm]{DMBd_eK_205_BT-crop}}~ \subfloat[$F_{B_s}=250\, {\rm MeV}$ ]{\label{fig:eK_DeltaMB_b}\includegraphics[height=5cm]{DMBs_eK_250_BT-crop}}\\[1em] \subfloat[$F_{B_d}=175\, {\rm MeV}$ ]{\label{fig:eK_DeltaMB_c}\includegraphics[height=5cm]{DMBd_eK_175_BT-crop}}~ \subfloat[$F_{B_s}=225\, {\rm MeV}$ ]{\label{fig:eK_DeltaMB_d}\includegraphics[height=5cm]{DMBs_eK_225_BT-crop}} \end{center} \caption{\it Correlation plot of $\varepsilon_K$ with $\Delta M_{B_d}$ and $\Delta M_{B_s}$ on the left and right, respectively. In the upper plots, we use for $F_{B_{d,s}}$ the values reported in tab.~\ref{tab:input}, while for the lower plots we adopt smaller values: $F_{B_d}$ is reduced down to $85\%$ of its value, close to the $3\sigma$ error level, and $F_{B_s}$ is taken to be the new determination reported in Ref.~\cite{McNeile:2011ng}. In the plots above we have not included the $1\sigma$ error in $\varepsilon_K$ to best illustrate the interplay of box- (purple) and tree-contributions (red).} \label{fig:eK_DeltaMB} \end{figure} In fig.~\ref{fig:eK_DeltaMB}, we present the correlations between $\varepsilon_K$ and $\Delta M_{B_{d,s}}$. From \subref{fig:eK_DeltaMB_a} and \subref{fig:eK_DeltaMB_b}, we conclude that the model cannot solve the $|\varepsilon_K|-S_{\psi K_S}$ anomaly present in the SM without worsening the already moderate agreement of this model with the $\Delta M_{B_{d,s}}$ experimental data. Indeed the values $\Delta M_{B_d}\approx0.75\, {\rm ps}^{-1}$ and $\Delta M_{B_s}\approx27\, {\rm ps}^{-1}$ are so large that, should the central values of the weak decay constants $F_{B_{d,s}}$ not change in the future while their uncertainties and those of the other input parameters are further reduced, we would have to conclude that the model fails to describe the $\Delta F=2$ data. However, we should emphasise that this problem could be avoided if the values of the weak decay constants $F_{B_{d,s}}$ are smaller than the ones used in the plots \subref{fig:eK_DeltaMB_a} and \subref{fig:eK_DeltaMB_b}. Indeed, in \subref{fig:eK_DeltaMB_c} we adopt a $15\%$ reduced value for $F_{B_d}$, close to its $3\sigma$ value, while in \subref{fig:eK_DeltaMB_d} we use the most recent determination of $F_{B_s}$ reported in Ref.~\cite{McNeile:2011ng}. This input modifies the SM values to \begin{equation} \Delta M_{B_d}=0.43 \, {\rm ps}^{-1}\, \qquad\text{and}\qquad \Delta M_{B_s}=16.4 \, {\rm ps}^{-1}\,, \end{equation} such that the enhancements of these observables by NP are now welcomed by the data. From plots \subref{fig:eK_DeltaMB_c} and \subref{fig:eK_DeltaMB_d} we deduce that within MGF the NP contributions, together with the requirement that $|\varepsilon_K|$ agrees with the data, automatically enhance $\Delta M_{B_{d,s}}$. Even though also in this case $\Delta M_{B_{d,s}}$ are found above the data, the model performs much better than in the previous case. This exercise shows, on the one hand, that it is crucial to gain better control over the hadronic parameters in order to obtain a clearer picture of the NP contributions and, on the other hand, that other more precise observables should be analysed until the uncertainties on $\Delta M_{B_{d,s}}$ are lowered. \begin{figure}[h!]
\begin{center} \includegraphics[width=8cm]{BR_R_eK-crop}~ \includegraphics[width=8cm]{BR_R_BT-crop} \end{center} \caption{\it Correlation plot for $R_{\Delta M_B}$ and $R_{BR/\Delta M}$. The grey regions refer to the experimental $1\sigma$-$3\sigma$ and $2\sigma$-$3\sigma$ error ranges for $R_{\Delta M_B}$ and $R_{BR/\Delta M}$, respectively. The big black point refers to the SM values reported in tab.~\ref{tab:predictionsexperiment}. On the left, red (blue) points refer to agreement (disagreement) of the predicted $\varepsilon_K$ with the data at the $3\sigma$ level. On the right the colours represent the contributions of boxes and trees to $\varepsilon_K$.} \label{fig:RDelta_RBRDelta} \end{figure} In fig.~\ref{fig:RDelta_RBRDelta}, we show the correlation between the ratio of the mass differences, $R_{\Delta M_B}$, and the ratio between the branching ratio of the $B^+\to\tau^+\nu$ decay and $\Delta M_{B_d}$, $R_{BR/\Delta M}$. Both observables have negligible theoretical uncertainties and are therefore very useful to provide strong constraints on the parameter space. On the left, the red points correspond to an $\varepsilon_K$ prediction in agreement with the data at $3\sigma$, while the blue ones do not satisfy the $\varepsilon_K$ constraint. This plot largely constrains the parameter space of the model; only for very few points do $\varepsilon_K$, $R_{\Delta M_B}$ and $R_{BR/\Delta M}$ simultaneously agree with the data at the $3\sigma$ level. Furthermore, all red points correspond to values of $R_{BR/\Delta M}$ smaller than the SM prediction and therefore the model can only worsen the SM tension. If the experimental sensitivity to $BR(B^+\to \tau^+\nu)$ improves, it will be possible to further constrain and possibly exclude the present MGF model. \begin{figure}[h!] \begin{center} \includegraphics[width=11cm]{MAH_mtp_Sigma-crop} \end{center} \caption{\it $m_{t'}-\hat M_{A^{24}}$ parameter space. For all red points in the plot $R_{\Delta M_B}$, $R_{BR/\Delta M}$ and $\varepsilon_K$ agree with the data at the $3\sigma$ level.} \label{fig:mtopprime_MAH} \end{figure} Having reduced the parameter space, we concentrate now on a few other predictions of the model. In fig.~\ref{fig:mtopprime_MAH}, we show the $m_{t'}-\hat M_{A^{24}}$ parameter space, where $m_{t'}$ is the mass of the exotic partner of the top-quark and $\hat M_{A^{24}}$ the mass of the lightest neutral gauge boson; the corresponding particles have the best chances to be detected at the LHC. The red and blue points are those identified in fig.~\ref{fig:RDelta_RBRDelta} as agreeing and disagreeing with the data in $R_{\Delta M_B}$, $R_{BR/\Delta M}$ and $\varepsilon_K$ at the $1\sigma$-$3\sigma$ level, respectively. Interestingly, the phenomenological results presented above hold not only for light but also for heavy $t^\prime$'s. Also the mass of the lightest flavour gauge boson is not bounded. \begin{figure}[h!] \begin{center} \subfloat[]{\label{fig:SpsiphiAbsl}\includegraphics[height=5.4cm]{Absl_Spsiphi_Sigma-crop}} \quad \subfloat[]{\label{fig:BSgammamtp}\includegraphics[height=5.5cm]{bsgamma_mtp_Sigma-crop}} \end{center} \caption{\it Correlation plot of $S_{\psi\phi}$ and $A^b_{sl}$ on the left and $BR(\bar B\to X_s\gamma)$ and $m_{t'}$ on the right. Grey regions refer to the experimental error ranges. The big black point refers to the SM values reported in tab.~\ref{tab:predictionsexperiment}.
In red the points for which $R_{\Delta M_B}$, $R_{BR/\Delta M}$ and $\varepsilon_K$ agree with the data at the $3\sigma$ level, in blue all others for which there is no agreement. \label{fig:Predictions}} \end{figure} Furthermore, in fig.~\ref{fig:Predictions} we show two correlation plots which also represent clear predictions of this model. Plot \subref{fig:SpsiphiAbsl} is the correlation between $S_{\psi\phi}$ and $A^b_{sl}$, showing that only tiny deviations from the SM values are allowed: this turns out to be an interesting result for $S_{\psi\phi}$, which is indeed close to the recent determination by LHCb. On the other hand, $A^b_{sl}$ has only been measured by D0, but hopefully LHCb will also have something to say in the near future. Once the experimental uncertainties are lowered, such clear predictions will be essential to provide the final answer on how well this model performs. In plot \subref{fig:BSgammamtp}, we show the correlation between $BR(\bar B\to X_s\gamma)$ and $m_{t'}$. We confirm the finding of Grinstein {\it et al.} that in this model the NP contributions to $BR(B\to X_s\gamma)$ always enhance it towards the central experimental value. However, interestingly, only very small enhancements of this branching ratio are allowed when also the bounds from $\Delta F=2$ observables are taken into account. \section{Comparison with other Models} A complete comparison of the patterns of flavour violation in MGF with the corresponding patterns found in numerous models \cite{Buras:2010wr} would require the study of $\Delta F=1$ processes; however, already the $\Delta F=2$ observables allow a clear distinction between MGF and the simplest extensions of the SM. Here we just quote a few examples: \begin{itemize} \item[-] In the original MFV framework restricted to LL operators, the so-called constrained MFV \cite{Blanke:2006ig}, the $|\varepsilon_K|-S_{\psi K_S}$ anomaly can only be solved by enhancing $|\varepsilon_K|$, since $S_{\psi K_S}$ remains SM-like in this framework. Consequently, only the exclusive value of $|V_{ub}|$ is viable. An example of such a framework is the model with a single universal extra dimension (UED), for which a very detailed analysis of $\Delta F=2$ observables has been performed in \cite{Buras:2002ej}. In fact this is a general property of CMFV models, as demonstrated in \cite{Blanke:2006yh}. Thus, after $|\varepsilon_K|$ has been taken into account and the contributions from tree-level heavy gauge boson exchanges have been eliminated, MGF resembles CMFV if only $\Delta F=2$ processes are considered. However, $\Delta F=1$ processes can provide a distinction. In fact, whereas in MGF the NP contributions always enhance $BR(\overline{B}\to X_s\gamma)$, in UED they always suppress this branching ratio \cite{Buras:2003mk}. Concerning the $|\varepsilon_K|$--$\Delta M_{B_{d,s}}$ tension, MGF and CMFV are again similar. \item[-] The 2HDM framework with MFV and flavour blind phases, the so-called ${\rm 2HDM_{\overline{MFV}}}$ \cite{Buras:2010mh}, can on the other hand be easily distinguished from MGF. In this model NP contributions to $\varepsilon_K$ are tiny and the inclusive value of $|V_{ub}|$ is required in order to obtain the correct value of $|\varepsilon_K|$. The interplay of the CKM phase with the flavour blind phases in the Yukawa couplings and the Higgs potential suppresses $S_{\psi K_S}$ while simultaneously enhancing the asymmetry $S_{\psi \phi}$. As in the case of MGF, this asymmetry is SM-like or has a reversed sign.
It is $S_{\psi \phi}$, together with the value of $|V_{ub}|$, which will distinguish MGF from ${\rm 2HDM_{\overline{MFV}}}$. \item[-] Finally, let us mention the left-right asymmetric model (LRAM), for which a very detailed FCNC analysis has recently been presented in \cite{Blanke:2011ry}. As this model has many free parameters, both values of $|V_{ub}|$, inclusive and exclusive, are valid. The model contains many new phases, and the $|\varepsilon_K|-S_{\psi K_S}$ anomaly can be solved in many ways. Moreover, the model struggles with the $\varepsilon_K$ constraint due to huge neutral Higgs tree-level contributions. However, as demonstrated in Section 7 of that paper, a simple structure of the right-handed mixing matrix gives a transparent solution to the $|\varepsilon_K|-S_{\psi K_S}$ anomaly by enhancing $|\varepsilon_K|$, keeping $S_{\psi K_S}$ at the SM value and, in contrast to MGF, automatically {\it suppressing} $\Delta M_{B_{d,s}}$ and significantly {\it enhancing} $S_{\psi\phi}$. While MGF falls behind in this comparison, one should emphasise that MGF has very few parameters and provides an explanation of quark masses and mixings, which is not the case in the LRAM. \end{itemize} \section{Conclusion} We have presented an extensive analysis of $\Delta F=2$ observables and $B\to X_s\gamma$ for a specific MGF model presented in Ref.~\cite{Grinstein:2010ve}, which is of special interest due to the small number of new parameters. In particular, we performed a detailed study of the effects of tree-level contributions due to the presence of heavy flavour gauge bosons. Our main findings are as follows. The model predicts a clear pattern of deviations from the SM: \begin{itemize} \item[-] Enhancements of $|\varepsilon_K|$ and $\Delta M_{B_{d,s}}$ in a correlated manner by new box-diagram contributions, and suppression of $|\varepsilon_K|$ by tree-level heavy gauge boson contributions with only a small impact on $\Delta M_{B_{d,s}}$. \item[-] The mixing-induced CP-asymmetries $S_{\psi K_S}$ and $S_{\psi \phi}$ are unaffected by box-diagram contributions, but receive sizeable destructive contributions from tree-level heavy gauge boson exchanges, such that the sign of $S_{\psi \phi}$ can be reversed. However, these effects are basically eliminated once the $\varepsilon_K$ constraint is taken into account. \item[-] The $\varepsilon_K-S_{\psi K_S}$ anomaly present in the SM is removed through the enhancement of $|\varepsilon_K|$, leaving $S_{\psi K_S}$ practically unmodified. This is achieved with the help of box-diagram contributions in the regions of the parameter space for which they are dominant over heavy flavour gauge boson contributions, which interfere destructively with the SM amplitudes. \item[-] This structure automatically implies that in this model the exclusive determination of $|V_{ub}|$ is favoured. \item[-] The semileptonic CP-asymmetry $A^b_{sl}$, which a priori could receive large contributions from the tree-level flavour gauge boson diagrams, remains close to the SM value once $\vert\varepsilon_K\vert$ is required to be in agreement with the data. \item[-] Most importantly, the $\varepsilon_K$ constraint implies central values of $\Delta M_{B_{d,s}}$ roughly $50\%$ higher than the very precise data. This disagreement cannot be fully cured by hadronic uncertainties, although a significant reduction in the values of $F_{B_{d,s}}$ could soften this problem.
\item[-] We have pointed out that the ratio of the mass differences, $\Delta M_{B_d}/\Delta M_{B_s}$, and the ratio of the $B^+\to\tau^+\nu$ branching ratio to $\Delta M_{B_d}$, together with $\varepsilon_K$, provide strong constraints on the parameter space of the model. Furthermore, the correlation between these two observables encodes a serious tension in the flavour data that can only be worsened in this model. \item[-] In agreement with Ref.~\cite{Grinstein:2010ve}, we find that $BR(B\to X_s\gamma)$ is naturally enhanced in this model, bringing the theory closer to the data, although only small corrections are allowed by the $\Delta F=2$ bounds. \item[-] We have demonstrated how this model can be distinguished from other extensions of the SM by means of the flavour data. \end{itemize} In summary, the great virtue of this model is its predictivity: within the coming years it will become evident whether it can be considered a valid description of the low-energy data. Possibly the most transparent viability tests of the model are the future values of $F_{B_{d,s}}$ and $|V_{ub}|$. For the model to accommodate the data, $F_{B_{d,s}}$ have to be reduced by $15\%$ and $|V_{ub}|$ has to be close to $3.4\times 10^{-3}$. Violation of either of these requirements will put this extension of the SM in trouble. We emphasise that the triple correlation between $\Delta M_{B_{d,s}}$, $\varepsilon_K$ and $S_{\psi K_S}$ was instrumental in reaching this conclusion. The same result is obtained from the complementary study of the $R_{BR/\Delta M}$--$R_{\Delta M_B}$ correlation. Our work shows that the study of correlations among flavour observables and the accuracy of non-perturbative parameters like $\hat B_K$ and $F_{B_{d,s}}$ are crucial for indirect searches for physics beyond the Standard Model. \section*{Acknowledgements} We would like to thank Benjam\'in Grinstein for useful details on Ref.~\cite{Grinstein:2010ve}, and Ulrich Nierste and Paride Paradisi for very interesting discussions. This research was done in the context of the ERC Advanced Grant project ``FLAVOUR'' (267104). The work of MVC has been partially supported by the Graduiertenkolleg GRK 1054 of the DFG.
\section{Introduction} \label{intro} A central issue in natural language processing is that of ambiguity resolution in syntactic parsing, and it is generally acknowledged that a certain amount of semantic knowledge is required for this. In particular, the case frames of verbs, namely the knowledge of which nouns are allowed at given case slots of given verbs, is crucial for this purpose. Such knowledge is not available in existing dictionaries in a satisfactory form, and hence the problem of automatically acquiring such knowledge from large corpus data has become an important topic in the area of natural language processing and machine learning. (cf. \cite{PTL92,ALN95,LA95}) In this paper, we propose a new method of learning such knowledge, and empirically demonstrate its effectiveness. The knowledge of case slot patterns can be thought of as the co-occurrence information between verbs and nouns\footnote{We are interested in the co-occurrence information between any two word categories, but in much of the paper we assume that it is between nouns and verbs to simplify our discussion.} at a fixed case slot, such as at the subject position. In this paper, we employ the following quantity as a measure of co-occurrence (called `association norm'): \begin{equation} A(n, v) = \frac{p(n,v)}{p(n) p(v)} \label{eqassoc} \end{equation} where $p(n,v)$ denotes the joint distribution over the nouns and the verbs (over $N \times V$), and $p(n)$ and $p(v)$ the marginal distributions over $N$ and $V$ induced by $p(n,v)$, respectively. Since $A(n,v)$ is obtained by dividing the joint probability of $n$ and $v$ by their respective marginal probabilities, it is intuitively clear that it measures the degree of co-occurrence between $n$ and $v$. This quantity is essentially the same as a measure proposed in the context of natural language processing by Church and Hanks \cite{CH89} called the `association ratio,' which can be defined as $I(n,v) = \log A(n, v)$. Note that $I(n,v)$ is the quantity referred to as `self mutual information' in information theory, whose expectation with respect to $p(n,v)$ is the well-known `mutual information' between random variables $n$ and $v$. The learning problem we are considering, therefore, is in fact a very general and important problem with many potential applications. A question that immediately arises is whether the association norm as defined above is the right measure to use for the purpose of ambiguity resolution. Below we will demonstrate why this is indeed the case. Consider the sentence, `the sailor smacked the postman with a bottle.' The ambiguity in question is between `smacked ... with a bottle' and `the postman with a bottle.' Suppose we take the approach of comparing conditional probabilities, $p_{inst}(smack|bottle)$ and $p_{poss}(postman|bottle)$, as in some past research \cite{LA95}. (Here we let $p_{case}$, in general, denote the joint/conditional probability distribution over two word categories at the case slot denoted by $case$.) Then, since the word `smack' is such a rare word, it is likely that we will have $p_{inst}(smack|bottle) < p_{poss}(postman|bottle)$, and conclude as a result that the `bottle' goes with the `postman.' Suppose on the other hand that we compare $A_{inst}(smack,bottle)$ and $A_{poss}(postman, bottle)$. This time we are likely to have $A_{inst}(smack,bottle) > A_{poss}(postman, bottle)$, and conclude that the `bottle' goes with `smack,' giving the intended reading of the sentence.
The crucial fact here is that the two words `smack' and `postman' {\em have} occurred in the sentence of interest, and what we are interested in comparing is the respective likelihood that two words co-occurred at two different case slots (possessive/instrumental), {\em given} that the two words have occurred. It therefore makes sense to compare the joint probability divided by the respective marginal probabilities, namely $A(n,v) = p(n,v)/p(n) p(v)$. If one employed $p(n|v)$ as the measure of co-occurrence, its learning problem, for a fixed verb $v$, would reduce to that of learning a simple distribution. In contrast, as $A(n,v)$ does not define a distribution, it is not immediately clear how we should formulate its estimation problem. In order to resolve this issue, we make use of the following identity: \begin{equation} p(n|v) = \frac{p(n,v)}{p(v)} = \frac{p(n,v)}{p(n) p(v)} \cdot p(n) = A(n,v) \cdot p(n). \label{eqid} \end{equation} In other words, $p(n|v)$ can be {\em decomposed} into the product of the association norm and the marginal distribution over $N$. Now, since $p(n)$ is simply a distribution over the nouns, it can be estimated with an ordinary method of density estimation. (We let $\hat{p}(n)$ denote the result of such an estimation.) It is worth noting here that for this estimation, even when we are estimating $p(n|v)$ for a particular verb $v$, we can use the {\em entire} sample for $N \times V$. We can then estimate $p(n|v)$, using as hypothesis class $H(\hat{p}) = \{ A(n,v) \cdot \hat{p}(n) | A \in {\cal A} \}$, where ${\cal A}$ is some class of representations for the association norm $A(n,v)$. Again, for a fixed verb, this is a simple density estimation problem, and can be done using any of the many well-known estimation strategies. In particular, we propose and employ a method based on the MDL (Minimum Description Length) principle \cite{Ris78,qr-idtmdlp-89}, thus guaranteeing a near-optimal estimation of $p(n|v)$ \cite{yama}. As a result, we will obtain a model for $p(n|v)$, expressed as a product of $A(n,v)$ and $\hat{p}$, thus giving an estimate of the association norm $A(n,v)$ as a {\em side effect} of estimating $p(n|v)$. It has been noticed in the area of corpus-based natural language processing that any method that attempts to estimate either a co-occurrence measure or a probability value for each noun separately requires far too many examples to be useful in practice. (This is usually referred to as the {\em data sparseness problem}.) In order to circumvent this difficulty, we proposed in an earlier paper \cite{LA95} an MDL-based method that estimates $p(n|v)$ (for a particular verb), using a noun classification that exists within a given thesaurus. That is, this method estimates the noun distribution in terms of a `tree cut model,' which defines a probability distribution by assigning a generation probability to each category in a `cut' within a given thesaurus tree.\footnote{See Section 2 for a detailed definition of the `tree cut models.'} Thus, the categories in the cut are used as the `bins' of a histogram, so to speak. The use of MDL ensures that an optimal tree cut is selected, one that is fine enough to capture the tendency in the input data, but coarse enough to allow the estimation of probabilities of categories within it with reasonable accuracy. The shortcoming of the method of \cite{LA95}, however, is that it estimates $p(n|v)$ but {\em not} $A(n, v)$.
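To make the quantities involved concrete, the following is a minimal illustrative sketch, in Python, of how the association norm of (\ref{eqassoc}) could be computed from raw co-occurrence counts and combined with the marginal $p(n)$ via the identity (\ref{eqid}); the counts, words and case slot are invented purely for illustration, and this is not the class-based estimation method developed in the remainder of the paper.

\begin{verbatim}
# Hypothetical (noun, verb) counts observed at one fixed case slot.
from collections import Counter

pairs = Counter({("bottle", "smack"): 2, ("bottle", "hold"): 40,
                 ("glass", "hold"): 25, ("letter", "smack"): 1})
total = sum(pairs.values())

def p_joint(n, v):
    return pairs[(n, v)] / total

def p_noun(n):
    return sum(c for (m, _), c in pairs.items() if m == n) / total

def p_verb(v):
    return sum(c for (_, w), c in pairs.items() if w == v) / total

def assoc(n, v):
    # Association norm A(n,v) = p(n,v) / (p(n) p(v)).
    return p_joint(n, v) / (p_noun(n) * p_verb(v))

# The identity p(n|v) = A(n,v) * p(n) can be checked numerically:
n, v = "bottle", "smack"
print(assoc(n, v) * p_noun(n), p_joint(n, v) / p_verb(v))
\end{verbatim}

Of course, raw relative frequencies of this kind are exactly what the data sparseness problem renders unreliable, which is what motivates the class-based estimation described below.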
In this paper, we apply the general framework of estimating the association norm to this particular problem setting, and propose an efficient estimation method for $A(n,v)$ based on MDL. More formally, we assume that the marginal distribution over the nouns is definable by a tree cut model, and that the association norm (for each verb) can also be defined by a similar model which associates an $A$ value with each of the categories in a cut in the same thesaurus tree (called an `association tree cut model'), and hence $p(n|v)$ for a particular $v$ can be represented as the product of a pair of these tree cut models (called a `tree cut pair model'). (See Figure~\ref{fig:tcm}~(a),(b) and (c) for examples of a `tree cut,' a `tree cut model,' and an `association tree cut model,' all in the same thesaurus tree.) We have devised an efficient algorithm for each of the two steps in the general estimation strategy, namely, of finding an optimal tree cut model for the marginal distribution $p(n)$ (step 1), and finding an optimal association tree cut model for $A(n,v)$ for a particular $v$ (step 2). Each step will select an {\em optimal tree cut} in the thesaurus tree, thus providing appropriate levels of generalization for both $p(n)$ and $A(n,v)$. We tested the proposed method in an experiment, in which the association norms for a number of verbs and nouns were acquired using WordNet \cite{Miller93} as the thesaurus and using corpus data from the Penn Tree Bank as training data. We also performed ambiguity resolution experiments using the association norms obtained with our learning method. The experimental results indicate that the new method achieves better performance than existing methods for the same task, especially in terms of `coverage.'\footnote{Here `coverage' refers to the percentage of the test data for which the method could make a decision.} We found that the optimal tree cut found for $A(n,v)$ was always coarser (i.e.\ closer to the root of the thesaurus tree) than that for $p(n|v)$ found using the method of \cite{LA95}. This, we believe, contributes directly to the wider coverage achieved by our new method. \section{The Tree Cut Pair Model} \label{sec:models} In this section, we will describe the class of representations we employ for distributions over nouns as well as the association norm between nouns and a particular verb.\footnote{In general, this can be between words of any two categories, but for ease of exposition, we assume here that it is between nouns and verbs.} A thesaurus is a tree such that each of its leaf nodes represents a noun, and its internal nodes represent noun classes.\footnote{This condition is not strictly satisfied by most of the publicly available thesauruses, but we make this assumption to simplify the subsequent discussion.} The class of nouns represented by an internal node is the set of nouns represented by leaf nodes dominated by that node. A `tree cut' in a thesaurus tree is a sequence of internal/leaf nodes, such that its members dominate all of the leaf nodes exhaustively and disjointly. Equivalently, therefore, a tree cut is a set of noun categories/nouns which defines a partition over the set of all nouns represented by the leaf nodes of the thesaurus.
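Before the formal definitions given below, the following minimal sketch shows one possible way of representing a toy thesaurus tree and checking the tree cut condition in code; the tree, the nouns and the candidate cuts are invented for illustration only.

\begin{verbatim}
# A toy thesaurus: internal nodes map to their children; leaves are nouns.
tree = {
    "ANIMAL": ["BIRD", "INSECT"],
    "BIRD":   ["swallow", "crow", "eagle"],
    "INSECT": ["bug", "bee"],
}

def leaves(node):
    """Return the set of nouns (leaves) dominated by `node`."""
    if node not in tree:                  # a leaf represents a single noun
        return {node}
    result = set()
    for child in tree[node]:
        result |= leaves(child)
    return result

def is_tree_cut(cut, root="ANIMAL"):
    """A tree cut must cover all leaves exhaustively and disjointly."""
    covered = [leaves(c) for c in cut]
    disjoint = sum(len(s) for s in covered) == len(set().union(*covered))
    exhaustive = set().union(*covered) == leaves(root)
    return disjoint and exhaustive

print(is_tree_cut(["BIRD", "INSECT"]))      # True
print(is_tree_cut(["BIRD", "bug", "bee"]))  # True
print(is_tree_cut(["ANIMAL", "BIRD"]))      # False: BIRD is covered twice
\end{verbatim}

A tree cut model, as defined next, simply attaches a probability to each member of such a cut and spreads it uniformly over the nouns below it.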
Now we define the notion of a `tree cut model' (or a TCM for short) representing a distribution over nouns.\footnote{This definition essentially follows that given by Li and Abe in \cite{LA95}.} \begin{definition} Given a thesaurus tree $t$, a `tree cut model' is a pair $p = (\tau, q)$, where $\tau$ is a tree cut in $t$, and $q$ is a parameter vector specifying a probability distribution over the members of $\tau$. \end{definition} A tree cut model defines a probability distribution by sharing the probability of each noun category uniformly among all the nouns belonging to that category. That is, the probability distribution $p$ represented by a tree cut model $(\tau, q)$ is given by \begin{equation} \forall C \in \tau \: \forall x \in C \: p(x) = \frac{q(C)}{|C|} \label{deftc} \end{equation} A tree cut model can also be represented by a tree, each of whose leaf nodes is a pair consisting of a noun (category) and a parameter specifying its (collective) probability. We give an example of a simple TCM for the category `ANIMAL' in Figure~\ref{fig:tcm}(b). \begin{figure*}[tb] \begin{center} $\begin{array}{cc} {\epsfxsize7.0cm\epsfysize2.8cm\epsfbox{treecut1.eps}} & {\epsfxsize7.0cm\epsfysize3.0cm\epsfbox{tcm.eps}} \\ \mbox{(a)} & \mbox{(b)} \\ {\epsfxsize7.0cm\epsfysize3.0cm\epsfbox{atcm.eps}} & {\epsfxsize7.0cm\epsfysize3.0cm\epsfbox{tcmp.eps}} \\ \mbox{(c)} & \mbox{(d)} \end{array}$ \vspace*{-0.4cm} \caption{(a) a tree cut (b) a TCM $p$ (c) an ATCM $A$ (d) distribution of $h = A \cdot p$} \label{fig:tcm} \end{center} \vspace*{-0.8cm} \end{figure*} We similarly define the `association tree cut model' (or ATCM for short). \begin{definition} Given a thesaurus tree $t$ and a fixed verb $v$, an `association tree cut model' (ATCM) $A(\cdot, v)$ is a pair $(\tau, p)$, where $\tau$ is a tree cut in $t$, and $p$ is a function from $\tau$ to $\Re$. \end{definition} An association tree cut model defines an association norm by assigning the same association value $A$ to each noun belonging to a noun category. That is, \begin{equation} \forall C \in \tau \: \forall x \in C \: A(x, v) = A(C, v) \label{defatc} \end{equation} We give an example ATCM in Figure~\ref{fig:tcm}(c), which is meant to be an ATCM for the subject slot of the verb `fly' within the category of `ANIMAL.' We then define the notion of a `tree cut pair model,' which is a model for $p(n|v)$ for some fixed verb $v$. \begin{definition} A `tree cut pair model' $h$ is a pair $h = (A, p)$, where $A$ is an association tree cut model (for a certain verb $v$), and $p$ is a tree cut model (for $N$), which satisfies the stochastic condition, namely, \begin{equation} \sum_{n \in N} A(n, v) \cdot p(n) = 1. \label{eq:stoch} \end{equation} \end{definition} The above stochastic condition ensures that $h$ defines a legal distribution $h(n|v)$. An example of a tree cut pair model is the pair consisting of the models of Figure~\ref{fig:tcm}(b) and (c), which together define the distribution shown in Figure~\ref{fig:tcm}(d); one can verify that it in fact satisfies the stochastic condition (\ref{eq:stoch}). \section{A New Method of Estimating Association Norms} As described in the Introduction, our estimation procedure consists of two steps: The first step is for estimating $p$, and the second for estimating $A$ given an estimate $\hat{p}$ of $p$. The first step can be performed by an estimation method for tree cut models proposed by the authors in \cite{LA95}, and is related to the `Context' method of Rissanen \cite{riss83}.
This method, called `Find-MDL,' is an efficient implementation of the MDL principle for the particular class of tree cut models, and will be exhibited for completeness, as a sub-procedure of the entire estimation algorithm. Having estimated $p$ by Find-MDL using the entire sample $S$ (we write $\hat{p}$ for the result of this estimation), we will then estimate $A$. As explained in the Introduction, we will use as the hypothesis class for this estimation, $H(\hat{p}) = \{ A(n,v) \cdot \hat{p}(n) | A \in {\cal A}(t) \} $ where ${\cal A}(t)$ is the set of ATCMs for the given thesaurus tree $t$, and select, according to the MDL principle, a member of $H(\hat{p})$ that best explains the part of the sample that corresponds to the verb $v$, written $S_v$. That is, the result of the estimation, $\hat{h}$, is to be given by\footnote{All logarithms in this paper are to the base 2.} \begin{equation} \hat{h} = \arg \min_{h \in H(\hat{p})} d.l.(h) + \sum_{n \in S_v} - \log h(n|v). \label{mdleq1} \end{equation} In the above, we used `$d.l.(h)$' to denote the model description length of $h$, and as is well-known, $\sum_{n \in S_v} - \log h(n|v)$ is the data description length for sample $S_v$ with respect to $h$. Since the model description length of $\hat{p}$ is fixed, we only need to consider the model description length of $A$, which consists of two parts: the description length for the tree cut, and that for the parameters. We assume that we employ the `uniform' coding scheme for the tree cuts, that is, all the tree cuts have exactly the same description length. Thus, it suffices to consider just the parameter description length for the purpose of minimization. The description length for the parameters is calculated as $(par(A)/2) \log |S_v|$, where $par(A)$ denotes the number of free parameters in the tree cut of $A$. Using $(1/2) \log |S_v|$ bits per parameter is known to be asymptotically optimal, since the estimation accuracy of each parameter is of the order $1/\sqrt{|S_v|}$. Note here that we use $\log |S_v|/2$ bits and {\em not} $\log |S|/2$, since the numerator $\hat{h}$ of $\hat{A}$ is estimated using $S_v$, even though the denominator $\hat{p}$ is estimated using the entire sample $S$. The reason is that the estimation error for $\hat{A}$, provided that we assume $\hat{p}(C) \geq \epsilon$ for a reasonable constant $\epsilon$, is dominated by the estimation error for $\hat{h}$. Now, since we have $h(n|v) = A(n,v) \cdot \hat{p}(n)$ by definition, the data description length can be decomposed into the following two parts: \begin{equation} \sum_{n \in S_v} - \log h(n|v) = \sum_{n \in S_v} - \log A(n,v) + \sum_{n \in S_v} - \log \hat{p}(n) \end{equation} Notice here that the second term does not depend on the choice of $A$, and hence for the minimization in (\ref{mdleq1}), it suffices to consider just the first term, $\sum_{n \in S_v} - \log A(n,v)$. From this and the preceding discussion on the model description length, (\ref{mdleq1}) yields: \begin{equation} \hat{h} = \arg \min_{h \in H(\hat{p})} \frac{par(A)}{2} \log |S_v| + \sum_{n \in S_v} - \log A(n,v) \label{mdleq2} \end{equation} We will now describe how we calculate the data description length for a tree cut pair model $h = (A, \hat{p})$. The data description length {\em given} a fixed tree cut is calculated using the maximum likelihood estimate (MLE) for $h(n|v)$, i.e.\ by maximizing the likelihood $L(h, S_v) = \prod_{n \in S_v} h(n|v)$.
Since in general the tree cut of $A$ does not coincide with the tree cut of $\hat{p}$, this maximization problem appears somewhat involved. The following lemma, however, establishes that it can be solved efficiently. \begin{lemma} Given a tree cut model $\hat{p} = (\sigma, p)$ and a tree cut $\tau$, the MLE (maximum likelihood estimate) $\hat{h}$ = $\hat{A} \cdot \hat{p}$ is given by setting $\hat{h}(C'|v)$ for each $C' \in \tau$ by \[ \hat{h}(C'|v) = \frac{\sharp(C', S_v)}{|S_v|} \] where in general we let $\sharp(C, S)$ denote the number of occurrences of nouns belonging to class $C$ in sample $S$. The estimate for $\hat{A}$ is then given by letting, for each $C' \in \tau$, \[ \hat{A}(C',v) = \frac{\hat{h}(C'|v)}{\hat{p}(C')} \] where $\hat{p}(C')$ is defined inductively as follows: \begin{enumerate} \item If $C' = C$ for some $C \in \sigma$, then $\hat{p}(C') = \hat{p}(C)$. \item If $C'$ dominates $C_1,...,C_k$ and $\hat{p}(C_1),...,\hat{p}(C_k)$ are defined, then $\hat{p}(C') = \sum_{i=1}^{k} \hat{p}(C_i)$. \item If $C'$ is dominated by $C$ and if $\hat{p}(C)$ is defined, then $\hat{p}(C') = \frac{|C'|}{|C|} \hat{p}(C)$. \end{enumerate} \label{lem:mle} \end{lemma} \noindent{{\bf Proof of Lemma~\ref{lem:mle}}} Given the tree cuts $\tau$ and $\sigma$, define $\tau \wedge \sigma$ to be the tree cut whose noun partition equals the coarsest partition that is finer than or equal to both the noun partitions of $\tau$ and $\sigma$. Then, the likelihood function $L(h, S_v)$ which we are trying to maximize (for $h = (A, \hat{p})$) can be written as follows, \begin{equation} L(h, S_v) = \prod_{C \in \tau \wedge \sigma} (A(C, v) \cdot \hat{p}(C))^{\sharp(C, S_v)} \label{eq:max1} \end{equation} where $A(C, v)$ for $C \not\in \tau$ and $\hat{p}(C)$ for $C \not\in \sigma$ are defined so that they are consistent\footnote{That is, $\hat{p}$ is defined as specified in the lemma, and $A$ is defined by inheriting the $A$ value of its ancestor in $\tau$.} with the definitions of $A(n,v)$ and $\hat{p}(n)$. As before, since $\hat{p}$ is fixed, the above maximization problem is equivalent to maximizing just the product of $A$ values, namely, \begin{equation} \arg \max_A L(h, S_v) = \arg \max_A \prod_{C \in \tau \wedge \sigma} A(C, v)^{\sharp(C, S_v)} \label{eq:max2} \end{equation} Since $\tau \wedge \sigma$ is always finer than $\tau$, for each $C \in \tau \wedge \sigma$, there exists some $C' \in \tau$ such that $A(C, v) = A(C', v)$. Thus, \begin{equation} \arg \max_A L(h, S_v) = \arg \max_A \prod_{C' \in \tau} A(C', v)^{\sharp(C', S_v)} \label{eq:max3} \end{equation} Note that the maximization is subject to the condition: \begin{equation} \sum_{n \in N} A(n,v) \cdot \hat{p}(n) = \sum_{C' \in \tau} A(C',v) \cdot \hat{p}(C') = 1. \label{eq:cond} \end{equation} Since multiplying by a constant leaves the argument of maximization unchanged, (\ref{eq:max3}) yields \begin{equation} \arg \max_A L(h, S_v) = \arg \max_{A} \prod_{C' \in \tau} (A(C', v) \cdot \hat{p}(C'))^{\sharp(C', S_v)} \label{eq:max4} \end{equation} where the maximization is under the same condition (\ref{eq:cond}). We stress that the quantity being maximized in (\ref{eq:max4}) is {\em different} from the likelihood in (\ref{eq:max1}), but both attain their maximum for the same values of $A$. Thus the maximization problem is reduced to one of the form: `maximize $\prod_i (a_i \cdot p_i)^{k_i}$ subject to $\sum_i a_i \cdot p_i = 1$.' As is well-known, this is achieved by setting $a_i \cdot p_i = \frac{k_i}{\sum_i k_i}$ for each $i$.
Thus, (\ref{eq:max2}) is obtained by setting, for each $C' \in \tau$, \begin{equation} \hat{h}(C'|v) = A(C', v) \cdot \hat{p}(C') = \frac{\sharp(C', S_v)}{|S_v|} \end{equation} Hence, $\hat{A}$ is given, for each $C' \in \tau$, by \begin{equation} \hat{A}(C', v) = \frac{\hat{h}(C'|v)}{\hat{p}(C')} \end{equation} This completes the proof. $\Box$ We now go on to the issue of how we can find a model satisfying (\ref{mdleq2}) efficiently: this is possible with a recursive algorithm which resembles Find-MDL of \cite{LA95}. This algorithm works by recursively applying itself on subtrees to obtain optimal tree cuts for each of them, and decides whether to return the tree cut obtained by appending all of them, or the cut consisting solely of the top node of the current subtree, by comparing the respective description lengths. In calculating the data description length at each recursive call, the formulas of Lemma~\ref{lem:mle} are used to obtain the MLE. The details of this procedure are shown below as algorithm `Assoc-MDL.' Note, in the algorithm description, that $S$ denotes the input sample, which is a sequence of elements of $N \times V$. For any fixed verb $v \in V$, $S_v$ denotes the part of $S$ that corresponds to verb $v$, i.e. $S_v= \{ n \,\vert\, (n,v) \in S \}_M$. (We use $\{ \}_M$ when denoting a `multi-set.') We use $\pi_1(S)$ to denote the multi-set of nouns appearing in sample $S$, i.e. $\pi_1(S) = \{ n \,\vert\, \exists v \in V \: (n,v) \in S \}_M$. In general, $t$ stands for a node in a tree, or equivalently the class of nouns it represents. It is initially set to the root node of the input thesaurus tree. In general, `$[...]$' denotes a list. \begin{tabbing} {\bf algorithm} Assoc-MDL($t, S$) \\ 1. $\hat{p}$ := Find-MDL($t, \pi_1(S)$) \\ 2. $\hat{A}$ := Find-Assoc-MDL($S_v, t, \hat{p}$)\\ 3. return(($\hat{A},\hat{p}$)) \\ \\ {\bf sub-procedure} Find-MDL($t, S$) \\ 1. {\bf if} $t$ is a leaf node \\ 2. {\bf then} return($([t], \hat{p}(t, S))$) \\ 3. {\bf else} \\ 4. \hspace{5mm} For each child $t_i$ of $t$, $c_i$ $:=$Find-MDL($t_i, S$)\\ 5. \hspace{5mm} $\gamma$$:=$ append($c_i$) \\ 6. \hspace{5mm} {\bf if} $\sharp(t, \pi_1(S)) (- \log \frac{\hat{p}(t)}{|t|}) + \frac{1}{2} \log N$ $<$ \\ \hspace{5mm} \hspace{5mm} $\sum_{t_i \in children(t)} \sharp(t_i, \pi_1(S)) (- \log \frac{\hat{p}(t_i)}{|t_i|}) + \frac{|\gamma|}{2} \log N$ \\ 7. \hspace{5mm} {\bf then} return($([t], \hat{p}(t, S))$) \\ 8. \hspace{5mm} {\bf else} return$(\gamma)$ \\ \end{tabbing} \begin{tabbing} {\bf sub-procedure} Find-Assoc-MDL($S_v, t, \hat{p}$) \\ 1. {\bf if} $t$ is a leaf node \\ 2. {\bf then} return($([t], \hat{A}(t,v))$) \\ 3. {\bf else} Let $\tau :=$ {\em children}$(t)$ \\ 4. \hspace{5mm} $\hat{h}(t|v) := \frac{\sharp(t,S_v)}{|S_v|}$\\ 5. \hspace{5mm} $\hat{A}(t,v) := \frac{\hat{h}(t|v)}{\hat{p}(t)}$\\ /* We use the definitions in Lemma~\ref{lem:mle} to calculate $\hat{p}(t)$ */\\ 6. \hspace{5mm} For each child $t_i \in \tau$ of $t$ \\ 7. \hspace{5mm} $\gamma_i$ $:=$Find-Assoc-MDL($S_v, t_i, \hat{p}$)\\ 8. \hspace{5mm} $\gamma$$:=$ append($\gamma_i$) \\ 9. \hspace{5mm} {\bf if} $ \sharp(t, S_v) (- \log \hat{A}(t, v)) + \frac{1}{2} \log |S_v|$ $<$ \\ \hspace{5mm} \hspace{5mm} $\sum_{t_i \in \tau} \sharp(t_i, S_v) (- \log \hat{A}(t_i,v)) + \frac{|\tau|}{2} \log |S_v|$ \\ /* The values of $\hat{A}(t_i,v)$ used above are to be */ \\ /* those in $\gamma_i$ */ \\ 11. {\bf then} return($([t], \hat{A}(t, v))$) \\ 12.
{\bf else} return($\gamma$) \end{tabbing} Given Lemma~\ref{lem:mle}, it is not difficult to see that Find-Assoc-MDL indeed finds a tree cut pair model which minimizes the total description length. Also, its running time is clearly linear in the size (number of leaf nodes) of the thesaurus tree, and linear in the input sample size. The following proposition summarizes these observations. \begin{proposition} Algorithm {\em Find-Assoc-MDL} outputs $\hat{h} \in H(\hat{p}) = \{ A \cdot \hat{p} | A \in {\cal A}(t) \}$ (where ${\cal A}(t)$ denotes the class of association tree cut models for thesaurus tree $t$) such that \[ \hat{h} = \arg \min_{h \in H(\hat{p})} d.l.(h) + \sum_{n \in S_v} - \log h(n|v) \] and its worst case running time is $O(|S| \cdot |t|)$, where $|S|$ is the size of the input sample, and $|t|$ is the size (number of leaves) of the thesaurus tree. \end{proposition} We note that an analogous (and easier) proposition on Find-MDL is stated in \cite{LA95}. \section{Comparison with Existing Methods} A simpler alternative formulation of the problem of acquiring case frame patterns is to think of it as the problem of learning the distribution over nouns at a given case slot of a given verb, as in \cite{LA95}. In that paper, the algorithm Find-MDL was used to estimate $p(n|v)$ for a fixed verb $v$, which is merely a distribution over nouns. The method was guaranteed to be near-optimal as a method of estimating the noun distribution, but it suffered from the disadvantage that it tended to be influenced by the {\em absolute frequencies} of the nouns. This is a direct consequence of employing a simpler formulation of the problem, namely as that of learning a distribution over nouns at a given case slot of a given verb, and {\em not} an association norm between the nouns and verbs. To illustrate this difficulty, suppose that we are given 4 occurrences of the word `swallow,' 7 occurrences of `crow,' and 1 occurrence of `robin,' say at the subject position of `fly.' The method of \cite{LA95} would probably conclude that `swallow' and `crow' are likely to appear at the subject position of `fly,' but not `robin.' But the reason why the word `robin' is not observed many times may be attributable to the fact that this word simply has a low absolute frequency, irrespective of the context. For example, `swallow,' `crow,' and `robin' might each have absolute frequencies of 42, 66, and 9, in the same data with unrestricted contexts. In this case, their frequencies of 4, 7 and 1 as subject of `fly' would probably suggest that they are all roughly equally likely to appear as subject of `fly,' given that they do appear at all. An earlier method proposed by Resnik \cite{Res92} takes the above intuition into account in the form of a heuristic. His method judges whether a given noun tends to co-occur with a verb or not, based on its super-concept having the highest value of association norm with that verb. The association norm he used, called the `selectional association,' is defined, for a noun class $C$ and a verb $v$, as \[ \sum_{n \in C} p(n) \log \frac{p(n,v)}{p(n) p(v)}. \] Despite its intuitive appeal, the most serious disadvantage of Resnik's method, in our view, is the fact that no theoretical justification is provided for employing it as an estimation method, in contrast to the method of Li and Abe \cite{LA95}, which enjoyed theoretical justification, if at the cost of an over-simplified formulation.
This naturally leads to the question of whether there exists a method which estimates a reasonable notion of association norm, and at the same time is theoretically justified as an estimation method. This, we believe, is exactly what the method proposed in the current paper provides. \section{Experimental Results} \vspace*{-0.2cm} \subsection{Learning Word Association Norm} \vspace*{-0.1cm} The training data we used were obtained from the texts of the {\em tagged} Wall Street Journal corpus (ACL/DCI CD-ROM1), which contains 126,084 sentences. In particular, we extracted triples of the form $(verb, case\_slot, noun)$ or $(noun, case\_slot, noun)$ using a standard pattern matching technique. (These two types of triples can be regarded more generally as instances of $(head, case\_slot, slot\_value)$.) The thesaurus we used is basically `WordNet' (version 1.4) \cite{Miller93}, but as WordNet has some anomalies which make it deviate from the definition of a `thesaurus tree' given in Section~\ref{sec:models}, we needed to modify it somewhat.\footnote{These anomalies are: (i) the structure of WordNet is in fact not a tree but a DAG; (ii) the (leaf and internal) nodes stand for a word sense and not a word, and thus the same word can be contained in more than one word sense and vice versa. We refer the interested reader to \cite{LA95} for the modifications we made.} Figure~\ref{fig:wordnet} shows selected parts of the ATCM obtained by Assoc-MDL for the direct object slot of the verb `buy,' as well as the TCM obtained by the method of \cite{LA95}, i.e.\ by applying Find-MDL to the data for that case slot. Note that the nodes in the TCM having probabilities less than 0.01 have been discarded. \begin{figure*}[tb] \begin{center} {\epsfxsize14.0cm\epsfysize3.8cm\epsfbox{assoc.eps}} \end{center} \caption{Parts of the ATCM and the TCM} \label{fig:wordnet} \vspace*{-0.5cm} \end{figure*} We list a number of general tendencies that can be observed in these results. First, many of the nodes that are assigned high $A$ values by the ATCM are not present in the TCM, as they have negligible absolute frequencies. Some examples of these nodes are $\langle property, belonging... \rangle$, $\langle right \rangle$, $\langle ownership \rangle$, and $\langle part,... \rangle$. Our intuition agrees with the judgement that they do represent suitable direct objects of `buy,' and the fact that they were picked up by Assoc-MDL despite their low absolute frequencies seems to confirm the advantage of our method. Another notable fact is that the cut in the ATCM is always `above' that of the TCM. For example, as we can see in Figure~\ref{fig:wordnet}, the four nodes $\langle action \rangle$, $\langle activity \rangle$, $\langle allotment \rangle$, and $\langle commerce \rangle$ in the TCM are all generalized as one node $\langle act \rangle$ in the ATCM, reflecting the judgement that despite their varying absolute frequencies, their association norms with `buy' do not significantly deviate from one another. In contrast, note that the nodes $\langle property \rangle$, $\langle asset \rangle$, and $\langle liability \rangle$ are kept separate in the ATCM, as the first two have high $A$ values, whereas $\langle liability \rangle$ has a low $A$ value, which is consistent with our intuition that one does not want to buy debt.
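To connect the learned ATCM discussed above back to the estimation procedure of Section 3, the following is a minimal, self-contained sketch of the node-level computation in Find-Assoc-MDL on a toy thesaurus. The tree, the marginal probabilities and the verb-specific counts are all invented, the per-node comparison follows our reading of steps 6--12 of the algorithm, and the code is only meant to illustrate the idea, not to reproduce the implementation used in the experiments.

\begin{verbatim}
import math

# Toy thesaurus and invented data (not the WordNet classes used above).
tree = {"ANIMAL": ["BIRD", "INSECT"],
        "BIRD": ["swallow", "crow", "eagle"],
        "INSECT": ["bug", "bee"]}
p_hat = {"swallow": 0.3, "crow": 0.3, "eagle": 0.2, "bug": 0.1, "bee": 0.1}
counts_v = {"swallow": 4, "crow": 7, "eagle": 1}   # nouns seen with verb v
S_v = sum(counts_v.values())

def leaves(t):
    return [t] if t not in tree else [x for c in tree[t] for x in leaves(c)]

def count(t):                 # sharp(t, S_v)
    return sum(counts_v.get(n, 0) for n in leaves(t))

def p_hat_class(t):           # hat-p(t), summed over the leaves below t
    return sum(p_hat[n] for n in leaves(t))

def neglog(a):                # guarded -log2; only ever multiplied by 0 counts
    return -math.log2(a) if a > 0 else 0.0

def find_assoc_mdl(t):
    """Return (cut, {node: A value}) minimizing the description length."""
    A_t = (count(t) / S_v) / p_hat_class(t)        # MLE of A at node t
    if t not in tree:
        return [t], {t: A_t}
    dl_keep = count(t) * neglog(A_t) + 0.5 * math.log2(S_v)
    sub = [find_assoc_mdl(c) for c in tree[t]]
    cut = [n for c, _ in sub for n in c]
    A = {k: val for _, d in sub for k, val in d.items()}
    dl_split = sum(count(n) * neglog(A[n]) for n in cut) \
               + 0.5 * len(cut) * math.log2(S_v)
    return ([t], {t: A_t}) if dl_keep < dl_split else (cut, A)

cut, A = find_assoc_mdl("ANIMAL")
print(cut, {k: round(v, 2) for k, v in A.items()})
# e.g. ['BIRD', 'INSECT'] {'BIRD': 1.25, 'INSECT': 0.0}
\end{verbatim}

Note that, as in (\ref{mdleq2}), only the $A$-dependent part of the data description length is compared, since the contribution of $\hat{p}$ is the same for all candidate cuts.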
\vspace*{-0.1cm} \subsection{PP-attachment Disambiguation Experiment} \vspace*{-0.1cm} We used the knowledge of association norms acquired in the experiment described above to resolve pp-attachment ambiguities. For this experiment, we used the bracketed corpus of the Penn Tree Bank (Wall Street Journal Corpus) \cite{Marcus93} as our data. First we randomly selected one directory of the WSJ files containing roughly $1/26$ of the entire data as our test data and the remainder as the training data. We repeated this process ten times to conduct {\em cross validation}. At each of the ten iterations, we extracted from the test data $(verb,noun_1,prep,noun_2)$ quadruples, as well as the `answer' for the pp-attachment site for each quadruple, by inspecting the parse trees given in the Penn Tree Bank. Then we extracted from the training data $(verb,prep,noun_2)$ and $(noun_1,prep,noun_2)$ triples. Having done so, we preprocessed both the training and test data by removing obviously noisy examples, and subsequently applying 12 heuristic rules, including: (1) changing the inflected form of a word to its stem form, (2) replacing numerals with the word `number,' (3) replacing integers between $1900$ and $2999$ with the word `year,' etc. On average, for each iteration we obtained $820.4$ quadruples as test data, and $19739.2$ triples as training data. For the sake of comparison, we also tested the method proposed in \cite{LA95}, as well as a method based on that of Resnik \cite{Res92}. For the former, we used Find-MDL to learn the distribution of $case\_value$s (nouns) at a specific $case\_slot$ of a specific $head$ (a noun or a verb), and used the acquired conditional probability distribution $p_{head}(case\_value | case\_slot)$ to disambiguate the test patterns. For the latter, we generalized each $case\_value$ at a specific $case\_slot$ of a specific $head$ to the appropriate level in WordNet using the `selectional association' (SA) measure, and used the SA values of those generalized classes for disambiguation.\footnote{Resnik actually generalizes both the $head$s and the $case\_value$s, but here we only generalize $case\_value$s to allow a fair comparison.} More concretely, for a given test pattern ($verb$, $noun_1$, $prep$, $noun_2$), our method compares $\hat{A}_{prep}(noun_2, verb)$ and $\hat{A}_{prep}(noun_2, noun_1)$, and attaches $(prep, noun_2)$ to $verb$ or $noun_1$ depending on which is larger. If they are equal, then it is judged that no decision can be made. Disambiguation using SA is done in a similar manner, by comparing the two corresponding SA values, while that by Find-MDL is done by comparing the conditional probabilities, $\hat{P}_{prep}(noun_2 | verb)$ and $\hat{P}_{prep}(noun_2 | noun_1)$. Table~\ref{tab:results} shows the results of this pp-attachment disambiguation experiment in terms of `coverage' and `accuracy.' Here `coverage' refers to the percentage of the test patterns for which the disambiguation method made a decision, and `accuracy' refers to the percentage of those decisions that were correct. In the table, `Default' refers to the method of always attaching $(prep,noun_2)$ to $noun_1$, and `Assoc,' `SA,' and `MDL' stand for using Assoc-MDL, selectional association, and Find-MDL, respectively. The tendency of these results is clear: in terms of prediction accuracy, Assoc is essentially the same as both SA and MDL, at about 95 per cent.
In terms of coverage, however, Assoc, at 80.0 per cent, significantly outperforms both SA and MDL, which are at 63.7 per cent and 73.3 per cent, respectively. \begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline & Coverage(\%) & Accuracy(\%) \\ \hline Default & $100$ & $70.2$ \\ MDL & $73.3$ & $94.6$ \\ SA & $63.7$ & $94.3$ \\ Assoc & $80.0$ & $95.2$ \\ \hline \end{tabular} \end{center} \caption{Results of PP-attachment disambiguation} \label{tab:results} \end{table} \begin{figure}[tb] \begin{center} {\epsfxsize8.2cm\epsfysize5.0cm\epsfbox{curve.eps}} \caption{The coverage-accuracy curves for MDL, SA and Assoc.} \label{fig:curves} \end{center} \vspace*{-0.5cm} \end{figure} Figure~\ref{fig:curves} plots the `coverage-accuracy' curves for all three methods. The x-axis is the coverage (as a ratio, not a percentage) and the y-axis is the accuracy. These curves are obtained by employing a `confidence test'\footnote{We perform the following heuristic confidence test to judge whether a decision can be made. We divide the difference between the two estimates by the approximate standard deviation of that difference, heuristically calculated by $\sqrt{\frac{\hat{\sigma}_1^2}{N_1} + \frac{\hat{\sigma}_2^2}{N_2}}$, where $\hat{\sigma}_i^2$ is the variance of the association values for the classes in the tree cut output for the $head$ and $prep$ in question, and $N_i$ is the size of the corresponding sub-sample. (The test is simpler for MDL.) } for judging whether to make a decision or not, and then varying the threshold confidence level as a parameter. It can be seen that overall Assoc enjoys a higher coverage than the other two methods, since its accuracy does not drop nearly as sharply as that of the other two methods as the required confidence level approaches zero. Note that ultimately what matters the most is the performance at the `break-even' point, namely the point at which the accuracy equals the coverage, since it achieves the optimal accuracy overall. It is quite clear from these curves that Assoc will win out there. The fact that Assoc appears to do better than MDL confirms our intuition that the association norm is better suited for the purpose of disambiguation than the conditional probability. The fact that Assoc outperforms SA, on the other hand, confirms that our {\em estimation method} for the association norm based on MDL is not only theoretically sound but excels in practice, as SA is a heuristic method based on essentially the same notion of association norm. \section{Concluding Remarks} We have proposed a new method of learning the `association norm' $A(x,y) = p(x,y)/p(x) p(y)$ between two discrete random variables. We applied our method to the important problem of learning word association norms from large corpus data, using the class of `tree cut pair models' as the knowledge representation language. A syntactic disambiguation experiment conducted using the acquired knowledge shows that our method improves upon other methods known in the literature for the same task. In the future, we hope to demonstrate that the proposed method can be used in practice, by testing it on even larger corpus data. \vspace*{-0.2cm} \section*{Acknowledgement} \vspace*{-0.1cm} We thank Ms.\ Y.\ Yamaguchi of NIS for her programming efforts, and Mr.\ K.\ Nakamura and Mr.\ T.\ Fujita of NEC Corporation for their encouragement.
\section{Introduction} In the two decades since the solution of the geometrization conjecture due to Perelman \cite{per01, per02, per03}, there has been increasing interest in the study of Riemannian manifolds endowed with metrics satisfying geometric structural equations, possibly involving curvatures and some globally defined smooth functions called potential functions. Typical examples are gradient Ricci solitons, which arise as self-similar solutions of the Ricci flow (cf. \cite{cao}, \cite{e-l-m}), the critical point equation, which arises as the Euler-Lagrange equation of the total scalar curvature functional restricted to the metrics of constant scalar curvature with unit volume on a compact smooth manifold (cf. \cite{Be}, p.128), and the vacuum static equations studied in mathematical physics and general relativity (cf. \cite{h-e}). In this paper we consider an $n$-dimensional smooth Riemannian manifold $(M,g)$ with $n\geq 3$ which admits smooth functions $f$ and $h$ satisfying the equation \begin{eqnarray} f {\rm Ric}= Ddf +h g,\label{eq1} \end{eqnarray} which we call the Einstein-type equation. Here, ${\rm Ric}$ is the Ricci curvature of $(M, g)$ and $Ddf$ denotes the Hessian of $f$. We say that the quadruple $(M,g, f, h)$ is an Einstein-type manifold if $(M, g)$ is a smooth Riemannian manifold and the pair $(f,h)$ satisfies the Einstein-type equation (\ref{eq1}). The notion of Einstein-type equations has been treated in several papers. For example, Qing and Yuan \cite{QY} considered Riemannian manifolds satisfying an Einstein-type equation, and obtained rigidity results on the critical point equation and vacuum static spaces under complete divergence-freeness of the Cotton tensor. Catino et al. \cite{c-m-m-r} introduced a more general Einstein-type equation and classified it under the Bach-flat condition. Recently, Leandro \cite{lea} also studied the Einstein-type equation, and showed harmonicity of the Weyl curvature tensor under complete divergence-freeness of the Weyl curvature and the zero radial Weyl curvature condition. \vskip .75pc \noindent {\bf Notations: } Hereafter, for convenience and simplicity, we denote the Ricci curvature ${\rm Ric}$ just by $r$ if there is no ambiguity. We also denote by $s$ the scalar curvature of $(M, g)$ and, if necessary, we use the notation $\langle \,\, ,\,\, \rangle$ for the metric $g$ or the inner product induced by $g$ on tensor spaces. \vskip .75pc First of all, by taking the trace of (\ref{eq1}), we trivially obtain \begin{eqnarray} \Delta f=sf-nh. \label{eq7227} \end{eqnarray} If $f \equiv 0$, then $h$ must be zero and the Einstein-type equation becomes trivial, carrying no information. Thus, we assume that $f$ is not identically zero whenever we consider the Einstein-type equation. It is also easy to see that $g$ is Einstein if $f$ is constant, and so, in particular, space forms are Einstein-type manifolds. We sometimes say that an Einstein-type manifold $(M, g,f,h)$ is {\em trivial} if $f$ is a non-zero constant. As mentioned above, some well-known structural equations are directly related to Einstein-type manifolds. For example, if $h=0$, the equation reduces to the static vacuum Einstein equation (cf. \cite{and, h-c-y}). If $h=\frac s{n-1}f$, then we have $s_g'^*(f)=0$, the vacuum static equation (cf. \cite{amb, f-m, kob, laf}), where $s_g'^*$ is the $L^2$-adjoint operator of the linearization $s_g'$ of the scalar curvature, given by $$ s_g'^*(f)= -(\Delta f)g +Ddf-fr.
$$ If $h=\frac s{n-1}f+\frac {\kappa}{n-1}$ for some constant $\kappa$, we have $s_g'^*(f)=\kappa g$, the $V$-static equation. Also, if $h$ is constant with $s=0$, then $(M,g,f)$ satisfies the $V$-static equation \cite{c-e-m}, since $$ s_g'^*(f)= (n-1)h g.$$ If $h=\frac s{n-1}f-\frac s{n(n-1)}$ with $f=1+\varphi$, we have $s_g'^*(f)= {\mathring {\rm Ric}}$, called the critical point equation, where ${\mathring {\rm Ric}}$ denotes the traceless Ricci tensor. Finally, if $h=\frac {s-\rho-\mu}{n-1}f$, we have the static perfect fluid equation \cite{k-o}. By considering $-f$ instead of $f$, we may always assume that $h \ge 0$ when $h$ is constant. In Section 2, we see that if the scalar curvature vanishes, then the function $h$ must be constant, and conversely, if both $h$ and the scalar curvature are constant and $f$ is not constant, then the scalar curvature must vanish. In fact, we see that if both $h$ and the scalar curvature $s$ are constants, then either $(M, g)$ is Einstein or $s =0$. Moreover, when the function $h$ vanishes, the Einstein-type equation becomes, in fact, a static vacuum Einstein equation (cf. \cite{wal}), with zero scalar curvature if $M$ is compact. Our first result is a rigidity theorem for the case $h=0$. \begin{thm} \label{thm1} Let $(M,g,f)$ be an $n$-dimensional compact Einstein-type manifold satisfying \begin{eqnarray} fr = Ddf.\label{eqn2022-3-5-1} \end{eqnarray} Then, $(M,g)$ is Ricci flat with constant $f$. \end{thm} We would like to mention a remark on noncompact manifolds satisfying (\ref{eqn2022-3-5-1}). In addition to (\ref{eqn2022-3-5-1}), if we assume $\Delta f = 0$, Theorem~\ref{thm1} is still true for a complete noncompact Riemannian manifold $(M, g)$ (cf. \cite{lic, and0, and}). Of course, if we do not assume $\Delta f = 0$, Theorem~\ref{thm1} does not hold anymore (see Section 2). In fact, for non-compact Riemannian manifolds which are not complete, static vacuum Einstein manifolds exhibit much more complicated geometric structures (cf. \cite{and}). In the case when a compact Einstein-type manifold has positive scalar curvature, by applying the maximum principle, we can show the following gap-type theorem. \begin{thm}\label{thm4} Let $(M,g,f,h)$ be a compact Einstein-type manifold with positive scalar curvature and constant $h$. If $ {\min_M s} \geq nh$, then $(M,g)$ is Einstein. \end{thm} When the Ricci curvature is nonpositive, it is easy to see from the Bochner-Weitzenb\"ock formula that the maximum principle holds for the function $|\nabla f|^2$. In particular, we have the following result. \begin{thm} \label{thm930} Let $(M^n,g,f, h)$ be a complete noncompact Einstein-type manifold with nonpositive Ricci curvature satisfying (\ref{eq1}) with constant $h$. If $f$ satisfies $$ \int_{M}|\nabla f|^2 < \infty, $$ then $f$ is constant and $(M, g)$ is Einstein. \end{thm} As an immediate consequence, we have the following result. \begin{cor}\label{cor930} Let $(M^n,g,f,h)$ be a compact Einstein-type manifold with nonpositive Ricci curvature satisfying (\ref{eq1}) with constant $h$. Then $(M, g)$ is Einstein with constant $f$. \end{cor} Next, we consider the rigidity question for Einstein-type manifolds with a locally conformally flat structure. Since an Einstein manifold having a locally conformally flat structure has constant sectional curvature, it is natural to consider Einstein-type manifolds with locally conformally flat metrics. When an Einstein-type manifold $(M,g, f, h)$ is locally conformally flat, we have the following result.
\begin{thm}\label{thm129} Let $(M,g,f,h)$ be a locally conformally flat Einstein-type manifold satisfying (\ref{eq1}) with constant $h$ and constant scalar curvature. If $f$ is a proper map, then $(M, g)$ is Einstein. \end{thm} For an Einstein-type manifold satisfying the conditions in Theorem~\ref{thm129}, we can show that, around any regular point of $f$, the manifold is locally a warped product of an interval and an $(n-1)$-dimensional space form. Related to conformally flat Einstein-type manifolds, we would like to mention a result due to Kobayashi and Obata in \cite{k-o}. They proved that if a warped product manifold $\widetilde M = {\Bbb R} \times M$ is locally conformally flat and the base manifold $(M, g)$ satisfies the Einstein-type equation with a positive potential function, then $(M, g)$ is of constant curvature. We say that an Einstein-type Riemannian manifold $(M, g)$ satisfying (\ref{eq1}) has radially flat Weyl curvature if $\tilde{i}_{\nabla f}{\mathcal W}=0$. Here $\tilde{i}_{\nabla f}$ denotes the interior product in the last component, defined by $$ \tilde{i}_{\nabla f}{\mathcal W}(X, Y, Z) = {\mathcal W}(X, Y, Z, \nabla f). $$ Note that if the dimension of $M$ is four, an Einstein-type manifold having radially flat Weyl curvature is locally conformally flat (see the proof of Lemma 4.3 in \cite{cc}). Thus, as an immediate consequence of the above theorem, we have the following result. \begin{cor} Let $(M,g,f,h)$ be a $4$-dimensional Einstein-type manifold with radially flat Weyl curvature satisfying (\ref{eq1}) with constant $h$ and constant scalar curvature. If $f$ is a proper map, then $(M, g)$ is Einstein. \end{cor} Since $h$ must be constant if the scalar curvature vanishes for an Einstein-type manifold $(M, g,f,h)$ satisfying (\ref{eq1}), we have the following result. \begin{cor} Let $(M,g,f,h)$ be a locally conformally flat Einstein-type manifold with vanishing scalar curvature. If $f$ is a proper map, then $(M, g)$ is flat. \end{cor} The paper is organized as follows. In Section 2, we derive basic facts on Einstein-type manifolds in general and prove Theorem~\ref{thm1}. In Section 3, we handle Einstein-type manifolds with positive scalar curvature, as well as those with constant $h$ and nonpositive Ricci curvature, and prove Theorem~\ref{thm4} and Theorem~\ref{thm930}. In Section 4, we study Einstein-type manifolds with zero Cotton tensor (see Section 4 for the definition) or locally conformally flat structure. \section{Basic Properties} In this section, we shall derive basic properties of the scalar curvature of Einstein-type manifolds satisfying (\ref{eq1}). First of all, since $\delta Ddf = -r(\nabla f, \cdot) - d\Delta f$ and $\Delta f = fs - nh$, by taking the divergence $\delta$ of (\ref{eq1}), we obtain \begin{eqnarray} dh= \frac 1{2(n-1)}\left( fds +2sdf\right). \label{eq83} \end{eqnarray} This identity shows that if $s = 0$, then $h$ must be constant. The converse also holds if both $s$ and $h$ are constants and $f$ is not constant. From (\ref{eq83}), it is easy to see that the following equalities hold in general. \begin{lem} \label{cor722} On an Einstein-type manifold $(M,g,f,h)$ we have $$Ddh=\frac 1{2(n-1)}df\otimes ds +\frac 1{2(n-1)}f Dds +\frac 1{n-1}ds\otimes df +\frac s{n-1}(fr -hg). $$ In particular, $$\Delta h = \frac 3{2(n-1)}\langle \nabla s, \nabla f\rangle +\frac 1{2(n-1)}f\Delta s +\frac {s^2f}{n-1} -\frac {nhs}{n-1}. $$ \end{lem} A direct observation from (\ref{eq83}) is the following.
\begin{lem}\label{lem2022-1-26-1} Let $(M^n, g,f, h)$ be an Einstein-type manifold satisfying (\ref{eq1}). If $h$ is constant, then the function $f^2 s$ must be constant. \end{lem} \begin{proof} From (\ref{eq83}), we have $$ d(f^2 s) = f(fds + 2sdf) = 0. $$ \end{proof} \begin{prop}\label{prop2021-11-11-32} Let $(M, g, f)$ be a compact Einstein-type manifold satisfying (\ref{eq1}) for a constant $h$. Then $$ \int_M f(nh - fs) dv_g \ge 0. $$ The equality holds if and only if $(M, g)$ is Einstein. \end{prop} \begin{proof} Multiplying both sides of $\Delta f = fs - nh$ by $f$ and integrating over $M$, we obtain $$ \int_M f(nh - fs) dv_g = - \int_M f \Delta f dv_g = \int_M |\nabla f|^2 dv_g \ge0. $$ The equality holds if and only if $f$ is constant, and so $(M, g)$ is Einstein. \end{proof} In the rest of this section, we discuss the rigidity of Einstein-type manifolds when $h=0$ and prove Theorem~\ref{thm1}. To do this, we need the following lemma, which shows that a compact Riemannian manifold satisfying the static vacuum Einstein equation must have nonnegative scalar curvature. \begin{lem}\label{lem83} Let $(M,g,f)$ be an $n$-dimensional compact Einstein-type manifold satisfying $fr = Ddf$. Then the scalar curvature is nonnegative. \end{lem} \begin{proof} First, we claim that there are no critical points of $f$ on $f^{-1}(0)$ unless $f^{-1}(0)$ is empty. Suppose that $p\in f^{-1}(0)$ is a critical point of $f$. Let $\gamma$ be a normal geodesic starting from $p$ moving toward the inside of $M^+=\{x\in M\, \vert\, f(x) >0\}$. Then, by (\ref{eq1}), the function $\varphi(t)=f\circ \gamma (t)$ satisfies $$ \varphi''(t)= Ddf(\gamma'(t), \gamma'(t))=\varphi(t)r(\gamma'(t), \gamma'(t)). $$ Since $\varphi(0)=0$ and $\varphi'(0)=df_p(\gamma'(0))=0$, by the uniqueness of solutions to this ODE, $\varphi$ vanishes identically, implying that $\gamma(t)$ stays in $f^{-1}(0)$, which is a contradiction. Since $ s\, df=0$ on the set $f^{-1}(0)$ by (\ref{eq83}), and there are no critical points on $f^{-1}(0)$, the scalar curvature vanishes on $f^{-1}(0)$. Now, by (\ref{eq83}) again, we have $$ \frac 32 \langle \nabla s, \nabla f\rangle +s\Delta f +\frac f2 \Delta s=0 $$ with $\Delta f =sf$, implying that \begin{eqnarray} \Delta s -\frac {6|\nabla f|^2}{f^2}s=-2s^2\leq 0 \label{eqn2022-1-26-2} \end{eqnarray} on the set where $f \ne 0$. For a sufficiently small positive real number $\epsilon>0$, let $$ M^\epsilon =\{x\in M\,:\, f(x) > \epsilon\}. $$ Applying the maximum principle (cf. \cite{g-t}) to (\ref{eqn2022-1-26-2}) on the set $M^\epsilon$, we have $$ \inf_{M^\epsilon} \, s \geq \inf_{\partial M^\epsilon}\, s^{-}, $$ where $s^{-} = \min\{s, 0\}$. By letting $\epsilon \to 0$, we have $$ \inf_{M^0} \, s \geq \inf_{\partial M^0}\, s^{-} =0, $$ where $M^0=\{x\in M\, \vert\, f(x) >0\}$ with $\partial M^0=f^{-1}(0)$. The equality on the right-hand side follows from the vanishing of $s$ on $f^{-1}(0)$ established above. In a similar way, we may argue that $$ \inf_{M_0} \, s \geq \inf_{\partial M_0}\, s^{-} =0, $$ where $M_0=\{x\in M\, \vert\, f(x) <0\}$ with $\partial M_0=f^{-1}(0)$. As a result, we may conclude that $s\geq 0$ on $M$. Finally, assume that $f^{-1}(0)$ is empty. Letting $\min_M s = s(x_0)$, we have $\Delta s(x_0) = 0$ by Lemma~\ref{lem2022-1-26-1} and (\ref{eqn2022-1-26-2}), and so $s(x_0) = 0$ from (\ref{eqn2022-1-26-2}) again. \end{proof} Now, we are ready to prove Theorem~\ref{thm1}.
\vspace{.12in} \noindent {\bf Proof of Theorem~\ref{thm1}.} By Lemma~\ref{lem83}, we have $s\geq 0$ on $M$, and so, from $\Delta f =sf$, we have $$\Delta f\geq 0$$ on the set $M^0$. Since $f \not\equiv 0$, from the maximum principle, we can see that there is only one case: $f$ is a nonzero constant and $s = 0$ on $M$. Thus $(M, g)$ is Ricci-flat. \hfill $\Box$ \begin{exam} {\rm Let $M^3 = (a, b) \times {\Bbb R}^2$ be a smooth $3$-manifold with the metric \begin{eqnarray} g = \frac{dt^2}{f^2} + t^2(d\theta^2+\sin^2 \theta d\varphi^2 ),\quad f: (a,b) \to {\Bbb R}^+ \,\,\,{\rm smooth.}\label{eqn2021-1-7-1} \end{eqnarray} Recall that $d\theta^2+\sin^2 \theta d\varphi^2$ is the standard spherical metric on ${\Bbb S}^2$ so that $dt^2 +t^2(d\theta^2+\sin^2 \theta d\varphi^2)$ is just the flat metric on ${\Bbb R}^3$. A standard frame is $$e_1 = f\frac{\partial}{\partial t}, \quad e_2 = \frac{1}{t}\frac{\partial}{\partial \theta}, \quad e_3 = \frac{1}{t\sin\theta}\frac{\partial}{\partial \varphi}$$ and the corresponding coframe is $$ \omega^1 = \frac{1}{f}dt, \quad \omega^2 = t d\theta,\quad \omega^3 = t\sin \theta d\varphi. $$ Using connection form and curvature form from these, it is easy to compute the Ricci curvature as follows: denoting $R_{ij} = {\rm Ric}(e_i, e_j)$, \begin{eqnarray} R_{11} = - \frac{2f'f}{t}, \quad R_{22} = - \frac{tf'f +f^2 -1}{t^2},\quad R_{33} = - \frac{tf'f +f^2 -1}{t^2} = R_{22}\label{eqn2022-3-5-2} \end{eqnarray} and $$ R_{12} = 0 = R_{13} = R_{23}. $$ \vspace{.15in} \noindent Here we consider two cases: \begin{itemize} \item[(1)] $\displaystyle{f(t) = \sqrt{1-\frac{2m}{t}}}$ \,\,\, and \,\,\, $(a,b) = (2m, \infty)$ \item[(2)] $\displaystyle{f(r) = \sqrt{1-\frac{2mt^2}{R^3}}, \,\, R>0}$ \,\,\, and \,\,\, $\displaystyle{(a,b) = \left(-\sqrt{\frac{R^3}{2m}}, \,\, \sqrt{\frac{R^3}{2m}}\, \, \right)}$ \end{itemize} \vspace{.152in} \noindent {\bf Case (1)}: $\displaystyle{f(r) = \sqrt{1-\frac{2m}{t}}}$ \,\,\, and \,\,\, $(a,b) = (2m, \infty)$ \vspace{.15in} In this case, we have $$ f' = \frac{1}{2f}\cdot \left(- \frac{2m}{t}\right)' = \frac{m}{t^2f},\quad f'f = \frac{m}{t^2}. $$ So, by (\ref{eqn2022-3-5-2}), we obtain $$ R_{11} = - \frac{2m}{t^3}, \quad R_{22} = R_{33} = \frac{m}{t^3} $$ and $$ s = R_{11} + R_{22} + R_{33} = 0. $$ \vspace{.15in} \noindent {\bf Case (2)}: \,\, $\displaystyle{f(t) = \sqrt{1-\frac{2mt^2}{R^3}}}$\,\,\, and \,\,\, $\displaystyle{(a,b) = \left(-\sqrt{\frac{R^3}{2m}}, \,\, \sqrt{\frac{R^3}{2m}}\, \, \right)}$ \vspace{.15in} In this case, we have $\displaystyle{\frac{f^2 - 1}{t^2} = - \frac{2m}{R^3}}$ and \begin{eqnarray} f' =- \frac{2mt}{R^3 f}, \quad \frac{f'f}{t} = - \frac{2m}{R^3}. \label{eqn2021-1-7-6} \end{eqnarray} So, by (\ref{eqn2022-3-5-2}) $$ R_{11} = \frac{4m}{R^3} = R_{22} = R_{33} $$ and $$ s = \frac{12m}{R^3}. $$ In particular, this is an Einstein manifold. \vspace{.15in} \noindent Now, we compute the Laplacian and Hessian of $f$. A computation shows that \begin{eqnarray*} \Delta = f^2 \frac{\partial^2}{\partial t^2} + \left(\frac{2f^2}{t} + f'f\right) \frac{\partial}{\partial t} + \frac{1}{t^2} \frac{\partial^2}{\partial \theta^2} + \frac{\cos \theta}{t^2 \sin \theta}\frac{\partial}{\partial \theta} + \frac{1}{t^2\sin^2\theta}\frac{\partial^2}{\partial \varphi^2}. \end{eqnarray*} For the first case $\displaystyle{f(t) = \sqrt{1-\frac{2m}{t}}}$, we have $\Delta f = 0$ and the Hessian of $f$ is given by $$ Ddf(e_1, e_1) = - \frac{2m}{t^3}f, \quad Ddf(e_2, e_2) = \frac{m}{t^3}f, \quad Ddf(e_3, e_3) = \frac{m}{t^3}f. 
$$ Thus, we have $$ f{\rm Ric} = Ddf. $$ For the second case $\displaystyle{f(t) = \sqrt{1-\frac{2mt^2}{R^3}}}$, we have $$ \Delta f = - \frac{6m}{R^3} f = - \frac{s}{2} f $$ and $$ \frac13 \left(sf - \Delta f\right) = \frac{4m}{R^3}f + \frac{2m}{R^3}f = \frac{6m}{R^3} f. $$ The Hessian of $f$ is given as follows. $$ Ddf(e_1, e_1) = - \frac{2m}{R^3}f, \quad Ddf(e_2, e_2) = - \frac{2m}{R^3}f, \quad Ddf(e_3, e_3) = - \frac{2m}{R^3}f. $$ Hence \begin{eqnarray*} f {\rm Ric} = Ddf + \frac13 (sf - \Delta f)g \end{eqnarray*} \hfill $\Box$} \end{exam} \section{Einstein-type manifolds with positive scalar curvature} In this section, we consider Einstein-type manifolds with positive constant scalar curvature. First, the following property shows that if $(M, g, f, h)$ is a compact Einstein-type manifold satisfying (\ref{eq1}) with nonzero constant $h$, and $(M, g)$ has positive scalar curvature, then we have $h f >0$ on $M$. \begin{prop}\label{eqn2021-11-1-1} Let $(M, g, f, h)$ be a compact Einstein-type manifold with positive scalar curvature satisfying (\ref{eq1}) for a nonzero constant $h$. Then we have $h f >0$ on the whole $M$, and so $$ h \int_M f dv_g >0. $$ \end{prop} \begin{proof} If $f$ is a nonzero constant, then we have $nhf = f^2 s >0$. Now, we assume that $f$ is not constant. Let $$ \min_M f = f(x_0)\quad \mbox{and}\quad \max_M f = f(x_1). $$ If $ f(x_1) >0$, then $\Delta f = fs - nh \le 0$ at the point $x_1$, and so we have $h>0$. From $\Delta f(x_0) = fs - nh \ge 0$ at the point $x_0$, we have $$ f>0 $$ on the whole $M$. Similarly, if $ f(x_0) <0$, then $\Delta f = fs - nh \ge 0$ at the point $x_0$, and so we have $h <0$. From $\Delta f(x_1) = fs - nh \le 0$ at the point $x_1$, we have $$ f<0 $$ on the whole $M$. Finally, it is easy to see that $h<0$ if $f \le 0$, and $h>0$ if $f \ge0$. \end{proof} A similar proof as in Proposition~\ref{prop2021-11-11-32} shows the following. \begin{prop}\label{prop2021-11-11-33} Let $(M, g, f)$ be a compact Einstein-type manifold with positive scalar curvature satisfying (\ref{eq1}) for a positive constant $h$. Then $$ \int_M s(nh - fs) dv_g \le 0. $$ The equality holds if and only if $(M, g)$ is Einstein. \end{prop} \begin{proof} Multiplying both side of $\Delta f = fs - nh $ by s and integrating it over $M$, we have $$ \int_M s(nh - fs) dv_g = -\int_M s \Delta f = \int_M \langle \nabla s, \nabla f\rangle = -2\int_M \frac{s}{f}|\nabla f|^2 dv_g. $$ In the last equality, we used the identity $fds = - 2s df$. Since $h >0$ and $h f>0$ by Proposition~\ref{eqn2021-11-1-1}, the proof is complete. Finally, the equality holds if and only if $f$ is constant and so $(M, g)$ is Einstein. \end{proof} For a nonnegative integer $m$, let us define $\varphi_m = f^m s$ so that $\varphi_2$ is constant by Lemma~\ref{lem2022-1-26-1}. When $(M, g)$ has positive scalar curvature, we can see that, for $m=0, 1$, the function $\varphi_m$ attains its maximum and minimum at the points $x_0$ and $x_1$, respectively, where $f(x_0) = \min_M f$, and $f(x_1) = \max_M f$. For $m \ge 3$, the function $\varphi_m$ attains its maximum and minimum at the points $x_1$ and $x_0$, respectively. In fact, note that $$ d\varphi_m = m\varphi_{m-1}df + f^m ds $$ and, from $fds = - 2s df$, we have $$ d\varphi_m = (m-2) f^{m-1} s df $$ and \begin{eqnarray*} Dd\varphi_m = (m-2)(m-3) f^{m-2}s df \otimes df + (m-2) f^{m-1} s Ddf.\label{eqn2021-11-11-30} \end{eqnarray*} We are ready to prove one of our main result. 
\begin{thm}\label{prop2021-11-11-33} Let $(M, g, f)$ be a compact Einstein-type manifold with positive scalar curvature satisfying (\ref{eq1}) for constant $h$. If $\min_M s \ge nh$, then $(M,g)$ is Einstein. \end{thm} \begin{proof} First of all, note that we must have $h \ge 0$. In fact, if $h <0$, then $f<0$ on $M$ by Proposition~\ref{eqn2021-11-1-1}. So, letting $\min_M f = f(x_0)$, we have $$ 0 \le \Delta f(x_0) = f(x_0) s(x_0) - nh <0, $$ which is a contradiction. Since it is reduced to Theorem 1.1 when $h=0$, we may assume that $h>0$ and so the potential function $f$ is positive on $M$ by Proposition~\ref{eqn2021-11-1-1} again. Since $$ \Delta f = fs - n h \ge (f-1)nh, $$ considering the maximum point $x_1$ of $f$, we obtain $$ f(x_1) \le 1,\quad \mbox{i.e.,}\quad f \le 1 \,\,\,\, \mbox{on $M$}. $$ First, assume that $$ \max_M f = f(x_1) = 1. $$ Since the function $fs$ attains its minimum at the point $x_1$, where $f(x_1) = \max_M f$, we have $$ 0 \ge \Delta f(x_1) = (fs)(x_1) - n h. $$ Since this implies $s(x_1) \le n h \le s(x_1)$, we have $$ s(x_1) = n h. $$ Now since $f^2 s = k$, constant, we have $$ k = f(x_1)^2 s(x_1) = n h, $$ and so $$ fs = \frac{nh}{f}\ge nh. $$ Therefore, $f$ must be constant since it is a subharmonic function. Now, assume that $\max_M f = f(x_1) <1$. Then it is easy to compute that $$ \Delta \ln(1-f) = \frac{|\nabla f|^2}{(1-f)^2} - \frac{fs-nh}{1-f}, $$ which can be written in the following form \begin{eqnarray} \Delta \ln(1-f) - |\nabla \ln(1-f)|^2 - a \ln(1-f) = - a \ln(1-f) - \frac{fs}{1-f} + \frac{nh}{1-f}, \label{eqn2021-11-12-10} \end{eqnarray} where $a$ is chosen to be a positive constant so that \begin{eqnarray} -a(1-f(x_0)\ln (1-f(x_0)) - (fs)(x_0) + n h >0.\label{eqn2021-11-13-1} \end{eqnarray} Since $\nabla f(\ln(1-f)) = -\frac{|\nabla f|^2}{1-f)}$ and $a>0$, the function $-a\ln(1-f)$ is non-decreasing in the $\nabla f$-direction. Also, since $$ \nabla f\left( \frac{nh}{1-f}\right) = \frac{nh|\nabla f|^2}{(1-f)^2}\quad\mbox{and} \quad \nabla f\left(\frac{f}{1-f}\right) = \frac{|\nabla f|^2}{1-f} + \frac{f|\nabla f|^2}{(1-f)^2}, $$ both two functions $ \frac{nh}{1-f}$ and $ \frac{f}{1-f}$ are non-decreasing in the $\nabla f$-direction. Since $s$ is non-increasing in the $\nabla f$-direction, the function $-\frac{fs}{1-f}$ is non-decreasing in the $\nabla f$-direction. Thus in view of (\ref{eqn2021-11-13-1}), we can see that the right-hand side of (\ref{eqn2021-11-12-10}) is positive. Applying the maximum principle to (\ref{eqn2021-11-12-10}), we can conclude that $f$ must be constant. \end{proof} \begin{cor} Let $(M, g, f)$ be a compact Einstein-type manifold with positive scalar curvature satisfying (\ref{eq1}) for a non-zero constant $h$. If $\min_M s \ge nh$, then, up to finite cover and scaling, $(M, g)$ is isometric to a standard sphere ${\Bbb S}^n$. \end{cor} \begin{proof} Since $(M, g)$ is Einstein, it has positive Ricci curvature. By Myers' theorem, the fundamental group $\pi_1(M)$ of $M$ is finite. Thus, up to finite cover and scaling, $(M, g)$ is isometric to a standard sphere ${\Bbb S}^n$. \end{proof} \section{Einstein-type manifolds with nonpositive Ricci curvature} In this section, we will handle Einstein-type manifolds with nonpositive Ricci curvature. First of all, we show that for such an Einstein-type manifold with nonpositive Ricci curvature, the square norm of the gradient of potential function satisfies the maximum principle. 
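The main tool is the Bochner-Weitzenb\"ock formula applied to the potential function,
$$
\frac 12 \Delta |\nabla f|^2 = |Ddf|^2 + \langle \nabla \Delta f, \nabla f\rangle + r(\nabla f, \nabla f),
$$
combined with the identity $f\nabla s = -2s\nabla f$, which follows from (\ref{eq83}) when $h$ is constant.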
\begin{lem} \label{lem930} Let $(M^n,g,f, h)$ be an Einstein-type manifold with nonpositive Ricci curvature satisfying (\ref{eq1}) for a constant $h$. Then $|\nabla f|^2$ cannot attain its maximum in the interior. \end{lem} \begin{proof} By the assumption on the Ricci curvature, it is obvious that $s-r(N,N)\leq 0,$ where $N=\nabla f/|\nabla f|$. Since $\Delta f = fs$ and $f \nabla s = - 2 s \nabla f$, we have $$ \langle d\Delta f, \nabla f\rangle = f \langle \nabla s, \nabla f \rangle +s|\nabla f|^2=-s|\nabla f|^2. $$ So, from the Bochner-Weitzenb\"ock formula, we obtain $$ \frac 12 \Delta |\nabla f|^2 +(s-r(N,N))|\nabla f|^2=|Ddf|^2\geq 0. $$ Applying the maximum principle, we have the conclusion. \end{proof} Now, we present the proof of Theorem~\ref{thm930}. \begin{thm} \label{thm930-1} Let $(M^n,g,f, h)$ be a complete noncompact Einstein-type manifold with nonpositive Ricci curvature satisfying (\ref{eq1}) with constant $h$. If $f$ satisfies $$ \int_{M}|\nabla f|^2 < \infty, $$ then $f$ is constant and $(M, g)$ is Einstein. \end{thm} \begin{proof} For a cut-off function $\varphi$ (which will be determined later), we have \begin{eqnarray*} \frac 12 \int_M \varphi^2 \Delta |\nabla f|^2&=&-2\int_M\varphi |\nabla f | \langle \nabla \varphi, \nabla |\nabla f|\rangle\\ &\leq & \int_M |\nabla f|^2 |\nabla \varphi|^2 +\varphi^2 |\nabla |\nabla f||^2. \end{eqnarray*} Here we omit the volume form $dv_g$ determined by the metric $g$. From the Bochner formula derived in the proof of Lemma~\ref{lem930} with $N = \nabla f/|\nabla f|$, we have $$ \frac 12\int_M \varphi^2 \Delta |\nabla f|^2= \int_M \varphi^2 |Ddf|^2-(s-r(N,N))|\nabla f|^2 \varphi^2 . $$ By combining these, we obtain \begin{eqnarray} \int_M \left(|Ddf|^2 - |\nabla |\nabla f||^2\right)\varphi^2 - \int_M (s-r(N,N))|\nabla f|^2 \varphi^2 \leq \int_M |\nabla f|^2 |\nabla \varphi|^2. \label{eqn2022-1-26-3} \end{eqnarray} Now, take a geodesic ball $B_p(r)$ for some fixed point $p\in M$ and choose a cut-off function $\varphi$ so that $$ {\rm supp} \, \varphi \subset B_p(r), \quad \varphi\vert_{B_p(\frac r2)}\equiv 1 , \quad 0\leq \varphi \leq 1,\quad |\nabla \varphi|\leq \frac 1r. $$ Substituting this into (\ref{eqn2022-1-26-3}) and using the Kato's inequality, $ |\nabla |\nabla f||^2\leq |Ddf|^2$, we have $$ 0\leq \int_M \left(|Ddf|^2 - |\nabla |\nabla f||^2\right) -\int_{B_p(\frac r2)} (s-r(N,N))|\nabla f|^2 \leq \frac 4{r^2}\int_{B_p(r)} |\nabla f|^2 . $$ Letting $r\to \infty$, we obtain $$ |Ddf|^2 = |\nabla |\nabla f||^2\quad \mbox{and}\quad (s-r(N,N))|\nabla f|^2=0 $$ on $M$. Hence, in particular, we have $$ \frac 12 \Delta |\nabla f|^2 =|\nabla |\nabla f||^2.$$ From $$\frac 12 \Delta |\nabla f|^2= |\nabla f|\Delta |\nabla f|+|\nabla |\nabla f||^2,$$ we may conclude that $$|\nabla f|\Delta |\nabla f|=0.$$ Again, for a cut-off function $\varphi$, $$ 0= \int_M \varphi^2 |\nabla f|\Delta |\nabla f|= -2\int_M \varphi |\nabla f|\langle \nabla \varphi, \nabla |\nabla f|\rangle -\int_M \varphi ^2 |\nabla |\nabla f||^2.$$ By Young's inequality, we have $$\int_M \varphi^2 |\nabla |\nabla f||^2 \leq \frac 12 \int_M \varphi^2 |\nabla |\nabla f||^2 + 2\int_M |\nabla \varphi|^2 |\nabla f|^2,$$ implying that $$ \int_M \varphi^2 |\nabla |\nabla f||^2 \leq 4 \int_M |\nabla f|^2|\nabla \varphi|^2.$$ As shown above with the same cut-off function, the right-hand side tends to $0$. As a result, we have $$ |\nabla |\nabla f||^2=0$$ on $M$, implying that $Ddf\equiv 0$. Therefore, $g$ is Einstein with constant $f$. 
\end{proof} Before closing this section, we give an example of Einstein-type manifolds including space forms of nonnegative sectional curvature. \begin{exam}[warped product] {\rm Let $g = dt^2 + \varphi^2(t) g_0$ be a warped product metric on $M:= {\Bbb S}^1 \times_\varphi {\Bbb S}^{n-1}$ or $M:= {\Bbb R} \times_\varphi {\Bbb S}^{n-1}$, where $g_0$ is the standard round metric on ${\Bbb S}^{n-1}$. Let $f = f(t)$ is a function defined on ${\Bbb S}^1$ or ${\Bbb R}$. It is easy to compute (cf. \cite{Be}) that $$ {\rm Ric}_g = -(n-1)\frac{\varphi''}{\varphi}dt^2 - \left[\varphi \varphi''+(n-2)\varphi'^2\right]g_0 + {\rm Ric}_{g_0}. $$ It is also easy to see that the Hessian of $f$ with respect to the metric $g$ is given by $$ Ddf(\partial_t, \partial_t)=f'',\quad Ddf_{T{\Bbb S}^{n-1}} =\left(\frac {\varphi'}{\varphi}\right)f'g_{_{T{\Bbb S}^{n-1}}}, \quad Ddf(\partial_t, X)=0 $$ for $X$ tangent to $T{\Bbb S}^{n-1}$. Here, $\partial_t = \frac{\partial}{\partial t}$ and `` $^{\prime}$ '' denotes the derivative taken with respect to $t\in {\Bbb S}$ or $t \in {\Bbb R}$. Since $$ g_{_{T{\Bbb S}^{n-1}}} = \varphi^2 g_0\quad \mbox{and}\quad {\rm Ric}_{g_0} = (n-2)g_0, $$ it is easy to see that (\ref{eq1}) with $f$ and constant $h$ is equivalent to the following: \begin{eqnarray} \left\{\begin{array}{ll} f'' + (n-1)\frac{f\varphi''}{\varphi} + h = 0\\ f\left[-\varphi \varphi'' + (n-2)(1-\varphi'^2)\right] - \varphi \varphi' f' - h \varphi^2 = 0. \end{array}\right.\label{eqn2022-1-26-6-1} \end{eqnarray} \begin{itemize} \item[(I)] $\varphi = c$ (constant) This case does not happen, which means there are no solutions. In fact, if $\varphi = c$ is constant, the first and second equations in (\ref{eqn2022-1-26-6-1}) are reduced to $f''+h =0$ and $$ (n-2)f - hc^2 = 0, $$ respectively. So, $f$ must be vanishing, which is a contradiction. \item[(II)] $\varphi$ is non-constant When $f>0$ on $M$, the first equation in (\ref{eqn2022-1-26-6-1}) can be written as \begin{eqnarray} \frac{f''}{f} + \frac{h}{f} = - (n-1)\frac{\varphi''}{\varphi} =:\lambda.\label{eqn2022-1-26-5} \end{eqnarray} We assume that $\lambda$ is constant. \begin{itemize} \item[(i)] $\lambda >0$ Since the warping function $\varphi$ must satisfy $\varphi(0) = 0$ and $\varphi'(0) = 1$, the function $\varphi$ satisfying (\ref{eqn2022-1-26-5}) has of the form \begin{eqnarray} \varphi(t) = \sqrt{\frac{n-1}{\lambda}} \sin \sqrt{\frac{\lambda}{n-1}} t \label{eqn2022-1-26-7} \end{eqnarray} and \begin{eqnarray*} f'' - \lambda f + h = 0.\label{eqn2022-1-26-8} \end{eqnarray*} Note that $\varphi$ is only working on a compact manifold $M= {\Bbb S}^1 \times_\varphi {\Bbb S}^{n-1}$. Next, note that the particular solution to this is given by \begin{eqnarray} f(t) = \frac{h}{\lambda} \,\,\, \mbox{(constant)} \label{eqn2022-1-26-9} \end{eqnarray} and a general solution is $$ f(t) = ae^{\sqrt{\lambda}t} + \frac{h}{\lambda}. $$ In particular, in case of compact manifold $M= {\Bbb S}^1 \times_\varphi {\Bbb S}^{n-1}$, the function $f$ must be periodic, and so $a=0$. In conclusion, the compact manifold $M= {\Bbb S}^1 \times_\varphi {\Bbb S}^{n-1}$ with $\varphi$ and $f$ as in (\ref{eqn2022-1-26-7}) and (\ref{eqn2022-1-26-9}) is just an $n$-dimensional sphere ${\Bbb S}^n$ after a suitable scaling. \item[(ii)] $\lambda =0$ In this case, it is easy to see that $$ \varphi(t) = t\quad \mbox{and}\quad f(t) = \mbox{constant} $$ with $h = 0$ on ${\Bbb R}\times_\varphi {\Bbb S}^{n-1}$ is the only possible case, and this case is just the flat Euclidean space ${\Bbb R}^n$. 
\item[(iii)] $\lambda <0$ Letting $\mu = - \lambda >0$, as in (i) above, we have $$ \varphi(t) = \frac{1}{2}\sqrt{\frac{n-1}{\mu}} \left(e^{\sqrt{\frac{\mu}{n-1}}t} - e^{-\sqrt{\frac{\mu}{n-1}}t}\right) = \sqrt{\frac{n-1}{\mu}} \sinh \sqrt{\frac{\mu}{n-1}} t $$ and $$ f(t) = - \frac{h}{\mu} = \frac{h}{\lambda}. $$ Thus, we can see this case is just the hyperbolic manifold. \end{itemize} \end{itemize} } \end{exam} \section{Conformally flat Einstein-type manifolds} Recall that, for an Einstein-type manifold satisfying (\ref{eq1}), if both $s$ and $h$ are constants and $f$ is not constant, then the scalar curvature $s$ must be zero. In this section, we consider Einstein-type manifolds $(M, g, f, h)$ with zero scalar curvature satisfying (\ref{eq1}) with constant $h$ which $(M, g)$ is locally conformally flat. Let us begin with the definition of Cotton tensor of a Riemannian manifold. The Cotton tensor $C$ of a Riemannian manifold $(M, g)$ is defined by $$ C = d^D\left(r- \frac{s}{2(n-1)} g\right). $$ Note that for a symmetric $2$-tensor $\xi$, $d^D\xi$ is defined as $d^D \xi(X, Y, Z) = D_X\xi(Y, Z) - D_Y\xi(X, Z)$for any vector fields $X, Y, Z$. It is well-known (cf. \cite{Be}) that the Cotton tensor has a relation with the Weyl tensor ${\mathcal W}$ as follows $$ \mbox{div}\, {\mathcal W}=\frac {n-3}{n-2}C. $$ It is also well-known (cf. \cite{Be}) that if $\dim (M) = 3$, then $(M, g)$ is (locally) conformally flat if and only if the Cotton tensor $C$ is vanishing, and for $n \ge 4$, $(M, g)$ is (locally) conformally flat if and only if the Weyl tensor $\mathcal W$ is vanishing When an Einstein-type manifold $(M, g, f, h)$ satisfying (\ref{eq1}) with constant $h$ is (locally) conformally flat or has zero Cotton tensor which is a little weak condition, we can show that $r(\nabla f, X) = 0$ for any vector field $X$ which is orthogonal to $\nabla f$, and this implies that every geometric data on $M$ is constant along each level hypersurface given by $f$, and the metric $g$ can be written as a warped product on the critical free set of $f$. To do this, we introduce a $3$-tensor $T$ for Einstein-type manifolds defined as $$ T= \frac 1{(n-1)(n-2)}\, i_{\nabla f} r \wedge g -\frac{s}{(n-1)(n-2)}df \wedge g + \frac{1}{n-2} df\wedge r, $$ where $i_{\nabla f}$ denotes the interior product to the first component so that $i_{\nabla f}r(X) = r(\nabla f, X)$, and $df \wedge \xi$ is defined as $df\wedge \xi(X, Y, Z) = df(X)\xi(Y, Z) - df(Y)\xi(X, Z)$ for a symmetric $2$-tensor $\xi$ and vector fields $X, Y, Z$. \begin{lem}\label{lem2021-1-11-2} Let $(g,f,h)$ be a solution of $(\ref{eq1})$. Then $$ f\, C=\tilde{i}_{\nabla f}{\mathcal W}-(n-1)T. $$ Here, $\tilde{i}_X$ is the interior product to the final factor by $\tilde{i}_{\nabla f} {\mathcal W} (X,Y,Z)= {\mathcal W}(X,Y, Z, \nabla f)$ for vector fields $X, Y, Z$. \end{lem} \begin{proof} cf. \cite{hy}. \end{proof} For an Einstein-type manifold $(M, g, f, h)$, let us denote $N=\nabla f/|\nabla f|$ and $\alpha: =r(N,N)$. It is clear that $\alpha$ is only well-defined on the set $\nabla f\neq 0$. However, since $|\alpha| \le |r|$, the function $\alpha$ can be defined on the whole $M$ as a $C^0$-function. \begin{lem}\label{lem129} Let $(M,g,f,h)$ be a locally conformally flat Einstein-type manifold with constant $h$. Then we have $r(X,\nabla f)=0$ for $X$ orthogonal to $\nabla f$. In particular, the following holds. \begin{enumerate} \item[(1)] $|\nabla f|$ is constant on each level set of $f$. \item[(2)] $\alpha$ is constant on each level set of $f$. 
\item[(3)] Furthermore, if $s = 0$, then each level set of $f$ has constant sectional curvature. \end{enumerate} \end{lem} \begin{proof} By Lemma~\ref{lem2021-1-11-2} together with our assumptions, we have $T = 0$. Let $\{e_i\}_{i=1}^n$ be an orthonormal frame with $e_1=N$. Then, for $i\geq 2$, we have $$ 0=(n-2)T(\nabla f, e_i, \nabla f)= \frac {n-2}{n-1}|\nabla f|^2r(e_i,\nabla f), $$ which shows that $r(X,\nabla f)=0$ for $X$ orthogonal to $\nabla f$. \begin{itemize} \item[(1)] The property $ r(e_i, \nabla f)=0$ for $i\geq 2$ also implies that $|\nabla f|$ is constant on each level set of $f$ by (\ref{eq1}). \item[(2)] First of all, note that $D_NN=0$. In fact, we have $\langle D_NN, N\rangle =0$ trivially. Since \begin{eqnarray*} D_NN&=&N\left( \frac 1{|\nabla f|}\right)\nabla f+\frac 1{|\nabla f|}D_Ndf= -\frac 1{|\nabla f|}Ddf(N,N)N+\frac 1{|\nabla f|}D_Ndf\\ &=& -\frac 1{|\nabla f|} (f\alpha -h)N+\frac 1{|\nabla f|} \left[f r(N,\cdot)-h g(N, \cdot)\right], \end{eqnarray*} we have $ \langle D_NN, e_i\rangle =0$for $i\geq 2$. Next, since $\langle D_{e_i}N, N\rangle =0$ and $C=0$, we have, for $i\geq 2$, \begin{eqnarray*} e_i(\alpha) &=& D_{e_i}r(N,N) + 2 r(D_{e_i}N, N)\\ &=& D_N r(e_i, N)= - r(D_Ne_i, N)- r(e_i, D_NN)=-\langle D_Ne_i, N)\alpha =0. \end{eqnarray*} \item[(3)] From the fact $r(X,\nabla f)=0$ for $X$ orthogonal to $\nabla f$, we can write \begin{eqnarray} i_{\nabla f}r =\alpha df \label{eq8} \end{eqnarray} as a $1$-form. Since $T=0$ and $s=0$, we have $$ df\wedge \left(r+ \frac{\alpha}{n-1}g\right) = 0, $$ which shows \begin{eqnarray} r_{ij}= -\frac {\alpha}{n-1}\delta_{ij}\quad \mbox{and}\quad |r|^2=\frac n{n-1}\alpha^2. \label{1230} \end{eqnarray} Therefore, the second fundamental form ${\rm II}$ on each level set $f^{-1}(c)$ of $f$ for a regular value $c$ is given by \begin{eqnarray} {\rm II}(e_i, e_i)=\frac 1{|\nabla f|}Ddf(e_i, e_i) = -\frac 1{|\nabla f|}\left(\frac {f \alpha}{n-1} +h\right)\label{eq11} \end{eqnarray} for $i\geq 2$. Also, for $i,j\geq 2$ with $i\neq j$, we have $$ R_{ijij}= \frac 1{n-2} (r_{ii}+r_{jj})= -\frac {2\alpha}{(n-1)(n-2)} .$$ Hence, by the Gauss equation with (\ref{eq11}), we conclude that $f^{-1}(c)$ has constant sectional curvature. \end{itemize} \end{proof} We can see that, by equations (\ref{eq8}) and (\ref{1230}), $\alpha$ is an eigenvalue of the Ricci curvature tensor $r$ with multiplicity one whose corresponding eigenvector is $\nabla f$, and $-\frac{\alpha}{n-1}$ is another eigenvalue with multiplicity $n-1$. Even without the condition $C=0$ nor $\mathcal W = 0$, we can show that the scalar curvature $s$ is constant along each level hypersurface when $h$ is constant. In fact, from (\ref{eq83}), we have $$ fds(e_i) = 0 $$ for $i \ge 2$. Moreover, when $s = 0$, note that, by taking $d^D$ to (\ref{eq1}) $$ \tilde{i}_{\nabla f}R= d^DDdf=df\wedge r $$ since $d^Dr=0$. In particular, for $i,j\geq 2$ we obtain \begin{eqnarray*} R_{1i1j}=-r_{ij} \label{eq353} \end{eqnarray*} with $$ R_{1i1i}= \frac {\alpha}{n-1}. $$ \vspace{.2in} From now on, we assume that $(M, g, f, h)$ is an Einstein-type manifold with constant $h$ and zero scalar curvature, $s=0$, and $f$ is proper so that each level hypersurface of $f$ is compact. Let us denote by ${\rm Crit}(f)$ the set of all critical points of $f$. \begin{lem}\label{lem2022-2-17-1} Let $(M^n, g, f, h)$ be an Einstein-type manifold satisfying (\ref{eq1}) with constant $h$ and zero scalar curvature. Assume that $g$ is locally conformally flat. 
Then, on the set $M \setminus {\rm Crit}(f)$, we have \begin{eqnarray} N(\alpha) = \frac{n\alpha}{|\nabla f|}\left[\frac{f\alpha}{n-1}+h\right]\label{eqn2022-2-12-2} \end{eqnarray} and \begin{eqnarray} NN(\alpha) = \frac{n}{n-1}\alpha^2 + \frac{n(n+1)\alpha}{|\nabla f|^2} \left[\frac{f\alpha}{n-1} + h\right]^2.\label{eqn2022-2-20-5} \end{eqnarray} \end{lem} \begin{proof} Since $i_{\nabla f}r= \alpha df$ and $s=0$, we have $$ \langle \nabla \alpha, \nabla f\rangle -nh\alpha =\mbox{div}(i_{\nabla f}r)= f|r|^2 $$ and so \begin{eqnarray*} N(\alpha) = \frac{n\alpha}{|\nabla f|}\left[\frac{f\alpha}{n-1}+h\right] \end{eqnarray*} Taking the $N$-derivative of this and using $N(|\nabla f|) = Ddf(N, N) = f\alpha - h$, we obtain \begin{eqnarray*} (f\alpha-h)N(\alpha) + |\nabla f|NN(\alpha) = nh N(\alpha) + \frac{n}{n-1}|\nabla f|\alpha^2 + \frac{2nf\alpha}{n-1}N(\alpha) \end{eqnarray*} So, \begin{eqnarray*} NN(\alpha) = \frac{n}{n-1}\alpha^2 + \frac{n(n+1)\alpha}{|\nabla f|^2} \left[\frac{f\alpha}{n-1} + h\right]^2.\label{eqn2022-1-8-4} \end{eqnarray*} \end{proof} Denote by $\Sigma_c$ a connected component of the level hypersurface $f^{-1}(c)$ of $f$ for a regular value $c\in {\mathbb R}$ which is assumed to be compact. \begin{lem} \label{lem0107} Let $\Omega$ be a connected component of $M \setminus {\rm Crit}(f)$. Then $(\Omega, g)$ is isometric to a warped product $(I \times \Sigma, dt^2+b^2g_{_\Sigma})$, where $I$ is an interval, $\Sigma$ is a connected component of the level hypersurface of $f$ contained in $\Omega$, and $g_{_\Sigma}$ is the induced metric on $\Sigma$ by $g$. Moreover $g_{_\Sigma}$ is Einstein and $b^{n-1}\frac{d^2}{dt^2}b$ is constant on $I \times \Sigma$. \end{lem} \begin{proof} The second fundamental form ${\rm II}$ on $\Sigma$ is given by \begin{eqnarray*} {\rm II}_{ij}= -\frac 1{|\nabla f|} \left( \frac {\alpha f}{n-1}+h\right)\delta_{ij}, \label{eq206} \end{eqnarray*} which depends only on $f$ due to Lemma~\ref{lem129}. Thus, the metric can be written as $$ g=\frac {df}{|d f|}\otimes \frac {df}{|d f|} +b^2g_{_\Sigma}, $$ where $b=b(f)$ is a function of $f$ and $g_{_\Sigma}$ is the induced metric on $\Sigma$. Taking $dt=df/|\nabla f|$, we have $g=dt^2+b^2g_{_\Sigma}$. In particular, it is easy to see that $$ Ddf(\partial_t, \partial_t)=f'',\quad Ddf_{T\Sigma}=\left(\frac {b'}{b}\right)f'g_{_{T\Sigma}}, \quad Ddf(\partial_t, X)=0 $$ for $X$ tangent to $T\Sigma$. Here, ``\, $^{\prime}$\, '' denotes the derivative taken with respect to $t\in I$ so that $f' = |\nabla f|$. For more details, we can refer \cite{lea}, \cite{cc} or \cite{MT}. By the standard calculation of the warped product and (\ref{eq1}) with $s=0$, we see that ${\rm Ric}(g_{_\Sigma})$ is given by \begin{eqnarray*} {\rm Ric}(g_{_\Sigma}) &=& {\rm Ric}(g)\vert_{_{T\Sigma}}+\left((n-2)\left(\frac {b'}b\right)^2+\frac {b''}{b}\right)g_{_{\Sigma}} \\ &=&\left(\frac {b'f'}{bf}+\frac h{f}+(n-2)\left(\frac {b'}b\right)^2+\frac {b''}{b}\right)g_{_{\Sigma}}, \end{eqnarray*} which implies that $g_{_\Sigma}$ is Einstein. 
Next, since $D_NN=0$, we have $Ddf = f''dt^2 + bb'f' g_{_\Sigma}$ and so \begin{eqnarray} \Delta f = f'' + (n-1)\frac{b'}{b}f'.\label{eqn2022-2-5-1} \end{eqnarray} It is easy to compute the Ricci curvature of $g$ which is given by $$ {\rm Ric}_g = -(n-1)\frac{b''}{b}dt^2 - \left[b b''+(n-2)b'^2\right]g_{_\Sigma} + {\rm Ric}_{g_{_\Sigma}}, $$ So, \begin{eqnarray*} {\rm Ric}_g(\partial_t, \partial_t) = -(n-1)\frac{b''}{b}\label{eqn2022-2-3-2} \end{eqnarray*} and the Einstein-type equation (\ref{eq1}) with $f$ is equivalent to the following: \begin{eqnarray} \left\{\begin{array}{ll} f'' + (n-1)\frac{ b''}{b}f + h = 0\\ f\left[-b b'' + (n-2)(\kappa_0-b'^2)\right] - b b' f' - h b^2 = 0. \end{array}\right.\label{eqn2022-1-26-6} \end{eqnarray} Here ${\rm Ric}_{g_{_\Sigma}} = (n-2)\kappa_0 g_{_\Sigma}.$ Recall that we assume $h$ is constant. On the other hand, since $\Delta f = -nh$, we have \begin{eqnarray} -(\Delta f)g +Ddf-fr= (n-1)h g. \label{vst1} \end{eqnarray} Thus, by substituting the pair $(\partial_t, \partial_t)$ into (\ref{vst1}), we obtain \begin{eqnarray} fb''- f' b'=hb. \label{eq202} \end{eqnarray} Taking the derivative of (\ref{eq202}) and substituting the first identity in (\ref{eqn2022-1-26-6}), we obtain \begin{eqnarray*} bb''' + (n-1)b'b'' = 0\label{eqn2022-1-28-2} \end{eqnarray*} and so $(b^{n-1}b'')' = 0.$ \end{proof} In Lemma~\ref{lem0107}, if $b$ is constant so that $g$ is a product metric, then, obviously, $\Sigma$ is flat and $(M, g)$ is Ricci-flat. We may also assume that the interval $I$ is given by $[t_0, t_1)$ and $f$ has a critical point in $f^{-1}(t_1)$ when ${\rm Crif}(f) \ne \emptyset$. Also, by Lemma~\ref{lem0107}, we can let \begin{eqnarray} b^{n-1}b'' = a_0\label{eqn2022-1-28-3} \end{eqnarray} for some constant $a_0$. Moreover, by warped product formula from $g = dt^2 + b^2 g_{_\Sigma}$, we have \begin{eqnarray} \alpha = {\rm Ric}(\partial_t, \partial_t)=- (n-1)\frac {b''}b= -(n-1)\frac{a_0}{b^n}.\label{eqn2022-2-21-10-1} \end{eqnarray} That is, \begin{eqnarray} \frac {b''}b=-\frac {\alpha}{n-1}= \frac {a_0}{b^n}. \label{eq206-1} \end{eqnarray} So, since $\alpha b^n = -(n-1)a_0$, we have \begin{eqnarray} \alpha' b + n \alpha b' =0.\label{eqn2022-2-12-1-1} \end{eqnarray} From now on, we will show $a_0$ must be zero, which implies $(M, g)$ is Ricci-flat. First, if we assume $a_0 \ne 0$, we can show that the warping function $b$ can be extended beyond the critical point of $f$ and so the warped product metric is still valid beyond the critical points of $f$. \begin{lem}\label{lem2022-2-26-10-100} Under the hypotheses of Lemma~\ref{lem0107}, assume that $I =[t_0, t_1)$ so that $f$ has a critical point at $t=t_1$. If $a_0 \ne 0$, then the warping function $b$ can be extended smoothly beyond $t_1$ satisfying (\ref{eqn2022-1-28-3}). Moreover, $\Sigma_1:=f^{-1}(t_1)$ is a smooth hypersurface, and $f\alpha +(n-1) h = 0$ on the set $\Sigma_1$. \end{lem} \begin{proof} Note that the scalar curvature of the metric $g$ is given by $$ 0=s=-2(n-1)\frac {b''}b +\frac {s_c}{b^2}-(n-1)(n-2)\left(\frac {b'}b\right)^2 $$ with $s_\Sigma= (n-1)(n-2) \kappa$, where $s_\Sigma$ is the (normalized) scalar curvature of $\Sigma = f^{-1}(t_0)$. So, we obtain \begin{eqnarray} 2\frac {b''}b +(n-2)\left(\frac {b'}b\right)^2 = (n-2)\frac {\kappa}{b^2}. \label{eqn2022-2-3-1-1} \end{eqnarray} This can be written in the following form \begin{eqnarray} (n-2)b'^2+2a_0b^{2-n} =(n-2)\kappa. 
\label{eqn2022-1-28-6-1} \end{eqnarray} If $a_0 \ne 0$, then (\ref{eqn2022-1-28-6-1}) shows $$ \liminf_{t\to t_1} b(t) >0. $$ In fact, if $a_0>0$ and $\displaystyle{\liminf_{t\to t_1} b(t) =0}$, then we have (LHS) $\to \infty$ and (RHS) $=(n-2)\kappa$ in (\ref{eqn2022-1-28-6-1}), a contradiction. Now assume $a_0<0$ and $\displaystyle{\liminf_{t\to t_1} b(t) =0}$. Then, by (\ref{eqn2022-1-28-3}), $$ \lim_{t\to t_1} \frac{b''}{b}(t) = -\infty. $$ Since $f$ is well-defined around $t = t_1$, by (\ref{eq202}), we must have $$ \lim_{t\to t_1} \frac{f'b'}{b}(t) = -\infty. $$ However, this contradicts (\ref{eqn2022-2-5-1}) since $\Delta f$ is well-defined on $t=t_1$. Suppose, now, there exists $t_k\to t_1$ such that $b(t_k) \to \infty$ as $k \to \infty$. It follows from (\ref{eqn2022-1-28-6-1}) that $$ \left(\frac{b'(t_k)}{b(t_k)}\right)^2 \to 0. $$ So, by (\ref{eqn2022-1-28-3}) or (\ref{eqn2022-2-3-1-1}), we have \begin{eqnarray*} \lim_{k\to \infty} \frac{b''}{b} = \lim_{k\to \infty} \frac{a_0}{b^{n}} =0. \label{eqn2022-1-30-3} \end{eqnarray*} Since $f'$ is well-defined on $M$, we have, by (\ref{eq202}), $$ h = \lim_{k\to \infty} f\frac{b''}{b} -\lim_{k\to \infty} f'\frac{b'}{b} = 0, $$ which contradicts $h >0$. Therefore we have $$ \limsup_{t\to t_1} b(t) < +\infty. $$ It follows that $C^{-1} \le b \le C$ on $I=[t_0, t_1)$ for some $C>0$. In particular, $b$ can be extended smoothly beyond $t_1$ satisfying (\ref{eqn2022-1-28-3}), and $f^{-1}(t_1)$ is a smooth hypersurface. Since $f$ has no critical points on $t_0\le t <t_1$, every level set $f^{-1}(t)$ for $t_ 0\le t <t_1$ is homotopically equivalent. Thus, at $t=t_1$, $f^{-1}(t_1)$ would be a hypersurface which is homotopically equivalent to $\Sigma$. Since $f^{-1}(t_1)$ is a hypersurface, the mean curvature $m$ is well-defined. Since, around $t = t_1$, we have $$ -nh = \Delta f = Ddf(N, N) + m |\nabla H| = f\alpha - h + m |\nabla f|, $$ i.e., \begin{eqnarray*} f\alpha +(n-1)h = -m |\nabla f|,\label{eqn2022-2-25-1-1} \end{eqnarray*} by letting $t \to t_1$, we obtain \begin{eqnarray*} f\alpha + (n-1)h = 0\label{eqn2022-2-24-1-1} \end{eqnarray*} on the set $f^{-1}(t_1)$. \end{proof} \begin{lem}\label{lem2022-2-26-1} Under the hypotheses of Lemma~\ref{lem0107} with $a_0 \ne 0$, let $\Sigma_1:=f^{-1}(t_1)$ be a smooth hypersurface. Assume $f\alpha + (n-1)h = 0$ on the set $\Sigma_1$. Then, we have the following. \begin{itemize} \item[$(1)$] $N(\alpha) = \alpha' = 0, \,\,\, NN(\alpha)= \alpha'' >0$ on the set $\Sigma_1$. \item[$(2)$] $\Sigma_1$ is totally geodesic. \end{itemize} \end{lem} \begin{proof} \begin{itemize} \item[(1)] If $\Sigma_1=f^{-1}(t_1)$ contains a critical point of $f$, then $$ b' = \frac{db}{df}\cdot \frac{df}{dt} = 0 $$ at $t=t_1$, and so $\alpha' = 0$ at $t=t_1$ by (\ref{eqn2022-2-12-1-1}). If $f$ has no critical point on $\Sigma_1$, by Lemma~\ref{lem2022-2-17-1}, we have $\alpha'=0$ at $t=t_1$. The second inequality $\alpha'' >0$ follows from (\ref{eqn2022-2-20-5}). \item[(2)] Let $\nu$ be the outward unit normal vector field on $\Sigma_1$ such that $$ \lim_{t\to t_1-}N = \nu. $$ Let $\{e_i\}$ be a local frame around $\Sigma_1$ such that $e_1 = N = \frac{\nabla f}{|\nabla f|}$ and then $$ \lim_{t\to t_1-}e_1 = \nu. $$ Then, on the set $f^{-1}(t_1-\epsilon)\cap \Omega$ for a sufficiently small $\epsilon>0$, we have $$ D_{e_i}N = - \frac{1}{|\nabla f|}\left(\frac{f\alpha}{n-1} +h\right) e_i = \frac{b'}{b}e_i. $$ By letting $\epsilon \to 0$, by (1) above, we obtain $$ D_{e_i} \nu = 0, $$ which shows $\Sigma_1$ is totally geodesic. 
\end{itemize} \end{proof} \vspace{.12in} \noindent {\bf Proof of Theorem~\ref{thm129}.}\,\, Since both $h$ and $s$ are constants, we have $sdf = 0$ by (\ref{eq83}) so that either $f$ is constant on $M$ or $s=0$. If $f$ is constant, $(M, g)$ is Einstein satisfying $fr = hg$. If $M$ is compact (without boundary), then from $\Delta f = -nh$, $f$ must be constant and so $(M, g)$ is Einstein. Now, assume $(M, g, f, h)$ is a complete non-compact Einstein-type manifold with constant $h>0$ and zero scalar curvature, $s=0$. Moreover, we assume that $f$ is not constant and a proper map. If $f$ contains an isolated critical point, then $a_0$ must be zero by Lemma~\ref{lem2022-2-26-10-100}. So, by (\ref{1230}) and (\ref{eq206-1}), $(M, g)$ is Ricci-flat. Assume that every level set of $f$ is a compact smooth hypersurface. There are two cases; either $f$ has no critical points, or $M$ contains a level hypersurface given by $f$ consisting of critical points of $f$. For the latter case, if $f^{-1}(t_1)$ is a smooth hypersurface consisting of critical points, and we have $b'(t_1)= 0=\alpha'(t_1)$ and $\alpha''(t_1)>0$ as in the proof of Lemma~\ref{lem2022-2-26-1}. In particular, $f^{-1}(t_1)$ is totally geodesic. Thus, by Lemma~\ref{lem0107}, the metric $g$ can be written as a warped product metric $$ g = dt^2 + b(t)^2 g_{_\Sigma} $$ globally on $M=(-\infty, \infty) \times \Sigma$, $[a, \infty)\times \Sigma$, or $(-\infty, a]\times \Sigma$ for a hypersurface $\Sigma$ of $M$ with constant sectional curvature. By (negative) parametrization, we only consider first two cases, and the second case corresponds to incomplete Einstein-type manifold. \begin{itemize} \item[(1)] $a_0 <0$ In this case, we have $\alpha >0$ and $b'' <0$ on $M$ by (\ref{eq206-1}). If $f^{-1}(t_0)$ is a smooth hypersurface consisting of critical points, then, by Lemma~\ref{lem2022-2-26-1}, it is totally geodesic, and $ f\alpha = -(n-1)h$, $\alpha' = 0$ and $\alpha''>0$ at $t=t_0$. In particular, the function $\alpha$ attains its local minimum at $t = t_0$. Since $\alpha b^n = -(n-1)a_0$, $b$ attains its local maximum at $t=t_0$, i.e., $b'(t_0) = 0$ and $b''(t_0) <0$. Moreover since $$ \left(\frac{b'}{b}\right)' = \frac{b''}{b} - \frac{b'^2}{b^2} = - \frac{\alpha}{n-1} - \frac{b'^2}{b^2} <0, $$ we have $ b' <0$ for $t >t_0$, and $b'>0$ on $t<t_0$. This shows that $b$ is convex upward and attains its global maximum at $t=t_0$. However this is impossible since $M$ is complete and $b>0$. Now assume that $$ f\alpha >- (n-1)h $$ on $M$ so that $\alpha' >0, \alpha'' >0$ and $f$ has no critical points. Then $$ b'<0\quad \mbox{and}\quad b'' <0. $$ This is also impossible since $b>0$ and $M$ is complete and noncompact. Since we assumed $h>0$, the case that $f\alpha < -(n-1)h$ on $M$ does not happen by considering the set $f>0$. Remark that the last two cases correspond to the case $M= (-\infty, \infty)\times \Sigma$ by Lemma~\ref{lem2022-2-26-10-100}. \item[(2)] $a_0 >0$ In this case, we have $\alpha <0$ and $b'' >0$ on $M$ by (\ref{eqn2022-2-21-10-1}). If there exists a $t_1 \in {\Bbb R}$ such that $f\alpha = -(n-1)h$ at $t=t_1$, we have $b' =0$ at $t=t_1$ as above. Since $b>0$, we have $$ \lim_{t\to \pm \infty} b(t) = \infty $$ unless $b$ is constant. By (\ref{eq206-1}), \begin{eqnarray*} \lim_{t\to \pm \infty} \alpha(t) = 0.\label{eqn2022-2-28-1} \end{eqnarray*} However, since $\alpha'(t_1)= 0, \alpha''(t_1)>0$ and $\alpha <0$ on $\Bbb R$, this is impossible. 
Now, assume that \begin{eqnarray} f\alpha >- (n-1)h\label{eqn2022-2-21-10} \end{eqnarray} on $M$ so that $\alpha' <0$. Note that, from Lemma~\ref{lem2022-2-26-10-100}, this corresponds to the first case $M = (-\infty, \infty)\times \Sigma$ with no critical points of $f$. Then $$ b'< 0 \quad \mbox{and}\quad b'' >0. $$ This shows that $$ \lim_{t\to - \infty} b(t) = \infty$$ and so, by (\ref{eqn2022-2-21-10-1}), $$ \lim_{t \to -\infty} \alpha = 0. $$ On the other hand, on the set $f>0$, we have $$ - \frac{(n-1)h}{f}< \alpha <0$$ by (\ref{eqn2022-2-21-10}), and so we have $$ \lim_{t \to \infty} \alpha = 0 . $$ However, this is impossible since $\alpha<0$ and $\alpha' <0$ on $M$. Finally, assume that $f\alpha < -(n-1)h$ on $M$. Recall that, from Lemma~\ref{lem2022-2-26-10-100}, this also corresponds to the first case $M = (-\infty, \infty) \times \Sigma$ with no critical points of $f$. As above, considering the set $f<0$, we can see that this is impossible because $h>0$. \end{itemize} Hence, we must have $a_0 = 0$ and so $(M, g)$ is Ricci-flat. \hfill $\Box$ \section{Final Remarks} For Einstein-type manifolds satisfying (\ref{eq1}), even though the potential function $f$ is related to $h$ through $h = \frac{1}{n}(fs - \Delta f)$, it is not easy to obtain rigidity results when $h$ is not constant, since a general $h$ allows many different situations. Nonetheless, if we impose some constraints on the function $h$, we can obtain a few modest results. We say that $h$ has a {\it weak sign} if either $h\geq 0$ on $M$ or $h\leq0$ on $M$. With this assumption, we can show that there are no critical points of $f$ on $f^{-1}(0)$ unless $f^{-1}(0)$ is empty, and this implies that the scalar curvature vanishes on the set $f^{-1}(0)$, as in the proof of Lemma~\ref{lem83}. \vskip .5pc On the other hand, when $h=csf$ for some constant $c$, we have the following result under a nonnegative Ricci curvature condition. \begin{lem} Let $(M^n,g,f, h)$ be a compact Einstein-type manifold with boundary $\partial M$ such that $h=csf$. If $(M,g)$ has nonnegative Ricci curvature and either $c< \frac 1{2(n-1)}$ or $c\geq \frac 1n$, then $|\nabla f|^2$ attains its maximum on $\partial M$. \end{lem} \begin{proof} Note that $s\geq 0$ since the Ricci curvature is nonnegative. By (\ref{eq83}) and $$ \nabla h= c (s\nabla f+f\nabla s), $$ we have $$ (1-2nc+2c)f\nabla s=2(nc-c-1)s\nabla f, $$ implying that $$ d\Delta f =(1-nc) (sdf+fds)= \frac {1-nc}{2nc-2c-1}sdf. $$ By the Bochner-Weitzenb\"ock formula and the assumption on the Ricci curvature, we have $$ \frac 12 \Delta |\nabla f|^2- \frac {nc-1}{2nc-2c-1}s|\nabla f|^2 =|Ddf|^2+r(\nabla f, \nabla f)\geq 0. $$ From the condition on the constant $c$, we have $$ \frac {nc-1}{2nc-2c-1} >0. $$ The lemma now follows from the maximum principle. \end{proof} \begin{cor} Let $h=csf$ for a constant $c$ satisfying $c< \frac 1{2(n-1)}$ or $c\geq \frac 1n$. If $(M,g,f,h)$ is a closed Einstein-type manifold with nonnegative Ricci curvature, then $(M,g)$ is Einstein. \end{cor} \vskip 0.3cm \noindent {\bf Acknowledgment:} The authors would like to express their gratitude to the referees for valuable suggestions. The first-named author was supported by the National Research Foundation of Korea (NRF-2019R1A2C1004948) and the second-named author was supported by the National Research Foundation of Korea (NRF-2018R1D1A1B05042186).
\section{Introduction} Binary classification is an important topic in statistical and machine learning. Margin-based loss functions form the backbone of non-parametric approaches to binary classification, and have been studied extensively over the last few decades. Excellent introductions are given in Bishop (2006)\nocite{Bishop2006} and Hastie et al. (2009)\nocite{ESL2009}. In contrast, model-based likelihood methods are typically used in regression applications for binary outcomes. This paper provides a straightforward derivation of a simple characterization of the class of conformable (consistent) margin-based loss functions. The characterization provides a direct method for comparing different loss functions and constructing new ones. It is shown that derivatives of loss functions in this class are equivalent to log-likelihood scores weighted by an even function, thereby establishing a strong connection between classification using margin-based loss and likelihood estimation. A simple algebraic relation is derived that establishes an equivalence between the margin and standardized logistic regression residuals (SLRRs). The relation implies that all margin-based loss functions can be considered as loss functions of squared SLRRs. This viewpoint provides interesting new perspectives on commonly used loss functions, including exponential loss, which underlies the AdaBoost algorithm. It is shown that minimizing empirical exponential loss is equivalent to minimizing the sum of squared SLRRs. It is argued that AdaBoost can be interpreted as forward stage-wise regression where the objective function to be minimized is the sum of squared SLRRs, weighted by squared SLRRs from previous stages. The relation between SLRRs and margins does not appear to have been previously known. An interesting approach for constructing loss functions based on ideas from the probability elicitation literature was given in Masnadi-Shirazi and Vasconcelos (2008) \nocite{SV2008} (see also Buja et al. (2006) \nocite{Buja2006}). In Masnadi-Shirazi and Vasconcelos (2015)\nocite{SV2015}, the same authors extended their construction to provide a characterization of loss functions consistent for what they term the generalized logit-link, with a focus on developing loss functions with strong regularization properties. Their derivation used the fact that the generalized logit-links are invertible. The characterization of loss functions here has a similar form to that given for the generalized logit-link in Masnadi-Shirazi and Vasconcelos (2015)\nocite{SV2015}. However, the characterization here is not restricted to generalized logit-links, nor does it require an invertible probability model. The derivation herein is straightforward, and, importantly, the characterization allows for direct comparison of loss functions using different parameterizations. Informative parameterizations of the characterization are explored, including in terms of the log-likelihood score. \section{Notation and review of margin-based classification} The following establishes notation, provides a review of some of the key elements in binary classification, and defines the notion of conformable loss functions. Let $x\in \Re^p$ denote a feature vector and $y^*\in\{-1,1\}$ a binary class indicator. The joint probability distribution for $(x,y^*)$, denoted $P$, is not specified. Interest is in finding a classification rule $C(x)\in\{-1,1\}$ that minimizes classification error.
Define the 0-1 loss function as $\phi_{\operatorname{0-1}}(y^*C(x))=I(y^*C(x)<0)=I(C(x)\ne y^*)$ where $I(\cdot)$ is the indicator function. The 0-1 loss function rewards/penalizes correct/incorrect classification. The conditional risk for 0-1 loss and a classifier $C(x)$ is then \begin{equation*} \begin{split} R_\mathsf{C,\operatorname{0-1}} & =E[\phi_{\operatorname{0-1}}(y^*C(x))\mid x] \\ & =E[I({C}(x)\ne y^*)\mid x] = \phi_{\operatorname{0-1}}(C(x))p(x)+\phi_{\operatorname{0-1}}(-C(x))(1-p(x)) \\ &=I(C(x)\ne 1)p(x)+I(C(x)\ne -1)(1-p(x))=\mbox{Pr}(y^*\ne C(x)\mid x) \end{split} \end{equation*} where $p(x)\equiv\mbox{Pr}(y^*=1\mid x)$. The latter expression is the probability of classification error using the classifier $C(x)$. It is well known that the value of $C(x)$ minimizing the risk from 0-1 loss is the so-called Bayes decision rule given by $C^*(x)=\mbox{sign}[2p(x)-1]$. When $p(x)=1/2$, the classifier is indeterminant. Here it is assumed this occurs with probability zero and so can be ignored, thereby avoiding technicalities that can obscure the presentation while offering little insight. The conditional distribution $\mbox{Pr}(y^*=1\mid x)$ is unknown as there are no distributional assumptions on $P$. Therefore, classification rules are estimated non-parametrically from data using loss functions. Classification functions estimated from data that converge to the optimal classifier $C^*(x)$ are called Bayes consistent. This definition of consistency is analogous to that of Fisher consistency in parameter estimation, see \nocite{Lin2004} Lin (2004). Typically $C(x)$ takes the form $C(x)=\mbox{sign}[f(x)]$ where $f:\Re^p\rightarrow\Re$ is a function mapping the features $x$ to the real numbers. The task is then to estimate $f(x)$. The margin $v$ is defined as the product $v=y^*f(x)$. Note that $C(x)=\mbox{sign}[f(x)]$ correctly classifies an observation if and only if $v>0$. Estimation of $f(x)$ using $0-1$ loss presents a difficult optimization problem because the loss is not smooth or convex (or quasi-convex), and the resulting classifiers can have poor finite sample properties, see Vapnik (1998) \nocite{vapnik1998}. Therefore, continuous loss functions are used as surrogates for 0-1 loss. The available data are $n$ independent pairs $(x_1,y^*_1),\dots,(x_n,y^*_n)$ from the distribution $P$. Estimation of $f(x)$ is typically accomplished by minimizing the empirical risk of a smooth margin-based loss function $\phi(v)$ by computing \[ \hat f(x) =\arg\min_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\phi(v_i) \] where $\mathcal{F}$ is a large class of functions. \begin{example} Exponential loss, given by $\phi(v)=e^{-v/2}$, is commonly employed and is the basis of the AdaBoost algorithm. Let $\mathcal{F}$ be the class of linear models, that is $\mathcal{F}\equiv\{f(x): f(x)=x^T\beta, \beta\in\Re^p\}$. The classifier estimated from independent pairs $(x_1,y^*_1),\dots,(x_n,y^*_n)$ is $\hat C(x)=\mbox{sign}[\hat f(x)]$ where $\hat f(x) = x^T\hat\beta$, and $\hat\beta$ minimizes the linear model/exponential loss empirical risk: \[ \hat \beta =\arg\min_{\beta}\frac{1}{n}\sum_{i=1}^n e^{-y^*_i x_i^T\beta/2}. \] \end{example} Linear models represent a rather restrictive class of functions for $\mathcal{F}$. 
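Before turning to richer classes $\mathcal{F}$, the following short script gives a purely illustrative numerical sketch of Example 1 (it is not part of the development here; the synthetic data, step size, and iteration count are arbitrary choices made only for illustration): it minimizes the linear model/exponential loss empirical risk by plain gradient descent.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n pairs (x_i, y*_i) with labels in {-1, +1}.
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = np.where(rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true)), 1.0, -1.0)

def risk(beta):
    # empirical risk (1/n) * sum_i exp(-y_i x_i^T beta / 2)
    return np.mean(np.exp(-y * (X @ beta) / 2))

def grad(beta):
    # gradient of the empirical risk with respect to beta
    w = np.exp(-y * (X @ beta) / 2)
    return (-0.5 / n) * (X.T @ (y * w))

beta = np.zeros(p)
for _ in range(5000):        # plain gradient descent
    beta = beta - 0.5 * grad(beta)

print("estimated beta:", beta)
print("empirical risk:", risk(beta))
\end{verbatim}
The fitted classifier is then $\hat C(x)=\mbox{sign}[x^T\hat\beta]$.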
A class of functions with enormous flexibility and widely used in applications is given by \begin{equation}\label{classFunctions} \mathcal{F}\equiv\left\{ f(x;\beta): f(x;\beta) = \sum_{m=1}^M \theta_m b(x; \gamma_m) \right\} \end{equation} where $\beta=\{\theta_m,\gamma_m\}_{m=1}^M$ are unknown parameters and $b(x:\gamma_m)$ are basis functions, see Hastie et al. (2009, page 341) \nocite{ESL2009}. Population level properties of loss functions are important for understanding whether the resultant classification rules are Bayes consistent. To study properties at the population level, let $\mathcal{F}$ represent the class of all measurable functions, and note the conditional risk function for margin-based loss is given by \[ R_\mathsf{C}(f(x)) = E[\phi(Y^*f(x))\mid x] = \phi(f(x))p(x)+\phi(-f(x))(1-p(x)). \] The optimal value for $f(x)$ at the population level is defined as \[ f^*(x)=\arg\min_{f\in\mathcal{F}} R_\mathsf{C}(f(x)). \] It is often possible to optimize $R_\mathsf{C}(f(x))$ through differentiation. For a differentiable loss function $\phi(v)$, the identity $\frac{\partial}{\partial f} R_C(f)=0$ yields \begin{equation}\label{derivRisk} \frac{\phi^\prime(-f^*(x))}{\phi^\prime(f^*(x))}=\frac{p(x)}{1-p(x)} \end{equation} where $\phi^\prime(v)=\frac{d}{dv} \phi(v)$. Let $0\le G(v) \le 1$ be a continuous cumulative distribution function (CDF) such that $G(0)=1/2$. If the derivative of the loss function satisfies \begin{equation}\label{definingeq} \frac{\phi^\prime(-v)}{\phi^\prime(v)} = \frac{G(v)}{1-G(v)} \end{equation} then $f^*(x)$ is such that $G(f^*(x))=p(x)\equiv\mbox{Pr}(y^*=1\mid x)$, i.e. at the population level the conditional probability $\mbox{Pr}(y^*=1\mid x)$, which was unspecified, is determined by $G(v)$ and $f^*(x)$. The preceding analysis assumes that there does not exist $B\subset \Re^p$ where $\mbox{Pr}(x\in B)>0$ and such that $p(x)=1$ for $x\in B$. In particular, the case where the features provide perfect prediction is excluded, i.e. when $p(x)=I(x\in B)$ where $I(\cdot)$ is the indicator function. \begin{definition}\label{ConsistentDef} {\bf Conformable loss functions}: Let $\phi(v)$ be a differentiable, margin-based loss function and $G(x)$ a continuous CDF where $G(0)=1/2$. If $\phi^\prime (v)$ satisfies \eqref{definingeq}, we say $\phi(v)$ is {\bf conformable }to $G(x)$. \end{definition} \bigskip The definition is useful because it is easily seen that conformable loss functions yield classification rules that are Bayes consistent, i.e. converging to the optimal rule $C^*(x)$. To see this, note that for a conformable loss $\phi(v)$, it follows that, at the population level, $C(x)=\mbox{sign}[f^*(x)]=\mbox{sign}[2\mbox{Pr}(y^*=1\mid x)-1]=C^*(x)$. Then under standard regularity conditions, $\mbox{sign}[\hat f(x)]\rightarrow\mbox{sign}[f^*(x)]$, i.e. $\hat C(x)\rightarrow C(x) = C^*(x)$. An analogous result is established in in Section \ref{regressPerspective} below, namely that conformable loss functions produce consistent estimators in regression parameter estimation contexts. Conformable loss functions also yield an approximate soft classifier. Soft classifiers directly model $\mbox{Pr}(y^*=1\mid x)$ and then define the classifier as $C(x)=\mbox{sign}[2\mbox{Pr}(y^*=1\mid x)-1]$. A loss function conformable for $G(x)$ provides an approximate soft classifier via $\mbox{Pr}(y^*=1\mid x)\approx G(\hat f(x))$. 
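For instance, for exponential loss $\phi(v)=e^{-v/2}$ one has $\phi^\prime(v)=-\frac 12 e^{-v/2}$, so that
\[
\frac{\phi^\prime(-v)}{\phi^\prime(v)} = e^{v} = \frac{F(v)}{1-F(v)}, \qquad F(v)=(1+e^{-v})^{-1},
\]
i.e. exponential loss is conformable to the logistic CDF, and the corresponding approximate soft classifier is $\mbox{Pr}(y^*=1\mid x)\approx (1+e^{-\hat f(x)})^{-1}$.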
While conformable loss functions are Bayes consistent, not all loss functions that result in Bayes consistent classifiers need be conformable in the sense of Definition \ref{ConsistentDef}. Several of the most widely used loss functions are conformable. Examples include exponential, logistic, and squared loss. Hinge loss, used with support vector machines, is not conformable as it is not differentiable. However, hinge loss is Bayes consistent. Weak conditions under which a loss function is Bayes consistent are given in Lin (2004)\nocite{Lin2004}. The definition of conformable loss functions is related to what Shen (2005)\nocite{Shen2005} and Buja et al. (2006) \nocite{Buja2006} term `proper' loss functions. Conformable loss functions are proper. The definition of conformable loss functions focuses on $G(v)$ and requires $\phi(\cdot)$ to be differentiable, but does not require $G(\cdot)$ to be invertible, and the definition is independent of $\mathcal{F}$. In the next section, a characterization of the class of conformable loss functions is given. \section{A characterization of the class of differentiable loss functions conformable for $G(x)$ } Let $0\le G(x) \le 1$ be a continuous cumulative distribution function for a distribution that is symmetric around $1/2$, i.e. $G(x) = 1-G(-x)$. This strengthens the requirement that $G(0)=1/2$. The following derives a characterization of the class of non-decreasing, differentiable loss functions that are conformable for symmetric $G(x)$. It is straightforward to generate new loss functions using the characterization, and the characterization is particularly well-suited to understanding and comparing properties of loss functions that are conformable for the logistic distribution. Motivated by \eqref{definingeq}, we seek to characterize the class of functions $\mathcal{C}_q$ defined by $h(w)\in\mathcal{C}_q$ if and only if \[ h(w)=q(-w) h(-w) \] where $q(w)$ is a function with the property $q(w)q(-w)=1$. Interest will be in $q(w)=G(w)/(1-G(w))$, but the following result is stated more generally. \begin{lemma}\label{classlemma} $h(w)\in \mathcal{C}_q$ if and only if \[ h(w) = q(-w)^{1/2} g(w) \] where $g(w)$ is an even function. \end{lemma} \begin{proof} Assume $h(w)\in \mathcal{C}_q$, i.e. that $h(w)=q(-w) h(-w)$. Then we can write $h(w) = q(-w)^{1/2} g(w)$ where $g(w)= q(w)^{1/2} h(w)$ and it is easily seen that $g(w)$ so defined is an even function. Conversely, if $h(w) = q(-w)^{1/2} g(w)$ where $g(w)$ is an even function, then \[ h(-w) = q(w)^{1/2} g(-w) = q(w)^{1/2} g(w)=\frac{q(w)^{1/2}}{q(-w)^{1/2}}h(w) = q(w)h(w). \] \end{proof} Let $\mathcal{\tilde C}_q$ represent the class of functions $\mathcal{C}_q$ with the restriction that $g(w)$ is continuous, $g(w)\ge 0,$ and where \begin{equation}\label{oddseq} q(w)=\frac{G(w)}{1-G(w)}. \end{equation} The restriction $g(w)\ge 0$ is not mathematically necessary, but will result in loss functions that are non-increasing, a property which seems to make the most sense practically. If $G(w)$ is interpreted as the conditional probability $\mbox{Pr}(y^*=1\mid w)$, $q(\cdot)$ represents the odds of an outcome conditional on $w$. Useful properties to note are that $q(w)q(-w)=1$, $\lim_{w\rightarrow\infty} q(w) = \infty$ and $\lim_{w\rightarrow -\infty} q(w) = 0$. 
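For example, if $G(w)$ is the logistic CDF $(1+e^{-w})^{-1}$, then $q(w)=e^{w}$ and the members of $\mathcal{\tilde C}_q$ are exactly the functions $h(w)=e^{-w/2}g(w)$ with $g(w)\ge 0$ continuous and even; the choice $g(w)=1/2$ gives $h(w)=\frac 12 e^{-w/2}$, which is $-\phi^\prime(w)$ for exponential loss $\phi(v)=e^{-v/2}$.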
\begin{definition}\label{LG} Denote by $\mathcal{L}_G$ a class of loss functions of the form \begin{equation}\label{lossfunction} \phi(v) = k-\int_{0}^{v} h(w) dw =\begin{cases} k-\int_0^v h(w)dw, & \text{if $v>0$} \\ k+\int_v^0 h(w)dw, & \text{if $v\le 0$} \end{cases} \end{equation} where $k>0$ is an arbitrary constant and $h(w)\in\mathcal{\tilde C}_q$. \end{definition} Each $\phi(v)\in\mathcal{L}_G$ is indexed by a weight function $g(w)$ that can be chosen independently from $G(w)$. The derivative of the loss is $\phi^\prime (v)=-h(v)=-q(-v)^{1/2}g(v)$, which is the product of the square root of the reciprocal of the odds ($q(w)$) and the weight function $g(w)$. \begin{theorem}\label{lossfuncthr} A margin-based loss function $\phi(v)$ is in the class $\mathcal{L}_G$ if and only if $\phi(v)$ is non-increasing, differentiable, and is conformable to $G(x)$. \end{theorem} \begin{proof} Suppose $\phi(v)\in \mathcal{L}_G$, i.e. $\phi(v)$ is given by \eqref{lossfunction}. Then $\phi(v)$ is differentiable by the Fundemental Theorem of Calculus, and is non-increasing because $\phi^\prime(w)=-h(w)\le 0$. The derivative of $\phi(v)$ satisfies \eqref{definingeq}, and therefore $\phi(v)$ is conformable for $G(x)$. Conversely, if $\phi(v)$ is conformable, non-increasing, and differentiable, then $\phi^\prime(v)\in\mathcal{\tilde C}_q$ and by Lemma \ref{classlemma}, $\phi^\prime(v)=-q(-v)^{1/2}g(v)$ where $q(v)=G(v)/(1-G(v))$ and $g(v)\ge 0$ is an even function. It follows that $\phi(v)$ has the form given in \eqref{lossfunction}, i.e. $\phi(v)\in \mathcal{L}_G$. \end{proof} Constructing loss functions in $\mathcal{L}_G$ is straightforward. Simply choose a target distribution function $G(w)$ satisfying $G(w)=1-G(-w)$, construct $q(w)$ and choose a positive even function $g(w)$. The loss function is then essentially the anti-derivative of $h(w) = q(w)^{-1/2}g(w)$. Some choices of $G$ and $g$ will result in intractable integrals. For purposes of minimizing the empirical risk to obtain an estimate $\hat f(x)$, evaluating the integral isn't necessary if the optimization is done by gradient descent, rendering the inability to evaluate the anti-derivative moot. For the logit-link, the characterization given in Theorem \ref{lossfuncthr} is similar to the characterization for the generalized logit-link given in Masnadi-Shirazi and Vasconcelos (2015)\nocite{SV2015} (see their Theorems 5 and 6). However, the derivation and proof of Theorem \ref{lossfuncthr} is applicable to all symmetric CDFs ($G(x)$), and the derivation does not require $G(x)$ to be invertible. The following two examples are intended to show how loss functions in $\mathcal{L}_G$ can be constructed using \eqref{lossfunction}. The intention is not to construct loss functions for a specific purpose, and therefore justification for the choices of $G$ and $g$ are not considered in these examples. \begin{example} Set $G(w)=(w+1)/2$ for $w\in [-1,1]$ (uniform distribution over $[-1,1]$). Then for $-1\le w\le 1$, \[ q(w)^{-1/2}=\sqrt{\frac{1-w}{1+w} }. \] Let $g(w)=4\sqrt{G(w)(1-G(w))}=2\sqrt{(1+w)(1-w)}$, and $k=1$. Squared error loss results: \[ \phi(v)=1-\int_0^v q(w)^{-1/2}g(w)dw = (1-v)^2. \] More generally, for the uniform distribution function it is easily seen that the use of $g(w)=((1-w)(1+w))^{(m+1)/2}$ where $m$ an odd integer results in a polynomial loss function of degree $m+1$. The loss functions in this example all have the population level property that $\mbox{Pr}(y^*=1\mid x) = G(f^*(x))=(f^*(x)+1)/2$. 
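For the record, the computation behind the first display is elementary: with these choices
\[
q(w)^{-1/2}g(w)=\sqrt{\frac{1-w}{1+w}}\cdot 2\sqrt{(1+w)(1-w)}=2(1-w),
\]
so that $\phi(v)=1-\int_0^v 2(1-w)\,dw = 1-2v+v^2=(1-v)^2$ for $v\in[-1,1]$.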
\end{example} \begin{example}\label{logisticexample} Suppose interest is in constructing a loss function conformable to the logistic CDF. Set $G(w)=F(w)\equiv (1+e^{-w})^{-1}$ for $-\infty<w<\infty$. Then \[ q(w)^{-1/2}=e^{-w/2}. \] Choose $g(w)=1/2$ and $k=1$. Exponential loss results: \[ \phi(v)=1-\int_0^v q(w)^{-1/2}g(w)dw = e^{-v/2}. \] \end{example} Several additional examples of constructing loss functions in $\mathcal{L}_F$ are given in Section \ref{LogistSection}, where $\mathcal{L}_F$ represents the class of non-increasing, differentiable loss functions conformable to the logistic distribution function. The choice of both $G(w)$ and $g(w)$ can depend on the objectives of the analysis. In fact, while a conformable loss function may be desirable, conformability to a specific choice of $G(w)$ may not be important. For example, properties of hinge loss are often compared to those of logistic likelihood loss, despite the fact that they are interpreted differently at the population level. Likelihood loss functions are well known to have optimal properties when estimation of regression parameters is the goal. Therefore, the weight function for likelihood loss (also termed cross-entropy loss) may provide a touchstone for comparison to other weight functions in the class $\mathcal{L}_G$. The weight function for log-likelihood loss is derived in the following example. \bigskip \begin{example}\label{mleexample} Here we work in reverse, i.e. we start with the log-likelihood loss function and derive the weight function. Therefore, we begin by assuming a model for the conditional distribution $y^*\mid x$ so that a likelihood can be constructed. For a differentiable distribution function $G(x)$ where $G(x)=1-G(-x)$, suppose that $\mbox{Pr}(y^*=1\mid x) = G(f(x))$ for some function $f(x)$. Then the log-likelihood loss function (the negative of the log-likelihood), denoted $\phi_L(v)$, can be written in terms of the margin $v=y^*f(x)$: \[ \phi_L(v) = -\log G(y^*f(x)) = -\log G(v). \] Setting $\phi_L^\prime(v)\mid_{v=w}=-q(w)^{-1/2}g(w)$ gives \[ \frac{d}{dv} \phi_L(v)\mid_{v=w} = -\frac{G^\prime(w)}{G(w)}=-q(w)^{-1/2}g(w). \] Solving for $g(w)$ yields the log-likelihood weight function \[ g_L(w)=q(w)^{1/2}\frac{G^\prime(w)}{G(w)}=\frac{G^\prime(w)}{\sqrt{G(w)(1-G(w))}}. \] \end{example} \subsection{Convexity} Properties of convex loss functions have been studied extensively (see for example \nocite{Bartlett2006} Bartlett et al. (2006)), though the importance of convex loss has been debated, especially with respect to outlier robustness (see for example \nocite{Zhao2009} Zhao et al. 2009). Loss functions in $\mathcal{L}_G$ differ only with respect to the choice of $g(w)$, and therefore properties of $g(w)$, relative to $q(w)$, determine whether the loss is convex. More specifically, convexity of $\phi(v)\in\mathcal{L}_G$ is determined by the maximum rate at which $\log [g(w)]$ changes relative to the change in the log-odds, $\log[q(w)]$. This is easily seen when $g(w)$ is strictly positive and differentiable. \begin{lemma}\label{convex} Suppose $\phi(v)\in\mathcal{L}_G$ where $G(w)$ and $g(w)$ are differentiable, and $g(w)>0$. Then $\phi(v)$ is convex if and only if $\frac{d}{dv}\log g(v)\le \frac{1}{2}\frac{d}{dv}\log q(v)$. \end{lemma} \begin{proof} From the assumptions, $\phi(v)$ is a twice differentiable, strictly decreasing function. It is not difficult to show that $\phi(v)$ is convex if and only if $\frac{d}{dv}\log [-\phi^\prime(v)]\le 0$. For $\phi(v)\in\mathcal{L}_G$, we have $\phi^\prime(v)=-q(v)^{-1/2}g(v)$.
Under the assumption that $G(v)$ and $g(v)$ are differentiable, it follows $\phi(v)$ is convex if and only if \[ \frac{d}{dv}\log g(v)\le \frac{1}{2}\frac{d}{dv}\log q(v). \] \end{proof} \subsection{Other parameterizations of $\mathcal{L}_G$} The derivatives of loss functions in $\mathcal{L}_G$ are the product of a function of $G(v)$ ( the inverse square root of the odds), and an arbitrary non-negative, even weight function. We say the loss function is ``parameterized" in terms of the odds. The parameterization of the loss functions in $\mathcal{L}_G$ defined in \eqref{lossfunction} is not unique. The class of loss functions in $\mathcal{L}_G$ can be defined and interpreted through an infinite number of other equivalent parameterizations as explicated in the following Corollary to Lemma \ref{lossfuncthr}. Each parameterization represents the derivative of $\phi(v)\in\mathcal{L}_G$ as the product of a function of $G(v)$ and an arbitrary non-negative, even weight function. \begin{corollary}\label{corollaryParam} Suppose the function $b:[0,1]\rightarrow\Re^+$ satisfies $b(x)=b(1-x)$. Define $r(w)=q(w)^{-1/2}b(G(w))$ and \[ h^*(w)=r(w)g(w) \] where $G(w)$ is a symmetric CDF, and $g(w)\ge 0$ is an even function. Then $h^*(w)\in \mathcal{\tilde C}_q$. \end{corollary} \begin{proof} It is not difficult to show that for $G(w)$ a symmetric CDF, $b^*(w)=b(G(w))$ is an even function if and only if $b(x)=b(1-x)$. Then $g^*(w)=g(w)b^*(w)$ is an even function and the Corollary follows immediately from Lemma \ref{classlemma}. \end{proof} Note that $r(w)$ is a function of $G(w)$ only. The implication of the Corollary is that loss functions in $\mathcal{L}_G$ can be defined as \begin{equation}\label{param} \phi(v) = k-\int_{0}^{v} h^*(w) dw =\begin{cases} k-\int_0^v h^*(w)dw, & \text{if $v>0$} \\ k+\int_v^0 h^*(w)dw, & \text{if $v\le 0$} \end{cases} \end{equation} where $k>0$ is an arbitrary constant. Loss functions in $\mathcal{L}_G$ are then interpreted as $r(w)$ smoothed (integrated) using an even weight function $g(w)$, in which case we say that $\mathcal{L}_G$ is {\it parameterized} in terms of $r(w)=q(w)^{-1/2}b(G(w))$. Two parameterizations are explored in the following example and theorem. \begin{example}\label{DFexample} Suppose that $b(x)=\sqrt{x(1-x)}$ and $\tilde g(w)=1$. Then \[ r(w)=q(w)^{-1/2}b(G(w))= 1-G(w). \] The class of functions $\mathcal{L}_G$ can be generated via \eqref{param} using $h^*(w)=(1-G(w))g(w)$ where $g(w)\ge 0$ is an even function. Therefore, loss functions in $\mathcal{L}_G$ can be interpreted as $1-G(w)$ smoothed via an even function $g(w)$. Assuming $G(w)$ is differentiable, a natural weight function for this parameterization is to choose $g(w)=G^\prime(w)$, i.e. use the density function corresponding to $G(w)$ as the weight function. Note the density will be an even function because $G(w)$ is assumed symmetric (odd) about $1/2$. Squared error loss on the CDF scale results, seen as follows. Set $k=1/8$. Then $\phi(v) = 1/8-\int_0^v (1-G(w))g(w)dw = 1/8-\int_0^v (1-G(w))G^\prime(w)dw = .5(1-G(v))^2$. \end{example} Assuming that $G(v)$ is differentiable, the following theorem states that derivatives of margin-based loss functions in $\mathcal{L}_G$ are simply weighted versions of the derivative of the log-likelihood. \begin{theorem}\label{LLexample} Suppose $\phi(v)\in\mathcal{L}_G$ and that $G(v)$ is differentiable. Then \[ \phi^\prime(v) = - g(v)\frac{d}{dv}\log [G(v)] \] where $ g(v)\ge 0$ is an even function. 
If the probability model $\mbox{Pr}(y^*=1\mid x) = G(f(x))$ is assumed, then $\frac{d}{dv}\log [G(v)]$ represents the derivative of the log-likelihood. \end{theorem} \begin{proof} Let $b(v) = 1/\sqrt{G(v)(1-G(v))}$ and $\tilde g(v)= g(v)\frac{d}{dv} G(v)$ where $ g(v)\ge 0$ is an even function. Note that as defined $\tilde g(v)$ is an even, non-negative function. Let \[ h^*(v) = q(v)^{-1/2}b(G(v))\tilde g(v) = g(v) \frac{\frac{d}{dv} G(v)}{G(v)} = g(v)\frac{d}{dv}\log [G(v)] . \] From Corollary \ref{corollaryParam}, $h^*(v)\in \mathcal{\tilde C}_q$. Then the class of functions $\mathcal{L}_G$ can be generated via \eqref{param} with $h^*(w)=g(w)\frac{d}{dw}\log [G(w)]$. From Example \ref{mleexample} and under the assumption that $\mbox{Pr}(y^*=1\mid x) = G(f(x))$, $\frac{d}{dv}\log [G(v)]$ is the derivative of the log-likelihood. \end{proof} The utility of a paramaterization depends on the form of $G(x)$ and/or the application. For some distribution functions $G(x)$, the parameterization defined in Example \ref{DFexample} may provide advantages in interpretation versus the odds parameterization. However, the logistic distribution is the canonical distribution in binary data analysis, and the odds parameterization is easiest to work with in that case. It is not difficult to show that the parameterizations in Example \ref{DFexample} and Theorem \ref{LLexample} are equivalent when $G(x)$ is the logistic distribution function. \subsection{Regression parameter estimating equation perspective}\label{regressPerspective} The population property of loss functions $\phi(v)\in\mathcal{L}_G$, namely that $f^*(x)$ satisfies $\mbox{Pr}(y^*=1\mid x)=G(f^*(x))$, can also be understood from a regression perspective. In fact, the class could have been derived by seeking to characterize the class of margin-based unbiased estimating equations for regression parameter estimation. Suppose we postulate a regression model (or soft classifier) $\mbox{Pr}(y=1\mid x) = G(f(x;\beta))$ where $G(x)$ is a symmetric distribution function and $f(x;\beta)$ is in the class given by \eqref{classFunctions}. Then the loss functions in $\mathcal{L}_G$ result in unbiased estimating equations for $\beta$. Additional standard regularity conditions would ensure consistency of the regression parameter estimates. \begin{corollary}\label{regressresult} Suppose the true population model is $\mbox{Pr}(y^*=1\mid x) = G(f(x;\beta))$ where $G(v)=1-G(-v)$, with features (independent variables) $x\in\Re^p$, and $\beta\in\Re^p$ are regression parameters. If $\phi(v)\in \mathcal{L}_G$ then \[ E \Big [ \frac{\partial}{\partial\beta}\phi(y^*f(x;\beta))\mid x\Big ]=0. \] The converse is true if we add the restriction that $\phi(v)$ is non-increasing. \end{corollary} \begin{proof} Suppose that $\phi(v)\in\mathcal{L}_G$ so that $\phi^\prime(v)=-q(-v)^{1/2}g(v)$. 
The conditional expectation of the estimating score $\frac{\partial}{\partial\beta}\phi(y^*x^T\beta)$ under the true population model is \begin{equation*} \begin{split} E \Big [ \frac{\partial}{\partial\beta}\phi(y^*f(x;\beta))\mid x\Big ] & = \Bigg ( \phi^\prime(f(x;\beta))G(f(x;\beta)) \\ & \qquad-\phi^\prime(-f(x;\beta))(1-G(f(x;\beta))) \Bigg )\frac{\partial f(x;\beta)}{\partial\beta} \\ & = -\Bigg ( q(-f(x;\beta))^{1/2}g(f(x;\beta))G(f(x;\beta)) \\ &\qquad\qquad -q(f(x;\beta))^{1/2}g(-f(x;\beta))(1-G(f(x;\beta))) \Bigg )\frac{\partial f(x;\beta)}{\partial\beta} \\ & = - \Bigg ( \left (\frac{1-G(f(x;\beta))}{G(f(x;\beta))}\right )^{1/2}G(f(x;\beta)) \\ &\qquad - \Big (\frac{G(f(x;\beta))}{1-G(f(x;\beta))}\Big )^{1/2}(1-G(f(x;\beta))) \Bigg )\frac{\partial f(x;\beta)}{\partial\beta} g(f(x;\beta)) \\ &=0. \end{split} \end{equation*} The penultimate equality follows from the definition of $q(\cdot)$ and because $g(\cdot)$ is an even function. Conversely, if $E \big [ \frac{\partial}{\partial\beta}\phi(y^*f(x;\beta))\mid x\big ]=0$ then from the first identity above, \[ \frac{\phi^\prime(-f(x;\beta))}{\phi^\prime(f(x;\beta))} = \frac{G(f(x;\beta))}{1-G(f(x;\beta))}. \] Then $\phi^\prime\in \mathcal{\tilde C}_q$ and by assumption $\phi(v)$ is non-increasing. From Lemma \ref{classlemma}, it follows that $\phi^\prime (v) = q(-v)^{1/2} g(v)$ for an even function $g(v)$, and therefore $\phi(v)\in\mathcal{L}_G$. \end{proof} In a regression parameter estimation context, the class $\mathcal{L}_G$ results in estimating equations that are the derivative of the log-likelihood weighted by an even, non-negative function. This result follows readily from Theorem \ref{LLexample} and Corollary \ref{regressresult}. It is stated as a Theorem because of the importance of connecting margin-based loss functions to likelihoods. \begin{theorem}\label{regressresult2} Suppose the true population model is $\mbox{Pr}(y^*=1\mid x) = G(f(x;\beta))$ where $G(v)=1-G(-v)$ and $G(v)$ is differentiable. Then $\phi(v)\in \mathcal{L}_G$ if and only if \begin{equation}\label{unbiasedeq} \frac{\partial}{\partial\beta}\phi(y^*f(x;\beta))=-g(f(x;\beta)) \frac{\partial}{\partial\beta}\log [G(y^*f(x;\beta))] \end{equation} where $g(w)\ge 0$ is an even function. \end{theorem} From \eqref{unbiasedeq}, the proof of Corollary \ref{regressresult} is perhaps more evident, at least to statisticians, as it is well-known that the expectation of the derivative of the log-likelihood is zero under the assumed model. Note that a larger class of unbiased estimating equations is obtained by not restricting $g(v)$ in \eqref{unbiasedeq} to be even (or positive). However, when $g(v)$ is not an even function, the resulting estimating function will not be a function of the margin, and thus falls outside of the class of loss functions considered herein. Also, for $G(v)$ symmetric, it is sensible for $g(v)$ to be an even function. The variance and sensitivity of estimating scores are important for understanding asymptotic properties of the resulting parameter estimates, see \nocite{Godambe1991} Godambe (1991). The following lemma is stated without proof, as the proof is similar to the proof in the corollary above. The lemma says that the conditional variance of the estimating score depends only on the square of the weight function, but not on $G(v)$. The second part of the lemma provides an expression for the sensitivity of a loss function in $\mathcal{L}_G$. 
The lemma assumes loss functions parameterized in terms of the class $\mathcal{\tilde C}_q$ where, recall, $q(v)^{-1/2} = [(1-G(v))/G(v)]^{1/2}$. \begin{lemma} Suppose the true population model is $\mbox{Pr}(y^*=1\mid x) = G(f(x;\beta))$ where $G(v)=1-G(-v)$. The conditional variance of the estimating score for a loss function $\phi(v)\in\mathcal{L}_G$ is given by \[ E \left [ \frac{\partial}{\partial\beta}\phi(y^*f(x;\beta)) \Big (\frac{\partial}{\partial\beta}\phi(y^*f(x;\beta))\Big )^T \mid x\right ] = g^2(f(x;\beta)) \left (\frac{\partial f(x;\beta)}{\partial\beta} \right ) \left (\frac{\partial f(x;\beta)}{\partial\beta} \right )^T. \] The sensitivity of the estimating score for a loss function $\phi(v)\in\mathcal{L}_G$ is given by \[ E \left [ \frac{\partial^2}{\partial\beta\partial\beta^T}\phi(y^*f(x;\beta)) \mid x\right ] = \left [ \frac{G^\prime(f(x;\beta))g(f(x;\beta)) }{\sqrt{G(f(x;\beta))(1-G(f(x;\beta)))}} \right ] \left (\frac{\partial f(x;\beta)}{\partial\beta} \right ) \left (\frac{\partial f(x;\beta)}{\partial\beta} \right )^T. \] \end{lemma} Note that the variance and sensitivity have equal magnitudes when the derivative of the loss is equal to the likelihood score. That is, if $g(w)$ is set to the likelihood weight function derived in Example \ref{mleexample}, the magnitudes of the variance and sensitivity given in the lemma are equivalent. This equivalence is well known in statistical maximum likelihood estimation theory, and implies the loss function possesses Godambe efficiency (\nocite{Small2003} Small and Wang, 2003). \bigskip \section{Equivalence of the Margin and Standardized Logistic Regression Residuals}\label{Sec:marginresid} As noted above, loss functions conformable to a CDF provide a bridge between hard and soft classifiers. Here we show that all margin-based classifiers are linked to the logistic CDF, implying that all margin-based classifiers can be considered as approximately soft classifiers. \subsection{The margin and logistic regression residuals} We establish an exact algebraic relation between the margin and standardized logistic regression residuals (SLRRs). The relation provides new insight into binary classification using margin-based loss. Let $y=(y^*+1)/2\in\{0,1\}$. In logistic regression settings, the standardized residual, denoted $S(f(x))$, is defined as the ratio of the residual to the standard deviation of $y$: \[ S(f(x))=\frac{y-F(f(x))}{\sqrt{F(f(x))\{1-F(f(x))\}}}. \] Hastie et al. (2009) note that the margin plays a role similar to the residuals in regression. Here we show the margin is exactly equal to a function of standardized residuals from the logistic regression model. \begin{theorem} \label{key} The squared standardized logistic regression residual and the margin have the following relation: \begin{equation}\label{keyidentity} -\log[S^2(f(x))] = y^*f(x). \end{equation} \end{theorem} \begin{proof} Using straightforward algebra, it is not difficult to show that \begin{equation}\label{stdresid} S(f(x) )=y\exp \left [-\frac{1}{2}f(x)\right ]-(1-y)\exp \left [\frac{1}{2}f(x) \right ]=y^*\exp\left [-\frac{1}{2}y^*f(x)\right ]. \end{equation} Squaring both sides and log transforming proves the theorem. \end{proof} Equation \eqref{keyidentity} implies that $S^2$ is a monotonically decreasing function of the margin, and that $S^2<1$ if and only if $y^*f(x)>0$. Large positive margins result in small residuals whereas large negative margins result in large residuals. Note that \eqref{stdresid} implies that $S(1,f(x))=-S(0,-f(x))= -1/S(0,f(x))$ where $S(y,f(x))$ is the standardized residual; a short numerical check of \eqref{stdresid} and \eqref{keyidentity} is sketched below.
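The identities \eqref{stdresid} and \eqref{keyidentity} are easy to confirm numerically. The short Python sketch below is purely illustrative (the grid of $f(x)$ values is an arbitrary assumption): it computes $S$ from its definition with the logistic CDF, compares it with $y^*\exp(-y^*f(x)/2)$, and then checks that $-\log S^2$ equals the margin.

\begin{verbatim}
import numpy as np

F = lambda v: 1.0 / (1.0 + np.exp(-v))       # logistic CDF

f = np.linspace(-4, 4, 401)                  # assumed grid of values f(x)
for y in (0, 1):                             # class labels; y* = 2y - 1
    ystar = 2 * y - 1
    S = (y - F(f)) / np.sqrt(F(f) * (1 - F(f)))              # standardized residual
    assert np.allclose(S, ystar * np.exp(-0.5 * ystar * f))  # Eq. (stdresid)
    assert np.allclose(-np.log(S**2), ystar * f)             # Eq. (keyidentity)
print("-log(S^2) equals the margin y*f(x) for both classes")
\end{verbatim}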
If the sign of $f(x)$ and the class of $y$ are both switched, the sign of the residual changes, but the magnitude of the residual is exactly the same. Margin-based loss is often championed as a non-parametric approach to binary classification. However, Theorem \ref{LLexample} showed that loss functions in $\mathcal{L}_G$ are weighted versions of the log- likelihood score, implying there is an underlying parametric assumption for these loss functions. The relation between margins and SLRRs given in Theorem \ref{key} suggests that there is a parametric connection for all margin-based loss, even those outside the class $\mathcal{L}_G$. The theorem also implies that margin based loss can be considered as a distance-based loss. Note that the proof of Theorem \ref{key} does not assume or imply that $p(x)=\mbox{Pr}(y=1\mid f(x)) = F(f(x))$, i.e. the result does not require that the population conditional distribution of $y$ given $x$ is given by the logistic model. The theorem also does not imply that every margin-based loss function $\phi(v)$ is conformable to $F(\cdot)$. The result does have implications for how margin based classifiers can be interpreted, as discussed below. The result supplies an easily interpretable metric for the confidence of a classification. The relation between the margin and standardized logistic regression residuals implies $F(\hat f(x))$ is a natural measure of the confidence of the classifier $\hat C(x)=\mbox{sign}[\hat f(x)]$ for a given value of $x$. Note that if the population minimizer for a given loss function is such that $p(x)=G(f^*(x))\ne F(f^*(x))$, the confidence measure $F(\hat f(x))$ should not be interpreted as the conditional probability of $y^*=1$ for a given value of $x$. The result also provides a connection between soft and hard margin classifiers. Soft classifiers estimate the conditional probability $\mbox{Pr}(y=1\mid f(x))$ and assign $C(x)=\mbox{sign}\{2\mbox{Pr}(y=1\mid f(x))-1\}$, whereas hard classifiers provide a decision boundary but do not necessarily estimate the conditional probability (\nocite{Yufeng2011} Yufeng et al. 2011). For any margin based estimation scheme that does not restrict values of the margin, we can approximate a soft classifier via $\mbox{Pr}(y=1\mid f(x))\approx F(\hat f(x))$. Of course it is possible to approximate a soft classifier using $\hat f(x)$ and any distribution function symmetric around zero. The relation between the margin and logistic regression residuals supplies motivation and justification for using the logistic distribution function for the approximation. There is a tension between loss functions in $\mathcal{L}_G$ where $G(x)\ne F(x)$ and Theorem \ref{key}. These loss functions yield $f^*(x)$ satisfying $p(x)=G(f^*(x))$. On the other hand, Theorem \ref{key} implies margin-based loss is a distance-based loss for the logistic model, and therefore the approximation $p(x)\approx F(\hat f(x))$ seems reasonable. \subsection{Partitioning logistic regression residuals} The results in this subsection are primarily of interest in logistic regression settings, but they will also be used to provide a new perspective on the AdaBoost algorithm. The following theorem shows that standardized logistic regression residuals can be partitioned on a multiplicative scale. Equivalently, the partition is additive on the logarithmic scale. The proof of the following theorem is an immediate consequence of \eqref{stdresid}. \begin{theorem} Suppose that $f(x;\beta)$ is given as in \eqref{classFunctions}, i.e. 
$f(x;\beta)= \sum_{m=1}^M \theta_m b(x; \gamma_m)$ where $\beta=\{\theta_m,\gamma_m\}_{m=1}^M$. Then \[ S(f(x;\beta)) = (y^*)^{M+1} \prod_{m=1}^M S(\theta_m b(x;\gamma_m)). \] Then also \[ S^2(f(x;\beta)) = \prod_{m=1}^M S^2(\theta_m b(x;\gamma_m)). \] \end{theorem} The theorem does not say that we can fit a logistic regression model by fitting $M$ individual models, each with one independent variable. It says the standardized residuals from a model can be partitioned into individual components, where the estimated coefficients in the components are from the fit of the full model. The full model could be fit by maximizing the likelihood or some other approach. The contribution of the $k$th element of $f(x;\beta)$ to the $i$th residual can be interpreted through the squared standardized residuals on the log-scale. Note that \[ \frac{\log S^2(\theta_k b(x_i;\gamma_k))}{\log S^2(f(x_i;\beta))}=\frac{\log S^2(\theta_k b(x_i;\gamma_k))}{\sum_{m=1}^M \log S^2(\theta_m b(x_i;\gamma_m))} = \frac{\theta_k b(x_i;\gamma_k)}{\sum_{m=1}^M\theta_m b(x_i;\gamma_m)}. \] The contribution of a component can also be interpreted by comparing a component to the geometric mean of all components on a log scale: \begin{equation*} \begin{split} \log \Bigg \{\frac{ S^2(\theta_k b(x_i;\gamma_k))}{ ( S^2(f(x_i;\beta)))^{1/M}}\Bigg \} & =\log \Bigg \{\frac{ S^2(\theta_k b(x_i;\gamma_k))}{\prod_{m=1}^M ( S^2(\theta_m b(x_i;\gamma_m)))^{1/M}} \Bigg\} \\ & = y^*_i\Big (\theta_k b(x_i;\gamma_k)-\frac{1}{M}\sum_{m=1}^M\theta_m b(x_i;\gamma_m) \Big ). \end{split} \end{equation*} Potential uses of the partition are histograms of the individual components across observations to identify outliers, or for each observation, histograms of the $M$ components to understand the influence of the predictors on the residual. \section{Loss functions conformable for the logistic distribution}\label{LogistSection} In this section we consider loss functions interpreted on the logit scale at the population level, i.e. we consider $G(x)=F(x)=(1+e^{-x})^{-1}$. In the theory of generalized linear regression models, it is well-known that the logistic distribution is the (inverse) canonical link for binary data. The logistic distribution could also be considered the canonical distribution for construction of loss functions in binary margin-based classification for a couple of reasons. First, the odds $q(w)$ arise naturally in the solution to minimizing the conditional risk, and the log-odds is the inverse of the logistic distribution: $F^{-1}(w)=\log[F(w)/(1-F(w))]=\log q(w)$. Second, as shown in Section \ref{Sec:marginresid} above, the margin is a function of the standardized logistic regression residual. Properties of the (derivative of) conditional risk and logistic distribution translate into analytical tractability and ease of interpretation. For these reasons, the logistic distribution deserves detailed exploration. From Example \ref{logisticexample}, \[ q(w)^{-1/2}=\Bigg (\frac{1-F(w)}{F(w)}\Bigg )^{1/2} = e^{-w/2} \] and the class of loss functions $\phi(v)\in\mathcal{L}_F$ can be written as \[ \phi(v) = k-\int_{0}^{v} e^{-w/2}g(w) dw, \] showing that loss functions in $\mathcal{L}_F$ can be considered as smoothed, weighted versions of the exponential loss function $e^{-w/2}$. This observation aids in interpretation and comparison of loss functions in $\mathcal{L}_F$. 
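To make this observation concrete, the following Python sketch (a numerical illustration only; the use of \texttt{scipy.integrate.quad} and the evaluation grids are assumptions) recovers two members of $\mathcal{L}_F$ from the construction $\phi(v)=k-\int_0^v e^{-w/2}g(w)\,dw$: the constant weight $g(w)=1/2$ with $k=1$ reproduces exponential loss, and the weight $g(w)=(F(w)(1-F(w)))^{1/2}$ yields $h(w)=1-F(w)$, which is minus the derivative of the logistic loss $\log(1+e^{-v})$.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

F = lambda w: 1.0 / (1.0 + np.exp(-w))           # logistic CDF
qinv_sqrt = lambda w: np.exp(-0.5 * w)           # q(w)^{-1/2} for the logistic case

def loss(v, g, k):
    # phi(v) = k - int_0^v exp(-w/2) g(w) dw, cf. Eq. (lossfunction)
    val, _ = quad(lambda w: qinv_sqrt(w) * g(w), 0.0, v)
    return k - val

v = np.linspace(-3, 3, 25)

# (i) g(w) = 1/2 with k = 1 reproduces exponential loss exp(-v/2).
phi_exp = np.array([loss(t, lambda w: 0.5, 1.0) for t in v])
assert np.allclose(phi_exp, np.exp(-0.5 * v))

# (ii) g(w) = (F(w)(1-F(w)))^{1/2} gives h(w) = 1 - F(w), i.e. minus the
#      derivative of the logistic loss log(1 + exp(-v)).
w = np.linspace(-6, 6, 1201)
h_logistic = qinv_sqrt(w) * np.sqrt(F(w) * (1 - F(w)))
assert np.allclose(h_logistic, 1 - F(w))
print("exponential and logistic losses recovered from the L_F construction")
\end{verbatim}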
Several of the most commonly used loss functions for binary classification and regression are in $\mathcal{L}_F$, including exponential loss and logisitc loss (also known as log-likelihood loss, cross entropy loss, and deviance loss). In addition to their use in regression contexts, these two loss functions are the basis of the popular boosting methods AdaBoost and LogitBoost. Table \ref{logistTable} shows several common loss functions in $\mathcal{L}_F$ and the weight functions $g(w)$ used to generate them. Two new loss functions, Gaussian and Laplace loss, are also included. These latter two loss functions are explored further below. \subsection{Convexity for logistic conformable loss functions} For the loss functions in Table \ref{logistTable}, exponential and logistic loss are convex, whereas Savage, Gaussian and Laplace loss are not. These loss functions differ only with respect to the choice of $g(w)$, and therefore properties of $g(w)$ determine the convexity of the loss function. From Lemma \ref{convex}, it follows that convexity of $\phi(v)\in\mathcal{L}_F$ occurs if and only if $\frac{d}{dw}\log g(w)\le 1/2$. In other words, convexity is determined by the maximum rate at which $\log [g(w)]$ changes over the real line. Note this convexity result assumes the parameterization in terms of $q(w)$. If the inequality is satisfied, we say $\log[g(w)]$ is Lipschitz continuous with bound $1/2$. If $\log [g(w)]$ is Lipschitz continuous with bound $M/2$ where $1/2<M/2<\infty$, then $\tilde g(w)= g(w)^{1/M}$ is Lipschitz continuous with bound $1/2$ and $\tilde g(w)$ will yield a convex loss function. For example, from Table \ref{logistTable}, Savage loss (see Masnadi-Shirazi and Vasconcelos (2008)\nocite{SV2008}) is obtained when $g(w)= (F(w)(1-F(w)))^{3/2}$. It follows that $\frac{d}{dw}\log g(w) = -3/2 +3(1-F(w))<3/2$ where the upper bound ($M/2=3/2$) is sharp. Therefore Savage loss is non-convex, but we can modify the weight function to obtain a convex loss. With $M=3$, let $\tilde g(w)= g(w)^{1/M}=\Big ( (F(w)(1-F(w)))^{3/2} \Big )^{1/3}=(F(w)(1-F(w)))^{1/2}$. Note that $\tilde g(w)$ is Lipschitz continuous with bound $1/2$. The resulting loss function is the logistic likelihood loss (well-known to be convex). See Figure \ref{logistFigure} for a graphical comparison of weight functions. \bigskip \begin{tabular}{lrrr} \toprule \textbf{Name} & $\phi(v)$ & $g(w)$ & $k$ \\ \midrule Exponential & $e^{-\frac{1}{2}v}$ & 1/2 & $1$ \\ Logistic & $\log(1+e^{-v})$ & $\big ( F(w)(1-F(w))\big )^{1/2}$ & $2$ \\ Savage & $(1+e^{v})^{-2}$ & $\big ( F(w)(1-F(w))\big )^{3/2}$ & $2$ \\ Gaussian & $1-\Phi(\frac{v+m/2}{\sqrt{m}})$ & $\frac{1}{\sqrt{2\pi m}}e^{-\frac{1}{2m}w^2-m/8}$ & $1-\Phi(\frac{\sqrt{m}}{2})$ \\ Laplace & $ \begin{cases} e^{-v(1+m)/2}, & \text{if $v>0$} \\ 1+\frac{m+1}{m-1}(1-e^{v(m-1)/2}), & \text{if $v\le 0$} \end{cases}$ & $\frac{m+1}{2}e^{-m|w|}$ & $1$ \\ \bottomrule \end{tabular} {\captionof{table}{Loss functions for the logistic model and the indicated generating weight function $g(w)$ under the odds ($q(w)$) parameterization. The loss functions all result in $\mbox{Pr}(y^*=1\mid x) = F(f^*(x))$.}\label{logistTable} } \bigskip \begin{figure}[h] \includegraphics[scale=0.8]{weightfuncs} \caption{Weight functions $g(v)$ for loss functions in Table \ref{logistTable}. The functions are scaled to have maximum equal to 1. } \label{logistFigure} \end{figure} The following sub-sections explore Gaussian and Laplace loss, as these loss functions appear to be new. 
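Before turning to those, the convexity criterion of the previous subsection, $\frac{d}{dw}\log g(w)\le 1/2$, can be checked directly for the weight functions listed in Table \ref{logistTable}. The sketch below is illustrative only: the evaluation grid, the choice $m=2$ for the Gaussian and Laplace weights, and the finite-difference derivative are assumptions.

\begin{verbatim}
import numpy as np

F = lambda w: 1.0 / (1.0 + np.exp(-w))    # logistic CDF
w = np.linspace(-10, 10, 4001)
m = 2.0                                   # assumed Gaussian/Laplace parameter

weights = {                               # g(w) as listed in Table logistTable
    "exponential": np.full_like(w, 0.5),
    "logistic":    (F(w) * (1 - F(w))) ** 0.5,
    "Savage":      (F(w) * (1 - F(w))) ** 1.5,
    "Gaussian":    np.exp(-w**2 / (2 * m) - m / 8) / np.sqrt(2 * np.pi * m),
    "Laplace":     0.5 * (m + 1) * np.exp(-m * np.abs(w)),
}

for name, g in weights.items():
    slope = np.gradient(np.log(g), w)     # d/dw log g(w), estimated numerically
    convex = slope.max() <= 0.5 + 1e-6    # convex iff sup of the slope <= 1/2
    print(f"{name:12s} max d/dw log g = {slope.max():6.3f} "
          f"-> {'convex' if convex else 'non-convex'}")
\end{verbatim}

On this grid the exponential and logistic weights satisfy the bound while the Savage, Gaussian, and Laplace weights do not, matching the convexity statements above.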
\subsection{Gaussian Loss} The Gaussian loss function is $\phi_{\mbox{GA}}(v)=1-\Phi(\frac{v+m/2}{\sqrt{m}})$ where $m>0$ and $\Phi(\cdot)$ represents the standard normal cumulative distribution function. Gaussian loss is obtained by selecting as a weight function a scaled Gaussian (normal) density; $g(w)=\frac{1}{\sqrt{2\pi m}}e^{-\frac{1}{2m}w^2-m/8}$, see Table \ref{logistTable}. Note that the Gaussian loss function is conformable for the logistic CDF. In the regression context, the Gaussian loss function can be used to obtain consistent estimates of logistic regression coefficients (see Corollary \ref{regressresult}). The loss function is non-convex, bounded between 0 and 1, and therefore is a smooth approximation to 0-1 loss. The shift of $m/2$ occurs automatically from the definition of $g(w)$. A loss function with a shift of zero would be outside of $\mathcal{L}_F$, and would result in a degenerate loss function -- we would not be able to consistently estimate a prediction rule, or consistently estimate regression parameters in a regression context. An interesting aside: Setting $m=1$, $\phi_{\mbox{GA}}(v)=1-\Phi(v+1/2)\in\mathcal{L}_F$ while the Gaussian likelihood loss $\phi(v)= -\log\Phi(v)\in\mathcal{L}_\Phi$. The two loss functions appear very similar, but conform to different distribution functions. From a regression parameter estimation perspective, the former loss provides consistent estimators for logistic regression, and the latter for probit regression. The following provides an example of a loss function closely related to Gaussian loss that has been studied in the context of addressing covariate measurement error in logistic regression. \begin{example}\label{Buzasexample} \nocite{Buzas2009} Buzas (2009) considered a modified likelihood score for the logistic regression model for purposes of deriving an estimating score that remains unbiased in the presence of additive, normal covariate measurement error. In terms of the class $ \mathcal{\tilde C}_q$, the modified score was defined with weight function \[ g(f(x;\beta)) = \frac{(1/m)\Phi^\prime( f(x;\beta)/m)}{\sqrt{ F(f(x;\beta))[1-F(f(x;\beta))]}} \] where $\Phi^\prime (\cdot)$ is the derivative of the standard normal CDF (the standard normal density) and $m$ is a scaling constant. The weight function differs from the Gaussian weight given in Table \ref{logistTable} by the inclusion of $\sqrt{ F(f(x;\beta))[1-F(f(x;\beta))]}$ in the denominator. By Lemma \ref{classlemma}, the derivative of the resulting loss is in $\tilde C_q$ and therefore the loss is in $\mathcal{L}_F$. \end{example} \subsection{Laplace Loss} The Laplace loss function given in Table \ref{logistTable} is obtained using the kernel of the Laplace density function as the weight function $g(w)$. The resulting loss function, denoted $\phi_L(v)$, is not convex. The Laplace loss function is similar to the Tukey bi-weight function and Huber-type loss functions in that the loss is approximately quadratic for small residuals (when $m\approx 1$), and then tapers off as the magnitude of the residuals increases. This is seen as follows. In terms of standardized logistic residuals, the Laplace loss function ($m>1$) is \[ \tilde\phi_L(S)=\phi_L(-\log S^2)= \begin{cases} |S|^{m+1}, & \text{if $|S|<1$} \\ 1+\frac{m+1}{m-1}(1-\frac{1}{|S|^{m-1}}), & \text{if $|S|\ge 1$}. \end{cases} \] Similar to Tukey's bi-weight loss, the functional form of the penalty depends on the size of the residual, and the penalty is bounded for large $|S|$, see Figure \ref{LaplaceFigure}.
\begin{figure}[h] \includegraphics[scale=0.8]{laplace} \caption{Laplace loss as a function of the standardized logistic regression residual $S$ ($m=2$). } \label{LaplaceFigure} \end{figure} Through the choice of $m$, Laplace loss can be made arbitrarily close to 0-1 loss. It is easily seen that \[ \lim_{m\rightarrow\infty}\tilde\phi(S^2)=\begin{cases} 0, & \text{if $|S|<1$} \\ 2, & \text{if $|S|\ge 1$}. \end{cases} \] In other words, for large $m$, Laplace loss should closely mimic 0-1 loss. In a regression context, Laplace loss provides consistent estimators for the logistic regression model. The effectiveness of Laplace loss for mitigating the influence of outliers in regression and classification settings will be studied in future work. \bigskip \subsection{Exponential loss}\label{exploss} Here we explore new interpretations of exponential margin-based loss through the lens of standardized logistic regression residuals. In the literature, the standard form for exponential loss is $\phi_E(v)=e^{-v}$ and that is the form used below. This differs from the form of exponential loss given in Table \ref{logistTable} in that the exponent is not divided by 2. Technically $\phi_E(v)\notin \mathcal{L_F}$ because of the re-scaling in the exponent. Of course the two forms are equivalent in terms of classification (and easily made so for regression). It immediately follows from Theorem \ref{key} that $\phi_E(y^*f(x))=S^2$. Then the empirical risk for exponential loss is equivalent to the (scaled) sum of squared standarized residuals: \[ \frac{1}{n}\sum_{i=1}^n \phi_E(y^*_i f(x_i)) = \frac{1}{n}\sum_{i=1}^n S_i^2 \] where $S_i^2=S^2(f(x_i))$. In words, {\it estimation of $f(x)$ through minimizing margin-based exponential loss is equivalent to minimizing the sum of squared standardized logistic regression residuals}. \subsubsection{Sensitivity to outliers} A criticism of exponential loss is that it is sensitive to outliers. The outlier sensitivity can be explored by examining the behavior of the classifier using the $p$-norm of the standardized logistic residuals. Let $\mathbf{S}=(S_1,S_2,\dots,S_n)$ and suppose $f(x;\beta)$ is of the form given in \eqref{classFunctions}. The $p$-norm of $\mathbf{S}$ is \begin{equation}\label{pnorm} \left\lVert \mathbf{S}\right\rVert_p = \left\{\sum_{i=1}^n |S_i|^p \right\}^{\frac{1}{p}} = \left\{\sum_{i=1}^n \exp(-Y^*_if(x_i;\beta) p/2)\right \}^{\frac{1}{p}}. \end{equation} Let $ \hat\beta_p =\arg \min_{\{\beta\}} \left\lVert \mathbf{S}\right\rVert_p^p $ and $\hat f_p(x)=f(x;\hat\beta_p)$. Estimation with exponential loss is equivalent to choosing $p=2$. The following lemma and corollary indicate that the value of $p$ essentially has no impact on estimation or classification. \begin{lemma}\label{norm} The estimates $\hat f_p(x)$ and $\hat f_l(x)$ minimizing the $p$ and $l$ norms of $\mathbf{S}$ have the following relation: \[ \hat f_p(x)=\frac{l}{p}\hat f_{l}(x). \] \end{lemma} \begin{proof} The lemma follows by noting that, from inspection of \eqref{pnorm}, $\hat f_p(x) = (1/p)\hat f_1(x)$ and $\hat f_l(x) = (1/l)\hat f_1(x)$. Combining the two identities gives the result. \end{proof} The lemma implies that the value of the quantity $\{\min_{\beta} \left\lVert \mathbf{S}\right\rVert_p^p \}$ is independent of the value for $p$. The following Corollary establishes that the classifier resulting from minimizing the $p$-norm does not depend on $p$. The Corollary follows directly from Lemma \ref{norm}. \begin{corollary} Let $\hat C_p(x)=\mbox{sign}\{\hat f_p(x)\}$. 
Then $\hat C_p(x)=\hat C_l(x)$ for all $0<p,l<\infty$. \end{corollary} Additional insight from the lemma and corollary can be had by examining the limit as $p$ approaches infinity. For large $p$, \[ \left\lVert \mathbf{S}\right\rVert_p \approx \left\lVert \mathbf{S}\right\rVert_\infty \equiv\max_i |S_i|. \] This approximation provides further insight into the common observation that exponential loss is sensitive to outliers. As shown above, estimation and classification are not dependent on the value of $p$. Then we can consider $p$ large and conclude that exponential loss is, to a close approximation, selecting $f(x;\beta)$ to minimize the maximum of the absolute value of SLRRs, or equivalently to minimize the maximum of the magnitude of margins that are negative. \subsubsection{Exponential risk invariance} All loss functions $\phi(v)\in\mathcal{L}_F$ have the same optimal value for $f(x)$ at the population level, which will be denoted $f_F^*(x)$. Note that $f^*_F(x)$ satisfies $F(f_F^*(x))=p(x)$. Denote the conditional risk for exponential loss as $R_{CE}(f(x))=E[\phi_E(y^*f(x))\mid x]=E[e^{-y^*f(x)}\mid x] $. The following Theorem states that the conditional and unconditional risk for exponential loss evaluated at $f_F^*(x)$ is invariant to the joint distribution for $(y^*,x)$. This surprising result is perhaps less so when we think about margin-based exponential loss as the square of the standardized logistic regression residuals, and with the understanding that standardized residuals have mean zero and variance one. \begin{theorem} The exponential conditional risk evaluated at $f_F^*(x)$ satisfies \[ R_{CE}(f_F^*(x)) = 1 \] over any joint distribution on $(y^*,x)$. The conditional (and unconditional) risk is therefore independent of the conditional distribution $p(x)$ and the marginal distribution for $x$. \end{theorem} \begin{proof} Recall that $F(f_F^*(x))=p(x)$. Then the conditional risk is given by \begin{equation*} \begin{split} R_{CE}(f^*_F(x)) & = E[\phi_E(y^*f_F^*(x))\mid x] = E[e^{-y^*f_F^*(x)}\mid x] \\ & = E[S^2(f^*_F(x))\mid x]=E\Bigg [\frac{(y-p(x))^2}{p(x)(1-p(x))} \mid x \Bigg ]=1 \end{split} \end{equation*} where the last equality follows by noting the expectation is with respect to the underlying conditional distribution $p(x)$, and that the conditional variance of $y$ is $p(x)(1-p(x))$. The unconditional risk is $R_E(f^*_F(x))=E[R_{CE}(f^*_F(x))]= E[E[e^{-y^*f^*_F(x)}\mid X]]=1$, regardless of the distribution for $x$. \end{proof} The theorem suggests a way of assessing model fit when using any loss function $\phi(v)\in\mathcal{L}_F$. Let $\hat f_\phi(x)$ minimize the empirical loss for $\phi(v)\in\mathcal{L}_F$. Define the exponential empirical risk as \[ R_{\mbox{Emp}}(\hat f_\phi(x)) = \frac{1}{n}\sum_{i=1}^n \exp\{-y^*_i\hat f_\phi(x_i)\}. \] The exponential empirical risk should be approximately equal to one when we have not over or under fit the data. Empirical risks greater than 1 suggest the model is not fit well, and below 1 could suggest over fitting. Additionally, the performance of different loss functions in $\mathcal{L}_F$ could be compared via $R_{\mbox{Emp}}$. AdaBoost uses exponential loss and forward stage-wise regression. The empirical risk could be evaluated at each iteration, with iterations stopping when $R_{\mbox{Emp}}\approx 1$. The utility of using $R_{\mbox{Emp}}$ to assess model fit will be explored in future work. 
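A small simulation illustrates the invariance result and the proposed diagnostic. In the Python sketch below (illustrative only; the two data-generating designs and the sample size are arbitrary assumptions), $f^*_F(x)=\log[p(x)/(1-p(x))]$ is plugged into the exponential empirical risk, which lands near one under both designs.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def exp_risk(fstar, p):
    # empirical exponential risk evaluated at the population minimizer f*_F
    y = rng.binomial(1, p)               # y in {0,1} drawn from p(x) = Pr(y=1|x)
    ystar = 2 * y - 1
    return np.mean(np.exp(-ystar * fstar))

# Two different joint distributions for (y*, x); in both cases F(f*_F(x)) = p(x).
x1 = rng.normal(size=n)
p1 = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x1)))
x2 = rng.uniform(-1, 1, size=n)
p2 = 0.2 + 0.6 * (x2 > 0)

for p in (p1, p2):
    fstar = np.log(p / (1 - p))          # the population minimizer on the logit scale
    print(round(exp_risk(fstar, p), 3))  # both values are close to 1
\end{verbatim}

Values that deviate from one in either direction are the signal that the $R_{\mbox{Emp}}$ diagnostic exploits.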
\subsection{Logistic loss} In this sub-section, a new perspective on logistic loss and logistic regression are given in terms of standardized logistic regression residuals. Recall that logistic loss is defined as $\phi_L(y^*f(x))=\log \left (1+e^{-y^*f(x)}\right )$. In Section \ref{exploss}, we showed that minimizing empirical margin-based exponential loss is equivalent to minimizing the arithmetic mean of squared SLRRs. Here we show that optimizing margin-based logistic loss (or maximum likelihood estimation if the logistic model is assumed) is equivalent to minimizing the geometric mean of (shifted) squared SLRRs. The value of $f(x)$ minimizing logistic loss is equivalent to the value of $f(x)$ minimizing the geometric mean of the values $\{1+S^2_i\}_{i=1}^n$. That is, \[ \arg\min_{f\in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \phi_L(y_i^*f(x_i)) = \arg\min_{f\in \mathcal{F}} \left (\prod_{i=1}^n (1+S^2_i)\right )^{1/n}. \] The identity follows readily from Theorem \ref{key} and the definition of the logistic likelihood loss $\phi_L(y^*f(x))$. In terms of regression, the result says that logistic regression maximum likelihood can be thought of as choosing the regression parameters to minimize the geometric mean of shifted, squared SLRRs. Therefore logistic regression via maximum likelihood is a type of least squares regression. \subsection{Another perspective on AdaBoost} The enormously successful boosting algorithm AdaBoost can be understood as forward stagewise additive regression using exponential loss, see Friedman et al. (2000)\nocite{Friedman2000}. Here we argue that AdaBoost can be understood as forward stagewise additive logistic regression, where the objective function is a weighted sum of squared logistic regression residuals, with residuals from prior fits as the weights. The development here follows that in Hastie et al. (2009)\nocite{ESL2009}, see pages 343-344. The basis functions in AdaBoost are weak classifiers $G_m(x)\in\{-1,1\}$. In Friedman et al. (2000) \nocite{Friedman2000}, it is shown that the $m$th iteration of AdaBoost consists of solving \begin{equation}\label{adaboost} (\theta_m,G_m) = \arg\min_{\theta,G}\sum_{i=1}^n \exp[-y_i^*(f_{m-1}(x_i)+\theta G(x_i))] \end{equation} where $f_{m-1}(x)=\sum_{k=1}^{m-1}\hat\theta_k G_k(x)$ is the sum of the weighted weak classifiers selected in the previous $m-1$ iterations of the algorithm. Using the relations between margins and SLRRs established above, it follow that \eqref{adaboost} can be written as \begin{equation*}\label{adaboost2} \begin{split} (\theta_m,G_m) & = \arg\min_{\theta,G}\sum_{i=1}^n S^2(f_{m-1}(x_i)+\theta G(x_i)) = \arg\min_{\theta,G}\sum_{i=1}^n \left [ S^2(f_{m-1}(x_i)) S^2(\theta G(x_i)) \right ]\\ & = \arg\min_{\theta,G}\sum_{i=1}^n \left [ \left ( \prod_{k=1}^{m-1}S^2(\hat\theta_kG_k(x_i)) \right ) S^2(\theta G(x_i)) \right ]. \end{split} \end{equation*} At each iteration, AdaBoost is minimizing the sum of squared SLRRs, weighted by the squared residuals from prior iterations. \section{Conclusion} This paper established relations between margin-based loss functions, likelihoods, and logistic regression models. A simple derivation of a characterization of conformable margin-based loss functions was presented. Using the characterization, the derivative of a large class of margin-based loss functions was shown to be equivalent to weighted likelihood scores. 
Additionally, the margin itself is a function of squared standardized logistic regression residuals, and therefore so too are all margin-based loss functions. These relations provide new perspectives on margin-based classification methods, and further establish connections between classification via soft or hard classifiers, and regression parameter estimation. The simple characterization of margin-based loss functions requires differentiability of the loss, excluding at least one important loss. Hinge loss, used in support vector machines, is Bayes consistent, but is not differentiable and therefore not in the class $\mathcal{L}_G$. Hinge loss can be expressed in terms of squared standardized logistic regression residuals. However, the effectiveness of hinge loss is probably most easily understood in terms of margins.
\section{Introduction} How to penetrate random scattering media for quick and reliable non-invasive imaging has become one of the hottest issues in the field of computational imaging, with applications ranging from biological imaging to astronomical imaging\cite{Speckle:Phenomena,RN35,RN85,RN89}. Different methods for non-invasive scattering imaging have been developed recently. Wavefront shaping can be used to image objects behind a scattering medium by controlling a spatial light modulator\cite{RN51,RN52,RN122}. However, this method usually demands reference objects, such as a guide star, in the plane of interest. Measuring the transmission matrix of the entire scattering system is also a non-invasive imaging method\cite{RN123,RN124}, but the determination of the transmission matrix takes a long time and requires high precision. Alternatively, it is relatively efficient to measure the point spread function (PSF) of the scattering system and perform a deconvolution operation\cite{RN65,RN98}. However, this method needs prior information and loses the important advantage of non-invasive imaging. A breakthrough in the field of non-invasive imaging is speckle correlation technology (SCT), which combines the optical memory effect with phase retrieval algorithms\cite{RN31}. The speckle correlation method based on the memory effect relies on the autocorrelation of the PSF of the scattering system. Within the memory effect range, the speckle autocorrelation approximates the autocorrelation of the object, as shown in Eq. (\ref{eq:autocorrelation})\cite{RN31}, \begin{equation} \begin{aligned} I\star I=(O\star O)\ast (S\star S) \approx O\star O, \label{eq:autocorrelation} \end{aligned} \end{equation} where $I$ is the speckle captured by the camera, $S$ is the PSF of the imaging system and $O$ is the object in the space domain. $\star$ denotes the autocorrelation operation, and $\ast$ denotes the convolution operation. The key to Eq. (\ref{eq:autocorrelation}) is $S\star S$, which is a sharply peaked function, so the autocorrelation of the PSF can be ignored in the convolution operation\cite{Li:19,RN91}. This strict requirement on the PSF limits SCT-based methods to narrowband illumination. Deep learning is an existing reliable method for imaging through scattering media under broad-spectrum illumination\cite{RN40,RN101}. However, it requires a large amount of sample data for end-to-end learning. Another method is to modify the phase retrieval algorithm by introducing constraints on the phase of the OTF (PhTF)\cite{RN91}. However, the iterative process of the phase retrieval algorithm is unstable and the calculation can take a long time\cite{Fienup:82}. Broadband and multi-spectral illumination are essential in color imaging, in which images from different spectral bands are superposed to recover the color or spectral information of the object\cite{RN65}. The currently widely used method is to measure the spectral PSF at several discrete spectral bands and apply deconvolution or correlation operations to each of them\cite{RN65,RN98}. In this way, the object can be rebuilt with a deconvolution operation as shown in Eq.
(\ref{eq:deconv.})\cite{MORRIS1997197}, \begin{equation} \begin{aligned} O=\sum_{\lambda}O_{\lambda} =\sum_{\lambda}\mathscr{F}^{-1}(\mathscr{F}(O_{\lambda})) =\sum_{\lambda} \mathscr{F}^{-1}(\frac{\mathscr{F}(I_{\lambda})}{e^{-i\Phi_{s_{\lambda}}}}),\label{eq:deconv.} \end{aligned} \end{equation} where $I$ is the speckle captured by the camera, $\Phi_S$ is the PhTF of the imaging system, $\lambda$ is the wavelength of the light source, $O$ is the object in the space domain, and $\mathscr{F}$ and $\mathscr{F}^{-1}$ denote the Fourier transform and inverse Fourier transform, respectively. Moreover, manipulating the spectral PSF can additionally yield orientation and position information. However, those approaches are limited by the need for prior information about the spectral PSF and cannot be applied in non-invasive circumstances. As an alternative, a non-invasive triple correlation-based color image reconstruction method was suggested\cite{RN106}. It retains orientation information, which is missing in traditional speckle correlation. However, the triple correlation algorithm requires a narrowband illumination source, as SCT does, and it also takes longer to complete than SCT with a phase retrieval process\cite{RN37}. In this paper, the multi-frame OTF retrieval engine (\emph{MORE})\cite{Chen:2020} is used to achieve non-invasive color imaging under broadband illumination by retrieving the OTF phase from speckles of different objects. It not only overcomes the limitations of speckle correlation imaging technology, but also achieves stable and fast iterative convergence thanks to the redundant information in the speckles. \section{Principle and methods} \label{sec:System setup and analytical results} \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/MORE_block_diagram.png} \caption{\label{fig:MORE_block}Block diagram of \emph{MORE} for dynamic imaging of an object hidden behind the scattering medium under broadband illumination. The retrieval process starts with an initial guess, $\Phi_{S}$. The inner loop is an iterative process similar to the Error-Reduction algorithm. The outer loop repeats the iterative process of the inner loop until the set number of times is reached. Then the updated PhTF, $\Phi_{S}$, is obtained. Here $|\mathscr{F}(I_j)|$ denotes the amplitude of the Fourier transform of the j-th frame and $\Phi_{S_{k,j+1}}$ indicates the updated PhTF at the k-th outer loop and j-th inner loop.} \end{figure} \emph{MORE} has successfully carried out high-speed dynamic imaging under low signal-to-noise ratio circumstances in our earlier work\cite{Chen:2020}. When the object, the PSF, and the speckle of the system satisfy $I=O\ast S$, the relationship can be expressed in the frequency domain as follows: \begin{equation} \begin{aligned} \mathscr{F}(I)=\mathscr{F}(O) \cdot \mathscr{F}(S), \label{eq:I=O*S} \end{aligned} \end{equation} where $I$ is the speckle pattern, $O$ is the object, and $S$ is the PSF of the system. Rewrite Eq. (\ref{eq:I=O*S}) as \begin{equation} \begin{aligned} \mathscr{F}(I) \cdot e^{-i\Phi_S}=|\mathscr{F}(S)|\cdot |\mathscr{F}(O)|e^{i\Phi_{O}},\label{eq:fourier(I=O*S)} \end{aligned} \end{equation} in which $|\mathscr{F}(S)|$ is the amplitude of the OTF. It has been shown that the amplitude of the OTF only acts as a spatial frequency filter on $|\mathscr{F}(O)|$\cite{Chen:2020,RN127}. This indicates that the object can be approximated with the speckle pattern $I$ and the PhTF, $\Phi_{S}$: \begin{equation} \begin{aligned} O\approx \mathscr{F}^{-1}(|\mathscr{F}(I)|\cdot e^{i\Phi_I-i\Phi_S}).
\label{eq:OTF-P} \end{aligned} \end{equation} As shown in Fig. \ref{fig:MORE_block}, \emph{MORE} generates the PhTF by iteratively computing the Fourier phase across the speckles. In this way, \emph{MORE} avoids the influence of the PSF autocorrelation introduced in traditional SCT\cite{RN31}, and achieves non-invasive imaging behind the scattering medium under broadband illumination. Similar to the Error-Reduction algorithm\cite{Bauschke:02}, the iterative process starts in the frequency domain with an initial random guess of the PhTF, $\Phi_S$. In the k-th outer loop and j-th inner iterative loop, the Fourier transform of the object, $\mathscr{F}(O_{k,j})$, is calculated from the frequency-domain magnitude of the speckle, $|\mathscr{F}(I_j)|$, and the difference between the Fourier phase of the speckle and the PhTF, $\Phi_{I_j}-\Phi_{S_{k,j}}$. After applying an inverse Fourier transform to the object in the frequency domain, we use a real and nonnegative constraint to update the guessed object, $O_{k,j}'$, in the object domain. To return to the frequency domain, we perform a Fourier transform on $O_{k,j}'$ and update the PhTF with the Fourier phase of the speckle, $\Phi_{S_{k,j+1}}=\Phi_{I_j}-\Phi_{O_{k,j}'}$. The inner iteration then moves on to the next frame, until all the frames are used. After the inner loop over all frames is completed, $k$ is incremented and another outer loop starts, again running from the first frame to the last one. When $k$ reaches the specified number of loops, we obtain the PhTF of the imaging system, $\Phi_{S}$. The objects can be recovered with the speckle frames and the PhTF, as Eq. (\ref{eq:OTF-P}) shows. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/simu.png} \caption{\label{fig:simu}Numerical simulation of \emph{MORE} and of color imaging by deconvolution with the PhTF generated by \emph{MORE}. Color speckles are generated by convolving the object channels with different spectral PSFs to simulate the memory effect. The spectral PhTFs are then obtained by applying \emph{MORE} to the speckles. The object is recovered by deconvolving with the PhTFs retrieved by \emph{MORE}.} \end{figure} Additionally, by performing OTF retrieval on each of the three broadband spectral channels separately, \emph{MORE} can be used for color imaging. Under broadband illumination, the speckle pattern is the sum of the speckles under narrowband illumination\cite{RN65,RN91}. The speckle captured by a three-channel R, G, B color camera can therefore be regarded as a sum of three broadband speckles, as described by Eq. (\ref{eq:spectral-speckle}). By applying \emph{MORE} to the three broadband color channels (R, G, B), we obtain the PhTF of each channel. The object can then be retrieved by deconvolving with these PhTFs. Equation (\ref{eq:spectral-recovery}) shows the deconvolution operation in the Fourier domain, where the Fourier transform of the object is equal to the Fourier transform of the speckle divided by the PhTF.
\begin{equation} \begin{aligned} I=\sum_{\lambda}^{R,G,B}I_{\lambda}=\sum_{\lambda}^{R,G,B}O_{\lambda} \ast S_{\lambda} \label{eq:spectral-speckle} \end{aligned} \end{equation} \begin{equation} \begin{aligned} O=\sum_{\lambda}^{R,G,B}O_{\lambda} =\sum_{\lambda}^{R,G,B}\mathscr{F}^{-1}(\mathscr{F}(O_{\lambda})) =\sum_{\lambda}^{R,G,B} \mathscr{F}^{-1}(\frac{\mathscr{F}(I_{\lambda})}{e^{-i\Phi_{s_{\lambda}}}})\label{eq:spectral-recovery} \end{aligned} \end{equation} where $\lambda$ represents the broadband wavelength ranges of the three color channels (R, G, B), $I$ is the three-channel color speckle captured by the camera, $I_{\lambda}$ is the speckle intensity in wavelength range $\lambda$, $S_{\lambda}$ is the PSF of the imaging system in wavelength range $\lambda$, $O_{\lambda}$ is the object in wavelength range $\lambda$, and $\mathscr{F}$ and $\mathscr{F}^{-1}$ denote the Fourier transform and inverse Fourier transform, respectively. The recovery pipeline is illustrated with a numerical simulation in Fig. \ref{fig:simu}. The object we demonstrate in the simulation is a multicolored star. Its three color channels are extracted and convolved with three separate, randomly generated PSFs. Then, to obtain three single-channel images, we performed a deconvolution operation on the R, G, and B speckle images using only the PhTF. By superposing those images, the object's color image was rebuilt. \section{Experimental results} \label{Experimental Results and Discussion} \subsection{Amplitude object imaging under LED illumination} \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/experiment_setup.png} \caption{\label{fig:experiment}Experimental setup for dynamic imaging under white illumination. (a) Schematic of the experimental setup for amplitude object imaging under LEDs of different bandwidths. (b) Raw speckles of the amplitude number object '2' captured by a monochromatic camera with a 500 ms exposure time, (b1) under the red LED, (b2) under the white LED. (c) The spectra of the white and red LEDs measured in our experiments.} \end{figure} With the setup shown in Fig. \ref{fig:experiment}, we demonstrate non-invasive imaging through a scattering medium under white-light illumination with multi-frame speckles. A white and a red LED were selected as the light sources. Their spectral full widths at half maximum are about 200 nm and 15 nm, respectively, as shown in Fig. \ref{fig:experiment}(c). Several number objects (2.4-mm wide and 3.6-mm high numbers, Fig. \ref{fig:experiment}(a)) were placed at a distance $u$ = 200 mm from the diffuser. A CMOS camera (5496 $\times$ 3672 px with a pixel size of 2.4 $\times$ 2.4 $\mu$m) was placed $v$ = 100 mm from the diffuser. The diffuser is a ground glass of 2-mm thickness and 220 grit. The magnification of the scattering imaging system was $M=v/u=100 \, mm / 200\, mm = 0.5$. As the light carrying the object information passed through the scattering medium, a specific pattern associated with the object was formed and captured by the camera behind the scattering medium. After collecting a series of speckles of the objects (the number objects '2', '3', '4', '5' and '6' were exchanged in turn in our experiment), each raw camera image, Fig. \ref{fig:experiment}(b), was spatially normalized by dividing it by a low-pass-filtered version of itself (a minimal code sketch of this preprocessing is given below).
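The following Python sketch illustrates this normalization together with the Gaussian smoothing described next. The function name, array sizes, and filter widths are illustrative assumptions rather than the exact processing used in the experiments.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_speckle(raw, lowpass_sigma=50.0, smooth_sigma=2.0):
    """Flatten the speckle envelope and suppress high-frequency noise."""
    raw = raw.astype(np.float64)
    envelope = gaussian_filter(raw, sigma=lowpass_sigma)    # low-pass version of the image
    normalized = raw / np.maximum(envelope, 1e-12)          # divide out the slow envelope
    return gaussian_filter(normalized, sigma=smooth_sigma)  # mild smoothing of the speckle grain

# Example with a synthetic frame standing in for a raw camera image:
frame = np.random.default_rng(1).random((512, 512))
speckle = preprocess_speckle(frame)
\end{verbatim}

Each collected frame would be processed in the same way before entering the \emph{MORE} iterations.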
The normalized speckle patterns were then smoothed by a Gaussian kernel filter (size: 20 pixels) with a standard deviation of 2 pixels to filter out high-frequency noise. Note that the support constraint is important to the phase retrieval process. The closer the support is to the direct imaging size, the more accurate the recovered image will be\cite{Bauschke:02}. Even when the support ($200\times 200$ pixels) is set to be twice the object's true size ($80\times120$ pixels), \emph{MORE} can still provide a reasonable result. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/MORE_PR.png} \caption{\label{fig:MORE_PR}Comparison between \emph{MORE} and SCT with a phase retrieval algorithm (HIO) under an exposure time of 500 ms. (a1) The ground-truth images of '2','3','4','5','6'. (a2) Illuminated by a red LED (wavelength: 600-650 nm), the images recovered by \emph{MORE} from 5 frames and 5 iterative loops (equal to 25 iterations), scale bar: 50 \textmu m. (a3) The best images recovered from the same data as in (a2) by the HIO phase retrieval algorithm with 50000 iterations. (a4) Under the illumination of a white LED (wavelength: 400-700 nm), the reconstruction of \emph{MORE} with 5 frames and 5 iterative loops. (a5) Results recovered by the HIO algorithm. (b) PSNR and SSIM of the images in (a2-a5). All the images used in the image quality analysis are rotated to the same orientation as the ground truth.} \end{figure} The recovered results of \emph{MORE} and of SCT with a phase retrieval algorithm (HIO, hybrid input-output algorithm) are shown in Fig. \ref{fig:MORE_PR}(a). Under both the 15 nm and the 200 nm illumination bandwidths, the results obtained by \emph{MORE} are better than those obtained from SCT with the phase retrieval algorithm (HIO). The results of HIO shown in Figs. \ref{fig:MORE_PR}(a3) and (a5) are the best recovered images from several trials, with 50000 iterations for each trial. By comparing \emph{MORE}'s recovery of the numbers with the objects' ground truth, the objects can be correctly identified. It is worth noting that the results from \emph{MORE} were not selected manually, and all the recovered images have the correct orientation, which is lost in SCT with the phase retrieval process\cite{Fienup:82}. From Fig. \ref{fig:MORE_PR}(b), both \emph{MORE} and HIO achieve better quality under narrower band illumination. As the spectrum becomes wider, the speckle contrast becomes lower, which causes instability in speckle correlation and phase retrieval. However, \emph{MORE} still works well under both narrowband and broadband illumination. Additionally, we calculate the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM)\cite{ImageQuality} of the reconstructed images to evaluate the quality of the results. On both PSNR and SSIM, the images recovered by \emph{MORE} are better than those recovered by HIO for all five numbers. This shows that our method works better than SCT with a phase retrieval algorithm under broadband illumination. \subsection{Multispectral object imaging} \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/color_experiment_setup.png} \caption{\label{fig:color_imaging}(a) Experimental setup for color imaging. (b) Source spectra and camera spectral response. The solid lines are the spectra of the projector source, measured by projecting monochromatic red, blue and green images onto the fiber spectrometer.} \end{figure} Furthermore, we perform a non-invasive broadband color imaging experiment with \emph{MORE}.
The experiment scheme is shown in Fig. \ref{fig:color_imaging}(a). The light source and object are replaced by a projector with three broad-spectrum LED light sources. We used it to project a series of number objects similar to those in Fig.~\ref{fig:experiment}. We then took a series of pictures of the speckle patterns behind the ground glass with a color camera. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/color_recovery.png} \caption{\label{fig:color_recovery}Result of color imaging and the measured spectra. (a) Reconstruction pipeline of color imaging. Objects are recovered by superposing the three channel images retrieved by \emph{MORE}, scale bar: 100 \textmu m. (b) Single-frame speckle of the color object 'XYOZ', scale bar: 2 mm. The result of single-shot HIO and the recovery deconvolved with the PhTF generated by \emph{MORE}, scale bar: 100 \textmu m.} \end{figure} As Fig. \ref{fig:color_recovery}(a) shows, we extract the R, G, and B color channels from the color speckle and obtain the PhTF of the broadband spectrum by repeating the process described above. Additionally, as seen from the projector spectrum and the camera spectral response curve in Fig. \ref{fig:color_recovery}(b), the blue and red spectral intensities are comparatively lower than the green spectral intensity at the same projector brightness. As a result, the gains of the various channels need to be adjusted manually to white-balance the color camera. We perform a deconvolution with the previously produced PhTF to recover three monochromatic R, G, and B images. The color images of the white numbers '2', '3', '4', '5', '6' and of the colorful text 'XJTU' are then obtained by superimposing the three channel images recovered from the three broadband speckles. After obtaining the PhTF of the non-invasive scattering imaging system, we conduct another experiment, which uses a different multispectral object, to evaluate the correctness of the PhTF generated by \emph{MORE}. In Fig. \ref{fig:color_recovery}(b), the three speckle channels of the colorful text 'XYOZ' are deconvolved with the spectral PhTF obtained from the speckles in Fig.~\ref{fig:color_recovery}(a). Compared with the ground truth of the object, the channel images are recovered successfully by deconvolution with the PhTF generated by \emph{MORE}, while the phase retrieval algorithm (HIO) fails to recover the object. \section{Discussion} The broadband imaging method we propose is based on the convolution between the object and the PSF within the memory effect range, as Eq. (\ref{eq:fourier(I=O*S)}) shows\cite{RN125}. Compared to SCT with a phase retrieval algorithm, \emph{MORE} is better viewed as a way to measure the PhTF from a series of frames or from a dynamic object. Non-invasive color imaging can be achieved by using a narrowband light source or by adding different narrowband filters before the camera\cite{RN115}. However, our method requires neither a narrowband source nor filters. It has been shown that, by modifying the phase retrieval algorithm with the PhTF, an object can be successfully recovered from a single frame\cite{RN91}. However, \emph{MORE} converges faster in the iterative process and provides more stable and reliable results than the modified phase retrieval algorithm, because multi-frame speckles contain more redundant information about the PhTF than a single-shot frame. As shown in Fig. \ref{fig:errF}, \emph{MORE} converges from the first outer iterative loop and stably reaches its global minimum after the fifth outer loop (equivalent to 25 iterations).
The SCT with phase retrieval algorithm (HIO) remains unstable even after 200 iterations. \begin{figure}[htbp] \centering \includegraphics[width=12cm]{figure/errF.png} \caption{\label{fig:errF}Normalized difference curves of the reconstruction of the number objects with \emph{MORE} and HIO. Data are taken with a 500 ms exposure time under white illumination. (a) The convergence curves of the recovery for '2' to '6' in \emph{MORE}; 'k' is the index of the outer loop in \emph{MORE}, and each outer loop contains 5 iterations. (b) Normalized error of '2' and '5' in HIO; 'i' is the total number of iterations in HIO.} \end{figure} A projector has been used as the multispectral object and the incoherent light source to conveniently demonstrate \emph{MORE} in our work. However, the proposed \emph{MORE} still works when a small colored object within the memory effect range is illuminated by a white LED or even by natural light. It is important to note that one key benefit of the deconvolution-based scattering imaging method is the ability to determine an object's relative position and orientation. Thus, the spatial relationship of objects within the memory effect range can also be obtained using \emph{MORE}. Because all the channels of the color speckles and of the spectral PhTF are calculated separately, the color image inevitably has to be superposed manually. \section{Conclusion} In conclusion, we demonstrate that our multi-frame OTF retrieve engine (\emph{MORE}) can successfully be used for non-invasive monochromatic and color imaging under broadband illumination. Compared with existing methods\cite{RN91,RN40,RN98,RN65}, it needs no prior information about the PSF, and it achieves faster, more stable convergence and reliable results thanks to the redundant information in multiple frames. \emph{MORE} overcomes the strict requirement of narrowband illumination and the long recovery time. It is an effective way to achieve color scattering imaging under broadband illumination.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0080.json.gz" }
\section{Introduction} At a redshift of 0.056, the Fanaroff-Riley class II radio galaxy Cygnus A is the nearest of the truly powerful radio galaxies, much closer than any comparable sources (\cite{cb96}). As a result, it is the archetype of powerful radio galaxies. The galaxy that hosts Cygnus A is also the central galaxy of a massive galaxy cluster, so that interactions between the expanding radio lobes of Cygnus A and the surrounding gas can be observed in the X-ray (\eg \cite{cph94}, \cite{swa02}). Here we discuss some properties of Cygnus A determined from \chandra{} X-ray observations. \section{The lobes and shocks} \begin{figure}[t] \begin{center} \includegraphics[width=0.78\textwidth]{cyga_f1.jpg} \includegraphics[width=0.78\textwidth]{cyga_f2.jpg} \caption{Background subtracted, exposure corrected 0.5 -- 7 keV \chandra{} image of Cygnus A. The lower image also shows contours of the 6 cm radio emission from the map of \cite{pdc84} (contour levels 0.001, 0.003, 0.01, 0.03, 0.1 $\rm Jy\ beam^{-1}$).} \label{fig:cyga} \end{center} \end{figure} \dataset[ADS/Sa.CXO#obs/00360]{} \dataset[ADS/Sa.CXO#obs/01707]{} \dataset[ADS/Sa.CXO#obs/05830]{} \dataset[ADS/Sa.CXO#obs/05831]{} \dataset[ADS/Sa.CXO#obs/06225]{} \dataset[ADS/Sa.CXO#obs/06226]{} \dataset[ADS/Sa.CXO#obs/06228]{} \dataset[ADS/Sa.CXO#obs/06229]{} \dataset[ADS/Sa.CXO#obs/06250]{} \dataset[ADS/Sa.CXO#obs/06252]{} The 0.5 -- 7 keV image of Cygnus A in Fig.~\ref{fig:cyga} was made from 246 ksec of cleaned \chandra{} data obtained between 2000 and 2005. The physical scale is $\simeq 1.1\rm\ kpc\ arcsec^{-1}$. A cocoon shock appears as an X-ray edge surrounding the radio lobes and hot spots, which extends from $\simeq30$ arcsec north of the AGN at the centre to $\simeq60$ arcsec to the west. There is a great deal of internal structure, much of which is due to thermal emission from gas within the shock. In contrast to the majority of nearby, low power radio sources at cluster centres, there are no X-ray deficits over the radio lobes of Cygnus A. However, comparing the upper and lower images in Fig.~\ref{fig:cyga}, we see that the X-ray emission is brighter in regions between the edge of the radio lobes and the cocoon shocks than it is in adjacent regions over the lobes. This is as we should expect. The gas displaced by the radio lobes must be compressed into the spaces between the expanding lobes and the shock fronts, making it brighter than the undisturbed gas and producing a net excess of X-ray emission over the radio cocoon. The excess is greatest where our sight lines through the layer of compressed gas outside the radio lobes are longest. The detailed correspondence between the edges of the radio lobes and enhanced X-ray emission adds weight to the argument that there are indeed X-ray cavities corresponding to the radio lobes of Cygnus A, as in lower power sources. In addition, \cite{cbk12} have noted that the X-ray deficit to the south of the AGN corresponds to emission in the 250 MHz LOFAR map (\cite{mkv11} and these proceedings), arguing that this is older radio plasma. The deep X-ray image shows that this central cavity extends about half as far to the north as it does to the south. To the north and west of the AGN the edge of this cavity is remarkably sharp. This puts tight constraints on either the physical properties of the gas and radio plasma, or on the dynamics of formation of these cavities (\eg \cite{rmf05}, \cite{ps06}).
\begin{figure}[t] \begin{center} \includegraphics[width=0.6\textwidth]{cyga_f3.pdf} \caption{Projected temperature profile of the gas around Cygnus A. Temperatures are measured in circular annuli centred on the AGN, excluding the regions of bright radio emission. The dashed vertical line marks the radius of the southwest shock front, which is smeared by the binning here. Nevertheless, there is an evident bump in the projected temperature $\sim5$ -- 10 arcsec inside the shock, consistent with expectations from fitting the surface brightness profile.} \label{fig:tshock} \end{center} \end{figure} \section{Mean jet power} \label{sec:jetpower} The cocoon shocks provide an excellent means for estimating the average power of Cygnus A during its current outburst. We have fitted the X-ray surface brightness profile of the shock front in a 50 degree wedge to the southwest of the AGN with a shock model. A spherical, numerical hydrodynamic model was used to compute surface brightness profiles of shocks due to point explosions at the centre of an initially isothermal, hydrostatic atmosphere (\eg \cite{nmw05}). For the best fitting model, the density in the unperturbed gas varies as $r^{-1.38}$, the radius of the shock is 40 kpc, its Mach number is 1.37, its age is 16 Myr and its mean power is $\simeq 4\times10^{45}\rm\ erg\ s^{-1}$. The \chandra{} surface brightness profile of the shock is very insensitive to the gas temperature, so that the temperature profile provides an independent test of the model. Our assumption that the unshocked gas is isothermal is only roughly correct. Nevertheless, the model predicts that the projected temperature should peak $\simeq5$ arcsec behind the shock, $\simeq 0.6$ keV above the preshock temperature. Allowing for the preexisting temperature gradient and smearing due to spherical averaging, the shock model is consistent with the observed profile of projected temperature (Fig.~\ref{fig:tshock}). The spherical model underestimates the shocked volume, hence the total energy required to drive the shocks. A more realistic model would also have continuous energy injection, which leaves a greater fraction of the injected energy in cavities, increasing the energy required to produce a given shock strength. Thus, we have underestimated the total outburst energy. On the other hand, the point explosion model maximizes the shock speed at early times, causing the shock age to be underestimated and increasing the power estimate. The net result is probably to underestimate the average jet power, but not by more than a factor of about two. \section{Particle acceleration in the shock} \cite{ckh09} found that X-ray emission from the inner portions of the shock surrounding the southwest lobe of Centaurus A is thermal, while that from more remote parts of the shock is nonthermal. The thermal emission fades out before the nonthermal emission appears, creating a gap in the X-ray shock front. There is a similar gap in the Cygnus A shock front, seen most clearly in the region 15 -- 20 arcsec inside the western hotspots in Fig. \ref{fig:cyga}. Is this also due to a changeover from thermal to nonthermal X-ray emission? The existing X-ray data are unable to distinguish between thermal and nonthermal models for the shocks near the hotspots. However, it is noteworthy that the 6 cm radio emission, which is confined within the layer of shock compressed gas in the inner regions of the radio lobe, extends right out to the X-ray shock beyond the gap, in the regions around the hotspots. 
This radio emission must arise in the shocked intracluster medium, rather than within the radio lobes, providing clear evidence for particle acceleration in the outer parts of the cocoon shocks. Since the shock fronts near the hotspots are furthest from the AGN, they must travel fastest. Allowing for the shape of the fronts and projection, we estimate that shocks near the hotspots have Mach numbers of no more than 3. While the compression is greater in a Mach 3 shock, this is more than offset by the lower preshock gas density and the smaller radius of curvature of the front near the hotspots. We estimate that thermal emission from the shocks near the hotspots should be no more than $\sim5 \%$ of that at the southwest front. Thus it appears that the X-ray emission from the shock near the hotspots is too bright to be thermal. \section{X-ray jet} The region of enhanced emission running along the axis of the radio cocoon, roughly between the two sets of hotspots, is called the X-ray jet. It does not coincide with the radio jet, being considerably wider, $\simeq 6$ arcsec or $\simeq 6$ kpc, and straighter. The eastern X-ray jet is at least as bright as the western X-ray jet, the approaching side of the radio jet, making it unlikely that Doppler boosting plays a role in its X-ray emission. \cite{sbd08} argued that the X-ray jet is inverse Compton emission from relatively low energy electrons left behind after passage of the radio jet at earlier times. An X-ray spectrum extracted from a $11" \times 5.7"$ rectangular region near its eastern end, with flanking background regions, is equally well fitted by thermal and nonthermal models, but it is hard to conceive of a physically reasonable thermal model for the jet emission. The power law fit gives a photon index of $1.69\pm0.26$ (90\% confidence). Assuming this emission is inverse Compton scattered cosmic microwave background radiation (ICCMB), the population of electrons required to produce it, with a power law distribution $dn/d\gamma = A \gamma^{-2.38}$ (to match the slope of the X-ray spectrum) for Lorentz factors in the range $100 < \gamma < 10000$, alone would have a pressure more than an order of magnitude larger than the surrounding gas. Lorentz factors of $\gamma \simeq 1000$ are required to produce 1 keV X-ray photons by scattering the CMB, so it is difficult to get a lower pressure from a realistic model. Magnetic fields and cosmic rays would add to the total pressure, so an ICCMB model is largely ruled out. Optical and ultraviolet radiation from the active nucleus of Cygnus A, or radio photons from the hotspots might also provide seed photons for inverse Compton scattering, but in either case we would expect the jet to brighten significantly towards the photon source, which is not observed. Alternatively, the X-ray jet could be due to synchrotron emission. For a magnetic field strength of $55\ \mu\rm G$, roughly the equipartition field in the lobes, electrons with $\gamma \simeq 4\times10^7$ are required to produce 1 keV synchrotron photons. For a power law electron distribution like that of the ICCMB model, the required pressure would be $\simeq10^{-5}$ smaller. However, the synchrotron lifetimes of the electrons would be only $\simeq 200$ yr, so a synchrotron model requires \textit{in situ} electron acceleration (the smallest distance resolvable by \chandra{} in Cygnus A is $\simeq 3000$ light years). This is much like the Centaurus A jet, but there the case for X-ray synchrotron emission is strong (\eg \cite{ghc10}). 
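The electron energies and lifetimes quoted above can be checked with an order-of-magnitude estimate. The following sketch is an illustration only, using standard synchrotron formulae in cgs units; the exact numbers depend on the adopted numerical coefficients and pitch-angle averaging.

\begin{verbatim}
import numpy as np

# cgs constants
m_e, c, e = 9.109e-28, 2.998e10, 4.803e-10      # g, cm/s, esu
sigma_T, h = 6.652e-25, 6.626e-27               # cm^2, erg s

B = 55e-6                                       # 55 microgauss, in gauss
nu_keV = 1.602e-9 / h                           # frequency of a 1 keV photon

# characteristic synchrotron frequency: nu_c ~ (3/2) gamma^2 nu_g,
# with the gyrofrequency nu_g = e B / (2 pi m_e c)
nu_g = e * B / (2.0 * np.pi * m_e * c)
gamma = np.sqrt(nu_keV / (1.5 * nu_g))
print(f"gamma for 1 keV synchrotron photons: {gamma:.1e}")   # a few 10^7

# synchrotron cooling time: t ~ 6 pi m_e c / (sigma_T gamma B^2)
t_cool = 6.0 * np.pi * m_e * c / (sigma_T * gamma * B**2)
print(f"cooling time: {t_cool / 3.156e7:.0f} yr")            # a few hundred yr
\end{verbatim}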
The main issue for Cygnus A is that an active power source is required to maintain the population of highly relativistic electrons. This problem would be solved if \textit{the X-ray jet reveals the actual path of energy flow from the AGN to the hotspots.} Further supporting this, the width of the X-ray jet is similar to that of the brighter hotspots, which would naturally explain their sizes. The hotspots are highly overpressured and likely dominated by relativistic plasma. To keep a hotspot confined by the ``dentist drill'' effect, a narrow jet must wander over the whole inward facing surface of the hotspot in a time that is small compared to the sound crossing time of the hotspot. This is incredibly challenging. It suggests that we should observe the path of a narrow jet to wobble side-to-side, on a scale comparable to the width of the hotspot, over distances along the jet (times) of comparable dimension (or smaller), but this is not seen. Of course, if the X-ray jet is the main path of energy flow, the puzzle is then the origin of the radio jets. \section{A jet flow model} \label{sec:flow} \begin{figure}[t] \centerline{% \includegraphics[width=0.48\textwidth]{cyga_f4.pdf} \includegraphics[width=0.48\textwidth]{cyga_f5.pdf}} \begin{center} \caption{Jet flow solutions from section \ref{sec:flow}. The left panel shows $\beta = v/c$ and the right panel shows the flow rate of rest mass, both \textit{versus} jet power. The dotted, full and dashed lines are for $\pram/p = 4$, 10 and 20, respectively.} \label{fig:flow} \end{center} \end{figure} We use a steady, one-dimensional flow model, like that of \cite{lb02}, to estimate flow parameters for Cygnus A. The flow rate of rest mass through the jet is \begin{equation} \label{eqn:mass} \dot M = \rho A c \beta \gamma, \end{equation} where $\beta = v/c$, $\gamma$ is the corresponding Lorentz factor, $\rho$ is the proper density of rest mass in the jet and $A$ is its cross-sectional area. The jet power is \begin{equation} \label{eqn:power} P = (\gamma - 1) \dot M c^2 + h A c \beta \gamma^2, \end{equation} where the enthalpy per unit volume is $h$ and we will assume $h = \Gamma p / (\Gamma - 1)$, where $p$ is the pressure and $\Gamma$ is constant. Lastly, the jet momentum flux is \begin{equation} \label{eqn:momentum} \Pi = (P / c + \dot M c) \beta. \end{equation} The pressure in a hotspot must be close to the ram pressure of the jet that runs into it, so we assume $\Pi = A \pram$. Eliminating $\dot M$ between the equations (\ref{eqn:power}) and (\ref{eqn:momentum}) gives \begin{equation} {P \over p A c} = \left( {\pram / p \over \gamma + 1} + {\Gamma \over \Gamma - 1} \right) \beta \gamma, \end{equation} which determines the jet speed, since we have estimates for all the other quantities. Synchrotron-self-Compton models give hotspot magnetic field strengths in the range 150 -- 250 $\mu\rm G$ (\cite{hcp94, wys00}), so that, if the total pressure scales with the magnetic pressure, $\pram / p \simeq 10$ -- 20. Using the shock speeds and external pressures gives lower values, closer to $\pram/p \simeq 4$. We assume that the jet is cylindrical with a radius of 3 kpc and a pressure of $p = 3\times10^{-10}\rm\ erg\ cm^{-3}$. From section \ref{sec:jetpower}, the power of a single jet is $\simeq 2\times10^{45}\rm\ erg\ s^{-1}$ or somewhat larger. Fig.~\ref{fig:flow} shows solutions for $\beta$ and $\dot M$ vs jet power over its plausible range and for $\pram/p = 4$, 10 and 20. 
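As an illustration of how the flow solutions shown in Fig.~\ref{fig:flow} can be obtained numerically, the sketch below solves the relation above for $\beta$ and then recovers $\dot M$ from the momentum flux, using the fiducial values quoted in the text ($r = 3$ kpc, $p = 3\times10^{-10}\rm\ erg\ cm^{-3}$, single-jet power $\simeq 2\times10^{45}\rm\ erg\ s^{-1}$). The value $\Gamma = 4/3$ and the choice of root finder are assumptions made only for this example.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

kpc, c, Msun, yr = 3.086e21, 2.998e10, 1.989e33, 3.156e7   # cgs units

def solve_jet(P, p=3e-10, r=3.0 * kpc, pram_over_p=10.0, Gamma=4.0 / 3.0):
    # Solve P/(p A c) = (pram/p/(gamma+1) + Gamma/(Gamma-1)) beta gamma
    # for beta, then get Mdot from Pi = A pram = (P/c + Mdot c) beta.
    # Gamma = 4/3 is an assumed value; the text only states Gamma is constant.
    A = np.pi * r**2
    lhs = P / (p * A * c)

    def f(beta):
        gamma = 1.0 / np.sqrt(1.0 - beta**2)
        return (pram_over_p / (gamma + 1.0)
                + Gamma / (Gamma - 1.0)) * beta * gamma - lhs

    beta = brentq(f, 1e-6, 1.0 - 1e-9)
    Mdot = (A * pram_over_p * p / beta - P / c) / c          # g/s
    return beta, Mdot * yr / Msun                            # beta, Msun/yr

beta, mdot = solve_jet(P=2e45, pram_over_p=10.0)
print(f"beta ~ {beta:.2f}, Mdot ~ {mdot:.1f} Msun/yr")   # ~0.1, a few Msun/yr
\end{verbatim}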
For reasonable flow solutions, the velocity in the outer part of the jet is non-relativistic, while the mass flux probably exceeds $1\rm\ M_\odot\ yr^{-1}$. It seems likely that most of this mass is entrained by the jet, rather than flowing all the way from the AGN. \section{Conclusions} The radio lobe cavities and cocoon shocks of Cygnus A have much in common with lower power radio sources in galaxy clusters. The mean power of the current outburst in Cygnus A is $4\times10^{45}\rm\ erg\ s^{-1}$, or somewhat larger. Near to its hotspots, the cocoon shocks of Cygnus A accelerate electrons, at least to energies sufficient to produce 6 cm radio synchrotron emission and quite possibly also keV X-rays. The X-ray jet of Cygnus A is best explained as synchrotron emission, which suggests that the X-ray jet, rather than the radio jet, is the main route of power from the AGN to the hotspots. Under this assumption, we find that the outer parts of the jets have flow speeds $v/c \sim 0.1$ and the mass flow rate through the jets is significant. \acknowledgment PEJN was partly supported by NASA contract NAS8-03060. \newcommand\aapr{\textit{ARAA}} \newcommand\mnras{\textit{MNRAS}} \newcommand\apj{\textit{ApJ}} \newcommand\apjl{\textit{ApJ}} \newcommand\aap{\textit{A\&A}} \newcommand\nat{\textit{Nature}}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0080.json.gz" }
\section{Introduction} Since their discovery (Tousey \cite{Tousey}), coronal mass ejections (CMEs) have been extensively studied using ground-based and space-borne coronagraph observations that have made it possible to analyze their basic properties. CMEs are large-scale magnetic structures which involve the expulsion of a large amount of plasma ($\sim$\,10$^{13}$--10$^{16}$~g; Vourlidas \cite{Vourlidas2010}) from the corona into the solar wind with velocities in the range $\sim$\,100--3000~km\,s$^{-1}$ (e.g., Yashiro \cite{Yashiro2004}). CMEs are often associated with eruptive prominences, or with the disappearance of filaments, and with flares. Coronagraph observations combined with data from other instruments have made it possible to continuously follow the detailed progression of CMEs. Understanding the physical processes that generate them is strongly facilitated by coordinated multi-wavelength observations. The ultimate goal is to match theory and modeling efforts with observations. This overview will mainly focus on observational aspects and will show that they strongly support the existing models. \section{Kinematic evolution of CMEs} Based on the velocity profiles, the kinematic evolution of CMEs undergoes three phases: a gradual evolution, a fast acceleration, and a propagation phase (Zhang \etal\ \cite{Zhang2001}, \cite{Zhang2004}). Most of the events reach their peak acceleration at low coronal altitudes (e.g., Gallagher \etal\ \cite{Gallagher2003}; Temmer \etal\ \cite{Temmer2008}), typically at heights below $0.5~R_\odot$ (Bein \etal\ \cite{Bein2011}). When associated with a flare, a temporal correlation both between CME velocity and soft X-ray flux and between their derivatives often exists, especially for fast CMEs (Zhang \etal\ \cite{Zhang2004}). Mari{\v c}i{\'c} \etal\ (\cite{Maricic2007}) found that 75\% of these events show this correlation while 25\% do not. The correlation indicates that the CME acceleration and the flare particle acceleration are strongly coupled. The gradual evolution exhibits a slow rise of the structure which is about to erupt; the rise is often traced by a filament or prominence. The CME velocity changes only very gradually in the propagation phase through the outer corona and solar wind, primarily under the influence of aerodynamic drag. \begin{figure} \centerline{\includegraphics[width=.9\textwidth]{cme-3part.eps}} \caption{\textsl{Left panel:} A three-part CME showing prominence material, cavity, and an outer bright front. \textsl{Right panel:} 2D projection of the flux rope model (Chen \etal\ \cite{Chen1997}).} \label{cme3part} \end{figure} \section{White-light coronagraphic observations: The three-part CME structure and flux ropes} CMEs have been frequently observed in white light coronagraph images as having a so-called three-part structure, consisting of a bright rim surrounding a dark void which contains a bright core (Illing \& Hundhausen \cite{Illing1985}); see Figure~\ref{cme3part}, left panel. Chen \etal\ (\cite{Chen1997}) showed that the \textsl{SOHO}/LASCO observations are consistent with a two-dimensional projection of a three-dimensional magnetic flux rope. The cavity seen in white light can be interpreted as the cross section of an expanded flux rope (Fig.~\ref{cme3part}, right panel).
Besides these three-part CME structures, concave-outward V features have been frequently observed in the \textsl{SOHO}/LASCO coronagraph images (e.g., Dere \etal\ \cite{Dere1999}; St.~Cyr \etal\ \cite{StCyr2000}) and were interpreted as the sunward side of a three-dimensional helical flux rope viewed along the rope axis: see the next section and for example R{\'e}gnier \etal\ (\cite{Regnier2011}) for a detailed multi-wavelength observation of such a structure by \textsl{SDO}, which strongly supports the flux rope interpretation. Cremades \etal\ (\cite{Cremades2004}) showed that the projected white light structure of a three-part CME will depend primarily on the orientation and position of the associated photospheric inversion line. \begin{figure} \centerline{\includegraphics[width=.9\textwidth]{FR-thernisien.eps}} \caption{Example of the geometrical model fit for the event of 29 January 2008 observed at 7:20~UT in the \textsl{STEREO} coronagraphs COR2 B and A (top row). The bottom row shows the same images with the model fit overlaid (Thernisien \etal\ \cite{Thernisien2009}).} \label{FRthernisien} \end{figure} More recently, Thernisien \etal\ (\cite{Thernisien06}, \cite{Thernisien2009}) developed a geometric flux rope model, the graduated cylindrical shell (GCS) model, and fitted it to \textsl{STEREO}/SEC\-CHI coronagraph observations of CMEs. They were able to reproduce the CME morphology for a large number of events. The flux rope orientations determined in this way revealed a deflection and/or rotation of the structure relative to the position and orientation of the source region in most cases (Fig.~\ref{FRthernisien}). \begin{figure} \centerline{\includegraphics[width=.9\textwidth]{cme-model1.eps}} \caption{Sketch of the flux-rope CME model of Lin \& Forbes (\cite{Lin2000}), adapted by Lin \etal\ (\cite{Lin2004}), showing the eruption of the flux rope, the current sheet formed behind it, and the postflare/CME loops below, as well as the flows associated with the reconnection.} \label{cmemodel} \end{figure} The standard picture of a CME eruption emerged from the model proposed by Lin \& Forbes (\cite{Lin2000}) (Fig.~\ref{cmemodel}): An initially closed and stressed magnetic configuration overlying a photospheric polarity inversion line becomes unstable and erupts. Magnetic field lines are then stretched by the eruption and a current sheet (CS) is formed between the inversion line and the erupting flux rope. Magnetic reconnection occurs along this CS, first at low altitudes and then at progressively higher ones (Forbes \& Acton \cite{Forbes1996}), producing the often associated flare and also explaining the formation of post-eruption loops behind the CS. This model is the synthesis of the loss-of-equilibrium model for the upward acceleration of the filament and CME (van Tend \& Kuperus \cite{vanTend1978}; Forbes \& Isenberg \cite{Forbes1991}) and the standard (\ie, reconnection) model of eruptive flares, also known as the CSHKP model (Carmichael \cite{Carmichael1964}; Sturrock \cite{Sturrock1966}; Hirayama \cite{Hirayama1974}; Kopp \& Pneuman \cite{Kopp1976}). It includes two stages. The source region, which is supposed to already contain a flux rope, first stores free magnetic energy in a quasi-static evolution, driven by slow changes of the photospheric field. The resulting inflation of the coronal field is observed as the slow rise.
When a critical point is reached, the flux rope loses equilibrium and is rapidly accelerated upwards by the Lorentz force of the current flowing in the rope (onset of CME or filament upward ejection). This is coupled with the flare reconnection in the vertical CS, which reduces the tension of the overlying field. The model is now strongly supported by numerous observations made in different wavelength domains (see Benz \cite{Benz2008} and Fletcher \etal\ \cite{Fletcher2011} for the flare observations). One of the major open questions of CME research is whether the MHD instability of the current-carrying flux rope or the reconnection in the CS underneath is the main driver of the eruption as a whole. \section{Multi-wavelength CME observations and comparison with theoretical predictions} Because magnetic reconnection may be occurring in the CS and enabling energy release, the presence of a thin spike of high-temperature material behind a CME would match the expectation of the standard model. In recent years, much observational evidence of such CSs has been found in white light coronagraph observations, X-ray images, coronal UV spectra, EUV images, radio spectra, and radio images. \begin{figure} \centerline{\includegraphics[width=.9\textwidth]{UVCS-CS.ps}} \caption{\textsl{SOHO}/LASCO C2 images showing the evolution of the events from the pre-CME corona to the narrow long CS feature. The images shown inside the occulter disk are the intensity distribution of the Fe~XVIII line along the UVCS slit. The brightest narrow spot is very well aligned with the CS seen by LASCO (Ciaravella \& Raymond \cite{Ciaravella2008}).} \label{UVCS} \end{figure} In the UV domain, the \textsl{SOHO}/UVCS coronagraph has observed several CME events exhibiting high-temperature emissions from the Fe~XVIII 974~\AA\ line at heliocentric heights of 1.5--1.7~$R_\odot$ which lie along the line connecting the eruptive CME and the associated post-CME loops (Fig.~\ref{UVCS}). The location and the timing of these emissions strongly support the interpretation that a post-CME CS formed (Ciaravella \& Raymond \cite{Ciaravella2008}; Ko \etal\ \cite{Ko2010}). In white light, Vr{\v s}nak \etal\ (\cite{Vrsnak09}) analyzed the morphology and density structure of rays observed by the \textsl{SOHO}/LASCO C2 coronagraph in the aftermath of CMEs. The most common form of activity is characterized by outflows along the rays, and sometimes also by inflows. The authors concluded that the main cause of density excess in these rays is the upward transport of the dense plasma by the reconnection outflow in the CS formed in the wake of CMEs. \subsection{EUV Observations from the Solar Dynamics Observatory} \begin{figure} \centerline{\includegraphics[width=.9\textwidth]{cme-tre-131-211.eps}} \caption{LEFT: \textsl{SDO}/AIA base-difference images of the solar eruption on 2010 November 3 at 131~\AA\ ($\sim$~11~MK) (\textsl{upper panel and lower-left panel}), and at 211~\AA\ ($\sim$~2~MK) (\textsl{lower-right panel}). Leading edge and dimming features are indicated by arrows. RIGHT: CME structure as seen in AIA multiple temperature bands (from Cheng \etal\ \cite{Cheng2011}).} \label{SDO} \end{figure} The most recent and fascinating results arise from the Atmospheric Imaging Assembly (AIA) on board the \textsl{Solar Dynamics Observatory} (\textsl{SDO}). This experiment has the capability of high-cadence and multi-temperature observations.
Figure~\ref{SDO}, left, adapted from Cheng \etal\ (\cite{Cheng2011}), shows a few base-difference images in the 131~\AA\ bandpass (dominated by 11~MK plasma) of the solar eruption on 2010 Nov~3 which was associated with a limb CME detected by \textsl{SOHO}/LASCO. A blob of hot plasma appeared first and started to push its overlying magnetic field upward. The overlying field lines seemed to be stretched up continuously. Below the blob, there appeared a Y-type magnetic configuration with a bright thin line extending downward, which is consistent with the development of a CS. In addition, the shrinkage of magnetic field lines underneath the CS indicates the ongoing process of magnetic reconnection. The plasma blob likely corresponds to a growing flux rope. Simultaneously, a cavity with diffuse density enhancement at the edge is seen at typical coronal temperatures ($T\sim0.6\mbox{--}2$~MK). For the first time, the multi-temperature structure of the CME has been analyzed (see Fig.~\ref{SDO} right). The high-cadence EUV observations by \textsl{STEREO} and \textsl{SDO} also yield insight into how the forming CME expands the ambient field. This process rapidly forms a cavity which surrounds the plasma in the hot CS and flux rope (Figs.~\ref{SDO} and \ref{FR-cav}). A cavity exists around some prominences prior to their eruption, especially in quiescent prominences (Gibson \etal\ \cite{Gibson2006}), but many events lack such signatures, in particular those in which cavities form rapidly around erupting active region filaments/prominences (e.g., Patsourakos \etal\ \cite{Patsourakos2010b}; Cheng \etal\ \cite{Cheng2011}). The growth of the cavity certainly reflects the growth of the flux rope in the CME core, due to the addition of flux by reconnection; however, initially the cavity, or ``bubble'', grows even faster than the flux rope in some events. This has led Patsourakos \etal\ (\cite{Patsourakos2010a}) to suggest an additional expansion mechanism based on ideal MHD effects (\ie, independent of reconnection). The mechanism assumes that the free energy released in the eruption is contained in the current of a flux rope that exists already prior to the onset of the CME. When the flux rope rises, the current through the rope decreases, powering the eruption; the decrease is approximately inversely proportional to the length of the rope. As a consequence, the azimuthal (poloidal) field component in and around the flux rope must also decrease. Since the total poloidal flux in the system is not changed by the rise of the rope, the flux surfaces must move away from the center of the flux rope to reduce the strength of the poloidal field component. In other words, the poloidal flux in and around the flux rope must expand, forming a cavity (or deepening the cavity if it existed already before the eruption). The numerical simulation of an erupting flux rope displayed in Figure~\ref{simulation} clearly exhibits both mechanisms of cavity formation and expansion. \begin{figure} \centerline{\includegraphics[width=\textwidth]{simulation-cavity.eps}} \caption{MHD simulation of cavity formation around an erupting flux rope. Rainbow-colored field lines show the core of the flux rope. Green field lines show the ambient field, a progressively larger fraction of which becomes part of the rope, due to reconnection in the CS under the rope (from Kliem \etal, in preparation).} \label{simulation} \end{figure} The relative contributions of the two mechanisms of cavity growth vary from event to event.
The flux expansion of the ideal MHD mechanism is driven by the poloidal field component; it has to work against the toroidal (shear field) component. If the latter is strong, this part of the expansion will be slowed and weakened. The pressure has a similar influence if the plasma beta is not very small (about 0.1 or larger). Consequently, some events show the cavity edge quite close to the edge of the growing flux rope (Fig.~\ref{SDO}), while others show a cavity much larger than the flux rope (Fig.~\ref{FR-cav}). The cases of very rapid initial cavity expansion are of particular interest as potential sources of large-scale coronal EUV waves (also known as ``EIT Waves'') and shocks. It has been recognized that the initial expansion of the CME is the prime candidate for the formation of these phenomena. This replaces the conjecture of a flare blast wave. The rapid cavity expansion may eventually also help solve the puzzle of why many coronal shocks, seen as Type II radio bursts, appear to be launched at the side of the expanding CME, not at its apex. The triggering of an EUV wave by a rapidly expanding CME cavity, including the formation of a shock, has very recently been demonstrated, again using multi-wavelength data from \textsl{SDO} combined with radio data (Cheng \etal\ \cite{Cheng2012}). \begin{figure} \centerline{\includegraphics[width=.9\textwidth]{cavity-FR.eps}} \caption{Rapidly forming cavity around a CME flux rope. Overlay of \textsl{SDO}/AIA images in the 131~{\AA} ($\sim11$~MK, green) and 171~{\AA} ($\sim0.6$~MK, red) channels (Kliem \etal, in~prep.).} \label{FR-cav} \end{figure} \subsection{Radio observations} Radio spectral and imaging observations are obtained with extremely high time resolution and sample different heights in the solar atmosphere. Thus they contribute significantly to our understanding of CME initiation and development, as briefly summarized below. The first indications of CSs in the solar corona were provided by radio spectral observations. Kliem \etal\ (\cite{Kliem2000}) observed long series of quasi-periodic pulsations deeply modulating a continuum in the $\sim(1\mbox{--}2)$~GHz range that was slowly drifting toward lower frequencies. They proposed a model in which the pulsations of the radio flux are caused by quasi-periodic particle acceleration episodes that result from a dynamic phase of magnetic reconnection in a large-scale CS (see also Karlick{\'y} \etal\ \cite{Karlicky2002}; Karlick{\'y} \cite{Karlicky2004}; Karlick{\'y} \& B{\'a}rta \cite{Karlicky2011}). Such breakup of the CS into filamentary structures can cascade to the smallest scales (B{\'a}rta \etal\ \cite{Barta2011}). The possible transition to a turbulent regime of reconnection is currently of high interest even beyond the solar context (e.g., Lazarian \& Opher \cite{Lazarian2009}; Daughton \etal\ \cite{Daughton2011}). \subsubsection{Radio-imaging, X-ray and EUV observations} \begin{figure} \centerline{\includegraphics[width=\textwidth]{CME-radio-1.eps}} \caption{Radio and X-ray signatures of magnetic reconnection behind an ejected flux rope on 02 June 2002. \textsl{Left panel}: Comparison between the photon histories measured by \textsl{RHESSI}, the flux evolution measured at four frequencies by the NRH and the spectral evolution measured by OSRA and by \textsl{STEREO}/WAVES. \textsl{Right panel, top}: Images of the Nan\c cay Radioheliograph (NRH) at 410, 236, and 164 MHz, showing the quasi-stationary sources (S) and the moving sources (M).
The event is close to the solar limb (curved line). \textsl{Right panel, bottom}: Two-dimensional sketch of the magnetic configuration involved in the eruption. A twisted flux rope erupts, driving magnetic reconnection behind it (red arrow). The particles accelerated in the reconnection region propagate along the reconnected field lines, giving rise to the observed hard X-rays (RX) and the main radio sources (S and M). A shock is propagating at the front edge of the flux rope (red curve) (from Pick \etal\ \cite{Pick2005}).} \label{cme-radio} \end{figure} Pick \etal\ (\cite{Pick2005}) traced the dynamical evolution of the reconnecting CS behind an ejected flux rope and provided an upper estimate of the CS length from the position of the observed pair of radio sources, consisting of an almost stationary and a rapidly moving source (see Fig.~\ref{cme-radio}). Later, Aurass \etal\ (\cite{Aurass2009}) provided diagnostics of the presence of a CS in the aftermath of a CME both with X-ray and radio spectral observations, and Benz \etal\ (\cite{Benz2011}) imaged the CS in radio, showing that it extended above the temporally correlated, largely thermal coronal X-ray source. Finally, Huang \etal\ (\cite{Huang2011}) demonstrated that joint imaging radio and EUV observations can trace the extent and orientation of the flux rope and its interaction with the surrounding magnetic field. This makes it possible to characterize, in space and time, the processes involved in the CME launch. Bastian \etal\ (\cite{Bastian2001}) first reported the existence of an ensemble of expanding loops that were imaged in radio by the NRH and were located behind the front of the white light CME on 1998 April 20. The faint emission of these loops, named \textit{radio CME}, was attributed to incoherent synchrotron radiation from 0.5~MeV electrons spiraling within a magnetic field ranging from 0.1 to a few Gauss. Maia \etal\ (\cite{Maia2007}) identified another radio CME on 2001 April 15 which was one of the largest of that solar cycle. A recent study established that the radio CME corresponds to the flux rope seen in white light and that its extrapolated center coincides with the center of the flux rope cavity (D{\'e}moulin \etal\ \cite{Demoulin2012}). The CS behind the flux rope was also imaged in radio. \begin{figure} \centerline{\includegraphics[width=\textwidth]{Mostl.eps}} \caption{Linking remote imagery to in-situ signatures. TOP: Evolution of the CME in \textsl{STEREO-A}/HI1 and \textsl{STEREO-B}/HI1. The CME leading edge and core are given by yellow crosses. BOTTOM: \textsl{Left}: Proton density measured at 1 AU by \textsl{STEREO-B}; the dashed lines are the arrival times from the elongation fitting method for the CME leading edge (see blue lines) and CME core (see red lines) (adapted from M{\"o}stl \etal\ (\cite{Mostl2009}), courtesy of P. D{\'e}moulin). \textsl{Right}: Schematic showing three evolutionary steps. First, the crossing of the convective zone by a flux rope. Second, the launch of a CME. Finally, depending on the speed and launch direction, the CME can be detected a few days later in the IP medium as a magnetic cloud or, more generally, as an ICME.
The CME image is from \textsl{SOHO}/LASCO (from D{\'e}moulin \cite{Demoulin2010aipc}).} \label{cme-interplanetary} \end{figure} \section{Relationship between CMEs and interplanetary coronal mass ejections (ICMEs)} The \textsl{STEREO} HI1 and HI2 imagers have provided the opportunity to trace the evolution of CMEs to 1~AU and beyond and thus to investigate for the first time the relationship with their heliospheric counterparts in the whole inner heliosphere. The observations suggest that many CMEs are still connected to the Sun at 60-80~$R_\odot$ and that the same basic structure is often preserved as the CME propagates from the corona into the heliosphere (e.g., Harrison \etal\ \cite{Harrison2010}). Figure~\ref{cme-interplanetary}, upper panel, shows four \textsl{STEREO}-A images of a CME which was associated with a magnetic cloud (MC). In the HI1A and HI2A images, the CME leading edge (LE) and core, indicated in the figure by yellow crosses, show an arc-like shape, typical of a CME viewed orthogonal to its axis of symmetry. M{\"o}stl \etal\ (\cite{Mostl2009}) linked the remote observations of this CME to the MC plasma and magnetic field data measured by \textsl{STEREO}-B at 1~AU. Figure~\ref{cme-interplanetary} shows that the three-part structure of the CME may be plausibly related to the in-situ data and that the CME white-light flux rope corresponds to the magnetic flux rope (MFR) measured in situ. \section{Conclusion} In this brief review we have focused almost exclusively on the \textit{three-part CMEs} and how the multi-wavelength observations, in particular when obtained with high cadence, have validated the early models and contributed to their evolution. It must be recalled, however, that fast flare/CME events often have a much more complex development. A detailed presentation of these CME events is beyond the scope of this review. We shall only mention that they are observed to start with a relatively small dimension and then reach their full extension by the rapid expansion of the cavity and by successive interactions with the surrounding magnetic structures. NRH observations show that they can cover a large portion of the Sun within typically 10~min or even less (for a review see Pick \etal\ \cite{Pick2008}). Other important topics that could not be covered here, or only briefly, are i) the initiation phase preceding the onset of a CME; ii) the association between CMEs, flares, EIT/EUV waves, and coronal and interplanetary shocks (Type II radio bursts), and iii) the role of dimmings (transient coronal holes) in the dynamics of CMEs. The unprecedented observational capabilities now available (imaging and spectroscopy from radio to hard X-rays; stereoscopy; high spatial resolution and cadence; imaging from the Sun to 1~AU and beyond), combined with similar progress in the numerical modeling, will undoubtedly stimulate further discoveries and deeper understanding of CMEs, flares and their associated phenomena. Some of the most challenging directions in future research will be: i) the mutual feedback between CMEs and flares; ii) the coupling of the smallest scales (reconnection, particle acceleration) with the largest ones (CME, large-scale CS, sympathetic eruptions); and iii) the connection with photospheric and subphotospheric phenomena (triggering by flux emergence, back-reaction forming sunquakes).
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0080.json.gz" }
\section{Introduction} A network anomaly is any potentially malicious traffic sequence that has implications for the security of the network. Although automated online traffic anomaly detection has received a lot of attention, this field is far from mature. Network anomaly detection belongs to a broader field of system anomaly detection whose approaches can be roughly grouped into two classes: \emph{signature-based anomaly detection}, where known patterns of past anomalies are used to identify ongoing anomalies~\cite{roesch1999snort,paxson1999bro}, and \emph{change-based anomaly detection} that identifies patterns that substantially deviate from normal patterns of operations~\cite{barford2002signal,Lu2009,pas-sma-ton-09}. \cite{lippmann2000evaluating} showed that the detection rates of systems based on pattern matching are below 70\%. Furthermore, such systems cannot detect \emph{zero-day attacks}, i.e., attacks not previously seen, and need constant (and expensive) updating to keep up with new attack signatures. In contrast, \emph{change-based anomaly detection} methods are considered to be more economical and promising since they can identify novel attacks. In this work we focus on \emph{change-based anomaly detection} methods, in particular on \emph{statistical anomaly detection} that leverages statistical methods. Standard \emph{statistical anomaly detection} consists of two steps. The first step is to learn the ``normal behavior'' by analyzing past system behavior, usually a segment of records corresponding to normal system activity. The second step is to identify time instances where system behavior does not appear to be normal by monitoring the system continuously. For anomaly detection in networks, \cite{pas-sma-ton-09} presents two methods to characterize normal behavior and to assess deviations from it based on \emph{Large Deviations Theory} (LDT)~\cite{deze2}. Both methods consider the traffic, which is a sequence of flows, as a sample path of an underlying stochastic process and compare current network traffic to some reference network traffic using LDT. One method, which is referred to as the \emph{model-free} method, employs the method of types~\cite{deze2} to characterize the type (i.e., empirical measure) of an independent and identically distributed~(i.i.d.) sequence of network flows. The other method, which is referred to as the \emph{model-based} method, models traffic as a \emph{Markov Modulated Process}. Both methods rely on a \emph{stationarity assumption} postulating that the properties of normal traffic in networks do not change over time. However, the \emph{stationarity assumption} is rarely satisfied in contemporary networks~\cite{Neal-2010}. For example, Internet traffic is subject to weekly and diurnal variations~\cite{thompson1997wide,King2013}. Internet traffic is also influenced by macroscopic factors such as important holidays and events~\cite{Sandvine2013}. Similar phenomena arise in local area networks as well. We will call a network \emph{dynamic} if its traffic exhibits time-varying behavior. The challenges for anomaly detection of dynamic networks are two-fold. First, the methods used for learning the ``normal behavior'' are usually quite sensitive to the presence of non-stationarity. Second, the modeling and prediction of multi-dimensional and time-dependent behavior are hard.
To address these challenges, we generalize the vanilla \emph{model-free} and \emph{model-based} methods from \cite{pas-sma-ton-09} and develop what we call the \emph{robust model-free} and the \emph{robust model-based} methods. The novelties of our new methods are as follows. First, our methods are robust and optimal in the generalized Neyman-Pearson sense. Second, we propose a two-stage method to estimate Probability Laws (PLs) that characterize normal system behaviors. Our two-stage method transforms a hard problem (i.e., estimating PLs for \emph{multi-dimensional} data) into two well-studied problems: $(i)$ estimating \emph{one-dimensional} data parameters and $(ii)$ the \emph{set cover} problem. Being concise and interpretable, our estimated PLs are helpful not only in anomaly detection but also in understanding normal system behavior. The structure of the paper is as follows. Sec.~\ref{sec:Binary-Composite-Hypothesis} formulates system anomaly detection as a binary composite hypothesis testing problem and proposes two robust methods. Sec.~\ref{sec:Network-Anomaly-Detection} applies the methods presented in Sec.~\ref{sec:Binary-Composite-Hypothesis}. Sec.~\ref{sec:Network-Simulation} explains the simulation setup and presents results from our robust methods as well as their vanilla counterparts. Finally, Sec.~\ref{sec:Conclusions} provides concluding remarks. \section{Binary composite hypothesis testing\label{sec:Binary-Composite-Hypothesis}} We model the network environment as a stochastic process and estimate its parameters through some reference traffic (viewed as sample paths). Then the problem of network anomaly detection is equivalent to testing whether a sequence of observations $\mathcal{G}=\ensuremath{}\{g^{1},\ldots,g^{n}\ensuremath{}\}$ is a sample path of a discrete-time stochastic process $\mathscr{G}=\ensuremath{}\{G^{1},\ldots,G^{n}\ensuremath{}\}$ (hypothesis $\mathcal{H}_0$). All random variables $G^{i}$ are discrete and their sample space is a finite alphabet $\Sigma=\ensuremath{}\{\sigma_{1},\sigma_{2},\dots,\sigma_{|\Sigma|}\ensuremath{}\}$, where $|\Sigma|$ denotes the cardinality of $\Sigma$. All observed symbols $g^{i}$ belong to $\Sigma$, too. This problem is a \emph{binary composite hypothesis testing problem}. Because the joint distribution of all random variables $G^i$ in $\mathscr{G}$ becomes complex when $n$ is large, we propose two types of simplification. \subsection{A model-free method\label{sub:A-Model-Free-Approach}} We propose a \emph{model-free} method that assumes the random variables $G^{i}$ are i.i.d. Each $G^i$ takes the value $\sigma_{j}$ with probability $p_{\theta}^{F}(G^i=\sigma_{j})$, $j=1,\ldots,|\Sigma|$, which is parameterized by $\theta\in\Omega$. We refer to the vector $\mathbf{p}_{\theta}^{F}=(p_{\theta}^{F}(G^i=\sigma_{1}),\ldots, p_{\theta}^{F}(G^i=\sigma_{|\Sigma|}))$ as the {\em model-free} Probability Law (PL) associated with $\theta$. Then the family of \emph{model-free} PLs $\mathscr{P}^{F}=\left\{ \mathbf{p}_{\theta}^{F}:\theta\in\Omega\right\} $ characterizes the stochastic process $\mathscr{G}$. To characterize the observation $\mathcal{G}$, let \begin{equation} \mathscr{E}_{F}^{\mathcal{G}}\ensuremath{}(\sigma_{j}\ensuremath{})=\frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\ensuremath{}(g^{i}=\sigma_{j}\ensuremath{}), \qquad j=1,\ldots,|\Sigma|, \label{eq:mf-em} \end{equation} where $\mathbf{1}(\cdot)$ is an indicator function. 
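For concreteness, the quantity in Eq.~(\ref{eq:mf-em}) can be computed in a few lines; the sketch below assumes, purely for illustration, that the symbols of $\Sigma$ are encoded as the integers $0,\dots,|\Sigma|-1$.

\begin{verbatim}
import numpy as np

def model_free_empirical_measure(g, alphabet_size):
    # Eq. (eq:mf-em): fraction of observations equal to each symbol;
    # g is an integer array encoding the observed symbols.
    counts = np.bincount(np.asarray(g), minlength=alphabet_size)
    return counts / float(len(g))
\end{verbatim}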
Then, an estimate for the underlying \emph{model-free} PL based on the observation $\mathcal{G}$ is $\boldsymbol{\mathcal{E}}_{F}^{\mathcal{G}}=\left\{ \mathscr{E}_{F}^{\mathcal{G}}(\sigma_{j}):\ j=1,\dots,|\Sigma|\right\} $, which is called the \emph{model-free} empirical measure of $\mathcal{G}$. Suppose $\boldsymbol{\mu}=(\mu(\sigma_{1}), \ldots, \mu(\sigma_{|\Sigma|}))$ is a \emph{model-free} PL and $\boldsymbol{\nu}=( \nu(\sigma_{1}),\ldots,\nu(\sigma_{|\Sigma|}))$ is a \emph{model-free} empirical measure. To quantify the difference between $\boldsymbol{\mu}$ and $\boldsymbol{\nu}$, we define the \emph{model-free} \emph{divergence} between $\boldsymbol{\mu}$ and $\boldsymbol{\nu}$ as \begin{equation} D_{F}(\boldsymbol{\nu}\Vert\boldsymbol{\mu})\triangleq \sum_{j=1}^{|\Sigma|}\hat{\nu}(\sigma_{j})\log\frac{\hat{\nu}(\sigma_{j})}{\hat{\mu}(\sigma_{j})}, \label{eq:model-free-cross-entropy} \end{equation} where $\hat{\nu}(\sigma_{j})=\max(\nu(\sigma_{j}),\varepsilon)$ and $\hat{\mu}(\sigma_{j})=\max(\mu(\sigma_{j}),\varepsilon)$ for all $j$, and $\varepsilon$ is a small positive constant introduced to avoid underflow and division by zero. \begin{defi} \label{def:mf-GHT}(Model-Free Generalized Hoeffding Test). The \emph{model-free generalized Hoeffding test}~\cite{hoef65} is to reject $\mathcal{H}_{0}$ if $\mathcal{G}$ is in \[ S_{F}^{*}=\{\mathcal{G}\mid\inf_{\theta\in\Omega}\, D_{F}(\boldsymbol{\mathcal{E}}_{F}^{\mathcal{G}}\Vert\mathbf{p}_{\theta}^{F})\geq\lambda\}, \] where $\lambda$ is a detection threshold and $\inf_{\theta\in\Omega}\, D_{F}(\boldsymbol{\mathcal{E}}_{F}^{\mathcal{G}}\Vert\mathbf{p}_{\theta}^{F})$ is referred to as the \emph{generalized model-free divergence} between $\boldsymbol{\mathcal{E}}_{F}^{\mathcal{G}}$ and $\mathscr{P}^{F}=\left\{ \mathbf{p}_{\theta}^{F}:\theta\in\Omega\right\}$. \end{defi} A similar definition has been proposed for robust localization in sensor networks~\cite{mainlocalization}. One can show that this generalized Hoeffding test is asymptotically (as $n\rightarrow \infty$) optimal in a generalized Neyman-Pearson sense; we omit the technical details in the interest of space. \subsection{A model-based method\label{sub:A-Model-Based-Approach}} We now turn to the \emph{model-based} method where the random process $\mathscr{G}=\{G^{1},\ldots,G^{n}\}$ is assumed to be a Markov chain.
Under this assumption, the joint distribution of $\mathscr{G}$ becomes $p_{\theta}\left(\mathscr{G}=\mathcal{G}\right) = p_{\theta}^{B}\left(g^{1}\right)\prod_{i=1}^{n-1}p_{\theta}^{B}\left(g^{i+1}\mid g^{i}\right)$, where $p_{\theta}^{B}(\cdot)$ is the initial distribution and $p_{\theta}^{B}\left(\cdot\mid\cdot\right)$ is the transition probability; all parametrized by $\theta\in\Omega$. Let $p_{\theta}^{B}\left(\sigma_{i},\sigma_{j}\right)$ be the probability of seeing two consecutive states $(\sigma_{i},\sigma_{j})$. We refer to the matrix $\mathbf{P}_{\theta}^{B}=\{ p_{\theta}^{B}(\sigma_{i},\sigma_{j})\}_{i,j=1}^{|\Sigma|} $ as the \emph{model-based} PL associated with $\theta\in\Omega$. Then, the family of \emph{model-based} PLs $\mathscr{P}^{B}=\left\{ \mathbf{P}_{\theta}^{B}:\theta\in\Omega\right\} $ characterizes the stochastic process $\mathscr{G}$. To characterize the observation $\mathcal{G}$, let \begin{equation} \mathscr{E}_{B}^{\mathcal{G}}\ensuremath{}(\sigma_{i},\sigma_{j}\ensuremath{})=\frac{1}{n}\sum_{l=2}^{n} \mathbf{1}\ensuremath{}(g^{l-1}=\sigma_{i}, g^{l}=\sigma_{j}\ensuremath{}), i,j=1,\ldots,\ensuremath{}|\Sigma\ensuremath{}|. \label{eq:mb-em} \end{equation} We define the \emph{model-based} empirical measure of $\mathcal{G}$ as the matrix $\boldsymbol{\mathcal{E}}_{B}^{\mathcal{G}}=\{ \mathscr{E}_{B}^{\mathcal{G}}(\sigma_{i},\sigma_{j})\}_{i,j=1}^{|\Sigma|}$. The transition probability from $\sigma_{i}$ to $\sigma_{j}$ is simply $\mathscr{E}_{B}^{\mathcal{G}}\ensuremath{}(\sigma_{j}|\sigma_{i}\ensuremath{})=\frac{\mathscr{E}_{B}^{\mathcal{G}}\ensuremath{}(\sigma_{i},\sigma_{j}\ensuremath{})}{\sum_{j=1}^{\ensuremath{}|\Sigma\ensuremath{}|}\mathscr{E}_{B}^{\mathcal{G}}\ensuremath{}(\sigma_{i},\sigma_{j}\ensuremath{})}$. Suppose $\boldsymbol{\Pi}=\{\pi(\sigma_{i},\sigma_{j})\}_{i,j=1}^{|\Sigma|}$ is a \emph{model-based} PL and $\mathbf{Q}=\{ q(\sigma_{i},\sigma_{j})\}_{i,j=1}^{|\Sigma|}$ is a \emph{model-based} empirical measure. Let $\hat{\pi}(\sigma_{j}|\sigma_{i})$ and $\hat{q}(\sigma_{j}|\sigma_{i})$ be the corresponding transition probabilities from $\sigma_{i}$ to $\sigma_{j}$. Then, the \emph{model-based divergence} between ${\boldsymbol \Pi}$ and $\mathbf{Q}$ is \begin{equation} D_{B}\ensuremath{}(\mathbf{Q}\parallel\boldsymbol{\Pi}\ensuremath{})=\sum_{i=1}^{\ensuremath{}|\Sigma\ensuremath{}|}\sum_{j=1}^{\ensuremath{}|\Sigma\ensuremath{}|}\hat{q}(\sigma_{i},\sigma_{j})\log\frac{\hat{q}(\sigma_{j}|\sigma_{i})}{\hat{\pi}(\sigma_{j}|\sigma_{i})},\label{eq:model-based-cross-entropy} \end{equation} where $\hat{q}(\sigma_{i},\sigma_{j})=\max(q(\sigma_{i},\sigma_{j}),\varepsilon)$, $\hat{\pi}(\sigma_{i},\sigma_{j})=\max(\pi(\sigma_{i},\sigma_{j}),\varepsilon)$ for some small positive constant $\varepsilon$ introduced to avoid underflow and division by zero. Similar to the \emph{model-free} case, we present the following definition: \begin{defi} \label{def:mb_GHT}(Model-Based Generalized Hoeffding Test). 
Similar to the \emph{model-free} case, we present the following definition:
\begin{defi}
\label{def:mb_GHT}(Model-Based Generalized Hoeffding Test). The \emph{model-based generalized Hoeffding test} rejects $\mathcal{H}_{0}$ when $\mathcal{G}$ is in
\[
S_{B}^{*}=\left\{ \mathcal{G}\mid\inf_{\theta\in\Omega}D_{B}(\boldsymbol{\mathcal{E}}_{B}^{\mathcal{G}}\Vert\mathbf{P}_{\theta}^{B})\geq\lambda\right\},
\]
where $\lambda$ is a detection threshold and $\inf_{\theta\in\Omega}D_{B}(\boldsymbol{\mathcal{E}}_{B}^{\mathcal{G}}\Vert\mathbf{P}_{\theta}^{B})$ is referred to as the \emph{generalized model-based divergence} between $\boldsymbol{\mathcal{E}}_{B}^{\mathcal{G}}$ and $\mathscr{P}^{B}=\left\{ \mathbf{P}_{\theta}^{B}:\theta\in\Omega\right\}$.
\end{defi}
In this case as well, asymptotic (generalized) Neyman-Pearson optimality can be established.
\begin{comment}
For a long trace $\mathcal{G}$, its empirical measure is ``close to'' $\mathbf{Q}$ with probability that behaves as
\[
\mathbf{P}\left[\boldsymbol{\mathcal{E}}^{\mathcal{G}}\approx\mathbf{Q}\right]\asymp e^{-nD(\boldsymbol{\nu}\parallel\mathbf{P}_{\theta})}.
\]
We will refer to the exponent $D(\boldsymbol{\nu}\parallel\mathbf{P}_{\theta})$ as the \emph{exponential decay rate} of $\mathbf{P}\left[\boldsymbol{\mathcal{E}}^{\mathcal{G}}\approx\boldsymbol{\nu}\right]$.
\begin{thm}
\label{thm:model-based-GNP}The model-based generalized Hoeffding test satisfies the GNP criterion.\end{thm}
\end{comment}
\section{Network anomaly detection \label{sec:Network-Anomaly-Detection}}
Fig.~\ref{fig:method-struc} outlines the structure of our robust anomaly detection methods. We first propose our feature set (Sec.~\ref{data-representation}). We assume that the normal traffic is governed by an underlying stochastic process $\mathscr{G}$, that the \emph{model-free} and \emph{model-based} PL families are finite, and we propose a two-step procedure to estimate the PLs from reference data. We first inspect each feature separately to generate a family of candidate PLs (Sec.~\ref{sub:Rough-Estimation-PL}), which is then reduced to a smaller family of PLs (Sec.~\ref{sub:pl-refinement}). For each window, the algorithm applies the \emph{model-free} and \emph{model-based} \emph{generalized Hoeffding tests} discussed above.
\begin{figure}
\begin{center}
\includegraphics[width=0.63\columnwidth,height=4cm]{detector_flow_chart}
\vspace{-0.7cm}
\end{center}
\caption{Structure of the algorithms.\label{fig:method-struc}}
\vspace{-0.5cm}
\end{figure}
\subsection{Data representation\label{data-representation}}
In this paper, we focus on \emph{host-based anomaly detection}, a specific application in which we monitor the incoming and outgoing packets of a server. We assume that the server provides only one service (e.g., an HTTP server) and that the remaining ports are either closed or of no interest. As a result, we only monitor traffic on a certain port (e.g., port 80 for HTTP). For servers with multiple ports that need monitoring, we can simply run our methods on each port. The features we propose for this particular application relate to a flow representation slightly different from that of commercial vendors such as Cisco NetFlow~\cite{netflow}. Hereafter, we will use ``flows'', ``traffic'', and ``data'' interchangeably. Let $\mathcal{S}=\{\mathbf{s}^{1},\dots,\mathbf{s}^{|\mathcal{S}|}\}$ denote the collection of all packets collected on the monitored port of the host. In \emph{host-based anomaly detection}, the server IP is always fixed and is thus ignored.
Denote the user IP address in packet ${\mathbf s}^{i}$ by ${\mathbf x}^{i}$, whose format will be discussed later. The size of ${\mathbf s}^{i}$ is $b^{i}\in[0,\infty)$ bytes and its transmission start time is $t_{s}^{i}\in[0,\infty)$ seconds. Using this convention, packet ${\mathbf s}^{i}$ can be represented as $({\mathbf x}^{i},b^{i},t_{s}^{i})$ for all $i=1,\dots,|\mathcal{S}|$. We compile a sequence of packets ${\mathbf s}^{1},\dots,{\mathbf s}^{m}$ with $t_{s}^{1}<\dots<t_{s}^{m}$ into a \emph{flow} $\mathbf{f}=({\mathbf x},b,d_{t},t)$ if ${\mathbf x}={\mathbf x}^{1}=\dots={\mathbf x}^{m}$ and $t_{s}^{i}-t_{s}^{i-1}<\delta_{F}$ for $i=2,\dots,m$ and some prescribed $\delta_{F}\in(0,\infty)$. Here, the \emph{flow size} $b$ is the sum of the sizes of the packets that comprise the flow. The \emph{flow duration} is $d_{t}=t_{s}^{m}-t_{s}^{1}$. The \emph{flow transmission time} $t$ equals the start time $t_{s}^{1}$ of the first packet of the flow. In this way, we translate the large collection of packets $\mathcal{S}$ into a relatively small collection of flows $\mathcal{F}$. Suppose ${\mathcal X}$ is the set of unique IP addresses in ${\mathcal F}$. Viewing each IP address as a tuple of integers, we apply standard $K$-means clustering to ${\mathcal X}$. For each ${\mathbf x}\in{\mathcal X}$, we thus obtain a cluster label $k({\mathbf x})$. Suppose the cluster center for cluster $k$ is $\bar{{\mathbf x}}^{k}$; then the distance of ${\mathbf x}$ to its cluster center is $d_{a}({\mathbf x})=d({\mathbf x},\bar{{\mathbf x}}^{k({\mathbf x})})$, for some appropriate distance metric. The cluster label $k(\mathbf{x})$ and the distance to the cluster center $d_{a}(\mathbf{x})$ are used to identify a user IP address $\mathbf{x}$, leading to our final representation of a flow:
\begin{equation}
\mathbf{f}=(k({\mathbf x}),d_{a}({\mathbf x}),b,d_{t},t).\label{eq:flow_distill_def}
\end{equation}
For each $\mathbf{f}$, we quantize $d_{a}(\mathbf{x})$, $b$, and $d_{t}$ to discrete values. Each tuple $\left(k({\mathbf x}),d_{a}({\mathbf x}),b,d_{t}\right)$ then corresponds to a symbol in $\Sigma=\{1,\dots,K\}\times\Sigma_{d_{a}}\times\Sigma_{b}\times\Sigma_{d_{t}}$, where $\Sigma_{d_{a}}$, $\Sigma_{b}$, and $\Sigma_{d_{t}}$ are the quantization alphabets for the distance to the cluster center, the flow size, and the flow duration, respectively. Denoting by $\mathbf{g}$ the quantized symbol corresponding to $\mathbf{f}$ and by $\mathcal{G}$ the counterpart of $\mathcal{F}$, we refer to the components of $\mathbf{g}$ corresponding to $k({\mathbf x})$, $d_{a}({\mathbf x})$, $b$, and $d_{t}$ as features 1, 2, 3, and 4, respectively. In our methods, flows in $\mathcal{F}$ are further aggregated into windows based on their \emph{flow transmission times}. A window is a detection unit that consists of flows in a continuous time range, i.e., the flows in the same window are evaluated together. Let $h$ be the interval between the start points of two consecutive time windows and $w_{s}$ be the window size.
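To make the above representation concrete, the following Python sketch compiles the time-ordered packets of a single user IP into flows using the gap threshold $\delta_{F}$ and maps each flow to a symbol of $\Sigma$. It is an illustration only: the helper names, the use of scikit-learn's \texttt{KMeans}, and the bin edges passed to \texttt{numpy.digitize} are our assumptions rather than prescribed choices.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def compile_flows(packets, delta_f):
    # packets: list of (x, b, t_s) for one user IP, sorted by start time t_s
    flows, cur = [], [packets[0]]
    for pkt in packets[1:]:
        if pkt[2] - cur[-1][2] < delta_f:
            cur.append(pkt)                  # same flow: gap below delta_F
        else:
            flows.append(_to_flow(cur))
            cur = [pkt]
    flows.append(_to_flow(cur))
    return flows

def _to_flow(pkts):
    # flow = (x, b, d_t, t): size is the sum of packet sizes, duration is
    # last start minus first start, t is the start time of the first packet
    x, t0 = pkts[0][0], pkts[0][2]
    return (x, sum(p[1] for p in pkts), pkts[-1][2] - t0, t0)

def quantize(flows, K, bins_da, bins_b, bins_dt):
    # cluster the unique user IPs (each viewed as a tuple of integers), then
    # map each flow to a symbol (k, q_da, q_b, q_dt) in Sigma
    unique_ips = np.array(sorted({f[0] for f in flows}))
    km = KMeans(n_clusters=K, n_init=10).fit(unique_ips)
    symbols = []
    for x, b, d_t, _t in flows:
        k = int(km.predict(np.array([x]))[0])
        d_a = np.linalg.norm(np.array(x) - km.cluster_centers_[k])
        symbols.append((k, int(np.digitize(d_a, bins_da)),
                        int(np.digitize(b, bins_b)),
                        int(np.digitize(d_t, bins_dt))))
    return symbols
\end{verbatim}
Aggregating the resulting symbols into windows of length $w_{s}$, shifted by $h$, yields the per-window sequences used by the tests.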
\subsection{Anomaly detection for dynamic networks\label{sub:Candidate-Models}}
For each window $j$, an empirical measure of $\mathcal{G}_{j}$ is calculated. We then leverage the \emph{model-free} and \emph{model-based} generalized Hoeffding tests (Def.~\ref{def:mf-GHT} and \ref{def:mb_GHT}), which require the sets of PLs $\{\mathbf{p}_{\theta}^{F}:\theta\in\Omega\}$ and $\{\mathbf{P}_{\theta}^{B}:\theta\in\Omega\}$. We assume $|\Omega|$ to be finite and divide our reference traffic $\mathcal{G}_{ref}$ into segments such that the traffic of each segment is governed by the same PL. The empirical measure of each segment then serves as a PL. Two flows are likely to be governed by the same PL if their \emph{flow transmission times} are close. In addition, if the properties of the normal traffic change periodically, two flows are also likely to be governed by the same PL when the difference of their \emph{flow transmission times} is close to the period. Let $t_{p}$ be the period and let $t_{d}$ be a window size characterizing the speed of change of the normal pattern. We could divide each period into $\lfloor t_p / t_d \rfloor$ segments of length $t_d$ and combine the corresponding segments of different periods, resulting in $\lfloor t_p / t_d \rfloor$ PLs. In practical networks, however, the period may vary with time, which makes it hard to estimate $t_p$ and $t_d$ accurately. To make the set of estimated PLs robust to these non-stationarities, we first propose a large collection of candidates (Sec.~\ref{sub:Rough-Estimation-PL}) and then refine it (Sec.~\ref{sub:pl-refinement}).
\subsection{Estimation of $t_d$ and $t_p$\label{sub:Rough-Estimation-PL}}
This section presents a procedure to estimate $t_d$ and $t_p$ by inspecting each feature separately. Recall that each quantized flow consists of the quantized values of a cluster label, a distance to the cluster center, a flow size, and a flow duration, which are called features $1,\dots,4$, respectively. We say that a quantized flow $\mathbf{g}$ belongs to \emph{channel} $a\text{--}b$ if feature $a$ of $\mathbf{g}$ equals symbol $b$ in the quantization alphabet of feature $a$. We first analyze each channel separately to get a rough estimate of $t_{d}$ and $t_{p}$. Then, the channels corresponding to the same feature are averaged to generate a combined estimate. For all flows in \emph{channel} $a\text{--}b$, we calculate the intervals between consecutive flows. Most of the intervals will be very small: if we divide the interval lengths into several bins and compute the histogram, i.e., the number of observed intervals in each bin, the histogram is heavily skewed toward small interval lengths. $t_{d}$ can then be chosen as the interval length of the first bin (corresponding to the smallest interval length) whose frequency in the histogram is less than a threshold. In addition, there may be some large intervals if the feature is periodic. Fig.~\ref{fig:illustration-peaks-region-3} shows an example of a feature that exhibits periodicity. There will be two peaks, around $t_{p1}$ and $t_{p2}$, in the histogram of intervals for flows whose values are between the two dashed lines. We can then select $t_{p}$ such that $\left(t_{p1}+t_{p2}\right)/2\thickapprox t_{p}/2$. There can be a single peak or more than two peaks due to noise in the network; in either case, we choose the average of all peaks as an estimate of $t_{p}/2$.
\begin{figure}
\centering\includegraphics[width=7cm]{QuantizeStateRoughEstimationPeriodConti}\vspace{-0.6cm}
\caption{Illustration of the peaks in periodic networks. \label{fig:illustration-peaks-region-3}}
\vspace{-0.5cm}
\end{figure}
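The per-channel analysis just described can be sketched as follows (Python; illustrative only). The bin width, the frequency threshold, and the use of SciPy's \texttt{find\_peaks} are our assumptions; in the actual procedure these are tuning choices.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

def estimate_td_tp(times, bin_width=60.0, freq_thresh=5):
    # times: flow transmission times (seconds) of all flows in one channel a-b
    intervals = np.diff(np.sort(times))
    bins = np.arange(0.0, intervals.max() + bin_width, bin_width)
    hist, edges = np.histogram(intervals, bins=bins)
    # t_d: interval length of the first bin whose frequency drops below the threshold
    low = np.nonzero(hist < freq_thresh)[0]
    t_d = edges[low[0] + 1] if low.size else edges[-1]
    # t_p: peaks among the large intervals; their average is roughly t_p / 2
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks, _ = find_peaks(hist)
    peaks = [p for p in peaks if centers[p] > t_d]
    if not peaks:
        return t_d, None    # no large-interval peak: channel looks non-periodic
    return t_d, 2.0 * float(np.mean(centers[peaks]))
\end{verbatim}
Averaging the per-channel outputs over the channels of each feature gives the feature-level estimates $t_{d}^{a}$ and $t_{p}^{a}$, as described next.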
If no channel of feature $a$ reports a $t_p$, the network is considered non-periodic according to feature $a$. Otherwise, the estimate of $t_p$ for feature $a$ (denoted by $t_p^a$) is simply the average of the estimates over all channels of feature $a$. Although the estimate from a single channel is usually quite inaccurate, this averaging improves the accuracy. Similarly, the estimate of $t_d$ for feature $a$ (denoted by $t_d^a$) is the average of the estimates over all channels of feature $a$. For each feature $a$, we then generate PLs using the estimates $t_d^a$ and $t_p^a$. In case some prior knowledge of $t_{d}$ and $t_{p}$ is available, the family of candidate PLs can also include PLs calculated based on this prior knowledge.
\subsection{PL refinement with integer programming\label{sub:pl-refinement}}
The larger the family of PLs used in the generalized hypothesis tests, the more likely we are to overfit $\mathcal{G}_{ref}$, leading to poor detection results. Furthermore, a smaller family of PLs reduces the computational cost. This section introduces a method to refine the family of candidate PLs. For simplicity, we only describe the procedure for the \emph{model-free} method; the procedure for the \emph{model-based} method is similar. Hereafter, the divergence between a collection of flows and a PL means the divergence between the empirical measure of these flows and the PL. Suppose the family of candidate PLs is the set $\mathcal{P}=\{\mathbf{p}_{1}^{F},\dots,\mathbf{p}_{N}^{F}\}$ of cardinality $N$. Because no alarm should be reported for $\mathcal{G}_{ref}$, or for any segment of $\mathcal{G}_{ref}$, our \emph{primary objective} is to choose the smallest set $\mathscr{P}^{F}\subseteq\mathcal{P}$ such that no alarm is raised for $\mathcal{G}_{ref}$. We aggregate $\mathcal{G}_{ref}$ into $M$ windows using the techniques of Sec.~\ref{data-representation} and denote the data in window $i$ by $\mathcal{G}_{ref}^{i}$. Let $D_{ij}=D_{F}(\boldsymbol{\mathcal{E}}^{\mathcal{G}_{ref}^{i}}\parallel\mathbf{p}_{j}^{F})$ be the divergence between the flows in window $i$ and PL $j$, for $i=1,\ldots,M$ and $j=1,\ldots,N$. We say that window $i$ is covered (namely, reported as normal) by PL $j$ if $D_{ij}\leq\lambda$. With this definition, the primary objective becomes selecting the minimum number of PLs that cover all the windows. There may be more than one subset of $\mathcal{P}$ with the same cardinality that covers all windows. We therefore propose a \emph{secondary objective} characterizing the variation of a set of PLs. Denote by $\mathscr{D}_j$ the set of intervals between consecutive windows covered by PL $j$. The \emph{coefficient of variation} for PL $j$ is defined as $c_{v}^{j}=\textsc{Std}(\mathscr{D}_{j})/\textsc{Mean}(\mathscr{D}_{j})$, where $\textsc{Std}(\mathscr{D}_{j})$ and $\textsc{Mean}(\mathscr{D}_{j})$ are the sample standard deviation and mean of the set $\mathscr{D}_{j}$, respectively. A smaller \emph{coefficient of variation} means that the PL is more ``regular.'' We formulate PL refinement as a \emph{weighted set cover problem} in which the weight of PL $j$ is $1+\gamma c_{v}^{j}$, where $\gamma$ is a small weight for the secondary objective. Let $x_{j}$ be the $0\text{--}1$ variable indicating whether PL $j$ is selected, and let $\mathbf{x}=(x_1,\ldots, x_N)$. Let $\mathbf{A}=\{a_{ij}\}$ be an $M\times N$ matrix whose $(i,j)$th element $a_{ij}$ is set to $1$ if $D_{ij}\leq\lambda$ and to $0$ otherwise. Here, $\lambda$ is the same threshold used in Def.~\ref{def:mf-GHT}. Let $\mathbf{c}_{v}=(c_{v}^{1},\ldots,c_{v}^{N})$.
The selection of PLs can then be formulated as the following integer programming problem:
\begin{equation}
\begin{array}{rl}
\min & \mathbf{1}^{'}\mathbf{x}+\gamma\mathbf{c}_{v}^{'}\mathbf{x}\\
\text{s.t.} & \mathbf{A}\mathbf{x}\geq\mathbf{1},\\
 & x_{j}\in\{0,1\},\ j=1,\dots,N,
\end{array}\label{eq:lp-PL-selection}
\end{equation}
where $\mathbf{1}$ is a vector of ones. The cost function is a weighted sum of the \emph{primary cost} $\mathbf{1}^{'}\mathbf{x}$ and the \emph{secondary cost} $\mathbf{c}_{v}^{'}\mathbf{x}$. The first constraint enforces that no alarm is raised for $\mathcal{G}_{ref}^i$ for all $i$.
\begin{algorithm}
\input{HeuristicRefine.tex}
\caption{Greedy algorithm for PL refinement.\label{alg:solve_PL_refine}}
\end{algorithm}
\begin{comment}
\begin{algorithm}
\input{GreedySolve.tex}
\caption{Greedy Procedure to Solve Set Cover Problem\label{alg:greedy_solve_set_cover}}
\end{algorithm}
\end{comment}
Because (\ref{eq:lp-PL-selection}) is NP-hard, we propose a \emph{heuristic algorithm} to solve it~(Algorithm~\ref{alg:solve_PL_refine}). $\textsc{HeuristicRefinePl}$ is the main procedure, whose parameters are $\mathbf{A}$, $\mathbf{c}_{v}$, a discount ratio $r<1$, and a termination threshold $\gamma_{th}$. In each iteration, the algorithm decreases $\gamma$ by the ratio $r$ and calls the $\textsc{GreedySetCover}$ procedure to solve (\ref{eq:lp-PL-selection}). The algorithm terminates when $\gamma<\gamma_{th}$. In the initial iterations, the weight $\gamma$ of the secondary cost is large, so the algorithm explores solutions that select PLs with less variation. Later, the weight $\gamma$ decreases to ensure that the primary objective plays the main role. The parameters $\gamma_{th}$ and $r$ determine the algorithm's degree of exploration, which helps avoid local minima; in practice, one can choose a small $\gamma_{th}$ and a large $r$ if sufficient computational power is available. $\textsc{GreedySetCover}$ uses as its heuristic the ratio of the number of uncovered windows a PL can cover to its cost $1+\gamma c_{v}$, where $c_{v}$ is the corresponding \emph{coefficient of variation}. $\textsc{GreedySetCover}$ repeatedly adds the PL with the maximum heuristic value to $\mathscr{P}^{F}$ until all windows are covered by the PLs in $\mathscr{P}^{F}$. Suppose the return value of $\textsc{HeuristicRefinePl}$ is $\mathbf{x}^{*}$. Then, the refined family of PLs is $\mathscr{P}^{F}=\left\{ \mathbf{p}_{j}^{F}:x_{j}^{*}>0,\ j=1,\dots,N\right\}$.
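For illustration, the structure of Algorithm~\ref{alg:solve_PL_refine} can be sketched in Python as follows. The initial weight $\gamma_{0}$, the rule for keeping the best solution across iterations, and the variable names are our assumptions; only the greedy ratio and the $\gamma$-discounting schedule come from the description above.
\begin{verbatim}
import numpy as np

def greedy_set_cover(A, c_v, gamma):
    # A: M x N 0-1 matrix, A[i, j] = 1 iff window i is covered by PL j;
    # returns the indices of the PLs chosen by the greedy heuristic
    M, N = A.shape
    uncovered, chosen = set(range(M)), []
    while uncovered:
        # heuristic: newly covered windows per unit cost 1 + gamma * c_v
        gains = [sum(A[i, j] for i in uncovered) / (1.0 + gamma * c_v[j])
                 for j in range(N)]
        j_star = int(np.argmax(gains))
        if gains[j_star] == 0:
            raise ValueError("some window is not covered by any candidate PL")
        chosen.append(j_star)
        uncovered = {i for i in uncovered if not A[i, j_star]}
    return chosen

def heuristic_refine_pl(A, c_v, r=0.5, gamma_th=1e-3, gamma0=10.0):
    # decrease gamma geometrically; keep the lowest-cost cover found
    best, best_cost, gamma = None, np.inf, gamma0
    while gamma >= gamma_th:
        sel = greedy_set_cover(A, c_v, gamma)
        cost = len(sel) + gamma_th * sum(c_v[j] for j in sel)
        if cost < best_cost:
            best, best_cost = sel, cost
        gamma *= r
    return best
\end{verbatim}
The selected indices define the refined family $\mathscr{P}^{F}$, in the same way as the return value of $\textsc{HeuristicRefinePl}$.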
\section{Simulation results\label{sec:Network-Simulation}}
The lack of data with annotated anomalies is a common problem in validating network anomaly detection methods. We developed an open-source software package, SADIT~\cite{sadit}, to provide flow-level datasets with annotated anomalies. Based on the \emph{fs} simulator~\cite{Sommers2011}, SADIT efficiently simulates normal and abnormal flows in a network. Our simulated network consists of an internal network and several Internet nodes. The internal network consists of 8 normal nodes \emph{CT1}-\emph{CT8} and 1 server \emph{SRV} containing sensitive information. There are also three Internet nodes \emph{INT1}-\emph{INT3} that access the internal network through a gateway (\emph{GATEWAY}). For all links, the link capacity is $10$~Mb/s and the delay is $0.01$~s. All internal and Internet nodes communicate with \emph{SRV}, and there is no communication between other nodes. The normal flows from all nodes to \emph{SRV} have the same characteristics: the size of a normal flow follows a Gaussian distribution $N(m(t),\sigma^{2})$, and the arrival process of flows is a Poisson process with arrival rate $\lambda(t)$; both $m(t)$ and $\lambda(t)$ change with time $t$.
\begin{figure}[!tp]
\begin{center}
\includegraphics[width=7cm,height=6cm]{PLRefine_FlowSizeArrival-model-free}
\par\end{center}
\vspace{-0.5cm}
\caption{Results of PL refinement for the \emph{model-free} method in a network with a \emph{diurnal pattern}\label{fig:pl-ident-mf}. All subfigures share the $x$-axis. (A) and (B) plot the divergence of the traffic in each window with all candidate PLs and with the selected PLs, respectively. (C) shows the \emph{active} PL for each window. (D) plots the \emph{generalized divergence} of the traffic in each window with all candidate PLs and with the selected PLs.}
\vspace{-0.5cm}
\end{figure}
\begin{figure}[!tp]
\begin{center}
\includegraphics[width=7cm,height=6cm]{PLRefine_FlowSizeArrival-model-base}
\par\end{center}
\vspace{-0.5cm}
\caption{Results of PL refinement for the \emph{model-based} method in a network with a \emph{diurnal pattern}\label{fig:pl-ident-mb}. All subfigures share the $x$-axis. (A) and (B) plot the divergence of the traffic in each window with all candidate PLs and with the selected PLs, respectively. (C) shows the \emph{active} PL for each window. (D) plots the \emph{generalized divergence} of the traffic in each window with all candidate PLs and with the selected PLs.}
\vspace{-0.4cm}
\end{figure}
We assume the flow arrival rate and the mean flow size follow the same \emph{diurnal pattern}. Let $p(t)$ be the normalized average traffic to American social websites~\cite{akamai}, which varies diurnally, and assume $\lambda(t)=\Lambda p(t)$ and $m(t)=M_{p}p(t)$, where $\Lambda$ and $M_{p}$ are the peak arrival rate and the peak mean flow size. In our simulation, we set $M_{p}=4$~Mb, $\sigma^{2}=0.01$, and $\Lambda=0.1$~fps (flows per second) for all users. Using this \emph{diurnal pattern}, we generate reference traffic $\mathcal{G}_{ref}$ for one week~(168 hours), starting at 5~pm. For window aggregation, both the window size $w_{s}$ and the interval $h$ between two consecutive windows are $2{,}000$~s. The number of user clusters is $K=2$. The numbers of quantization levels for features 2, 3, and 4 are $2$, $2$, and $8$, respectively. The estimation procedure of Sec.~\ref{sub:Rough-Estimation-PL} is applied to estimate $t_{d}$ and $t_{p}$; the estimate of the period based on flow size is $t_{p}^{3}=24.56$~h, with only a $2.3\%$ error.
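For reference, the traffic model above can be sketched as follows (Python; illustrative only). The sinusoidal profile stands in for the normalized diurnal curve $p(t)$ of~\cite{akamai}, and the thinning-based sampling of the non-homogeneous Poisson process is our choice; SADIT performs this generation internally.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def p(t):
    # normalized diurnal profile in (0, 1]; a stand-in for the curve of [akamai]
    return 0.55 + 0.45 * np.sin(2.0 * np.pi * (t / 86400.0 - 0.25))

def generate_flows(T, Lambda=0.1, M_p=4.0, sigma2=0.01):
    # one source: Poisson arrivals with rate Lambda * p(t) flows/s (by thinning),
    # flow sizes ~ N(M_p * p(t), sigma2), sizes in Mb
    flows, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / Lambda)   # candidate arrival at the peak rate
        if t >= T:
            break
        if rng.random() < p(t):              # accept with probability p(t)
            size = max(rng.normal(M_p * p(t), np.sqrt(sigma2)), 0.0)
            flows.append((t, size))
    return flows

ref_traffic = generate_flows(T=168 * 3600)   # one week of reference flows
\end{verbatim}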
\subsection{PL refinement}
For the \emph{model-free} method, there are $64$ candidate \emph{model-free} PLs. The \emph{model-free divergence} between each window and each candidate PL is also a periodic function of time. Some PLs have smaller divergence during the day and others have smaller divergence during the night~(cf.\ Fig.~\ref{fig:pl-ident-mf}A); however, no single PL has small divergence for all windows. Three PLs out of the $64$ candidates are selected when the detection threshold is $\lambda=0.6$~(cf.\ Fig.~\ref{fig:pl-ident-mf}B). The three selected PLs are active during the day, the night, and the \emph{transitional time}, respectively~(cf.\ Fig.~\ref{fig:pl-ident-mf}C for the active PLs of all windows). For all windows, the \emph{model-free generalized divergence} between $\mathcal{G}_{ref}$ and all candidate PLs is very close to the divergence between $\mathcal{G}_{ref}$ and only the selected PLs~(Fig.~\ref{fig:pl-ident-mf}D). The difference is relatively larger during the \emph{transitional time} between day and night; this is because the network is more dynamic during this \emph{transitional time}, and thus more PLs are required to represent it accurately. For the \emph{model-based} method, there are likewise $64$ candidate \emph{model-based} PLs. Similar to the \emph{model-free} method, the \emph{model-based} divergence between all candidate PLs and the flows in each window of $\mathcal{G}_{ref}$ is periodic~(Fig.~\ref{fig:pl-ident-mb}A), and there is no single PL that can represent all the reference data $\mathcal{G}_{ref}$. Two PLs are selected when $\lambda=0.4$~(Fig.~\ref{fig:pl-ident-mb}B). One PL is active during the \emph{transitional time} and the other is active during the \emph{stationary time}, which consists of both day and night~(Fig.~\ref{fig:pl-ident-mb}C). As before, the divergence between each $\mathcal{G}_{ref}^i$ and all candidate PLs is very close to the divergence between $\mathcal{G}_{ref}^i$ and just the selected PLs~(Fig.~\ref{fig:pl-ident-mb}D). These results show that the PL refinement procedure is effective and that the refined family of PLs is meaningful. Each PL in the refined family of the \emph{model-free} method corresponds to a ``pattern of normal behavior,'' whereas each PL in the refined family of the \emph{model-based} method describes the transitions among these ``patterns.'' This information is useful not only for anomaly detection but also for understanding the normal traffic in dynamic networks.
\subsection{Comparison with vanilla stochastic methods}
\begin{figure}[t]
\begin{centering}
\includegraphics[width=7cm,height=6cm]{MED-four}
\par\end{centering}
\vspace{-0.5cm}
\caption{Comparison of the vanilla and robust methods. (A) and (B) show the detection results of the vanilla and robust \emph{model-free} methods, and (C) and (D) show the detection results of the vanilla and robust \emph{model-based} methods. The horizontal lines indicate the detection threshold. \label{fig:Comparison-of-Normal-and-Robust}}
\end{figure}
We compared the performance of our robust \emph{model-free} and \emph{model-based} methods with their vanilla counterparts~(\cite{pas-sma-ton-09,LockeWangPasTech}) in detecting anomalies. In the vanilla methods, all the reference traffic $\mathcal{G}_{ref}$ is used to estimate a single PL. We used all methods to monitor the server \emph{SRV} for one week (168 hours). We considered an anomaly in which node \emph{CT2} increases its mean flow size by $30\%$ at 59~h; the increase lasts for 80 minutes before the mean returns to its normal value. This type of anomaly could be associated with a situation in which attackers try to exfiltrate sensitive information (e.g., user accounts and passwords) through SQL injection~\cite{Stampar2013}. For all methods, the window size is $w_{s}=2{,}000$~s and the interval is $h=2{,}000$~s. The quantization parameters are equal to those used when analyzing the reference traffic $\mathcal{G}_{ref}$. The simulation results show that the robust \emph{model-free} and \emph{model-based} methods perform better than their vanilla counterparts on this diurnal traffic pattern~(Fig.~\ref{fig:Comparison-of-Normal-and-Robust}). The \emph{diurnal pattern} has a large influence on the results of the vanilla methods. For both the vanilla and the robust \emph{model-free} methods, the detection threshold is $\lambda=0.6$. The vanilla \emph{model-free} method reports all night traffic (between 3~am and 11~am) as anomalous~(Fig.~\ref{fig:Comparison-of-Normal-and-Robust}A). The reason is that the night traffic is lighter than the day traffic, so the PL calculated using all of $\mathcal{G}_{ref}$ is dominated by the \emph{day pattern}, whereas the \emph{night pattern} is underrepresented.
In contrast, because both the \emph{day} and the \emph{night pattern} are represented in the refined family of PLs~(Fig.~\ref{fig:pl-ident-mf}B), the robust \emph{model-free} method is not influenced by the fluctuation of the normal traffic and successfully detects the anomaly~(Fig.~\ref{fig:Comparison-of-Normal-and-Robust}B). The \emph{diurnal pattern} has similar effects on the \emph{model-based} methods. When the detection threshold is $\lambda=0.4$, the anomaly is barely detectable using the vanilla \emph{model-based} method~(Fig.~\ref{fig:Comparison-of-Normal-and-Robust}C). Similar to the vanilla \emph{model-free} method, the divergence is higher during the \emph{transitional time} because the \emph{transition pattern} is underrepresented in the PL calculated using all of $\mathcal{G}_{ref}$. Again, the robust \emph{model-based} method is superior because both the \emph{transition pattern} and the \emph{stationary pattern} are well represented in the refined family of PLs~(Fig.~\ref{fig:Comparison-of-Normal-and-Robust}D).
\section{Conclusions\label{sec:Conclusions}}
The statistical properties of normal traffic are time-varying in many networks. We propose a robust \emph{model-free} and a robust \emph{model-based} method to perform host-based anomaly detection in such networks. Our methods generate a more complete representation of the normal traffic and are robust to non-stationarities in the network.
\bibliographystyle{IEEEtran}
Episode 60: The Difference Between "Any Sales" And "The Right Sales" Episode 57: The New Product Doesn't Work. Do We Scrap It? Episode 53: Her Company Was Growing, So Why Was It Failing? Episode 47: Nobody Believed In His Vision. But He Knew Better. Episode 46: Califia Farms Had To Start Saying, "We're Out Of Product" Episode 40: People Love Your Idea, But It Doesn't Make Money. Now What? Episode 37: The Business Model Doesn't Work—So Change It! Episode 31: How Do You Find Your First Customers? The Difference Between "Any Sales" And "The Right Sales" Sometimes, the simplest ideas turn out to be the hardest. That was the case at Dunkin' Donuts, where the company wanted to eliminate its Styrofoam cups and replace them with something more environmentally friendly. They thought it would be easy -- but the change took 10 years, countless prototyes, meetings with competitors, and a deep study of how people hold cups. This is the inside story of that quest, and how Dunkin' finally got it right. The New Product Doesn't Work. Do We Scrap It? Here at Entrepreneur magazine, we hear a lot of stories about how entrepreneurs founded their companies. Many tend to follow a similar format. Someone has an idea. They take a bold risk to make it a reality, often sacrificing a fair amount of time, money, and relationships in the process. They become incredibly resourceful. They outsmart their competition. And at some point... they lie. It's so common to hear about an entrepreneur's lie -- to win over a first client, say, or to bring in resources when they're needed the most -- that we forget to pause and ask: Where's the line? On this episode, we consider the question with the cofounder of Stonyfield Yogurt, who saved his company in its early days with a particularly clever and daring bend of the truth. When Matt Bodnar became CEO of Fresh Technologies, he took over a failing company and saved it from disaster. That felt great. Then he hit a wall: He couldn’t seem to get this company to grow, or to fix its internal culture. He began suffering from self-doubt. He’d always wanted to be a CEO, and he initially seemed good at it, but now here he was… failing! After a lot of soul-searching, Matt came to an important realization: He needed to identify what he was good at, and then use those strengths. And that meant no longer being a CEO. In this episode, we explore how Matt came to that conclusion -- and why it supercharged his career. Her Company Was Growing, So Why Was It Failing? Just Between Friends is a nationwide franchise that runs consignment events. About a decade ago, it experienced a crazy jolt: It sold more franchise units than it ever had... and that fast growth nearly bankrupted the company. Why? Because here’s the difficult truth about growing a business: Not all growth is equal. Sometimes, growth in one part of your business can harm another part of your business. So to fix the problem, Just Between Friends had to hit pause and consider some very important questions: What’s the right way to grow? And what does it really take to get there? Mike McDerment saw the future, and it wasn’t bright. His accounting and invoicing company, Freshbooks, was doing well with customers -- but behind the scenes, its software code was a mess and it wasn’t able to innovate as quickly as it needed to. But fixing this problem was tricky. If he ordered his team to hit pause and fix the code, years could go by and Freshbooks would lose ground to its competitors. 
And if his team did manage to create a better Freshbooks in the process, customers might be annoyed by the sudden change. So his solution was radical: He launched a competitor to his own company. Raquel Tavares, founder and CEO of a ghee company called Fourth & Heart, had just finished raising a round of funding -- and then her team looked at the company's numbers and realized they were almost out of money. How did this happen? The answer is simple: The company wasn't properly tracking its inventory and cost of raw materials, and now it was in a terrible bind. What does an entrepreneur do in a situation like this? Raquel is here with an incredible answer: Not only can you survive a problem like this, but you can even thrive because of it. But you’ve got to be nimble, humble, willing to make a lot of changes, and able to stomach a lot of hard conversations. It's perhaps the most terrifying situation an entrepreneur can face: Suddenly, the bank account is nearly empty. You can't pay your staff. You can barely keep the lights on. What now? This is what Saima Khan faced with her high-end cooking company Hampstead Kitchen. She charges a small fortune to cook intimate dinners for industry titans, celebrities, and even world leaders—but then a change in the tax law nearly wiped her out, and forced her to reconsider exactly what kind of business she was running. Entrepreneurs must embrace change, or risk becoming outdated. In this episode, we offer a cautionary tale from history: What happened when entrepreneurs of the late 1800s tried to resist a newfangled invention called the bicycle? This episode is a special rebroadcast of a podcast called Pessimists Archive, also hosted by Entrepreneur magazine editor in chief Jason Feifer. For more like it, search Pessimists Archive on any podcast platform or visit www.pessimists.co. Nobody Believed In His Vision. But He Knew Better. Everyone who’s experienced setbacks, rejection, and frustration will ask themselves the same inevitable question: “What if the naysayers are right?” Mike Rothman did that. As he built his company Fatherly -- a media site for dads, which is a market everyone told him was nonexistent -- he was told “no” over and over again. But instead of quitting, he made strategic decisions that enabled him to discover the truth: His idea really was a good one. And soon, the people saying no started to say yes. Sponsor: Hover - visit hover.com/problemsolver for 10% off your first purchase. Califia Farms Had To Start Saying, "We're Out Of Product" Califia Farms makes a popular line of plant-based milks, yogurts, and coffees—but they became too popular, too quick. In 2017, demand began significantly outstripping supply, and so the company had to do something it hated to do, but that was critical for its long-term health: It had to start telling retailers "no," while it fixed its entire production system. It's a question almost every entrepreneur will at some point face: How do you keep a company stable while you're pulled into a personal crisis? For Chris Carter, founder of Approyo, that question came shortly after his startup launched -- when his daughter developed epilepsy. After working himself to exhaustion, Chris stepped back and retooled how his company operated and how he was treating himself. The result was a stronger company, a healthier founder, and a better balance for everyone. If you want to learn how to succeed without traditional advertising, ask someone in the "adult" industry. Why? 
Because most advertising channels—including Facebook, Instagram, Snapchat, and more—are closed to them. That's what Polly Rodriguez learned when she founded Unbound, a company that makes and sells adult products. Most platforms won't take her money, so she had to get creative... and build her own community. How do you give new life to an old company? Bring new vision to a place that lost its own? And how do you bring your team -- and your new audience! -- along for the change? Those are the challenges Neil Vogel faced when taking over the old internet giant About.com and transforming it into a thriving company called Dotdash. Sponsor: Hover - visit hover.com/problemsolver for 10% off your first purchase. Selling perfume online seems impossible. After all, people need to be able to smell it, right? And when former Ralph Lauren executive Eric Korman launched his online perfume company PHLUR, he ran smack dab into that problem. Industry peers thought he was crazy. He hung in and devised a solution: an ingenious a mix of smart e-commerce strategy, science, photography, psychology, music, and storytelling. And with that, he made the impossible possible. People may like to shop for cars online, but they still want to test drive them in real life before buying. So when the online car sales company Shift launched, it created a system its founders were sure would win customers over: They hired “car enthusiasts” -- guys just really passionate about cars -- to drive a car to a customer so they could test drive it together. The car enthusiasts were a hit; people loved them, and praised them online. But Shift wasn’t celebrating: As it turns out, the car enthusiast program was so expensive to run that the company wasn’t actually making money. And that meant Shift needed to do something that felt crazy: blow up the feature everyone loved. A company's name is one of (if not the) biggest early decisions a company founder will make -- and they often get it wrong. Google was first called BackRub, Best Buy was Sound of Music, eBay was AuctionWeb, and Policygenius was KnowItOwl. In this episode, Policygenius's founder walks us through the rigorous process she went through to scrap a confusing name and create one that led to success. Sean Dowdell loved tattoos, but he hated tattoo parlors. They were dirty, uninviting, downmarket, unprofessional and often sexist. So when he set aside his music career to start his own tattoo parlor, he needed to find a way to make a traditionally lowbrow product appeal to a high-end, but still edgy, audience. A decade later, he’s now opening glitzy tattoo shops all over the world. Here’s how he pulled it off. The Business Model Doesn't Work—So Change It! What happens when your customers are willing to use your product, but they're not willing to pay for it? Answer: Your business model may be wrong. That's what Ilir Sela learned after launching Slice, a company that helps local pizzerias sell online. He found plenty of early customers, but they weren't paying their invoices. As he dug deeper, he realized the problem wasn't them -- it was him. And he began the long process of figuring out what (and how) people were willing to pay. Patrick Llewellyn discovered that his design company, 99designs, was only fulfilling some of his customers' needs. He wanted to fill more, so he created a spinoff brand called Swiftly. But in doing so, he created a major problem for himself: He was stretching his resources too thin, and confusing customers about which brand they should use. 
In the end, he discovered the Curse of the Problem Solvers: Sometimes, you have to let some problems go unsolved. When Daniel McCarthy cofounded the music licensing company Musicbed, he had a big idea: "I don’t want customers to just think about Musicbed when they think about music licensing. I want them to think about Musicbed when they’re trying to get inspired." Accomplishing that would require a lot of experimentation, spending money with no sure ROI, and launching (and closing) a magazine. In this episode, we map how Musicbed became more than just the sum of its product. Every entrepreneur’s journey starts with a big problem. That first hurdle—and hopefully, that first solution. Small and sometimes simple as it may be, this first moment contains so much ingenuity and inspiration, and captures just how resourceful entrepreneurs must be to continue along their path. Today, we’re telling three mini-stories of first-time challenges: how the creator of the Butterie butter dish cracked its market research problem, how GrowSumo found the right customers (and avoided the wrong ones), and how American Rhino created an apparel brand within weeks. How do you find your first customers? It’s a question first-time founders are often flummoxed by. But Keith Krach has developed a tried-and-true strategy—starting during his days at Ariba (which sold for billions), and extending into his current time as chairman of Docusign. In this special live edition of Problem Solvers, taped at Entrepreneur Live in Los Angeles, Keith explains how to turn a company’s first customers into valuable ambassadors. "Kill Your Business With A Better Business" What do you do when conditions change, and your business starts to fail? You could try to save the business. Or, you could “kill your business with a better business”—essentially take what you’ve learned from the failing business, and create a new one that thrives. That was the strategy employed by Adam Schwartz, who killed his t-shirt business called BustedTees with another one called TeePublic. (And “kill your business with a better business” is a direct quote from him.) On this episode, he walks us through the transition.
ff9abb0d8ad524764122acfd36873da9
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Saying he was just filling in and couldn't find childcare, Ferrell's three kids stood onstage next to him while he announced the year's best comedy series (It was 'Modern Family.' Again.) and best drama series. Clearly every award at every awards show should be given out this way.
f4b9bcc0f2d25f104556b24db8a5c4f1
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Three weeks earlier, she was treated at a Bronx hospital for facial bruises. The couple had fought bitterly, Kathie told Schwank, and her husband pummeled her. The mother of three and grandmother of four earned a Ph. in Sanskrit studies from UC-Berkeley and worked as a schoolteacher and film producer.Kathie was demanding a divorce and a 0,000 settlement — a modest amount for a real-estate scion with millions.She was also fed up over Durst’s three-year affair with Prudence, the then-34-year-old sister of a Hollywood star and daughter of Maureen O’Sullivan — Jane in the Tarzan movies.(1963): John Lennon, a local Liverpool tough and an incipient art-school dropout, had a skiffle band.Paul Mc Cartney, two years his junior, had a rapidly evolving understanding of music and a slightly younger guitarist schoolmate named George Harrison.
e16ab7fec3e1befea66c26c107396f1e
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
In 2013, RuneScape gold reached a 2 million accounts landmark. The match naturally evolved since its initial release -- with significant updates in 2004 and 2013 -- yet players were enthralled by retro sensibilities: a survey of 160,000 fans caused the restoration of old-school servers, and this makes Jagex's announcement all the more bittersweet. The good news -- because there is always a bright side to all -- is that RuneScape Classic servers are still online at this time, and Gielindor's doors will stay open for the next three months. The studio has been working with all three organizations on a local, national, and international level to try and influence change and support young people struggling with psychological health. "Our enthusiastic and generous RuneScape community also has helped raise hundreds of thousands of pounds to get several incredible causes over the last few decades," said Jagex CEO, Phil Mansell. Venezuela is in the middle of an economic meltdown. What was once the wealthiest major economy in Latin America is currently suffering from shortages of essential goods like food and power, as well as spikes in unemployment and crime. To put dinner on the table, most are turning to unconventional tasks. Like gold farming. In an MMO out of 2001.
ed1ee1a7eff9d352f470e2ab63e7ff1d
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Slip and fall cases can be challenging to prove in California. In many cases, defendants bring motions for summary judgment to get the case dismissed on the grounds that the plaintiff can’t prove crucial elements of notice or causation. Essentially, if a motion for summary judgment is granted the trial court is dismissing the case and not allowing the parties to proceed in court. The trial court does not always make the final decisions on these types of issues. A party to an action can appeal or seek a ruling of a higher court, known as an appellate court.
d984cc35ae649f90027c9dc6679330e2
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
HYFLO has a unique understanding of the defence industry requirements, where quality control is non-negotiable and continues to move forward into the demanding military market, providing specialised hydraulic power units for naval vessels around the world. HYFLO’s custom-designed hydraulic power units are built specifically to operate in the rigorous conditions the naval industry demands. Custom powerpacks are built to 3rd party classifications societies, such as LRS, ABS, GL, etc. Our units are designed to adhere to strict military specifications for shock, vibration and noise. Manifold block assemblies are designed in 3D and manufactured in-house. This provides customers with a naval hydraulic system where all the control functions are integrated into a single manifold. Other naval vessels around the world – hydraulic systems including offshore patrol vessels, inshore patrol vessels, corvettes and frigates. Offshore patrol vessels in a number of countries.
532c0455265a45f48e356d9aff5919ba
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Reciprocal teaching is an excellent learning strategy, that is found to be effective in improving the reading and comprehension skills of young students. Reading is perhaps one of the most important means of gaining knowledge. Forget school going students, sometimes many of the adults also fail to grasp sufficient information from the text that they read. Reading with understanding is a habit that needs to be imbibed in students at a young age. Young school going students need to be a taught the importance of concentrated and constructive reading, and hence teachers often make use of several reading techniques and teaching methods that aid the students. One such strategy is the reciprocal teaching technique which is a remedial in nature and aimed at developing and enhancing reading comprehension. What is the Reciprocal Teaching Technique? According to Palincsar, who introduced this technique - it refers to an instructional activity that takes place in the form of a dialog between teachers and students regarding segments of text, which is structured by the use of four strategies: summarizing, question generating, clarifying, and predicting. According to Palincsar, during reciprocal teaching, the teacher and students take turns assuming the role of teacher in leading this dialog, which leads to an interesting group learning experience. This is the stage where the students are encouraged by the teachers to predict or hypothesize about what the students think the author will discuss in the text. While predicting, students often have to draw upon the background knowledge pertaining to the subject in concern, which eventually enriches the learning experience by linking the new knowledge that they will come across in the text with the already possessed knowledge. Also, this helps enhance the students' understanding of text structure as they learn the purpose of headings, subheadings, and questions that are embedded in the text and thus are useful means of anticipating further information. I am looking at the title and other visual clues that are appearing along with the body text on the page. What do I think we will be reading about? Thinking about what I have read and discussed so far, what do I think might happen next? Summarizing the important information as you simultaneously process the text helps students to identify and integrate the most important information in the text. The length of the text after which summarization is possible, can differ from person to person. Text can be summarized after a few sentences, paragraphs, or across the passage as a whole. Usually while making use of the reciprocal teaching techniques, the students should be advised to begin summarizing at sentence and paragraph levels. As they master the technique, they can become proficient enough to integrate at the paragraph and passage levels. What does the author want me to remember or learn from this passage? What is the most important information in this passage? What are the valid and logical questions that can be phrased about the text? As students, we are always taught to question everything since asking questions leads you to more and more information. The questioning technique reinforces the summarizing strategy by taking the reader's understanding to the next level of reading comprehension. Questioning requires the students to process and identify the information that is presented to them and further analyze its significance to generate a valid question, which they can answer themselves. 
This strategy has a major advantage of flexibility since students can be taught to generate questions at many levels. What question do I have about the text that I read? What are the concepts in the passage that I did not fully comprehend or am unsure about ? Clarification of any doubts or questions regarding the text as and when you are reading it is very important for reading comprehension. It is particularly important while working with students who have a history of comprehension difficulty, since at times students may believe that the purpose of reading is saying the words correctly rather than understanding the underlying meaning of the written text. When you ask the students to clarify a particular concept in the text, their attention is brought to the fact the text is not being understood. The students will then think of the reasons why there is difficulty or failure in understanding. The reasons might include new vocabulary, unclear reference words, and even unfamiliar or rather difficult concepts. The clarifying technique makes the students aware of such impediments to comprehension and encourages them to take the necessary measures to restore meaning. For example, rereading the text or looking up difficult words or asking for help tends to restore meaning of the previously unclarified text. What other words or additional concepts do I need for further clarification and better understanding? Students involved in this particular teaching process tend to learn the art of checking their own understanding of the material, which they have encountered. They do this by generating questions, clarifying concepts, and summarizing important information from the text. The ultimate purpose of reciprocal teaching is to help students actively bring meaning to the written word, with or without a teacher. The teaching strategy not only assists reading comprehension, but also provides opportunities for students to monitor their own learning and thinking processes. The structure of the dialog and interactions of the group members in reciprocal teaching system requires all the students to participate and foster healthy relationships, and hence helps create an ideal learning atmosphere. Not only does the reciprocal teaching system benefit the slow learners, but also normally achieving or above average students. This technique also facilitates peer-to-peer communication, as students with more experience and confidence help other students in their group to decode and understand the text. Students who ask more questions stimulate deeper thinking and understanding in their peers as well. Teachers who plan to adopt this technique into their curriculum should make preparations for the same, well in advance. A digest complete with graphic organizers of the questioning, summarizing, clarifying, and predicting strategies is highly recommended for the teachers to get used to the intricacies of this teaching technique. Once the teacher is well versed with these techniques, sufficient planning must be done about the text to be provided for instructive purposes during the nascent or learning phase, since the ability levels of the students should be taken into account before choosing a challenging text. Once the process of reciprocal teaching starts, a daily journal about the students' progress should be maintained to track the performance of students. 
The reciprocal teaching system not only facilitates the routine teaching procedures, but also aids the teacher understand the grasping level, and overall comprehension abilities of every student. Listening to students during the dialog is also a valuable means for teachers to determine whether the students are learning the strategies and benefiting from them. In addition to this, the teacher can check the students' understanding by asking them to answer questions and write the summaries of the text. Teachers should keep in mind that the early stages of reciprocal teaching require continuous monitoring and evaluation of performance to figure out the kind of support the children require. However, the monitoring levels can be made less frequent as the students become more adept at monitoring their own performance and progress.
2789b1e16f919c25c0f5d9c084a7de31
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Scars and blemishes can be caused by a variety of reasons but the cause of the scars does not change its negative impact on your look and self-esteem. Usually, scars and blemishes do fade away but it takes a very long time. But, there are many products available in the market that can accelerate the process of treatment and cause the lightening and even elimination of these marks in a safe yet effective manner. Protégé Flawless Scar and Blemish Treatment is one of those products that produce great results on your skin. Protégé Flawless Scar and Blemish Treatment is an effective therapy and treatment for reducing the appearance of skin imperfections and defects. And you don’t need to wait for months to see any result as the outcome is apparent in few short weeks. The product is easily absorbed into the skin, is lightweight and does not leave behind residue, odor or color. The confidence of the manufacturers on the product is so high that instead of offering only 30 Day Guarantee, they offer a 365 Day Money Back Guarantee. So, it is complete safe and hassle free for you to try the product and if for any reason you are unsatisfied, you simply contact Protégé and they refund the full amount without any questions or objections. Hence, trying the Protégé Flawless Scar and Blemish Treatment is risk free and once you try it you don’t feel the need to return it. Protégé Flawless Scar and Blemish Treatment, as the name suggests, is a flawless way to get rid of your scars and blemishes within 6 weeks. Do keep it mind that older scars may take a bit longer to fade away. But, in any case, you will see a significant difference on the appearance of the scars and blemishes within weeks and the result is amazing. The product mainly works by increasing the production of normal skin cells, which replace the damaged cells making your skin smoother and younger. And it is suitable for all types of skin. Protégé Flawless Scar and Blemish Treatment has all natural and pure ingredients and is free from parabens, phthalates, sulfates or any other artificial substance. In addition to treating your marks, it can make your skin look and feel greater, softer and healthier. So you can reduce your marks and improve the health of your skin. According to consumers, you can notice improvement within a week of the use. And there were no complains of irritation or itching on the skin. In short, the Protégé Flawless Scar and Blemish Treatment can be a great solution for your acne and blemish problems without any risk to your skin or wallet. This one’s pretty good actually. I bought it and started using it a week ago… already noticing the scars fading. I’m planning to use it for a month and see what results it bring, but so far i’v heard good things about it. I bought this Product after sending my scar picture (skin color ) to them (Stephanie- Customer service) to check if it will work and the answer was Yes and we have 100% money back guarantee. I used it religiously not for 6 weeks only but for more than 10 weeks and the result not just no improvement but reddish color that eventually turned brown Peeling and after visiting the dermatologist I had to stop immediately and apply Eleka (Corti) for a week to treat the damage. I then emailed them the before and after pictures but they never bothered to even reply so I kept trying to resend the email for 2 weeks (to both their addresses) and even sending on their facebook inbox and comments but no reply at all though this same customer used to always reply within less than 24 hrs. 
The morale of the story, may be their product works for some people but definitely they do not keep their promises and therefore their credibility is questioned. I am ready to show anyone all the mails and messages exchanged with them. Trust is much more important than money and this is what has been lost. Sorry to hear that Lobna. I guess i was lucky that it worked for me. But yeah, i’d hate that too if the customer service sucked! Luckily i didn’t have to deal with them. Have you tried other creams? I heard MSM cream is really good.
03481b8fe0cafe464aa45873d0ec7c45
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
but if you believe that you should be allowed to browse any site of your choice, organizations, educational institutions, and even Internet Service Providers (ISPs)) try their viscosity vpn route traffic best to restrict access to many websites for a variety of reasons.0, viscosity vpn route traffic 0, buf, write(buf,) 3) ReadExactSize(stream,)the jttm being my sons initials viscosity vpn route traffic and unique to him. The Application Registered Successfully window will appear. Once you have agreed to the terms your and everything is set, application Registered Successfully. personal VPN service. Email us at. Safeguard your viscosity vpn route traffic network connection.each day new proxies are added and dead ones removed, fresh web proxies: ip list The idcloak list of fresh web proxies is constantly checked by our live viscosity vpn route traffic proxy updater to ensure all IP addresses shown will connect you to a working proxy server. fangHacks: Status Page You can also turn off IR (used for night vision)) so that you viscosity vpn route traffic can point this out a windows without the IR glare. FangHacks: Status Page Click on Manage scripts to see if all the scripts have been started successfully.after cleansing the last update viscosity vpn route traffic face and applying toner, manufactured for: NATURE REPUBLIC CO., enjoy for on 10-20 avast vpn minutes before removing and gently patting in any excess essence. Apply the last update sheet mask to the last update face. How to avast vpn for Size: 23 ml / Net 0.78 Fl. Oz.this mode allows no further parameters in the ProxyConfig object. The fixed_servers mode allows no further parameters in the ProxyConfig object. Its structure is viscosity vpn route traffic described in Proxy rules. Besides this, system In system mode the proxy configuration is taken from the operating system. Your advantages with Perfect Privacy Our VPN client offers unique features No. VPN provider offers you more functionality Instructions for setting up a VPN on Linux, MacOS, iOS and Android can be found in our howto section. Premium VPN -servers in 23 countries Best price. that is the question, secure and encrypted connection between an Internet user and the websites he visits. And here&apos;s a simplified definition: A viscosity vpn route traffic Virtual Private Network is a technology that creates an anonymous,nmd VPN viscosity vpn route traffic download- click here nmd VPN config please send me the links where to download nmdvpn and its config files. Friday, nmd VPN free internet trick. wIFI., -., iP,the application will work on viscosity vpn route traffic your local machine because the /bin-debug/ directory where Flex places the compiled SWF is a vpn changes internal ip trusted location. This means the application will be able to access the Google server. Well,hot VPN -,.,,. VPN Android. username: root viscosity vpn route traffic password: ismart12 TinyCam Setup TinyCam is a great Android app that I use to manage my cameras from mobile devices. It supports fang-hacks on this camera as of version 7.5.kita sudah bisa mengakses internet gratis hanya dengan memanfaatkan bug kartu all operator viscosity vpn route traffic yang masih aktif entah itu telkomsel,3 tri, ada beberapa langkah yang harus anda lakukan diantaanya mulai dari instal PinoyTricks VPN Apk dan ternyata aplikasi ini belum tersedia di playstore. Xl, indosat, maupun axis. Agar bisa internet gratis menggunakan Netify lewat android,&apos;09 6:28 viscosity vpn route traffic "SocksWebProxy". 
To route a .NET WebClient or WebRequest through SocksWebProxy (Privoxy, HTTP, Tor), assign SocksWebProxy.Proxy as the request's proxy.
Resolve a DOI Name: type or paste a DOI name into the text box and click Go. Your browser will take you to a Web page (URL) associated with that DOI name. Send questions or comments to DOI.ORG; further documentation is available in the DOI System Proxy Server Documentation.
Daily updated proxy list. Web Proxy List.
Download the PinoyTricks VPN APK for free internet, latest 2017. Nowadays almost everyone already uses the internet, but quite a few also complain about expensive data costs, whether for reading the news or simply for opening social media such as Facebook.
Hi @Bruno, in order to help you we will need more data about it. Please describe what exactly the issue is.
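Returning to the DOI resolver described above: the same lookup can be done programmatically, since the doi.org proxy answers a request for a DOI name with a redirect to the registered landing page. A minimal sketch, assuming a runtime such as Node 18+ where cross-origin fetches are not blocked; the DOI used is only an example value.

```typescript
// Minimal sketch: resolve a DOI by letting the doi.org proxy redirect us
// and reading the final URL after the redirect.

async function resolveDoi(doi: string): Promise<string> {
  const res = await fetch(`https://doi.org/${doi}`, { method: "HEAD" });
  return res.url; // URL of the landing page the resolver redirected to
}

resolveDoi("10.1000/182").then((url) => console.log(url));
```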
261e91fb5b6d3083c48a6eb7e0fbe177
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Elemi was engaged to design a new custom residence for a family seeking to downsize to a more urban lifestyle. Elemi designed a three bedroom, two and one-half bath home within an existing brick structure. The design is a simple open floor plan on the ground level containing kitchen, dining, and living space. The upper levels contain bedrooms around a central gathering space with skylights that transmit natural light all the way down to the lower level.
4fc860f033029fec602794088f60679f
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Take a dip in our pools or lakes. Grab a paddle & glide across the lake. Cast a line & catch a bass.
be07247ccf36574157d1b83bbc51e723
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Incumbent U.S. Rep. Scott Perry has narrowly captured a fourth term in Congress Tuesday after a hotly-contested race against Democrat George Scott in a newly-redrawn district. With more than 90 percent of precincts reporting, Perry was clinging to a small but seemingly decisive lead over Scott, buoyed by strong margins in northern York County, the home turf that he's represented in either Harrisburg or Washington since 2006, and Cumberland County. Current vote totals are Perry, 137,212, or 51.3 percent; Scott, 130,159, 48.7 percent. As of this writing, Perry had not declared a win at his watch party at Boomerang's in Fairview Township. For national Republicans, the 10th had become something of a fire line; knowing they were likely to lose seats under a newly-drawn Congressional map, the GOP sought to limit the damage to the Philadelphia suburbs and Lehigh Valley. Democrats, meanwhile, began to eye Perry's seat as the next most plausible pickup for them in a true Blue Wave scenario after early polls showed Scott to be a viable contender. With that as a background, outside super PACs gave the race a life of its own: More than $3 million in outside money was spent on the Perry / Scott race, easily a record for the midstate. Close behind was the national Democratic Congressional Campaign Committee, which ponied up $400,000 in attack ads against Perry, and a $353,000 pro-Scott buy came from the With Honor PAC, a group supporting veterans who have pledged to bring more bipartisanship to government. If the race got nationalized, that seemed to work in Perry's favor, based on interviews with voters. Several Perry supporters told PennLive their Republican votes were a statement of support for the policies of President Donald J. Trump. “I like where the economy is, and I don't think there's been enough said about that," said Mark Cherenzia, a 57-year-old health care sales rep leaving a polling place in Silver Spring Township, Cumberland County. Other voters said the Democrats' agenda simply didn't speak to them. "I'm all about paying my bills on time and getting decent health insurance," said John Dunstan, who runs his own dog grooming business on the West Shore. "I haven't heard the Democrats say anything that interested me at all" in the campaign. Several Scott supporters, meanwhile, told PennLive their votes were a statement of opposition to the controversial Republican President Donald J. Trump, and to the Republicans in Congress who have decided to roll with him. "Normally, I don't know that I would vote, even, in the mid-term," said Erica Schmidt, a 40-year-old employment recruiter who was leaving a polling place in Silver Spring Township. But Trump's brand of governing changed that, Schmidt said. "I think he's a racist, and I think he's got this anger going, and fear going, and that's not what we need to be about." It was a common theme among Scott supporters. "George Scott appears to be a man of integrity. He appears to be willing to listen, and he appears to want to work with the other side," said Ann Cavanaugh, 74, a retired Social Studies teacher at Big Spring schools. "And I've seen enough hatred and anger in our politics." The district, as set out by the Pennsylvania Supreme Court last winter in response to a gerrymandering lawsuit, still has a clear Republican edge, both in voter registration and performance. Voters here went for President Donald Trump over Democrat Hillary Clinton by a 54.7 percent to 45.3 percent margin in 2016, after third party candidates were factored out.
That's a world apart, however, from the 61 percent Trump, 39 percent Clinton margin in the district that Perry won in 2016. Perry, first elected to Congress in the York County-centered 4th District in 2012, is the very picture of the self-made man, having come through a hardscrabble childhood to start his own small business and earn the rank of Brigadier General in the Army National Guard. In Washington, he is a member of the House Freedom Caucus, a more conservative part of the Republican Conference, and he's fashioned a strong record as a deficit hawk and a supporter of limited government. Generally perceived as a reliable conservative vote for the GOP, he has actually broken with his leadership and the Trump Administration more than most because of his staunch opposition to spending bills that he perceives as perpetuating deficit spending. Scott, meanwhile, is a career Army officer who retired from active duty in 2004 and, more recently, has been serving as a Lutheran pastor in East Berlin, Adams County. He worked hard to show himself as a centrist Democrat whom midstate voters - including those who typically never vote for Democrats - could feel comfortable with, citing his childhood on a farm, his Army service, and his faith life.
f30a795ea5b102dcd7005730a8651fb8
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Does infinitely long-lived universe (allowing Poincare recurrence) = naturalistic immortality? fall 20,000 feet and survive? Right Angle Torus ? Means ? What does the delayed choice quantum eraser experiment prove?
2dd775ed2a0837d9093be3636b6a66cf
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
PoundPosters.com: Canoe on the Shore (Thailand) 40cm x 50cm Mini Poster for Only £1 - Buy now! Does this look like your kind of destiny? Gently floating on the crystal clear still sea, this canoe is ready to take you out into the blue; a place of dreams. Close your eyes and imagine your toes in the icing sugar sand, this inspiring poster of a tranquil scene in Thailand is bound to make you yearn for the endless beaches.
7c9edaa15a2db72402c1066fd57c99b3
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Founded as a separate denomination of Reformed churches in 1924, the Protestant Reformed Churches stand in the tradition of the Protestant Reformation of the sixteenth century. Their origin as a denomination was the doctrinal controversy over common grace within the Christian Reformed Church in the early 1920s, occasioned by that church's adoption of the doctrine of common grace as official church dogma. The result of the controversy was that three ministers and their consistories were, in effect, put out of the Christian Reformed Church. Many of the members of their congregations followed them, and in 1926 the three new congregations formed the Protestant Reformed Churches in America. The denomination grew rapidly in the years that followed, but in the early 1950s the churches in it endured a severe, internal, doctrinal controversy in defense of the unconditionality of the covenant of grace. As a result of this struggle, the denomination was reduced in size by about one half. Most of those who departed returned eventually to the Christian Reformed Church. The PRC currently have one missionary stationed in Ghana and one in the Philippines. Each congregation has a board of deacons.
58079f9fcfe91d496eaeab7e73728691
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
The LA Weekly reported that the majority of Los Angeles medical marijuana dispensaries remained open at the end of the first week of a new city ordinance that requires more than 400 of the shops to close. As our Los Angeles marijuana dispensary attorneys have been reporting here on Marijuana Lawyer Blog, we have filed numerous lawsuits on behalf of these legal businesses. We strongly believe that the City of Los Angeles does not have the right to close businesses that are operating legally under California law. And the unfair nature of the ordinance — which allows more than 100 businesses to remain open while closing more than 400 others based on an arbitrary date, is also an issue we are arguing in court. Our Los Angeles marijuana dispensary lawyers believe there is strength in numbers. The CANNABIS LAW GROUP offers reasonable legal fees and urges business owners to join in the fight. The operators and landlords of 439 retailers have received letters from the Los Angeles City Attorney’s Office warning that violation of the ordinance could result in a fine of up to $2,500 a day and six months in jail. Most of the city’s dispensaries are ineligible to operate because they did not receive approval before a November 2007 date established as part of the ordinance. Of the remaining 130, many are either not in business anymore or can’t comply with other rules of the ordinance, including a prohibition against operating a dispensary adjacent to a residential area. The LA Weekly reports some of the dispensaries are now identifying themselves as members-only facilities while others contend they have been wrongly classified and are operating as therapy centers or related businesses. As our Los Angeles marijuana defense lawyers reported last week, a growing number of businesses are reorganizing as delivery services. Meanwhile, city officials continue to promise a crackdown. They contend members-only services are illegal and Los Angeles District Attorney Steve Cooley said delivery of marijuana is illegal in the State of California, regardless of the city’s rules.
2f4561fbfc5982ac36d9bdef766716b6
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
The player who never plays joined Yankees camp in the past week, and instantly the number of those who will not be ready for Opening Day increased. The List changed names from Disabled to Injured, but Ellsbury remained the same non-participant. But the Yankees weren’t counting on him. Or on Didi Gregorius for half a season. They were depending on CC Sabathia, Luis Severino, Dellin Betances and Aaron Hicks, none of whom will be on the 25-man roster to begin the season. Even top prospect, Estevan Florial, fractured his wrist, though positively, the Yanks were told by doctors the outfielder would be in a cast for three weeks rather than the initial prognosis of six. The depth includes Greg Bird and Tyler Wade. The oft-injured Bird was hit on his right elbow by Houston’s Wade Miley in the first inning Wednesday. He stayed in the game and joked afterward, “It didn’t hit the [elbow]pad. It never hits the pad.” Bird said he was fine and plans to play first Thursday. Wade was removed with hip tightness. He said it was precautionary and that he expects to play in the next few days, but was scheduled to see doctors in Tampa.
3fe3b89ac243c7a04ea306860298cba7
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
These Apple BBQ Meatballs are a hearty meal but easy to pull together on a busy night. They will satisfy the meat lover as well as the picky eater, so basically the entire family will love dinner. A twist on the classic party meatballs, these are a bit less "sweet" as they don't have a ton of sugary jelly in them but more naturally sweet apples instead. BBQ sauce takes the place of the chili sauce, so the dish has a bit more tang and a hearty, stick-to-your-ribs flavor that makes it feel more like a meal. I serve it over brown rice with a side of corn cobbettes. I usually make a full batch even though I am an empty nester, and freeze the leftovers for a future freezer meal, or quick party/potluck food when needed. Jan's Tips: You have some choices for the apples: you can use regular applesauce (I like to use a chunky applesauce when I can find it), or grab a can of fried apples, or even sugar-free apple pie filling. All of the above will work great. I just like the chunks of apples with the meatballs. A hearty and easy-to-pull-together meal that will please the entire family. Serve with corn cobbettes. Arrange frozen meatballs on a baking sheet in a single layer. Bake in the oven at 375° for 15 min. to lightly thaw and begin to brown the meatballs. Remove from the oven. In a large saucepan, heat the BBQ sauce and applesauce or fried apples over medium-low heat. If using fried apples, break the apples up into small bite-sized chunks as you mix everything together. Add the meatballs and gently mix to coat. Let the meatballs heat but not boil. Reduce the heat and let simmer 5-10 min. until serving. Cook the rice according to package directions. Serve the meatballs over rice. Alternate cooking method: you can cook the BBQ sauce, apples and meatballs in a crock pot on high for 2 hours or low for 4-6 hours and then cook the rice before serving. Don't forget to sign up for the great Prize pack #Giveaway from our amazing #Sponsors! The link to enter the giveaway is below. The prize pack has gotten so big and long, I'll send you back to our welcome post to check out the full list of prizes, if you didn't see them earlier in the week! What a great, quick dinner or appetizer. So easy to have all the ingredients on hand for those nights when time slips away from you. What a fun dinner, but I agree with Wendy that it would be a super fun snack for our football crew too. They always like meatballs and this will be a perfect way to mix it up! This sounds even better than the cocktail sauce and grape jelly meatballs. I wish we would have encouraged KC to make more BBQ sauce. I just asked Clint if he could bring me some more apples so I can make more applesauce. These apple week recipes are going to use up my stash.
13cc4376d8bdf754de94b8e36d21e256
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Q: How can we improve the event? A: I liked the second one I attended where employer/career tracks were grouped by table; maybe (and I think I saw this may already be being done, just not sure) having luncheons directed to single fields for industries that are 'hot'? It would really be a boon when the energy sector finally picks up some momentum again, or for larger interest fields like medical or general staffing. Thank you for attending our recent Warrior for Life Luncheon. This short survey will help us to improve the event for employers and veterans alike moving forward. Be as descriptive as you like; we read every response! Thank you for your support - we hope to see you again soon.
30b1bff6b41e46e1ac1d82c34f6d25e7
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Our law firm has a reasonable fee schedule. If necessary, a payment plan can be arranged. We also accept Visa, MasterCard and Discover. The cost of a divorce attorney varies based on a number of factors, primarily on the complexity of your divorce – whether it is contested or uncontested, if your spouse is high conflict, and if we are able to settle with your ex or if we will need to go all the way to trial to ensure your rights and needs are respected and observed. What areas do you serve? We serve and help clients in the following counties: Hillsborough, Pasco, Pinellas, Polk and Manatee County. A Guardian ad Litem is an individual who is either appointed by the court or who can be sought by a divorcing parent to serve as a third party to help determine a child's best interest when claims of abuse, neglect, or violence are made. A collaborative divorce is an uncontested divorce where both parties are able to work together to divide their marital property and assets cordially and with little disagreement. Can I have you be my attorney for my divorce and bankruptcy? Yes! Many people also find it more helpful to use the same attorney for their divorce as their bankruptcy filing – you only have to explain your situation and goals once, not multiple times. In addition, working both cases, I can provide valuable insight and act quickly upon it to ensure your best interests are met without having to wait to hear from your other attorney. As for your spouse, it is recommended that they seek their own attorney, at least for the divorce, to avoid any conflict of interest during the divorce process.
968a5a204b1290e53a570425ddc1cbca
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Is Enduro at EWS level getting too dangerous? The first round of the 2014 Enduro World Series in Chile was an amazing event. We were not there, but from the comments of one of our riders and all the reports over the internet, it seems both the Montenbaik team and the EWS team did a flawless job. But there's a tiny bitter feeling about what we've seen. Josh Carlson had a high-speed crash during testing that sent him out of race day. Watch his own video. Fabien Barel had a nasty crash on special stage 1, and even though he managed to finish the day, we now know that it was a bad crash with a fractured vertebra and that he risked his future mobility by finishing the stage. Get well soon, Fab. The level of the sport is increasing at an accelerated pace, stages are long and physically demanding, and some of them are hard and difficult enough to be at the DH World Cup. Is it a safer discipline than Downhill? Not sure. And should it be? Not sure.
7f15c77c35228413d59b3964dd7b1733
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
In order to strengthen local governance structures at district level and at the same time improve living conditions for Afghanistan’s rural population, the German Government is supporting the necessary capacity development measures and smaller infrastructure initiatives in Afghanistan. District Development Council (DDA) Community Centre in Baghlan. Training of District Development Councils. Water passage bridge in Badakshan. Decades of armed conflict have shattered Afghanistan’s infrastructure along with its state structures. State and administrative functions are inefficiently organised and complex; civil society interest groups do not always exist. And the benefit to the population is not always visible, as most of the Northern Afghan population only has limited access to infrastructure and public services. However, lasting stability in Afghanistan’s regions depends among other things on local governments and interest groups being capable of taking action. The volatile security situation in some parts of the country further impedes regional development. By means of direct and effective infrastructure measures and efforts to strengthen local participatory structures, especially the district development councils, the programme seeks to promote the state building process in order to improve living conditions for the population and thus help to bring stability to Northern Afghanistan. Infrastructure expansion measures are directly benefiting the Afghan people. For example, over 280 schools and other educational facilities as well as 39 roads and bridges have been either built from scratch, extended or restored and properly equipped as part of the project. The results are impressive. A total of 395 projects have been completed to date in 52 districts of Badakhshan, Kunduz, Takhar and Baghlan Provinces. These construction measures are having a direct and positive impact on socio-economic conditions in these districts, as they provide local people with access to education, economic opportunities and administrative services. In Yaftal-e Payeen in Badakhshan Province, a recently constructed health clinic can now host up to 100 patients and serves the district’s 60,000 residents. Members of the district development councils (DDCs) have received high-quality instruction in the areas of project selection and monitoring as part of training measures delivered to more than 50 district administrations. Additionally, training for DDC members improves these institutions’ capacities to build efficient and effective regional governance structures. One example of a seminar topic is conflict management. Local communities are involved indirectly in democratic processes, namely the selection and implementation of projects, through the activities of qualified DDA representatives. This increases the legitimacy of the authorities concerned and the decisions they make. In order to leverage the full potential of the Afghan people for the long-term stabilisation of their country, women are also being encouraged to play an active role in the work of the DDAs. Activities in the provinces Badakshan, Kunduz, Takhar, Baghlan. Stabilise fragile regions by improving socio-economic infrastructure and strengthening local development councils.
2067314794c98addb9a6c0ec552f1390
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Solve each of the 10 levels by filling in the gray squares to make the difference of the black squares, both rows and columns. Careful though: boxes can have a pair that must contain the same number. Give the game your best shot! Black - Open, Gray - Total. Hover over a box. Type a number. Box A = Box A, Box B = Box B and so on... Highest possible number is the current level. Have fun!
c52fa7238a4080319e98ce36ae7a4f45
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Swiss Olympic team chief Gian Gilli had to send one of his footballers home yesterday, when news arose that he “insulted and violated the dignity of the South Korea football team as well as the South Korean people” with a racist tweet. Mind you, the tweet in question and the athlete's Twitter account have in the meantime been deleted, but not quickly enough to save Michel Morganella's Olympic career. We called an earlier athlete's tweet the PR goof of the century, and hoped athletes from other countries had learned that poor behavior has consequences. Morganella proved us all wrong. A screenshot of Morganella's Twitter account, before deletion, with the incriminating tweet. Observe the last tweet, showing no remorse. It's still difficult to accept racism in any form as an “error” – it's a drive, an instinct, if you will. Morganella's error is that he expressed this drive, instead of suppressing it. The apology doesn't change anything. This whole mess reflects even more poorly on the Swiss officials, who felt they had “no alternative” but to disqualify the athlete under the terms of the International Olympic Committee's code of conduct. Officials didn't say that the athlete deserved to go home, and they even looked for excuses for the tweet. They explained Morganella's “error” as a result of being “provoked” by comments sent to his Twitter account after the match. These hypothetical comments sent to @morgastoss after the match probably came from his 1197 followers? Let's face it, Morganella is no Alexander Frei. With Twitter's viral power, it's puzzling that no one teaches these athletes how to use social media, what is appropriate to say and what not. But more than a PR issue, it's puzzling that in this century, people are still driven by supremacism, bigotry, racism and prejudice, especially those who should serve as role models for generations to come. The public expects a lot from the Olympians. They are the personification of human excellence, conduct, and stamina. Morganella disgraced them all.
870e8cba2541920b3ae9371fc11ea90c
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Balliol has existed as a community of scholars on its present Broad Street site without interruption since about 1263. By this token it claims to be the oldest college in Oxford, and in the English-speaking world. In 1260 a dispute between John de Balliol and the Bishop of Durham erupted into violence and Henry III condemned Balliol’s behaviour. The Bishop had Balliol whipped, and imposed a penance on him of a substantial act of charity. This he did, by renting a property and creating a house of scholars, which was soon known by his name. After John de Balliol’s death in 1269, his widow, Dervorguilla of Galloway, guaranteed the future of the ‘House of the Scholars of Balliol’ by establishing a permanent endowment and giving it Statutes in 1282 – so bringing into being Balliol College as we know it today. The College celebrated its 750th anniversary in 2013. The College’s patron saint is St Catherine of Alexandria. The College arms, taken from the back of Dervorguilla’s seal, show a lion rampant for Galloway and an orle for Balliol. You can read more about the College’s history by visiting Balliol College Archives and Manuscripts, and you can send an enquiry to the college Archivist here. You may also be interested in these pages on the history of the Chapel and the history of the Library.
2672680f983592415549b9bbaa444e54
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Before surgery, any occasion or emotion would be an excuse to pig out. TGIF happy hours were no exception. I had gathered some lovelies for my Friday evening on a gorgeous October day. The store bought items turned into making my french onion soup. A perfectly tasty end to a long week. Getting this into my kitchen has been a long process. Bought my first cocktail shaker from Amazon. Found the olives at TJ Maxx. I love popping in there every couple months for fun gourmet food, kitchen ware, and all kinds of yummy lotions and soaps. I made an IKEA run this week and came across martini glasses. All I needed was the booze, which I picked up on my way home Friday, and we're off to the races! Bad analogy; I just needed to unwind. And gin is my favorite way to do it. It's the only liquor that I can digest well, and that I really enjoy. I can't drink much post-surgery, so I need to be picky with beverages just as much as my food. The best pretzels ever. Everything. Perfect crunch. They are so thin, they don't fill me up. I still try to limit my intake, of course. But these are a great go-to for dips. I will be returning to the Joplin Greenhouse/Marketplace often for this hummus. Sure, I could make my own for a fraction of the cost, but it's happy hour for pete's sake. I didn't want to make anything but the martini! They had several other flavors, but I usually return to the original. Fantastic complement to the hummus, this spinach and artichoke spread was creamy and divine. Again, I could have made this myself and have several times, but the texture was perfect and all I had to do was rip the top off and dip, baby, dip. So far, no protein on this afternoon delight with the exception of a tad from the garbanzo beans. So I had a serving of the french onion soup I made earlier in the week. I don't each much beef, but beef broth is a delish, comforting experience as it gets chillier outside. I topped with toasted sourdough and fontina cheese. Still not much protein, but I tricked myself into thinking there was!
4b7390ba6ae23ab03d61210bb939086a
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
We are all influenced by what we see and read. All writers are. We all have dynamics that we repeat; characters we tend to carry from story to story—it’s what we do. Some write entirely for themselves, so they don't pay much heed to formula and style; but those of us who write for others, we have a moral duty to not be repetitive, unoriginal and predictable. Who wants to fall into formula? Why spend so much time writing a manuscript that has been written before?Avoiding that is hard. Part of avoiding having your story sound like every other story in its genre is to read and watch, and to inject YOU into what you write, no matter what the genre is. I’ve known a few people in my life who don’t read much despite writing. They claim it’s so they aren’t influenced by other books, but in a lot of cases, they simply don’t realize that they’re writing something that’s been done a bazillion times, and if you just take away the details like the settings, the time frame, things like that, the core of the story, the character, the motivations, the plot is the same as pretty much every other book on the shelf. Let’s face it, the world has been going on doing its thing for sometime, and the written word has been around for a bit; so it’s not that easy to have an original idea any more. Case and point... how many remakes and sequels are we seeing these days? They’re want for material... And so are novel writers. If you look at your story objectively, do you think it’s predictable? Do you think that the happenings would be expected? Would the love be requited, would plot take a turn, would your reader be surprised and taken off-guard? Staying away from formula means staying away from predictability. Avatar was a wonderful movie to watch, but we all knew the moment we saw it that he’d be riding that really giant orange/red dragon everyone feared in order to secure the respect of the people. Avoid pointing things out... in many stories, the author will mention something seemingly innocuous ... a broomhandle hitting the floor... a window with broken shards of glass... it's transparent and it takes away the surprise. The reader is automatically going to know that it’s going to be related to the next bit. Don’t be obvious. It’s annoying. The tired ex-soldier who needs revenge; the comedic sidekick; the jaded warrior, the strong chick that really secretly wants to be objectified, pursued and rescued; the untamed, willful woman who is tamed by the strong-willed hero; the dogged underdog, the grunting strongman... let’s face it; you might as well be picking characters from an MMORPG character builder these days... there are so many old, boring archetypes. Your characters are supposed to be real people in the story. They have backgrounds, they have motivations, even the good people have flaws... they don’t do things just because, they hesitate before they jump into the fire, the bad guys can be likable, heroes can make mistakes. Impersonal and recycled settings can get old quickly. It’s one thing to create settings, but another to make them yours as an author. What makes fiction good is the believability of the places you describe, the detail that puts the reader in your head. Pulling from your own life, your loves, you can create something intriguing for others to ‘see’. Pulling from the basic novel 101 settings, like Middle Earth, or New York, or L.A. with no personal references to it, no details is going to make it seem generic. 
It’s the little restaurants nobody knows about, it’s the shape of a window, and it’s the quality of sound in a room... you have to put your readers somewhere they can picture; and not rely on past authors’ renderings to support the ‘realness’ of your settings. If you just write something that seems generic, or like another author’s work, it’s nothing but Fan Fiction. What makes a story readable isn’t just the story, it’s what is added of the author’s own personality and life experience. Even the most bizarre places should still be believable, which means you have to inject some of yourself into what you write, and not just write a story where A character goes to B character and fights them over C character who is in D location. They weren't simpletons. They knew deep down he wasn't really their son. They had to know. But their misery and their loss had afforded him a place with them, and for all intents and purposes he was Jacob McVeigh. It simply didn't matter to them. For him, for this Jacob impostor, despite having entered this situation with less than stellar motives, it somehow worked out--it changed him, it made him better. Somehow. He acclimated. He settled; something he'd never done before in all his days. He actually liked being Jacob. Liked it enough to let it become him. He'd learned about them through Brian Walsh; the man who killed Jacob--the creature that had made him disappear. Brian was huge man. Elephantine in a compact way, broad, thick shoulders book-ending a wide, dense chest. He had a round head with shifty green eyes, with tiny ears; his blonde hair was chopped into the standard buzz-cut all the inmates got. He had surprisingly small feet for a giant frame like his. He stood at 6'10"; he had to duck through the metal grate of the cell door to get in and out. ‘Jacob' had made him take the lower bunk. Jacob making any man that size do what he wanted was part of what made him different. He wasn't a small man; but one of decent height. In a fight, he would have been snapped like a twig by the likes of Brian, but Jacob's special ability for manipulation made the monster his pet. Brian, the heartless murderer, the simpleton, the ham-handed buffoon, was Jacob's personal bulldog. He had to only gaze into the beady, vacant eyes with his own piercing laser-blue gaze, and the tiny mind within would roll over and bare its belly to him. Brian saw a mightier beast in that gaze, a deeper, darker, angrier creature than he could ever hope to be. And far, far smarter. "It's gotta be the weirdest thing..." the giant would mumble in his oddly high-pitched voice, "...you look just like him, I tell ya. Jus' exactly like him. It's either that or I'm just seeing you this way ‘cuz I did what I did to him." Brian had starting listing his victims to Jacob the moment he was shoved into the cell with the huge man. Brian was in prison for killing a young man he'd picked up on the streets. Jacob... the real Jacob was a similar victim. He arrived in the city, a young and confused runaway-and immediately his innocence was dashed so terribly, he never recovered; never found normalcy, or goodness. Just a life of drugs and prostitution. He just ended up one of many unknown victims of the darkness. Dead in some ditch, not even given the dignity of being recovered. The buffoon could do one thing right; and that was hide a body. At first, the fake Jacob didn't really care. He counted Jacob among the other victims the beast boasted about killing... 
the ones he'd "gotten away with"; the ones that were never proven or never found. He confided in him all of his conquests. And Jacob-the-false sometimes listened sometimes didn't. He stared patiently at the ceiling. Time was of no concern to him. He wasn't in for so long. As his sentence began to wind down, he knew it was time to find that persona, to discover his identity. And then the assertions of his similarity to this Jacob came back to him. He used his library time to find information on the victim doppelganger, Jacob McVeigh. And he found a plethora of information; and a sad little website made by friends and family with pictures of him just before he ran away and disappeared... and yes, the ox was right. The kid looked like a seventeen year-old version of the man Brian shared his cell with. Jacob-the-false had more angles on his face, a gruffness to him... but the same piercing blue eyes, the same crooked smile, the same swarthy tones. He realized he could easily pass as the boy... all grown up--weathered a bit, maybe by life, but nonetheless, he could do it. And so he decided he would become Jacob McVeigh. He would become him and live the life the boy might have had if he hadn't left, if he hadn't been destroyed by his own desperation and murdered by a massive lumbering pile of very stupid flesh. He set up an email address through a free service, and clicked on the contact link on Jacob's website. Jacob the new, Jacob the imposter... With a quiet, whimsical smirk, he gave the beast one last cutting gaze, and then followed the swaggering corrections officer down the corridor, ignoring the comments and the glares of the inmates he'd virtually ignored for eight years. They all sensed his power, and they feared him. As they walked away, one guard called out a request to the plexiglass window where more guards watched, and the cell door jerked into motion. Just before the aperture became too restricted; Brian stepped out of the cell, and with a besotted grin, climbed up onto the railing of the mezzanine, and with an crazed laugh, he threw himself down the forty feet to the common, where his round head cracked open on the hard, scuffed concrete floor. The commotion didn't slow Jacob's release; which he expected it would. He was given his old clothes in a black garbage bag, and a Ziploc bag of his meager possessions. The smell of cigarette smoke wafted up at him from the clothes as he pulled them out. He was slightly amazed by the longevity of the aroma. Eight years, and it was like he'd just stepped out of the bar where he'd been arrested. He put the clothes back on, not liking that it felt like his old persona was wrapping itself around him again; he was not liking it at all. His ID card was there. Richard Mosely gazed back at him, his face a mirror to the blank mask that was in front of it. Richard Mosely wasn't there anymore, wasn't needed. With a flick of the wrist the card sailed into the garbage bin, along with along stale pack of cigarettes and a lighter. He put his fairly empty wallet into his back pocket; donned a slate-gray jacket, and strode out of the room, where he was accompanied through the series of gates that led to the unknown. It took him a moment to realize he was out. One minute it was gate after gate, buzzing him through, turning keys, latching bolts, and then suddenly, he was standing facing the a row of boarded up houses. The slam of the gate behind him reminded him of what he was doing and where he was. He straightened. He needed new clothes. A new persona. 
He was Jacob now. The loss was so evident in her eyes, he could have probably looked like Brian the beast, and she probably would have believed him. She clutched herself to him, and her embrace was hard and desperate, the sobs, the snot, the excitement loud in his ear. For a moment he might have even felt it, the warmth, the acceptance, the love, but only for that moment. In the beginning his hardness had persisted; they decided it was a side-effect of the trauma of his prior life. He would soften eventually, smile again, drink beer with pop and work on the truck together. But now, when it was all new, he was stiff. It was okay. He had reason. They didn't ask him questions. They didn't want to; they didn't want to trip up the imposter. They just wanted their son back, and he would do well enough. Helen and Stan. Stan and Helen. She was a tiny thing, 5'2 at most, her steel grey hair was straight and heavy, once a glossy black like Jacob's. She kept it to just below the shoulders in length, a hard straight line of hair, and bangs also in a neat line that still somehow softened her face. She was pear-shaped, with a pretty face and glisteny blue eyes. She wore black slacks with modest flats and a little top of magenta. She'd gotten all dressed up to fetch him at the station. There was no need for manipulation on Jacob's part; they were willing victims of his scam; they were eager and loving. Stan was a taller man, slender and grey. He had a face that showed many years of kindness, and eyes of dark blue nested in the heavy folds of his lids. His jeans looked like they would slide off him at any given moment, they were the dark blue variety, which had a crisp seam down the front of the legs ironed right into them. With that, he had a perfectly pressed pale yellow button-down shirt on, tucked into his belt, which was pulled up almost to his chest. The clothes were stiff on his lanky, bony frame. "Jay-jay..." he kept saying, tears filling his reddened eyes, "...my boy, it sure is good to see you. You sure have lost some weight, you're a skinny one; that's sure about to change, mom'll put some meat on your bones; she remembered you know; she remembered your favorite and cooked up a whole batch of shepherd's pie for you, it's all waiting for you, do you have bags? Let's get them in the car..." he rambled. Helen clung to his arm, gazing up at his face simply beaming with love and happiness. He knew it, though, he felt it; that they knew. But he also felt that they didn't want to believe it, or that they didn't care. Now they had someone to fuss over, someone who wanted to be with them, someone who wouldn't break their hearts and run away and desert them. No, this Jacob came to them, and they wanted him back. They wanted him. He folded his frame into the rickety 1950 chevy truck, squeezing in next to mom while dad turned the ignition. He'd kept the thing pristine; a shining blue, the chrome almost undamaged. It roared to life, and he looked at Jacob expectantly; waiting for him to comment on the truck, waiting for him to recognize it, to acknowledge the familiarity of it. Instead Jacob just smiled blankly at the old man. Stan simply put it in gear and drove, turning his gaze back to the road, too happy for it to matter. They had their Jacob back, nothing else mattered. Nothing else mattered at all. I’ve always been a confused soul. I still, at close to 40, don’t know what direction I want to go. 
Some kids have their future and career all figured out by career day in their sophomore year… others, like me, sort of dangle without bearing or direction pretty much their whole lives, not really fitting in anywhere. The things I went to school for? They are completely uninteresting and useless to me now. We try one type of job, we do pretty well at it, but it gets boring; then we try another type of job, tackle it as a challenge, master it, and then get bored with it and stop caring. That’s been the story of my life. All the assessment tests, IQ tests, career placement tests, all the high-hopes for my becoming an engineer, a scientist or a Nobel Peace Prize winner… Laughable. None of that helps anyone figure out what they are really meant to do, and what really makes them happy and feel fulfilled as human beings. I do know two things for certain: 1) I’m happiest when I’m creating things; and 2) I’m happiest when I have the freedom to really write. Writing is the only consistent craft that has stuck with me from childhood. I’m one of those people that tries everything; wood-carving.. bored with it. Painting.. meh… Pottery… pfft… ::sigh:: Writing on the other hand… that one just never went away. I started reading at a young age, and the idea of being a story-teller was very appealing to me. So I would write clumsy little stories. This sense of creativity was fostered by an exceptional teacher in the fifth grade, who read us wonderful stories, who had us performing abridged versions of Shakespeare, who turned words into images, and who taught us to write, illustrate and to bind our own books. Life affects your creativity. It’s a given. The more work demands of you, the more your family demands of you, the less you write. In my twenties, I churned out several novels. They were all extremely bad, of course, but it shows that there was a well of creativity and I had the energy to stay up until three or four AM (my creativity really peaks between midnight and four AM—not sure why) and still function at work. Of course, these days, I can’t do those hours any more. The older we get, the less time we have for what is ultimately (unless you’re Stephenie Meyer or JK Rowling) a past-time that is squeezed in between your work day, and children (if you’ve got them) and horses and all other things. I can’t afford to make it my career, so it is a peripheral thing. So I am pretty much always unsatisfied with the way things are. I’d simply rather be writing than doing anything else. So, I am squeezing what is in essence, my raison d’être into whatever free time I have, and trying my damndest to find that creative pool inside me where I can tap into it. It’s not easy. Stress, family, work… it taps you out. I drive home on my daily commute; an hour each way, and I try to formulate ideas in my head as I do… what’s the idea? Where is the story going? … Most of the time I end up dwelling on immediate concerns; deadlines, parents, family crises, marital spats… It’s really frustrating. Many of us less than famous authors are faced with this conundrum every day. Somehow, some of us manage to put together a product to sell… Some are good, some are miserably bad… but it’s a hard thing to juggle; trying to succeed as an author in addition to living a life and working a job like every other schlub. We have to dig into our pockets for editors (at least some of us do) and we have to act as designers and marketers to boot. We have to send query after query and receive rejection after rejection. 
But fundamentally, it’s important to us to get our work out there; as a sense of accomplishment in the art that we love, as a way to validate that this is what we are meant to do. It’s worth it, even if we aren’t selling millions and being picked up by Hollywood; even if we are barely breaking even, or in some cases, losing money. Writing is my survival… it is my healing. Writing is my escape as much as reading is. When I think I can’t cope, I write. When I feel like I need to express something I’ve been internalizing, I write. I stay up late, and pay for it dearly the next day; I squeeze in some time on my net-pad during lunch, but I write. Jobs and careers might come and go, but all through it, I’m still writing. It’s what I was meant to do.
d11f028fc38f7641f5ee5b3534d55478
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Pousada Azul is the best place to stay in Jeri. It is 100 meters from the sunny beach, three minutes from the busy main street with the best restaurants, bars and shops of the village, and 50m from the bus stop. The 15 comfortable apartments spread around the pool are an invitation to experience the magic of Jeri. All have air conditioning, television, a refrigerator and a hot shower, and some overlook the best postcard view of Jeri: the sea and the famous “Por-do-Sol” (Sunset) dune. Voted one of the most beautiful beaches in the world, Jeri receives thousands of people from Brazil and from all over the world each season. Many of them fall in love with the village and even settle here. Nowadays, the cultural mix turns Jeri into a cosmopolitan place that has not lost its identity as a fishing village, full of legends and mysteries. This atmosphere is reflected in Pousada Azul. Guests feel at home. A friendly staff welcomes everyone as if they were friends. The region's best fruits, breads, cakes and fresh juices are served at breakfast each morning. Your stay at Pousada Azul will be unforgettable!
8d00bc67019b487d45c3b1bd8739dd6d
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
PARIS (AP) — French opposition lawmakers from the right and left are combining efforts to try to block President Emmanuel Macron's plan to privatize Paris airports. France's lower house of parliament, the National assembly, definitively adopted on Thursday a measure allowing the government to privatize the group operating Paris' three airports, Aeroports de Paris, or ADP. Air France planes parked on the tarmac at Paris Charles de Gaulle airport, in Roissy, near Paris. French opposition lawmakers from the right and the left are joining efforts to try to block the government's plan to privatize Paris airports. Opposition lawmakers from the left and right launched a long process that could ultimately lead to a popular referendum under a procedure introduced in 2008. The Constitutional Council will examine their request. The state owns 50.6% of ADP and did not specify how much it would sell. The centrist government says the move would raise 10 billion euros ($11.3 billion), money that would help finance investment in new technologies. Opponents say Paris airports are strategic hubs.
e5ef87a5c58c51d5aac4f8beb17b26ae
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
Since we’ve started our new Follow Friday series, the response has been incredible. As one reader pointed out, this is more than just some simple Twitter recommendations — it’s an evolving yearbook of the amazing people we’ve been networking with in 2010. Now of course, a number of those amazing people are reading this blog right now, so remember, we’re happy to accept submissions and suggestions. Feel free to use the comments box to nominate exceptional people we haven’t mentioned yet! Dustin Matthews When Dustin Mathews refers to himself as an “Adventurer and Entrepreneur,” he’s not kidding. From Real Estate investing to Information Marketing, Dustin gets started with a very unusual tactic: he approaches the companies that interest him the most, and offers to work for free. As a dedicated worker and a fast learner, he’s turned this experience into expertise across a number of industries. When it comes to marketing, Dustin is constantly overflowing with innovative ideas and eager to take on new challenges. His willingness to re-invent himself and his commitment to excellence make him an easy first pick for this week’s Follow Friday. Tell him we said hello. Dean Edleson is a tenacious professional Short Sale negotiator, and a straight-shooting, honest and helpful businessman. He’s probably best known for his blunt, expert take on the housing market, which he delivers through video blogs on his site. A tireless networker and advocate, Dean is another living example of our motto, “Always Winning, Together,” and a very important character to know. We’ve also found out that Dean got interviewed this week for an upcoming Bloomberg piece where he’s speaking out about “banks pointing fingers at investors as the ‘bad guy’ and what we do as being fraudulent.” We’re looking forward to seeing it get published…and remember, you heard it here first. Barbara Reuter is an awesomely authentic human on Twitter — honest, open to different perspectives and very engaging. She’s also a world class real estate expert, currently the Chief Operator Officer of PICOR, where she’s been paying dues since 1985. Her resume is ridiculous, but what really makes Barbara worth your time is her personality. She’s funny, friendly, and has a super-human knack for fitting a couple paragraphs worth of knowledge into just under 140 characters, pretty much every day. Just in case you needed one more reason to convince you: She’s also a great news filter. We are happy to consider any reader submissions and recommendations. Is there someone you know who should be spotlighted here next week? LET US KNOW!!
41422304346d81b973b9531125df2886
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
TORONTO -- William Nylander scored with three seconds left in overtime to give the Toronto Maple Leafs a 1-0 win against the New Jersey Devils at Air Canada Centre on Thursday. "I just knew when we got out there, there was a minute left and we'd been out there for a little bit," Nylander said. "I didn't know how much time was left when I scored but I was pretty tired, it went back and forth there a few times. I was low energy. Luckily [the puck] couldn't go back the other way after that." Frederik Andersen made 42 saves, including eight in overtime, in his second shutout of the season for the Maple Leafs (13-7-0), who have won five straight games. The past four wins have come without center Auston Matthews, who is out with an upper-body injury. "It's fun I think for a goalie," Andersen said. "It doesn't really change anything, the score of the game but obviously you enjoy a good goalie battle, especially when you come out on top. I think they had a couple looks on the power play and [in] overtime they had some chances but outside of that, I don't think we really gave up too much." Cory Schneider made 24 saves for the Devils (11-4-3), who had won two in a row. "I thought we defended the neutral zone really well," Schneider said. "We really didn't give them a lot of time and space to get going. I thought we played well. It stinks to lose and only get one point after you have an effort like that." Devils forward Kyle Palmieri said it was probably one of the more solid games New Jersey has had at both ends. "If we do that, more often than not, we're going to come out on top," he said. Nylander, who scored for the first time in 12 games, got the puck in the corner, skated into the slot and, using Palmieri as a screen, put a wrist shot by Schneider. His last goal was on Oct 21 in a 6-3 loss to the Ottawa Senators. He had four assists in the 11 games that followed. When asked if he thought the slump was weighing on the 21-year-old Nylander, Maple Leafs coach Mike Babcock said, "Oh 100 percent, for sure. Now, obviously, Willie can loosen up and get playing. When you haven't scored in a while, as a young guy, you get thinking too much instead of just playing and working. They want to score and they want to score every night and if you're a point-getter, you think you should be doing it every night. You're not used to it. The NHL sends you for long stretches [without scoring], every other league you played in never ever sent you for stretches like that." Toronto defenseman Jake Gardiner was called for interference on New Jersey center Nico Hischier 42 seconds into overtime, but the Devils couldn't capitalize on the power play. "That was huge, 4-on-3 in OT, the [penalty kill] came up big there," Nylander said. "I think they were good the entire night too so that was a huge factor in the game." New Jersey was 0-for-3 on the power play; Toronto was 0-for-2. Nylander's goal at 4:57 of overtime. Andersen's blocker save on Taylor Hall at 4:29 of overtime. Schneider's blocker save on James van Riemsdyk at 13:06 of the first period. Zajac played 16:11 and had two shots in his season debut. He missed the first 17 games recovering from surgery to repair a torn left pectoral muscle on Aug 17. … Devils forwards Palmieri and Hall each had a game-high seven shots. … Nylander's goal was his first in overtime in the NHL.
631890ff28a07b43073cf39898126eca
{ "file_path": "/home/ubuntu/dolma-v1_7/c4-0009.json.gz" }
I've got an API call to pull daily precipitation totals for given coordinates. My question is, how might I sync up my API calls with when the data resets for the daily precipitation totals? Right now, I'm just pulling every 6 hours starting from midnight Central Standard Time. While that would work, I already have the data available, so why would I go and pull it all over again?
There's no guarantee you will get the official tally during a day. Using the yesterday call, you will get official and preliminarily checked values if available.
I see, but if that's the case, wouldn't that mean that the daily precipitation total Weather Underground has for the current forecast would be incorrect? If it's not, what's the harm in pulling it every 6 hours and just using that data? I'm not against using the yesterday call, I'm just trying to see the bigger picture here. If I were to use the call, is the yesterday precipitation total a guaranteed amount? What about totals that are returned as a T or -9999 - what do they mean? I'm going to eventually have waterfalls plotted all over the country on a map, so I'm trying to minimize the calls I'm making to Weather Underground for the sake of saving a bit of money. I imagine there could be thousands of locations, and if I'm running a call every 6 hours plus another one, that's already 5,000 calls a day, yikes! I may rethink the way I'm doing the calls so that I can utilize the yesterday totals, because they honestly seem to make a lot more sense. In any case, what time would I need to run the yesterday API call? Does the data restart at 11:45 Pacific time? Now that I think about it, I can drop the last call at midnight at that point! Duh.
I just realized WU charges a minimum of $300 a month for the yesterday call. There is no way I'll be using the yesterday call at this time, as I can't guarantee an income of $300 a month with this website. It's honestly not worth it. I can take the hit on accuracy for now and maybe use yesterday later, that is, if my website becomes profitable. Can you answer the original question? I can use what I have for now and make the upgrade later if things work out.
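Since the thread never settles on a concrete scheme, here is a minimal sketch of the idea discussed above: replacing the fixed midnight pull with one pull scheduled just after the provider's daily reset. Everything specific in it is an assumption for illustration only; the endpoint URL, the response field name, and the 11:45 p.m. Pacific reset time are placeholders, not the documented Weather Underground API.

```typescript
// Minimal sketch: schedule one pull just after the provider's assumed daily
// reset instead of a fixed midnight call. ENDPOINT, the response field, and
// the 23:45 Pacific reset are assumptions, not the real Weather Underground API.

const ENDPOINT = "https://api.example-weather.test/v1/daily-precip"; // hypothetical
const RESET_UTC_OFFSET_HOURS = -8;        // Pacific Standard Time
const RESET_HOUR = 23, RESET_MINUTE = 50; // a few minutes after an assumed 23:45 reset

function msUntilNextReset(now: Date = new Date()): number {
  // Shift the clock into the reset zone, find the next reset there, and
  // return the difference in milliseconds (the offsets cancel out).
  const zoneNow = new Date(now.getTime() + RESET_UTC_OFFSET_HOURS * 3_600_000);
  const next = new Date(zoneNow);
  next.setUTCHours(RESET_HOUR, RESET_MINUTE, 0, 0);
  if (next <= zoneNow) next.setUTCDate(next.getUTCDate() + 1); // already past today's reset
  return next.getTime() - zoneNow.getTime();
}

async function pullDailyTotal(lat: number, lon: number): Promise<number> {
  const res = await fetch(`${ENDPOINT}?lat=${lat}&lon=${lon}`);
  const body = await res.json();
  return body.precip_total_in; // hypothetical field name
}

function scheduleFinalDailyPull(lat: number, lon: number): void {
  setTimeout(async () => {
    const total = await pullDailyTotal(lat, lon);
    console.log(`Final daily total: ${total} in`);
    scheduleFinalDailyPull(lat, lon); // re-arm for the next day
  }, msUntilNextReset());
}

scheduleFinalDailyPull(44.6, -110.5); // example coordinates
```

With a scheme like this, the intermediate 6-hour pulls can keep serving the "current day so far" number, while the one post-reset pull becomes the value that gets stored as the finished day.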
SEOUL — For years, the South Korean government and private Korean organizations have objected to Japanese textbooks that convey a rather sunny version of Japan's imperial and colonial history. Now a textbook controversy is turning Koreans against Koreans, and exposing deep divisions in Korean life. All sides acknowledge that young South Koreans need some understanding of what's going on in North Korea, but how should high schools portray life on the other side of the border? Should they depict their neighbors as enemies or victims? Is objectivity even possible? The government's National Institute of Korean History, convinced it's the arbiter, plans to replace existing textbooks with an authorized "correct history textbook" by March 2017, leading some to accuse the government of spreading propaganda while trampling on freedom of expression and discussion. Conservatives say the liberal scholars who wrote the existing textbooks have tended to ignore the darker aspects of the North Korean dictatorship, while liberals accuse conservatives of wanting to "demonize" the North. One particularly spirited argument revolves around what textbooks teach high school students about juche, or self-reliance, North Korea's avowed national philosophy. Conservative critics say that almost all school texts present juche positively, in the language of North Korean propaganda. They worry that students might grow up admiring North Korea for a philosophy that's observed mainly in the breach because North Korea relies on China for virtually all of its oil, half of its food and much else. Conservatives are just as outraged by the way some textbooks explain the origins of the Korean War. They cite passages in which the authors hold both sides responsible for the North Korean invasion of South Korea in June 1950 that resulted four days later in the capture of Seoul. Liberals, meanwhile, say conservatives want a sanitized version of history. If the government sticks with its plan, they believe that would set a terrible precedent and compromise independent scholarship. The controversy harks back to the bad old days when dictatorial presidents with military backgrounds not only controlled what was taught in schools but also imposed censorship on newspapers and jailed outspoken foes of the regime. Park Chung-hee, who seized power in 1961 and ruled with increasing firmness until his assassination in 1979, was probably the toughest. He, of course, is the father of the current president, Park Geun-hye. Park is by no means as harsh as her father. She has not suggested amending the "democracy constitution," promulgated seven years after Park's successor, Chun Doo-hwan, suppressed the bloody Kwangju revolt in May 1980. Still, she is firmly identified with the conservative party that controls the National Assembly, and she personally ordered the drive to purify school textbooks. Her self-interest aligns with conservative objections to the way some textbooks describe the history of "dictatorship" in the South — a reference to her father's 18 years and five months in power before his assassination — while playing down his contributions to the economy. For liberals, battling dictatorial rule after the Korean War, winning the right to elect representatives and resisting government meddling with textbooks is all part of a continuum, an unending struggle or protest against repression. The situation, however, is more complex than this narrative allows, more multifaceted. 
North Korean schools obviously do not provide young students with anything like an objective version of the Korean War or life in the South. The North vilifies Park and talks of driving her from power. More than 1.1 million North Korean troops linger above the demilitarized zone that's divided North from South since the Korean War. It refuses to stop fabricating nuclear warheads while developing missiles for launching them against targets near and far. In this context, is it reasonable for textbook writers in South Korea to insist on a fair, even sympathetic, portrayal of North Korea? School kids, say southern conservatives, need to comprehend the dangers that confront them. The back-and-forth is not going to stop any time soon. More than 50,000 people have signed a petition against the "correct history" plan, and textbook authors have joined in a lawsuit against the government, accusing authorities of trying to brainwash the young. The matter will also come before the people in election campaigns. The debate bears certain parallels to textbook controversies in the United States. What should Americans be taught about the Vietnam War, or the legacy of American slavery and the civil rights struggle or, for that matter, wars against Native Americans? These questions reflect the difficulties of judging textbooks everywhere. For Koreans, 65 years after the devastation of the Korean War, the issues are not only sensitive but ongoing, part of everyday reality on a divided peninsula. Donald Kirk, journalist and author, has written numerous books and articles on Korea and Northeast Asia.
Your rights and obligations as a landlord or tenant in Florida are defined by state statutes, local ordinances, and federal laws and regulations. Many of these often experience significant changes. It is crucial to employ the services of an experienced lawyer for all aspects of residential landlord and tenant law to mitigate your liabilities and effectively manage dispute resolution. At the Law Office of Sam J. Saad III, landlord and tenant law is at the cornerstone of our residential real estate law practice, including comprehensive lease drafting, eviction and renter services. We serve individual property owners and property management companies throughout South Florida from office location in Naples and the Ft. Myers area. We understand that you want to provide yourself a residual income by leasing your property to responsible tenants. We will help you prepare leases that protect you and your residential real estate. Whether you own one rental property or hundreds, our attorneys are up to date on Florida and federal law. We are able to craft a lease that suits your needs, protects your investment from an unsavory tenant and maximizes the value to you as the landlord. We are well-versed at helping you understand when you can and cannot proceed with evictions. Our lawyers will guide you and help you gather the necessary documentation to protect your residential real estate rights while pursuing an eviction. Evictions can be complex and timing is everything. The law in Florida is very favorable to landlords who work precisely within the parameters outlined. However, it can be very unforgiving for those who do not. We understand the nuances of the landlord-tenant statutes related to lease terminations and evictions, and we will guide you through the process. It is always better to resolve your lease disputes quickly and amicably, but when you must proceed with an eviction, we offer low fixed fees for possession actions and flexible rates for damages actions. It is common for deposit disputes to arise while dealing with the end of the lease term. Our team is experienced at handling disputes related to deposits and when to proceed with formal legal actions. Our residential real estate attorneys will help you make the right decision while keeping your legal fees to a minimum. There are several potential landlord-tenant issues not addressed here, but we handle everything. If you need legal guidance regarding rent rules, a small claims lawsuit, fair housing rights, landlord access to rental property or any other issue, please contact us to speak with one of our attorneys in a free 30-minute consultation. We are available at 239-963-8999 and by email to speak with you today.
This is a placeholder page for Karen Bucky, which means this person is not currently on this site. We do suggest using the tools below to find Karen Bucky. You are visiting the placeholder page for Karen Bucky. This page is here because someone used our placeholder utility to look for Karen Bucky. We created this page automatically in hopes Karen Bucky would find it. If you are not Karen Bucky, but are an alumni of Richwoods High School, register on this site for free now.
Create a page! Develop a brand presence like never before. Advertise all your products intuitively. Create a unique following, and see what fans like about you. Reach your audience in a new and engaging way while connecting your fans to each other through your business. Find out what your customers like and advertise to the right people in a new and effective way that works. Allow us to send newsletters and notifications!
According to Robert Allan, the Cash Code trading software can help you earn thousands of dollars and achieve financial independence by trading binary options. The Automatic Binary Trading Software I Used: BinaryAutoTrader.com. I've got a Binary Book account, and I don't know if I'd say it's an outright scam, but they... A binary option is a financial exotic option in which the payoff is either some fixed monetary amount or nothing at all... No firms are registered in Canada to offer or sell binary options, so no binary options trading is currently allowed... is responsible, and has issued licenses to companies offering binary options as "games of skill", licensed and regulated. Being the most popular and recognizable drink around the globe, coffee is also a perfect underlying asset for trading binary options. Coffee beans are traded by... REGULATED ENTITIES: INVESTMENT FIRMS (CYPRIOT)... APME FX Trading Europe Ltd., Banc De Binary Ltd., Best Choice FBC Ltd. Considering TR Binary Options? Wondering if this broker, formerly known as Traderush.com, is safe, or another scam? Read our review before opening a demo. The US is perhaps the only country in the world that imposes the most extensive guidelines that govern the legality of binary options trading. Regardless, traders...
Large sun dogs appeared on the horizon Tuesday, Feb. 7, as cold air settled into the area. A sun dog is created by light interacting with ice crystals in the atmosphere caused by the colder air settling onto the land mass. They are typically seen in pairs of bright spots to the left and/or right of the sun. The sun dog is usually red, orange and blue in color.
Come home to Croghan Landing! This quiet neighborhood nestled in West Ashley is filled with classic homes, manicured lawns and friendly faces! Just a stone's throw from Downtown Charleston and incredible amenities like Charlestowne Landing, this tight-knit community provides all you need for fun, work, and entertainment. Want to explore your real estate options in this community? Contact us at Dana Properties and we can help you find your ideal home in Croghan Landing. Whether you're thinking about moving to this community or you're selling a home in Croghan Landing, we can help. Tell us about your property below and we'll tell you what it's worth in today's market. Living in Croghan Landing truly puts everything you need right at your fingertips. Enjoy being just moments away from incredible shopping at Citadel Mall or Croghan Landing Center. No matter if you're looking for a big box retailer, or a perfect gift from a unique boutique, you have countless option available. Plus just a stone's throw from Downtown Charleston, you have endless adventures and attractions just a short drive away. We can help you find your home sweet home in Croghan Landing. Call us at 843.883.3934 to talk about your plans, or choose a resource below to learn more about living in West Ashley.
On a fourth and 12 play, Dalton threw over Baltimore defenders and connected with Boyd, who raced to the end zone with 45 seconds remaining in the fourth quarter. With Baltimore's playoff berth on the line, Andy Dalton connected with Bengals wide receiver Tyler Boyd to dash the hopes of the Ravens.
Multidimensional Variants: This new functionality enables OXID eShop to manage a variant of a shop article that can itself have variants. For example, a red T-shirt can come in different sizes. As you might have heard, the product Zend Platform was switched to a new product called Zend Server. As we have an interface for the integration of Zend Platform with OXID eShop Enterprise Edition, we want to update this interface and enable the Zend Server integration. Built-in non-native libraries in OXID eShop will be updated, at least JPGraph and YUI. As we wrote to dev-general, the YUI library can be made much more lightweight by cutting out parts not used in the shop. Also, language management in the administration panel was made easier. Before, only four languages were provided in OXID eShop from the admin area. If you wanted to have a fifth language, you had to define new database fields. From version 4.2.0 on, the shop will automatically take over this job when you create a new language. Deleting that language will not delete the created database fields but will set them back to the default value instead. Helpful popups with additional information were built in close to input fields in the admin area. Check them out by clicking on the “?” buttons you will find there. In close cooperation with our SEO specialist and due to new SEO expertise, some changes in this concept were made: the rel=”nofollow” attributes were removed from most of the links. Furthermore, article links will be built for each category an article is assigned to. A canonical link on the details page will lead to the article’s main category. While fixing a bug (#0001368), a log folder was introduced.
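As a rough illustration of the manual work that version 4.2.0 automates, the sketch below generates the kind of language-suffixed column definitions an admin previously had to add by hand for an extra shop language. The table name, column names, and column types (oxarticles, OXTITLE, OXSHORTDESC) are assumptions made for illustration only and are not taken from the release notes.

```python
# Illustrative only: prints ALTER TABLE statements of the kind an admin
# previously ran by hand when adding a fifth shop language (language index 4).
BASE_COLUMNS = {                      # assumed base fields and types
    "OXTITLE": "VARCHAR(128)",
    "OXSHORTDESC": "VARCHAR(255)",
}


def language_field_statements(table: str, lang_index: int, columns=BASE_COLUMNS):
    """Build one ALTER TABLE statement per multilanguage field for the given language index."""
    return [
        f"ALTER TABLE {table} ADD COLUMN {name}_{lang_index} {sql_type} NOT NULL DEFAULT '';"
        for name, sql_type in columns.items()
    ]


if __name__ == "__main__":
    for stmt in language_field_statements("oxarticles", 4):
        print(stmt)
```

From 4.2.0 on, the shop performs the equivalent of this step itself when a language is created, and deleting the language resets such fields to their defaults rather than dropping them.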
There are so many things to think about when it comes to renting. It helps to be clued up on what to expect, and the main things you should know about. For example, when it comes to repairs in rented accommodation, whose responsibility is it? You would assume your landlord is responsible for most things, but in reality they don't have to do everything. There are small things like replacing light bulbs which are entirely your responsibility. What repairs can you expect to get help with, and what do you do if your landlord won't help? Before renting a property, you should know your rental rights, and what is expected of you. So, read on to find out who handles what and how to handle issues which you may come across. You are also responsible for telling your landlord about any minor or major damage which is in need of repair which you cannot fix yourself, although fixing things yourself should mainly be avoided in case the situation is made worse or if it is a safety hazard. Your tenancy agreement may also set out some express terms on what your responsibilities are for repairs, for example, that you are responsible for decorating your home. Your landlord cannot include a term in your tenancy agreement that would pass on any of their repair responsibilities to you, for example, that you are responsible for repairs to the roof. This type of term would not have any force in law. Your landlord is usually responsible for repairs to things such as water and gas pipes, electrical wiring, water tanks, boilers, radiators, gas fires, fitted electric fires or fitted heaters. However, one important thing to remember is that your landlord mustn't pass the cost of any repair work that is their responsibility onto you. In most cases, your landlord isn't responsible for repair work until they know about it, so it's up to you to tell them about any repairs that are needed. As a tenant, you are completely responsible for your visitors. If you or someone visiting your home accidentally or deliberately causes damage, you'll be responsible for repairing it. You should tell your landlord about the repair work needed. They may agree to do the work themselves and then recharge the cost to you, or they may agree to you fixing it yourself. If you live in private rented accommodation and you don't repair the damage, your landlord will probably claim some money out of your tenancy deposit when you move out. The Citizens Advice website provides some helpful info on what to expect from your landlord.
Founded in 1997, the experts at AAA Insta-Move have been helping Orlando residents relocate to their new homes without the stress and hassles often associated with relocation for more than 20 years. Our crews have decades of combined experience and are committed to delivering complete customer satisfaction. It is our goal that your local move from Orlando to Longwood exceeds your expectations in all areas. At AAA Insta-Move we work diligently to separate ourselves from other Orlando moving companies by consistently delivering an exceptional moving experience. No matter the size or distance of your move, AAA Insta-Move will handle your relocation with the same level of attention and care. This attention begins when one of our moving specialists comes to your home for an in-person consultation and price quote. You will then be paired with your own move coordinator who will guide you throughout the moving process and answer any questions you may have as we progress.
Little Miss BBQ Restaurant – rated as the best restaurant in Phoenix, they serve the best barbecue in the state. The fatty brisket is a bestseller and runs out pretty quick, with long lines still waiting to get some. Do also try the smoked pecan pie. Ribs are best paired with some mustard sauce. Get there early, as there are always throngs of people waiting to try their excellent barbecue. Pomo Pizzeria Napoletana – definitely one of the best pizza places in the city. Service is great, efficient and non-obtrusive. The pizza crust is nice and chewy and the sauce and toppings are absolutely the best. Do also try their tiramisu and crème brûlée for dessert. Aside from the pizza, the mushroom soup is a must-try. Rusconi’s American Kitchen Restaurant – the tasting menu is amazing and comes with a nice salad, scallops, lamb loin and dark chocolate. Do try the half-pound Harris Ranch hamburgers, which are served with some thinly cut fries.
After the First World War the Vorarlberg voted to become part of Switzerland, but the League of Nations didn't allow it to leave Austria, and even today the area is distinct in character from the rest of Austria, even the Tyrol. The resort, part of the same ski pass as Zürs, is also part of the Arlberg ski arena, with St Anton and Klösterle only a short bus ride away - and I can remember a time when I skied between St Anton, Lech and Zürs on marked pistes, but those days have long gone for reasons I know not. Lech and Zürs between them have the reputation of being the most upmarket resorts in the whole of Austria, but the skiing is very good even ignoring the other Arlberg resorts you can easily access. From 2013 there is also a gondola link to Warth-Schröcken, which has some of the best snow conditions in Austria and significantly extends the terrain you can get to on the lift pass. Did you ski Lech or snowboard in Lech? What did you think? Have you taken the train to Lech? Do you have any tips on accommodation in Lech or the après-ski? Is there anywhere else you have taken the train to ski? If so, please contact us and share your experiences using the contact link at the foot of the page.
If you’re anything like me, the weekend is a time to get caught up on projects and get some uninterrupted work done. That’s all well and good, and yes, staying on your grind or working that side hustle is important. Yet, with five days of work and more work on the weekend, when do we rest and rejuvenate? This weekend, let’s employ the wisdom of a baby and get some rest – no, some sleep. Our bodies and minds can be renewed, and we can get back to business with renewed fervor. Z z z z z z z z z z . . . . . . .
The Infrastructure Association of Queensland is proud to present the 4th Annual Queensland Infrastructure Summit, coming up on 14 September 2018 at the Brisbane Convention & Exhibition Centre. Queensland requires significant infrastructure investment to meet the demands of its growing population and expanding regional markets. Through the core theme of building consensus, the Summit will address pertinent issues on achieving an economic vision for Queensland. For instance, how should the state fund its most important projects? How can a pipeline of investable projects be grown and attract the private sector? What do changes in Federal priorities mean for Queensland? This year’s future-focused Summit also features the interactive IAQ Infrastructure Innovation Challenge, which is supported by Arcadis, the Queensland Government, QUT, and Struber. We invite infrastructure contractors, planners, policy makers and advisers to weigh in, share big ideas and witness the possibilities that targeted, determined action can achieve in one big day at what is Queensland’s premier knowledge and networking event for infrastructure. Your ticket to the Summit includes access to the popular post-event IAQ Sundowner Drinks Experience, at Divine at Treasury Brisbane Arcadia, a special Brisbane Festival venue in the South Bank Cultural Forecourt.
This course is designed for senior managers responsible for defining and shaping the direction for their departments or organisation. The workshop will provide a process for writing strategic business plans based on existing and emerging market factors, as well as provide the tool kit to gain support and momentum within the organisation. Bring along reports about your industry, marketplace and business. Be prepared to share your key insights from these with the group. Please note this is the standard course outline. For in-house training, we can tailor this course to your specific needs. - What comes first in strategic planning? - Risk Management. Identifying and prioritising types of risk and the action options.
24-hour strike in Rome involves ATAC and Roma TPL. A 24-hour strike by Rome's public transport company ATAC is scheduled for Friday 12 January, from 08.30 to 17.00 and from 20.00. The strike will affect Rome’s buses, trams, metro and light rail services Roma-Lido, Roma-Viterbo and Termini-Centocelle. Employees of Roma TPL, the capital’s suburban bus company, will also join the 24-hour action on the same day during the same times. Trade unions have called the strike action over fears that ATAC working conditions will be impacted as a result of the city's recent plan to restructure the troubled transport company. For details of the latest public transport strike see the city website.
BUY A KUBOTA MUFFLER AND SAVE! Located in the Greater Toronto Area, Nett Technologies Inc. is an innovative emissions control company that specializes in the design, development and manufacturing of proprietary and non-proprietary Kubota emission control solutions using the latest in 3-Way and Diesel Oxidation Catalyst (DOC) technologies. Sold and supported globally, Nett Technologies Inc. uses the best of materials, innovative product design and quality workmanship to manufacture its products. We are the industry leader and an innovator of direct-fit emission control solutions for Kubota equipment. Our Kubota direct-fit mufflers replace the OEM mufflers while retaining the same envelope size. Additionally, our Kubota aftermarket mufflers keep key engine performance attributes like sound attenuation and back pressure characteristics. Discover today the many ways Nett Technologies Inc. can help you and your organization with all your Kubota emission control needs.
Golden State Warriors shooting guard Klay Thompson is expected to be one of the coveted superstars in the 2019 NBA free agency. Thompson may still not be on the level of Kevin Durant or Kawhi Leonard, but his ability to excel in an off-ball capacity makes him an incredible addition to NBA teams who need additional star power on their roster. After struggling earlier in the 2018-19 NBA season, Klay Thompson succeeded to regain his All-Star form and is currently averaging 21.8 points, 4.0 rebounds, and 1.2 steals on 46.5 percent shooting from the field and 38.3 percent shooting from beyond the arc. Despite being involved in numerous rumors, Thompson made it clear that he wants to be a Warrior for life. On Twitter, Mark Medina of the Mercury News revealed that Klay Thompson is planning to stay long-term in Golden State no matter what Kevin Durant’s offseason decision will be. Like Thompson, Durant is also set to become an unrestricted free agent next summer. Medina added that Thompson has no intention of giving the Warriors a discount and expected to demand a max contract. Klay Thompson has every right to demand a max contract from the Warriors. Thompson was one of the players who helped the Warriors return to the NBA Finals in 2015 and win three NBA championship titles in the last four years. If the Warriors lowball the All-Star shooting guard, there is a strong possibility that he will consider entertaining offers from other NBA teams. One of the potential landing spots for Klay Thompson is the Los Angeles Lakers. The Lakers are one of the few teams who could create enough salary cap space to sign a max free agent next summer. According to Adrian Wojnarowski of ESPN (courtesy of SB Nation’s Silver Screen And Roll), Thompson may consider joining the Lakers if they succeed to acquire Anthony Davis via trade and the Warriors don’t offer him a max contract. Luckily for the Warriors’ fans, Golden State doesn’t seem to have any problem giving Klay Thompson or any of their core players a huge payday. Per Tim Kawakami of the Athletic, Warriors owner Joe Lacob said that no one is going to outspend their team regarding their own players when free agency hits next July.
This weekend has been all about work, but yesterday I had some free time to actually prepare a good, hearty breakfast which we had in bed. Mozart decided to join us too, as he always does. Especially when there are pancakes. Pancakes are his favorite thing in the world, except maybe bacon and cheese, but unfortunately he didn’t get to taste these ones as they had chocolate in them. I bet he’d love some bacon/cheese pancakes! Hmm, possibly for his birthday. They also come in the most beautiful range of muted colors, I decided to go for almond for the duvet and hazelnut for the cushions, and an almond linen throw. I’ve had grey bedding for a while and I must say these give a warmth to the room, it’s way more inviting now. As for these pancakes, they’re definitely perfect for an indulgent Saturday breakfast! *This post was made in collaboration with By Mölle, but as always, all opinions are my own! In a medium bowl, whisk together the almond flour, baking soda and salt. Add honey, egg and egg white and stir until smooth. Add water or milk to the batter until consistency becomes a bit more runny, but still thick (you might need to add a little bit more to the bowl while you’re frying as the batter thickens). Stir in the chopped chocolate. Heat a nonstick skillet over medium heat. Grease with butter. Ladle a little less than 1/4 cup of batter for each pancake into the skillet and cook over medium-low heat for about two minutes, then flip the pancake and cook for about 1-2 minutes. Repeat with remaining batter. Serve with maple syrup. I’ve been thinking about making these cookies for a couple of weeks now. I’m pretty bad at planning posts for my blog – I bake and shoot whenever inspiration hits me. But with these, I simply couldn’t get them out of my head. I’m probably not presenting anything new or innovative with this recipe, but it is a pretty darn good cookie, and sometimes that is better and more delicious than anything (and did I say addictive too?). I adapted the recipe from my regular chocolate chip cookies, I just added a bit more butter. Oh, I added pecans too, because I had chopped pecans in my pantry and I wanted to use them for something. If you haven’t baked with buckwheat before, you definitely should! It has a particular flavour, almost a little hazelnutty. As for the texture, they’re definitely just like the “real” deal. We ate them all in one go. Except one that I saved for the next day, you know, just to check the texture. I think I liked them even better the next day. 2. Mix buckwheat flour, baking soda and salt in a medium bowl. 3. Beat together sugars, vanilla, egg yolk and melted butter until smooth and slightly paler in color. Add the dry ingredients, chocolate and nuts and stir until combined. Wrap dough in plastic wrap and put in the fridge for 1 hour or overnight. 4. Heat oven to 175°C (350°F). Line a baking sheet with baking paper. Divide the dough into 10-12 pieces (I use 35-40 g per cookie) then place them on the baking sheet. 5. Bake for 10-13 minutes. Let cool completely. Can I just say how impressed I am by this cake? I had a whole bunch of leftovers from the last “raw” cake I made so I took the opportunity to make a chocolatey version too, cause you know, you gotta have chocolate. I pretty much improvised so there are all kinds of good stuff in this one. It tastes a bit like coconut, a bit like peanut butter, banana, and chocolate of course. And just the right amount of espresso. That might sound like a lot of stuff, but to me, it’s perfection. 
And let’s talk about the texture. This cake is so creamy and smooth, like a mousse cake. I would never in a million years guess that a piece of this cake is actually pretty good for you. Almost too good to be true, right?! If you compare this cake to the last one I made, this one is definitely more like a mousse cake while the other one is a bit more like ice cream. I love them both and I’m sure I will make them again and again. Makes 1 small cake (18 cm/5 inches) but you can of course double the recipe ro make a bigger one! 1. Soak the dates in warm water for 15 minutes then drain. Put almonds in a food processor and pulse until finely chopped. Add the dates, coconut, cocoa powder, salt and pulse until a paste forms. Add espresso as needed and pulse until paste is smooth. 2. Press mixture into a 6 or 7 inch springform pan with the bottom covered with plastic wrap (if you don’t have a springform pan, use a pie tin or a regular cake pan covered with parchment paper). Put the pan in the freezer while you prepare the filling. 3. Put coconut oil, coconut milk, cocoa powder, maple syrup, peanut butter and coffee in a sauce pan and heat very gently until mixture is quite loose and all ingredients have melted (this step is to make the filling a bit looser – or it will be too thick to mix smooth in the food processor/blender). 4. Pour the ingredients from the sauce pan plus the cashews and the banana into a blender or food processor and mix until mixture is as smooth as possible. Pour the filling into the crust, cover with plastic wrap and freeze for 2-4 hours or until filling is set. After the mixture was set, I actually stored the cake in the fridge instead and it held up fine! It was more like a mousse cake. Delicious! So if you know you will eat the cake within a day or two – store it in the fridge, if you want to save it longer store it in the freezer. 1. Mix all ingredients in a small bowl. Powder some extra cocoa powder on top of the cake. Drizzle sauce over cake and top with chopped pistachios and shredded coconut if desired. I’m a little bit too excited about this post. I received the most beautiful bed linen from Evencki the other day. This lovely plum color makes my knees weak and it was just begging to be photographed with a pretty cake. My inspiration has been low lately and this was exactly what I needed. This, and the beautiful spring weather we’re having (our thermometer says 12°C today!). My friend Lua experimented with a raw and vegan cheesecake the other day and she told me all about it when we met over the weekend. So I had to give it a try, too. And I must say I’m very impressed with the outcome. Incredibly creamy, sweet, tangy, rich and just perfect. I know, this blog is usually all about the sugar and the flour, and it has been for the past.. 5 1/2 years now. Maybe you’re thinking I’ve stopped eating sugar, flour, eggs and dairy. I haven’t, though I really enjoy trying something different every now and then. It’s a great way to find some new inspiration, because sometimes you just get stuck in your ways. It’s like when I’m trying to think of something to eat for dinner. We have like 5 dishes that we eat over and over again and I can’t imagine that there are other dishes in the world except for these 5. I’m like.. “uhmmm, maybe pasta, rice… or potatoes”. Does anyone else feel the same way? Oh, and this cake totally looks huge in the photos but in reality, it isn’t. It’s only 12 cm, so it’s tiny and cute. 
Makes 1 tiny cheesecake (12 cm/5 inches) but you can of course double the recipe ro make a bigger one! 1. Soak the dates in warm water for 15 minutes then drain. Put almonds in a food processor and pulse until finely chopped. Add the dates and salt and pulse until a paste forms. 2. Press mixture into a 5-inch springform pan (if you don’t have a springform pan, use a pie tin or a regular cake pan covered with parchment paper). Put the pan in the freezer while you prepare the filling. 3. Put all ingredients for the filling in a blender and mix until mixture is as smooth as possible. Pour the filling into the crust, cover with plastic wrap and freeze for at least 4 hours. 4. Remove the cake from the pan and leave to thaw in room temperature for about 10-15 minutes before cutting. Top with freeze dried blueberry powder, fresh blueberries, shredded coconut and edible flowers. I usually run my knife under warm water (and dry off) before cutting, to make a cleaner cut. Let each piece thaw for a few more minutes before serving.
Don't play dead. This does nothing but make the shark think it has won. The shark will then commence chomping. Clearly, this is not what you want it to do. Also, if you've been attacked, get away as fast as you possibly can. Sharks smell blood. You didn't fare too well with the first one and there are probably more on the way.
This is highly unusual. Some men celebrated their survival in an uncommon manner after being involved in a truck accident in Imo State. It was gathered that the heavy-duty truck from Okigwe suffered brake failure while in motion and landed in the Efe River near the Enerco construction company in the Okigwe area of Imo State. Rather than become sorrowful following the crash, the driver, conductor and well-wishers celebrated at the accident scene with bottles of beer, since no life was lost.
What would you think if an unauthorized person gained access to your personal files? Would someone be able to find information that is meant for your eyes only? Would they be able to do harm? I’d guess that they probably could—at least, that’s the case with my personal files. Whether data is personal or business related, important files have to be secured, and that brings us to a potentially incredible solution: TrueCrypt. Most systems are not secure, even though known “security” measures such as Windows passwords, ZIP file passwords, BIOS passwords and FTP/Web passwords imply security. The truth is that everything that is handled or stored in plain text—which is the case for most of the examples above—can be bypassed. Windows passwords are stored in the system memory and only provide security as long as other ways of access, via network or USB, aren’t available. ZIP files can be accessed with some patience using brute-force attacks, and many Web services don’t use any encryption at all when handling login data. True security is only possible if data and transfers are protected with modern encryption algorithms using solid passwords. When I think about security products, I recall features such as the Trusted Platform Module on motherboards to validate systems, software, or users. There are components with integrated acceleration for encryption and decryption workloads; VIA’s Nano processor is a recent example. And then there are components that even come with built-in encryption: self-encrypting hard drives are popular, and Windows Vista supports BitLocker when you purchase the expensive Ultimate or Enterprise editions. However, most solutions come with a catch. They either require you to purchase software or hardware, or you have to change the way you work on your system(s). In addition, not all security solutions are truly secure, as there are sometimes ways around security features, which compromise your data. External hard drives with built-in encryption sometimes have intended or unintended backdoors; other examples are mentioned above. TrueCrypt has been around as an open-source encryption tool for a few years. Its main application was the creation of so-called encrypted containers to store files in a secure manner. Containers can even be mounted as Windows drives in recent versions of the tool. With the introduction of TrueCrypt 6.0, the tool was given the ability to encrypt an existing Windows installation on the fly, which means adding an extra layer of security by encrypting the entire system drive or partition. In our tests, this worked really well. In fact, our positive experience was the impetus to write this article—we found subjectively that TrueCrypt wouldn’t even slow down your system despite real-time encryption and decryption of your entire system instance and data.
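To make the point about "modern encryption algorithms using solid passwords" concrete, here is a small, self-contained sketch of password-based file encryption in Python using the cryptography package (PBKDF2 key derivation plus authenticated encryption). It is only an illustration of the principle; it is not TrueCrypt, and it does not create mountable containers or encrypt system drives.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(password: bytes, salt: bytes) -> bytes:
    """Stretch a password into a 256-bit key; the high iteration count slows brute force."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))


def encrypt_file(path: str, password: bytes) -> None:
    """Write an encrypted copy of the file, storing the random salt next to the ciphertext."""
    salt = os.urandom(16)
    token = Fernet(derive_key(password, salt)).encrypt(open(path, "rb").read())
    with open(path + ".enc", "wb") as out:
        out.write(salt + token)


def decrypt_file(path: str, password: bytes) -> bytes:
    """Read back the salt and ciphertext and return the decrypted plaintext."""
    blob = open(path, "rb").read()
    salt, token = blob[:16], blob[16:]
    return Fernet(derive_key(password, salt)).decrypt(token)
```

The weak link in any such scheme remains the password itself, which is exactly the article's point about solid passwords.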
How do you open a file with the .ISYM extension? A .isym file is saved in a specific format, which we can recognize from the file's extension, and is launched by specially dedicated programs. The most common problems with .isym files downloaded or received via e-mail are their incorrect association with programs in the registry, or simply the lack of an appropriate program to open them. To solve the problem with a .isym file it is usually sufficient just to download the appropriate software that supports the .isym file format, which can be found in the table below. The most common file format with the .isym extension belongs to the category Developer Files. Graphisoft is responsible for creating the .isym file extension.
Sashay into the room wearing the curve enhancing Studio 17 12162. This Studio 17 dress features a halter neckline atop a slim bodice highlighted by a jeweled belt that criss-crosses over the open back before ending in a charmeuse fit flare skirt with a dramatic sweep. This Studio 17 dress is perfect for promising to tie the knot at your engagement, the formal event of the year, observing the holidays, honoring our armed forces at a military ball, capturing the crown at your next pageant, attending a party, making prom unforgettable, making an appearance at a reception, making any special occasion remarkable, celebrating a friend's wedding, feeling extraordinary at your winter formal, or any once in a lifetime event you may be attending.
My name is Lou Guadagnino. I co-own Living Stress Free and I am a Living Stress Free Life Coach. One of the first things I share with my clients to help them build the life they want to live — no matter what their background or current circumstances — is the power of happiness. Happiness is more than a pleasant fleeting experience. Happiness is a compass that helps us know when we are on-course in our lives and when we are off-course. Happiness is also a lighthouse that has the power to show us the way home when we feel lost or hopeless. Happiness is a positive contributor to good health, longevity, fulfilling relationships, spirituality, wealth, and success. In nations where people rank high on the happiness scale, people live longer, share more social contacts, have stronger family relationships, and experience less crime and violence. Happiness is our natural state. We all have the potential to live a happy life. Many of us believe that becoming happy is the result of obtaining specific possessions such as wealth, relationships, status, health, popularity, and many other examples. Unfortunately, this belief often fails us over time because happiness derived from getting what we want is not sustainable. In LSF Life Coaching we take a different approach. We teach you how becoming happy naturally leads to achieving the things that create happiness. The good news? Even if you do not achieve all of your dreams you still have achieved happiness! When you begin LSF Life Coaching, we will start working on developing LSF's Three Fundamental Foundations of Happiness: Reducing Stress, Balancing Lifestyle, and Fulfilling Life’s Purpose. These 'foundations of happiness' may sound simple but like all valuable life-skills, they aren't developed by accident. Becoming familiar with the foundations of happiness takes purposeful intention and the development of specific skills. You provide the purposeful intention and we will teach you the skills. LSF Life Coaching is a unique life coaching method that is unavailable anywhere else. Our method includes a combination of information and techniques that have been time-tested. Through regular coaching support, anyone can make the changes they want to make to have a life of happiness and fulfillment. Living Stress Free was created in 2011 and since then thousands of people from three continents have used its unique services and products to achieve their goals and build the life they want to live. You can learn more about the Living Stress Free method by reading: The Living Stress Free Bible: 20 Techniques to Make Your Life Less Stressful by Marilyn Sydlo Guadagnino. Ready to start developing the life you want to live? Begin right now by scheduling an appointment either online or in-person with Lou Guadagnino. Have questions? We love questions. Email or request a phone call here. 1. Email me to discuss your needs and schedule an appointment. 2. Insurance is not accepted. Pay for your session on the website before your appointment. Please notify 24 hours in advance if you cannot attend a session to reschedule, barring any emergency situations. Otherwise, payment is due. No refunds.
We have a proven track record and extensive experience in managing domestic and commercial projects. These vary from minor internal improvements and alterations of domestic properties, to extensions and conversions of listed commercial premises. Our services include: preparation of a detailed Schedule of Works / Specification for the proposed works; production of all necessary tender documentation; contacting potential contractors suitable to carry out the works; meeting contractors on site during the tender period; conducting a fixed-price competitive tender with identified suitable contractors; analysing the received tenders; and negotiating the final contract sum with the chosen contractor.
For our academic writing purposes we will focus on four types of essay. 1) The expository essay. Your reaction to a work of literature could be in the form of an expository essay, for example if you decide to simply explain your personal response to a work. A typical expository writing prompt will use the words explain or define, such as in, "Write an essay explaining how the computer has changed the lives of students." Notice there is no instruction to form an opinion or argument on whether or not computers have changed students' lives. The other forms of writing to consider are creative and persuasive. Expository essays are unlike text response essays for a few reasons. Firstly, they are not constrained by a strict structure in the same way. In fact, there are many different possible structures for an expository essay. Learn how to write an expository essay with this guide to the different types of exposition. Find tips and strategies for writing an expository article. Growing Expository Response Skills: Activities & Examples. An expository text is a piece of writing that is used to inform, explain, or describe. The information in expository texts is well organized. Expository Essay Outline: if you're in the position where you need to write an expository essay, but aren't sure where to begin, feel free to get started with this expository essay outline template (Word).
The website for PAYBACK is a very advanced project from a developer’s point of view. The scope of the project included the creation of highly compatible, optimized HTML/CSS templates and the implementation of an advanced CMS solution. We’ve chosen Drupal CMS, which allows the site’s admins to manage content intuitively, but also lets us implement some other, more advanced mechanisms within the site, import data, and integrate the site with other external systems.
Do you think Rainbow Quartz would look the same if Pearl fused with Steven, or different? What differences do you think she (or they) would have? Throwing some meep morps in the mix here! Fan Theory: Why Pink Diamond Being Rose Quartz Is The BEST THING To Happen To Steven Universe! Many predicted it, and yet many were still surprised. The Crewniverse is all about leaving clues, and they have fun when fans pick up the trail and theorize correctly. But even with the reveal being a plot point planned from the beginning, there are mixed reactions. SkywardWing, however, thinks it's the best thing. So what do you think? Best or worst thing to happen to the series? Hey, the Diamonds caused corruption and now Steven is a Diamond... So does that mean he has the power to undo what the others did? It seems like healing corrupted gems would be the next logical step SU could take story wise. Description: One of Pink Diamond’s Agates has successfully organized a peace conference between Homeworld and the Crystal Gems. The only problem is, not only are Rose Quartz and Pink Diamond mutually unaware of the conference until the last minute, the two are actually the same person! How will she pull this one off without blowing her secret?
BAMSI's Family Support Center serves as a hub, offering a wide range of general family support services and activities to families of children and adults with developmental disabilities. Program services include: information and referral, flexible funds management as needed, support groups, parent networking and mentoring, facilitation of social and recreational events, service navigation (short- and long-term), community affiliation/linkage, and a resource library. The Family Support Center provides quality service navigation for families of children and adults with disabilities. This service is comprehensive, with individualized information, guidance, and support to families to address their specific needs, connect them to potential resources, and assist them with specific problem-solving as staff helps families navigate the complex service system. The Center is designed to assist families in understanding the various systems of supports that are often fragmented and difficult to access. BAMSI’s Family Support Center assists families to identify and develop resources, while also working toward empowerment in an effort to support the emerging independence and self-determination of their disabled family member. As with all BAMSI family partnership programs, BAMSI staff will work alongside family members and individuals, recognizing and supporting their strengths while respecting their privacy and honoring their choices and preferences.
POWER MASTERY© is a success development program that equips individuals for personal excellence and mastery! The COACHING PROGRAM is centered on a co-creative relationship. It focuses on working with the client to identify and address debilitating beliefs, break maladaptive patterns, install empowering mind-sets, develop personal vision & goals, utilize peak-performance strategies, increase motivation, and build a personal brand & image for success! We work with clients to achieve breakthroughs, identify solutions, make effective decisions, and develop strategic leadership in their career development. We also work with individuals who want to discover their strengths and develop their potential, or who may require psychological or life-skills coaching, or help with social-emotional and relationship management. The syllabus of this program is framed around sound psychological theories, cognitive-behavioural approaches, strengths-focused techniques, and neuro-linguistic programming to help you break through and achieve effective results! GLENN LIM (M.SocSc, M.Ed, Dip Ed Psych) is a Master Trainer and Cognitive-Behavioural Coach, whose passion is to see individuals break through and overcome their setbacks to achieve results. He is an Advanced Behavioural Analyst, and has coached C-suite leaders, directors, professionals and business-owners, as well as worked on student and family-related issues. As a trained counselling psychologist, he assesses clients using psychometric instruments, challenging them to break unhelpful habits. Glenn works together with his clients to unlock their potential, discover their purpose, and develop action-plans to achieve their goals. He also brings in strategic business and leadership insight to help professionals develop their organisations. Preferred Rate: $960 ($240 per session) for an indefinite period of time only.
The broody goose is sitting on her last remaining non-viable egg (we ate the rest). It's been almost 6 weeks now and she's still sitting tight, and in the last week or two, only getting off very rarely (some days not at all, or not that I've noticed). Should I now remove the egg next time she gets off? Force her off? Also, she has straw stuck in one nostril - should we be concerned? Or will it work its way out once she spends more time on - and in - the pond? Broody geese can be tricky (personally I would rather tackle a gander than a broody goose) but if the egg is non viable then next time she leaves the nest to bathe or feed, shut her away from the nest (is it inside or out?) and remove egg, nest and all bedding completely. If she has nothing to return to then she will stop sitting, although she will shout and moan for at least a week afterwards - be prepared!! I wouldn't worry about the straw, once she has had a good bath it will probably work its way free. and sorry, I can see you have a dilemma there but it's been tickling me all day. I find that straw stuck in the nostril is a definite sign of needing to get out more! Hope she isn't too upset at being evicted. Thank you both! I'm very fond of poor Goosie, and really feel for her since she lost her mate to a fox the winter before this after a long time together. This winter, she was nearly taken by the same fox - cue much noise and kerfuffling in the middle of the night and I ran downstairs to find her standing right outside the back door on the mat, under the two back door lights that stay on all night (and very close to the pots that the dog pees on all the time). The other half (mine, not hers) refused to let me let her into the kitchen to keep her safe, despite my tearful protestations, but she anyway took herself off to the pond for the rest of the night. So, to cut a rambling tale a bit shorter, she is indoors - kind of - at night, albeit indoors with an open side and under the granary. Next time she leaves the nest and I am here, should I simply shut up the chicken coop, effectively shutting her out, and hope the she sleeps on the pond as she did, or remove egg, nest, etc from the coop, but leave her able to go in there (open as it is)? I would remove the nest but still allow her access afterwards. She needs to see that her nest has gone and make the decision herself to return to the pond. She may still try and sit on a bare patch of ground but I don't think she'll do it for long. Last year one of my geese made her nest in a stable and sat on a huge pile of eggs, most of which went rancid. The stink was horrendous! I needed to get rid of all but the goose knew what I was up to and despite shutting her out of the stable area she went completely beserk when she saw me loading nest and stinky contents into the wheelbarrow and shouted at me loudly each time I went near. I did really feel for her though when I allowed her back into the (now clean) stable, she made these awful keening noises for days, mourning her rotten eggs. I did placate her later on though by giving her a couple of half grown goslings to foster. Would a couple of goslings be an option for Goosie, Kate? Marigold wrote: Would a couple of goslings be an option for Goosie, Kate? Goosie came out today, to eat some feed I put down for her (though she was very wary of me) and to go for a bathe on the pond. I swiped the two eggs (she'd obviously snuck in another one at some point) but she came rattling back before I could do anything else. 
She's back on her nest, which is a true thing of beauty, every now and then hunting for the missing eggs. I'll see how she goes, and if she stays on there, will swipe the nest next time she gets off. For all you know, there MAY have been a stray gander that happened by, at some point, and so there MAY just mysteriously "appear" a couple of goslings some day......! Nobody's fault. These things happen. And bless her, she must have been lonely.
At £70,000 (again, you DID read it right) for a pair, I can safely state the House of Borgezie platinum Cleopatra stilettos will never be the object of my desire. For a metal shoe, I wonder how comfortable it can actually be?! Having said that, the shape is pretty, although I really don’t like that BORGEZIE is written across the heel and strap (which is much more visible on the gold version). This is what I think Cinderella’s crystal slippers could’ve looked like if they were sandals (minus the letters – maybe have it spell CINDERS?), although I much prefer Christian Louboutin’s version of said slippers! The House of Borgezie is also behind the Eternal Borgezie Diamond Stiletto. And wait for it – encrusted with an astounding 2,200 diamonds, with a total weight of 30.00 (!!!) carats. If the £70K platinum price tag is a bit too much, go for the more affordable white or yellow gold versions of the handcrafted, custom-made Cleopatra, at £60,000 (gasp! again). If you want to take a peek, head over to The House of Borgezie website.
In this case, it will be a great idea to ensure that your dining room has the best table so that you can enjoy meals with your family. Make certain that you have chosen the most suitable dining table for your dining space, and you will see the benefit of doing that. In order to satisfy all of your dining wishes with no hitches, ensure that you have gone for the right dining table. You will easily be able to satisfy all of your wants with no snags if you make sure that you have bought a good granite dining table, and many other benefits will be experienced at a great level. More benefits will be on your side all the time if you certify that you have gone for a granite dining table. Keep in mind that granite dining tables are durable, and that is one of the reasons why you should ensure that you have purchased one for your home dining room. You should know that granite dining tables are pocket friendly, and that is another reason why you are always advised to go for these types of dining tables. You should know that granite dining tables come in lots of diverse models, and it will be a good decision if you pick the right one. Selecting the right model is a very daunting task, but you can find the one that will satisfy your needs if you consider a few factors first. Searching on the internet is one of the methods that you can use in order to locate the best and right granite dining table. You will have no size issues to work on at any time if you make sure that you have considered the size aspect in a serious manner when buying a granite dining table. It will be a great idea to ensure that you have selected a granite dining table with a color that will match your dining décor. It will be a great idea to make sure that you have selected a granite dining table that you can pay for with no hitches, so that you can avoid breaking your budget and at the same time circumvent any money matters that may arise when paying for the item. Learn more about barbecue grills at https://en.wikipedia.org/wiki/Barbecue_grill.
The change in growth trajectory was discovered when the city did a half-decade check on census data and came up with an estimate of 759,727 Bostonians by the year 2030. In a new report, city housing leaders laid out their expanded housing plans. Goals for construction of affordable housing are staying roughly within the same ratios as the prior plan, officials said. About one in five units in the city are deed-restricted as affordable housing, reporters were told on Tuesday. With the updated estimates, this would mean 15,820 new income-restricted units by 2030, bringing the citywide total of such dwellings to 70,000. When the city rolled out its initial plan in 2014, it already looked like Boston would be seeing robust growth. The city was expected to grow by about 91,800 people over the next 16 years. These numbers are seen as resulting from the city’s major economic boom, increased job growth, people wanting to live in the city to begin with, and older residents trying to move back in. Officials were only beginning to see these signs in 2014. Almost 18,000 new units have already been completed, housing leaders said, with another 9,480 under construction since the original plan was laid out. Things are on pace to meet the initial 2030 goal almost a decade early if the market stays in good shape, Walsh said. Areas targeted in the new report align with the neighborhoods in the city’s Imagine Boston 2030 plan, like Newmarket and Widett Circle, the Fort Point Channel, Suffolk Downs, and Readville. Planning initiatives like those around Uphams Corner and Glovers Corner should help shape zoning and growth in tailored village areas, as will forthcoming planning studies, including one in Mattapan. A certain amount of housing considered to be affordable for middle-income earners comes out of private markets, said Boston housing chief Sheila Dillon. “The market has produced, according to our analysis, 5,700 middle-income units that are not deed-restricted,” she said, “but [it’s] also important to meet the housing shortages and needs,” she added. The city’s focus remains on building homes for low- to middle-income renters and buyers. In the update, the city’s targets are 8,300 units of general low-income housing, 2,000 units of low-income senior housing, and 5,520 units in the middle-income-restricted category. The new look at the city’s demographics shows a future population that is older and more financially vulnerable. “We are where we should be for the various income groups,” Dillon said. Seniors — those 65 and older — will make up 56 percent of expected 2010 to 2030 growth, according to the new report, and households making less than $49,000 a year will account for 43 percent of the growth over that same time frame. Housing officials highlighted a number of existing initiatives meant to produce affordable housing. Since 2014, the city has awarded more than $115 million in funding and made 1.4 million square feet of city-owned real estate available for affordable housing. To that end, Dillon said, about $50 million each year going forward will be dedicated by the city to affordable housing. As to money coming in, an increase in linkage fees to 8 percent and the 2016 overhaul on inclusionary development, which may be in line for another update, have upped the revenue stream from private development. The recently passed Community Preservation Act will likely net the city $18 million next year, Walsh said.
One new strategy rolled out in the updated program involves the city purchasing 1,000 market-rate rental units and converting them into deed-restricted affordable homes. More broadly, an announcement is expected to come soon from the Metropolitan Mayors Coalition on their regional housing targets.
The dried root of Brauneria angustifolia, Linné (Echinacea angustifolia [DeCandolle], Heller). (Nat. Ord. Compositae.) In rich prairie soils of western United States, from Illinois westward through Nebraska and southward through Missouri to Texas. Common Names: Narrow-leaved Purple Coneflower, Purple Coneflower, Coneflower. Principal Constituents.—Minute traces of an unimportant alkaloid and an acrid body (½ to 1 per cent), probably of a resinous character linked with an organic acid. The latter is the chief active principle of the drug. Preparations.—1. Specific Medicine Echinacea. Dose, 1 to 60 drops, the smaller doses being preferred. Usual method of administration: ℞. Specific Medicine Echinacea, 1-2 fluidrachms; Water, enough for 4 fluidounces. Mix. Sig.: One teaspoonful every 1 to 3 hours. 2. Echafolta. (A preparation of Echinacea freed from extractive and most of the coloring matter. It also contains a small added quantity of tincture of iodine. The label states that it is iodized). Dose, 1 to 60 drops. Usually administered the same as the specific medicine; except when iodine is contraindicated, or is undesired. 3. Echafolta Cream. An ointment for external use. Specific Indications.—"Bad blood"; to correct fluid depravation, with tendency to sepsis and malignancy, best shown in its power in gangrene, carbuncles, boils, sloughing and phagedenic ulcerations, and the various forms of septicemia; tendency to formation of multiple cellular abscesses of a semi-active character and with pronounced asthenia; foul discharges with emaciation and great debility; dirty-brownish tongue; jet-black tongue; dusky, bluish or purplish color of the skin or mucous tissues, with a low form of inflammation. It is of special value in typhoid states, in which it is indicated by the prominent typhoid symptoms—dry tongue, sordes on tongue and teeth, mental disturbances, tympanites and diarrheal discharges—and in malignant carbuncle, pyosalpinx, and thecal abscesses. Action.—The physiological action of echinacea has never been satisfactorily determined. It has been held to increase phagocytosis and to improve both leukopenia and hyperleucocytosis. That it stimulates and hastens the elimination of waste is certain, and that it possesses some antibacterial power seems more than probable. Upon the mucous tissues echinacea causes a quite persistent disagreeable tingling sensation somewhat allied to, but less severe than, that of prickly ash and aconite. It increases the salivary and the urinary flow, but sometimes under diseased conditions anuria results while it is being administered. In the doses usually given no decided unpleasant symptoms have been produced; and no reliable cases of fatal poisoning in human beings have been recorded from its use. Occasionally bursting headache, joint pains, dry tongue, reduced temperature and gastro-intestinal disturbances with diarrhea are said to have resulted from large doses of the drug. Therapy.—External. Echinacea is a local antiseptic, stimulant, deodorant, and anesthetic. Alcoholic preparations applied to denuded surfaces cause considerable burning discomfort, but as soon as the alcohol is evaporated a sense of comfort and lessening of previous pain is experienced. Its deodorant powers are remarkable, especially when applied to foul surfaces, carcinomatous ulcerations, fetid discharges from the ears, and in gangrene. While not wholly masking the odor of cancer and gangrene it reduces it greatly, much to the comfort of the sick and the attendants.
Echinacea is useful as an application where decay is imminent or taking place, reparative power is poor, and the discharges sanious and unhealthy. It is especially valuable in sluggish ulcers, bed sores, stinking tibial ulcers, and ulcers of the nasal mucosa, due either to ozaena or to syphilis. The greater the tendency to lifelessness and dissolution of the tissues and the more pronounced the fetid character of the discharges, the more applicable is echinacea. Used by spray it is effective to remove stench and to stimulate repair in tonsillitis, the angina of scarlatina, and though not alone capable of curing diphtheria, either by external or internal use, it stimulates the near-necrosed tissue to activity and overcomes the fetid odor, thus contributing in a large measure to aid more specific agents. A 10 to 50 per cent solution may be used to cleanse abscess cavities, to apply to ragged wounds from barbed wire, tin, and glass, wounds which for some reason are very painful and heal sluggishly. For this purpose we prefer ℞. Echafolta (or Echinacea), 1 fluidounce; Asepsin, 15 grains; Tincture of Myrrh, 2 fluidrachms; Sterile Water, enough to make 4 fluidounces. Mix. Apply upon sterile gauze, renewing at reasonable periods. This also makes a good mouth wash for foul breath and to remove odor and stimulate repair in pyorrhea alveolaris, spongy and bleeding gums, and aphthous and herpetic eruptions. Echinacea is sometimes of value in eczema, with glutinous, sticky exudation, and general body depravity; to give relief to pain and swelling in erysipelas, mammitis, orchitis, and epididymitis; to allay pain and lessen tumefaction in phlegmonous swellings; and to dress syphilitic phagedena. As a local application to chilblains it has done good service, and in poisoning by Rhus Toxicodendron is relied upon by many as one of the best of local medicines. We have found it especially useful in dermatitis venenata after denudation of the cuticle when ulcers form and the neighboring glands swell. Echinacea has a greater record for success than any single medicine for snake bites and insect bites and stings, and it may be used full strength to relieve the intolerable itching of urticaria. Some have asserted that it will abort boils. For the treatment of carbuncle, after thoroughly incising, a 50 per cent solution to full-strength echinacea or echafolta may be freely used, syringing the channels with it. This gives great relief from pain and insures a quicker recovery. For all the above-named purposes either echinacea or echafolta may be used: the latter is usually preferred where a cleanlier appearance is desired. Moreover, in most of the conditions named repair takes place much sooner and in better form if the remedy is given internally concomitantly with its external use. Internal. Echinacea is stimulant, tonic, depurative, and especially strongly antiseptic; it is in a lesser degree anesthetic and antiputrefactive. The necessity for remedies that possess a general antiseptic property and favor the elimination of caco-plastic material is most marked when one is treating diseases which show a depraved condition of the body and its fluids. Such a remedy for "blood depravation," if we may use that term, is echinacea.
No explanation of its action has ever been satisfactorily given, and that a simple drug should possess such varied and remarkable therapeutic forces and not be a poison itself is an enigma still to be solved, and one that must come as a novelty to those whose therapy is that of heroic medicines only. If there is any meaning in the term alterative it is expressed in the therapy of echinacea. For this very reason has a most excellent medicine been lauded extravagantly and come near to damnation through the extravagant praises of its admirers. Echinacea is a remedy for autoinfection, and where the blood stream becomes slowly infected either from within or without the body. Elimination is imperfect, the body tissues become altered, and there is developed within the fluids and tissues septic action with adynamia resulting in boils, carbuncles, cellular tissue inflammations, abscesses, and other septicaemic processes. It is, therefore, a drug indicated by the changes manifested in a disturbed balance of the fluids of the body resulting in tissue alteration: be the cause infectious by organisms, or devitalized morbid accumulations, or alterations in the blood itself. It is pre-eminently useful in the typhoid state, and many physicians administer it regardless of any other indication throughout enteric fever as an intercurrent remedy. Echinacea is especially to be thought of when there are gangrenous tendencies and sloughing of the soft tissues, as well as in glandular ulcerations and ulcers of the skin. It is not by any means a cure-all, but so important is its antiseptic action that we are inclined to rely largely on it as an auxiliary remedy in the more serious varieties of disease—even those showing a decided malignancy—hence its frequent selection in diphtheria, small-pox, scarlet fever, typhoid fever and typhoid pneumonia, cerebro-spinal meningitis, la grippe, uremia, and the surgical and serpent and insect infections. Foul smelling discharges are deodorized by it and the odor removed from foul smelling ulcers and carcinomata, processes not alone accomplished by its topical use but aided greatly by its internal exhibition. In puerperal fever, cholera infantum, ulcerated sore throat, nasal and other forms of catarrh and in eczema and erysipelas it fulfills important indications for antisepsis. Echinacea was introduced as a potent remedy for the bites of the rattlesnake and venomous insects. It was used both externally and internally. Within bounds the remedy has retained its reputation in these accidents, it probably having some power to control the virulence of the venom, or to enable the body to resist depression and pass the ordeal successfully; nevertheless fatalities have occurred in spite of its use. For ordinary stings and bites its internal as well as external use is advisable. In the acute infectious diseases echinacea has rendered great service. Throughout typhoid fever it may be given without special regard to stated periods, but wherever a drink of water is desired by the patient, from 5 to 10 drops of Specific Medicine Echinacea may be given in it. Having no toxic power, and acting as an intestinal antiseptic, this use of it is both rational and effective. Cases apparently go through an invasion of this disease with less complications and less depression when the drug is so employed. The same is true of it in typhoid, pneumonia, septicaemia, and other septic fevers.
It has the credit of regulating the general circulation, and particularly that of the meninges in the slow forms of cerebrospinal meningitis, with feeble, slow, or at least not accelerated pulse, temperature scarcely above normal, and cold extremities; with this is headache, a peculiar periodic flushing of the face and neck, dizziness, and profound prostration (Webster). It is evidently a capillary stimulant of power in this dreaded disease, in which few remedies have any saving effect. Echinacea has aided in the recovery of some cases of puerperal septicemia. Obviously other measures are also required. In non-malignant diphtheria, echinacea, both locally and internally, has appeared to hasten convalescence, but in the light of present day therapeutics it is folly to expect echinacea to cure the malignant type. A wide experience with the drug in such cases convinces us that we are leaning upon a slender reed when we trust alone to such medicines as echinacea and lobelia in malignant diphtheria. As many non-malignant cases tend to quick recovery, the use of good remedies like echinacea undoubtedly hastens the process. But to assume that it will cure every type of the disease because it succeeds in aiding the milder forms to recover is to bring a good medicine into unmerited discredit. Moreover, when these claims were originally made, and probably in good faith, there was no exact means of establishing the bacterial nature of the disease, hence many tonsillar disorders were called diphtheria. The latter were, of course, benefited by it, for in tonsillitis, particularly the necrotic form with stinking, dirty-looking ulcerations, it is an excellent remedy. Echinacea is said to be a good agent in a malignant form of quinsy known as "black tongue"; and in "mountain fever", closely allied to and often diagnosed as typhoid fever. Echinacea is justly valued in catarrhal conditions of the nasal and bronchial tracts, and in leucorrhoea, in all of which there is a run-down condition of the system with fetid discharge, and often associated with cutaneous eruptions, especially of an eczematous and strumous type. Chronic catarrhal bronchitis and fetid bronchitis are disorders in which it has been used with benefit, and it is said to ameliorate some of the unpleasant catarrhal complications of pulmonary tuberculosis, and particularly to render easier expectoration in that form known as "grinder's consumption". Patients suffering from common nasal and bronchial catarrhs have been greatly improved by echinacea when taking the drug for other disorders. Its stimulating, supporting and antiseptic properties would make echinacea a rational remedy for such disorders, particularly if debility and general tissue depravity were coexistent with the catarrh. As a rule echinacea is of little or no value in agues, yet physicians of malarial districts assert it is of benefit in chronic malaria when of an asthenic type. Altogether likely its value, if it has any, lies in the betterment of the asthenia, rather than to any effect it may have upon the protozoal cause of the disease. In so-called typho-malarial fever it does good just in proportion as the typhoid element affects the patient. Both it and quinine would be rational medication. Echinacea possesses no mean anti fermentative power, and by its local anaesthetic effect obtunds pain. When an offensive breath, due to gaseous eructation, and gastric pain are present, it proves a good medicine in fermentative dyspepsia. The symptoms are aggravated upon taking food. 
It is also serviceable in intestinal indigestion with pain and debility and unusually foul flatus, and has been recommended in duodenal catarrh. We can see no reason why it should not have some salutary effect in both gastric and duodenal ulcer, for it antagonizes putrefaction, tissue solution, and pain. In ulcerative stomatitis and nursing sore mouth, in both of which it is very effectual, it should be used both internally and locally. When dysentery, diarrhea, and cholera infantum occur in the debilitated and the excretions are more than commonly foul, both in odor and shreds of tissue, echinacea is a serviceable adjunct to other treatment. The dose of either specific medicine echinacea or echafolta ranges from 1 to 5 drops; larger doses (even 60 drops) may be employed, but small doses are generally most efficient if frequently repeated. They may be given in water or syrup, or a mixture of water and glycerin, as: ℞ Specific Medicine Echinacea, 1-2 fluidrachms;Water, to make 4 fluidounces. Mix. Sig.: Teaspoonful every ½ or 1 hour in acute cases; every 3 or 4 hours in chronic affections. If these preparations are to be dispensed in hot weather, or are to be used in fermentative gastro-intestinal disorders, the substitution of ½ ounce of pure glycerin for 1 fluidounce of the water is advisable. NOTE: Echafolta (now iodized) should be given internally only when iodine is not contraindicated, or is desirable. Formerly, before being iodized, it was used internally in the same manner and for the same purposes as Echinacea. The Echafolta should be reserved for external use. Echafolta Cream is an admirable form in which to use Echafolta, where an ointment is desired, being a useful unguent in the various skin disorders in which Echafolta or Echinacea is indicated.
The Oregon Department of Corrections’ own “Health Services” web page acknowledges that “state and federal laws have established that inmates are entitled to health care during incarceration. Health care services available to inmates must be comparable to health care provided in the community in order to meet the state’s legal obligation. This means that all types and levels of health care must be provided in a clinically appropriate manner by properly credentialed professionals in settings equipped and designed for the delivery of health care.” By these parameters health care, legally speaking, has to be considered a civil right where prisoners are concerned. Denial of appropriate care, therefore, can be challenged using 42 US Code 1983 – a key legal text concerning civil rights. 42 USC 1983 allows anyone who has been deprived of “any rights privileges or immunities secured by the Constitution and laws” to sue the person or institution which violated those rights in civil court. So if we take that acknowledgement by the state DOC as a starting point, the question must be asked: how can the agency defend the conduct alleged in the ACLU lawsuit? Specifically, the group charges that the state has denied its client’s repeated “requests for hormone treatment, despite an official diagnosis of gender dysphoria. The lawsuit also accuses state officials of placing (the plaintiff) in segregation or solitary confinement for weeks and sometimes months at a time,” the newspaper reports. When placed in a Disciplinary Segregation Unit following a suicide attempt earlier this year, “staff mocked her and called her a ‘freak’ and other vulgar names,” the suit alleges. A mental health professional who evaluated the woman on behalf of the DOC referred to her repeated requests for essential hormone treatments as “quality of life issues” according to The Oregonian, and repeatedly referred to the prisoner using male pronouns (the 25-year-old prisoner has publicly identified as female since the age of 16). As The Oregonian notes, the issues raised by the case are not unique. Indeed, “the lawsuit is the latest in a national wave of cases. Since August, transgender prisoners have filed similar lawsuits in Florida, Delaware, Missouri and Nebraska.” The paper adds that the federal government has supported similar claims in a suit filed in Georgia. In short, the idea that the state’s duty to provide appropriate and comprehensive medical services extends to hormone treatments for transgender prisoners is neither a new, nor a particularly shocking claim in legal terms. Though the lawsuit focuses on access to medical treatment, it seems to me, as a Portland attorney focusing on individual rights, that there are other issues also in play here. The treatment described by the ACLU, while secondary to their particular legal claim, appears to present a case for misconduct in and of itself. While prisoners are not a particularly sympathetic population in the public mind, the fact remains that they are entitled to professional and dignified treatment from their guards and from other prison staff. Failure to provide that treatment simply multiplies if it is allowed to go unchecked.
According to the Nice Classification, the heading for Class 31 states that Raw and unprocessed agricultural, aquacultural, horticultural and forestry products; Live animals; Fresh fruits and vegetables, fresh herbs; Natural plants and flowers; Foodstuffs and beverages for animals; Malt; Raw and unprocessed grains and seeds; and Bulbs, seedlings and seeds for planting are included in Class 31. The heading gives you a brief, broad overview of what exactly is included in a particular class. Class 31 includes mainly land products not having been subjected to any form of preparation for consumption, live animals and plants, as well as foodstuffs for animals.
This Class includes, in particular:
– raw woods;
– raw cereals;
– fertilised eggs for hatching;
– mollusca and crustacea (live).
This Class does not include, in particular:
– cultures of micro-organisms and leeches for medical purposes (Cl. 5);
– dietary supplements for animals (Cl. 5);
– semi-worked woods (Cl. 19);
– artificial fishing bait (Cl. 28);
– rice (Cl. 30);
– tobacco (Cl. 34).
The Unclaimed Property Professionals' Organization is an association of unclaimed property professionals and service providers "dedicated to advancing industry best practices for unclaimed property practitioners." Unlike other trade groups that focus on a particular profession or industry (e.g., manufacturers, banks, healthcare), the UPPO is a group of unclaimed property professionals (as the name suggests) across the whole spectrum of industries. The UPPO holds many events throughout the year, but the most popular is the organization's annual conference. This year's conference will be March 24-27 in San Diego, California. The Conference brings together professionals and experts in a wide variety of areas, and offers dozens of presentations, educational programs and industry-specific roundtables.* The Conference also offers separate learning tracks for beginner, intermediate and advanced unclaimed property professionals. There is something for everyone. Additional information can be found at the UPPO's conference page here. * Disclaimer: The author is a UPPO member and will be a speaker at the conference.
That includes HR. And when your people understand their purpose and connect with what they’re doing, they start going above and beyond. And you have the capacity to make better decisions, find and keep better talent, and overcome challenges.

Scorecard Development: Scorecards are great tools for communicating strategy throughout your organization and measuring your success as you execute your strategic plan. Because we understand the DNA of small and medium-sized businesses, we can apply tools like scorecards to your unique situation.

Assessments: Assessments help you find the right talent, understand and therefore manage that talent more effectively, and structure your organization to be in line with your business strategy. We use a comprehensive suite of best-in-class assessment products.

Benchmarking Jobs and Developing Key Accountabilities: Benchmarking clarifies the knowledge, intrinsic motivators, personal attributes, behaviors, and hard skills required for success in a specific position. After clarifying key accountabilities—that is, critical goals and key business successes the job is accountable for producing—and running a TriMetrix® assessment, we develop profiles for each position. Those profiles are then used as a benchmark to help coach current team members or evaluate candidates.

Performance Management Coaching: A performance management system enables you to measure and reward employee performance that produces the results you need. We coach and train your managers and supervisors to benchmark actual performance against established standards, conduct meaningful performance reviews using job-specific evaluation tools, and develop action plans that will help employees meet organizational goals.

Because we offer comprehensive human resources consulting services, we won’t leave you with a report and wish you the best. We will stand by you (literally) and coach your team through implementation of the practices we’ll help create. Take a step back, make a plan, and move forward with confidence.
In recent years, there has been an increased national focus on assessing and improving the quality of health care. This statement provides recommendations and criteria for assessment of the quality of primary care delivered to adolescents in the United States. Consistent implementation of American Academy of Pediatrics recommendations (periodicity of visits and confidentiality issues), renewed attention to professional quality-improvement activities (access and immunizations) and public education, and modification of existing quality-measurement activities to ensure that quality is delivered are proposed as strategies that would lead to improved care for youth. This statement provides recommendations and criteria for assessment of the quality of adolescent care and the need for comprehensive efforts to improve the quality of primary care delivered to adolescents in the United States. Because much of adolescent morbidity and mortality is preventable, this statement focuses on quality issues that relate to staying healthy—preventive care themes. Quality issues for acute, specialty, and hospital care needs, issues for children with special health care needs, and end-of-life issues, although important, are outside the scope of this review. As the federal government addresses increased attention to measurement and to development of quality indicators, better implementation of AAP guidelines,12 renewed attention to professional quality-improvement activities and public education, and modification of existing quality-measurement activities to ensure that quality care is delivered are proposed as strategies that will lead to better care for youth. Currently, even adolescents who receive health care often do not receive adequate preventive counseling, health promotion, or screening. Most physicians perform recommended preventive services at low rates, few adolescent visits are for preventive care, and many adolescent visits do not include health counseling or guidance.11,26–28 Moreover, nearly half of the visits between adolescents and their doctors do not include an opportunity for the teenager to talk privately with the physician. Almost 1 in 3 adolescent girls and 1 in 4 boys report having missed needed care, almost 4 of 10 girls report having been too embarrassed to talk about an issue with their physician, and fewer than half who thought they should talk about prevention of pregnancy or STIs had ever done so with their doctor.9,28 Thus, a substantial proportion of visits could not have provided confidential counseling or screening for preventable risky behaviors. For health services to meet adolescents' needs, they should meet criteria for both the system of health service delivery and the specific services provided.7,33 Systems factors affect or facilitate adolescents actually receiving services. They are not services themselves but, rather, form the infrastructure of service delivery. These factors include health service organization and financing as well as various domains of access, including availability, affordability, confidentiality, visibility, convenience, flexibility, and coordination.7 In contrast, services are a measure of the therapeutic interactions received and reflect service capacity, content, and utilization. Services variables also include quality. Availability: age-appropriate services and trained clinicians must be available in all communities. 
Visibility: health services for adolescents must be recognizable and convenient and should not require complex planning by adolescents or their parents. Quality: a basic level of service must be provided to all youth, and adolescents should be satisfied with the care they receive. Confidentiality: adolescents should be encouraged to involve their families in health decisions, but confidentiality must be ensured. Affordability: public and private insurance programs must provide adolescents with preventive and other services designed to promote healthy behaviors and decrease morbidity and mortality. Flexibility: services, clinicians, and delivery sites must cater to developmental, cultural, ethnic, and social diversity among adolescents. Coordination: service providers must ensure that comprehensive services are available to adolescents. The specific services that should be provided to adolescents are summarized by the AAP in its recommendations for clinical preventive services35 and in Bright Futures: Guidelines for Health Supervision of Infants, Children, and Adolescents,12 which recommend comprehensive preventive counseling and screening services, including annual preventive health care visits for adolescents between 11 and 21 years of age. Such visits should include confidential screening (through trigger questionnaires, clinical interviews, or other means), early identification, appropriate preventive care interventions, and referrals for behavioral, emotional, and medical risk; education and counseling on behavioral, emotional, and medical risks to health; and recommended immunizations. Ideally, these health services should be provided in the context of a medical home that provides coordinated care for youth and their families. When school-based health clinics serve as medical homes and provide primary care, they should be expected to meet similar criteria for the quality of the care they provide. In contrast, sports physicals conducted in schools, especially station-style examinations, undermine the primary care relationship and are unlikely to provide quality comprehensive care. Thus, school or other policies should not encourage supplanting routine well visits or the primary care relationship with sports physicals. In addition, forms used by schools and athletic teams for preparticipation sports examinations should incorporate preventive health assessment tools into their content. The Child and Adolescent Health Measurement Initiative (CAHMI) was established in 1998 by the Foundation for Accountability to provide leadership and resources for quality of health care for children and adolescents. The CAHMI collaboration includes the NCQA, the AAP, Children Now, the CDC, the AHRQ, the MCHB, and others.36 The CAHMI has developed and studied 3 measures for children and youth services quality: a child development measure for children between 0 and 48 months of age; a measure for identification of children with chronic illness or special health needs; and an adolescent preventive care measure, called the Young Adult Health Care Survey (YAHCS),29 for teenagers 14 to 18 years of age, which assesses whether adolescents are receiving recommended health care services. This YAHCS has also been endorsed by the National Quality Forum. The YAHCS items ask adolescents directly about the health care they received in the previous 12 months. 
In fact, because many of the discussions during adolescents' visits are conducted privately between adolescents and their clinicians, adolescents are likely to be a better source of some kinds of information than either their parents or their charts. The 7 YAHCS quality measures address key aspects of recommended preventive care, including (1) screening and counseling for risky behavior (smoking, alcohol use, violence, and guns); (2) screening and counseling for sexual activity and STIs and pregnancy prevention; (3) screening and counseling for mental health and depression; (4) promotion of healthy lifestyle issues (diet, weight, and exercise); (5) private and confidential health care; (6) perceived helpfulness and effectiveness of visits; and (7) adolescents' rating of their clinician's communication and an overall rating of care. The YAHCS also asks about adolescents' health care use, health status, and participation in risky behaviors, because this information can be helpful in assessing whether an adolescent's needs are being met. The YAHCS is also aligned with the AAP preventive care policies as well as with guidelines from the American Medical Association, American Academy of Family Physicians, MCHB, and Healthy People 2010. Average preventive counseling and screening scores from the YAHCS range from 18.2% for discussing risky behavior topics to 50.4% for discussing diet, weight, and exercise topics.29 The YAHCS can be used to bridge efforts to measure the performance of health care plans and clinicians, target and improve health care quality, and assess and improve public health. Health plan accreditation and quality assessment, state policies and surveillance systems, and tracking of quality and disparities by the AHRQ have few measures for adolescent care. The AAP is concerned that the gaps in the proposed measure set reflect inconsistent measurement development and will fail to document quality in important domains such as health status and outcomes of care. Thus, it is critical that the gaps in reporting be seen as mandates for improved measure specification and data collection and not as a de facto standard in our expectations for future reporting. Better implementation of AAP policies, renewed attention to professional quality-improvement activities and public education, and modification of existing quality-measurement activities to ensure that quality care is delivered are proposed as strategies that would lead to better care for youth. The system of primary care for adolescents in the United States is changing along with broader changes in the content, organization, and financing of all health services. These changing patterns in the organization of health care may both improve and hinder the care received by adolescents. Similarly, changes in the science of medicine, as well as in technology both in and out of health care may have significant implications for health care delivery to children and their families. The growth of large, integrated health care delivery systems may lead to greater community orientation and more explicit consideration of adolescents' needs. On the other hand, consolidation of services may lead to fewer opportunities and may not result in greater attention to the quality of care delivered or studies of prevention or treatment effectiveness. Large systems may threaten the quality of health care for children and adolescents. If service delivery systems are not appropriately designed for them, adolescents' ability to use health care may suffer. 
Regulations of the Health Insurance Portability and Accountability Act (HIPAA) allow states with permissive confidentiality policies to continue them. However, the HIPAA is also expected to make confidential care more difficult to deliver in some areas. Some clinicians may interpret and view HIPAA regulations as restrictive barriers to delivering preventive health care services to adolescents rather than as protective of confidential care. A focus on costs may erode support for many services. Primary care clinicians may have less opportunity to provide anticipatory guidance, behavioral assessment and interventions, or health promotion and disease-prevention counseling. Adolescents are often unable to anticipate or plan for their needs. Thus, to serve adolescents appropriately, services must be available in a wide range of health care settings, including community-based adolescent health, family planning, and public health clinics; school-based and school-linked health clinics; physicians' offices and physicians' offices affiliated with health maintenance organizations; health maintenance organizations; and hospitals. Without multiple entry points and a diversity of care resources, adolescents are less likely to connect with the appropriate care resources. Computer technology and the Internet have affected the practice of medicine in the method and speed of access to information and in the nature of communication among physicians, patients, and other members of the health care team. These technological advances provide opportunities for distance education and support for patients. However, the media and the Internet also may lead to misinformation for physicians and patients. Many consumers have difficulty critically appraising health-related information. The education of primary care clinicians must include training in the informatics of health care and the potential promise and problems inherent in technological change. Coordinated efforts to address disparities in quality should be part of the quality agenda for adolescents' health. This must include measures and surveillance that can identify disparities based on age as well as sensitivity to cultural differences in interpretation and performance of quality measures. However, there are concerns about both the relevancy and appropriateness of the measure set proposed by AHRQ in tracking quality and disparities for pediatric and adolescent health care. Overreliance on clinical or administrative data will fail to document quality in important domains such as health status and outcomes of care. In addition, if the national initiatives simply report on available data, they may fail to truly address quality and may lead clinicians and others to focus their attention on what is now measured rather than on what is truly important in improving health care. The IOM refers to the discrepancy between the health care that Americans receive and the health care that Americans should receive as a “quality chasm.” Adolescents, although traditionally thought of as healthy, are not exempt from this problem. Adolescents have unique health care needs that are not always addressed, and young people often face significant barriers to obtaining needed health care, including lack of insurance, financial difficulty, and lack of (or perceived lack of) confidentiality. 
Most adolescent morbidity and mortality is attributable to preventable risk factors, and AAP guidelines for quality adolescent health care include screening and counseling to promote healthy behaviors and prevent risky behaviors and for the provision of confidential care. The AAP believes that it is possible to raise awareness about these issues and ensure that primary care for children and adolescents provides comprehensive service packages and sufficient support to allow clinicians to identify and coordinate services for the common biomedical, behavioral, and educational problems of children. Public policy must help support improvements in our health care system so that more children and adolescents receive quality care. Employer-sponsored insurance often leaves uncovered some of the services, such as reproductive health or mental health services, that adolescents need the most. Public insurance programs, including Medicaid and the SCHIP, provide an opportunity to increase the number of children with insurance coverage. The first challenge for these programs, as has been the case for Medicaid, is to enroll eligible children and adolescents. However, as the SCHIP expands insurance coverage to a greater proportion of poor and near-poor youth, understanding and addressing the nonfinancial factors that affect access and quality of care become increasingly important. To be effective, these programs must address the reasons that adolescents miss needed care, such as lack of confidentiality or the ability to choose clinicians who are geographically and culturally accessible. Meaningful measures that assess the quality of primary care have been developed but have been slow to enter the field, with actual use in the health care system itself far from optimal. Child and adolescent health has unique characteristics that differentiate it from adult health and require the development of specific measures. First, children's growth is rapid and presents challenges that often require distinct measures for different age groups. In addition, children have different patterns of health, illness, and disability. They have fewer chronic conditions than adults do; thus, quality measurement for children with chronic illness requires noncategorical approaches to assessment. Children also depend on adults for access to care, adherence to recommended treatments, and continuity of care. Quality-measurement and -improvement initiatives need to be developed to specifically address the transition of care from adolescence into adulthood. As adolescents assume responsibility for their own health behaviors, the importance of confidential screening and counseling requires clinicians to derive information directly from youth. Improving the health of children and adolescents is a quality-of-care issue, a professional education issue, and a personal and family responsibility issue. National and community solutions and coordinated efforts are needed to improve health care systems and improve the quality of preventive health care delivered to youth; to help promote improvements in quality through support of professional and consumer education campaigns; and to support quality-improvement initiatives in states, managed care plans, and communities. Families have a special role to play in advocating for their teenagers' health. 
Most parents or guardians want a professional they trust, such as their pediatrician, to promote healthy, responsible behavior and provide accurate information about health risks so that youth at risk can be identified and offered appropriate help. Thus, every adolescent's parent or guardian should be supportive of ensuring that their teenager has private, confidential time during their health care encounters so that important, preventable issues are addressed. There are few current federal initiatives to improve care for adolescents. The MCHB funds the Office of Adolescent Health, interdisciplinary adolescent health training programs, and implementation of comprehensive preventive care guidelines. In addition, the Bureau of Primary Care, the CDC, and some states have supported adolescent prevention services quality-improvement initiatives. However, concerted and sustained federal and state efforts will be needed to ensure quality services for most of our nation's youth. Public health surveillance and health care quality-assurance activities should use measures that assess adolescents' experiences with care, ensuring that confidential counseling opportunities are provided (rather than by relying on parental report). Use of adolescent self-report to assess the content of primary care delivered to youth via managed care quality assurance and public health surveillance systems has the potential to improve the quality of adolescent care. All children and adolescents should receive comprehensive, confidential (as appropriate) primary care as recommended by AAP guidelines,12 including screening, counseling, and physical and laboratory evaluations. All children and adolescents should be covered by health insurance that provides benefits and care in accordance with AAP guidelines12 and that provides coverage and access to pediatric specialists for care identified as medically necessary during recommended screening and health supervision visits. State governments should ensure that adolescent confidentiality is preserved and/or protected as HIPAA regulations and electronic health records undergo implementation. Private-sector and government payers should develop policies and contract standards to promote access to adolescent care and availability of confidential services for adolescents and should provide other incentives for delivery of high-quality care to adolescents. Public education should help parents and other consumers understand what constitutes high-quality adolescent primary care so that consumers can be better advocates for confidential and private screening and counseling in settings they trust to help keep their children healthy. Pediatricians and other adolescent health care clinicians should be provided professional education about effective strategies for delivery of high-quality adolescent primary care.
» The FAO Food Price Index* (FFPI) averaged 176.2 points in May 2018, up 2.2 points (1.2 percent) from its April level and hitting its highest level since October 2017. The increase in May reflected a continued steep rise in dairy price quotations, while those of cereals also rose, albeit at a slower pace. By contrast, vegetable oil and sugar markets remained under downward pressure, whereas meat values changed little.

» The FAO Cereal Price Index averaged 172.9 points in May, 4.1 points (2.4 percent) above its April level. The index has continued on an upward path since the start of this year, standing in May at almost 17 percent above its corresponding value a year ago and reaching its highest level since January 2015. International prices of all major cereals have strengthened considerably in recent months, and in May wheat values gained largely on concerns over production prospects in a number of major exporting countries. International prices of leading coarse grains also rose, mostly due to deteriorating production prospects in Argentina and Brazil. Sizable purchases by Southeast Asian buyers kept international rice prices firm in May, notwithstanding weaker currencies of some top exporting countries and soft demand for aromatic and parboiled rice.

» The FAO Vegetable Oil Price Index averaged 150.6 points in May, down by 4 points (2.6 percent) month-on-month, marking a fourth consecutive decline and a 27-month low. The slide mainly reflects weakening values of palm, soy and sunflower oils, whereas rapeseed oil prices rebounded from April’s multi-month low. As for palm oil, despite prospective production slowdowns in Southeast Asia, international prices fell due to sluggish global import demand and large inventories compared to last year. In the case of soy oil, ample supplies and stocks resulting from meal-driven crushing continued to weigh on world prices. The rise in rapeseed oil prices mainly reflected concerns about unfavourable weather conditions affecting the 2018/19 crop in parts of Europe.

» The FAO Dairy Price Index averaged 215.2 points in May, up 11 points (5.5 percent) from April, marking the fourth consecutive monthly rise. The index value stood 11.5 percent higher than in May 2017, yet still 22 percent below the peak reached in February 2014. The rise in May was mainly driven by sizeable increases in the price quotations of cheese, Skim Milk Powder (SMP) and butter, as those of Whole Milk Powder (WMP) were virtually unchanged. Tight supplies in New Zealand, the leading exporter of dairy products, are largely behind the market firmness witnessed in recent months.

» The FAO Meat Price Index averaged 169.6 points in May, marginally lower than in April. The small decline in the index in May reflected the easing of pig meat and ovine meat prices, while those of poultry meat rose slightly. International price quotations for pig meat and ovine meat weakened, on lower imports by China in the case of pig meat and on a stronger US dollar for ovine meat. While poultry prices are estimated to have increased slightly, poultry markets became difficult to monitor in recent weeks because of the uncertainty surrounding the situation in Brazil, the world’s largest poultry exporter, where millions of birds were reported culled following a prolonged truckers’ strike in May. Bovine meat prices remained steady on a generally well-balanced market situation.

» The FAO Sugar Price Index averaged 175.3 points in May, down slightly (0.5 percent) from April, marking the sixth consecutive monthly decline. The latest decrease in international sugar prices mostly reflects expectations of a large sugarcane output as a result of the favourable harvesting conditions that prevail in the Centre-South region of Brazil, the world’s largest sugar producer and exporter. Concerns over a prolonged dryness affecting cane yields in some parts of that region were not strong enough to reverse the market trend. Likewise, reports that Brazilian mills continued to favour ethanol production over sugar, with only about 37 percent of the sugarcane harvest directed to the production of the sweetener, failed to provide enough support for sugar prices to increase.
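As a quick, purely illustrative check of the month-on-month figures quoted above, the short Python sketch below recomputes each percentage change from the reported point change, taking the April level to be the May level minus that change. The April values derived this way are assumptions implied by the text rather than published figures, and because the published numbers are rounded to one decimal place, small differences from the quoted percentages are expected.

    # Sanity check of month-on-month changes in the May 2018 FAO indices.
    # Each entry: (May level in points, reported change from April in points).
    indices = {
        "Food Price Index": (176.2, +2.2),
        "Cereals":          (172.9, +4.1),
        "Vegetable oils":   (150.6, -4.0),
        "Dairy":            (215.2, +11.0),
    }
    for name, (may, change) in indices.items():
        april = may - change          # implied April level (assumption, not a published value)
        pct = 100.0 * change / april  # month-on-month change in percent
        print(f"{name}: April ~{april:.1f}, May {may:.1f}, {change:+.1f} pts ({pct:+.1f}%)")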
Rental - warehouses and buildings up to 8000 m2: Logistics. Office and warehouse spaces are available, ranging in size from 20 to 8,800 square meters. The size of the space can be adjusted flexibly. There is a ramp or bridge, and the spaces are heated. Pricing is indicative, and the amount of rent depends on the size of the warehouse and the terms of the lease.
(Anatomy) The vestibular part of the acoustic (8th cranial) nerve. The vestibular nerve fibres arise from neurones of Scarpa's ganglion and project peripherally to vestibular hair cells and centrally to the vestibular nuclei of the brainstem. These fibres mediate the sense of balance and head position.
Upon arrival, transfer to your 4* central hotel for a 2-night stay. Make the ultimate rock ‘n’ roll pilgrimage to Graceland – the home to the one and only Elvis. Transfer to your French Quarter hotel in New Orleans for a 3-night stay. Soar through the fascinating history of New Orleans and discover one of the most diverse and interesting cities of the United States. This tour invites you to climb aboard the traditional steamboat Natchez, the very last steamboat of its kind in New Orleans. Spend your time touring the Garden District, hitting a local jazz club or dining at the Commander’s Palace. Transfer to the airport for your flight to Miami. Upon arrival, transfer to your Miami Beach hotel for an overnight stay. Transfer to the port and embark MSC Seaside. The former fishing village of Ocho Rios has been an important cruise port since the mid-80s: its international food, eventful nightlife and renowned spas are all packaged within the gorgeous aesthetic of the Caribbean. Explore a seemingly endless stretch of magnificent coastline at Grand Cayman's world-famous Seven-Mile Beach. Cozumel has managed to resist becoming ‘just another’ cruise port. It oozes authenticity: its people emit a natural Caribbean energy, religious shrines are everywhere and many of the quaint coastal roads lead to captivating Maya ruins. Nassau is the capital, largest city, and commercial centre of the Commonwealth of the Bahamas. It is famed for its beautiful beaches and turquoise ocean. Disembark your ship and transfer to the airport for your flight to the UK. Graceland, the ultimate rock 'n' roll pilgrimage, will take you on an unforgettable journey that showcases why Elvis is still the king. Step inside Graceland Mansion and follow in the same steps as Elvis himself as you enjoy an audio-guided tour featuring commentary and stories by Elvis and his daughter Lisa Marie. See where Elvis lived, relaxed and spent time with his friends and family. Visit Elvis' two custom airplanes, the Automobile Museum and four different exhibits on the Platinum Tour. Renowned as the birthplace of blues, soul and rock ‘n’ roll, Memphis’s influence on modern pop culture cannot be overstated. Enjoy the beautifully rugged scenery, which once served as the smoky backdrop for artists including Johnny Cash and Elvis as they recorded their career-defining records. To many, Miami is the definition of glamour: her streets filled with celebrities and models, her shopping and nightlife scenes that of legend, and her brooding blood-orange sunsets are a delight to behold. MSC Seaside rewrites the rule book of cruise ship design, combining indoor and outdoor areas to connect you with the sea like never before. Located as low as deck 8 is a unique seafront promenade lined with places to eat, drink, shop, swim and sunbathe. And you can enjoy superb views from the two glass-floored catwalks and panoramic lifts.
Matrix questions allow respondents to enter answers for multiple categories along a scale of options. This question type is useful for customer satisfaction and performance ratings. While in the Content step of a survey, click a question to edit it. Select "Matrix" as the type, and enter your question in the Question Content area. Use the input field box under Rows to enter the categories for the matrix question (these are the items that will be listed down the page – such as Quality, Price, Service). Use the input field box under Columns to enter the titles for the scale (these are the options that will be listed across the page – such as Excellent, Good, Poor). Click the Add Row or Add Column text link to add additional categories or scale options. Click the Delete text link to remove a category or scale option. Click and drag the six-dot icon to reorder. Check the box next to Hide to remove an answer choice from view but retain responses already recorded. Use the If chosen, go to: drop-down menu for surveys with multiple paths, so you can redirect respondents to a specific page based on their answer to this question.
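The workflow above maps naturally onto a simple data structure: a list of row categories, a list of column scale options, a set of hidden rows whose recorded responses are retained, and a per-answer branching target. The sketch below is a hypothetical illustration only – the class name MatrixQuestion, its fields, and its methods are invented here for the example and do not correspond to any particular survey tool's API.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MatrixQuestion:
    """Hypothetical model of a matrix question: rows are the categories
    listed down the page, columns are the scale options listed across it."""
    content: str
    rows: List[str] = field(default_factory=list)         # e.g. Quality, Price, Service
    columns: List[str] = field(default_factory=list)      # e.g. Excellent, Good, Poor
    hidden_rows: List[str] = field(default_factory=list)  # hidden from view, responses kept
    branching: Dict[str, str] = field(default_factory=dict)  # answer -> page ("If chosen, go to")

    def add_row(self, label: str) -> None:
        self.rows.append(label)

    def add_column(self, label: str) -> None:
        self.columns.append(label)

    def hide_row(self, label: str) -> None:
        # Remove a category from view while keeping any responses recorded for it.
        if label in self.rows:
            self.rows.remove(label)
            self.hidden_rows.append(label)


# Example: a customer-satisfaction matrix with a branch for "Poor" answers.
q = MatrixQuestion(content="How would you rate the following?",
                   rows=["Quality", "Price", "Service"],
                   columns=["Excellent", "Good", "Poor"])
q.add_column("Very Poor")
q.branching["Poor"] = "follow-up-page"  # hypothetical page identifier
q.hide_row("Price")
print(q.rows, q.hidden_rows, q.columns)
```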
When Jack and Annie got back from their adventure in Magic Tree House Merlin Mission #14: A Good Night for Ghosts, they had lots of questions. What are some of the most famous ghost stories? Why do people believe in ghosts? Do most cultures have ghost stories? What are ghost hunters? Find out the answers to these questions and more as Jack and Annie track the facts.
John Hansard Gallery announces it will officially open in its new purpose-built home at Studio 144 in Southampton City Centre on 12 May 2018. The Gallery, which is part of the University of Southampton and one of the UK’s leading contemporary art galleries, reopens with a headline exhibition of works by Gerhard Richter from the ARTIST ROOMS collection. This exhibition celebrates the huge breadth and impact of the work of one of the world’s most significant contemporary artists.

In advance of the official opening of John Hansard Gallery, Studio 144 will launch with Southampton Celebrates on 16 February 2018 in collaboration with Southampton City Council, Nuffield Southampton Theatres and City Eye. From 16 to 24 February 2018 John Hansard Gallery will present a dynamic sampler programme of activity featuring artists Rhona Byrne, Rob Crosse, Hetain Patel, Sam Laughlin, Elaine Mitchener and participative public projects including Conversation Station with artist collective Stair/Slide/Space, and Department for Doing Nothing, in partnership with Southampton Youth Offending Service, Compass School and InFocus Training Ltd. The opening of Studio 144, Southampton’s ambitious new venue for theatre, visual art and film, completes the city’s Cultural Quarter and forms the next step in Southampton’s cultural regeneration.

ARTIST ROOMS: Gerhard Richter begins John Hansard Gallery’s much-anticipated year-round programme that will showcase leading international artists who are under-exposed in the UK and provide an international platform for artists working in the UK within a global context. As well as annual keynote exhibitions, the Gallery will present a series of overlapping and parallel exhibitions, events, education and research projects to excite, challenge, represent and reach the widest possible public audience. John Hansard Gallery’s programme in Studio 144 will highlight the rich diversity of contemporary art, from emerging talents to celebrated international figures, from solo projects to historical surveys, and from painting, sculpture and photography to film, performance, installations and digital media.

The long-awaited move, from the Gallery’s historic home at the University of Southampton’s Highfield Campus to the new purpose-built Studio 144 in Southampton’s Cultural Quarter, triples the space available for public programming, community-focused projects and active learning opportunities. The city-centre Gallery will dramatically increase opportunities for the public to experience and be inspired by great art, as well as for creative collaboration with its new cultural neighbours in Studio 144 – City Eye and Nuffield Southampton Theatres. John Hansard Gallery’s new Director Woodrow Kernohan was appointed earlier this year. Previously Director/CEO of EVA International – Ireland’s biennial of contemporary art, Kernohan will lead the Gallery into an exciting new phase.

“I am excited to welcome John Hansard Gallery to the heart of our city’s cultural quarter. Southampton is on a journey to becoming the cultural destination in the region, and the opening of Studio 144 is set to put us on the map. With the creative industries being one of the biggest contributors to the UK’s economy, I am pleased Studio 144 will be creating hundreds of local jobs and attracting thousands to our city every week.

“The opening of Studio 144 and relocation of John Hansard Gallery is a moment of unprecedented opportunity for Southampton and the region.
We are delighted to support John Hansard Gallery through our National Portfolio funding programme. Their curatorial work is exceptional, showcasing some of the most incredible artists of our generation, and they are an amazing platform for young and emerging artists. The opening exhibition – part of ARTIST ROOMS – demonstrates the Gallery’s capacity to draw artists of national acclaim to the region.

John Hansard Gallery is part of the University of Southampton and funded by Arts Council England. The development of Studio 144 is led by Southampton City Council and supported by the National Lottery through Arts Council England, in partnership with Grosvenor Britain and Ireland Developments Limited. The Studio 144 arts venue comprises around 75,000 sq. ft. of stunning gallery, performing arts and film/media studio space across two iconic buildings, as part of a mixed-use development.
We are CoversDirect® – an online retailer of protective covers for cars, boats, RVs, PWCs, motorcycles, and other vehicles. Located in Chesnee, SC, we are a small company with about 10 employees. CoversDirect® sells only the finest RV covers available, by Carver, ADCO, and Covercraft. In addition to offering the best RV covers, we also have accessories such as covers for tires, AC units, propane tanks, and windshields. Our company believes in customer satisfaction above all else, and if there is any way in which we can help you, please don’t hesitate to call, email, or chat with us. Whether you need help finding a cover, need assistance with returning an item, or just have general questions, we will always be happy to help you.
Easter Adventure Days – 15th, 16th and 17th April. Outdoor adventure days for primary school age children. £20 per day (£18 per day if you book more than one day). From den building to minibeast hunting to camp fires – come and have an adventure. Drop off at 9.30 and pick up at 3.30! Contact beaconfelladventures@gmail.co.uk to book on or enquire. Sunday 25th April – come and decorate an egg and roll your eggs on the fell. Other crafts and sculpture available to do on the day. June – date to be confirmed. Games, stories and watching the sun set on the fell.
[Photo: Kurt Cobain of Nirvana recording in Hilversum Studios, the Netherlands, playing a Takamine acoustic guitar. His suicide was one of the most rattling deaths of the 90s.]

They say that everything old becomes new again, and the saying has never been more obvious than in the past few years, as we have seen grunge, 90s TV shows, and scrunchies come back with a vengeance. Could it be that our generation, with the majority of us born in the mid-to-late 90s, is in love with the decade we never got to experience fully? Yes, the early 2000s were very 90s-esque with the fashion trends, and most of us have likely caught all the reruns of Friends and Fresh Prince of Bel-Air, but we weren’t immersed in it. We didn’t get to watch these shows as they became iconic, or get to witness 90s bands in concert, or the shock of Kurt Cobain’s death. It was a decade worth experiencing, and we missed it. But we love it nonetheless.

From the velvet dresses, denim overalls, and plaid skirts to the combat boots, ripped jeans, and button-ups over graphic T-shirts – even the somewhat questionable trends like frosted tips and baggy clothes – we are infatuated.

Life and society have changed enormously since the 90s. We went from pagers and payphones to touchscreen cell phones (and touchscreen everything). We have drones that deliver packages and cars that drive themselves. You don’t have to leave your house to get food other than pizza, because there are multiple phone apps that will deliver anything you want. We have music, movies, and games at our fingertips with iPods, tablets, and cell phone apps, rather than braving Blockbuster, relying on the radio, or going to an arcade.

In the 90s, we had Boy Meets World and Buffy the Vampire Slayer; today, we have Girl Meets World and The Walking Dead. We had Rugrats, Teenage Mutant Ninja Turtles, and Dexter’s Laboratory; now we have Monster High, Bob’s Burgers, and Gravity Falls. There’s no doubt just about everything has changed, but we all still love the stories told through slightly grainy film, the iconic fashion trends, and the nostalgia that goes with it all. We may have been unable to experience everything the 90s had to offer, but we are still the last generation of 90s kids, and proud of it.