the probability of this bad history is bounded as in Lemma 3. Theorem 1 then follows by choosing the remaining constants appropriately. ∎ Theorem 1 shows that, when $T$ is large enough, the NPB procedure used in previous work Eckles and Kaptein (2014); Tang et al. (2015); McNellis et al. (2017) incurs an expected cumulative regret arbitrarily close to a linear regret, i.e., of the order of $T$. It is straightforward to prove a variant of this lower bound with any constant (in terms of $T$) number of pseudo-examples. Next, we show that NPB with appropriate forced exploration can result in sub-linear regret.

3.3 Forced Exploration

In this subsection, we show that NPB, when coupled with an appropriate amount of forced exploration, can result in sub-linear regret in the Bernoulli bandit setting. In order to force exploration, we pull each arm a prescribed number of times before starting Algorithm 1. The following theorem shows that, for an appropriate amount of forced exploration, this strategy results in an upper bound on the regret.

Theorem 2. In any $K$-armed bandit setting, if each arm is initially pulled a prescribed number of times before starting Algorithm 1, then $\mathbb{E}[R(T)] = O(T^{2/3})$.

Proof. The claim is proved in Appendix B based on the following observation: if the gap of the suboptimal arm is large, the prescribed exploration steps are sufficient to guarantee that the bootstrap sample of the optimal arm is higher than that of the suboptimal arm with high probability at any round $t$. On the other hand, if the gap of the suboptimal arm is small, no algorithm can have high regret. ∎

Although we can remedy the NPB procedure using this strategy, it results in a sub-optimal regret bound. In the next section, we consider a weighted bootstrapping approach as an alternative to NPB.

4 Weighted Bootstrapping

In this section, we propose weighted bootstrapping (WB) as an alternative to the non-parametric bootstrap. We first describe the weighted bootstrapping procedure in Section 4.1. For the bandit setting with Bernoulli rewards, we show the mathematical equivalence between WB and TS, hence proving that WB attains near-optimal regret (Section 4.2).

4.1 Procedure

In order to formulate the bootstrapped log-likelihood, we use a random transformation of the labels in the corresponding log-likelihood function. First, consider the case of Bernoulli observations, where the labels $y_i \in \{0,1\}$. In this case, the log-likelihood function is given by:

$L(\theta) = \sum_{i \in D_j} y_i \log\big(g(\langle x_i, \theta \rangle)\big) + (1 - y_i) \log\big(1 - g(\langle x_i, \theta \rangle)\big)$

where $g(\cdot)$ is the inverse-link function. For each observation $i$, we sample a random weight $w_i$ from an exponential distribution; specifically, $w_i \sim \mathrm{Exp}(1)$ for all $i$. We use the following transformation of the labels: $y_i \mapsto w_i \cdot y_i$ and $(1 - y_i) \mapsto w_i \cdot (1 - y_i)$. Since we transform the labels by multiplying them with exponential weights, we refer to this case as WB with multiplicative exponential weights. Observe that this transformation procedure extends the domain of the labels from values in $\{0,1\}$ to those in $[0, \infty)$ and does not result in a valid probability mass function. However, below, we describe several advantages of using this transformation. Given this transformation, the bootstrapped log-likelihood function is defined as:

$\tilde{L}(\theta) = \sum_{i \in D_j} w_i \, \ell_i(\theta)$ (4)

Here $\ell_i(\theta)$ is the log-likelihood of observing point $(x_i, y_i)$. As before, the bootstrap sample is computed as $\tilde{\theta} = \arg\max_{\theta} \tilde{L}(\theta)$. Note that in WB, the randomness for bootstrapping is induced by the weights $w_i$ and that $\mathbb{E}[w_i] = 1$.
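To make the procedure concrete, the following Python sketch (illustrative only, not the paper's code; it assumes a logistic inverse-link $g$ and uses NumPy/SciPy, with hypothetical function names) draws one WB sample $\tilde{\theta}$ by solving the weighted maximum-likelihood problem in equation 4 with multiplicative exponential weights:

```python
# Minimal sketch of weighted bootstrapping (WB) for Bernoulli observations.
# Assumptions: logistic inverse-link g, Exp(1) weights; names are illustrative.
import numpy as np
from scipy.optimize import minimize


def wb_sample(X, y, rng):
    """Draw one bootstrap sample of theta by weighted MLE (equation 4)."""
    n, d = X.shape
    w = rng.exponential(scale=1.0, size=n)  # w_i ~ Exp(1), so E[w_i] = 1

    def neg_log_lik(theta):
        z = X @ theta
        log_g = -np.logaddexp(0.0, -z)   # log g(z) for the logistic link
        log_1mg = -np.logaddexp(0.0, z)  # log(1 - g(z)), numerically stable
        # Multiplying y_i and (1 - y_i) by w_i re-weights each point's
        # log-likelihood by w_i, exactly as in equation 4.
        return -np.sum(w * y * log_g + w * (1.0 - y) * log_1mg)

    res = minimize(neg_log_lik, x0=np.zeros(d), method="L-BFGS-B")
    return res.x


rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (rng.random(100) < 0.5).astype(float)
theta_tilde = wb_sample(X, y, rng)  # one randomized estimate of theta
```

Re-drawing the weights $w_i$ and re-solving yields a fresh bootstrap sample, which is what the bandit algorithm would do at each round.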
As a special case, in the absence of features, i.e., when $x_i = 1$ for all $i$, assuming $\alpha_0$ positive and $\beta_0$ negative pseudo-counts and denoting the number of observations by $n$, we obtain the following closed-form expression for computing the bootstrap sample:

$\tilde{\theta} = \dfrac{\sum_{i=1}^{n} w_i \cdot y_i + \sum_{i=1}^{\alpha_0} w_i}{\sum_{i=1}^{n + \alpha_0 + \beta_0} w_i}$ (5)

Using the above transformation has the following advantages: (i) Using equation 4, we can interpret WB as a random re-weighting (by the weights $w_i$) of the observations. This formulation is equivalent to the weighted likelihood bootstrapping procedure proposed, and proven to be asymptotically consistent in the offline case, in Newton and Raftery (1994). (ii) From an implementation perspective, computing $\tilde{\theta}$ involves solving a weighted maximum likelihood estimation problem. It thus has the same computational complexity as NPB and can be solved by using black-box optimization routines. (iii) In the next section, we show that using WB with multiplicative exponential weights has good theoretical properties in the bandit setting. Furthermore, such a procedure of randomly transforming the labels lends itself naturally to the Gaussian case, and in Appendix C.2.1, we show that WB with an additive transformation using Gaussian weights is equivalent to TS.

4.2 Equivalence to Thompson sampling

We now analyze the theoretical performance of WB in the Bernoulli bandit setting. In the following proposition, proved in Appendix C.1.1, we show that WB with multiplicative exponential weights is equivalent to TS.

Proposition 1. If the rewards $y_i \in \{0,1\}$, then weighted bootstrapping using the estimator in equation 5 results in $\tilde{\theta} \sim \mathrm{Beta}(n_1 + \alpha_0, n_0 + \beta_0)$, where $n_1$ and $n_0$ are the numbers of positive and negative observations respectively, and $\alpha_0$ and $\beta_0$ are the positive and negative pseudo-counts. In this case, WB is equivalent to Thompson sampling under the $\mathrm{Beta}(\alpha_0, \beta_0)$ prior.

Since WB is mathematically equivalent to TS, the bounds in Agrawal and Goyal (2013a) imply near-optimal regret for WB in the Bernoulli bandit setting. In Appendix C.1.2, we show that this equivalence extends to the more general categorical (with $C$ categories) reward distribution. In Appendix C.2.1, we prove that for Gaussian rewards, WB with additive Gaussian weights, i.e., using Gaussian-distributed weights $w_i$ and the additive transformation $y_i \mapsto y_i + w_i$, is equivalent to TS under an uninformative prior. Furthermore, this equivalence holds even in the presence of features, i.e., in the linear bandit case. Using the results in Agrawal and Goyal (2013b), this implies that for Gaussian rewards, WB with additive Gaussian weights achieves near-optimal regret.

5 Experiments

In Section 5.1, we first compare the empirical performance of bootstrapping and Thompson sampling in the bandit setting. In Section 5.2, we describe the experimental setup for the contextual bandit setting and compare the performance of different algorithms under different feature-reward mappings.

5.1 Bandit setting

We consider $K$ arms (refer to Appendix D for results with other values of $K$), a fixed horizon of $T$ rounds, and average our results across independent runs. We perform experiments for four different reward distributions - Bernoulli, truncated normal, Beta, and the triangular distribution, all bounded on the $[0,1]$ interval. In each run and for each arm, we choose the expected reward (the mean of the corresponding distribution) to be a uniformly distributed random number in $[0,1]$. For the truncated-normal distribution, we use a fixed standard deviation, whereas for the Beta distribution, the shape parameters of each arm are chosen in terms of its mean. We use the $\mathrm{Beta}(1,1)$ prior for TS.
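As an illustrative sanity check of Proposition 1 (this snippet is not from the paper), one can draw WB samples via the closed-form estimator in equation 5 and compare them against draws from the corresponding Beta posterior; the two empirical distributions coincide because the ratio of the two Gamma-distributed sums is exactly Beta-distributed:

```python
# Sketch: the WB estimator (equation 5) matches Beta Thompson sampling.
import numpy as np

rng = np.random.default_rng(1)
n1, n0 = 7, 3   # numbers of positive / negative observations (illustrative)
a0, b0 = 1, 1   # positive / negative pseudo-counts
m = 200_000     # Monte Carlo sample size

# Equation 5: the numerator sums the Exp(1) weights of the n1 positive
# observations and the a0 positive pseudo-examples; the denominator sums
# all n1 + n0 + a0 + b0 weights.
w = rng.exponential(size=(m, n1 + n0 + a0 + b0))
wb = w[:, : n1 + a0].sum(axis=1) / w.sum(axis=1)

ts = rng.beta(n1 + a0, n0 + b0, size=m)  # Thompson sampling draws

print(wb.mean(), ts.mean())  # both ~ (n1 + a0) / (n1 + n0 + a0 + b0)
print(wb.std(), ts.std())    # standard deviations also agree
```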
In order to use TS on distributions other than Bernoulli, we follow the procedure proposed in Agrawal and Goyal (2013a): for a reward in $[0,1]$, we flip a coin with the probability of obtaining one equal to the reward, resulting in a binary "pseudo-reward". This pseudo-reward is then used to update the Beta posterior as in the Bernoulli case. For NPB and WB, we use the estimators in equations 3 and 5 respectively, with the same pseudo-counts for both. In the Bernoulli case, NPB obtains a higher regret as compared to both TS and WB, which are equivalent. For the other distributions, we observe that both WB and NPB (with WB performing consistently better) obtain lower cumulative regret than the modified TS procedure. This shows that for distributions that do not admit a conjugate prior, WB (and NPB) can be used directly and result in good empirical performance compared to modifying the TS procedure.

5.2 Contextual bandit setting

We adopt the one-versus-all multi-class classification setting for evaluating contextual bandits Agarwal et al. (2014); McNellis et al. (2017). Each arm corresponds to a class. In each round, the algorithm receives a reward of one if the context vector belongs to the class corresponding to the selected arm and zero otherwise. Each arm maintains an independent set of sufficient statistics that map the context vector to the observed binary reward. We use two multi-class datasets: CoverType and MNIST. We fix the number of rounds in each experiment and average results over independent runs. We experiment with LinUCB Abbasi-Yadkori et al. (2011), which we call UCB, linear Thompson sampling (TS) Agrawal and Goyal (2013b), ε-greedy (EG) Langford and Zhang (2008), non-parametric bootstrapping
the local problem \begin{align} \label{LSlocal} & \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_k^{\varphi \varphi}} & {\mathbb{A}_k^{\varphi \mathbf{E}}} \\ [4pt] {\mathbb{A}_k^{\mathbf{E} \varphi}} & {\mathbb{A}_k^{\mathbf{E} \mathbf{E}}} \\ [2pt] \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {\underline{\varphi}_k} \\ [4pt] {\underline{\mathbf{E}}_k} \\ [2pt] \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_{k}^{\varphi \hat{\varphi}}} & \mathbb{O} \\ [4pt] {\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}}} & {\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}^c}} \\ [2pt] \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {\underline{\hat \varphi}_k} \\ [4pt] {{\hat \varphi}^c} \\ [2pt] \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {F_k^\varphi} \\ [4pt] {F_k^{\mathbf{E}}} \\ [2pt] \end{array}} \right] \end{align} where the local unknown vectors are defined as \begin{equation} \label{unklocal} \underline{\varphi}_k = \left[ {\begin{array}{*{20}{c}} \begin{gathered} {\varphi_k^1} \hfill \\ \vdots \hfill \\ {\varphi_k^{N_p}} \hfill \\ \end{gathered} \end{array}} \right], \quad \underline{\mathbf{E}}_k = \left[ {\begin{array}{*{20}{c}} \begin{gathered} {\mathbf{E}_k^1} \hfill \\ \vdots \hfill \\ {\mathbf{E}_k^{N_p}} \hfill \\ \end{gathered} \end{array}} \right] \end{equation} and $\underline{\hat{\varphi}}_k$ is a vector of dimension $N_{fe}N_{fp}\times1$, which stores the global unknowns (see \eqref{unkglobal} below) and is defined as \begin{equation} \label{unkmap} \underline{\hat{\varphi}}_k = \left[ {\begin{array}{*{20}{c}} \begin{gathered} {\underline{\hat{\varphi}}_{k,1}} \hfill \\ \vdots \hfill \\ {\underline{\hat{\varphi}}_{k,{N_{fe}}}} \hfill \\ \end{gathered} \end{array}} \right], \quad {\underline{\hat{\varphi}}_{k,{f_l}}} = \left[ {\begin{array}{*{20}{c}} \begin{gathered} {\hat{\varphi}_{k,f_l}^1} \hfill \\ \vdots \hfill \\ {\hat{\varphi}_{k,f_l}^{N_{fp}}} \hfill \\ \end{gathered} \end{array}} \right]. \end{equation} In \eqref{LSlocal}, the right hand side vectors ${F_k^{\alpha}}$, $\alpha \in \{ \varphi, \mathbf{E} \}$, correspond to the right hand sides of \eqref{weakfp0}-\eqref{weakfp1}, respectively. The matrices $\mathbb{A}_k^{\alpha \beta}$, ($\alpha \in \{ {\varphi, \mathbf{E}} \}$, $\beta \in \{ {\varphi, \mathbf{E}, \hat{\varphi}} \}$), correspond to the inner products in \eqref{weakfp0}-\eqref{weakfp1} and are the standard DG matrices, e.g., mass, stiffness, and lift matrices. For details, readers are referred to the authors' previous work~\cite{Chen2020float}. Note that $\mathbb{A}_k^{\alpha \beta}$ has dimensions $N_{\alpha} \times N_{\beta}$, where $N_{\beta}$ is the dimension of the input vector $\beta_k$ and $N_{\alpha}$ is the dimension of the output vector $\alpha_k$.
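As an aside, the block structure of \eqref{LSlocal} and the elimination of the local unknowns carried out below can be sketched in a few lines of NumPy (the dense random blocks and sizes are purely illustrative; in practice the blocks are sparse element matrices):

```python
# Illustrative sketch of the HDG local solve and static condensation.
# Sizes and dense random blocks are made up; only the algebra mirrors the text.
import numpy as np

rng = np.random.default_rng(0)
n_loc = 8  # size of the stacked local vector [phi_k; E_k]
n_tr = 5   # size of the trace vector [hat_phi_k; hat_phi_c]

A_k = rng.normal(size=(n_loc, n_loc)) + n_loc * np.eye(n_loc)  # local block A_k
Abar_k = rng.normal(size=(n_loc, n_tr))  # coupling to the traces (Abar_k)
F_k = rng.normal(size=n_loc)             # local right hand side [F_k^phi; F_k^E]
traces = rng.normal(size=n_tr)           # given trace unknowns

# Local solve: [phi_k; E_k] = A_k^{-1} F_k - A_k^{-1} Abar_k [hat_phi_k; hat_phi_c]
local = np.linalg.solve(A_k, F_k - Abar_k @ traces)

# Element contribution to the condensed (Schur-complement) global system,
# using Atil_k = Abar_k^T as in the text: Ahat_k - Atil_k A_k^{-1} Abar_k.
S_k = -Abar_k.T @ np.linalg.solve(A_k, Abar_k)  # to be added to Ahat_k
rhs_k = -Abar_k.T @ np.linalg.solve(A_k, F_k)   # moves to the global RHS
```

Summing such element contributions over $k$ (with the local-to-global face mapping) assembles the global system that involves only the trace unknowns.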
Similarly, Galerkin testing \eqref{weakfp2}-\eqref{weakfp3} yields the following matrix system for the global problem \begin{align} \label{LSglobal} & \sum_{k=1}^K \left\{ \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_k^{\hat{\varphi} \varphi}} & {\mathbb{A}_k^{\hat{\varphi} \mathbf{E}}} \\ [4pt] \mathbb{O} & {\mathbb{A}_{k}^{\hat{\varphi}^c \mathbf{E}}} \\ [2pt] \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {\underline{\varphi}_k} \\ [4pt] {\underline{\mathbf{E}}_k} \\ [2pt] \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_k^{\hat{\varphi} \hat{\varphi}}} & {\mathbb{O}} \\ [4pt] {\mathbb{O}} & {\mathbb{O}}\\ [2pt] \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {\underline{\hat \varphi}_k} \\ [4pt] {{\hat \varphi}^c} \\ [2pt] \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {F_k^{\hat \varphi}} \\ [4pt] Q^c \\ [2pt] \end{array}} \right] \right\} \end{align} where the right hand side vector ${F_k^{\hat{\varphi}}}$ corresponds to the right hand side of \eqref{weakfp2}, and the matrices $\mathbb{A}_k^{\hat{\varphi} \alpha}$, ($\alpha \in \{ {\varphi, \mathbf{E}, \hat{\varphi}} \}$), correspond to the inner products in \eqref{weakfp2}. Solving $[\underline{\varphi}_k, \underline{\mathbf{E}}_k]^T$ in terms of $\underline{\hat{\varphi}}$ and $\hat{\varphi}^c$ from \eqref{LSlocal} yields \begin{align} \label{LSlocal_} \left[ {\begin{array}{*{20}{c}} {\underline{\varphi}_k} \\ [4pt] {\underline{\mathbf{E}}_k} \\ [2pt] \end{array}} \right] = \mathbb{A}_k^{-1} \left[ {\begin{array}{*{20}{c}} {F_k^\varphi} \\ [4pt] {F_k^{\mathbf{E}}} \\ [2pt] \end{array}} \right] - \mathbb{A}_k^{-1} \bar{\mathbb{A}}_k \left[ {\begin{array}{*{20}{c}} {\underline{\hat \varphi}_k} \\ [4pt] {{\hat \varphi}^c} \\ [2pt] \end{array}} \right] \end{align} where \begin{align} \nonumber & \mathbb{A}_k = \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_k^{\varphi \varphi}} & {\mathbb{A}_k^{\varphi \mathbf{E}}} \\ [4pt] {\mathbb{A}_k^{\mathbf{E} \varphi}} & {\mathbb{A}_k^{\mathbf{E} \mathbf{E}}} \\ [2pt] \end{array}} \right], \quad \bar{\mathbb{A}}_k = \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_{k}^{\varphi \hat{\varphi}}} & \mathbb{O} \\ [4pt] {\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}}} & {\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}^c}} \\ [2pt] \end{array}} \right]. \end{align} Inserting \eqref{LSlocal_} into \eqref{LSglobal} yields a global system involving only the global unknowns \begin{align} \label{LSglobal_} & \mathbb{A}_{global} \left[ {\begin{array}{*{20}{c}} {\underline{\hat \varphi}} \\ [4pt] {{\hat \varphi}^c} \\ [2pt] \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {F^{\hat \varphi}} \\ [4pt] Q^c \\ [2pt] \end{array}} \right] - \sum_{k=1}^K { \tilde{\mathbb{A}}_k \mathbb{A}_k^{-1} \left[ {\begin{array}{*{20}{c}} {F_k^\varphi} \\ [4pt] {F_k^{\mathbf{E}}} \\ [2pt] \end{array}} \right] } \end{align} where the global unknown vector is defined as \begin{equation} \label{unkglobal} \underline{\hat{\varphi}} = \left[ {\begin{array}{*{20}{c}} \begin{gathered} {\underline{\hat{\varphi}}_1} \hfill \\ \vdots \hfill \\ {\underline{\hat{\varphi}}_{N_f}} \hfill \\ \end{gathered} \end{array}} \right], \quad {\underline{\hat{\varphi}}_f} = \left[ {\begin{array}{*{20}{c}} \begin{gathered} {\hat{\varphi}_f^1} \hfill \\ \vdots \hfill \\ {\hat{\varphi}_f^{N_{fp}}} \hfill \\ \end{gathered} \end{array}} \right]. 
\end{equation} and \begin{align} \label{Aglobal_} \mathbb{A}_{global} = \sum_{k=1}^K \left\{ \hat{\mathbb{A}}_k - { \tilde{\mathbb{A}}_k \mathbb{A}_k^{-1} \bar{\mathbb{A}}_k } \right\}, \quad \tilde{\mathbb{A}}_k = \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_k^{\hat{\varphi} \varphi}} & {\mathbb{A}_k^{\hat{\varphi} \mathbf{E}}} \\ [4pt] \mathbb{O} & {\mathbb{A}_{k}^{\hat{\varphi}^c \mathbf{E}}} \\ [2pt] \end{array}} \right] = \bar{\mathbb{A}}_k^T, \quad \hat{\mathbb{A}}_k = \left[ {\begin{array}{*{20}{c}} {\mathbb{A}_k^{\hat{\varphi} \hat{\varphi}}} & {\mathbb{O}} \\ [4pt] {\mathbb{O}} & {\mathbb{O}}\\ [2pt] \end{array}} \right]. \end{align} In~\eqref{unkmap} and~\eqref{unkglobal}, $f_l \in \{1,...,N_{fe}\}$, $f \in \{1,...,N_f\}$, $\underline{\hat{\varphi}}_{k,{f_l}}$ contains the unknowns on local face $f_l$ of element $k$, and $\underline{\hat{\varphi}}_f$ contains the unknowns on face $f$ of $\Gamma$. Each local face $f_l$ of element $k$ can be mapped to a global face $f$ of $\Gamma$. Fig.~\ref{mesh} illustrates the mapping between the nodes of the local elements (blue dots) and the nodes of the skeleton (red circles). This mapping is included in~\eqref{Aglobal_} in the summation over $k$, i.e., each local face $f_l$ of the $N_{fe}$ faces of element $k$ is mapped to one face $f$ of $\Gamma$, and the matrix entries of the two local faces corresponding to the same $f$ are combined. The assembled matrix system from~\eqref{Aglobal_} has dimensions of approximately $(N_f N_{fp}+1) \times (N_f N_{fp}+1)$. The actual size is smaller than $(N_f N_{fp}+1)$ since the nodes on $\partial \Omega$ are not included in the global problem [see \eqref{Globalfp0}]. The same mapping is done in the summation on the right hand side of~\eqref{LSglobal_}. Note that the elemental matrix ${\mathbb{A}_k^{\hat{\varphi} \varphi}}$ has dimension $N_{fe} N_{fp} \times N_p$ and the resulting vector for each $k$ has the same dimension as $\underline{\hat{\varphi}}_{k}$. The size of the global system \eqref{LSglobal_} [$\sim (N_f N_{fp}+1)$] is much smaller than that of the DG method ($\sim K N_p$, see~\cite{Chen2020float}). Once $\underline{\hat{\varphi}}$ and $\hat{\varphi}^c$ are solved from \eqref{LSglobal_}, they can be used to solve for $[\underline{\varphi}_k, \underline{\mathbf{E}}_k]^T$ via the local system \eqref{LSlocal_}. Since the local problems of different elements are independent of each other, they can be solved in parallel. As the dimension of \eqref{LSlocal_} is only $\sim N_p$, the computational cost of this step is relatively low and can be ignored, especially in large-scale problems~\cite{Cockburn2016static}. \section{Numerical Examples} \subsection{Coaxial Capacitor with FPC} The proposed method is first validated using a canonical problem with an analytical solution. The simulation domain is illustrated in Figure~\ref{Capacitor} (a). A thin metal tube is inserted into a coaxial capacitor. The voltages applied on the inner and outer boundaries of the capacitor are $\varphi (|\mathbf{r}| = {r_0}) = {V_0}$ and $\varphi (|\mathbf{r}| = {r_1}) = {V_1}$, respectively. The metal tube is modeled as an FPC and the FPBC is applied on $|\mathbf{r}|={r_2}$ and $|\mathbf{r}|={r_3}$. The total charge on the FPC is $Q$. The analytical solution of the electric potential is given by \begin{equation*} \varphi_{Ana} (r) = \left\{ \begin{gathered} {a_0} + {b_0}\ln (r),\;r \in [{r_0},{r_2}] \hfill \\ {a_1} + {b_1}\ln (r),\;r \in [{r_3},{r_1}] \hfill \\ \end{gathered} \right.
\end{equation*} where ${a_0} = {V_0} - {b_0}\ln ({r_0})$, ${a_1} = {V_1} - {b_1}\ln ({r_1})$, ${b_0} = {b_1} + Q/(2\pi \varepsilon )$, ${b_1} = [{V_1} - {V_0} - {C_{20}}Q/(2\pi \varepsilon )]/({C_{20}} - {C_{31}})$, and ${C_{ij}} = \ln ({r_i}/{r_j})$. In the following, ${V_0} = 0$, ${V_1} = 10$ V, ${r_0} = 0.1$ cm, ${r_1} = 2$ cm, ${r_2} = 0.8$ cm, and ${r_3} = 1.2$ cm. \begin{figure}[!ht] \centering \subfloat[\label{Capacitora}]{\includegraphics[height=0.32\columnwidth]{Capacitor.png}} \hspace{0.5cm} \subfloat[\label{Capacitorb}]{\includegraphics[height=0.32\columnwidth]{CapacitorPhiLine3.png}} \\ \subfloat[\label{Capacitorc}]{\includegraphics[height=0.36\columnwidth]{CapacitorPhi_DoF1.png}} \caption{(a) Schematic description of the coaxial capacitor model. (b) $\varphi$ computed by HDG and $\varphi_{Ana}$ on line $(x,y = 0)$ for different values of $Q$. (c) Illustration of the nodes where $\varphi$ and $\hat{\varphi}$ are defined.} \label{Capacitor} \end{figure} Figure~\ref{Capacitor} (b) compares the electric potential computed by HDG with $p=2$ to the analytical solution along the line $(x,y = 0)$ for $Q\in\{0, -5\times10^{9}e, -10^{10}e\}$, where $e$ is the electron charge. One can see that the numerical solution agrees very well with the analytical one. The absolute value of the difference between the FPC potentials computed using HDG and the analytical solution is $1.58\times10^{-7}$ V, $2.30\times10^{-8}$ V and $1.45\times10^{-8}$ V for $Q=0$, $Q=-5\times10^{9} e$ and $Q=-10^{10} e$, respectively. \begin{table}[!ht] \scriptsize \centering \begin{threeparttable} \renewcommand{\arraystretch}{1.5} \centering \caption{Dimension and condition number of the DG and HDG matrices, (wall) time and (peak) memory required by DG and HDG, and absolute error in FPC potential computed using DG and HDG for the coaxial capacitor example with zero total charge on the FPC\tnote{*}.} \label{nunk} \setlength{\tabcolsep}{3pt} \begin{tabular}{ p{52pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} | p{36pt} } \hline & \multicolumn{2}{c|}{$p=1$} & \multicolumn{2}{c|}{$p=2$} & \multicolumn{2}{c|}{$p=3$} & \multicolumn{2}{c|}{$p=4$} & \multicolumn{2}{c}{$p=5$} \\ \hline & DG & HDG & DG & HDG & DG & HDG & DG & HDG & DG & HDG \\ \hline Dimension & 254,838 & 252,319 & 509,676 & 378,478 & 849,460 & 504,637 & 1,274,190 & 630,796 & 1,783,866 & 756,955 \\ \hline Condition \# & 1.33$\times 10^8$ & 1.17$\times 10^8$ & 5.44$\times 10^8$ & 1.67$\times 10^8$ & 16.17$\times 10^8$ & 2.62$\times 10^8$ & 39.4$\times 10^8$ & 3.47$\times 10^8$ & 84.2$\times 10^8$ & 4.57$\times 10^8$ \\ \hline Time (s) & 1.97 & 1.79 & 5.35 & 3.83 & 10.5 & 7.09 & 18.8 & 9.29 & 32.3 & 12.6 \\ \hline Memory (GB) & $0.41$ & $0.38$ & $1.07$ & $0.93$ & $2.11$ & $1.66$ & $3.89$ & $2.42$ & $6.05$ & $3.21$ \\ \hline $\mathrm{Error}$ (V) & $2.86\times10^{-4}$ & $2.82\times10^{-4}$ & $2.30\times10^{-7}$ & $2.26\times10^{-7}$ & $2.01\times10^{-7}$ & $1.99\times10^{-7}$ & $1.90\times10^{-7}$ & $1.85\times10^{-7}$ & $1.90\times10^{-7}$ & $1.83\times10^{-7}$ \\ \hline \end{tabular} \smallskip \scriptsize \begin{tablenotes} \item[*] {The matrix systems are solved using UMFPACK (multifrontal sparse LU factorization) implemented by
p^{\otimes n/2})$ for an even $n>1$. Let $\sigma:[0,1)\to\cP(\Omega)$, $x\mapsto p\vec{1}\{x<1/2\}+q\vec{1}\{x\geq1/2\}$ and $\tau:[0,1)\to\cP(\Omega)$, $x\mapsto \sigma(1-x)$. Let $\nu=\frac12(\atom_\sigma+\atom_\tau)$. Then \begin{equation}\label{eqex2} \cutm(\mu,\nu)=\Cutm(\mu,\nu)=O(n^{-1/2}). \end{equation} Indeed, to construct a coupling $\gamma$ of $\mu,\nu$ let $X,Y,Y'$ be three independent random variables such that $X\sim{\rm Be}(1/2)$, $Y\in\{0,1\}^n$ has distribution $p^{\otimes n/2}\otimes q^{\otimes n/2}$ and $Y'\in\{0,1\}^n$ has distribution $q^{\otimes n/2}\otimes p^{\otimes n/2}$. Further, let $G=(Y,\atom_\sigma)$ if $X=0$ and $G=(Y',\atom_\tau)$ otherwise and let $\gamma$ be the law of $G$. A similar application of Azuma's inequality as in the previous example yields (\ref{eqex2}). \end{example} \subsection{Alternative descriptions} We recall that the (bipartite, decorated version of the) cut metric on the space $W_\Omega$ of measurable maps $[0,1)^2\to\cP(\Omega)$ can be defined as \begin{align*} \delta_{\Box}(f,g)&=\inf_{s,t\in S_{[0,1)}}\sup_{U,V\subset[0,1)}\TV{\int_{U\times V}f(x,y)-g(s(x),t(y))\,dx\,dy}\qquad \mbox{(cf.~\cite{Janson,Lovasz,LovaszSzegedy}).} \end{align*} Let $\cW_\Omega$ be the space obtained from $W_\Omega$ by identifying $f,g\in W_\Omega$ such that $\delta_{\Box}(f,g)=0$. Applying \cite[\Thm~7.1]{Janson} to our setting, we obtain \begin{proposition}\label{Prop_homeomorphic} There is a homeomorphism $\cM_\Omega\to\cW_\Omega$. \end{proposition} \begin{proof} We recall that for any $\mu\in\cP(\step_\Omega)$ there exists a measurable $\varphi:[0,1)\to\step_\Omega$ such that $\mu=\varphi(\lambda)$, i.e., $\mu(A)=\lambda(\varphi^{-1}(A))$ for all measurable $A\subset\step_\Omega$. Hence, recalling that $\varphi(x)\in L_1([0,1),\cP(\Omega))$, $\mu$ yields a graphon $w_\mu:[0,1]^2\to\cP(\Omega)$, $(x,y)\mapsto(\varphi(x))(y)$. Due to~\cite[\Thm~7.1]{Janson} the map $\bar\mu\in\cM_\Omega\mapsto w_\mu\in\cW_\Omega$ is a homeomorphism. \end{proof} \begin{corollary} $\cM_\Omega$ is a compact Polish space. \end{corollary} \begin{proof} This follows from \Prop~\ref{Prop_homeomorphic} and the fact that $\cW_\Omega$ has these properties~\cite[\Thm~9.23]{Lovasz}. \end{proof} Diaconis and Janson~\cite{DJ} pointed out the connection between $\cW_\Omega$ and the Aldous-Hoover representation of ``exchangeable arrays'' (see also Panchenko~\cite[Appendix A]{PanchenkoBook}). To apply this observation to $\cM_\Omega$, recall that $\Omega^{\NN\times\NN}$ is compact (by Tychonoff's theorem) and that a sequence $(A(n))_n$ of $\Omega^{\NN\times\NN}$-valued random variables converges to $A$ in distribution iff $$\lim_{n\to\infty}\pr\brk{\forall i,j\leq k:A_{ij}(n)=a_{ij}}=\pr\brk{\forall i,j\leq k:A_{ij}=a_{ij}} \qquad\mbox{for all $k$, $a_{ij}\in\Omega$}.$$ Now, for $\bar\mu\in\cM_\Omega$ define a random array $\vec A(\bar\mu)=(\vec A_{ij}(\bar\mu))\in\Omega^{\NN\times\NN}$ as follows. Let $(\SIGMA_i)_{i\in\NN}$ be a sequence of independent samples from the distribution $\mu$, independent of the sequence $(\vec x_i)_{i\in\NN}$ of independent uniform samples from $[0,1)$. Finally, independently for all $i,j$ choose $\vec A_{ij}(\bar\mu)\in\Omega$ from the distribution $\SIGMA_i(\vec x_j)\in\cP(\Omega)$. Then in our context the correspondence from~\cite[\Thm~8.4]{DJ} reads \begin{corollary}\label{Cor_AldousHoover} The sequence $(\bar\mu_n)_n$ converges to $\bar\mu\in\cM_\Omega$ iff $\vec A(\bar\mu_n)$ converges to $\vec A(\bar\mu)$ in distribution.
\end{corollary} While \Cor~\ref{Cor_AldousHoover} characterizes convergence in $\cutm(\nix,\nix)$, the following statement applies to the strong metric $\Cutm(\nix,\nix)$. For $\sigma\in\step_\Omega$ and $x_1,\ldots,x_k\in[0,1)$ define $\sigma_{\marg x_1,\ldots,x_k}=\sigma(x_1)\otimes\cdots\otimes\sigma(x_k)\in\cP(\Omega^k)$. Moreover, for $\mu\in M_\Omega$ let $$\mu_{\marg x_1,\ldots,x_k}=\int_{\step_\Omega}\sigma_{\marg x_1,\ldots,x_k}\,d\mu(\sigma).$$ If $\mu\in\cP(\Omega^n)$ is a discrete measure, then $\hat\mu_{\marg x_1,\ldots,x_k}=\widehat{\mu_{\marg i_1,\ldots,i_k}}$ with $i_j=\lceil nx_j\rceil$. As before, we let $(\vec x_i)_{i\geq1}$ be a sequence of independent uniform samples from $[0,1)$. \begin{corollary}\label{Cor_sampling} If $(\mu_n)_n\stacksign{$\Cutm$}\to\mu\in M_\Omega$, then for any integer $k\geq1$ we have $\lim_{n\to\infty}\Erw\TV{\mu_{n\marg \vec x_1,\ldots,\vec x_k}-\mu_{\marg \vec x_1,\ldots,\vec x_k}}=0.$ \end{corollary} \begin{proof} By \cite[\Thm~8.6]{Janson} we can turn $\mu,\mu_n$ into graphons $w,w_n:[0,1)^2\to\cP(\Omega)$ such that for all $n$ \begin{align*} \mu&=\int_0^1 \atom_{w(\nix,y)}\,dy,\quad\mu_n=\int_0^1 \atom_{w_n(\nix,y)}\,dy\quad\mbox{and}\quad \Cutm(\mu,\mu_n)=\sup_{U,V\subset[0,1)}\TV{\int_{U\times V}w(x,y)-w_n(x,y) \,dx\,dy}. \end{align*} Let $(\vec y_j)_{j\geq 1}$ be independent and uniform on $[0,1)$ and independent of $(\vec x_i)_{i\geq1}$. By \cite[\Thm~10.7]{Lovasz}, we have $\lim_{n\to\infty}\Cutm(\mu_n,\mu)=0$ iff \begin{equation}\label{eq_samp} \lim_{r\to\infty}\limsup_{n\to\infty}\ \Erw\brk{\max_{I,J\subset[r]}\TV{\sum_{(i,j)\in I\times J} w(\vec x_i,\vec y_j)-w_n(\vec x_i,\vec y_j) }}=0. \end{equation} Hence, we are left to show that (\ref{eq_samp}) implies \begin{equation}\label{eq_marg} \forall k\geq1: \lim_{n\to\infty}\Erw\TV{\mu_{n\marg \vec x_1,\ldots,\vec x_k}-\mu_{\marg \vec x_1,\ldots,\vec x_k}} = 0. \end{equation} To this end, we note that by the strong law of large numbers uniformly for all $x_1,\ldots, x_k\in[0,1]$ and $n$, \begin{align}\label{eq_LLN1} \frac 1r \sum_{j=1}^r (w(x_1,\vec y_j),\ldots,w(x_k,\vec y_j))&\ \stacksign{$r\to\infty$}\to\ \mu_{\marg x_1,\ldots, x_k}&\mbox{in probability},\\ \frac 1r \sum_{j=1}^r (w_n(x_1,\vec y_j),\ldots,w_n(x_k,\vec y_j))&\ \stacksign{$r\to\infty$}\to\ \mu_{n\marg x_1,\ldots, x_k}&\mbox{in probability}. \label{eq_LLN2} \end{align} Hence, if \eqref{eq_samp} holds, then (\ref{eq_marg}) follows from (\ref{eq_LLN1})--(\ref{eq_LLN2}). \end{proof} \noindent As an application of \Cor~\ref{Cor_sampling} we obtain \begin{corollary}\label{Cor_factorise} Assume that $(\mu_n)_n$ is a sequence such that $\mu_n\stacksign{$\Cutm$}\to\mu\in M_\Omega$. The following statements are equivalent. \begin{enumerate}[(i)] \item There is $\sigma\in\Sigma_\Omega$ such that $\mu=\atom_\sigma$. \item For any integer $k\geq2$ we have \begin{equation}\label{eqFactorise} \lim_{n\to\infty}\Erw\TV{\mu_{n\marg\vec x_1,\ldots,\vec x_k}-\mu_{n\marg\vec x_1}\otimes\cdots\otimes\mu_{n\marg\vec x_k}}=0. \end{equation} \item The condition (\ref{eqFactorise}) holds for $k=2$. \end{enumerate} \end{corollary} \begin{proof} The implication (i)$\Rightarrow$(ii) follows from \Cor~\ref{Cor_sampling} and the step from (ii) to (iii) is immediate. Hence, assume that (iii) holds. 
Then by \Cor~\ref{Cor_sampling} and the continuity of the $\otimes$-operator, \begin{align}\label{eqFactorise0} \Erw\TV{\mu_{\marg\vec x_1,\vec x_2}-\mu_{\marg\vec x_1}\otimes\mu_{\marg\vec x_2}}&= \lim_{n\to\infty}\Erw\TV{\mu_{n\marg\vec x_1,\vec x_2}-\mu_{n\marg\vec x_1}\otimes\mu_{n\marg\vec x_2}}=0. \end{align} Define $\tilde\sigma:[0,1)\to\cP(\Omega)$ by $x\mapsto\mu_{\marg x}$ and assume that $\mu\neq\atom_{\tilde\sigma}$. Then $\Cutm(\mu,\atom_{\tilde\sigma})>0$ (by Fact~\ref{Fact_attained}), whence there exist $B\subset\step_\Omega$, $U\subset[0,1)$, $\omega\in\Omega$ such that \begin{align}\label{eqFactorise1} \int_B\brk{\int_U\sigma_x(\omega)-\tilde\sigma_x(\omega)\,dx}^2\,d\mu(\sigma)>0. \end{align} However, (\ref{eqFactorise0}) entails \begin{align*} \int_{\step_\Omega}\brk{\int_U\sigma_x(\omega)-\tilde\sigma_x(\omega)\,dx}^2\,d\mu(\sigma) &=\int_{\step_\Omega}\int_U\int_U\sigma_x(\omega)\sigma_y(\omega)-\tilde\sigma_x(\omega)\tilde\sigma_y(\omega)\,dx\,dy\,d\mu(\sigma)\\ &=\Erw[\mu_{\marg\vec x_1,\vec x_2}-\mu_{\marg\vec x_1}\otimes\mu_{\marg\vec x_2}|\vec x_1,\vec x_2\in U]=0, \end{align*} in contradiction to (\ref{eqFactorise1}). \end{proof} \begin{remark} Strictly speaking, the results from~\cite{DJ,Lovasz} are stated for graphons with values in $[0,1]$, i.e., $\cP(\Omega)$ for $|\Omega|=2$. However, they extend to $|\Omega|>2$ directly. For instance, the compactness proof~\cite[\Chap~9]{Lovasz} is by way of the regularity lemma, which we extend explicitly in \Sec~\ref{Sec_reg}. Moreover, the sampling result for \Cor~\ref{Cor_sampling} follows from~\cite[\Chap~10]{Lovasz} by viewing $w:[0,1)^2\to\cP(\Omega)$ as a family $(w_\omega)_{\omega\in\Omega}$, $w_\omega:(x,y)\mapsto w_{x,y}(\omega)\in[0,1]$. Finally, the proof of \Cor~\ref{Cor_AldousHoover} in~\cite{DJ} by counting homomorphisms extends to $\cP(\Omega)$-valued graphons~\cite[\Sec~17.1]{Lovasz}. \end{remark} \subsection{Algebraic properties} The cut metric is compatible with basic algebraic operations on measures. The following is immediate. \begin{fact} If $\mu_n\stacksign{$\Cutm$}\to\mu$, $\nu_n\stacksign{$\Cutm$}\to\nu$, then $\alpha\mu_n+(1-\alpha)\nu_n\stacksign{$\Cutm$}\to\alpha\mu+(1-\alpha)\nu$ for any $\alpha\in(0,1)$. \end{fact} The construction of a ``product measure'' is slightly more interesting. Let $\Omega,\Omega'$ be finite sets. For $\sigma\in\step_\Omega,\tau\in\step_{\Omega'}$ we define $\sigma\times\tau\in\step_{\Omega\times\Omega'}$ by letting $\sigma\times\tau(x)=\sigma(x)\otimes\tau(x)$, where $\sigma(x)\otimes\tau(x)\in\cP(\Omega\times\Omega')$ is the usual product measure of $\sigma(x),\tau(x)$. Further, for $\mu\in M_\Omega,\nu\in M_{\Omega'}$ we define $\mu\times\nu\in M_{\Omega\times\Omega'}$ by \begin{align*} \mu\times\nu&=\int_{\step_{\Omega}\times\step_{\Omega'}}\atom_{\sigma\times\tau}\,d\mu\otimes\nu(\sigma,\tau). \end{align*} Clearly, $\mu\times\nu$ is quite different from the usual product measure $\mu\otimes\nu$. However, for {\em discrete} measures we observe the following. \begin{fact} For $\mu\in\cP(\Omega^n)$ and $\nu\in\cP({\Omega'}^n)$ we have $\hat\mu\times\hat\nu=\widehat{\mu\otimes\nu}$. \end{fact} \begin{proposition} If $\mu_n\stacksign{$\Cutm$}\to\mu\in M_\Omega$, $\nu_n\stacksign{$\Cutm$}\to\nu\in M_{\Omega'}$, then $\mu_n\times\nu_n\stacksign{$\Cutm$}\to\mu\times\nu$. \end{proposition} \begin{proof} Let $\eps>0$ and choose $n_0$ large enough so that $\Cutm(\mu_n,\mu)<\eps$ and $\Cutm(\nu_n,\nu)<\eps$ for all $n>n_0$.
By Fact~\ref{Fact_attained} there exist couplings $\gamma_n,\gamma_n'$ of $\mu_n,\mu$ and $\nu_n,\nu$ such that (\ref{eqmetric1}) is attained. Because $\TV{p\otimes p'-q\otimes q'}\leq\TV{p-q}+\TV{p'-q'}$ for any $p,q\in\cP(\Omega)$, $p',q'\in\cP(\Omega')$, we obtain for any $U\subset[0,1)$, $B\subset\step_\Omega^2$, $B'\subset\step_{\Omega'}^2$ \begin{align*} \TV{\int_{B\times B'}\int_U\sigma\times\sigma'(x)-\tau\times\tau'(x)\,dx\,d\gamma_n\otimes\gamma_n'(\sigma,\tau,\sigma',\tau')}<2\eps, \end{align*} as desired. \end{proof} \subsection{Regularity}\label{Sec_reg} For $\sigma\in\Sigma_\Omega$ and $U\subset[0,1)$ measurable we write $$\sigma[\omega|U]=\int_U\sigma_x(\omega)\,dx.$$ Moreover, for $\mu\in M_\Omega$ and a measurable $S\subset\step_\Omega$ with $\mu(S)>0$ we let $\mu[\nix|S]\in M_\Omega$ be the conditional distribution. Further, let $\vV=(V_1,\ldots,V_K)$ be a partition of $[0,1)$ into a finite number of pairwise disjoint measurable sets. Similarly, let $\vS=(S_1,\ldots,S_L)$ be a partition of $\step_\Omega$ into pairwise disjoint measurable sets. We write $\#\vV,\#\vS$ for the numbers $K,L$ of classes, respectively. A measure $\mu\in M_\Omega$ is {\em $\eps$-regular} with respect to $(\vV,\vS)$ if there exists $R\subset[\#\vV]\times[\#\vS]$ such that the following conditions hold. \begin{description} \item[REG1] $\lambda(V_i)>0$ and $\mu(S_j)>0$ for all $(i,j)\in R$. \item[REG2] $\sum_{(i,j)\in R}\lambda(V_i)\mu(S_j)>1-\eps$. \item[REG3] for all $(i,j)\in R$ and all $\sigma,\sigma'\in S_j$ we have $\TV{\sigma[\nix|V_i]-\sigma'[\nix|V_i]}<\eps$. \item[REG4] if $(i,j)\in R$, then for every $U\subset V_i$ with $\lambda(U)\geq\eps\lambda(V_i)$ and every $T\subset S_j$ with $\mu(T)\geq\eps\mu(S_j)$ we have $$\TV{\bck{\SIGMA[\nix|U]}_{\mu[\nix|T]}-\bck{\SIGMA[\nix|V_i]}_{\mu[\nix|S_j]}}<\eps.$$ \end{description} Thus, $R$ is a set of index pairs $(i,j)$ of ``good squares'' $V_i\times S_j$. {\bf REG1} provides that every good square has positive measure and {\bf REG2} that the total probability mass of good squares is at least $1-\eps$. Further, by {\bf REG3} the averages $\sigma[\nix|V_i],\sigma'[\nix|V_i]\in\cP(\Omega)$ over $V_i$ of any two $\sigma,\sigma'\in S_j$ are close. Finally, and most importantly, {\bf REG4} requires that the average $\bck{\SIGMA[\nix|U]}_{\mu[\nix|T]}$ over a ``biggish'' sub-square $U\times T$ is close to the mean over the entire square $V_i\times S_j$. A {\em refinement} of a partition $(\vV,\vS)$ is a partition $(\vV',\vS')$ such that for every pair $(i',j')\in[\#\vV']\times[\#\vS']$ there is a pair $(i,j)\in [\#\vV]\times[\#\vS]$ such that $(V_{i'}',S_{j'}')\subset(V_i,S_j)$. \begin{theorem}\label{Thm_reg} For any $\eps>0$ there exists $N=N(\eps,\Omega)$ such that for every $\mu\in M_\Omega$ the following is true. Every partition $(\vV_0,\vS_0)$ with $\#\vV_0+\#\vS_0\leq1/\eps$ has a refinement $(\vV,\vS)$ such that $\#\vV+\#\vS\leq N$ with respect to which $\mu$ is $\eps$-regular. \end{theorem} In light of \Prop~\ref{Prop_homeomorphic}, \Thm~\ref{Thm_reg} would follow from the regularity lemma for graphons~\cite[\Lem~9.16]{Lovasz} if we were to drop condition {\bf REG3}. In fact, adapting the standard proof from~\cite{Szemeredi} to accommodate {\bf REG3} is not difficult. For the sake of completeness we carry this out in detail in \Sec~\ref{Sec_Kathrin}. A regularity lemma for measures on $\Omega^n$ was proved in~\cite{Victor}. But even in the discrete case \Thm~\ref{Thm_reg} gives a stronger result.
The improvement is that {\bf REG4} above holds for all ``small sub-squares'' $U\times T$ simultaneously. How does the concept of regularity connect with the cut metric? For a partition $\vV$ of $[0,1)$ and $\sigma\in\step_\Omega$ define $\sigma[\nix|\vV]\in\cW_\Omega$ by $$\sigma_x[\omega|\vV]=\sum_{i\in[\#\vV]}\vec{1}\{x\in V_i\}\sigma[\omega|V_i].$$ Thus, $\sigma[\nix|\vV]:[0,1)\to\cP(\Omega)$ is constant on the classes of $\vV$. Further, for a pair $(\vV,\vS)$ of partitions and $\mu\in M_\Omega$ let $$\mu[\nix|\vV,\vS]=\sum_{i\in[\#\vS]}\mu(S_i)\,\atom_{\frac{1}{\mu(S_i)}\int_{S_i}\sigma[\nix|\vV]\,d\mu(\sigma)}.$$ Hence, $\mu[\nix|\vV,\vS]\in M_\Omega$ is supported on a discrete set of functions $[0,1)\to\cP(\Omega)$ that are constant on the classes of $\vV$. We might think of $\mu[\nix|\vV,\vS]$ as the ``conditional expectation'' of $\mu$ with respect to $(\vV,\vS)$. \begin{proposition}\label{Prop_reg2metric} Let $\eps>0$ and assume that $\mu$ is $\eps$-regular w.r.t.\ $(\vV,\vS)$. Then $\Cutm(\mu,\mu[\nix|\vV,\vS])<2\eps$. \end{proposition} \begin{proof} Let $\sigma^{(i)}=\frac{1}{\mu(S_i)}\int_{S_i}\sigma[\nix|\vV]\,d\mu(\sigma)$. We define a coupling $\gamma$ of $\mu,\mu[\nix|\vV,\vS]$ in the obvious way: for a measurable $X\subset S_i$ let $\gamma(X\times\{\sigma^{(i)}\})=\mu(X)$. Now, let $U\subset[0,1)$ and $B\subset\step_\Omega^2$ be measurable. Due to the construction of our coupling we may assume that $B=\bigcup_i B_i\times\{\sigma^{(i)}\}$ for certain sets $B_i\subset S_i$. Moreover, let $U_j=U\cap V_j$. Then \begin{align*} \TV{\int_B\int_U\sigma(x)-\tau(x)\,dx\,d\gamma(\sigma,\tau)}&\leq \sum_{(i,j):\mu(S_i)\lambda(V_j)>0}\mu(S_i)\lambda(V_j)\TV{\int_{B_i}\int_{U_j}\sigma(x)-\sigma^{(i)}(x)\frac{dx}{\lambda(V_j)}\frac{d\mu(\sigma)}{\mu(S_i)}}. \end{align*} By {\bf REG2} and {\bf REG4} the last expression is less than $2\eps$. \end{proof} \begin{corollary} For any $\eps>0$ there exists $N=N(\eps)>0$ such that for any $\mu\in M_\Omega$ there exist $\sigma_1,\ldots,\sigma_N\in\step_\Omega$ and $w=(w_1,\ldots,w_N)\in\cP([N])$ such that $\Cutm\bc{\mu,\sum_{i=1}^Nw_i\atom_{\sigma_i}}<\eps.$ \end{corollary} \begin{proof} This is immediate from \Thm~\ref{Thm_reg} and \Prop~\ref{Prop_reg2metric}. \end{proof} \subsection{Proof of \Thm~\ref{Thm_reg}}\label{Sec_Kathrin} Following the path beaten in~\cite{Victor,Szemeredi,Tao}, we define the index of $(\vV,\vS)$ as \begin{align*} \ind_\mu(\vV,\vS)&=\Erw\bck{\Var[\SIGMA_{\vec x}[\omega]|\vV,\vS]}_\mu =\frac1{|\Omega|}\sum_{\omega\in\Omega}\sum_{i=1}^{\#\vV}\sum_{j=1}^{\#\vS} \int_{S_j}\int_{V_i}\bc{\sigma_x(\omega)-\int_{S_j}\int_{V_i}\sigma'_y(\omega)\frac{dy}{\lambda(V_i)}\frac{d\mu(\sigma')}{\mu(S_j)}}^2dx\,d\mu(\sigma). \end{align*} \noindent There is only one simple step that we add to the proof from~\cite{Szemeredi}. Namely, following~\cite{Victor}, we begin by refining the partition
# SonicWALL Aventail E-Class SRA EX-Series Installation and

Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your system.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Trademarks: Dell™, the DELL logo, SonicWALL™, Aventail™, Reassembly-Free Deep Packet Inspection™, Dynamic Security for the Global Network™, SonicWALL Aventail Advanced End Point Control™ (EPC™), SonicWALL Aventail Advanced Reporting™, SonicWALL Aventail Connect Mobile™, SonicWALL Aventail Connect™, SonicWALL Aventail Native Access Modules™, SonicWALL Aventail Policy Zones™, SonicWALL Aventail Smart Access™, SonicWALL Aventail Unified Policy™, SonicWALL Aventail™ Advanced EPC™, SonicWALL Clean VPN™, SonicWALL Clean Wireless™, SonicWALL Global Response Intelligent Defense (GRID) Network™, SonicWALL Mobile Connect™, and all other SonicWALL product and service names and slogans are trademarks of Dell Inc.

2014 – 02 P/N 232-001830-00

Contents

Chapter 1. Introduction (13)
- Features of Your E-Class SRA Appliance (14)
  - E-Class SRA Appliance Models (14)
  - Administrator Components (14)
  - User Access Components (15)
  - ADA 508 Improvements (18)
- What's New in This Release (18)
- System Requirements (19)
  - Client Components (19)
  - Server Components (30)
- About the Documentation (33)
  - Document Conventions (33)

Chapter 2. Installation and Initial Setup (35)
- Network Architecture (35)
- Preparing for the Installation (36)
  - Gathering Information (37)
  - Verifying Your Firewall Policies (38)
  - Helpful Management Tools (39)
- Installation and Deployment Process (39)
  - Specifications and Rack Installation (40)
  - Front Panel Controls and Indicators (42)
  - Connecting the Appliance (45)
  - Powering Up and Configuring Basic Network Settings (46)
  - Web-Based Configuration Using Setup Wizard (48)
  - Configuring the Appliance Using the Management Console (49)
  - Moving the Appliance into Production (51)
  - Powering Down and Restarting the Appliance
a worst-case deadline that is unknown to the scheduler.…

## Preprints as accelerator of scholarly communication: An empirical analysis in Mathematics

In this study we analyse the key driving factors of preprints in enhancing scholarly communication. Articles with preprint versions are more likely to be mentioned in social media and have shorter Altmetric attention delay. We could observe the "early-view" and "open-access" effects of preprints.…

## Dendritic trafficking synaptic scaling and structural plasticity

Neuronal circuits internally regulate electrical signaling via a host of homeostatic mechanisms. Two prominent mechanisms, synaptic scaling and structural plasticity, are believed to maintain average activity within an operating range. However, both mechanisms operate on relatively slow timescales and thus face fundamental limits due to delays.…

## The Reads From Equivalence for the TSO and PSO Memory Models

The verification of concurrent programs remains an open challenge due to the non-determinism in inter-process communication. The reads-from (RF) equivalence was recently shown to be coarser than the Mazurkiewicz equivalence, leading to impressive scalability improvements for SMC under SC. For TSO and PSO, the standard equivalence has been Shasha-Snir traces.…

## Inverse problems for semiconductors models and methods

We consider the problem of identifying discontinuous doping profiles in semiconductor devices from data obtained by different models connected to the voltage-current map. Stationary as well as transient settings are discussed. Numerical implementations for the so-called stationary unipolar and stationary bipolar cases show the effectiveness of a level set approach to tackle the inverse problem.…

Stochastic sparse adversarial attacks (SSAA) are simple, fast and purely noise-based targeted and untargeted attacks of NNC. SSAA offer new examples of sparse (or $L_0$) attacks for which only few methods have been proposed previously. These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes.…

## Energy Based Models for Continual Learning

Energy-Based Models (EBMs) have a natural way to support a dynamically-growing number of tasks or classes that causes less interference with previously learned information. EBMs outperform the baseline methods by a large margin on several continual learning benchmarks. We also show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks.…

When using heterogeneous hardware, barriers of technical skills such as OpenMP, CUDA and OpenCL are high. I have proposed environment-adaptive software that enables automatic conversion and configuration. However, there has been no research to properly and automatically offload to mixed offloading destination environments such as GPU, FPGA and many-core CPU.…

## Hyper parameter estimation method with particle swarm optimization

The particle swarm optimization (PSO) method cannot be directly used in the problem of hyper-parameter estimation. The Bayesian optimization (BO) framework is capable of converting the optimization of an acquisition function.
The proposed method in this paper uses the particle swarm method to optimize the acquisition function in the BO framework.…

## SpinNet Learning a General Surface Descriptor for 3D Point Cloud Registration

A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with SO(2) equivariant representation. A Neural Feature Extractor leverages the powerful point-based and 3D cylindrical convolutional neural layers to derive a compact and representative descriptor for matching.…

## A DC Autotransformer based Multilevel Inverter for Automotive Applications

This paper proposes a novel multilevel inverter for automotive applications. The topology consists of a modular DC-DC converter and a tap selector. The DC-DC converter is capable of self-balancing its modules and thus does not require large capacitors, which yields a high power density.…

## General Purpose Atomic Crosschain Transactions

The General Purpose Atomic Crosschain Transaction protocol allows composable programming across multiple Ethereum blockchains. It allows for inter-contract and inter-blockchain function calls that are both synchronous and atomic. If one part fails, the whole call graph of function calls is rolled back.…

## Rotational Error Metrics for Quadrotor Control

We analyze and experimentally compare various rotational error metrics for use in quadrotor controllers. We provide a catalog of proposed rotational metrics and place them into the same framework. We show experimental results to highlight the salient differences between the rotational errors.…

## New method of verifying cryptographic protocols based on the process model

A cryptographic protocol (CP) is a distributed algorithm designed to provide secure communication in an insecure environment. Errors in CPs can lead to great financial and social damage, therefore it is necessary to use mathematical methods to justify the correctness and safety of CPs.…

## Solving Two Dimensional H curl elliptic Interface Systems with Optimal Convergence On Unfitted Meshes

In this article, we develop and analyze a finite element method with the first family N\'ed\'elec elements of the lowest degree for solving a Maxwell interface problem. We establish a few important properties for the IFE functions, including the unisolvence according to the edge degrees of freedom, the exact sequence relating to the $H^1$ IFE functions and the optimal approximation capabilities.…

## Fuzzy Stochastic Timed Petri Nets for Causal properties representation

Imagery is frequently used to model, represent and communicate knowledge. Causality is defined in terms of precedence (the cause precedes the effect), concurrency (often, an effect is provoked simultaneously by two or more causes) and circularity (a cause provokes the effect and the effect reinforces the cause). We will introduce Fuzzy Stochastic Timed Petri Nets as a graphical tool able to represent time, co-occurrence, looping and imprecision in causal flow.…

## Tight Integrated End to End Training for Cascaded Speech Translation

A cascaded speech translation model relies on discrete and non-differentiable transcription. Such modeling suffers from error propagation between ASR and MT models.
Our experiments on four tasks with different data scenarios show that the model outperforms cascade models by up to 1.8% in BLEU and 2.0% in TER.…

## GLGE A New General Language Generation Evaluation Benchmark

Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress of pretraining and transfer learning in Natural Language Processing (NLP). These benchmarks mostly focus on a range of Natural Language Understanding (NLU) tasks, without considering the Natural Language Generation (NLG) models.…

## Generate Asset Condition Data for Power System Reliability Studies

This paper explores an unconventional method: generating numerical and non-numerical asset condition data based on condition degradation, condition correlation and categorical distribution models. Empirical knowledge from human experts can also be incorporated in the modeling process. The method can be used to conveniently generate hypothetical data for research purposes.…

## PeleNet A Reservoir Computing Framework for Loihi

The PeleNet framework aims to simplify reservoir computing for neuromorphic hardware. It is built on top of the NxSDK from Intel and is written in Python. The framework manages weight matrices, parameters and probes. With this, the user is not confronted with technical details and can concentrate on experiments.…

## Effective Parallelism for Equation and Jacobian Evaluation in Power Flow Calculation

This letter investigates parallelism approaches for equation and Jacobian evaluation in power flow calculations. Case studies on the 70,000-bus synthetic grid show that equation evaluations can be accelerated by ten times, and the overall Newton power flow outperforms MATPOWER by 20%.…

## InstaHide's Sample Complexity When Mixing Two Private Images

Inspired by the InstaHide challenge [Huang, Song, Li and Arora '20], [Chen, Song and Zhuo '20] recently provides one mathematical formulation of the InstaHide attack problem under the Gaussian image distribution. They show that it suffices to use $O(n_{\mathsf{priv}}^{k_{\mathsf{priv}}-2/(k_{\mathsf{priv}}+1)})+\mathrm{poly}(n_{\mathsf{priv}})$ samples to recover one private image in $n_{\mathsf{priv}}^{O(k_{\mathsf{priv}})}$ time for any integer $k_{\mathsf{priv}}$.…

## Provably Robust Runtime Monitoring of Neuron Activation Patterns

For safety-critical autonomous driving tasks, it is desirable to monitor in operation time if the input for the DNN is similar to the data used in DNN training. The algorithm performs a sound worst-case estimate of neuron values with inputs (or features) subject to perturbation, before the abstraction function is applied to build the monitor.…

## On the Serverless Nature of Blockchains and Smart Contracts

Serverless architecture is more
# Shirshendu Ganguly

I am an Associate Professor in the Department of Statistics at UC Berkeley.

401 Evans Hall, UC Berkeley, Berkeley, CA 94720. sganguly@berkeley.edu

## Research

I am broadly interested in probability theory and its applications. Recently I have been working on problems in disordered metric geometries with a focus on the geometry of geodesics in percolation models, scaling limits and phase transitions in statistical mechanics, large deviations and counting problems in sparse non-linear settings, mixing times of Markov chains, random walks on graphs, and random matrix theory.

## Education and past employment

• UC Berkeley. Assistant Professor. 2018-2021.
• UC Berkeley. Miller Postdoctoral Fellow. Statistics and Mathematics. 2016-2018.
• University of Washington. PhD in Mathematics. 2011-2016.

## Teaching

• Stat 155. Game theory. Fall 2018.
• Stat C205A/ Math C218A. Probability theory. Fall 2019.
• Stat C205A/ Math C218A. Probability theory. Fall 2020.
• Stat C205A/ Math C218A. Probability theory (with Prof. Steve Evans). Fall 2021.

## Recent Works

### Stability, Noise sensitivity and Chaos in dynamical last passage percolation models

• (with Alan Hammond). Arxiv

Many complex statistical mechanical models have intricate energy landscapes. The ground state, or lowest energy state, lies at the base of the deepest valley. In examples such as spin glasses and Gaussian polymers, there are many valleys; the abundance of near-ground states (at the base of valleys) indicates the phenomenon of chaos, under which the ground state alters profoundly when the model's disorder is slightly perturbed. In this article, we compute the critical exponent that governs the onset of chaos in a dynamic manifestation of a canonical model in the Kardar-Parisi-Zhang [KPZ] universality class, Brownian last passage percolation [LPP]. In this model in its static form, semi-discrete polymers advance through Brownian noise, their energy given by the integral of the white noise encountered along their journey. A ground state is a geodesic, of extremal energy given its endpoints. We perturb Brownian LPP by evolving the disorder under an Ornstein-Uhlenbeck flow. We prove that, for polymers of length n, a sharp phase transition marking the onset of chaos is witnessed at the critical time $$n^{−1/3}$$. Indeed, the overlap between the geodesics at times zero and $$t>0$$ that travel a given distance of order n will be shown to be of order n when $$t\ll n^{−1/3}$$; and to be of smaller order when $$t\gg n^{−1/3}$$. We expect this exponent to be shared among many interface models. The present work thus sheds light on the dynamical aspect of the KPZ class; it builds on several recent advances. These include Chatterjee's harmonic analytic theory [Cha14] of equivalence of superconcentration and chaos in Gaussian spaces; a refined understanding of the static landscape geometry of Brownian LPP developed in the companion paper [GH20]; and, underlying the latter, strong comparison estimates of the geodesic energy profile to Brownian motion in [CHH19].

• (with Alan Hammond). Arxiv

The energy and geometry of maximizing paths in integrable last passage percolation models are governed by the characteristic KPZ scaling exponents of one-third and two-thirds. When represented in scaled coordinates that respect these exponents, this random field of paths may be viewed as a complex energy landscape. We investigate the structure of valleys and connecting pathways in this landscape.
The routed weight profile $$\mathbb{R}\to \mathbb{R}$$ associates to $$x\in \mathbb{R}$$ the maximum scaled energy obtainable by a path whose scaled journey from $$(0,0)$$ to $$(0,1)$$ passes through the point $$(x,1/2)$$. Developing tools of Brownian Gibbs analysis from [Ham16] and [CHH19], we prove an assertion of strong similarity of this profile for Brownian last passage percolation to Brownian motion of rate two on the unit-order scale. A sharp estimate on the rarity that two macroscopically different routes in the energy landscape offer energies close to the global maximum results. We prove robust assertions concerning modulus of continuity for the energy and geometry of scaled maximizing paths, that develop the results and approach of [HS20], delivering estimates valid on all scales above the microscopic. The geometry of excursions of near ground states about the maximizing path is investigated: indeed, we estimate the energetic shortfall of scaled paths forced to closely mimic the geometry of the maximizing route while remaining disjoint from it. We also provide bounds on the approximate gradient of the maximizing path, viewed as a function, ruling out sharp steep movement down to the microscopic scale. Our results find application in a companion study [GH20a] of the stability, and fragility, of last passage percolation under a dynamical perturbation.

### Fractal Geometry of Airy Sheet

• (with Milind Hegde). Arxiv

There has recently been much activity within the Kardar-Parisi-Zhang universality class spurred by the construction of the canonical limiting object, the parabolic Airy sheet $$\mathcal{S}:\mathbb{R}^2\to\mathbb{R}$$ \cite{dauvergne2018directed}. The parabolic Airy sheet provides a coupling of parabolic Airy$$_2$$ processes---a universal limiting geodesic weight profile in planar last passage percolation models---and a natural goal is to understand this coupling. Geodesic geometry suggests that the difference of two parabolic Airy$$_2$$ processes, i.e., a difference profile, encodes important structural information. This difference profile $$\mathcal{D}$$, given by $$\mathbb{R}\to\mathbb{R}:x\mapsto \mathcal{S}(1,x)-\mathcal{S}(-1,x)$$, was first studied by Basu, Ganguly, and Hammond \cite{basu2019fractal}, who showed that it is monotone and almost everywhere constant, with its points of non-constancy forming a set of Hausdorff dimension $$1/2$$. Noticing that this is also the Hausdorff dimension of the zero set of Brownian motion leads to the question: is there a connection between $$\mathcal{D}$$ and Brownian local time? Establishing that there is indeed a connection, we prove two results. On a global scale, we show that $$\mathcal{D}$$ can be written as a \emph{Brownian local time patchwork quilt}, i.e., as a concatenation of random restrictions of functions which are each absolutely continuous to Brownian local time (of rate four) away from the origin. On a local scale, we explicitly obtain Brownian local time of rate four as a local limit of $$\mathcal{D}$$ at a point of increase, picked by a number of methods, including at a typical point sampled according to the distribution function $$\mathcal{D}$$. Our arguments rely on the representation of $$\mathcal{S}$$ in terms of a last passage problem through the parabolic Airy line ensemble and an understanding of geodesic geometry at deterministic and random times.

• (with Riddhipratim Basu, Alan Hammond) To appear in Annals of Probability.
Arxiv In last passage percolation models lying in the Kardar-Parisi-Zhang universality class, maximizing paths that travel over distances of order n accrue energy that fluctuates on scale $$n^{1/3}$$; and these paths deviate from the linear interpolation of their endpoints on scale $$n^{2/3}$$. These maximizing paths and their energies may be viewed via a coordinate system that respects these scalings. What emerges by doing so is a system indexed by $$x,y \in \mathbb{R}$$ and $$s,t \in \mathbb{R}$$ with $$s < t$$ of unit order quantities $$W_n(x,s;y,t)$$ specifying the scaled energy of the maximizing path that moves in scaled coordinates between $$(x,s)$$ and $$(y,t)$$. The space-time Airy sheet is, after a parabolic adjustment, the putative distributional limit $$W_{\infty}$$ of this system as $$n\to \infty$$. The Airy sheet has recently been constructed in [15] as such a limit of Brownian last passage percolation. In this article, we initiate the study of fractal geometry in the Airy sheet. We prove that the scaled energy difference profile given by $$\mathbb{R} \to \mathbb{R} :z \to W_{\infty}(1,0;z,1)-W_{\infty}(-1,0;z,1)$$ is a non-decreasing process that is constant in a random neighbourhood of almost every $$z \in \mathbb{R}$$; and that the exceptional set of $$z \in \mathbb{R}$$ that violate this condition almost surely has Hausdorff dimension one-half. Points of violation correspond to special behaviour for scaled maximizing paths, and we prove the result by investigating this behaviour, making use of two inputs from recent studies of scaled Brownian LPP; namely, Brownian regularity of profiles, and estimates on the rarity of pairs of disjoint scaled maximizing paths that begin and end close to each other. • (with Erik Bates, Alan Hammond) Arxiv Within the
""" A library for cycle triplet extraction and cycle error computation. Checks the cumulative rotation errors between triplets to throw away cameras. Note: the same property does not hold for cumulative translation errors when scale is unknown (i.e. in SfM). Author: John Lambert """ import os from collections import defaultdict from typing import DefaultDict, Dict, List, Optional, Set, Tuple import matplotlib.pyplot as plt import numpy as np from gtsam import Rot3, Unit3 import gtsfm.utils.geometry_comparisons as comp_utils import gtsfm.utils.logger as logger_utils import gtsfm.utils.metrics as metrics_utils from gtsfm.evaluation.metrics import GtsfmMetric, GtsfmMetricsGroup from gtsfm.two_view_estimator import TwoViewEstimationReport logger = logger_utils.get_logger() CYCLE_ERROR_THRESHOLD = 5.0 MAX_INLIER_MEASUREMENT_ERROR_DEG = 5.0 def extract_triplets(i2Ri1_dict: Dict[Tuple[int, int], Rot3]) -> List[Tuple[int, int, int]]: """Discover triplets from a graph, without O(n^3) complexity, by using intersection within adjacency lists. Based off of Theia's implementation: https://github.com/sweeneychris/TheiaSfM/blob/master/src/theia/math/graph/triplet_extractor.h If we have an edge a<->b, if we can find any node c such that a<->c and b<->c, then we have discovered a triplet. In other words, we need only look at the intersection between the nodes connected to `a` and the nodes connected to `b`. Args: i2Ri1_dict: mapping from image pair indices to relative rotation. Returns: triplets: 3-tuples of nodes that form a cycle. Nodes of each triplet are provided in sorted order. """ adj_list = create_adjacency_list(i2Ri1_dict) # only want to keep the unique ones triplets = set() # find intersections for (i1, i2), i2Ri1 in i2Ri1_dict.items(): if i2Ri1 is None: continue if i1 >= i2: raise RuntimeError("Graph edges (i1,i2) must be ordered with i1 < i2 in the image loader.") nodes_from_i1 = adj_list[i1] nodes_from_i2 = adj_list[i2] node_intersection = (nodes_from_i1).intersection(nodes_from_i2) for node in node_intersection: cycle_nodes = tuple(sorted([i1, i2, node])) if cycle_nodes not in triplets: triplets.add(cycle_nodes) return list(triplets) def create_adjacency_list(i2Ri1_dict: Dict[Tuple[int, int], Rot3]) -> DefaultDict[int, Set[int]]: """Create an adjacency-list representation of a **rotation** graph G=(V,E) when provided its edges E. Note: this is specific to the rotation averaging use case, where some edges may be unestimated (i.e. their relative rotation is None), in which case they are not incorporated into the graph. In an adjacency list, the neighbors of each vertex may be listed efficiently, in time proportional to the degree of the vertex. In an adjacency matrix, this operation takes time proportional to the number of vertices in the graph, which may be significantly higher than the degree. Args: i2Ri1_dict: mapping from image pair indices to relative rotation. Returns: adj_list: adjacency list representation of the graph, mapping an image index to its neighbors """ adj_list = defaultdict(set) for (i1, i2), i2Ri1 in i2Ri1_dict.items(): if i2Ri1 is None: continue adj_list[i1].add(i2) adj_list[i2].add(i1) return adj_list def compute_cycle_error( i2Ri1_dict: Dict[Tuple[int, int], Rot3], cycle_nodes: Tuple[int, int, int], two_view_reports_dict: Dict[Tuple[int, int], TwoViewEstimationReport], verbose: bool = True, ) -> Tuple[float, Optional[float], Optional[float]]: """Compute the cycle error by the magnitude of the axis-angle rotation after composing 3 rotations. 
Note: a < b for every valid edge (a,b), by construction inside the image loader class. Args: i2Ri1_dict: mapping from image pair indices to relative rotation. cycle_nodes: 3-tuples of nodes that form a cycle. Nodes are provided in sorted order. two_view_reports_dict: mapping from image pair indices (i1,i2) to a report containing information about the verifier's output (and optionally measurement error w.r.t GT). Note: i1 < i2 always. verbose: whether to dump information about the error in each Euler angle to the logger. Returns: cycle_error: deviation from 3x3 identity matrix, in degrees. In other words, it is defined as the magnitude of the axis-angle rotation of the composed transformations. max_rot_error: maximum rotation error w.r.t. GT across triplet edges, in degrees. If ground truth is not known for a scene, None will be returned instead. max_trans_error: maximum translation error w.r.t. GT across triplet edges, in degrees. If ground truth is not known for a scene, None will be returned instead. """ cycle_nodes = list(cycle_nodes) cycle_nodes.sort() i0, i1, i2 = cycle_nodes i1Ri0 = i2Ri1_dict[(i0, i1)] i2Ri1 = i2Ri1_dict[(i1, i2)] i0Ri2 = i2Ri1_dict[(i0, i2)].inverse() # should compose to identity, with ideal measurements i0Ri0 = i0Ri2.compose(i2Ri1).compose(i1Ri0) I_3x3 = Rot3() cycle_error = comp_utils.compute_relative_rotation_angle(I_3x3, i0Ri0) # form 3 edges e_i, e_j, e_k between fully connected subgraph (nodes i0,i1,i2) edges = [(i0, i1), (i1, i2), (i0, i2)] rot_errors = [two_view_reports_dict[e].R_error_deg for e in edges] trans_errors = [two_view_reports_dict[e].U_error_deg for e in edges] gt_known = all([err is not None for err in rot_errors]) if gt_known: max_rot_error = float(np.max(rot_errors)) max_trans_error = float(np.max(trans_errors)) else: # ground truth unknown, so cannot estimate error w.r.t. GT max_rot_error = None max_trans_error = None if verbose: # for each rotation R: find a vector [x,y,z] s.t. R = Rot3.RzRyRx(x,y,z) # this is equivalent to scipy.spatial.transform's `.as_euler("xyz")` i1Ri0_euler = np.rad2deg(i1Ri0.xyz()) i2Ri1_euler = np.rad2deg(i2Ri1.xyz()) i0Ri2_euler = np.rad2deg(i0Ri2.xyz()) logger.info("\n") logger.info(f"{i0},{i1},{i2} --> Cycle error is: {cycle_error:.1f}") if gt_known: logger.info(f"Triplet: w/ max. R err {max_rot_error:.1f}, and w/ max. t err {max_trans_error:.1f}") logger.info( "X: (0->1) %.1f deg., (1->2) %.1f deg., (2->0) %.1f deg.", i1Ri0_euler[0], i2Ri1_euler[0], i0Ri2_euler[0] ) logger.info( "Y: (0->1) %.1f deg., (1->2) %.1f deg., (2->0) %.1f deg.", i1Ri0_euler[1], i2Ri1_euler[1], i0Ri2_euler[1] ) logger.info( "Z: (0->1) %.1f deg., (1->2) %.1f deg., (2->0) %.1f deg.", i1Ri0_euler[2], i2Ri1_euler[2], i0Ri2_euler[2] ) return cycle_error, max_rot_error, max_trans_error def filter_to_cycle_consistent_edges( i2Ri1_dict: Dict[Tuple[int, int], Rot3], i2Ui1_dict: Dict[Tuple[int, int], Unit3], v_corr_idxs_dict: Dict[Tuple[int, int], np.ndarray], two_view_reports_dict: Dict[Tuple[int, int], TwoViewEstimationReport], visualize: bool = True, ) -> Tuple[Dict[Tuple[int, int], Rot3], Dict[Tuple[int, int], Unit3], Dict[Tuple[int, int], np.ndarray], GtsfmMetricsGroup]: """Remove edges in a graph where the concatenated transformations along a 3-cycle do not compose to the identity. Note: will return only a subset of these dictionaries. Concatenating the transformations along a loop in the graph should return the identity function in an ideal, noise-free setting. 
Based off of: https://github.com/sweeneychris/TheiaSfM/blob/master/src/theia/sfm/filter_view_graph_cycles_by_rotation.cc See also: C. Zach, M. Klopschitz, and M. Pollefeys. Disambiguating visual relations using loop constraints. In CVPR, 2010 http://people.inf.ethz.ch/pomarc/pubs/ZachCVPR10.pdf Enqvist, Olof; Kahl, Fredrik; Olsson, Carl. Non-Sequential Structure from Motion. ICCVW, 2011. https://portal.research.lu.se/ws/files/6239297/2255278.pdf Args: i2Ri1_dict: mapping from image pair indices (i1,i2) to relative rotation i2Ri1. i2Ui1_dict: mapping from image pair indices (i1,i2) to relative translation direction i2Ui1. Should have same keys as i2Ri1_dict. v_corr_idxs_dict: dictionary, with key as image pair (i1,i2) and value as matching keypoint indices. two_view_reports_dict: mapping from image pair indices (i1,i2) to a report containing information about the verifier's output (and optionally measurement error w.r.t GT). Note: i1 < i2 always. visualize: boolean indicating whether to plot cycle error vs. pose error w.r.t. GT Returns: i2Ri1_dict_consistent: subset of i2Ri1_dict, i.e. only including edges that belonged to some triplet and had cycle error below the predefined threshold. i2Ui1_dict_consistent: subset of i2Ui1_dict, as above. v_corr_idxs_dict_consistent: subset of v_corr_idxs_dict above. metrics_group: Rotation cycle consistency metrics as a metrics group. """ cycle_errors = [] max_rot_errors = [] max_trans_errors = [] n_valid_edges = len([i2Ri1 for (i1, i2), i2Ri1 in i2Ri1_dict.items() if i2Ri1 is not None]) # (i1,i2) pairs cycle_consistent_keys = set() triplets = extract_triplets(i2Ri1_dict) for (i0, i1, i2) in triplets: cycle_error, max_rot_error, max_trans_error = compute_cycle_error( i2Ri1_dict, (i0, i1, i2), two_view_reports_dict ) if cycle_error < CYCLE_ERROR_THRESHOLD: # since i0 < i1 < i2 by construction, we preserve the property `a < b` for each edge (a,b) cycle_consistent_keys.add((i0, i1)) cycle_consistent_keys.add((i1, i2)) cycle_consistent_keys.add((i0, i2)) cycle_errors.append(cycle_error) max_rot_errors.append(max_rot_error) max_trans_errors.append(max_trans_error) if visualize: plt.scatter(cycle_errors, max_rot_errors) plt.xlabel("Cycle error") plt.ylabel("Max Rot3 error over cycle triplet") plt.savefig(os.path.join("plots", "cycle_error_vs_GT_rot_error.jpg"), dpi=200) plt.close("all") plt.scatter(cycle_errors, max_trans_errors) plt.xlabel("Cycle error") plt.ylabel("Max Unit3 error over cycle triplet") plt.savefig(os.path.join("plots", "cycle_error_vs_GT_trans_error.jpg"), dpi=200) plt.close("all") logger.info("cycle_consistent_keys: " + str(cycle_consistent_keys)) i2Ri1_dict_consistent, i2Ui1_dict_consistent, v_corr_idxs_dict_consistent = {}, {}, {} for (i1, i2) in cycle_consistent_keys: i2Ri1_dict_consistent[(i1, i2)] = i2Ri1_dict[(i1, i2)] i2Ui1_dict_consistent[(i1, i2)] = i2Ui1_dict[(i1, i2)] v_corr_idxs_dict_consistent[(i1, i2)] = v_corr_idxs_dict[(i1, i2)] logger.info("Found %d consistent rel. rotations from %d original edges.", len(i2Ri1_dict_consistent), n_valid_edges) metrics_group = _compute_metrics( inlier_i1_i2_pairs=list(cycle_consistent_keys), two_view_reports_dict=two_view_reports_dict ) return i2Ri1_dict_consistent, i2Ui1_dict_consistent, v_corr_idxs_dict_consistent, metrics_group def _compute_metrics( inlier_i1_i2_pairs: List[Tuple[int, int]], two_view_reports_dict: Dict[Tuple[int, int], TwoViewEstimationReport] ) -> GtsfmMetricsGroup: """Computes the rotation cycle consistency metrics as a metrics group. 
Args: inlier_i1_i2_pairs: List of inlier camera pair indices. two_view_reports_dict: mapping from image pair indices (i1,i2) to a report containing information about the verifier's output (and optionally measurement error w.r.t GT). Note: i1 < i2 always. Returns: Rotation cycle consistency metrics as a metrics group. Includes the following metrics: - Number of inlier,
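# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original module). It shows how
# `extract_triplets` above could be exercised on a toy rotation graph; the
# yaw angles are arbitrary values chosen only for the example.
#
# from gtsam import Rot3
#
# toy_i2Ri1_dict = {
#     (0, 1): Rot3.Yaw(0.1),   # edge 0<->1
#     (1, 2): Rot3.Yaw(0.2),   # edge 1<->2
#     (0, 2): Rot3.Yaw(0.3),   # edge 0<->2, closing the 3-cycle (0, 1, 2)
#     (2, 3): Rot3.Yaw(0.05),  # dangling edge, part of no triplet
# }
# assert extract_triplets(toy_i2Ri1_dict) == [(0, 1, 2)]
# ---------------------------------------------------------------------------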
hold. This completes the proof. \hfill\hspace{10pt}\fbox{} \vskip.2cm \noindent{\bf Proof of Proposition \ref{3.1}} We shall prove item $(1)$. The proof of item $(2)$ follows similarly using Lemma \ref{lem2ps} instead of Lemma \ref{lem1ps}. Applying Ekeland's variational principle, there exists a sequence $(u_n)\subset{\cal{N}}^+$ such that \begin{description} \item[(i)] $J(u_n) \,= \,\alpha^+ + o_n(1)$, \item [(ii)] $J(u_n)<J(w)+\frac{1}{n}||w-u||, \,\,\forall \,\, w\,\in{\cal{N}}^+.$ \end{description} In what follows we shall prove that $\|J'(u_n)\|\to 0$ as $n\rightarrow\infty$. From Proposition \ref{xi_n-bound}, there exists $C>0$, independent of $n\in\mathbb{N}$, such that $\|\xi_n(0)\|\leq C$. This estimate together with Proposition \ref{J'-lim-pont} gives $$ \left<J'(u_n), \displaystyle\frac{u}{||u||}\right>\leq \displaystyle\frac{C}{n},~u\in W_0^{1,\Phi}(\Omega)\setminus\{0\}. $$ This implies that $\|J'(u_n)\|\rightarrow 0$ as $n\rightarrow\infty$. This finishes the proof. \hfill\hspace{10pt}\fbox{} \section{The proof of our main theorems} \subsection{The proof of Theorem \ref{teorem1}} We are going to apply the following result, whose proof uses the concentration compactness principle of Lions in the Orlicz-Sobolev framework; see \cite{Willem} or \cite{CSGG,Fuk_1}. \begin{lem}\label{conv_grad_qtp} $(i)$ $\phi(|\nabla u_n|)\nabla u_n\rightharpoonup \phi(|\nabla u|)\nabla u$ in $\prod L_{\widetilde{\Phi}}(\Omega)$;\\ $(ii)$ $|u_n|^{\ell^*-2}u_n \rightharpoonup|u|^{\ell^*-2}u$ in $L^{\frac{\ell^*}{\ell^*-1}}(\Omega)$. \end{lem} Let $\|f\|_{(\ell^*)'} < \Lambda_1 = \min\left\{\lambda_1,\displaystyle\frac{\ell^*-m}{m-1}\right\}$ where $\lambda_1 > 0$ is given by $(f_1)$. From Lemma \ref{nehari+} we infer that $$\alpha^+:=\displaystyle\inf_{u\in{\cal{N}}^+}J(u)=\displaystyle\inf_{u\in{\cal{N}}}J(u) < 0.$$ We will find a function $u\in {\cal{N}}^+$ such that $$J(u)= \displaystyle\min_{u\in {\cal{N}}^+}J (u)=:\alpha^+ \,\, \mbox{and} \,\, J^{\prime}(u) \equiv 0.$$ First of all, using Proposition \ref{lem1ps}, there exists a minimizing sequence denoted by $(u_n)\subset W_0^{1,\Phi}(\Omega)$ such that \begin{equation}\label{cerami1} J(u_n)=\alpha^++o_n(1) \mbox{ and } J'(u_n)=o_n(1). \end{equation} Since the functional $J$ is coercive in ${\cal{N}}^+$, this implies that $(u_n)$ is bounded in ${\cal{N}}^+$. Therefore, there exists a function $u\in{W^{1,\Phi}_0(\Omega)}$ such that \begin{equation} \label{convergencia} u_n \rightharpoonup u \,\, \mbox{ in } \,\, W_0^{1,\Phi}(\Omega),~~ u_n \to u \,\,\mbox{a.e.}\,\, \mbox{ in } \Omega,~~ u_n \to u \,\, \mbox{ in } \,\, L^{\Phi}(\Omega). \end{equation} \noindent We shall prove that $u$ is a weak solution of the elliptic problem \eqref{eq1}. Notice that, by \eqref{cerami1}, $$o_n(1)=\left<J'(u_n),v\right>=\displaystyle\int_{\Omega} \phi(|\nabla u_n|)\nabla u_n\nabla v- fv -|u_n|^{\ell^*-2}u_nv$$ holds for any $v \in W_0^{1,\Phi}(\Omega)$. In view of \eqref{convergencia} and Lemma \ref{conv_grad_qtp} we get $$\displaystyle\int_{\Omega} \phi(|\nabla u|)\nabla u\nabla v-f v-|u|^{\ell^*-2}uv = 0$$ for any $v\in W_0^{1,\Phi}(\Omega)$, proving that $u$ is a weak solution to the elliptic problem \eqref{eq1}. In addition, the weak solution $u$ is not zero. 
In fact, using the fact that $u_n\in \mathcal{N}^{+},$ we obtain $$\begin{array}{rcl} \displaystyle\int_{\Omega} fu_n&=& \displaystyle\int_{\Omega}( \Phi(|\nabla u_n|)- \displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2) \displaystyle\frac{\ell^*}{\ell^*-1} -J(u_n) \displaystyle\frac{\ell^*}{\ell^*-1}\\[3ex] &\geq& \displaystyle\frac{\ell^*}{\ell^*-1} \left(1 - \dfrac{m}{\ell^*}\right)\displaystyle\int_{\Omega} \Phi(|\nabla u_n|) -J(u_n)\displaystyle\frac{\ell^*}{\ell^*-1} \\[3ex] &\geq& -J(u_n)\displaystyle\frac{\ell^*}{\ell^*-1}. \end{array} $$ From \eqref{cerami1} and \eqref{convergencia} we obtain \begin{eqnarray}\label{fu-pos} \displaystyle\int_{\Omega} fu\geq-\alpha^{+}\displaystyle\frac{\ell^*}{\ell^*-1} > 0. \end{eqnarray} Hence $u\not\equiv 0$. We shall prove that $J(u)=\alpha^+$ and $u_n\to u$ in $W_0^{1,\Phi}(\Omega)$. Since $u\in {\cal{N}}$ we also see that $$\alpha^+ \leq J(u)=\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2-\left(1-\displaystyle\frac{1}{\ell^*}\right)fu.$$ Notice that $$t\mapsto\Phi(t)-\displaystyle\frac{1}{\ell^*}\phi(t)t^2$$ is a convex function. In fact, by hypothesis $(\phi_3)$ and $m<\ell^*$, we infer that \begin{eqnarray} \left(\Phi(t)-\displaystyle\frac{1}{\ell^*}\phi(t)t^2\right)''&=&\left[ \left(1-\frac{1}{\ell^*}\right)t\phi(t)-\frac{1}{\ell^*}t(t\phi(t))'\right]'\nonumber\\ &=& (t\phi(t))' \left[\left(1-\frac{2}{\ell^*}\right)-\frac{1}{\ell^*}\frac{t(t\phi(t))''}{(t\phi(t))'}\right]\nonumber\\ &\geq& (t\phi(t))'\left(1-\frac{m}{\ell^*}\right)>0,\quad t > 0.\nonumber \end{eqnarray} In addition, the last assertion says that $$u\longmapsto \displaystyle\int_{\Omega} \Phi(|\nabla u |)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u |)|\nabla u |^2dx$$ is a weakly lower semicontinuous function. Therefore we obtain \begin{eqnarray} \alpha^+ \leq J(u) &\leq & \liminf \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2 -\left(1-\displaystyle\frac{1}{\ell^*}\right)fu_n\right)\nonumber\\ &=&\liminf J(u_n)= \alpha^+.\nonumber \end{eqnarray} This implies that $J(u)=\alpha^+.$ Additionally, using \eqref{convergencia}, we also have $$\begin{array}{rcl} J(u)&=&\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2-\left(1-\displaystyle\frac{1}{\ell^*}\right)fu\\[3ex] &=& \lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2\right)-\left(1-\displaystyle\frac{1}{\ell^*}\right)\displaystyle\int_{\Omega} fu. \end{array}$$ From the last identity, $$\lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2\right)=\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2.$$ In view of the Brezis-Lieb Lemma, choosing $v_n=u_n-u,$ we infer that \begin{eqnarray} \lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2+ \Phi(|\nabla v_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla v_n|)|\nabla v_n|^2\right)\nonumber\\ =\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2. 
\end{eqnarray} The previous assertion implies that $$0=\lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla v_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla v_n|)|\nabla v_n|^2\right)\geq \lim\left(1-\displaystyle\frac{m}{\ell^*}\right)\displaystyle\int_{\Omega} \Phi(|\nabla v_n|)\geq 0.$$ Therefore, we obtain that $\lim \int_{\Omega} \Phi(|\nabla v_n|)=0$, and hence we conclude that $u_{n} \rightarrow u$ in $W_0^{1,\Phi}(\Omega)$. We shall prove that $u\in {\cal{N}}^+$. Arguing by contradiction, suppose that $u\notin{\cal{N}}^+$. Using Lemma \ref{fib} there are unique $t_0^+, t_0^->0$ such that $t_0^+u\in{\cal{N}}^+$ and $t_0^-u\in{\cal{N}}^-$. In particular, we know that $t_0^+< t_0^-=1.$ Since $$\displaystyle\frac{d}{dt}J(t_0^+u)=0,$$ using \eqref{fu-pos} together with Lemma \ref{fib} we have that $$\displaystyle\frac{d}{dt}J(tu)>0,~t\in (t_0^+, t_0^-).$$ So, there exists $t^- \in ( t_0^+, t_0^- )$ such that $J(t_0^+u)<J(t^-u)$. \noindent In addition, $J(t_0^+u)<J(t^-u)\leq J(t_0^-u)=J(u)$, which contradicts the fact that $u$ is a minimizer in ${\cal{N}}^+$. Hence $u\in {\cal{N}}^+$. \noindent To conclude the proof of the theorem, it remains to show that $ u\geq 0 $ when $ f \geq0.$ For this we will argue as in \cite{tarantello}. Since $u \in {\cal{N}}^+$, by Lemma \ref{fib} there exists a $ t_0 \geq 1 $ such that $ t_0 | u | \in {\cal{N}}^+ $ and $t_0|u|\geq |u|.$ Therefore if $f\geq 0$, we get $$J(u)=\displaystyle\inf_{w\in{\cal{N}}^+}J(w)\leq J(t_0|u|)\leq J(|u|)\leq J(u).$$ \noindent So we can assume without loss of generality that $u\geq 0.$ \subsection{The proof of Theorem \ref{teorema2}} Let $||f||_{(\ell^*)'} < \Lambda_2 = \min\left\{\lambda_2,\displaystyle\frac{\ell^*-m}{m-1}\right\}$ where $\lambda_2 > 0$ is given by Lemma \ref{nehari-}. First of all, from Lemma \ref{nehari-}, there exists $\delta_1>0$ such that $J(v)\geq \delta_1$ for any $v\in {\cal{N}}^{-}.$ So that, $$\alpha^{-}:= \displaystyle \inf_{v \in {\cal{N}}^{-}}J(v)\geq \delta_1>0.$$ Now we shall consider a minimizing sequence $(v_n)\subset {\cal{N}}^{-}$ given in Proposition \ref{lem1ps}, i.e., $(v_n)\subset {\cal{N}}^{-}$ is a sequence satisfying \begin{equation} \label{e1} \displaystyle\lim_{n\to\infty}J(v_n)=\alpha^{-} \,\,\mbox{and} \,\, \displaystyle\lim_{n\to\infty} J^{\prime}(v_{n}) = 0. \end{equation} Since $J$ is coercive on ${\cal{N}}$, and hence on ${\cal{N}}^{-}$, using Lemma \ref{c1} we have that $(v_n)$ is a bounded sequence in $W^{1,\Phi}_{0}(\Omega).$ Up to a subsequence we assume that $v_n\rightharpoonup v$ in $W^{1,\Phi}_{0}(\Omega)$ holds for some $v \in W_0^{1,\Phi}(\Omega)$. Additionally, using the fact that $\ell^{*}>1$, we get $t\ll\Phi_{*}(t)$, and $W_{0}^{1,\Phi}(\Omega)\hookrightarrow L^1(\Omega)$ is a compact embedding. This fact implies that $v_n\to v$ in $L^{1}(\Omega).$ In this way, we obtain \begin{equation*}\label{lim1} \displaystyle\lim_{n\to\infty} \displaystyle\int_{\Omega} fv_n=\displaystyle\int_{\Omega} fv. \end{equation*} Now we claim that $v \in W_0^{1,\Phi}(\Omega)$ given just above is a weak solution to the elliptic problem \eqref{eq1}. In fact, using \eqref{e1}, we infer that $$\left<J'(v_n), w\right>=\displaystyle\int_{\Omega} \phi(|\nabla v_n|)\nabla v_n\nabla w-fw-|v_n|^{\ell^*-2}v_n w = o_{n}(1)$$ holds for any $w \in W_0^{1,\Phi}(\Omega)$. 
Now using Lemma \ref{conv_grad_qtp} we get $$\displaystyle\int_{\Omega} \phi(|\nabla v|)\nabla v\nabla w-fw -|v|^{\ell^*-2}v w = 0,~w \in W_0^{1,\Phi}(\Omega).$$ So that $v$ is a critical point for the functional $J$. Without loss of generality, replacing the sequence $(v_{n})$ with $(|v_{n}|)$, we can assume that $v \geq 0$ in $\Omega$. Next we claim that $v \neq 0$. The proof of this claim follows by contradiction, assuming that $v \equiv 0$. Recall that $J(t v_{n}) \leq J(v_{n})$ for any $t \geq 0$ and $n \in \mathbb{N}$. These facts together with Lemma \ref{lema_naru} imply that \begin{eqnarray} \left(1 - \dfrac{m}{\ell^{*}}\right)\int_{\Omega} \Phi(|\nabla t v_{n}|) &\leq& \left(t - 1\right)\left(1-\displaystyle\frac{1}{\ell^*}\right) \int_{\Omega} f v_n\nonumber \\ &+& \left(1 - \dfrac{\ell}{\ell^{*}}\right)\int_{\Omega} \Phi(|\nabla v_{n}|). \nonumber \end{eqnarray} Using the above estimate, Lemma \ref{lema_naru} and the fact that $(v_{n})$ is bounded, we obtain \begin{equation*} \min(t^{\ell}, t^{m}) \left(1 - \dfrac{m}{\ell^{*}}\right) \int_{\Omega} \Phi(|\nabla v_{n}|) \leq \left(t - 1\right)\left(1-\displaystyle\frac{1}{\ell^*}\right) \int_{\Omega} fv_n + C \end{equation*} holds for some $C > 0$. These inequalities give us \begin{equation*} \begin{array}{rcl}\min(t^{\ell}, t^{m}) \left(1 - \dfrac{m}{\ell^{*}}\right) \displaystyle\int_{\Omega} \Phi(|\nabla v_{n}|) &\leq& \left(t - 1\right)\left(1-\displaystyle\frac{1}{\ell^*}\right)S^{\frac{-1}{\ell}} ||f||_{(\ell^*)'}||v_n|| + C.\end{array} \end{equation*} It is not hard to verify that $\|v_{n}\| \geq c > 0$ for any $n \in \mathbb{N}$. Using Lemma \ref{lema_naru} we get that \begin{equation*} \min(t^{\ell}, t^{m}) \leq o_{n}(1) t + C \end{equation*} holds for any $t \geq 0$, where $C = C(\ell,m,\ell^{*}, \Omega, a,b) > 0$ and $o_{n}(1)$ denotes a quantity that goes to zero as $n \rightarrow \infty$. Here we used the fact that $v_{n} \rightarrow 0$ in $L^1(\Omega)$. This estimate fails for $t > 0$ large enough, which is a contradiction. Hence $v \neq 0$ as claimed, and $v$ is in $\mathcal{N} = \mathcal{N}^{+} \cup \mathcal{N}^{-}$. Next, we shall prove that $v_n\to v$ in $W_0^{1,\Phi}(\Omega)$. The proof follows arguing by contradiction. Assume that $\displaystyle \liminf_{n \rightarrow \infty} \int_{\Omega} \Phi(|\nabla v_{n} - \nabla v|) \geq \delta$ holds for some $\delta > 0$. Recall that $\Psi: \mathbb{R} \rightarrow \mathbb{R}$ given by $$t\mapsto \Psi(t) := \Phi(t)-\displaystyle\frac{1}{\ell^*}\phi(t)t^2$$ is a convex function on $[0,\infty)$. The Brezis-Lieb Lemma for convex functions says that \begin{equation*} \lim_{n \rightarrow \infty} \int_{\Omega} \Psi(|\nabla v_{n}|) - \Psi(|\nabla v_{n}- \nabla v|) = \int_{\Omega} \Psi(|\nabla v|). \end{equation*} In particular, the last estimate gives us \begin{equation*} \int_{\Omega} \Psi(|\nabla v|) < \liminf_{n \rightarrow \infty} \int_{\Omega} \Psi(|\nabla v_{n}|). \end{equation*} Since $v\in {\cal{N}}$ there exists a unique $t_{0}$ in $(0, \infty)$ such that $t_{0} v \in \mathcal{N}^{-}$. It is easy to verify that \begin{equation*} \int_{\Omega} \Psi(|\nabla t_{0}v|) < \liminf_{n \rightarrow \infty} \int_{\Omega} \Psi(|\nabla t_{0} v_{n}|). 
\end{equation*} This implies that \begin{eqnarray} \alpha^{-}&\leq& J(t_{0}v ) = \displaystyle\int_{\Omega} \Psi(|\nabla t_{0} v|)-\left(1-\displaystyle\frac{1}{\ell^*}\right)t_{0}\displaystyle\int_{\Omega} fv \nonumber \\ &<& \liminf_{n \rightarrow \infty} \left(\displaystyle\int_{\Omega} \Psi(|\nabla t_{0} v_{n}|)-\left({1}-\displaystyle\frac{1}{\ell^*}\right)t_{0}\displaystyle\int_{\Omega} fv_n\right) \nonumber \\ &=& \liminf_{n \rightarrow \infty} J(t_{0}v_{n}) \leq \liminf_{n \rightarrow \infty} J(v_{n}) = \alpha^{-}. \nonumber \end{eqnarray} This is a contradiction, proving that $v_n\to v$ in $W_0^{1,\Phi}(\Omega)$. Therefore $v$
of mutations over different human tissues. To this end, the GTEx gene expression matrices corresponding to 30 different tissues (see Additional Table 1), each containing a variable number of individuals, are used as controls; then, equivalent matrices of cases are generated by simulating the mutations, as previously described, on all the individuals. Then a case/control contrast with a Wilcoxon test is carried out for each tissue, which reveals whether some of the mutations in the list have a significant impact on one or several tissues, and the functional nature of such impact. ### The web interface The input of the program consists of normalized gene expression matrices in CSV format for the first two options, Differential signaling activity and Perturbation effect (Fig. 1A,B), and also, optionally, for the Variant interpreter option that explores the effect of mutations across tissues (Fig. 1C), where it serves as a user-defined tissue. Expression may have been measured with any sequencing or microarray technology. The gene expression matrix must include samples as columns and genes as rows. Gene names must be Entrez or HUGO IDs. For the Variant Interpreter option, a list of Entrez or HUGO gene names can be provided. ### Graphical representation of the results Different analysis types are carried out on the calculated circuit activities, including two-class comparisons and PCA, with the corresponding visualizations as heatmaps and PCA plots. Graphical representations of the circuits significantly up- or down-activated, including the individual node expression changes, are also provided (see Fig. 1 right). An interactive graphical output is also generated, in which the analyzed pathways are displayed with the possible ways in which the signal can be transmitted from receptor proteins to the corresponding effector proteins, highlighting those in which significant changes in signaling are found. In this visual representation, disruptions or activations in the signal transduction caused by gene perturbations (mutations or expression changes) can be easily visualized and understood in terms of their consequences on cell signaling and their ultimate effect on the corresponding functions triggered by the effectors. The client of the web application has been implemented in JavaScript using the HTML5 and SVG standards and uses CellMaps72 libraries for interactive visual representation of pathways. ### Mechanistic model of cell functionality triggered by signaling Hipathia (an acronym for High-throughput pathway interpretation and analysis) is a mechanistic model of signaling circuit activities previously described66. In brief, circuits that connect receptor proteins to specific effector proteins, which ultimately trigger cell activities, are defined using KEGG pathways60. Such circuits represent the sequence of activation (and inhibition) steps that mediates the transduction of the signal from the receptor to the effector protein. The method's assumptions are that, in order to transduce the signal, all the proteins that connect the receptor with the effector should be present, and that the higher the amount of these proteins, the stronger the signal. Measurements of mRNA levels are taken as proxies of the amount of the corresponding proteins (a quite common assumption73,74,75,76,77,78). 
Then, in order to quantify the intensity of signal transduction, the following steps are taken: normalized gene expression values, rescaled to a value in the range [0,1] as explained above, are used as proxies of the protein activities (activations or inhibitions in the transmission chain)73,75,79. Thus, the intensity of the signal transduced along a circuit that reaches the effector is estimated by starting with an initial signal intensity of the maximum value 1 at the receptor, which is propagated along the nodes of the signaling circuit according to the recursive formula: $${S}_{n}={\upsilon }_{n}\cdot (1-\prod _{{s}_{a}\in A}(1-{s}_{a}))\cdot \prod _{{s}_{i}\in I}(1-{s}_{i})$$ (1) where Sn is the signal intensity at the current node n, vn is its normalized gene expression value, A is the set of activation signals (sa) arriving at node n from the corresponding activation edges, and I is the set of inhibitory signals (si) arriving at the node from inhibition edges66. Like normalized gene expression values, circuit activity values are measurements with no absolute meaning by themselves but rather in a comparison. The application of this formula to all the circuits defined in all the pathways allows transforming a gene expression profile into the corresponding signaling circuit activity profile for any sample studied. If two conditions are compared, a Wilcoxon test can be used to assess differences in signaling circuit activity between both types of samples. ### Estimation of the impact of a mutation over cell functionality The effect of a mutation depends on the context, which includes the activity (gene expression status) and the integrity (mutational status) of the rest of the proteins involved in the pathways that trigger functionalities relevant to the disease analyzed (disease hallmarks). The effect of one or several simultaneous mutations in a specific tissue can easily be predicted using the mechanistic model68,69. The reference or control dataset is taken from the tissue of interest in GTEx80. Then, an affected dataset is simulated from the control dataset by drastically reducing the expression of the gene(s) with a pLoF mutation, multiplying their expression values by 0.01 in all the control samples. This simulates either an inactive gene or a non-functional gene product. Then, the circuit activities are recalculated in the affected dataset and compared to the reference dataset. Although not completely realistic, given that the model does not have information on the way in which the diseased tissue will transcriptionally react to the perturbation induced by the mutated genes, the results will certainly point with precision to those cell functions affected in the first instance. ### Data Sources In the current version of HiPathia, more than 8000 circuits have been identified and modeled within a total of more than 150 pathways downloaded from KEGG60, corresponding to three species (human 145, mouse 141 and rat 141). Gene expression data from 30 non-diseased tissue sites (see Additional Table 1) used in the third option were taken from the GTEx Portal80 (GTEx Analysis V7; dbGaP Accession phs000424.v7.p2). ### Data and methods for the examples Gene expression for bone marrow, which is not present in GTEx, was downloaded from the Gene Expression Omnibus (GEO) database (GSE16334)81. A gene expression microarray study comparing human islet gene expression from 54 non-diabetic and 9 type 2 diabetic donors82 was downloaded from GEO (GSE38642). 
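To make the signal propagation rule of equation (1) and the pLoF simulation described above concrete, the following minimal Python sketch (ours, not part of HiPathia; function and variable names are hypothetical, and the circuit is assumed to be a DAG traversed in topological order from receptor to effector) illustrates both steps:

import numpy as np

def circuit_activity(order, expr, activators, inhibitors):
    """Propagate a signal along one circuit following equation (1).

    order: circuit nodes in topological order, receptor first, effector last
    expr: dict node -> normalized expression value v_n in [0, 1]
    activators, inhibitors: dicts node -> list of upstream nodes feeding it
    """
    signal = {}
    for n in order:
        a = [signal[u] for u in activators.get(n, [])]
        i = [signal[u] for u in inhibitors.get(n, [])]
        act = 1.0 - np.prod([1.0 - s for s in a]) if a else 1.0  # receptor receives input 1
        inh = np.prod([1.0 - s for s in i]) if i else 1.0
        signal[n] = expr[n] * act * inh  # S_n = v_n * (1 - prod(1 - s_a)) * prod(1 - s_i)
    return signal[order[-1]]  # activity reaching the effector

def simulate_plof(expr, mutated_genes):
    """Simulate a pLoF mutation by multiplying expression values by 0.01."""
    return {g: v * 0.01 if g in mutated_genes else v for g, v in expr.items()}

# e.g. a linear chain receptor -> X -> effector gives 0.7 * 0.8 * 0.9 of the input:
# circuit_activity(["R", "X", "E"], {"R": 0.9, "X": 0.8, "E": 0.7},
#                  activators={"X": ["R"], "E": ["X"]}, inhibitors={})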
Data on natural variability of different populations, comprising over 88 million variants from 2,504 individuals across 26 populations, was obtained from the 1000 Genomes project portal3,83. In order to assess the impact of the natural variation found in genes of the healthy population, variants located within gene regions were annotated using CADD29. As proposed by the CADD developers, a gene was considered to carry a pLoF mutation when the CADD score is over the threshold of 2084. A gene is considered to be affected by pLoF in a recessive scenario when the two alternative alleles are present. ### Transcriptomics data processing Gene expression data from microarrays were summarized and normalized by quantiles with the Robust Multiarray Analysis method using the affy R package85. Probes were mapped to the corresponding genes using BiomaRt86. Gene expression values were estimated as the 90th percentile of probe expression values. Probes that mapped to more than one gene were discarded (except when they were the only probes mapping to the gene, in which case the median of the intensity values was taken). RNA-seq gene expression data were normalized with the Trimmed mean of M values (TMM) normalization method using the edgeR package87. Then, the Hipathia66 algorithm requires some extra steps for the calculation of the signal intensities. Thus, a logarithmic transformation (log(matrix + 1)) followed by a truncation at the 0.99 quantile (all values greater than the 0.99 quantile are truncated to this upper value, and all values lower than the 0.01 quantile are truncated to this lower value) was applied to the normalized gene expression values. Finally, in both cases, quantile normalization using the preprocessCore R package88 was carried out. ## Results We demonstrate the possibilities that mechanistic models offer
the examples in Section \ref{sect:examp}. From \cite{MejariCDC2019} it follows that this assumption is sufficient, as the identification algorithm in \cite{MejariCDC2019} returns a minimal asLPV-SSA in innovation form. We conjecture that the same will be true for most of the existing subspace identification algorithms \cite{Wingerden09,fewive,Verdult02,veve,CoxTothSubspace,RamosSubspace}. To deal only with minimal asLPV-SSA representations in innovation form, simple conditions for checking minimality and being in innovation form are needed. The latter are necessary in order to check whether the elements of a parametrization of asLPV-SSAs are minimal and in innovation form, or to construct such parametrizations. \paragraph*{\textbf{Related work}} As mentioned above, there is a rich literature on subspace identification methods for stochastic LPV-SSA representations \cite{RamosSubspace,FavoreelTAC,CoxTothSubspace,Wingerden09}. However, the cited papers do not deal with the problem of characterizing minimal stochastic LPV state-space representations in innovation form. In \cite{CoxLPVSS,CoxTothSubspace} the existence of an LPV state-space representation in innovation form was studied, but due to the specific assumptions (deterministic scheduling) and the definition of the innovation process, the resulting LPV state-space representation in innovation form had dynamic dependence on the scheduling parameters. Moreover, \cite{CoxLPVSS,CoxTothSubspace} do not address the issue of minimality of the stochastic part of LPV state-space representations. This paper uses realization theory of stochastic generalized bilinear systems (\emph{\textbf{GBS}\ } for short) of \cite{PetreczkyBilinear}. In particular, asLPV-SSAs correspond to \textbf{GBS}{s}. The existence and uniqueness of minimal asLPV-SSA{s} in innovation form follows from the results of \cite{PetreczkyBilinear}. The main novelty of the present paper with respect to \cite{PetreczkyBilinear} is the new algebraic characterization of minimal asLPV-SSA{s} in innovation form, and that the results on existence and uniqueness of minimal \textbf{GBS}{s} are spelled out explicitly for LPV-SSAs. The paper \cite{MejariCDC2019} used the correspondence between \textbf{GBS}{s} and asLPV-SSA{s} to state existence and uniqueness of minimal asLPV-SSA{s} in innovation form. However, \cite{MejariCDC2019} did not provide an algebraic characterization of minimality or innovation form. Moreover, it considered only scheduling signals which were zero-mean white noises. In contrast, in this paper more general scheduling signals are considered. The present paper is complementary to \cite{MejariCDC2019}. This paper explains when the assumption that the data generating system is a minimal asLPV-SSA in innovation form can hold, while \cite{MejariCDC2019} presents an identification algorithm which is statistically consistent under the latter assumption. \textbf{Outline of the paper} In Section \ref{sect:prelim} we introduce the notation used and we recall from \cite{PetreczkyBilinear} some technical assumptions which are necessary to define the stationary LPV-SSA representation. In Section \ref{sect:min} some principal results on minimal asLPV-SSA{s} in innovation form are reviewed. In Section \ref{sect:main} we present the main results of the paper, namely, algebraic conditions for an asLPV-SSA to be minimal and in innovation form. Finally, in Section \ref{sect:examp} numerical examples are developed to illustrate the contributions. 
\section{Main results: algebraic conditions for an asLPV-SSA to be minimal in innovation form} \label{sect:main} Motivated by the challenges explained in Remark \ref{rem:motiv1}, in this section we present sufficient conditions for an asLPV-SSA to be minimal and in innovation form. These conditions depend only on the matrices of the asLPV-SSA in question and do not require any information on the noise processes. The first result concerns an algebraic characterization of asLPV-SSAs in innovation form. This characterization does not require any knowledge of the noise process, only knowledge of the system matrices. In order to streamline the discussion, we introduce the following definition. \begin{Definition}[Stably invertible w.r.t. $\p$] Assume that $\mathcal{S}$ is an asLPV-SSA of the form \eqref{eq:aslpv} with $F=I_{n_y}$. We will call $\mathcal{S}$ \emph{stably invertible with respect to $\p$}, or \emph{stably invertible} if $\p$ is clear from the context, if the matrix \begin{equation} \label{inv:gbs:lemma:eq1} \sum_{i=1}^{\pdim} p_i (A_i-K_iC) \otimes (A_i-K_iC) \end{equation} is stable (all its eigenvalues are inside the complex unit disk). \end{Definition} Note that a system can be stably invertible w.r.t. one scheduling process and not stably invertible w.r.t. another one. We can now state the result relating stable invertibility to asLPV-SSAs in innovation form. \begin{Theorem}[Innovation form condition] \label{inv:gbs:lemma} Assume that $\textbf{y}$ is SII and $(\textbf{y},\p)$ is full rank. If an asLPV-SSA realization of $(\textbf{y},\p)$ is stably invertible, then it is in innovation form. \end{Theorem} The proof of Theorem \ref{inv:gbs:lemma} can be found in Appendix \ref{App:proof}. Stably invertible asLPV-SSAs can be viewed as optimal predictors. Indeed, let $\mathcal{S}$ be the asLPV-SSA of the form \eqref{eq:aslpv} which is in innovation form, and let $\textbf{x}$ be the state process of $\mathcal{S}$. It then follows that \begin{equation} \label{gen:filt:bil:def:pred} \begin{split} & \textbf{x}(t+1) = \sum_{i=1}^{\pdim} \left((A_i-K_iC)\textbf{x}(t)+K_i\textbf{y}(t)\right)\p_i(t), \\ & \hat{\textbf{y}}(t) = C\textbf{x}(t) \end{split} \end{equation} where $\hat{\textbf{y}}(t)=E_l[\textbf{y}(t) \mid \{\mathbf{z}_w^{\textbf{y}}(t)\}_{w \in \Sigma^{+}}]$, i.e., $\hat{\textbf{y}}$ is the best linear prediction of $\textbf{y}(t)$ based on the predictors $\{\mathbf{z}_w^{\textbf{y}}(t)\}_{w \in \Sigma^{+}}$. Intuitively, \eqref{gen:filt:bil:def:pred} could be viewed as a filter, i.e., a dynamical system driven by past values of $\textbf{y}$ and generating the best possible linear prediction $\hat{\textbf{y}}(t)$ of $\textbf{y}(t)$ based on $\{\mathbf{z}_w^{\textbf{y}}(t)\}_{w \in \Sigma^{+}}$. However, the solution of \eqref{gen:filt:bil:def:pred} is defined on the whole time axis $\mathbb{Z}$ and hence cannot be computed exactly. For stably invertible asLPV-SSAs we can approximate $\hat{\textbf{y}}(t)$ as follows. 
\begin{Lemma} \label{gbs:finite_filt:lemma} With the assumptions of Theorem \ref{inv:gbs:lemma}, if $\mathcal{S}$ of the form \eqref{eq:aslpv} is a stably invertible realization of $(\textbf{y},\p)$, and we consider the following dynamical system: \begin{equation} \label{gbs:finite_filt:eq} \begin{split} & \bar{\textbf{x}}(t+1)= \sum_{i=1}^{\pdim} \left((A_i-K_iC)\bar{\textbf{x}}(t)+K_i\textbf{y}(t)\right)\p_i(t), \\ & \bar{\textbf{y}}(t) = C\bar{\textbf{x}}(t), ~ \bar{\textbf{x}}(0)=0 \end{split} \end{equation} then \( \underset{t \rightarrow \infty}{\lim} \left(\bar{\textbf{x}}(t) - \textbf{x}(t)\right)=0 \), and \( \underset{t \rightarrow \infty}{\lim} \left(\bar{\textbf{y}}(t) - \hat{\textbf{y}}(t)\right)=0 \), where the limits are understood in the mean square sense. \end{Lemma} The proof of Lemma \ref{gbs:finite_filt:lemma} is found in Appendix \ref{App:proof}. That is, the output $\bar{\textbf{y}}(t)$ of the recursive filter \eqref{gbs:finite_filt:eq} is an approximation of the optimal prediction $\hat{\textbf{y}}(t)$ of $\textbf{y}(t)$ for large enough $t$. In this sense, stably invertible asLPV-SSAs not only result in asLPV-SSAs in innovation form, but they represent a class of asLPV-SSAs for which recursive filters of the form \eqref{gbs:finite_filt:eq} exist. Next, we present algebraic conditions for minimality of an asLPV-SSA in innovation form. \begin{Theorem}[Minimality condition in innovation form] \label{min:forw:gbs:lemma} Assume that $\mathcal{S}$ is an asLPV-SSA of the form \eqref{eq:aslpv} and that $\mathcal{S}$ is a realization of $(\textbf{y},\p)$ in innovation form. Assume that $(\textbf{y},\p)$ is full rank and $\textbf{y}$ is SII. Then $\mathcal{S}$ is a minimal realization of $(\textbf{y},\p)$ if and only if the dLPV-SSA $\mathcal{D}_{\mathcal{S}}=(\{A_i,K_i\}_{i=0}^{\pdim},C,I_{n_y})$ is minimal. \end{Theorem} The proof of Theorem \ref{min:forw:gbs:lemma} can be found in Appendix \ref{App:proof}. Theorem \ref{min:forw:gbs:lemma}, in combination with Theorem \ref{inv:gbs:lemma}, leads to the following corollary. \begin{Corollary}[Minimality and innovation form] \label{min:forw:gbs:lemma:col} With the assumptions of Theorem \ref{min:forw:gbs:lemma}, if $\mathcal{D}_{\mathcal{S}}$ is minimal and $\mathcal{S}$ is stably invertible, then $\mathcal{S}$ is a minimal asLPV-SSA realization of $(\textbf{y},\p)$ in innovation form. \end{Corollary} \begin{Remark}[Checking minimality and innovation form] \label{rem:check1} We recall that $\mathcal{D}_{\mathcal{S}}$ is minimal if and only if it satisfies the rank conditions for the extended $n$-step reachability and observability matrices \cite[Theorem 2]{PetreczkyLPVSS}, which can easily be computed from the matrices of $\mathcal{S}$. Checking that $\mathcal{S}$ is stably invertible boils down to checking the eigenvalues of the matrix \eqref{inv:gbs:lemma:eq1} (see the numerical sketch below). That is, Corollary \ref{min:forw:gbs:lemma:col} provides an effective procedure for verifying that an asLPV-SSA is minimal and in innovation form. Note that in contrast to the rank condition of Theorem \ref{theo:rank_cond}, which required computing the limit of \eqref{gi:comp:eq1}, the procedure above uses only the matrices of the system. \end{Remark} \begin{Remark}[Parametrizations of asLPV-SSAs] Below we will sketch some ideas for applying the above results to parametrizations of asLPV-SSAs. A detailed study of these issues remains a topic for future research. 
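As an illustration of the check described in Remark \ref{rem:check1}, the following minimal numerical sketch (ours, not part of the paper; the function name is hypothetical) verifies stable invertibility by forming the matrix \eqref{inv:gbs:lemma:eq1} and inspecting its spectral radius:

import numpy as np

def is_stably_invertible(A_list, K_list, C, p):
    # Form sum_i p_i * (A_i - K_i C) kron (A_i - K_i C) and check that its
    # spectral radius is strictly less than one, i.e. all eigenvalues lie
    # inside the complex unit disk.
    n = A_list[0].shape[0]
    M = np.zeros((n * n, n * n))
    for p_i, A_i, K_i in zip(p, A_list, K_list):
        D = A_i - K_i @ C
        M += p_i * np.kron(D, D)
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0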
For all the elements of a parametrization of asLPV-SSAs to be minimal and in innovation form, by Corollary \ref{min:forw:gbs:lemma:col} it is necessary that \textbf{(A)} all elements of the parametrization, when viewed as dLPV-SSAs, are minimal, and that \textbf{(B)} they are stably invertible and satisfy condition \textbf{(3)} of Definition \ref{defn:LPV_SSA_wo_u}. In order to
of a symmetric matrix and a skew-symmetric matrix, and this shows that M n×n (F) is the direct sum of these two subspaces. Next, the matrices E ij − E ji (see the proof of Theorem 9.1.2), where 1 ≤ i < j ≤ n, are skew-symmetric, and as the diagonal elements of a skew-symmetric matrix are zero, it is clear that these matrices span the subspace of skew-symmetric matrices. As they are also linearly independent, we see that the subspace of skew-symmetric matrices has dimension (n² − n)/2. By Theorems 7.4.1 and 9.1.2, the subspace of symmetric matrices has dimension n² − (n² − n)/2. We give two more examples. Example 9.1.5 Consider the space Z m×n of real m × n matrices X with the property that the sum of the elements over each row, and over each column, is zero. It should be clear that Z m×n is a real vector space, and we claim that dim(Z m×n) = (m − 1)(n − 1). We start the proof in the case when m = n = 3, but only to give the general idea. We start with an 'empty' 3 × 3 matrix and then make an arbitrary choice of the entries, say a, b, c and d, that are not in the last row or column; this gives us a matrix

a b ∗
c d ∗
∗ ∗ ∗

where ∗ represents an as yet undetermined entry in the matrix. We now impose the condition that the first two columns must sum to zero, and after this we impose the condition that all rows sum to zero; thus the 'matrix' becomes

a      b      −a − b
c      d      −c − d
−a − c −b − d a + b + c + d

Notice that the last column automatically sums to zero (because the sum over all elements is zero, as is seen by summing over rows, and the first two columns sum to zero). Exactly the same argument can be used for any 'empty' m × n matrix. The choice of elements not in the last row or last column is actually a choice of an arbitrary matrix in M (m−1)×(n−1) (F), so this construction actually creates a surjective map θ from M (m−1)×(n−1) (F) onto Z m×n . It should be clear that this map is linear, and that the only element in its kernel is the zero matrix. Thus dim(Z m×n) = dim ker(θ) + dim M (m−1)×(n−1) (F) = (m − 1)(n − 1) as required. Example 9.1.6 This example contains a discussion of magic squares (this is a 'popular' item, but it is not important). For any n × n matrix X, the trace tr(X) of X is the sum x11 + · · · + xnn of the diagonal elements, and the anti-trace tr∗(X) of X is the sum over the 'other' diagonal, namely x1n + · · · + xn1 . A real n × n matrix A is a magic square if the sum over each row, the sum over each column, and the sum over each of the two diagonals (that is, tr(A) and tr∗(A)) all give the same value, say µ(A). We note that µ(A) = n⁻¹ Σi,j aij . It is easy to see that the space S n×n of n × n magic squares is a real vector space so, naturally, we ask what is its dimension? It is easy to see that dim(S n×n) = 1 when n is 1 or 2, and we shall now show that for n ≥ 3, dim(S n×n) = n(n − 2). Let S0 n×n be the subspace of matrices A for which µ(A) = 0. This subspace is the kernel of the linear map A → µ(A) from S n×n to R, and as this map is surjective (consider the matrix A with all entries x/n) we see that dim(S n×n) = dim(S0 n×n) + 1. Next, the space Z n×n of n × n matrices all of whose rows and columns sum to zero has dimension (n − 1)² (see Example 9.1.5). Now define φ : Z n×n → R² by φ(X) = (tr(X), tr∗(X)). Then φ is a linear map, and ker(φ) = S0 n×n. 
It is not difficult to show that φ is surjective (we shall prove this shortly), and with this we see that (n − 1)² = dim(Z n×n) = dim(S0 n×n) + 2, so that dim(S n×n) = (n − 1)² − 1 = n(n − 2). It remains to show that φ is surjective, and it is sufficient to construct matrices P and Q in Z n×n such that φ(P) = (a, 0) and φ(Q) = (0, b) for all (or just some non-zero) a and b. If n = 3, we let

P = (a/3) ·  1  1 −2      Q = (b/3) · −2  1  1
            −2  1  1                   1  1 −2
             1 −2  1                   1 −2  1

and then φ(P) = (a, 0) and φ(Q) = (0, b). If n ≥ 4 we can take p11 = p22 = a/2, p12 = p21 = −a/2 and all other pij = 0; then tr(P) = a and tr∗(P) = 0, so that φ(P) = (a, 0). Similarly, we choose q1,n−1 = q2n = −b/2 and q1n = q2,n−1 = b/2, so that φ(Q) = (0, b). Exercise 9.1 1. A matrix (aij) is a diagonal matrix if aij = 0 whenever i ≠ j. Show that the space D of real n × n diagonal matrices is a vector space of dimension n. 2. A matrix (aij) is an upper-triangular matrix if aij = 0 whenever i > j. Show that the space U of real n × n upper-triangular matrices is a vector space. What is its dimension? 3. Define what it means to say that a matrix (aij) is a lower-triangular matrix (see Exercise 2). Let L be the vector space of real lower-triangular matrices, and let D and U be as in Exercises 1 and 2. Show, without calculating any of the dimensions, that dim(U) + dim(L) = dim(D) + n². Now verify this by calculating each of the dimensions. 4. Show that the space of n × n matrices with trace zero is a vector space of dimension n² − 1. 5. Show (in Example 9.1.6) that dim(S 1×1) = dim(S 2×2) = 1. 6. Show that if X is a 3 × 3 magic square, then x22 = µ(X)/3. Deduce that if µ(X) = 0 then X is of the form

X =  a     −a − b  b
     b − a  0      a − b
    −b      a + b −a

Let A, B, C be the matrices

A =  1 −1  0    B =  0  1 −1    C = 1 1 1
    −1  0  1        −1  0  1        1 1 1
     0  1 −1         1 −1  0        1 1 1

respectively. Show that (a) {A, B, C} is a basis of S 3×3; (b) {A, B} is a basis of S0 3×3; (c) {A, C} is a basis of the space of symmetric 3 × 3 magic squares; (d) {B} is a basis of the space of skew-symmetric 3 × 3 magic squares. 9.2 A matrix as a linear transformation A
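As a quick numerical sanity check of the claims in Exercise 6 (our illustration, using A, B, C as reconstructed above, which are determined up to sign and scaling by the symmetry constraints), one can verify the magic property and linear independence with a few lines of NumPy:

import numpy as np

A = np.array([[1, -1, 0], [-1, 0, 1], [0, 1, -1]])
B = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])
C = np.ones((3, 3))

def is_magic(X):
    # all row sums, column sums, trace and anti-trace must coincide
    sums = np.concatenate([X.sum(axis=0), X.sum(axis=1),
                           [np.trace(X), np.trace(np.fliplr(X))]])
    return np.allclose(sums, sums[0])

assert all(is_magic(M) for M in (A, B, C))
# A, B, C are linearly independent, so they form a basis of the
# 3-dimensional space of 3x3 magic squares (dim = n(n - 2) = 3):
assert np.linalg.matrix_rank(np.stack([A.ravel(), B.ravel(), C.ravel()])) == 3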
import os
import numpy as np

import units
import damping
import induce
import config
import analyze
import erepel3
import empole3


def epolar3(ATOMID):
    # dispatch to the pairwise (epolar3a) or matrix-form (epolar3m) routine
    # depending on the induction mode selected in config.param_dict
    if ('epolar3a' in config.param_dict.keys()
            or 'one-center-induce-mutual' in config.param_dict.keys()
            or 'one-center-induce-direct' in config.param_dict.keys()
            or 'two-center-induce-mutual' in config.param_dict.keys()
            or 'two-center-induce-direct' in config.param_dict.keys()):
        ep, einter, nep = epolar3a(ATOMID)
    elif 'induce-trick' in config.param_dict.keys():
        ep, einter, nep = epolar3a(ATOMID)
    else:
        ep, einter, nep = epolar3m(ATOMID)
    return ep, einter, nep


def epolar3minduce(uind, ATOMID):
    n = len(ATOMID)
    uind = uind.reshape(3, n)
    uTu_ep, uTu_einter, uTu_nep = uTu(ATOMID, uind)
    Eu_ep, Eu_nep = Eu(ATOMID, uind)
    if 'exchange-2-1' in config.param_dict.keys():
        exchind, exchind_inter = erepel3.exchange21(ATOMID, uind)
    elif 'exchange-2-2' in config.param_dict.keys():
        exchind = erepel3.exchange22(ATOMID, uind)
    else:
        exchind = 0
        exchind_inter = 0
    ep = 1/2*uTu_ep - Eu_ep + exchind
    return ep


def epolar3m(ATOMID, uind=[]):
    '''epolar3m calculates energy using the equation U_pol = 1/2*uTu - Eu'''
    n = len(ATOMID)
    uTu_ep, uTu_einter, uTu_nep = uTu(ATOMID, uind)
    Eu_ep, Eu_nep = Eu(ATOMID, uind)
    ep = 1/2*uTu_ep - Eu_ep
    einter = 1/2*uTu_einter - Eu_ep
    nep = uTu_nep + Eu_nep
    ##############
    # print('Eu', Eu_ep)
    # e1 = empole3.empole3c(ATOMID)
    # e2, junk1, junk2 = empole3.empole3a(ATOMID)
    # print('Eu_check', e1 - e2 - 1/2*uTu_check(ATOMID))
    ##############
    return ep, einter, nep


def epolar3oinduce(uind, ATOMID):
    '''Minimize 1/2u*alpha^-1*u - 1/2u*gamma*u - Eu using (M+u)T(M+u) trick.'''
    n = len(ATOMID)
    uind = uind.reshape(3, n)
    # zero out the total polarization energy and partitioning
    ep = 0.
    # set conversion factor, cutoff and switching coefficients
    f = units.electric / units.dielec
    # calculate u*alpha^-1*u
    for i in range(n):
        uix = uind[0, i]
        uiy = uind[1, i]
        uiz = uind[2, i]
        poli = ATOMID[i].pol
        uiu = uix**2 + uiy**2 + uiz**2
        e = f * uiu / poli
        if e != 0.:
            ep = ep + e
    ep = 1/2 * ep
    e1 = empole3.empole3b(ATOMID, uind)
    e2, junk1, junk2 = empole3.empole3a(ATOMID)
    ep = ep + (e1 - e2)
    return ep


def epolar3ninduce(uind, ATOMID):
    # minimize Tu - E = 0, where T = alpha^-1 + gamma, via least squares;
    # returns the squared norm of the residual Tu - E
    # (assumes one of the '*-induce-mutual' keys is set in config.param_dict)
    n = len(ATOMID)
    uind = uind.reshape(3, n)
    resi = np.zeros_like(uind)
    # calculate alpha^-1 * u
    for i in range(n):
        uix = uind[0, i]
        uiy = uind[1, i]
        uiz = uind[2, i]
        poli = ATOMID[i].pol
        resi[0, i] = uix / poli
        resi[1, i] = uiy / poli
        resi[2, i] = uiz / poli
    # calculate gamma * u
    for i in range(n-1):
        uix = uind[0, i]
        uiy = uind[1, i]
        uiz = uind[2, i]
        xi = ATOMID[i].coordinates[0]
        yi = ATOMID[i].coordinates[1]
        zi = ATOMID[i].coordinates[2]
        alphai = ATOMID[i].palpha
        # set exclusion coefficients for connected atoms
        wscale = ATOMID[i].returnscale('w', ATOMID)
        # evaluate all sites within the cutoff distance
        for k in range(i+1, n):
            xr = ATOMID[k].coordinates[0] - xi
            yr = ATOMID[k].coordinates[1] - yi
            zr = ATOMID[k].coordinates[2] - zi
            r2 = xr**2 + yr**2 + zr**2
            r = np.sqrt(r2)
            ukx = uind[0, k]
            uky = uind[1, k]
            ukz = uind[2, k]
            uir = uix*xr + uiy*yr + uiz*zr
            ukr = ukx*xr + uky*yr + ukz*zr
            rr3 = 1 / (r*r2)
            rr5 = 3. * rr3 / r2
            alphak = ATOMID[k].palpha
            if 'one-center-induce-mutual' in config.param_dict.keys():
                dmpi, dmpk = damping.dampdir(r, alphai, alphak)
                rr3i = dmpi[2]*rr3
                rr5i = dmpi[4]*rr5
                rr3k = dmpk[2]*rr3
                rr5k = dmpk[4]*rr5
                fid = np.empty(3)
                fkd = np.empty(3)
                fid[0] = -xr*(-rr5k*ukr) - rr3k*ukx
                fid[1] = -yr*(-rr5k*ukr) - rr3k*uky
                fid[2] = -zr*(-rr5k*ukr) - rr3k*ukz
                fkd[0] = xr*(rr5i*uir) - rr3i*uix
                fkd[1] = yr*(rr5i*uir) - rr3i*uiy
                fkd[2] = zr*(rr5i*uir) - rr3i*uiz
            elif 'two-center-induce-mutual' in config.param_dict.keys():
                dmpik = damping.dampmut(r, alphai, alphak)
                rr3ik = dmpik[2]*rr3
                rr5ik = dmpik[4]*rr5
                fid = np.empty(3)
                fkd = np.empty(3)
                fid[0] = -xr*(-rr5ik*ukr) - rr3ik*ukx
                fid[1] = -yr*(-rr5ik*ukr) - rr3ik*uky
                fid[2] = -zr*(-rr5ik*ukr) - rr3ik*ukz
                fkd[0] = xr*(rr5ik*uir) - rr3ik*uix
                fkd[1] = yr*(rr5ik*uir) - rr3ik*uiy
                fkd[2] = zr*(rr5ik*uir) - rr3ik*uiz
            for j in range(3):
                resi[j, i] = resi[j, i] - fid[j]*wscale[k]
                resi[j, k] = resi[j, k] - fkd[j]*wscale[k]
    # Tu - E
    # get permanent electric field
    if 'one-center-induce-mutual' in config.param_dict.keys():
        field = config.fieldp
    elif 'two-center-induce-mutual' in config.param_dict.keys():
        field = config.mfieldp
    for i in range(n):
        resi[0, i] = resi[0, i] - field[0, i]
        resi[1, i] = resi[1, i] - field[1, i]
        resi[2, i] = resi[2, i] - field[2, i]
    resi = resi.reshape(3*n)
    return np.dot(resi, resi)


def uTu(ATOMID, uind=[]):
    '''1/2*uTu is the energy it takes to construct the induced dipoles.
    Since uTu = u(alpha^-1 - gamma)u = u*alpha^-1*u - u*gamma*u, we separate
    out the two loops. The first term u*alpha^-1*u is a single loop since
    alpha^-1 is diagonal. The second term u*gamma*u is a double loop i < k
    since gamma has no values in its diagonal.'''
    n = len(ATOMID)
    # zero out the total polarization energy and partitioning
    nep = 0
    ep = 0.
    einter = 0
    aep = np.zeros(n)
    # set conversion factor, cutoff and switching coefficients
    f = units.electric / units.dielec
    # calculate u*alpha^-1*u
    for i in range(n):
        if len(uind) != 0:
            uix = uind[0, i]
            uiy = uind[1, i]
            uiz = uind[2, i]
        else:
            uix = ATOMID[i].uind[0]
            uiy = ATOMID[i].uind[1]
            uiz = ATOMID[i].uind[2]
        poli = ATOMID[i].pol
        uiu = uix**2 + uiy**2 + uiz**2
        e = f * uiu / poli
        if e != 0.:
            ep = ep + e
            nep = nep + 1
            aep[i] = e
            einter = einter + e
    # calculate u*gamma*u
    for i in range(n-1):
        if len(uind) != 0:
            uix = uind[0, i]
            uiy = uind[1, i]
            uiz = uind[2, i]
        else:
            uix = ATOMID[i].uind[0]
            uiy = ATOMID[i].uind[1]
            uiz = ATOMID[i].uind[2]
        xi = ATOMID[i].coordinates[0]
        yi = ATOMID[i].coordinates[1]
        zi = ATOMID[i].coordinates[2]
        alphai = ATOMID[i].palpha
        # set exclusion coefficients for connected atoms
        wscale = ATOMID[i].returnscale('w', ATOMID)
        # evaluate all sites within the cutoff distance
        for k in range(i+1, n):
            xr = ATOMID[k].coordinates[0] - xi
            yr = ATOMID[k].coordinates[1] - yi
            zr = ATOMID[k].coordinates[2] - zi
            r2 = xr**2 + yr**2 + zr**2
            r = np.sqrt(r2)
            if len(uind) != 0:
                ukx = uind[0, k]
                uky = uind[1, k]
                ukz = uind[2, k]
            else:
                ukx = ATOMID[k].uind[0]
                uky = ATOMID[k].uind[1]
                ukz = ATOMID[k].uind[2]
            uik = uix*ukx + uiy*uky + uiz*ukz
            uir = uix*xr + uiy*yr + uiz*zr
            ukr = ukx*xr + uky*yr + ukz*zr
            rr1 = f * wscale[k] / r
            rr3 = rr1 / r2
            rr5 = 3. * rr3 / r2
            alphak = ATOMID[k].palpha
            term2ik = uik
            term3ik = -uir*ukr
            dmpik = damping.dampmut(r, alphai, alphak)
            rr3ik = dmpik[2]*rr3
            rr5ik = dmpik[4]*rr5
            e = term2ik*rr3ik + term3ik*rr5ik
            if e != 0.:
                ep = ep + 2*e
                nep = nep + 1
                aep[i] = aep[i] + 2*0.5*e
                aep[k] = aep[k] + 2*0.5*e
                if ATOMID[k].index not in ATOMID[i].connectivity:
                    einter = einter + 2*e
    return ep, einter, nep


def Eu(ATOMID, uind=[]):
    n = len(ATOMID)
    # zero out the total polarization energy and partitioning
    nep = 0
    ep = 0.
    einter = 0
    aep = np.zeros(n)
    # set conversion factor, cutoff and switching coefficients
    f = units.electric / units.dielec
    if 'two-center' in config.param_dict.keys():
        fieldp = config.mfieldp
    else:
        fieldp = config.fieldp
    # calculate Eu
    for i in range(n):
        if len(uind) != 0:
            uix = uind[0, i]
            uiy = uind[1, i]
            uiz = uind[2, i]
        else:
            uix = ATOMID[i].uind[0]
            uiy = ATOMID[i].uind[1]
            uiz = ATOMID[i].uind[2]
        e = f*uix*fieldp[0, i] + f*uiy*fieldp[1, i] + f*uiz*fieldp[2, i]
        ep = ep + e
        nep = nep + 1
        aep[i] = aep[i] + e
    return ep, nep


def epolar3a(ATOMID):
    '''epolar3a calculates pairwise electrostatic energies between atoms i and k.'''
    n = len(ATOMID)
    # zero out the total polarization energy and partitioning
    nep = 0
    ep = 0.
    einter = 0.
    aep = np.zeros(n)
    # set conversion factor, cutoff and switching coefficients
    f = 0.5 * units.electric / units.dielec
    for i in range(n-1):
        xi = ATOMID[i].coordinates[0]
        yi = ATOMID[i].coordinates[1]
        zi = ATOMID[i].coordinates[2]
        ci = ATOMID[i].rpole[0]
        dix = ATOMID[i].rpole[1]
        diy = ATOMID[i].rpole[2]
        diz = ATOMID[i].rpole[3]
        qixx = ATOMID[i].rpole[4]
        qixy = ATOMID[i].rpole[5]
        qixz = ATOMID[i].rpole[6]
        qiyy = ATOMID[i].rpole[8]
        qiyz = ATOMID[i].rpole[9]
        qizz = ATOMID[i].rpole[12]
        uix = ATOMID[i].uind[0]
        uiy = ATOMID[i].uind[1]
        uiz = ATOMID[i].uind[2]
        corei = ATOMID[i].pcore
        vali = ATOMID[i].pval
        alphai = ATOMID[i].palpha
        # set exclusion coefficients for connected atoms
        pscale = ATOMID[i].returnpscale(ATOMID)
        # evaluate all sites within the cutoff distance
        for k in range(i+1, n):
            xr = ATOMID[k].coordinates[0] - xi
            yr = ATOMID[k].coordinates[1] - yi
            zr = ATOMID[k].coordinates[2] - zi
            r2 = xr**2 + yr**2 + zr**2
            r = np.sqrt(r2)
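Since epolar3ninduce returns the scalar residual ||Tu - E||**2, the induced dipoles can be obtained by handing it to a black-box minimizer. The following is a minimal usage sketch, not part of the original source: it assumes scipy is available, that ATOMID and config have been prepared as above, and the all-zero starting guess uind0 is an illustrative choice.

# Hedged usage sketch (assumption: scipy installed, ATOMID/config set up,
# and config.param_dict selects one of the '*-induce-mutual' modes above).
import numpy as np
from scipy.optimize import minimize

n = len(ATOMID)
uind0 = np.zeros(3 * n)                        # flat (3n,) initial guess
res = minimize(epolar3ninduce, uind0, args=(ATOMID,), method='L-BFGS-B')
uind = res.x.reshape(3, n)                     # converged induced dipoles
ep, einter, nep = epolar3m(ATOMID, uind)       # U_pol = 1/2*uTu - Eu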
The directional derivative of $f(x,y)$ at $(x_0,y_0)$ along a unit vector $\mathbf{u}$ is the pointwise rate of change of $f$ with respect to distance along the line parallel to $\mathbf{u}$ passing through $(x_0,y_0)$. Formally, for $\mathbf{u} = \langle u_1, u_2 \rangle$,
$$D_{\mathbf{u}}f(x,y) = \lim_{h \to 0} \frac{f(x+u_1h,\, y+u_2h) - f(x,y)}{h}.$$
Geometrically, if one slices the graph of $f$ with the vertical plane through the point in the direction $\mathbf{u}$, the slope of the tangent line to the resulting curve is the directional derivative at that point.

If the function $f$ is differentiable at a point, then the directional derivative there exists along any vector $\mathbf{v}$. In practice this limit can be very difficult to compute, so, just as shortcut rules replace the limit definition for standard derivatives in single-variable calculus and for partial derivatives in multivariable calculus, there is an easier way of taking directional derivatives: the directional derivative is the dot product of the gradient and the vector $\mathbf{u}$,
$$D_{\mathbf{u}}f = \nabla f \cdot \mathbf{u}.$$
Partial derivatives are just the directional derivatives along the coordinate axes: if $\mathbf{u} = (1,0)$ is the unit vector in the $x$ direction, then $D_{\mathbf{u}}f$ is simply the partial derivative with respect to $x$. Moreover, since $D_{\mathbf{u}}f = \|\nabla f\|\cos\theta$ with $\theta$ the angle between $\nabla f$ and the unit vector $\mathbf{u}$, the directional derivative attains its maximum when $\theta = 0^\circ$; this is why the gradient points in the direction of steepest ascent.

Worked example. Let $f(x,y) = x^2 y$. The partial derivatives of $f$ at the point $(x,y)=(3,2)$ are
$$\frac{\partial f}{\partial x}(x,y)=2xy, \qquad \frac{\partial f}{\partial y}(x,y)=x^2, \qquad \frac{\partial f}{\partial x}(3,2)=12, \qquad \frac{\partial f}{\partial y}(3,2)=9.$$
Therefore the gradient is $\nabla f(3,2)=12\mathbf{i}+9\mathbf{j}=(12,9)$, and the directional derivative at $(3,2)$ in the direction of $\mathbf{u}$ is
$$D_{\mathbf{u}}f(3,2)=\nabla f(3,2)\cdot\mathbf{u}=(12\mathbf{i}+9\mathbf{j})\cdot(u_1\mathbf{i}+u_2\mathbf{j})=12u_1+9u_2.$$

The same ideas apply to functions of three variables, where the gradient leads to an analogous formula: for instance, one can compute the directional derivative of $f(x,y,z)=\sqrt{xyz}$ in the direction of $\mathbf{v}=\langle 1,2,2\rangle$ at the point $(3,2,6)$. The definition is also valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined. Directional derivatives appear in applications as well: Darcy's law states that the local velocity $q$ in a direction $s$ is given by the directional derivative $q = -(k/\mu)\,\partial p/\partial s$, where $p$ is the transient or steady pressure, with $k$ and $\mu$ representing permeability and viscosity. Now, to get one's hands on directional derivatives in polar, or any non-Cartesian or curvilinear coordinate system, one
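Returning to the worked example $f(x,y) = x^2 y$ above, the gradient and directional derivative can be checked symbolically. A minimal sketch, assuming the sympy library is available (the symbol names are illustrative):

# Hedged sketch: reproduce the worked example f(x, y) = x**2 * y at (3, 2).
import sympy as sp

x, y, u1, u2 = sp.symbols('x y u1 u2')
f = x**2 * y

grad = [sp.diff(f, x), sp.diff(f, y)]            # gradient: (2*x*y, x**2)
grad_at = [g.subs({x: 3, y: 2}) for g in grad]   # [12, 9]

D_u = grad_at[0]*u1 + grad_at[1]*u2              # 12*u1 + 9*u2
print(grad_at, D_u)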
more information on this topic. For $X \Subset \mathbb{R}^d$ we write $\mathscr{E}(\overline{X})$ for the space consisting of all $f \in \mathscr{E}(X)$ such that $\partial^\alpha f$ has a continuous extension to $\overline{X}$ for all $\alpha \in \mathbb{N}^d$. We endow it with the family of norms $\{\| \, \cdot \, \|_{{\overline{X},J}} \, ; \, J \in \mathbb{N}_0\}$, hence it becomes a Fr\'echet space. We set $$ \mathscr{E}_P(\overline{X}) = \{f \in \mathscr{E}(\overline{X}) \, ; \, P(D) f = 0\} $$ and endow it with the subspace topology from $\mathscr{E}(\overline{X})$. Then, $\mathscr{E}_P(\overline{X})$ is also a Fr\'echet space. \begin{proposition}\label{cor: explicit Omega} Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ and let $X \subseteq \mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X \times \mathbb{R}) \to \mathscr{D}'(X \times \mathbb{R})$ is surjective. Let $X_1, X_2 \Subset X$ with $X_1 \subseteq X_2$ be such that $(\overline{X_1}, \overline{X_2})$ is augmentedly $P$-locating for $X$. Then, for all $X'_1 \Subset X''_1 \Subset X_1$, $X' \Subset X$, and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C>0$ such that \begin{gather*} \forall f \in\mathscr{E}_P(X),\,\varepsilon\in (0,1) \,\exists\,h_\varepsilon \in\mathscr{E}_P(X) \,:\, \\ \|f-h_\varepsilon\|_{\overline{X'_1},r_1}\leq\varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\} \quad \mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|f\|_{\overline{X_2}}. \end{gather*} In particular, it holds that $ \|f-h_\varepsilon\|_{\overline{X'_1},r_1}\ \leq \varepsilon\|f\|_{\overline{X_2}, r_1+1}$. \end{proposition} \begin{proof} Fix $X'_1 \Subset X''_1 \Subset X_1$. \textsc{Auxiliary Claim 1:} \emph{For all $X_1 \Subset X'_3 \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C_1,C_2>0$ and $\varepsilon_0 \in (0,1)$ such that \begin{gather*} \forall f\in\mathscr{E}_P(X), \varepsilon\in (0,\varepsilon_0) \,\exists\,g_\varepsilon \in\mathscr{E}_P(\overline{X'_3})\,: \\ \|f-g_\varepsilon\|_{\overline{X'_1},r_1}\leq C_1\varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\} \quad \mbox{ and } \quad \|g_\varepsilon\|_{\overline{X'_3},r_2}\leq\frac{C_2}{\varepsilon^s}\|f\|_{\overline{X_2}}. \end{gather*} } \emph{Proof of auxiliary claim 1.} Let $X_1 \Subset X'_3 \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ be arbitrary. Choose $X_3 \Subset X$ such that $X_2 \subseteq X_3$ and $X'_3 \Subset X_3$. Set $\varepsilon_0 = \min \{1,d(\overline{X'_1}, \mathbb{R}^d \backslash X''_1), d(\overline{X'_3},\mathbb{R}^d \backslash X_3) \}$. By Proposition \ref{lem: explicit POmega}, there are $K \in \mathbb{N}_0$ and $s',C >0$ such that $$ B_{\overline{X}_2,0} \subseteq\frac{C}{\delta^{s'}} B_{\overline{X}_3,K}+\delta B_{\overline{X}_1,0}, \qquad \forall \delta \in (0,1). $$ Let $f\in\mathscr{E}_P(X)$ be arbitrary. As $f \in \|f\|_{\overline{X_2}}B_{\overline{X}_2,0}$, we have that for all $\delta \in (0,1)$ there is $f_\delta \in C\delta^{-s'} \|f\|_{\overline{X_2}}B_{\overline{X}_3,K}$ such that $f - f_\delta \in \delta \|f\|_{\overline{X_2}}B_{\overline{X}_1,0}$. Choose $\chi \in \mathscr{D}(\mathbb{R}^d)$ with $\chi \geq 0$, $\operatorname{supp} \chi \subseteq B(0,1)$, and $\int_{\mathbb{R}^d} \chi(x) {\rm d}x = 1$, and set $\chi_\varepsilon = \varepsilon^{-d}\chi(x/\varepsilon)$ for $\varepsilon \in (0,1)$. For $\delta \in (0,1)$ and $\varepsilon \in (0,\varepsilon_0)$ we define $g_{\delta, \varepsilon} = f_\delta \ast \chi_\varepsilon \in \mathscr{E}_P(\overline{X'_3})$. 
The mean value theorem implies that $$ \| f - f \ast \chi_\varepsilon\|_{\overline{X'_1}, r_1} \leq \sqrt{d} \varepsilon \|f\|_{\overline{X''_1}, r_1+1}, \qquad \forall \varepsilon \in (0,\varepsilon_0). $$ Since $f - f_\delta \in \delta \|f\|_{\overline{X_2}}B_{\overline{X}_1,0}$, it holds that $$ \| f \ast \chi_\varepsilon - g_{\delta, \varepsilon}\|_{\overline{X'_1}, r_1} = \| (f - f_\delta) \ast \chi_\varepsilon\|_{\overline{X'_1}, r_1} \leq \frac{\delta}{\varepsilon^{r_1 + d}} \| \chi \|_{r_1}\|f\|_{\overline{X_2}}, \qquad \forall \varepsilon \in (0,\varepsilon_0). $$ Similarly, as $f_\delta \in C\delta^{-s'} \|f\|_{\overline{X_2}}B_{\overline{X}_3,K}$, we have that $$ \| g_{\delta, \varepsilon}\|_{\overline{X'_3}, r_2} \leq \frac{C}{\delta^{s'}\varepsilon^{K +r_2 + d}} \| \chi \|_{K + r_2}\|f\|_{\overline{X_2}}, \qquad \forall \varepsilon \in (0,\varepsilon_0). $$ Define $g_\varepsilon = g_{\varepsilon^{r_1+d + 1}, \varepsilon} \in \mathscr{E}_P(\overline{X'_3})$ for $\varepsilon \in (0,\varepsilon_0)$. Set $s = s'(r_1 + d + 1) + K + r_2 + d$. Then, for all $\varepsilon \in (0,\varepsilon_0)$, \begin{eqnarray*} \| f - g_\varepsilon\|_{\overline{X'_1}, r_1} &\leq& \| f - f\ast \chi_\varepsilon\|_{\overline{X'_1}, r_1} + \| f \ast \chi_\varepsilon - g_\varepsilon\|_{\overline{X'_1}, r_1} \\ &\leq& (\sqrt{d} + \| \chi\|_{r_1}) \varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\}, \end{eqnarray*} and $$ \| g_\varepsilon\|_{\overline{X'_3}, r_2} \leq \frac{C \| \chi\|_{K +r_2} }{\varepsilon^s}\|f\|_{\overline{X_2}}. $$

\textsc{Auxiliary Claim 2:} \emph{For all $X' \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C_1,C_2>0$ and $\varepsilon_0 \in (0,1)$ such that \begin{gather*} \forall f\in\mathscr{E}_P(X), \varepsilon\in (0,\varepsilon_0) \,\exists\,g_\varepsilon \in\mathscr{E}_P(X)\,: \\ \|f-g_\varepsilon\|_{\overline{X'_1},r_1}\leq C_1\varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\} \quad \mbox{ and } \quad \|g_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C_2}{\varepsilon^s}\|f\|_{\overline{X_2}}. \end{gather*} }

\noindent Note that, by a simple rescaling argument, the auxiliary claim 2 implies the result.

\emph{Proof of auxiliary claim 2.} Let $(\Omega_j)_{j \in \mathbb{N}_0}$ be an exhaustion by relatively compact open subsets of $X$, i.e., $\Omega_j \Subset \Omega_{j+1} \Subset X$ for all $j \in \mathbb{N}_0$ and $X = \bigcup_{j \in \mathbb{N}_0} \Omega_j$. Since $P^+(D): \mathscr{D}'(X \times \mathbb{R}) \to \mathscr{D}'(X \times \mathbb{R})$ is surjective, $P(D): \mathscr{E}(X) \to \mathscr{E}(X)$ is as well. Hence, by \cite[Lemma 3.1]{DeKa22}, we have that $\operatorname{Proj}^1( \mathscr{E}_P(\overline{\Omega}_j))_{j \in \mathbb{N}_0} = 0$. Let $X' \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ be arbitrary. We may assume that $X'_1 \Subset X'$. Since $\operatorname{Proj}^1( \mathscr{E}_P(\overline{\Omega}_j))_{j \in \mathbb{N}_0} = 0$, the Mittag-Leffler lemma \cite[Theorem 3.2.8]{Wengenroth} implies that there is $X'_3 \Subset X$ such that \[ \mathscr{E}_P(\overline{X'_3}) \subseteq \mathscr{E}_P(X) + \{ f \in \mathscr{E}_P(\overline{X'}) \, ; \, \| f \|_{\overline{X'},r_2} \leq 1 \}. \] We may assume that $X' \subseteq X'_3$. By multiplying both sides of the above inclusion with $\delta$, we find that \begin{equation} \label{ML} \mathscr{E}_P(\overline{X'_3}) \subseteq \mathscr{E}_P(X) + \{ f \in \mathscr{E}_P(\overline{X'}) \, ; \, \| f \|_{\overline{X'},r_2} \leq \delta \}, \qquad \forall \delta > 0.
\end{equation} Let $s,C_1,C_2, \varepsilon_0$ be as in the auxiliary claim 1 (with $X'_3 \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ as above). Let $f\in\mathscr{E}_P(X)$ be arbitrary. Choose $g_\varepsilon \in\mathscr{E}_P(\overline{X'_3})$, $\varepsilon\in (0,\varepsilon_0)$, as in auxiliary claim 1. By \eqref{ML}, there is $h_\varepsilon \in \mathscr{E}_P(X)$ such that $\|g_\varepsilon-h_\varepsilon\|_{\overline{X'},r_2}\leq \varepsilon \|f\|_{\overline{X_2}}$ for all $\varepsilon\in (0,\varepsilon_0)$. Hence, for all $\varepsilon \in (0,\varepsilon_0)$ $$ \| f - h_\varepsilon\|_{\overline{X'_1}, r_1} \leq \| f - g_\varepsilon\|_{\overline{X'_1}, r_1} + \| g_\varepsilon - h_\varepsilon\|_{\overline{X'}, r_2} \leq (C_1 +1) \varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\}, $$ and $$ \| h_\varepsilon\|_{\overline{X'}, r_2} \leq \| g_\varepsilon - h_\varepsilon\|_{\overline{X'}, r_2} + \| g_\varepsilon\|_{\overline{X'_3}, r_2} \leq \frac{C_2 +1}{\varepsilon^s} \|f\|_{\overline{X_2}}. $$ \end{proof}

\begin{remark} We believe that Proposition \ref{cor: explicit Omega} holds with $\|f-h_\varepsilon\|_{\overline{X'_1},r_1}$ replaced by $\|f-h_\varepsilon\|_{\overline{X_1},r_1}$, but are unable to show this. \end{remark}

For hypoelliptic operators it is more natural to work with sup-seminorms. In this regard, we have the following result.

\begin{corollary}\label{cor: explicit Omega-hypo} Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ be hypoelliptic and let $X \subseteq \mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X \times \mathbb{R}) \to \mathscr{D}'(X \times \mathbb{R})$ is surjective. Let $X_1, X_2 \Subset X$ with $X_1 \subseteq X_2$ be such that $(\overline{X_1}, \overline{X_2})$ is augmentedly $P$-locating for $X$. Then, for all $X'_1 \Subset X_1$ and $X' \Subset X$ there exist $s, C>0$ such that \begin{gather*} \forall f \in\mathscr{E}_P(X),\,\varepsilon\in (0,1) \,\exists\,h_\varepsilon \in\mathscr{E}_P(X) \,:\, \\ \|f-h_\varepsilon\|_{\overline{X'_1}}\leq\varepsilon\|f\|_{\overline{X_2}} \quad \mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'}}\leq\frac{C}{\varepsilon^s}\|f\|_{\overline{X_2}}. \end{gather*} \end{corollary}

\begin{proof} Fix $X''_1 \Subset X_1$ such that $X'_1 \Subset X''_1$. Since $P(D)$ is hypoelliptic, there is $C' >0$ such that $$ \|f\|_{\overline{X''_1},1} \leq C' \|f\|_{\overline{X_2}}, \qquad f \in \mathscr{E}_P(X). $$ The result now follows from Proposition \ref{cor: explicit Omega} with $r_1 = r_2 = 0$. \end{proof}

\section{Quantitative Runge type approximation theorems}\label{sec: quantitative Runge}

We now combine the results from Sections \ref{sec: qualitative Runge} and \ref{sec: technical} to obtain quantitative approximation results. In particular, we shall show Theorems \ref{theo: quantitative convex}--\ref{theo: quantitative Runge for wave operator} from the introduction. We start with the following general result.

\begin{proposition}\label{prop: general quantitative} Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ and let $X\subseteq\mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X\times\mathbb{R}) \to \mathscr{D}'(X\times\mathbb{R})$ is surjective. Let $Y \subseteq X$ be open such that the restriction map $r_{\mathscr{E}}^P:\mathscr{E}_P(X)\rightarrow\mathscr{E}_P(Y)$ has dense range. Let $Y_1, Y_2 \Subset Y$ with $Y_1 \subseteq Y_2$ be such that $(\overline{Y_1}, \overline{Y_2})$ is augmentedly $P$-locating for $X$.
Then, for all $Y'_1 \Subset Y_1$, $X' \Subset X$, and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C>0$ such that \begin{gather*} \forall f\in\mathscr{E}_P(Y), \varepsilon\in (0,1) \,\exists h_\varepsilon \in\mathscr{E}_P(X) \, :\\ \|f-h_\varepsilon\|_{\overline{Y'_1},r_1}\leq \varepsilon\|f\|_{\overline{Y_2},r_1+1} \quad \mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|f\|_{\overline{Y_2}}. \end{gather*} \end{proposition} \begin{proof} Let $Y'_1 \Subset Y_1$, $X' \Subset X$, and $r_1,r_2\in\mathbb{N}_0$ be arbitrary. By Proposition \ref{cor: explicit Omega} we find that there are $s,C >0$ such that \begin{gather}\label{eq: decomposition 1} \forall g\in\mathscr{E}_P(X),\varepsilon\in (0,1)\,\,\exists\,h_\varepsilon \in\mathscr{E}_P(X) \,: \\ \nonumber \|g-h_\varepsilon\|_{\overline{Y'_1},r_1}\leq \varepsilon\|g\|_{\overline{Y_2},r_1+1} \quad \mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|g\|_{\overline{Y_2}}. \end{gather} Let $f\in\mathscr{E}_P(Y)$ and $\varepsilon\in (0,1)$ be arbitrary. Since $r_{\mathscr{E}}^P:\mathscr{E}_P(X)\rightarrow\mathscr{E}_P(Y)$ has dense range, there is $g_\varepsilon\in\mathscr{E}_P(X)$ with $\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}\leq\varepsilon\|f\|_{\overline{Y_2}}$. Choose $h_\varepsilon$ according to \eqref{eq: decomposition 1} for $g = g_\varepsilon$. Then, \begin{eqnarray*} \|f-h_\varepsilon\|_{\overline{Y'_1}, r_1}&\leq&\|f-g_\varepsilon\|_{\overline{Y'_1}, r_1}+\|g_\varepsilon-h_\varepsilon\|_{\overline{Y'_1}, r_1}\\ &\leq&\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}+\varepsilon \|g_\varepsilon\|_{\overline{Y_2},r_1+1} \\ &\leq&\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}+\varepsilon\left(\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}+\|f\|_{\overline{Y_2}, r_1+1}\right)\\ &\leq& 3\varepsilon\|f\|_{\overline{Y_2},r_1+1}, \end{eqnarray*} and $$\|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|g_\varepsilon\|_{\overline{Y_2}}\leq \frac{C}{\varepsilon^s}\left(\|f-g_\varepsilon\|_{\overline{Y_2}}+\|f\|_{\overline{Y_2}}\right)\leq\frac{2C}{\varepsilon^s}\|f\|_{\overline{Y_2}}.$$ This implies the result. \end{proof} For hypoelliptic operators, we obtain the following result. \begin{proposition}\label{cor: general quantitative-hypo} Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ be hypoelliptic and let $X\subseteq\mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X\times\mathbb{R}) \to \mathscr{D}'(X\times\mathbb{R})$ is surjective. Let $Y \subseteq X$ be open such that the restriction map $r_{\mathscr{E}}^P:\mathscr{E}_P(X)\rightarrow\mathscr{E}_P(Y)$ has dense range. Let $Y_1, Y_2 \Subset Y$ with $Y_1 \subseteq Y_2$ be such that $(\overline{Y_1}, \overline{Y_2})$ is augmentedly $P$-locating for $X$. Then, for all $Y'_1 \Subset Y_1$ and $X' \Subset X$ there exist $s,
based on discrete logarithms

Abstract: A new signature scheme is proposed, together with an implementation of the Diffie-Hellman key distribution scheme that achieves a public-key cryptosystem [ElGamal T (1985) A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Trans Inf Theory 31:469–472].

In 1984 Taher ElGamal (who was Marty Hellman's student) introduced a cryptosystem which depends on the discrete logarithm problem. ElGamal encryption is an asymmetric-key encryption algorithm for public-key cryptography which is based on the Diffie-Hellman key exchange. It can be defined over any cyclic group G; its security depends upon the difficulty of a certain problem in G related to computing discrete logarithms: even if we know g^a and g^k, it is extremely difficult to compute g^(ak). There are three main methods of creating public-key encryption: RSA (based on prime-number factorization), elliptic curves, and discrete logarithms (ElGamal). For no apparent reason everyone calls this the "ElGamal" system, although Mr. Elgamal's last name does not have a capital letter 'G'.

Key generation for ElGamal public-key encryption: each entity A creates a public key and a corresponding private key. As with Diffie-Hellman, Alice and Bob have a (publicly known) prime number p and a generator g; the group is the largest multiplicative subgroup of the integers modulo p, with p prime. Alice chooses a random number x as her private key and publishes

y = g^x mod p. (1)

If Bob now wants to send a message m to Alice, he randomly picks a number k which is smaller than p. He then computes the ciphertext pair c1 = g^k mod p and c2 = m * y^k mod p, and sends c1 and c2 to Alice. To decrypt the ciphertext, the receiver computes c2 * (c1^x)^(-1) mod p, which reconstructs the message m. (For brevity, "mod p" is omitted below when computing exponentiations and discrete logarithms, and "mod q" is omitted when performing computation on exponents.) Figure 6.4 shows the steps of the algorithm from encryption to decryption.

A disadvantage of the ElGamal system is that the encrypted message becomes very big, about twice the size of the original message m; for this reason it is only used for small messages such as secret keys. This has motivated extensions: one proposal is a three-party extension of the ElGamal encryption scheme together with a multi-receiver extension. In the multi-receiver setting, one can sample a single random value r and encrypt the message in Kurosawa's manner, as if each point of the public key were for an independent receiver; the resultant encryption scheme has 1 + 1/n ciphertext expansion, roughly a reduction by half.

Historically, symmetric cryptography was well suited for organizations such as governments, the military, and big financial corporations involved in classified communication. Security of an asymmetric-key (public-key) cryptosystem such as RSA or ElGamal is instead measured with respect to a chosen-plaintext attack (CPA) and a chosen-ciphertext attack (CCA); in a chosen-plaintext attack (sometimes called a semantic attack), Alice and Bob's adversary Eve is passive. Note also that if you are thinking "maybe I could securely distribute the public key only to the intended receiver", then you are not disclosing any key at all, and the definitions of public and private no longer hold: you would be using RSA as a sort of secret-key cipher like AES. For exchanging unforgeable secret messages between parties who know one another's public keys, public-key authenticated encryption such as NaCl/libsodium crypto_box_curve25519xsalsa20poly1305 is usually the better tool.

The ElGamal signature scheme is a digital signature scheme based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem: like DSA, its security rests on the fact that, given a cyclic group, a generator g, and an element h, it is hard to find an integer x such that g^x = h. The signature scheme uses the same keys as the encryption scheme but a different algorithm; plain ElGamal encryption itself has no signature-generation capability, and the ElGamal signature algorithm is rarely used in practice. (In a DSA-style parameterization, q can be an ElGamal private key, and then K = (p, q, g, y) with y = g^q mod p is the corresponding public key.) To sign a message M, choose a random number k such that k has no factor in common with p - 1 and compute a = g^k mod p; then find a value s that satisfies M = (x*a + k*s) mod (p - 1). The receiver verifies the digital signature using the public key of the sender; this assures authenticity, as only the sender has his private key. When signing is combined with encryption, a second encryption is performed using the receiver's public key, which delivers confidentiality; the disadvantage of this scheme is that the complex public-key algorithm must be used four times.
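To make the key generation, encryption, and decryption steps concrete, here is a minimal toy sketch in Python. The small prime p = 467 and base g = 2 are illustrative assumptions only; a real deployment needs large, properly generated parameters and message padding.

# Hedged toy sketch of ElGamal over Z_p* -- parameters are NOT secure sizes.
import random

p, g = 467, 2                       # assumed toy prime and base

x = random.randrange(2, p - 1)      # Alice's private key
y = pow(g, x, p)                    # public key: y = g^x mod p, as in (1)

m = 123                             # message, encoded as an integer < p
k = random.randrange(2, p - 1)      # Bob's fresh random exponent
c1, c2 = pow(g, k, p), (m * pow(y, k, p)) % p   # ciphertext pair

# Decryption: m = c2 * (c1^x)^(-1) mod p; c1^(p-1-x) inverts c1^x by Fermat.
m_rec = (c2 * pow(c1, p - 1 - x, p)) % p
assert m_rec == m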
at \(s \ngeq r\), and that is \(\pi_n(X(s),\phi^X_{r,s}(x)) \in \mathbf{Grp}\) at \(s \geq r\). \end{defn}

Note that \(\pi_n\) is functorial for every \(n \in \mathbb{N}\).

\begin{defn} \label{interleaving-in-homotopy-groups} Let \(\epsilon,\delta \geq 0 \in \mathbf{R}^{m}\). Assume given a homotopy class of morphisms \([f] \colon X' \to Y'^\epsilon \in \mathsf{Ho}\left(\SS^{\mathbf{R}^m}\right)\). Let \(X' \simeq X\) be a cofibrant replacement, let \(Y' \simeq Y\) be a fibrant replacement, and let \(f \colon X \to_\epsilon Y\) be a representative of \([f]\). We say that \([f]\) \define{induces an \((\epsilon,\delta)\)-interleaving in homotopy groups} if the induced map \(\pi_0(f) \colon \pi_0(X) \to_\epsilon \pi_0(Y)\) is part of an \((\epsilon,\delta)\)-interleaving of persistent sets, and if for every \(r \in \mathbf{R}^m\), every \(x \in X(r)\), and every \(n \geq 1 \in \mathbb{N}\), the induced map \(\pi_n(f) : \pi_n(X,x) \to_\epsilon \pi_n(Y,f(x))\) is part of an \((\epsilon,\delta)\)-interleaving of persistent groups. \end{defn}

It is clear that the definition above is independent of the choices of representatives. A standard result in classical homotopy theory is that a fibration of Kan complexes inducing an isomorphism in all homotopy groups has the right lifting property with respect to cofibrations (\cite[Theorem~I.7.10]{GJ}). An analogous, persistent result (\cref{lifting-property-n-cofibrations}) says that, for a fibration of fibrant objects inducing a \(\delta\)-interleaving in homotopy groups, the lift exists up to a shift, which depends on both \(\delta\) and on a certain ``length'' \(n \in \mathbb{N}\) associated to the cofibration. To make this precise, we introduce the notion of \(n\)-dimensional extension.

\begin{defn} \label{def n cofibration} Let \(A,B \in \SS^{\RR^m}\) and let \(n \in \mathbb{N}\). A map \(j\colon A \to B\) is an \define{\(n\)-dimensional extension} (of \(A\)) if there exists a set \(I\), a family of tuples of real numbers \(\left\{r_i \in \mathbf{R}^m\right\}_{i \in I}\), and commutative squares of the form depicted on the left below, that together give rise to the pushout square on the right below. Here, \(\partial D^n \hookrightarrow D^n\) stands for \(S^{n-1}\hookrightarrow D^n\) if \(\ \SS = \mathbf{Top}\), and for \(\partial \Delta^n \hookrightarrow \Delta^n\) if \(\ \SS = \mathbf{sSet}\).
\[\begin{tikzcd} \partial D^n \ar[r,"f_i"] \ar[d,hook] & A(r_i) \ar[d,"j_{r_i}"] & & & & \coprod_{i \in I} r_i \odot(\partial D^n) \ar[r,"f"] \ar[d] & A \ar[d,"j"]\\ D^n \ar[r,"g_i"] & B(r_i) & & & & \coprod_i r_i \odot( D^n) \ar[r,"g"] & B \end{tikzcd}\]
\end{defn}

A \define{single dimensional extension} is an \(n\)-dimensional extension for some \(n\in \mathbb{N}\).

\begin{defn} Let \(\iota \colon A \to B\) be a projective cofibration of \(\SS^{\RR^m}\) and let \(n \in \mathbb{N}\). We say that \(\iota\) is an \define{\(n\)-cofibration} if it factors as the composite of \(n+1\) maps \(f_0, \dots, f_n\), with \(f_i\) an \(n_i\)-dimensional extension for some \(n_i \in \mathbb{N}\). We say that \(A\in \SS^{\RR^m}\) is \(n\)-cofibrant if the map \(\emptyset \to A\) is an \(n\)-cofibration. \end{defn}

The next lemma, which follows directly from \cref{t: cofibrant = filtered}, gives a rich family of examples of \(n\)-cofibrant persistent simplicial sets. Recall that a simplicial set is \(n\)-skeletal if all its simplices in dimensions above \(n\) are degenerate.
\begin{lem} \label{sset-n-cofibrant} Let \(A \in \mathbf{sSet}^{\mathbf{R}^m}\) and let \(n \in \mathbb{N}\). If \(A\) is projective cofibrant and pointwise \(n\)-skeletal, then it is \(n\)-cofibrant.\qed \end{lem}

\begin{eg} The Vietoris--Rips complex \(\mathsf{VR}(X)\) of a metric space \(X\), as defined in \cref{VR-complex example}, is \(n\)-cofibrant if the underlying set of \(X\) has finite cardinality \(\vert X \vert = n + 1\). If one is interested in persistent (co)homology of some bounded degree \(n\), then one can restrict computations to the \((n+1)\)-skeleton of a Vietoris--Rips complex, which is \((n+1)\)-cofibrant. \end{eg}

A result analogous to \cref{sset-n-cofibrant}, but for persistent topological spaces, does not hold, as cells are not necessarily attached in order of dimension. This motivates the following definition.

\begin{defn} \label{persistent CW} Let \(n\in \mathbb{N}\). A persistent topological space \(A \in \mathbf{Top}^{\mathbf{R}^m}\) is an \define{\(n\)-dimensional persistent CW-complex} if the map \(\emptyset \to A\) can be factored as a composite of maps \(f_0, \dots, f_n\), with \(f_i\) an \(i\)-dimensional extension. \end{defn}

\begin{eg} The geometric realization of any \(n\)-cofibrant persistent simplicial set is an \(n\)-dimensional persistent CW-complex. \end{eg}

\begin{lem} \label{cw-n-cofibrant} Every \(n\)-dimensional persistent CW-complex is \(n\)-cofibrant.\qed \end{lem}

We now make precise the notion of lifting property up to a shift.

\begin{defn} Let \(i \colon A \to B\) and \(p \colon Y \to X\) be morphisms in \(\SS^{\RR^m}\) and let \(\delta \geq 0\). We say that \(p\) has the \define{right \(\delta\)-lifting property} with respect to \(i\) if for all morphisms \(A \to Y\) and \(B \to X\) making the square on the left below commute, there exists a diagonal \(\delta\)-morphism \(f \colon B \to_\delta Y\) rendering the diagram commutative. Below, the diagram on the left is shorthand for the one on the right.
\[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=3em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}] { A & Y & & & A & Y & Y^\delta\\ B & X & & & B & X & X^\delta.\\}; \path[line width=0.75pt, -{>[width=8pt]}] (m-1-1) edge node [above] {} (m-1-2) edge node [left] {$i$} (m-2-1) (m-2-1) edge [dotted] node [above] {} node [at end, below] {$\leng{\delta}\;\;\,$} (m-1-2) edge node [above] {} (m-2-2) (m-1-2) edge node [right] {$p$} (m-2-2) (m-1-5) edge node [above] {} (m-1-6) edge node [left] {$i$} (m-2-5) (m-1-6) edge node [right] {$p$} (m-2-6) edge node [above] {$\mathsf{S}_{0,\delta}(\mathsf{id}_Y)$} (m-1-7) (m-1-7) edge node [right] {$p^\delta$} (m-2-7) (m-2-5) edge [bend left=10,-,line width=6pt,draw=white] (m-1-7) edge [bend left=10, dotted] node [left] {$\,\,\,\,\,\,$} (m-1-7) edge node [above] {} (m-2-6) (m-2-6) edge node [below] {$\mathsf{S}_{0,\delta}(\mathsf{id}_X)$} (m-2-7) ; \end{tikzpicture}\]
\end{defn}

We now prove \cref{jardines-lemma}, an adaptation of a result of Jardine, which says that fibrations inducing interleavings in homotopy groups have a shifted right lifting property, as defined above. The main difference is that we work in the multi-persistent setting. We use simplicial notation and observe that the corresponding statement for persistent topological spaces follows from the simplicial one by using the singular complex-realization adjunction. We recall a standard, technical lemma whose proof is given within that of, e.g., \cite[Theorem~I.7.10]{GJ}.
\begin{lem} \label{homotopy-lifting} Suppose given a commutative square of simplicial sets
\begin{equation} \label{lp2} \begin{tikzcd} \partial \Delta^n \ar[r,"\alpha"] \ar[d,hook] & X \ar[d,"p"]\\ \Delta^n \ar[r,"\beta"] &Y, \end{tikzcd} \end{equation}
where \(p\) is a Kan fibration between Kan complexes. If there is a commutative diagram like the one on the left below, for which the lifting problem on the right admits a solution, then the initial square \eqref{lp2} admits a solution.
\[\begin{tikzcd} \partial \Delta^n \ar[ddd,hook] \ar[dr,"(\mathsf{id}_{\partial \Delta^n} \times \{1\}) "] \ar[drrr,"\alpha", bend left]&&\\ & \partial \Delta^n \times \Delta^1 \ar[d,hook] \ar[rr," h"] & & X \ar[d,"p"] & & & \partial \Delta^n \ar[rr,"h \circ (\mathsf{id}_{\partial \Delta^n} \times \{0\}) "] \ar[d,hook] & & X \ar[d,"p"]\\ &\Delta^n \times \Delta^1 \ar[rr,"g"] & & Y & & & \Delta^n \ar[rr,"g \circ( \mathsf{id}_{\Delta^n} \times \{0\})"] &&Y\\ \Delta^n \ar[ur,"(\mathsf{id}_{\Delta^n} \times \{1\}) "{swap}] \ar[urrr,"\beta"{swap},bend right] && \end{tikzcd}\]
\end{lem}

\begin{lem}[cf.~{\cite[Lemma~14]{Jardine2020}}] \label{jardines-lemma} Let \(\delta \geq 0\), and let \(f \colon X \to Y \in \SS^{\mathbf{R}^m}\) induce a \((0,\delta)\)-interleaving in homotopy groups. If \(X\) and \(Y\) are projective fibrant and \(f\) is a projective fibration, then \(f\) has the right \(2\delta\)-lifting property with respect to boundary inclusions \(r\odot \partial D^n \to r \odot D^n\), for every \(r \in \mathbf{R}^m\) and every \(n \in \mathbb{N}\). \end{lem}

\begin{proof} Suppose given a commutative diagram as on the left below, which corresponds to the one on the right:
\begin{equation} \label{lifting problem} \begin{tikzcd} r\odot \partial \Delta^n \ar[r,"a"] \ar[d] & X \ar[d,"p"] & & & \partial \Delta^n \ar[r,"\alpha"] \ar[d] & X(r) \ar[d,"p_r"]\\ r\odot \Delta^n \ar[r,"b"] & Y & & & \Delta^n \ar[r,"\beta"] &Y(r). \end{tikzcd} \end{equation}
We must find a \(2\delta\)-lift for the diagram on the right. The proof strategy is to appeal to \cref{homotopy-lifting} to simplify \(\alpha\), then prove that at the cost of a \(\delta\)-shift we can further reduce \(\alpha\) to a constant map, and then show that the simplified lifting problem can be solved at the cost of another \(\delta\)-shift. So we end up with a \(2\delta\)-lift, as in the statement. We proceed by proving the claims in opposite order.

We start by showing that \eqref{lifting problem} can be solved up to a \(\delta\)-shift whenever \(\alpha\) is constant. Let us assume that \(\alpha\) is of the form \(\alpha = \ast\) for some \(\ast \in X(r)_0\). Since, then, \(\beta\) represents an element \([ \beta] \in \pi_n(Y(r),\ast)\), there exists a map \(\alpha' \colon \Delta^n \to X(r+\delta)\) whose restriction to \(\partial \Delta^n\) is constant on \(\ast \in X(r)_0\), and such that there is a homotopy \(h\colon \beta \simeq p\alpha'\)
# Behavioural Modelling & Timing in Verilog

Behavioral models in Verilog contain procedural statements, which control the simulation and manipulate variables of the data types. These statements are contained within procedures, and each procedure has an activity flow associated with it.

During simulation of a behavioral model, all the flows defined by the `always` and `initial` statements start together at simulation time zero. Initial statements are executed once, while always statements are executed repetitively. In the model below, the register variables a and b are initialized to binary 1 and 0 respectively at simulation time zero. The initial statement is then complete and is not executed again during that simulation run. This initial statement contains a begin-end block (also called a sequential block) of statements; within this block, a is initialized first, followed by b.

### Example of Behavioral Modeling

module behave;
reg [1:0] a, b;

initial
begin
a = 'b1;
b = 'b0;
end

always
begin
#50 a = ~a;
end

always
begin
#100 b = ~b;
end
endmodule

## Procedural Assignments

Procedural assignments are for updating reg, integer, time, and memory variables. There is a significant difference between procedural assignment and continuous assignment, as described below −

Continuous assignments drive net variables and are evaluated and updated whenever an input operand changes value.

Procedural assignments update the value of register variables under the control of the procedural flow constructs that surround them.

The right-hand side of a procedural assignment can be any expression that evaluates to a value. However, part-selects on the right-hand side must have constant indices. The left-hand side indicates the variable that receives the assignment from the right-hand side. The left-hand side of a procedural assignment can take one of the following forms −

• register, integer, real, or time variable − An assignment to the name reference of one of these data types.
• bit-select of a register, integer, real, or time variable − An assignment to a single bit that leaves the other bits untouched.
• part-select of a register, integer, real, or time variable − A part-select of two or more contiguous bits that leaves the rest of the bits untouched. For the part-select form, only constant expressions are legal.
• memory element − A single word of a memory. Note that bit-selects and part-selects are illegal on memory element references.
• concatenation of any of the above − A concatenation of any of the previous four forms can be specified, which effectively partitions the result of the right-hand side expression and assigns the partition parts, in order, to the various parts of the concatenation.

## Delay in Assignment (not for synthesis)

In a delayed assignment, Δt time units pass before the statement is executed and the left-hand assignment is made. With intra-assignment delay, the right side is evaluated immediately but there is a delay of Δt before the result is placed in the left-hand assignment. If another procedure changes a right-hand side signal during Δt, it does not affect the output. Delays are not supported by synthesis tools.

### Syntax

• Procedural assignment: variable = expression
• Delayed assignment: #Δt variable = expression;
• Intra-assignment delay: variable = #Δt expression;

### Example

reg [7:0] sum;
reg h, ziltch;
sum[7] = b[7] ^ c[7]; // execute now.
ziltch = #15 ckz & h; /* ckz&h evaluated now; ziltch changed after 15 time units. */
#10 hat = b & c; /* 10 units after ziltch changes, b&c is evaluated and hat changes. */

## Blocking Assignments

A blocking procedural assignment statement must be executed before the execution of the statements that follow it in a sequential block. A blocking procedural assignment statement does not prevent the execution of statements that follow it in a parallel block.

### Syntax

The syntax for a blocking procedural assignment is as follows −

<lvalue> = <timing_control> <expression>

Where lvalue is a data type that is valid for a procedural assignment statement, = is the assignment operator, and timing_control is the optional intra-assignment delay. The timing control delay can be either a delay control (for example, #6) or an event control (for example, @(posedge clk)). The expression is the right-hand side value the simulator assigns to the left-hand side. The = assignment operator used by blocking procedural assignments is also used by procedural continuous assignments and continuous assignments.

### Example

rega = 0;
rega[3] = 1;                // a bit-select
rega[3:5] = 7;              // a part-select
mema[address] = 8'hff;      // assignment to a memory element
{carry, acc} = rega + regb; // a concatenation

## Nonblocking (RTL) Assignments

The non-blocking procedural assignment allows you to schedule assignments without blocking the procedural flow. You can use the non-blocking procedural statement whenever you want to make several register assignments within the same time step without regard to order or dependence upon each other.

### Syntax

The syntax for a non-blocking procedural assignment is as follows −

<lvalue> <= <timing_control> <expression>

Where lvalue is a data type that is valid for a procedural assignment statement, <= is the non-blocking assignment operator, and timing_control is the optional intra-assignment timing control. The timing control delay can be either a delay control or an event control (for example, @(posedge clk)). The expression is the right-hand side value the simulator assigns to the left-hand side. The non-blocking assignment operator is the same operator the simulator uses for the less-than-or-equal relational operator. The simulator interprets the <= operator to be a relational operator when you use it in an expression, and interprets the <= operator to be an assignment operator when you use it in a non-blocking procedural assignment construct.

When the simulator encounters a non-blocking procedural assignment, it evaluates and executes the assignment in two steps −

• The simulator evaluates the right-hand side and schedules the assignment of the new value to take place at a time specified by a procedural timing control.
• At the end of the time step, in which the given delay has expired or the appropriate event has taken place, the simulator executes the assignment by assigning the value to the left-hand side.

### Example

module evaluates2 (out);
output out;
reg a, b, c;

initial
begin
a = 0;
b = 1;
c = 0;
end

always c = #5 ~c;

always @(posedge c)
begin
a <= b;
b <= a;
end
endmodule

## Conditions

The conditional statement (or if-else statement) is used to make a decision as to whether a statement is executed or not.
Formally, the syntax is as follows −

<statement> ::= if ( <expression> ) <statement_or_null>
||= if ( <expression> ) <statement_or_null> else <statement_or_null>
<statement_or_null> ::= <statement> ||= ;

The <expression> is evaluated; if it is true (that is, has a non-zero known value), the first statement executes. If it is false (has a zero value or the value is x or z), the first statement does not execute. If there is an else statement and <expression> is false, the else statement executes. Since the numeric value of the if expression is tested for being zero, certain shortcuts are possible. For example, the following two statements express the same logic −

if (expression)
if (expression != 0)

Since the else part of an if-else is optional, there can be confusion when an else is omitted from a nested if sequence. This is resolved by always associating the else with the closest previous if that lacks an else.

### Example

if (index > 0)
if (rega > regb)
result = rega;
else // else applies to preceding if
result = regb;

If that association is not what you want, use a begin-end block statement to force the proper association −

if (index > 0)
begin
if (rega > regb)
result = rega;
end
else
result = regb;

### Construction of: if-else-if

The following construction occurs so often that it is worth a brief separate discussion.

Example

if (<expression>) <statement>
else if (<expression>) <statement>
else if (<expression>) <statement>
else <statement>

This sequence of if's (known as an if-else-if construct) is the most general way of writing a multi-way decision. The expressions are evaluated in order; if any expression is
\section{Introduction}

Resources such as datasets, pretrained models, and benchmarks are crucial for the advancement of natural language processing (NLP) research. Nevertheless, most pretrained models and datasets are developed for high-resource languages such as English, French, and Chinese~\cite{Devlin2019bert,martin-etal-2020-camembert,chen-etal-2020-sibert}. Although the number of datasets, models, and benchmarks has been increasing for low-resource languages such as Indonesian~\cite{wilie2020indonlu, koto-etal-2020-indolem}, Bangla~\cite{bhattacharjee2021banglabert}, and Filipino~\cite{cruz2020establishing}, these datasets primarily focus on natural language understanding (NLU) tasks, which only cover a subset of practical NLP systems today. In contrast, far fewer natural language generation (NLG) benchmarks have been developed for low-resource languages; most multilingual NLG resources thus far have primarily focused on machine translation, highlighting the need to generalize these low-resource NLG benchmarks to other commonly used NLG tasks such as summarization and question answering. While recent work has developed more comprehensive multilingual NLG benchmarks, such as XGLUE~\cite{Liang2020xglue} and GEM~\cite{gehrmann2021gem}, these efforts still primarily evaluate the NLG models on fairly high-resource languages.

\begin{table*}[!t] \centering \resizebox{0.96\textwidth}{!}{ \begin{tabular}{lrrrcl} \toprule \textbf{Dataset} & \textbf{\# Words} & \textbf{\# Sentences} & \textbf{Size} & \textbf{Style}& \textbf{Source} \\ \midrule \texttt{Indo4B} \cite{wilie2020indonlu} & 3,581,301,476 & 275,301,176 & 23.43 GB & mixed & IndoBenchmark \\ Wiki Sundanese$^1$ & 4,644,282 & 182,581 & 40.1 MB & formal & Wikipedia \\ Wiki Javanese$^1$ & 6,015,961 & 231,571 & 53.2 MB & formal & Wikipedia \\ CC-100 Sundanese & 13,761,754 & 433,086 & 107.6 MB & mixed & Common Crawl \\ CC-100 Javanese & 20,560,458 & 690,517 & 161.9 MB & mixed & Common Crawl \\ \midrule \textbf{TOTAL} & 3,626,283,931 & 276,838,931 & 23.79 GB & & \\ \bottomrule \end{tabular} } \caption{\texttt{Indo4B-Plus} dataset statistics. $^1$ \url{https://dumps.wikimedia.org/backup-index.html}.} \label{tab:ID4B_corpus_stats} \end{table*}

In this paper, we take a step towards building NLG models for some low-resource languages by introducing \texttt{IndoNLG}---a benchmark of multilingual resources and standardized evaluation data for three widely spoken languages of Indonesia: Indonesian, Javanese, and Sundanese. Cumulatively, these languages are spoken by more than 100 million native speakers, and thus comprise an important use case of NLG systems today. Despite the prevalence of these languages, there has been relatively little prior work on developing accurate NLG systems for them---a limitation we attribute to a lack of publicly available resources and evaluation benchmarks. To help address this problem, \texttt{IndoNLG} encompasses clean pretraining data, pretrained models, and downstream NLG tasks for these three languages. For the downstream tasks, we collect pre-existing datasets for English--Indonesian machine translation, monolingual summarization, question answering, and chit-chat dialogue.
Beyond these existing datasets, we prepare two new machine translation datasets (Sundanese--Indonesian and Javanese--Indonesian) to evaluate models on the regional languages, Javanese and Sundanese, which have substantially fewer resources---in terms of \emph{both} unlabelled and labelled datasets---than the Indonesian language.

How, then, can we build models that perform well for such low-resource languages? Building monolingual pretrained models solely using low-resource languages, such as Sundanese and Javanese, is ineffective since only a small amount of unlabelled data is available for pretraining. In this paper, we explore two approaches. The first approach is to leverage existing pretrained multilingual models, such as mBART~\citep{liu2020mbart}. While this approach is quite effective, we explore a second approach that leverages positive transfer from related languages~\cite{Hu2020xtreme,Khanuja2021muril}, such as pretraining with a corpus of mostly Indonesian text. We justify this approach through the fact that Sundanese, Javanese, and Indonesian all belong to the same Austronesian language family~\cite{blust2013austronesian, novitasari2020cross}, and share various morphological and semantic features as well as common lexical items through the presence of Sundanese and Javanese loanwords in the Indonesian language~\citep{devianty2016loan}. We show that pretraining on mostly Indonesian text achieves competitive performance to the larger multilingual models---despite using 5$\times$ fewer parameters and less pretraining data---and achieves particularly strong performance on tasks involving the very low-resource Javanese and Sundanese languages.

Our contributions are as follows: 1) we curate a multilingual pretraining dataset for Indonesian, Sundanese, and Javanese; 2) we introduce two models that support generation in these three major languages in Indonesia, IndoBART and IndoGPT; 3) to the best of our knowledge, we develop the first diverse benchmark to evaluate the capability of Indonesian, Sundanese, and Javanese generation models; and 4) we show that pretraining solely on related languages (i.e. mostly Indonesian text) can achieve strong performance on two very low-resource languages, Javanese and Sundanese, compared to existing multilingual models, despite using fewer parameters and less pretraining data. This finding showcases the benefits of pretraining on closely related, \emph{local} languages to enable more efficient learning of low-resource languages.
\begin{table*}[!t] \centering \resizebox{0.95\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule \textbf{Dataset} & $|$\textbf{Train}$|$ & $|$\textbf{Valid}$|$ & $|$\textbf{Test}$|$ & \textbf{Task Description} &\textbf{Domain} & \textbf{Style}\\ \midrule \multicolumn{7}{c}{Language Pair Tasks} \\ \midrule Bible En$\leftrightarrow$Id & 23,308 & 3,109 & 4,661 & machine translation & religion & formal \\ TED En$\leftrightarrow$Id & 87,406 & 2,677 & 3,179 & machine translation & mixed & formal \\ News En$\leftrightarrow$Id & 38,469 & 1,953 & 1,954 & machine translation & news & formal \\ Bible Su$\leftrightarrow$Id & 5,968 & 797 & 1,193 & machine translation & religion & formal \\ Bible Jv$\leftrightarrow$Id & 5,967 & 797 & 1,193 & machine translation & religion & formal \\ \midrule \multicolumn{7}{c}{Indonesian Tasks} \\ \midrule Liputan6 (Canonical) & \multirow{2}{*}{193,883} & 10,972 & 10,972 & \multirow{2}{*}{summarization} & \multirow{2}{*}{news} & \multirow{2}{*}{formal} \\ Liputan6 (Xtreme) & & 4,948 & 3,862 \\ IndoSum & 2,495 & 311 & 311 & summarization & news & formal \\ TyDiQA (Id)$^{\dagger}$ & 4,847 & 565 & 855 & question answering & mixed & formal \\ XPersona (Id) & 16,878 & 484 & 484 & chit-chat & casual & colloquial \\ \bottomrule \end{tabular} } \caption{Task statistics and descriptions. $^{\dagger}$We create new train and test splits.} \label{tab:dataset} \end{table*}

\section{Related Work}

\paragraph{NLP Benchmarks.} Numerous benchmarks have recently emerged, which have catalyzed advances in monolingual and cross-lingual transfer learning. These include NLU benchmarks for low-resource languages including IndoNLU~\cite{wilie2020indonlu}, IndoLEM~\cite{koto-etal-2020-indolem}, and those focusing on Filipino~\cite{cruz2020establishing}, Bangla~\cite{bhattacharjee2021banglabert}, and Thai~\cite{lowphansirikul2021wangchanberta}; neural machine translation (MT) datasets for low-resource scenarios including Indonesian \cite{guntara2020benchmarking}, African languages \cite{duh-etal-2020-benchmarking,lakew2020low}, and Nepali and Sinhala \cite{guzman2019flores}; and large-scale multilingual benchmarks such as XTREME \cite{Hu2020xtreme}, MTOP \cite{li2020mtop}, and XGLUE \cite{Liang2020xglue}. \citet{winata2021multilingual,aguilar2020lince,khanuja2020gluecos} further developed multilingual benchmarks to evaluate the effectiveness of pretrained multilingual language models. More recently, GEM \cite{gehrmann2021gem} covers NLG tasks in various languages, together with automated and human evaluation metrics. Our benchmark compiles languages and tasks that are \emph{not} covered in that prior work, such as local multilingual (Indonesian, Javanese, Sundanese, and English) MT tasks, Indonesian summarization, and Indonesian chit-chat dialogue.

\paragraph{Pretrained NLG Models.} Recently, the paradigm of pretraining-then-fine-tuning has achieved remarkable success in NLG, as evidenced by the success of monolingual pretrained NLG models. GPT-2 \cite{radford2019language}, and later GPT-3 \cite{NEURIPS2020_1457c0d6}, demonstrated that language models can perform zero-shot transfer to downstream tasks via generation. Other recent state-of-the-art models are BART \cite{lewis2020bart}, which maps corrupted documents to their originals, and the encoder-decoder T5 \cite{raffel2020exploring}, which resulted from a thorough investigation of architectures, objectives, datasets, and pretraining strategies.
These monolingual models have been generalised to the \emph{multilingual} case by pretraining the architectures on multiple languages; examples include mBART~\cite{liu2020mbart} and mT5 \cite{xue2020mt5}. In this paper, we focus on local, near-monolingual models for the languages of Indonesia, and systematically compare them on our benchmark with these larger multilingual models. \section{\texttt{IndoNLG} Benchmark} \subsection{\texttt{Indo4B-Plus} Pretraining Dataset} \label{sec:indo4b} Our \texttt{Indo4B-Plus} dataset consists of three languages: Indonesian, Sundanese, and Javanese. For the Indonesian data, we use the \texttt{Indo4B} dataset~\cite{wilie2020indonlu}. For the Sundanese and Javanese data, we collect and preprocess text from Wikipedia and CC-100~\cite{wenzek2020ccnet}. As shown in Table \ref{tab:ID4B_corpus_stats}, the total number of words in the local languages is minuscule ($\approx$~1\% combined) compared to the total number of words in the Indonesian language. In order to alleviate this problem, we rebalance the \texttt{Indo4B-Plus} corpus. Following~\citet{liu2020mbart}, we upsample or downsample the data in each language according to the following formula: \begin{align} \lambda_i = \frac{p_i^\alpha}{p_i \sum_{j=1}^{L}{p_j^\alpha}}, \end{align} where $\lambda_i$ denotes the up-/down-sampling ratio for language $i$ and $p_i$ is the percentage of language $i$ in \texttt{Indo4B-Plus}. Following~\newcite{liu2020mbart}, we set the smoothing parameter $\alpha$ to 0.7. After rebalancing, the percentage of data in the local languages increases to $\sim$3\%. \begin{table*}[!t] \centering \resizebox{0.88\textwidth}{!}{ \begin{tabular}{lrccccccc} \toprule \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{\#Params}} & \textbf{\#Enc} & \textbf{\#Dec} & \multirow{2}{*}{\textbf{\#Heads}} & \textbf{Emb.} & \textbf{Head} & \textbf{FFN} & \textbf{Language} \\ & & \textbf{Layers} & \textbf{Layers} &
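To make the rebalancing concrete, the following is a minimal Python sketch of the smoothed sampling formula above. This is our own illustration: the function name and the word counts are invented and are not the real \texttt{Indo4B-Plus} statistics.

```python
import numpy as np

def sampling_ratios(word_counts, alpha=0.7):
    """Per-language up-/down-sampling ratios, following
    lambda_i = p_i^alpha / (p_i * sum_j p_j^alpha)."""
    counts = np.asarray(list(word_counts.values()), dtype=float)
    p = counts / counts.sum()                 # language percentages p_i
    lam = p**alpha / (p * (p**alpha).sum())   # rebalancing ratios lambda_i
    return dict(zip(word_counts, lam))

# Illustrative word counts only (not the actual corpus statistics):
print(sampling_ratios({"id": 3.6e9, "su": 20e6, "jv": 25e6}))
# The two local languages receive ratios > 1, i.e. they are upsampled,
# while the dominant language is slightly downsampled.
```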
            for word in entries[1:]:
                sy.append(word)
        if line[0:11] == ' abundance:':
            entries = line.split()
            for word in entries[1:]:
                ab.append(word)

    assert (len(sy) == len(ab)), 'different elements in arrays sy (elemental symbols) and ab (abundances)'

    abu = np.ones(99) * 1e-99
    i = 0
    for item in sy:
        try:
            index = symbol.index(item)
            abu[index] = 10.**(float(ab[i]) - 12.)
        except ValueError:
            print("the symbol ", item, " is not recognized as a valid element")
        i = i + 1
    print('abu=', abu)

    while line[0:72] != " l tstd temperature pgas pe density mu":
        line = f.readline()
    line = f.readline()
    entries = line.split()

    t = [float(entries[2].replace('D', 'E'))]
    p = [float(entries[3].replace('D', 'E'))]
    ne = [float(entries[4].replace('D', 'E')) / bolk / float(entries[2].replace('D', 'E'))]
    # assuming hydrostatic equilibrium and negligible radiation and turbulent pressure
    dm = [float(entries[3].replace('D', 'E')) / 10.**logg]

    for i in range(nd - 1):
        line = f.readline()
        entries = line.split()
        t.append(float(entries[2].replace('D', 'E')))
        p.append(float(entries[3].replace('D', 'E')))
        ne.append(float(entries[4].replace('D', 'E')) / bolk / float(entries[2].replace('D', 'E')))
        dm.append(float(entries[3].replace('D', 'E')) / 10.**logg)

    vmicro = 0.0
    while (line[0:6] != " greli"):
        line = f.readline()
        if line == '':
            print('Cannot find a value for vmicro (vturb) in the model atmosphere file ', modelfile)
            break

    if line != '':
        entries = line.split()
        vmicro = float(entries[5])

    atmos = np.zeros(nd, dtype={'names': ('dm', 't', 'p', 'ne'),
                                'formats': ('f', 'f', 'f', 'f')})
    atmos['dm'] = dm
    atmos['t'] = t
    atmos['p'] = p
    atmos['ne'] = ne

    return (teff, logg, vmicro, abu, nd, atmos)


def interp_spl(xout, x, y):
    """Interpolates in 1D using cubic splines

    Parameters
    ----------
    x: numpy array or list
        input abscissae
    y: numpy array or list
        input ordinates
    xout: numpy array or list
        array of abscissae to interpolate to

    Returns
    -------
    yout: numpy array or list
        array of interpolated values
    """

    tck = interpolate.splrep(x, y, s=0)
    yout = interpolate.splev(xout, tck, der=0)

    return (yout)


def elements(husser=False):
    """Reads the solar elemental abundances

    Parameters
    ----------
    husser: bool, optional
        when set, the abundances adopted for the Phoenix models by
        Husser et al. (2013) are used. Otherwise Asplund et al. (2005)
        are used -- consistent with the MARCS (Gustafsson et al. 2008)
        and Kurucz (Meszaros et al. 2012) model atmospheres.
    Returns
    -------
    symbol: numpy array of str
        element symbols
    mass: numpy array of floats
        atomic masses (elements Z=1-99)
    sol: numpy array of floats
        solar abundances N/N(H)
    """

    symbol = [
        'H' ,'He','Li','Be','B' ,'C' ,'N' ,'O' ,'F' ,'Ne',
        'Na','Mg','Al','Si','P' ,'S' ,'Cl','Ar','K' ,'Ca',
        'Sc','Ti','V' ,'Cr','Mn','Fe','Co','Ni','Cu','Zn',
        'Ga','Ge','As','Se','Br','Kr','Rb','Sr','Y' ,'Zr',
        'Nb','Mo','Tc','Ru','Rh','Pd','Ag','Cd','In','Sn',
        'Sb','Te','I' ,'Xe','Cs','Ba','La','Ce','Pr','Nd',
        'Pm','Sm','Eu','Gd','Tb','Dy','Ho','Er','Tm','Yb',
        'Lu','Hf','Ta','W' ,'Re','Os','Ir','Pt','Au','Hg',
        'Tl','Pb','Bi','Po','At','Rn','Fr','Ra','Ac','Th',
        'Pa','U' ,'Np','Pu','Am','Cm','Bk','Cf','Es']

    mass = [
        1.00794, 4.00260, 6.941, 9.01218, 10.811, 12.0107, 14.00674, 15.9994, 18.99840, 20.1797,
        22.98977, 24.3050, 26.98154, 28.0855, 30.97376, 32.066, 35.4527, 39.948, 39.0983, 40.078,
        44.95591, 47.867, 50.9415, 51.9961, 54.93805, 55.845, 58.93320, 58.6934, 63.546, 65.39,
        69.723, 72.61, 74.92160, 78.96, 79.904, 83.80, 85.4678, 87.62, 88.90585, 91.224,
        92.90638, 95.94, 98., 101.07, 102.90550, 106.42, 107.8682, 112.411, 114.818, 118.710,
        121.760, 127.60, 126.90447, 131.29, 132.90545, 137.327, 138.9055, 140.116, 140.90765, 144.24,
        145, 150.36, 151.964, 157.25, 158.92534, 162.50, 164.93032, 167.26, 168.93421, 173.04,
        174.967, 178.49, 180.9479, 183.84, 186.207, 190.23, 192.217, 195.078, 196.96655, 200.59,
        204.3833, 207.2, 208.98038, 209., 210., 222., 223., 226., 227., 232.0381,
        231.03588, 238.0289, 237., 244., 243., 247., 247., 251., 252.]

    if not husser:
        # Asplund, Grevesse and Sauval (2005), basically the same as
        # Grevesse N., Asplund M., Sauval A.J. 2007, Space Science Review 130, 205
        sol = [
            0.911, 10.93, 1.05, 1.38, 2.70, 8.39, 7.78, 8.66, 4.56, 7.84,
            6.17, 7.53, 6.37, 7.51, 5.36, 7.14, 5.50, 6.18, 5.08, 6.31,
            3.05, 4.90, 4.00, 5.64, 5.39, 7.45, 4.92, 6.23, 4.21, 4.60,
            2.88, 3.58, 2.29, 3.33, 2.56, 3.28, 2.60, 2.92, 2.21, 2.59,
            1.42, 1.92, -9.99, 1.84, 1.12, 1.69, 0.94, 1.77, 1.60, 2.00,
            1.00, 2.19, 1.51, 2.27, 1.07, 2.17, 1.13, 1.58, 0.71, 1.45,
            -9.99, 1.01, 0.52, 1.12, 0.28, 1.14, 0.51, 0.93, 0.00, 1.08,
            0.06, 0.88, -0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13,
            0.90, 2.00, 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
            -9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99]

        sol[0] = 1.

    else:
        # a combination of meteoritic/photospheric abundances from Asplund et al. 2009
        # chosen for the Husser et al. (2013) Phoenix model atmospheres
        sol = [
            12.00, 10.93, 3.26, 1.38, 2.79, 8.43, 7.83, 8.69, 4.56, 7.93,
            6.24, 7.60, 6.45, 7.51, 5.41, 7.12, 5.50, 6.40, 5.08, 6.34,
            3.15, 4.95, 3.93, 5.64, 5.43, 7.50, 4.99, 6.22, 4.19, 4.56,
            3.04, 3.65, 2.30, 3.34, 2.54, 3.25, 2.36, 2.87, 2.21, 2.58,
            1.46, 1.88, -9.99, 1.75, 1.06, 1.65, 1.20, 1.71, 0.76, 2.04,
            1.01, 2.18, 1.55, 2.24, 1.08, 2.18, 1.10, 1.58, 0.72, 1.42,
            -9.99, 0.96, 0.52, 1.07, 0.30, 1.10, 0.48, 0.92, 0.10, 0.92,
            0.10, 0.85, -0.12, 0.65, 0.26, 1.40, 1.38, 1.62, 0.80, 1.17,
            0.77, 2.04, 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
            -9.99, -0.54, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99]

        sol[0] = 1.
    for i in range(len(sol) - 1):
        sol[i+1] = 10.**(sol[i+1] - 12.0)

    return (symbol, mass, sol)


def lgconv(xinput, yinput, fwhm, ppr=None):
    """convolution with a Gaussian in linear lambda scale for a constant resolution

    Parameters
    ----------
    xinput: numpy float array
        wavelengths
    yinput: numpy array of floats
        fluxes
    fwhm: float
        FWHM of the Gaussian (same units as for xinput)
    ppr: float, optional
        Points per resolution element to downsample the convolved spectrum
        (default None, to keep the original sampling)

    Returns
    -------
    x: numpy float array
        wavelengths after convolution, will be a subset of xinput when that is linear,
        otherwise a subset of the linearly resampled version
    y: numpy array of floats
        fluxes after convolution
    """

    # resampling to a linear lambda wavelength scale if need be
    xx = np.diff(xinput)
    if max(xx) - min(xx) > 1.e-7:  # input not linearly sampled
        nel = len(xinput)
        minx = np.min(xinput)
        maxx = np.max(xinput)
        x = np.linspace(minx, maxx, nel)
        #y = np.interp( x, xinput, yinput)
        y = interp_spl(x, xinput, yinput)
    else:  # input linearly sampled
        x = xinput
        y = yinput
    step = x[1] - x[0]

    sigma = fwhm/2.0/np.sqrt(-2.0*np.log(0.5))
    npoints = 2*int(3*fwhm/2./step)+1
    half = npoints * step / 2.
    xx = np.linspace(-half, half, npoints)
    kernel = np.exp(-(xx-np.mean(xx))**2/2./sigma**2)
    kernel = kernel/np.sum(kernel)

    y = np.convolve(y, kernel, 'valid')
    #y = ss.fftconvolve(y,kernel,'valid')
    print(npoints)

    edge = int(npoints/2)
    x = x[edge:-edge]

    print(xinput.size, x.size, y.size)

    if ppr is not None:
        fac = int(fwhm / step / ppr)
        subset = np.arange(x.size / fac, dtype=int) * fac
        x = x[subset]
        y = y[subset]

    return (x, y)


def vgconv(xinput, yinput, fwhm, ppr=None):
    """convolution with a Gaussian in log lambda scale for a constant resolving power

    Parameters
    ----------
    xinput: numpy float array
        wavelengths
    yinput: numpy array of floats
        fluxes
    fwhm: float
        FWHM of the Gaussian (km/s)
    ppr: float, optional
        Points per resolution element to downsample the convolved spectrum
        (default None, to keep the original sampling)

    Returns
    -------
    x: numpy float array
        wavelengths after convolution, will be a subset of xinput when that is
        equidistant in log lambda, otherwise a subset of the resampled version
    y: numpy array of floats
        fluxes after convolution
    """

    # resampling to ln(lambda) if need be
    xx = np.diff(np.log(xinput))
    if max(xx) - min(xx) > 1.e-7:  # input not equidistant in loglambda
        nel = len(xinput)
        minx = np.log(xinput[0])
        maxx = np.log(xinput[-1])
        x = np.linspace(minx, maxx, nel)
        step = x[1] - x[0]
        x = np.exp(x)
        #y = np.interp( x, xinput, yinput)
        y = interp_spl(x, xinput, yinput)
    else:
        x = xinput
        y = yinput
        step = np.log(xinput[1]) - np.log(xinput[0])

    fwhm = fwhm/clight  # inverse of the resolving power
    sigma = fwhm/2.0/np.sqrt(-2.0*np.log(0.5))
    npoints = 2*int(3*fwhm/2./step)+1
    half = npoints * step / 2.
    xx = np.linspace(-half, half, npoints)
    kernel = np.exp(-(xx-np.mean(xx))**2/2./sigma**2)
    kernel = kernel/np.sum(kernel)

    y = np.convolve(y, kernel, 'valid')

    edge = int(npoints/2)
    x = x[edge:-edge]

    #print(xinput.size,x.size,y.size)

    if ppr is not None:
        fac = int(fwhm / step / ppr)
        print(fwhm, step, ppr, fac)
        subset = np.arange(x.size / fac, dtype=int) * fac
        x = x[subset]
        y = y[subset]

    return (x, y)


def rotconv(xinput, yinput, vsini, ppr=None):
    """convolution with a rotation profile

    Parameters
    ----------
    xinput: numpy float array
        wavelengths
    yinput: numpy array of floats
        fluxes
    vsini: float
        projected rotational velocity (km/s)
    ppr: float, optional
        Points per resolution element to downsample the convolved spectrum
        (default None, to keep the original sampling)

    Returns
    -------
- Korea.

### Center of Excellence in the Area of Human and Robotic Structures Technologies for Lunar and Planetary Exploration

Matthew K. Ronning, Mo-Yuen Chow, Harvey T. Banks, Ashok Gopalarathnam, Vinod K. Saxena, Gregory D. Buckner, Mohammad Noori, Fuh G. Yuan, Jack R. Edwards, Fred R. DeJarnette, Robert T. Nagel
10/01/02 - 09/25/17

No abstract currently available. This project is sponsored by National Institute of Aerospace.

### Collaborative Research: GOALI: AIS Gene Library Based Real-time Resource Allocation On Time-sensitive Large-scale Multi-rate Systems

Mo-Yuen Chow
09/01/08 - 08/31/13

In this project, we propose to use a gene library to classify and detect abnormality in vehicle movements in various traffic environments and to provide optimal real-time sampling-rate adaptations and emergency interventions. The gene library stores relevant information in memory for real-time fetching, to avoid on-demand optimization and computation. Artificial Immune Systems (AIS) optimization is used to tune the gene library so that it can be used in real time and can adapt to its environment for optimal solutions. The primary purpose of IDEA is to prevent accidents caused by abnormal behavior of impaired drivers. At this phase, attention is directed at assistive technology which warns the driver, and drivers in the impact neighborhood, of potential safety problems and generates the necessary corrective commands only under emergency situations. In order to achieve this goal, we propose to use the Intelligent Space (iSpace) concept, which is to integrate globally distributed sensor agents, distributed actuator agents, and distributed controller agents over networks to make optimal local decisions which cannot otherwise be achieved. This project is sponsored by National Science Foundation (NSF).

### Distributed Control of FREEDM System (former Distributed Grid Intelligence)

Mo-Yuen Chow
09/01/08 - 08/31/14

This project will investigate a FREEDM system plug-and-play function for automating updates in feeder circuit topology and impedance modeling when a new device is added or the circuit is reconfigured. This project is sponsored by NCSU Future Renewable Electric Energy Delivery and Management Systems Center (FREEDM).

### I-Corps: iSpace Technology for Novel Traffic Light Managements

Mo-Yuen Chow
04/15/13 - 09/30/14

The iSpace technology developed by the Advanced Diagnosis, Automation, and Control (ADAC) Lab at North Carolina State University (NCSU) provides a solution to meet the demands for efficiency, scalability, security and robustness in the control and management of cyber-physical industrial applications, such as transportation systems, energy management systems and power grids. In this proposal, we focus on applying iSpace technology to substantially improve traffic signal management performance in transportation systems. Such a solution will reduce traffic congestion, fuel consumption, gas emissions and driver frustration by minimizing vehicle stops and delays at intersections. Therefore, we plan to investigate and identify the potential customers and the appropriate markets, and test the commercial feasibility of iSpace technology, through this Innovation Corps project. This project is sponsored by National Science Foundation (NSF).
### Verification of FREEDM System Control Robustness

Mo-Yuen Chow
09/01/13 - 08/31/14

Abstract: A rigorous system-level analysis of the stability, robustness and convergence of distributed control algorithms on FREEDM systems, considering the interactions between the distributed control algorithms and the FREEDM Power Management and Control through the SST. This project is sponsored by NCSU Future Renewable Electric Energy Delivery and Management Systems Center (FREEDM).

### Is Wireless Channel Dependable for Security Provisioning?

Huaiyu Dai, Peng Ning
08/15/13 - 07/31/16

Wireless security is receiving increasing attention as wireless systems become a key component in our daily life as well as in critical cyber-physical systems. Recent progress in this area exploits physical-layer characteristics to offer enhanced, and sometimes the only available, security mechanisms. The success of such security mechanisms depends crucially on the correct modeling of the underlying wireless propagation. It is widely accepted that wireless channels decorrelate fast over space, and half a wavelength is the key distance metric used in existing wireless physical-layer security mechanisms for security assurance. We believe that this channel correlation model is incorrect in general: it leads to a wrong hypothesis about the inference capability of a passive adversary and results in a false sense of security, which will expose legitimate systems to severe threats with little awareness. In this project, we seek to understand the fundamental limits in passive inference of wireless channel characteristics, and further advance our knowledge and practice in wireless security. This project is sponsored by National Science Foundation (NSF).

### TC: Small: Defending against Insider Jammers in DSSS- and FH-Based Wireless Communication Systems

Mladen A. Vouk, Huaiyu Dai, Peng Ning
09/01/10 - 08/31/14

Jamming resistance is crucial for applications where reliable wireless communication is required, such as rescue missions and military operations. Techniques such as Frequency Hopping (FH) and Direct Sequence Spread Spectrum (DSSS) have been used as countermeasures against jamming attacks. However, these anti-jamming techniques require that senders and receivers share a secret key to communicate with each other, and thus are vulnerable to insider attacks in which the attacker knows this shared key. The objective of this project is to develop a suite of techniques to defend against insider jammers in DSSS- and FH-based wireless communication systems. We will develop novel and efficient insider-jamming-resistant techniques for both DSSS- and FH-based wireless communication systems. Our proposed research consists of two thrusts. The first thrust is to develop novel spreading/despreading techniques, called DSD-DSSS (which stands for DSSS based on Delayed Seed Disclosure), to enhance DSSS-based wireless communication to defend against insider jamming threats, while the second thrust is to develop a new approach, called USD-FH (which stands for FH based on Uncoordinated Seed Disclosure), to enable senders and receivers using FH to communicate without pre-establishing any common secret hopping pattern. A key property of our new approaches is that they do not depend on any secret shared by the sender and receivers. Our solution has the potential to significantly enhance the anti-jamming capability of today's wireless communication systems. This project is sponsored by National Science Foundation (NSF).
### Toward a General Automatic Reasoning Framework for Networked Systems

Huaiyu Dai
07/01/10 - 06/30/14

This work intends to contribute to an automatic reasoning framework for networked systems through research in two areas: structured variational methods and their distributed implementation, and distributed clustering. Interaction and integration of these two components will also be explored, leading to a holistic cross-layer approach for automatic reasoning in networked systems. This project is sponsored by National Science Foundation (NSF).

### CSR: Small: Enabling Aggressive Voltage Scaling for Real-Time and Embedded Systems with Inexpensive Yet Efficient Power Conversion

Alexander G. Dean, Subhashish Bhattacharya, Julie Schwindt
08/15/11 - 07/31/14

The goal of this project is to improve the energy efficiency of cost-sensitive embedded systems through aggressive voltage scaling. We seek to increase (at minimal cost) the number of separate voltage domains in a system and run each as efficiently as possible. Dynamic voltage and frequency scaling methods are standard on systems which can afford the additional costs to save the energy. Supporting multiple voltage domains incurs overhead such as additional voltage regulators/converters and level translators. Improving energy efficiency allows a designer to extend a system's operational life, reduce battery size and weight, and use more sophisticated (and computationally intense) control methods. Recent advances in subthreshold-voltage processor design offer tremendous opportunities to reduce energy requirements for computation. Energy-scavenging, ultracapacitor and wireless power transmission technologies are rapidly advancing, further increasing opportunities. We plan to reduce costs by moving control into software and general-purpose hardware on common microcontrollers. A second challenge is further reducing system cost, size and weight by removing noise-filtering components. When a switched-mode power supply (SMPS) switches current on and off through the inductor, it generates wideband harmonic noise. We will use real-time system approaches to prevent SMPS switching activity from coinciding with noise-sensitive operations. There are additional challenges: determining practical power supply and system architectures, adapting existing methods in power-aware real-time and non-real-time scheduling to such systems, implementing this mixed-criticality system with robust software in a practical way on real hardware (which partitions critical code from application code, lest that software become critical), and evaluating the trade-offs among the many design parameters. This project is sponsored by National Science Foundation (NSF).

### CAREER: Complex Polarization Gratings - Extreme
4), whereas for σ > 22.9° (e.g. 40.1°) they again get closer and closer to 1. This latter fact can be understood by noting that increasing σ is analogous to enlarging the time scale set by K, as the characteristic time scale of dephasing gets shorter for a fixed K. Based on these observations, we conclude that there is a certain intermediate time scale at which eigenvalues >1 are estimated from the data in the presence of sufficiently strong non-Markovian noise of the kind described in this section. Section “Frame mismatch accumulation” discusses a model with a different kind of time correlation, leading to a spectral footprint which is incompatible with that of a TPCP map.

#### Non-Markovianity: coherent revivals

In order to better understand the occurrence of eigenvalue estimates |λest| > 1, we apply the matrix-pencil method to a signal (of a somewhat different physical origin) which has a revival over the time period set by K. It is well known that, in the exchange of energy between a two-level atom and a bosonic mode, the Rabi oscillations of the two-level atom are subject to temporal revivals. These revivals are due to the fact that the bosonic driving field is not purely classical but rather gets entangled with the state of the qubit via the Jaynes–Cummings interaction. In particular, for a coherent driving field with coherent amplitude α and average photon number $$\bar n = |\alpha |^2$$, the probability for the atom to be excited equals (see section 3.4.3 in ref. 20):

$$P_{\mathrm{e}}(t) = \frac{1}{2} + \frac{1}{2}\mathop {\sum}\limits_{n = 0}^\infty {p_\alpha } (n){\mathrm{cos}}(\Omega t\sqrt {n + 1} ),$$ (20)

with $$p_\alpha (n) = {\mathrm{exp}}( - |\alpha |^2)\frac{{|\alpha |^{2n}}}{{n!}}$$. We consider $$\bar n = 5$$ and sample the damped oscillatory function $$P_{\mathrm{e}}(t) - \frac{1}{2}$$ at regular intervals kΩδt with Ωδt = 0.05 and k = 0, …, K = 900. The signal function $$g(t) = P_{\mathrm{e}}(t) - \frac{1}{2}$$ contains eigenvalues equal to $$\lambda _n = {\mathrm{exp}}( \pm i{\kern 1pt} 0.05\sqrt {n + 1} )$$ with amplitudes given by the Poisson distribution pα(n) with mean photon number $$\bar n$$. We observe that the matrix-pencil method finds eigenvalues >1, see Fig. 5, which contribute significantly (p < 0.01 via F-test) to the reconstructed signal. We can understand this feature of eigenvalues >1 as a way in which the matrix-pencil method handles revivals: the signal has more spectral content than can be resolved from the window of time given by K; in particular, there is no hard cut-off on the number of eigenvalues that contribute. We have observed that an analysis of the signal over a longer period of time, that is, a larger K up to K = 5000, gives eigenvalues whose norm converges to at most 1.

## Discussion

We have introduced spectral quantum tomography, a simple method that uses tomographic data of the repeated application of a noisy quantum gate to reconstruct the spectrum of this quantum gate in a manner resistant to SPAM errors. We have experimentally validated our method on one- and two-qubit gates and have also numerically investigated its behavior in the presence of temporally correlated non-trivial error models. The effective upshot of leakage and non-Markovian noise is that the signal will have more spectral content than what can be resolved given a chosen sequence length K, leading to unphysical features in the spectrum such as an eigenvalue estimate >1, or the absence of a real eigenvalue.
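For concreteness, the following is a minimal numpy sketch of the simplest variant of the matrix-pencil method applied to the revival signal of Eq. (20). It is our own illustration (plain pencil with parameter L, no noise filtering or model-order selection), not the exact pipeline used in the experiments.

```python
import numpy as np
from math import factorial

def matrix_pencil(g, L=None):
    """Estimate signal poles lambda_i from samples g[k] ~ sum_i A_i lambda_i**k."""
    K = len(g)
    L = K // 2 if L is None else L                    # pencil parameter
    Y = np.array([g[i:i + L] for i in range(K - L)])  # Hankel-type data matrix
    Y0, Y1 = Y[:-1], Y[1:]                            # unshifted / shifted blocks
    # The nonzero eigenvalues of pinv(Y0) @ Y1 are the signal poles.
    lam = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    return lam[np.argsort(-np.abs(lam))]              # sorted by magnitude

# Rabi signal of Eq. (20): g(k) = P_e(k*dt) - 1/2, with nbar = 5 and K = 900.
nbar, dt = 5.0, 0.05
n = np.arange(80)  # truncate the Poisson sum; weights beyond n=80 are negligible
p = np.array([np.exp(-nbar) * nbar**m / factorial(m) for m in n])
k = np.arange(901)
g = 0.5 * (p[None, :] * np.cos(dt * np.outer(k, np.sqrt(n + 1.0)))).sum(axis=1)

lam = matrix_pencil(g)
print(np.abs(lam[:5]))  # leading pole magnitudes; estimates can exceed 1
```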
Even though we have seen in our examples that a physical spectrum can be regained by going to larger K, depending on the noise model this convergence may be very slow, requiring much data-taking time. Hence, these unphysical features are useful markers for deviations from our model of repeated TPCP qubit maps $${\cal{S}}^k$$. We view it as an open question how well one can reliably distinguish different sources of deviations.

An interesting application of the spectral tomography method could be the assessment of logical gates on encoded quantum information in a SPAM-resistant fashion. In this logical scenario (for, say, a single logical qubit), one first prepares the eigenstates of the logical Pauli operators $$\overline X ,\overline Y$$, and $$\overline Z$$. One then applies a unit of error-correction k = 0, …, K times: a single unit could be, say, the repeated error correction for L rounds of a distance-L surface code. Or a unit is the application of a fault-tolerant logical gate, e.g., by means of code-deformed error correction or a transversal logical gate followed by a unit of error correction. After k units, one measures the logical Pauli operators fault-tolerantly and repeats experiments to obtain the logical signal $$\overline g (k)$$. Studying the spectral features of such a logical channel will give information about the efficacy of the quantum error correction unit and/or the applied logical gate, while departures from the code space or a need to time-correlate syndrome data beyond the given QEC unit can show up as leakage and non-Markovian errors.

## Methods

### Single-qubit case with non-diagonalizable matrix T

In general, a matrix T can be brought to Jordan normal form by a similarity transformation, i.e., T = VJV−1 with $$J = \oplus_i J_i$$, where each Jordan block Ji is of the form $$J_i = \left( {\begin{array}{*{20}{c}} {\lambda _i} & 1 & {} & {} \\ {} & {\lambda _i} & \ddots & {} \\ {} & {} & \ddots & 1 \\ {} & {} & {} & {\lambda _i} \end{array}} \right),$$ (21) see, e.g., Theorem 3.1.11 in ref. 31. T is diagonalizable when each Jordan block is fully diagonal. An example of a non-diagonalizable Lindblad superoperator on a single qubit has been constructed in ref. 32. Using this, one can easily get a single-qubit superoperator $${\cal{S}}$$ for which the traceless block of the Pauli transfer matrix is a non-diagonalizable matrix T, as follows. Let $${\cal{S}}(\rho ) = \exp ({\cal{L}}\epsilon )(\rho ) \approx \rho + \epsilon {\cal{L}}(\rho ) + O(\epsilon ^2)$$ with $${\cal{L}}(\rho ) = - i[\frac{{yZ}}{2},\rho ] + {\cal{D}}[(2x)^{1/2}\sigma _ - ](\rho ) + {\cal{D}}[y^{1/2}X](\rho )$$ with $${\cal{D}}[A](\rho ) = A\rho A^\dagger - \frac{1}{2}\{ A^\dagger A,\rho \}$$ and real parameters x, y ≥ 0. This implies that $${\cal{S}}$$ has the 4 × 4 Pauli transfer matrix $$S = \left( {\begin{array}{*{20}{c}} 1 & 0 & 0 & 0 \\ 0 & {1 - \epsilon x} & { - \epsilon y} & 0 \\ 0 & {\epsilon y} & {1 - \epsilon (x + 2y)} & 0 \\ {2\epsilon x} & 0 & 0 & {1 - 2\epsilon (x + y)} \end{array}} \right) + O(\epsilon ^2).$$ Taking some small $$\epsilon$$ and x ≠ 0, one can check that the submatrix T does not have 3 linearly independent eigenvectors and has a pair of degenerate eigenvalues, so T is not diagonalizable. When we take x = 0, $${\cal{S}}$$ is unital, that is, $${\cal{S}}$$(I) = I, and the submatrix T is not diagonalizable either.
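As a quick sanity check of this example, the following sympy sketch (our own, with illustrative parameter values, and keeping only the first-order-in-ε matrix shown above) confirms the degenerate eigenvalue pair and the failure of diagonalizability:

```python
import sympy as sp

eps = sp.Rational(1, 100)                 # illustrative small epsilon
x, y = sp.Rational(1, 10), sp.Rational(1, 5)

# First-order Pauli transfer matrix S of the example channel
# (the O(eps^2) terms are dropped, as in the text).
S = sp.Matrix([
    [1,        0,          0,                  0                ],
    [0,        1 - eps*x, -eps*y,              0                ],
    [0,        eps*y,      1 - eps*(x + 2*y),  0                ],
    [2*eps*x,  0,          0,                  1 - 2*eps*(x + y)],
])
T = S[1:, 1:]                             # traceless block

print(T.eigenvals())          # {1 - eps*(x+y): 2, 1 - 2*eps*(x+y): 1}
print(T.is_diagonalizable())  # False: the degenerate pair shares one eigenvector
```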
Even though a matrix T is not always diagonalizable, there still exists the so-called Schur triangular form for any matrix T (see ref. 31). This form says that $$T = W(D + E)W^\dagger$$, with W a unitary matrix, D a diagonal matrix with the eigenvalues of T, and E a strictly upper-triangular “nilpotent” matrix with non-zero entries only above the diagonal. Since the N × N matrix E is strictly upper-triangular, one has $$\mathrm{Tr}[D^iE^j] = 0$$ for all j ≠ 0. If we use this form in Eq. (12), one obtains for any k $$g^{{\mathrm{NO}}\,{\mathrm{SPAM}}}(k) = {\mathrm{Tr}}\left[ {T^k} \right] = {\mathrm{Tr}}\left[ {(D + E)^k} \right] = {\mathrm{Tr}}\left[ {D^k} \right],$$ (22) since any product of the form $$D^{l_1}E^{l_2}D^{l_3} \ldots E^{l_m}$$ with some non-zero $$l_i > 0$$ is a matrix with zeros on the diagonal. In the case of SPAM errors and non-diagonalizable T, we consider $$g(k) = {\mathrm{Tr}}[W^\dagger T_{{\mathrm{prep}}}T_{{\mathrm{meas}}}W(D + E)^k],$$ (23) where $$W^\dagger T_{{\mathrm{prep}}}T_{{\mathrm{meas}}}W$$ is not the identity matrix due to SPAM errors, implying that g(k) can depend on E and have a non-exponential dependence on k. Thus, in the special case of a non-diagonalizable matrix T, the signal g(k) would not have the dependence on the eigenvalues as in Eq. (13). In particular, we can
node[above, black] {$\text{NS5}_2$}; \draw[thick, midnightblue] (-1,0.5) -- ++(0:1); \draw[line width=3pt, white] (0,0.4) -- ++(0:1.5); \draw[thick, firebrick] (0,0.4) -- ++(0:1.5); \node at (1.9,0.4) {$\dots$}; \draw[thick, firebrick] (2.2,0.4) -- ++(0:1.5); \draw[line width=3pt, white] (0,0.9) -- ++(90:0.2); \draw[line width=3pt, white] (0,2.1) -- ++(90:0.2); \draw[thick, olive] (0,0) -- ++(90:2.5) node[above, black] {$\text{NS5}_1$}; \draw[line width=3pt, white] ($(-1,1.8)-(0:{0.2/sqrt(2)})$) -- ++($(0:2.5)+(0:{0.2/sqrt(2)})$); \draw[thick, midnightblue] ($(-1,1.8)-(0:{0.2/sqrt(2)})$) -- ++($(0:2.5)+(0:{0.2/sqrt(2)})$); \node at (1.9,1.8) {$\dots$}; \draw[thick, midnightblue] (2.2,1.8) -- ++($(0:0.5)-(0:{0.2/sqrt(2)})$); \draw[thick, firebrick] ($(2.7,1.9)-(0:{0.2/sqrt(2)})$) -- ++($(0:1)+(0:{0/sqrt(2)})$); \draw[line width=3pt, shift={(45:-0.2)}, white] (2.7,0.5) -- ++(90:1.2); \draw[thick, shift={(45:-0.2)}, olive] (2.7,0) -- ++(90:2.5) node[above, black] {$\text{NS5}_N$}; \node at (-0.5,1.5) {$\vdots$}; \node at (3.2,1.6) {$\vdots$}; \draw[decorate, decoration={brace}, xshift=-4pt] ($(-1,0.45)-(0:{0.2/sqrt(2)})$) -- ++(90:1.4) node[left=4pt, pos=0.5] {$k$ $\text{D4}^\suarrow$}; \draw[decorate, decoration={brace, mirror}, xshift=4pt] ($(3.7,0.35)+(0:{0.2/sqrt(2)})$) -- ++(90:1.6) node[right=4pt, pos=0.5] {$k$ $\text{D4}^\sdarrow$}; \draw[line width=3pt, white, shift={(45:0.2)}] ($(1,0.6)+(45:-0.4)$) -- ++(-135:0.218); \draw[densely dashed, thick, shift={(45:0.2)}] ($(1,0.6)$) -- ++(-135:1.2) node[below, shift={(-0.1,0)}] {$\text{D2}^\sdashuarrow$}; \draw[densely dashed,thick] (-1,2.2) node[left] {$\text{D0}^\sdashrarrow$} -- ++($(0:1)$); \end{tikzpicture} } \qquad \subfloat[\label{fig:LQ}]{ \begin{tikzpicture}[align at bottom] \draw (0,0) node [fnode, minimum size=12pt] {$k$} -- (1,0) node[gnode, minimum size=12pt] {$k$} -- (2,0) node[gnode, minimum size=12pt] {$k$} -- (2.5,0); \node at (2.75,0) {$\,\dots$}; \draw (3,0) -- (3.5,0) node[gnode, minimum size=12pt] {$k$} -- (4.5,0) node [fnode, minimum size=12pt] {$k$}; \draw[decoration={brace, mirror}, decorate, yshift=-10pt] (-0.25,0) -- node[pos=0.5, yshift=-10pt] {$N+1$ nodes} (4.75,0); \node at (0,-1.2) {}; \end{tikzpicture} } \caption{(a) A D4--NS5 brane configuration of Hanany--Witten type with additional D0- and D2-branes. (b)~The linear quiver for the theory realized by the D4--NS5 brane configuration.} \label{tab:BC-HW} \end{figure} Let us decompactify the holomorphic surface $C = E$ to $\mathbb{C}$. Then, the part of the system consisting of the D4- and NS5-branes is a well-known brane configuration studied in Witten's classic paper~\cite{Witten:1997sc}, which builds on his earlier work~\cite{Hanany:1996ie} with Hanany. The D4--NS5 brane configuration realizes a four-dimensional $\CN = 2$ supersymmetric gauge theory on $\mathbb{R}^2 \times T^2$. This theory is described by a linear quiver shown in Figure~\ref{fig:LQ}. A circle node represents a vector multiplet for an $\mathrm{SU}(k)$ gauge group, a square node an $\mathrm{SU}(k)$ flavor group, and an edge a bifundamental hypermultiplet. The value $\phi_x^{i+1} - \phi_x^i$ determines the gauge coupling of the $i$th $\mathrm{SU}(k)$ gauge group, while the difference of the periodic scalars on $\text{NS5}_i$ and $\text{NS5}_{i+1}$ gives the $\theta$-angle for this group; together they form a complexified gauge coupling. 
The positions $z^\uarrow_\gamma$ and $z^\darrow_\gamma$ of the D4-branes in $C$ determine the masses of the hypermultiplets charged under the left and right $\mathrm{SU}(k)$ flavor groups, respectively. For generic values of $\phi_y^i$, the theory is in the Higgs phase in which the gauge symmetry is completely broken. The topological twist used in the construction of the six-dimensional topological--holomorphic theory becomes the Donaldson--Witten twist of the linear quiver theory, as can be seen as follows. If there are only the NS5- and D4-branes, the dualities used above can be applied to a more general setup where $M$ is the product of a three-manifold $W$ and $S^1$, instead of $\mathbb{R}^2 \times T^2$. By dimensional reduction on $S^1$, the linear quiver theory reduces to a three-dimensional $\CN = 4$ supersymmetric gauge theory on $W$. There are two topological twists for a general $\CN = 4$ supersymmetric gauge theory~\cite{Blau:1996bx}, and what we get here is the one using the $\mathrm{SU}(2)$ R-symmetry coming from the rotation symmetry of $\mathbb{R}^3_{679}$. This is known to be the dimensional reduction of the Donaldson--Witten twist. The presence of the B-field and other $\eps$-dependent part of the background has the effect of introducing the standard $\Omega$-deformation. A quick way to see this is to note that if we apply S-duality, T-duality in the horizontal direction of $T^2$, and T-duality on $E$ to the brane configuration in Table~\ref{tab:BC-BT}, we arrive at an almost identical Hanany--Witten configuration, in which $E$ is replaced with the dual elliptic curve~$E^\vee$. The linear quiver theory realized by this brane configuration is clearly subjected to the standard $\Omega$-deformation because the last T-duality is applied to a twisted product of $\mathbb{R}^2$ and $E$ and, as discussed earlier, this is how the standard $\Omega$-deformation is constructed. The theories realized by the two Hanany--Witten configurations are related by a diffeomorphism between the elliptic curves, so the deformations they receive are the same. The D0-branes insert local operators in the linear quiver theory, while the D2-branes create surface operators supported on $\{0\} \times T^2$. In particular, the D0-branes act on the partition function of the $\Omega$-deformed linear quiver theory as a transfer matrix. Let us consider the situation where all $\text{D4}^\uarrow_\gamma$ and $\text{D4}^\darrow_\gamma$ end on the same NS5-brane, say $\text{NS5}_1$. In this case, this transfer matrix is constructed from $k$ copies of a rational version of $L^{(N,1)}$ corresponding to the decompactification of $E$ to $\mathbb{C}$.% \footnote{The dynamical parameter is absent for $C = \mathbb{C}$ as we explain in section~\ref{sec:2dNS}, so the decompactification acts as if wrapping $T^2$ with a surface operator and then taking the rational limit. The transfer matrix still consists of $L^{(N,1)}$ if the positions of the 't Hooft lines are pairwise interchanged. To be precise, the L-operator that enters the transfer matrix is not equal but gauge equivalent to $L^{(N,1)}$ because we have defined $L^{(N,1)}$ as the L-operator in the background with $\CA_x = \CA_y = 0$.} If we further specialize to the case $N = 2$, these L-operators are R-matrices for the rational six-vertex model (the rational limit of the eight-vertex model) whose vertical lines carry Verma modules of $\mathfrak{sl}_2$. 
The module structure comes from dynamical creation and annihilation of D2-branes stretched between $\text{D4}^\darrow_\gamma$ and $\text{NS5}_2$~\cite{Dorey:2011pa}. A transfer matrix of the rational six-vertex model is a generating function of the conserved charges of the XXX spin chain. Thus, our brane construction naturally explains the appearance of the ``noncompact'' XXX spin chain of length $k$, whose spins take values in Verma modules of $\mathfrak{sl}_2$, from the $\Omega$-deformed $\CN = 2$ supersymmetric gauge theory with a single $\mathrm{SU}(k)$ gauge group and two fundamental hypermultiplets \cite{Nekrasov:2009rc, Dorey:2011pa}. This phenomenon generalizes to any $N \geq 2$, for which an $\mathfrak{sl}_N$ spin chain arises~\cite{Chen:2011sj, Nekrasov:2012xe, Nekrasov:2013xda}. Now let us make $C$ compact again, taking $C = E$. Then, the D4--NS5 brane configuration realizes a six-dimensional lift of the linear quiver theory compactified on $E$, as one can see by applying T-duality on $E$. Correspondingly, the six-vertex model is promoted to the eight-vertex model, whose transfer matrix generates the conserved charges of the XYZ spin chain. If we compactify only one direction so that $C = \mathbb{C}^\times$, the brane configuration produces a five-dimensional gauge theory and the XXZ spin chain. \subsection{Nekrasov--Shatashvili realization of compact spin chains} \label{sec:2dNS} In the same brane configuration, the crossings of the D0- and D2-branes create transfer matrices constructed from R-matrices in the vector representation of $\mathfrak{sl}_N$. Therefore, the $\mathfrak{sl}_N$ spin chains with spins in the vector representation also appear in this setup. It is interesting to look at these spin chains from the point of view of the D2-branes. For the moment let us take $N = 2$, so there are two NS5-branes. The possible configurations of $n$ D2-branes ending on either NS5-brane are classified by an integer $M$ such that $0 \leq M \leq n$, namely the number of D2-branes ending on $\text{NS5}_2$. This is the magnon number of the spin chain, counting the total number of ``up'' spins in the chain. A case with $M = 2$ is illustrated in Figure~\ref{fig:D2-NS5}. 
\begin{figure} \centering \begin{tikzpicture}[align at bottom] \fill[brown!8] (-2,0) -- ++(-135:0.4) -- ++(0:4.5) -- ++(45:0.8) -- ++(0:-4.5) -- cycle; \begin{scope}[shift={($(-3.5,0)+(-135:0.4)$)}] \draw[->,>=stealth] (0,0) -- ++(90:0.5) node[above] {$z$}; \draw[->,>=stealth] (0,0) -- ++(0:0.5) node[right, yshift=2pt] {$x^9$}; \draw[->,>=stealth] (0,0) -- ++(45:0.5) node[above right=-2pt] {$x^8$}; \end{scope} \draw[densely dashed,thick] ($(-2,1.2)+(0:{0.2/sqrt(2)})$) -- ++(0:2); \draw[densely dashed,thick] ($(-2,2)+(0:{0.2/sqrt(2)})$) -- ++(0:2); \draw[-,thick, shift={(45:0.2)}, olive] (0,0) -- ++(90:2.5) node[above, black] {$\text{NS5}_1$}; \draw[line width=3pt, white] ($(-2,0.4)-(0:{0.2/sqrt(2)})$) -- ++(0:3); \draw[line width=3pt, white] ($(-2,0.8)-(0:{0.2/sqrt(2)})$) -- ++(0:3); \draw[densely dashed,thick] ($(-2,0.4)-(0:{0.2/sqrt(2)})$) -- ++(0:4); \draw[densely dashed,thick] ($(-2,0.8)-(0:{0.2/sqrt(2)})$) -- ++(0:4); \draw[-,thick, shift={(45:-0.2)}, olive] (2,0) -- ++(90:2.5) node[above, black] {$\text{NS5}_2$}; \node[shift={(0:{0.2/sqrt(2)})}] at (-1,1.7) {$\vdots$}; \draw[decorate, decoration={brace,mirror}, xshift=4pt] ($(2,0.35)-(0:{0.2/sqrt(2)})$) -- ++(90:0.5) node[right=4pt, pos=0.5] {$M$ $\text{D2}^\sdashuarrow$}; \draw[decorate, decoration={brace}, xshift=-4pt] (-2,1.15) -- (-2,2.05) node[left=4pt, pos=0.5] {$n - M$ $\text{D2}^\sdashuarrow$}; \end{tikzpicture} \caption{A brane configuration for a two-dimensional $\CN = (4,4)$ supersymmetric gauge theory.} \label{fig:D2-NS5} \end{figure} In the case when $C = \mathbb{C}$ and the $\Omega$-deformation is absent, the D2--NS5 brane configuration with fixed $M$ realizes an $\CN = (4,4)$ supersymmetric gauge theory on~$T^2$. This theory has a $\mathrm{U}(M)$ gauge group and a hypermultiplet in the bifundamental representation of the gauge group and a $\mathrm{U}(n)$ flavor symmetry. The
an essentially unique section, i.e.~the space of sections \[\operatorname{Fun}_{\mathcal{D}'}(\mathcal{D}',\mathcal{D})\coloneqq \operatorname{Fun}(\mathcal{D}',\mathcal{D}) \times_{\operatorname{Fun}(\mathcal{D}',\mathcal{D}')} \{id_{\mathcal{D}'}\}\] is contractible. We can choose one such section $F: \mathcal{D}'\rightarrow \mathcal{D}$ and have thus constructed an interesting functor. \begin{remark}\label{rem:twistdef} We illustrate the above procedure with the construction of the twist and cotwist functors associated to an adjunction of stable $\infty$-categories in \Cref{twistconstr}, as appearing in \cite{DKSS19}. Before doing so, let us comment on an equivalent way to describe the twist and cotwist functors. Consider an adjunction $\mathcal{M}\rightarrow \Delta^1$ of stable $\infty$-categories, associated to a pair of adjoint functors $F:\mathcal{A}\leftrightarrow \mathcal{B}:G$, and choose a unit $u$ and counit $cu$. Then the twist functor $T_\mathcal{A}$ defined in \Cref{twistconstr} is equivalent to the functor given by the cofiber of $u$ in the stable $\infty$-category $\operatorname{Fun}(\mathcal{A},\mathcal{A})$. Dually, the cotwist functor $T_\mathcal{B}$ is equivalent to the fiber of $cu$ in the stable $\infty$-category $\operatorname{Fun}(\mathcal{B},\mathcal{B})$. We will use the definition provided in \Cref{twistconstr} because it is better applicable in proofs. \end{remark} \begin{construction}\label{twistconstr} Let $p:\mathcal{M}\rightarrow \Delta^1$ be an adjunction between stable $\infty$-categories $\mathcal{A}$ and $\mathcal{B}$. We split the construction of the twist functor into six steps, denoted {\bf a)} to {\bf f)} below. \noindent {\bf a)} Consider the full subcategory $\mathcal{D}_1$ of $\operatorname{Fun}_{\Delta^1}(\Delta^{1},\mathcal{M})$ spanned by functors that are a left Kan extension relative $p$ of their restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^{\{0\}},\mathcal{M})$. The vertices of $\mathcal{D}_1$ can be depicted as \[ a\xlongrightarrow{!}b\] with $a\in \mathcal{A}$ and $b\in \mathcal{B}$. By \cite[4.3.2.15]{HTT}, the restriction functor to $a$ is a trivial fibration from $\mathcal{D}_1$ to $\mathcal{A}$. \noindent {\bf b)} We consider $\Lambda^2_2$ as lying over $\Delta^1$, by mapping $0,1$ to $0$ and $2$ to $1$. Consider the inclusion $\Delta^1\simeq \Delta^{\{1,2\}}\subset \Lambda^2_2$ and the resulting restriction functor $\operatorname{Fun}_{\Delta^1}(\Lambda^2_2,\mathcal{M})\rightarrow \operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$. We define $\mathcal{D}_2$ to be the full subcategory of $\operatorname{Fun}_{\Delta^1}(\Lambda^2_2,\mathcal{M})$ spanned by diagrams whose restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$ lies in $\mathcal{D}_1$ and that are a right Kan extension relative $p$ of their restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^1,\mathcal{M})$. The vertices of $\mathcal{D}_2$ are of the form \[ \begin{tikzcd} a\arrow[dr, "!"]&\\ a'\arrow[r, "\ast"]& b \end{tikzcd} \] where $a,a'\in \mathcal{A}$ and $b\in \mathcal{B}$. The restriction functor defines a trivial fibration from $\mathcal{D}_2$ to $\mathcal{D}_1$. \noindent {\bf c)} We consider $\Delta^2$ as lying over $\Delta^1$, by mapping $0,1$ to $0$ and $2$ to $1$. Let $E$ denote the set of all degenerate edges of $\Delta^2$ together with the edge $\Delta^{\{1,2\}}$.
The inclusion $(\Lambda^2_2,E\cap (\Lambda^2_2)_1)\subset (\Delta^2,E)$ is by \cite[3.1.1.1]{HTT} a marked anodyne morphism of marked simplicial sets, so that the restriction functor $\operatorname{Fun}_{\Delta^1}((\Delta^2,E),\mathcal{M}^\natural)\rightarrow\operatorname{Fun}_{\Delta^1}((\Lambda^2_2,E\cap (\Lambda^2_2)_1),\mathcal{M}^\natural)$ is a trivial fibration by \cite[3.1.3.4]{HTT}, see also \cite[3.1.1.8]{HTT} for the notation $M^\natural$. Consider the pullback of simplicial sets $\mathcal{D}_3=\operatorname{Fun}_{\Delta^1}((\Delta^2,E),\mathcal{M}^\natural)\times_{\operatorname{Fun}_{\Delta^1}((\Lambda^2_2,E\cap (\Lambda^2_2)_1),\mathcal{M}^\natural)}\mathcal{D}_2$. The $\infty$-category $\mathcal{D}_3$ is equivalent to the full subcategory of $\operatorname{Fun}_{\Delta^1}(\Delta^2,\mathcal{M})$ spanned by vertices of the following form. \[ \begin{tikzcd} a\arrow[dr, "!"]\arrow[d]&\\ a'\arrow[r, "\ast"]& b \end{tikzcd} \] The functor from $\mathcal{D}_3$ to $\mathcal{D}_2$ contained in the defining pullback diagram of $\mathcal{D}_3$ is again a trivial fibration. \noindent {\bf d)} We consider the simplicial set $\Delta^{\{0,1'\}}$ as lying over $\Delta^1$ via the constant map with value $0$. Let $\mathcal{D}_4$ be the full subcategory of $\operatorname{Fun}_{\Delta^1}(\Delta^2\coprod_{\Delta^{\{0\}}} \Delta^{\{0,1'\}},\mathcal{M})$ spanned by functors that are a $p$-relative right Kan extension of their restriction to $\Delta^2$ and whose restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^2,\mathcal{M})$ is contained in $\mathcal{D}_3$. The vertices of $\mathcal{D}_4$ are diagrams of the following form. \[ \begin{tikzcd} 0 & a \arrow[rd, "!"] \arrow[d] \arrow[l] & \\ & a' \arrow[r, "\ast"] & b \end{tikzcd} \] We find the restriction functor to be a trivial fibration from $\mathcal{D}_4$ to $\mathcal{D}_3$. \noindent {\bf e)} We consider the simplicial set $\Delta^1\times\Delta^1$ lying over $\Delta^1$ with the constant map with value $0$. Consider the full subcategory $\mathcal{D}_5$ of $\operatorname{Fun}_{\Delta^1}(\Delta^2 \coprod_{\Delta^{\{0,1\}}} \Delta^1\times\Delta^1,\mathcal{M})$ spanned by functors that are $p$-relative left Kan extensions of their restriction to $\Delta^2\coprod_{\Delta^{\{0\}}} \Delta^{\{0,1'\}}$ and whose restriction to $\operatorname{Fun}_{\Delta^1}(\Delta^2\coprod_{\Delta^{\{0\}}} \Delta^{\{0,1'\}},\mathcal{M})$ is contained in $\mathcal{D}_4$. The vertices of $\mathcal{D}_5$ can be depicted as follows, \begin{equation}\label{squeq} \begin{tikzcd} 0 \arrow[d] & a \arrow[rd, "!"] \arrow[d] \arrow[l] \arrow[ld, "\square", phantom] & \\ a'' & a' \arrow[r, "\ast"] \arrow[l] & b \end{tikzcd} \end{equation} with $a,a',a''\in \mathcal{A}$ and $b\in \mathcal{B}$. The box $\square$ in the center of the commutative square in diagram \eqref{squeq} denotes that the square is biCartesian, i.e.~both pullback and pushout. The square thus describes a fiber and cofiber sequence. We find the restriction functor to be a trivial fibration from $\mathcal{D}_5$ to $\mathcal{D}_4$. \noindent {\bf f)} Composing the above constructed trivial fibrations, we obtain the trivial fibration $R:\mathcal{D}_5\rightarrow \mathcal{A}$ given by the restriction functor to the vertex $a$. We define up to contractible choice the twist functor \[ T_\mathcal{A}:\mathcal{A}\longrightarrow \mathcal{A} \] as the composition of a section of $R$ with the restriction functor to $a''$ from $\mathcal{D}_5$ to $\mathcal{A}$.\\ The construction of the cotwist functor is dual. 
We consider the full subcategory $\mathcal{D}'$ of $\operatorname{Fun}_{\Delta^1}(\Delta^2\coprod_{\Delta^{\{1,2\}}}\Delta^1\times\Delta^1,\mathcal{M})$ spanned by functors of the following form, \[ \begin{tikzcd} b'' \arrow[r] \arrow[d] \arrow[rd, "\square", phantom] & b' \arrow[d] & \\ 0 \arrow[r] & b & a \arrow[l, "\ast"'] \arrow[lu, "!"'] \end{tikzcd} \] where $a\in \mathcal{A}$ and $b,b',b''\in \mathcal{B}$. Similar to before, the restriction functor $R':\mathcal{D}'\rightarrow \mathcal{B}$ to the vertex $b$ is a trivial fibration. We define up to contractible choice the cotwist functor \[ T_\mathcal{B}:\mathcal{B}\longrightarrow \mathcal{B} \] as the composition of a section of $R'$ with the restriction functor to $b''$ from $\mathcal{D}'$ to $\mathcal{B}$. \end{construction} \begin{remark} We will often describe appearing diagram $\infty$-categories by specifying their vertices up to equivalence, leaving their construction using Kan extensions implicit. For example, consider the setup of \Cref{twistconstr} and denote the adjoint functors associated to the biCartesian fibration $p:\mathcal{M}\rightarrow \Delta^1$ by $F:\mathcal{A}\leftrightarrow \mathcal{B}:G$. We can describe the $\infty$-category $\mathcal{D}'$ as spanned by functors of the following form, \[ \begin{tikzcd} T_\mathcal{B}(b) \arrow[r] \arrow[d] \arrow[rd, "\square", phantom] & FG(a) \arrow[d] & \\ 0 \arrow[r] & b & G(b) \arrow[l, "\ast"'] \arrow[lu, "!"'] \end{tikzcd} \] up to equivalence. This notational convention will help to remember the meaning of smaller diagram $\infty$-categories and also simplify the notation in the construction of large diagram $\infty$-categories. \end{remark} \subsection{Monadic adjunctions}\label{sec1.4} In this section we recall the theory of modules over a monad and of monadic adjunctions and describe the Kleisli $\infty$-category and stable Kleisli $\infty$-category associated to a monad. Before turning to monads, we first recall some concepts and notation regarding monoidal $\infty$-categories. The definition is based on the formalism of $\infty$-operads, see \cite[Section 2.1]{HA}, meaning certain functors $O^\otimes\rightarrow N(\operatorname{Fin}_\ast)$, where $\operatorname{Fin}_\ast$ denotes the category of finite pointed sets. The associative $\infty$-operad is denoted by $\operatorname{Assoc}^\otimes$, for the definition see \cite[4.1.1.3]{HA}. A monoidal $\infty$-category $\mathcal{C}$ is defined to be a coCartesian fibration of $\infty$-operads $\mathcal{C}^\otimes\rightarrow \operatorname{Assoc}^\otimes$, see \cite[4.1.1.10]{HA}. The $\infty$-category $\mathcal{C}$ arises from the $\infty$-operad $\mathcal{C}^{\otimes}\rightarrow N(\operatorname{Fin}_\ast)$ as the fiber over $\langle 1\rangle \in N(\operatorname{Fin}_\ast)$. The coCartesian fibration $\mathcal{C}^{\otimes}\rightarrow \operatorname{Assoc}^\otimes$ serves to encode a monoidal product $\otimes:\mathcal{C}\times\mathcal{C}\rightarrow \mathcal{C}$, a monoidal unit and the data exhibiting coherent associativity and unitality. Let $\mathcal{D}$ be an $\infty$-category. We can turn the $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ of endofunctors into a monoidal $\infty$-category via the composition monoidal structure, see \Cref{endlem} below. The monoidal product of the composition monoidal structure is given by composition of functors. Given a monoidal $\infty$-category, there is a notion of an associative algebra object, see \cite[2.1.3.1]{HA}. 
If the monoidal $\infty$-category is the nerve of a monoidal $1$-category, then this notion agrees with the 1-categorical notion of an associative algebra object. An associative algebra object in the monoidal $\infty$-category $\operatorname{Fun}(\mathcal{D},\mathcal{D})$ is called a monad on $\mathcal{D}$. We can informally express the datum of a monad $M:\mathcal{D}\rightarrow \mathcal{D}$ as follows. \begin{itemize} \item A multiplication map $m:M\otimes M= M^2 \rightarrow M$. \item The data expressing the coherent associativity of the map $m$. \item A unit map $u:id_\mathcal{D}\rightarrow M$. \item The data expressing the unitality of $u$ and $m$. \end{itemize} Every adjunction $F:\mathcal{D}\leftrightarrow\mathcal{C}: G$ of $\infty$-categories determines a monad $M=GF:\mathcal{D}\rightarrow\mathcal{D}$, see \cite[4.7.3.3]{HA}. We call the monad $M$ the adjunction monad of $F\dashv G$. The multiplication map of $M$ is induced by the counit of the adjunction and the unit $id_\mathcal{D}\rightarrow M$ of the monad $M$ is equivalent to the unit map of the adjunction. Every monad arises as the adjunction monad of the associated monadic adjunction. To define
of the solutions. The third part is devoted to the proof of an energy inequality available in the case $0< s <1/4$; this inequality leads to a local existence theorem. The passage to the limit is omitted here since it is the same as the one in \cite{Laz}. \\ Throughout this article, $C$ stands for any controlled and positive constant, which therefore could be different from line to line. We also write $A \lesssim B$ if $A$ is less than $B$ up to a positive multiplicative constant, which can be different from line to line as well. Those constants depend only on some controlled norms. We denote by $\mathcal{D}(\mathbb R^{2})$ the space of smooth functions in $\mathbb R^{2}$ that are compactly supported. As usual, we denote by $L^{p}$ the usual Lebesgue spaces, by $H^{s}$ the classical Sobolev spaces and by $\dot H^{s}$ the homogeneous ones. We shall also use the shorter notation $L^{p}X$ for $L^{p}([0,T],X)$, where $X$ is a Lebesgue or Sobolev space. \\ Our main result reads as follows. \begin{theo} \label{tp} Let us denote by $X_{T}$ and $X^{s}_{T}$ the spaces $$ X_{T}\equiv L^\infty ([0,T], L^{2}_{uloc})\cap (L^{2}_{t}([0,T], \dot H^{1/2}))_{uloc}, $$ and $$ X^{s}_{T} \equiv L^\infty ([0,T], \dot H^{s}_{uloc})\cap (L^{2}_{t}([0,T], \dot H^{s+1/2}))_{uloc}. $$ \\ Assume that $\theta_0= \Lambda^{s} w_{0} \in \Lambda^{s}(H^{s}_{uloc}) \cap L^{\infty}$; then: \\ $\bullet$ If $1/4\leq s\leq1/2$, the critical (SQG) equation has at least one global weak solution $\theta$ which satisfies $\theta \in X_T$ and $w \in X^{s}_{T}$ for all $T<\infty$. Furthermore, we have the following control $$ \Vert w(x,t) \Vert^{2}_{{\dot H^{s}_{uloc}(\mathbb{R}^2)}} \leq c \ e^{CT}, $$ $\bullet$ If $0<s<1/4$, the critical (SQG) equation has at least one local weak solution $\theta$ so that for all $$T<T^{*}\equiv {\frac{C(\Vert \theta_0 \Vert_{\infty})} {1+\Vert w_{0} \Vert^{2}_{\dot H^{s}_{uloc}}}}$$ we have $$\theta \in X_{T^*} \ \text{and} \ w \in X^{s}_{T^*}.$$ \noindent Moreover, for all $T\leq T^*$, the solution $w$ satisfies the following energy inequality: $$ \Vert w(x,T) \Vert^{2}_{H^{s}_{uloc}} \leq \Vert w_{0} \Vert^{2}_{ H^{s}_{uloc}} + C \int_{0}^{T} \left(\Vert w(x,s) \Vert^{2}_{\dot H^{s}_{uloc}}+\Vert w(x,s) \Vert^{4}_{{\dot H^s}_{uloc}} + C \right) \ ds, $$ \noindent where $C$ is a positive constant depending only on $\displaystyle\Vert \theta_{0} \Vert_{L^{\infty}(\mathbb{R}^2)}$ and $\Vert w_{0} \Vert_{ H^{s}_{uloc}(\mathbb{R}^2)}$. \end{theo} \begin{remark} Previous results obtained in \cite{Laz} and the global result of the above theorem imply that we have global existence for $1/4 \leq s \leq 1$. \end{remark} \begin{remark} In fact, for the case $0<s< 1/4$, we prove a more general inequality: namely, for all $T\leq T^*$ and for all $K \in (0,2)$, the solution $w$ satisfies the following energy inequality $$ \Vert w(x,T) \Vert^{2}_{ H^{s}_{uloc}} \leq \Vert w_{0} \Vert^{2}_{ H^{s}_{uloc}} + C \int_{0}^{T} \left(\Vert w \Vert^{2}_{\dot H^{s}_{uloc}}+\Vert w \Vert^{2(\frac{3-K}{2-K})}_{{\dot H^s}_{uloc}} + C \right) \ ds. $$ The best power that we can obtain in the above inequality is close to 3 (roughly speaking, it corresponds to $K \rightarrow 0$), and therefore we only have a local control of the solutions. In order to make the statement and the existence time of the solutions simpler, we have considered the case $K=1$, which gives the power 4.
\end{remark} \section{The $L^{p}_{uloc} (\mathbb R^2 )$ and $H^{s}_{uloc}(\mathbb R^2)$ spaces} In this section, we recall the definition of the $L^{p}_{uloc} (\mathbb R^2 )$ and $H^{s}_{uloc} (\mathbb R^2)$ spaces. To do so, we need to introduce the set of translations of a given test function. \begin{defi} Let us fix a positive test function $\phi_{0}$ such that $ \phi_{0} \in \mathcal{D} (\mathbb R^2) $ and \begin{equation} \left \{ \aligned &\phi_{0}(x)= 1 & \mathrm{if} \ \vert x\vert \leq 2, \\ \nonumber & \phi_{0}(x)= 0 & \mathrm{if} \ \vert x \vert \geq 3. \endaligned \right. \end{equation} \end{defi} \noindent We define the set of translations of the function $\phi_{0}$ as $B_{\phi_{0}}\equiv \{\phi_{0}(x-k), k\in \mathbb Z^2 \}.$ We are now ready to define both the $L^{p}_{uloc} (\mathbb R^2 )$ and $H^{s}_{uloc} (\mathbb R^2)$ spaces. \begin{defi} Let $1\leq p \leq \infty$; then $f \in L^{p}_{uloc}(\mathbb R^2)$ if and only if $f \in L^{p}_{loc}(\mathbb R^2)$ and the following norm is finite $$ \Vert f \Vert_{L^{p}_{uloc}(\mathbb R^2)}= \sup_{\phi \in {B_{\phi_0}}} \Vert \phi f \Vert_{L^{p}(\mathbb R^2)}. $$ \end{defi} \noindent We will also use the following useful equivalent norms $$\Vert f \Vert_{L^{p}_{uloc}(\mathbb R^2)} \approx \sup_{k \in \mathbb{Z}^2 } \left(\int_{k+[0,1]^2} \vert f(x) \vert^p \ dx \right)^{1/p} \approx \sup_{k \in \mathbb{Z}^2 } \Vert \phi_{0} (x-k) f \Vert_{L^{p}(\mathbb R^2)}. $$ Let us recall the definition of the $H^{s}_{uloc}(\mathbb R^2)$ spaces with $0<s<1$. \begin{defi} Let $\phi_{0}$ be a positive test function chosen as in Definition 4. We say that $f \in H^{s}_{uloc}(\mathbb R^2)$ if and only if $f \in H^{s}_{loc}(\mathbb R^2)$ and the following norm is finite \begin{center} $\displaystyle \Vert f \Vert^{2}_{{H^{s}_{uloc}} (\mathbb R^2)} = \sup_{\phi\in B_{\phi_0} } \Vert \phi f \Vert_{H^{s}}.$ \end{center} \end{defi} \noindent We will also use the following equivalent norms; let us set \begin{equation} \label{norm} A_{\phi} f\equiv\int \frac{ \vert \phi f \vert^2}{2} + \frac{ \vert \Lambda^{s}(\phi f) \vert^{2}}{2} \ dx, \end{equation} then, \begin{center} $\displaystyle \Vert f \Vert^{2}_{{H^{s}_{uloc}} (\mathbb R^2)} = \sup_{\phi\in B_{\phi_0} } A_{\phi} f,$ \end{center} and, $$ \Vert f \Vert^{2}_{\dot H^{s}_{uloc}}= \sup_{k \in \mathbb{Z}^{2}} \int \vert \Lambda^{s} (\phi_{0}(x-k) f(x,t)) \vert^{2} \ dx. $$ \begin{remark} It is important to note that the norms do not depend on the choice of the test function. In \cite{Laz}, we have seen that the $H^{s}_{uloc} (\mathbb R^{2})$ norm can be considered with $\phi$ inside or outside the fractional derivative, since the norms are equivalent. Therefore, it does not matter whether the function $\phi$ is inside or outside the brackets in our computations. \end{remark} The spaces $(L^{2}_{T}\dot H^{s})_{uloc}$ and $L^{\infty}_{T} \dot H^{s}_{uloc}$ with $0<s<1$ will be used throughout the paper. These spaces are endowed with the following norms \begin{eqnarray*} \Vert w \Vert^{2}_{ (L^{2}_{T} \dot H^{s})_{uloc}}&=& \sup_{\phi \in B_{\phi_0}} \int_{0}^{T} \int \phi \vert \Lambda^{s} w(x,s) \vert^2 \ dx \ ds < \infty, \\ \Vert w \Vert^{2}_{L^\infty_{T} \dot H^{s}_{uloc}} &=& \sup_{t\in[0,T]} \sup_{\phi \in B_{\phi_0}} \int \phi \vert \Lambda^{s} w(x,t) \vert^{2} \ dx < \infty. \end{eqnarray*} In the next section, we recall some classical tools from the so-called Littlewood--Paley theory (see e.g.~\cite{MC} or \cite{PGLR}).
\section{Besov spaces and Bernstein inequalities.} In order to define the Besov spaces, we need to recall the definition of the dyadic blocks. \begin{defi} Let $\phi \in \mathcal{D}(\mathbb R^2)$ be a nonnegative function such that $\phi(x)=1$ if $\vert x \vert \leq 1/2$ and $\phi(x)=0$ if $\vert x \vert \geq 1$. We also define $\psi \in \mathcal{D}(\mathbb R^2)$ by $\psi(x)=\phi(x/2)-\phi(x)$, which is supported in an annulus. Then, we define the Fourier multipliers $S_j$ and $\Delta_j$ by $$ \widehat{S_{j} f}(\xi)=\phi(\frac{\xi}{2^{j}}) \widehat{ f}(\xi) \ \ \text{and} \ \ \widehat{\Delta_{j} f}(\xi)=\psi(\frac{\xi}{2^{j}}) \widehat{ f} (\xi). $$ \end{defi} \noindent From these operators we deduce the Littlewood-Paley decomposition of a distribution $f\in \mathcal{S}'$, that is, for all $N\in \mathbb Z$, we have $$ f=S_N f+ \sum_{j\geq N} \Delta_j f \ \text{in} \ \mathcal{S}'(\mathbb{R}^{2}). $$ If moreover, $$ S_N f \xrightarrow[N\to-\infty]{ } 0 \ \ \text{in} \ \ \mathcal{S}'(\mathbb{R}^{2}), $$ we obtain the homogeneous decomposition of $f \in \mathcal{S}'(\mathbb{R}^{2}) $ (modulo polynomials): $$ f= \sum_{j\in \mathbb{Z} } \Delta_j f \hspace{0.6cm} \text{in} \hspace{0.2cm} \ \mathcal{S}'(\mathbb{R}^{2}). $$ The inhomogeneous Besov spaces are defined as follows. \begin{defi} For $s\in \mathbb R$, $(p, q) \in [1, \infty]^2$ and $N \in \mathbb Z$, a distribution $f \in \mathcal{S}'(\mathbb R^2)$ belongs to the inhomogeneous Besov space $B^{s}_{p,q}$ if and only if $$\Vert f \Vert_{B^{s}_{p,q}}=\Vert S_N f \Vert_{L^{p}} + \left( \sum_{j\geq N} 2^{jqs} \Vert \Delta_j f \Vert^{q}_{L^{p}} \right)^{1/q} < \infty. $$ \end{defi} \noindent We also recall the definition of the homogeneous Besov spaces, which are defined only modulo polynomials $\mathcal{P}$. \begin{defi} If $s<0$ or $0<s<1$, a distribution $f \in \mathcal{S}'(\mathbb R^2)$ belongs to the homogeneous Besov space $\dot B^{s}_{\infty,\infty}$ if and only if (in the case $0<s<1$, $\Delta_j f \in L^\infty$ for all $j\in \mathbb Z$) $$\Vert f \Vert_{\dot B^{s}_{\infty,\infty}}=\sup_{j\in \mathbb Z} 2^{js}\Vert \Delta_j f \Vert_{\infty}< \infty, $$ and $f \in \mathcal{S}'(\mathbb R^2)$ belongs to the homogeneous Besov space $\dot B^{0}_{\infty,1}$ if and only if $$ \Vert f \Vert_{\dot B^{0}_{\infty,1}}=\sum_{j \in \mathbb Z } \Vert \Delta_j f \Vert_{\infty}< \infty. $$ \end{defi} \noindent Very useful tools when we work with Besov norms are the Bernstein inequalities. \begin{lemm} If $f \in \mathcal{S}'(\mathbb{R}^{2})$, then for all $(s, j) \in \mathbb{R} \times \mathbb{Z}$ and for all $1\leq p \leq q \leq \infty$ we have $$2^{js} \Vert \Delta_j f \Vert_{L^{p}} \lesssim \Vert \Lambda^{s} \Delta_{j} f \Vert_{L^{p}} \lesssim 2^{js} \Vert \Delta_j f \Vert_{L^{p}}, $$ $$\Vert \Lambda^{s} S_j f \Vert_{L^{p}} \lesssim 2^{js} \Vert S_j f \Vert_{L^{p}},
random matrix, each having $n$ possible realizations generated from the probability mass function corresponding to the vector $p_s(x)$, the Hessian can be computed from the expectation of $\nabla^2 f_i(x)$ and the covariance matrix of $\nabla f_i(x)$ as follows:

$$\nabla^2 g_s(x) = s\left(\Sigma_{p_s(x)}[\nabla f_i(x)]\right) + E_{p_s(x)}[\nabla^2 f_i(x)],$$

where $s$ and $p_s(x)$ are defined as in Lemma 2 and the covariance matrix is given as

$$\Sigma_{p_s(x)}[\nabla f_i(x)] \triangleq E_{p_s(x)}[\nabla f_i(x)\nabla f_i(x)^T] - E_{p_s(x)}[\nabla f_i(x)]\,E_{p_s(x)}[\nabla f_i(x)]^T. \quad (6)$$

###### Proof.

The result directly follows from taking further partial derivatives of the gradient in Lemma 2. ∎

In the following section, we explain our methodology for accelerating the convergence rate.

## III Accelerated Optimization of the Approximation

We utilize Nesterov's accelerated gradient descent method for smooth and strongly-convex functions, for which more details are given in [12]. The algorithm is an iterative one, where the iterations are done in an alternating fashion. Starting with the initial argument pair $(x_1, y_1)$, we have the following iterative relations for $x_{t+1}$ and $y_{t+1}$ for $t \geq 1$:

$$x_{t+1} = y_t - \frac{1}{\beta_s}\nabla g_s(y_t), \qquad y_{t+1} = x_{t+1} + \frac{\sqrt{\kappa_s}-1}{\sqrt{\kappa_s}+1}\left(x_{t+1}-x_t\right), \quad (7)$$

with $\kappa_s$ being the condition number of the Hessian in Lemma 5, which is computed as $\kappa_s = \beta_s/\alpha_s$, for

$$\alpha_s I \preceq \nabla^2 g_s(x) \preceq \beta_s I, \quad \text{for all } x \in K_s, \quad (8)$$

where $\alpha_s$ and $\beta_s$ are the lower and upper bounds on the eigenvalues of the Hessian $\nabla^2 g_s(x)$, respectively, the identity matrix of dimension $d$ is denoted as $I$, and $K_s$ is a set guaranteed to include the convex hull of all iterations $x_t$, $y_t$ and the optimal point $x^*$ as defined in (5).

Generating the Hessian upper bound (the smoothness parameter) $\beta_s$, and consequently the condition number $\kappa_s$, for the set $K_s$ is sufficient as in (8). The reason is twofold. Firstly, the optimality gap guarantee shown in the following as Lemma 4 depends upon upper-bounding the Hessian on the line segments pairwise connecting the algorithm iterations ($x_t$, $y_t$) and the optimal point $x^*$. All such segments are encapsulated by the convex hull of $\{x_t\}$, $\{y_t\}$ and the optimal point $x^*$. Secondly, this convex hull is itself a subset of $K_s$ as previously defined.

###### Lemma 4.

The following optimality gap is guaranteed for $t \geq 1$:

$$g_s(x_t)-g_s(x^*) \leq \left(\frac{\alpha_s}{2}\|x_1-x^*\|^2 + g_s(x_1)-g_s(x^*)\right)\exp\!\left(-\frac{t-1}{\sqrt{\kappa_s}}\right),$$

where $\kappa_s$ is the condition number, $\alpha_s$ and $\beta_s$ are the strong-convexity and Lipschitz-smoothness parameters, respectively, and $x^*$ is the optimal point as defined in (5).

###### Proof.

The proof directly follows a similar formulation given in [12] under "the smooth and strongly convex case" subsection of the section "Nesterov's accelerated gradient descent". The only exception is that we do not replace the initial gap $g_s(x_1)-g_s(x^*)$ with an upper bound and leave it as is. ∎

### III-A Parameters of Strong-Convexity and Lipschitz-Smoothness

To compute $\kappa_s$, we bound the eigenvalues of $\nabla^2 g_s(x)$.

###### Lemma 5.

We can lower and upper bound the eigenvalues of the Hessian matrix $\nabla^2 g_s(x)$ for $x \in K_s$ as follows:

$$\left(\min_{1\leq i\leq n}\alpha_{s,i}\right) I \preceq \nabla^2 g_s(x) \preceq \left(s L_s^2 + \max_{1\leq i\leq n}\beta_{s,i}\right) I,$$

where $\alpha_{s,i}$ and $\beta_{s,i}$ are further defined as the strong-convexity and smoothness parameters for the components $f_i$ from the "$\max$" operator generating $f$, i.e. $f(x) = \max_{1\leq i\leq n} f_i(x)$, respectively, such that we have $\alpha_{s,i} I \preceq \nabla^2 f_i(x) \preceq \beta_{s,i} I$, for $x \in K_s$. The parameter $L_s$ is a common gradient norm bound for each $f_i$ such that $\|\nabla f_i(x)\| \leq L_s$ for each $i$ and $x \in K_s$.

###### Proof.

We start with proving the lower-bound relation. Using Lemma 3, we obtain

$$\nabla^2 g_s(x) \succeq E_{p_s(x)}[\nabla^2 f_i(x)],$$

since the covariance matrix is lower-bounded by $0$, as it is a convex combination of rank-$1$ self-outer-product matrices with their lowest eigenvalue being $0$. The expectation operation is linear. Thus we can replace each $\nabla^2 f_i(x)$ with its lower bound $\alpha_{s,i} I$ without affecting the inequality relation $\succeq$. After taking the constant identity matrix outside of the expectation, we have the renewed relation

$$\nabla^2 g_s(x) \succeq E_{p_s(x)}[\alpha_{s,i}] \times I.$$
Since the expectation is a convex combination of the scalars $\alpha_{s,i}$, we further lower bound by replacing the expectation with $\min_{1\leq i\leq n}\alpha_{s,i}$, which gives the lower bound of this lemma. For the upper bound, we can generate

$$\nabla^2 g_s(x) \preceq \left(\max_{1\leq i\leq n}\beta_{s,i}\right) I + s\left(\Sigma_{p_s(x)}[\nabla f_i(x)]\right)$$

using Lemma 3, by upper bounding each $\nabla^2 f_i(x)$ with $\beta_{s,i} I$ and the resulting expectation with $\max_{1\leq i\leq n}\beta_{s,i}$, similarly to the lower bound. We can upper bound the covariance matrix by first noting that the eigenvalues of a $d$-dimensional outer product $vv^T$ are $\|v\|^2$ and zeros. Consequently, we upper bound it by replacing the negative outer product, i.e. $-E_{p_s(x)}[\nabla f_i(x)]E_{p_s(x)}[\nabla f_i(x)]^T$, in (6) with $0$. Then, utilizing the linearity of expectation again, we get the final upper bound by replacing the outer product inside the expectation with $\|\nabla f_i(x)\|^2 I$. The resulting upper bound is given as

$$\nabla^2 g_s(x) \preceq \left(\max_{1\leq i\leq n}\beta_{s,i}\right) I + s\left(E_{p_s(x)}[\|\nabla f_i(x)\|^2]\right) I,$$

after taking the constant identity matrix outside of the expectation. We can replace the scalar $E_{p_s(x)}[\|\nabla f_i(x)\|^2]$ with a common squared gradient-norm bound $L_s^2$, which gives the upper-bound relation of this lemma, thus concluding the proof. ∎

### III-B Algorithm Description

We start at some point $x_1 = y_1$. We determine the "smoother" $s$ needed to achieve the requested optimality gap $\delta$ and the set $K_s$ such that $K_s$ includes the optimal point $x^*$ and all future iterations $x_t$, $y_t$. We use the update rules in (7) after determining the common gradient norm bound $L_s$ and the individual strong-convexity and Lipschitz-smoothness parameters $\alpha_{s,i}$ and $\beta_{s,i}$, respectively, via the set $K_s$. The condition number $\kappa_s$ and the smoothness parameter $\beta_s$ are calculated using the lower and upper bounds in Lemma 5. The pseudo-code is given in Algorithm 1. For this algorithm, we have the following performance result.

###### Theorem 1.

We run Algorithm 1 for a given optimality gap guarantee $\delta$. Then, we achieve the gap after sufficiently many iterations $t$ such that:

$$t \in O\!\left(\sqrt{\frac{\log n}{\delta}}\log\frac{1}{\delta}\right), \quad \text{since} \quad t = 1 + \sqrt{\frac{2\delta^{-1}L_s^2\log n + \tilde\beta_s}{\alpha_s}}\,\log\!\left(\frac{1}{\delta}\left(\alpha_s D_s^2 + 2 L_s D_s\right)\right),$$

where $O(\cdot)$ is the big-O notation for asymptotic upper-bounding, $n$ is the number of functions contributing to the $\max$ operation resulting in $f$, $L_s$ is the common gradient norm bound for each component function $f_i$ in the $\max$ operator such that $\|\nabla f_i(x)\| \leq L_s$, for all $i$ and $x \in K_s$. $\alpha_s$ is the strong-convexity parameter of the approximation $g_s$, $\tilde\beta_s$ is the pseudo-smoothness parameter upper bounding the matrices $\nabla^2 f_i(x)$, and $D_s$ is the unknown initial distance between $x_1$ and $x^*$.

###### Proof.

From Lemma 4, we see that a lower $\kappa_s$ results in faster convergence for a fixed optimality gap. Without further information on the gradient and Hessian bounds, we need to lower the "smoother" $s$ for a lower $\kappa_s$. However, the "smoothing" regret from Corollary 1 works in the opposite direction. Consequently, we will equate both the optimality gap from the smooth approximation and the "smoothing" regret to $\delta/2$. This results in $s = 2\delta^{-1}\log n$, with $n$ being the number of functions contributing to the same $\max$ operation; $\kappa_s$ is generated consequently. Immediately, we have the "smoothing" regret in Corollary 1 as $\delta/2$. Then, we equate the gap from $g_s$ to $\delta/2$ using the upper bound in Lemma 4. Afterwards, we replace the condition number $\kappa_s$ in accordance with (8) after calculating the strong-convexity and smoothness parameters $\alpha_s$ and $\beta_s$ via Lemma 5. Finally, we upper bound the initial smooth approximation gap $g_s(x_1)-g_s(x^*)$ with $L_s D_s$ using the convexity relation and arrive at the result of the theorem. ∎

#### III-B1 Computational Cost of the Algorithm

###### Corollary 2.

For an optimality gap $\delta$, the computation time needed is $T$ such that, for an arbitrarily small $\delta > 0$:

$$T \in O\!\left(n\sqrt{\frac{\log n}{\delta}}\log\frac{1}{\delta}\left(cd + \log\frac{1}{\delta} + \log\log n\right)\right),$$

where $c$ is the average cost of calculating a partial derivative for any $f_i$, $n$ is the number of functions contributing to $f$, and $d$ is the dimension of the domain of the $f_i$'s.
###### Proof.

We need $t \in O\!\left(\sqrt{\frac{\log n}{\delta}}\log\frac{1}{\delta}\right)$ iterations as shown in Theorem 1. We observe that each iteration of the while-loop in Algorithm 1 requires $nd$ partial derivative calculations. Due to the computation of the probability vector $p_s(x)$ with respect to Definition 1, each iteration also requires a total of $n$ exponentiations, each to a power of the form $s f_i(x)$. Each of these exponentiations has an additional computational cost of $O(\log(1/\delta) + \log\log n)$, since $s = 2\delta^{-1}\log n$. Combination of these costs gives the corollary. ∎

#### III-B2 Online Version of the Algorithm (without Specifying δ)

###### Corollary 3.

We can achieve the time-complexity in Corollary 2, which is of the form $\tilde O(n/\sqrt{\delta})$ for fixed $c$ and $d$, in an online fashion with no requested optimality gap guarantee $\delta$. Here, $\tilde O(\cdot)$ is the soft-O notation ignoring logarithmic factors compared to big-O.

###### Proof.

We initialize with some $\delta_0$ and run Algorithm 1 with $\delta_0$ as the optimality guarantee. Then, after sufficiently many iterations to achieve the requested $\delta_0$, we restart Algorithm 1 with a new guarantee $\delta_{k+1} = \delta_k/2$, for $k \geq 0$, and repeat non-stop. For a $\delta$ such that $\delta = 2^{-m}\delta_0$ for some integer $m$, the total exhausted time can be upper-bounded as follows, using the fact that the per-phase cost is monotonically increasing in $k$ and that each working guarantee $\delta_k$ is lower-bounded by $\delta/2$:

$$T \in O\!\left(n\left(\sum_{k=0}^{m}\sqrt{\frac{\log n}{2^{-k}\delta_0}}\right)\log\frac{2}{\delta}\left(cd + \log\frac{2}{\delta} + \log\log n\right)\right).$$

This bound translates to the same bound in Corollary 2, since the geometric sum is dominated by its last term. ∎

In the next section, we shall investigate an interesting specific application for the general accelerated min-max optimization via smooth approximation, which we have introduced.

## IV (1+ε)-Approximation for the Problem of Minimal Bounding Sphere

Let us suppose we have $n$ points, each located at $b_i$ for $i \in \{1,\ldots,n\}$, in the $d$-dimensional space $\mathbb{R}^d$. Our minimization target is $f(x)$ such that:

$$f(x) = \max_{1\leq i\leq n}\|x - b_i\|^2. \quad (9)$$

This is the so-called minimal bounding sphere problem.
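To make the procedure concrete, here is a minimal numerical sketch of this application. It assumes, consistently with the smoothing regret $\log n / s$ used above, that $g_s$ is the log-sum-exp approximation $g_s(x) = \frac{1}{s}\log\sum_i e^{s f_i(x)}$; the variable names and the crude stand-in bound for $L_s$ are ours, not the paper's Algorithm 1:

```python
import numpy as np

def g_and_grad(x, B, s):
    """Log-sum-exp smoothing of f(x) = max_i ||x - b_i||^2 and its gradient."""
    f = np.sum((x - B) ** 2, axis=1)      # f_i(x) = ||x - b_i||^2
    z = s * f
    m = z.max()
    w = np.exp(z - m)                     # numerically stabilized softmax weights
    p = w / w.sum()                       # probability vector p_s(x)
    g = (m + np.log(w.sum())) / s         # g_s(x)
    grad = 2.0 * p @ (x - B)              # E_{p_s(x)}[grad f_i(x)], grad f_i = 2(x - b_i)
    return g, grad

rng = np.random.default_rng(0)
n, d = 50, 3
B = rng.normal(size=(n, d))               # the points b_i
delta = 0.1                                # requested optimality gap
s = 2.0 * np.log(n) / delta                # smoother, as in the proof of Theorem 1

alpha = 2.0                                # Hessian of f_i is 2I, so alpha_{s,i} = beta_{s,i} = 2
L = 2.0 * (np.linalg.norm(B, axis=1).max() + 1.0) * 2.0  # crude stand-in for L_s on K_s (assumption)
beta = s * L ** 2 + 2.0                    # Lemma 5 upper bound
kappa = beta / alpha
q = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

x = y = np.zeros(d)                        # x_1 = y_1
for t in range(3000):
    _, grad = g_and_grad(y, B, s)
    x_new = y - grad / beta                # gradient step of Eq. (7)
    y = x_new + q * (x_new - x)            # momentum step of Eq. (7)
    x = x_new

print("approximate center:", x)
print("radius estimate:", np.sqrt(np.max(np.sum((x - B) ** 2, axis=1))))
```

The momentum coefficient is the fixed $(\sqrt{\kappa_s}-1)/(\sqrt{\kappa_s}+1)$ of the strongly convex regime, so no line search or adaptive restarts are needed.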
""" Copyright 2013 Steven Diamond This file is part of CVXPY. CVXPY is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. CVXPY is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with CVXPY. If not, see <http://www.gnu.org/licenses/>. """ from cvxpy.atoms.affine.add_expr import AddExpression from cvxpy.expressions.expression import * from cvxpy.expressions.variables import Variable from cvxpy.expressions.constants import Constant from cvxpy.expressions.constants import Parameter from cvxpy import Problem, Minimize import cvxpy.utilities as u import cvxpy.interface.matrix_utilities as intf import cvxpy.settings as s from collections import deque import unittest from cvxpy.tests.base_test import BaseTest from cvxopt import matrix import numpy as np import warnings class TestExpressions(BaseTest): """ Unit tests for the expression/expression module. """ def setUp(self): self.a = Variable(name='a') self.x = Variable(2, name='x') self.y = Variable(3, name='y') self.z = Variable(2, name='z') self.A = Variable(2,2,name='A') self.B = Variable(2,2,name='B') self.C = Variable(3,2,name='C') self.intf = intf.DEFAULT_INTERFACE # Test the Variable class. def test_variable(self): x = Variable(2) y = Variable(2) assert y.name() != x.name() x = Variable(2, name='x') y = Variable() self.assertEqual(x.name(), 'x') self.assertEqual(x.size, (2,1)) self.assertEqual(y.size, (1,1)) self.assertEqual(x.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(x.canonical_form[0].size, (2,1)) self.assertEqual(x.canonical_form[1], []) self.assertEquals(repr(self.x), "Variable(2, 1)") self.assertEquals(repr(self.A), "Variable(2, 2)") # # Scalar variable # coeff = self.a.coefficients() # self.assertEqual(coeff[self.a.id], [1]) # # Vector variable. # coeffs = x.coefficients() # self.assertItemsEqual(coeffs.keys(), [x.id]) # vec = coeffs[x.id][0] # self.assertEqual(vec.shape, (2,2)) # self.assertEqual(vec[0,0], 1) # # Matrix variable. # coeffs = self.A.coefficients() # self.assertItemsEqual(coeffs.keys(), [self.A.id]) # self.assertEqual(len(coeffs[self.A.id]), 2) # mat = coeffs[self.A.id][1] # self.assertEqual(mat.shape, (2,4)) # self.assertEqual(mat[0,2], 1) def test_assign_var_value(self): """Test assigning a value to a variable. """ # Scalar variable. a = Variable() a.value = 1 self.assertEqual(a.value, 1) with self.assertRaises(Exception) as cm: a.value = [2, 1] self.assertEqual(str(cm.exception), "Invalid dimensions (2, 1) for Variable value.") # Vector variable. x = Variable(2) x.value = [2, 1] self.assertItemsAlmostEqual(x.value, [2, 1]) # Matrix variable. A = Variable(3, 2) A.value = np.ones((3, 2)) self.assertItemsAlmostEqual(A.value, np.ones((3, 2))) # Test tranposing variables. 
def test_transpose_variable(self): var = self.a.T self.assertEquals(var.name(), "a") self.assertEquals(var.size, (1,1)) self.a.save_value(2) self.assertEquals(var.value, 2) var = self.x.T self.assertEquals(var.name(), "x.T") self.assertEquals(var.size, (1,2)) self.x.save_value( matrix([1,2]) ) self.assertEquals(var.value[0,0], 1) self.assertEquals(var.value[0,1], 2) var = self.C.T self.assertEquals(var.name(), "C.T") self.assertEquals(var.size, (2,3)) # coeffs = var.canonical_form[0].coefficients() # mat = coeffs.values()[0][0] # self.assertEqual(mat.size, (2,6)) # self.assertEqual(mat[1,3], 1) index = var[1,0] self.assertEquals(index.name(), "C.T[1, 0]") self.assertEquals(index.size, (1,1)) var = self.x.T.T self.assertEquals(var.name(), "x.T.T") self.assertEquals(var.size, (2,1)) # Test the Constant class. def test_constants(self): c = Constant(2) self.assertEqual(c.name(), str(2)) c = Constant(2) self.assertEqual(c.value, 2) self.assertEqual(c.size, (1,1)) self.assertEqual(c.curvature, u.Curvature.CONSTANT_KEY) self.assertEqual(c.sign, u.Sign.POSITIVE_KEY) self.assertEqual(Constant(-2).sign, u.Sign.NEGATIVE_KEY) self.assertEqual(Constant(0).sign, u.Sign.ZERO_KEY) self.assertEqual(c.canonical_form[0].size, (1,1)) self.assertEqual(c.canonical_form[1], []) # coeffs = c.coefficients() # self.assertEqual(coeffs.keys(), [s.CONSTANT]) # self.assertEqual(coeffs[s.CONSTANT], [2]) # Test the sign. c = Constant([[2], [2]]) self.assertEqual(c.size, (1, 2)) self.assertEqual(c.sign, u.Sign.POSITIVE_KEY) self.assertEqual((-c).sign, u.Sign.NEGATIVE_KEY) self.assertEqual((0*c).sign, u.Sign.ZERO_KEY) c = Constant([[2], [-2]]) self.assertEqual(c.sign, u.Sign.UNKNOWN_KEY) # Test sign of a complex expression. c = Constant([1, 2]) A = Constant([[1,1],[1,1]]) exp = c.T*A*c self.assertEqual(exp.sign, u.Sign.POSITIVE_KEY) self.assertEqual((c.T*c).sign, u.Sign.POSITIVE_KEY) exp = c.T.T self.assertEqual(exp.sign, u.Sign.POSITIVE_KEY) exp = c.T*self.A self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) # Test repr. self.assertEqual(repr(c), "Constant(CONSTANT, POSITIVE, (2, 1))") # def test_1D_array(self): # """Test NumPy 1D arrays as constants. # """ # c = np.array([1,2]) # p = Parameter(2) # with warnings.catch_warnings(record=True) as w: # # Cause all warnings to always be triggered. # warnings.simplefilter("always") # # Trigger a warning. # Constant(c) # self.x + c # p.value = c # # Verify some things # self.assertEqual(len(w), 3) # for warning in w: # self.assertEqual(str(warning.message), "NumPy 1D arrays are treated as column vectors.") # Test the Parameter class. def test_parameters(self): p = Parameter(name='p') self.assertEqual(p.name(), "p") self.assertEqual(p.size, (1,1)) p = Parameter(4, 3, sign="positive") with self.assertRaises(Exception) as cm: p.value = 1 self.assertEqual(str(cm.exception), "Invalid dimensions (1, 1) for Parameter value.") val = -np.ones((4,3)) val[0,0] = 2 p = Parameter(4, 3, sign="positive") with self.assertRaises(Exception) as cm: p.value = val self.assertEqual(str(cm.exception), "Invalid sign for Parameter value.") p = Parameter(4, 3, sign="negative") with self.assertRaises(Exception) as cm: p.value = val self.assertEqual(str(cm.exception), "Invalid sign for Parameter value.") # No error for unknown sign. p = Parameter(4, 3) p.value = val # Initialize a parameter with a value. 
p = Parameter(value=10) self.assertEqual(p.value, 10) with self.assertRaises(Exception) as cm: p = Parameter(2, 1, sign="negative", value=[2,1]) self.assertEqual(str(cm.exception), "Invalid sign for Parameter value.") with self.assertRaises(Exception) as cm: p = Parameter(4, 3, sign="positive", value=[1,2]) self.assertEqual(str(cm.exception), "Invalid dimensions (2, 1) for Parameter value.") # Test repr. p = Parameter(4, 3, sign="negative") self.assertEqual(repr(p), 'Parameter(4, 3, sign="NEGATIVE")') # Test the AddExpresion class. def test_add_expression(self): # Vectors c = Constant([2,2]) exp = self.x + c self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) self.assertEqual(exp.canonical_form[0].size, (2,1)) self.assertEqual(exp.canonical_form[1], []) # self.assertEqual(exp.name(), self.x.name() + " + " + c.name()) self.assertEqual(exp.size, (2,1)) z = Variable(2, name='z') exp = exp + z + self.x with self.assertRaises(Exception) as cm: (self.x + self.y) self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)") # Matrices exp = self.A + self.B self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.size, (2,2)) with self.assertRaises(Exception) as cm: (self.A + self.C) self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 2) (3, 2)") with self.assertRaises(Exception) as cm: AddExpression([self.A, self.C]) self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 2) (3, 2)") # Test that sum is flattened. exp = self.x + c + self.x self.assertEqual(len(exp.args), 3) # Test repr. self.assertEqual(repr(exp), "Expression(AFFINE, UNKNOWN, (2, 1))") # Test the SubExpresion class. def test_sub_expression(self): # Vectors c = Constant([2,2]) exp = self.x - c self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) self.assertEqual(exp.canonical_form[0].size, (2,1)) self.assertEqual(exp.canonical_form[1], []) # self.assertEqual(exp.name(), self.x.name() + " - " + Constant([2,2]).name()) self.assertEqual(exp.size, (2,1)) z = Variable(2, name='z') exp = exp - z - self.x with self.assertRaises(Exception) as cm: (self.x - self.y) self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 1) (3, 1)") # Matrices exp = self.A - self.B self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.size, (2,2)) with self.assertRaises(Exception) as cm: (self.A - self.C) self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 2) (3, 2)") # Test repr. self.assertEqual(repr(self.x - c), "Expression(AFFINE, UNKNOWN, (2, 1))") # Test the MulExpresion class. 
def test_mul_expression(self): # Vectors c = Constant([[2],[2]]) exp = c*self.x self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual((c[0]*self.x).sign, u.Sign.UNKNOWN_KEY) self.assertEqual(exp.canonical_form[0].size, (1,1)) self.assertEqual(exp.canonical_form[1], []) # self.assertEqual(exp.name(), c.name() + " * " + self.x.name()) self.assertEqual(exp.size, (1,1)) with self.assertRaises(Exception) as cm: ([2,2,3]*self.x) self.assertEqual(str(cm.exception), "Incompatible dimensions (3, 1) (2, 1)") # Matrices with self.assertRaises(Exception) as cm: Constant([[2, 1],[2, 2]]) * self.C self.assertEqual(str(cm.exception), "Incompatible dimensions (2, 2) (3, 2)") with self.assertRaises(Exception) as cm: (self.A * self.B) self.assertEqual(str(cm.exception), "Cannot multiply two non-constants.") # Constant expressions T = Constant([[1,2,3],[3,5,5]]) exp = (T + T) * self.B self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.size, (3,2)) # Expression that would break sign multiplication without promotion. c = Constant([[2], [2], [-2]]) exp = [[1], [2]] + c*self.C self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) # Scalar constants on the right should be moved left. expr = self.C*2 self.assertEqual(expr.args[0].value, 2) # Scalar variables on the left should be moved right. expr = self.a*[2,1] self.assertItemsAlmostEqual(expr.args[0].value, [2,1]) # Test the DivExpresion class. def test_div_expression(self): # Vectors exp = self.x/2 self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) self.assertEqual(exp.canonical_form[0].size, (2,1)) self.assertEqual(exp.canonical_form[1], []) # self.assertEqual(exp.name(), c.name() + " * " + self.x.name()) self.assertEqual(exp.size, (2,1)) with self.assertRaises(Exception) as cm: (self.x/[2,2,3]) print(cm.exception) self.assertEqual(str(cm.exception), "Can only divide by a scalar constant.") # Constant expressions. c = Constant(2) exp = c/(3 - 5) self.assertEqual(exp.curvature, u.Curvature.CONSTANT_KEY) self.assertEqual(exp.size, (1,1)) self.assertEqual(exp.sign, u.Sign.NEGATIVE_KEY) # Parameters. p = Parameter(sign="positive") exp = 2/p p.value = 2 self.assertEquals(exp.value, 1) rho = Parameter(sign="positive") rho.value = 1 self.assertEquals(rho.sign, u.Sign.POSITIVE_KEY) self.assertEquals(Constant(2).sign, u.Sign.POSITIVE_KEY) self.assertEquals((Constant(2)/Constant(2)).sign, u.Sign.POSITIVE_KEY) self.assertEquals((Constant(2)*rho).sign, u.Sign.POSITIVE_KEY) self.assertEquals((rho/2).sign, u.Sign.POSITIVE_KEY) # Test the NegExpression class. def test_neg_expression(self): # Vectors exp = -self.x self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) assert exp.is_affine() self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) assert not exp.is_positive() self.assertEqual(exp.canonical_form[0].size, (2,1)) self.assertEqual(exp.canonical_form[1], []) # self.assertEqual(exp.name(), "-%s" % self.x.name()) self.assertEqual(exp.size, self.x.size) # Matrices exp = -self.C self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) self.assertEqual(exp.size, (3,2)) # Test promotion of scalar constants. 
def test_scalar_const_promotion(self): # Vectors exp = self.x + 2 self.assertEqual(exp.curvature, u.Curvature.AFFINE_KEY) assert exp.is_affine() self.assertEqual(exp.sign, u.Sign.UNKNOWN_KEY) assert not exp.is_negative() self.assertEqual(exp.canonical_form[0].size, (2,1)) self.assertEqual(exp.canonical_form[1], []) # self.assertEqual(exp.name(), self.x.name() + " + " + Constant(2).name()) self.assertEqual(exp.size, (2,1)) self.assertEqual((4 - self.x).size, (2,1)) self.assertEqual((4 * self.x).size, (2,1)) self.assertEqual((4 <= self.x).size, (2,1)) self.assertEqual((4 == self.x).size, (2,1)) self.assertEqual((self.x >= 4).size,
that $\phi$ and $\psi \circ \iota$ are conjugate. Moreover, if $\sigma \in S_n$ is such that $\psi(a_i) = a_{\sigma(i)}$ and $\sigma = c_1 \circ \ldots \circ c_k$ is the disjoint cycle decomposition of $\sigma$, we can choose $\iota$ such that each cycle $c_j = (c_{j, 1} \ldots c_{j, n_{j}})$ contains at most one number $i_j$ with $\iota(a_{i_j}) = \inv{a}_{i_j}$. (Here, we also write cycles of length $1$, e.g. the identity permutation is written as $(1)(2) \ldots (n)$). \end{lemma} \begin{proof} By \cref{lem:descriptionAutomorphismsTildeA}, we know that there is a permutation $\sigma \in S_n$ and elements $e_1, \ldots, e_n \in \{-1, 1\}$ such that $\phi(a_i) = a_{\sigma(i)}^{e_i}$ and such that $\psi: \Gamma \to \Gamma: a_i \mapsto a_{\sigma(i)}$ is a well-defined graph automorphism of $\Gamma$. Therefore, $\psi$ induces a graph automorphism of $A_\Gamma$, which we still denote by $\psi$. Write $\sigma = c_1 \circ \ldots \circ c_k$ in disjoint cycle decomposition. After renumbering the generators, we can assume that there are integers $1 = i_1 < i_2 < \ldots < i_k$ such that $c_j = (i_j \ldots i_{j + 1} - 1)$, where we put $i_{k + 1} - 1 = n$. If we consider $\phi_1: A_\Gamma / \gamma_2(A_\Gamma) \to A_\Gamma / \gamma_2(A_\Gamma)$, then the matrix of $\phi_1$ with respect to the basis $a_1, \ldots, a_n$ is of the form \[ M := \begin{pmatrix} P(e_{i_1}, \ldots, e_{i_2 - 1}) & 0 & 0 & \ldots & 0 \\ 0 & P(e_{i_2}, \ldots, e_{i_3 - 1}) & 0 & \ldots & 0 \\ 0 & 0 & P(e_{i_3}, \ldots, e_{i_4 - 1}) & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & P(e_{i_k}, \ldots, e_n) \end{pmatrix} \] with each $P(e_{i_j}, \ldots, e_{i_{j + 1} - 1})$ as in \cref{lem:specialDeterminant}. If we can find a diagonal matrix $D$ with only $\pm 1$ on the diagonal such that $DMD$ is of the form \[ \begin{pmatrix} P(\pm1, 1, \ldots, 1) & 0 & 0 & \ldots & 0 \\ 0 & P(\pm1, 1, \ldots, 1) & 0 & \ldots & 0 \\ 0 & 0 & P(\pm1, 1, \ldots, 1) & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & P(\pm1, 1, \ldots, 1) \end{pmatrix} \] with each block of the same dimension as in $M$, we are done. Indeed, the matrix $D$ will then be the matrix of $\jmath_1: A_\Gamma / \gamma_2(A_\Gamma) \to A_\Gamma / \gamma_2(A_\Gamma)$ w.r.t.\ the basis $a_1, \ldots, a_n$, with $\jmath \in \Aut(A_\Gamma)$ a composition of inversions, and $DMD$ corresponds to the automorphism $\psi \circ \iota$, where $\iota$ maps $a_{i_j}$ to $a^{\pm 1}_{i_j}$ for all $1 \leq j \leq k$ according to the sign in the $j$-th block of $DMD$ and leaves all other generators fixed. Then $\psi \circ \iota = \jmath \circ \phi \circ \jmath$ and $\iota$ will satisfy the `moreover'-part of the lemma, since each $P$-block contains at most one $-1$. It is sufficient to find such a diagonal matrix $D$ for each $P$-block and then to assemble all these blocks into one matrix. For ease of notation, we put $m = i_2 - 1$ and consider the block $P(e_1, \ldots, e_m)$. If $m = 1$, there is nothing to prove, so assume $m \geq 2$. Denote by $D_i$ the diagonal matrix with $1$'s on the diagonal except for the $i$-th position, where we put $-1$. Note that any product of the $D_i$'s is a diagonal matrix with only $\pm 1$ on the diagonal. It is not hard to see that \[ D_iP(e_1, \ldots, e_m)D_i = P(e_1, \ldots, e_{i - 2}, -e_{i - 1}, -e_i, e_{i + 1}, \ldots, e_m), \] where $e_{0} = e_{m}$. Note that the parity of the number of $-1$'s is left unchanged.
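For instance (a small worked illustration; we use here that, consistently with the identity above, $P(e_1, \ldots, e_m)$ denotes the matrix of the signed cyclic map $a_l \mapsto e_l a_{l + 1}$, i.e.\ the matrix with $e_l$ in position $(l + 1, l)$, indices taken modulo $m$), take $m = 3$ and $i = 2$:
\[
D_2P(e_1, e_2, e_3)D_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0 & e_3 \\ e_1 & 0 & 0 \\ 0 & e_2 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & e_3 \\ -e_1 & 0 & 0 \\ 0 & -e_2 & 0 \end{pmatrix} = P(-e_1, -e_2, e_3):
\]
conjugation by $D_2$ negates the second row and the second column, so exactly the signs $e_{i - 1} = e_1$ and $e_i = e_2$ are flipped.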
By starting with the $-1$ with the highest index and moving towards $e_1$, we can clear out all $-1$'s, except for one if there were an odd number of them to begin with. Wherever this last $-1$ is situated, we can move it to the first position by conjugating with suitable $D_i$'s. In the end, we end up with $P(\pm 1, 1, \ldots, 1)$ and we are done. \end{proof} We can now give the proof of \cref{theo:transvectionFreeAutomorphism}. \begin{proof}[Proof of \cref{theo:transvectionFreeAutomorphism}] Let $\phi \in A$. Since we will work with the induced morphism $\phi_*$ on $L(A_\Gamma)$, we may assume by \cref{cor:conjugationIsTrivialInLiering} that $\phi \in \Autip(A_{\Gamma})$. By \cref{lem:conjugationInAtilde}, $\phi$ is conjugate to $\psi \circ \iota$ with $\psi$ a graph automorphism and $\iota$ a composition of (possibly zero) inversions satisfying the `moreover'-part of the statement of \cref{lem:conjugationInAtilde}. As $\phi$ and $\psi \circ \iota$ are conjugate, $R(\phi) = R(\psi \circ \iota)$ by \cref{prop:conjugateEndomorphisms}. So, we can assume that $\phi = \psi \circ \iota$. Let $\sigma \in S_n$ be the permutation associated to $\psi$ and write $\sigma = c_1 \circ \ldots \circ c_k$ in disjoint cycle decomposition. Denote by $\phi_i$ the induced automorphism on $L_i(A_\Gamma)$. By \cref{theo:RAAGsAreFGTorsionFreeLCS}, we can apply \cref{theo:usingL(G)ToEstablishRinf} to find that $R(\phi) = \infty$ if $\phi_i$ has eigenvalue $1$ for some $i$. We will distinguish several cases, and as each case will end with the sentence `$\phi_i$ has eigenvalue $1$', we will not mention \cref{theo:usingL(G)ToEstablishRinf} each time. \medskip \emph{Case 1:} Suppose there is a cycle, say, $c_1$, such that $\iota$ does not invert any of the generators $a_i$ with $i \in c_1$. After renumbering, we can assume that $c_1 = (1 \ldots m)$ for some $m \geq 1$. Then, on $L_1(A_\Gamma)$, we have that \[ \phi_1(a_1\ldots a_m\gamma_2(A_\Gamma)) = a_2a_3\ldots a_ma_1\gamma_2(A_\Gamma), \] so $\phi_1$ has a fixed point, i.e.\ $1$ is an eigenvalue of $\phi_1$, hence $R(\phi) = \infty$. From now on, assume that each cycle contains an index $i_j$ such that $\iota(a_{i_j}) = \inv{a}_{i_j}$. \medskip \emph{Case 2:} $k = n$, i.e.\ each cycle in the decomposition of $\sigma$ has length $1$. Then $\phi(a_i) = \inv{a}_i$ for all $i$. As $\Gamma$ is not complete, there are $a_i \ne a_j$ with $a_ia_j \notin E$. Hence, $[a_i, a_j]\gamma_3(A_\Gamma)$ is non-trivial and \[ \phi_2([a_i, a_j]\gamma_3(A_\Gamma)) = [\inv{a}_i, \inv{a}_j]\gamma_3(A_\Gamma) = [a_i, a_j]\gamma_3(A_\Gamma) \] by \cref{lem:congruenceModuloLCS}\eqref{item:exponentiationModGammai}. Then $\phi_2$ has eigenvalue $1$, and thus $R(\phi) = \infty$. From now on, assume that $k < n$. \medskip \emph{Case 3:} There is a cycle, say, $c_1$, containing indices $i < j$ with $a_ia_j \notin E$. Again, we can assume that $c_1 = (1 \ldots m)$. As $\sigma$ induces a graph automorphism, $a_{i + l}a_{j + l}$ is not an edge of $E$ either, for all $1 \leq l \leq m$ (here, we work with indices modulo $m$, where we use \(1\) up to \(m\) as representatives rather than \(0\) up to \(m - 1\)). After renumbering the generators, we can assume that $a_1$ is mapped onto $\inv{a}_2$.
Consider then the set \[ B := \{[a_{i + l}, a_{j + l}]\gamma_3(A_\Gamma) \mid 0 \leq l \leq m - 1\} \] Note that $B$ does not necessarily form a linearly independent set: if $i \equiv j + l \bmod m$ and $j \equiv i + l \bmod m$ for some $1 \leq l \leq m - 1$, then $[a_i, a_j] = - [a_{i + l}, a_{j + l}]$. These two congruences can be simultaneously fulfilled if and only if $2(j - i) \equiv 0 \bmod m$. If this condition is not satisfied, then $B$ is indeed a linearly independent set: each element in $B$ can be rewritten (up to sign) such that the first index is strictly less than the second. No two elements will have the same indices, and by \cref{prop:basisL2(AGamma)}, $B$ is a subset of a basis of $L_2(A_{\Gamma})$. We then can proceed as follows: first remark that \[ \phi_2([a_{i + l}, a_{j + l}]\gamma_3(A_\Gamma)) = \begin{cases} -[a_{i + l + 1}, a_{j + l + 1}]\gamma_3(A_\Gamma) & \parbox[t]{.25\textwidth}{if $i + l \equiv 1 \bmod m$ or $j + l \equiv 1 \bmod m$} \\[1em] [a_{i + l + 1}, a_{j + l + 1}]\gamma_3(A_\Gamma) & \mbox{otherwise}. \end{cases} \] Note that $i + l$ and $j + l$ cannot both be congruent to $1$ modulo $m$ simultaneously, as otherwise $i \equiv j \bmod m$, which is impossible as $1 \leq i < j \leq m$ (and thus also $m \geq 2$). As all elements in $B$ are distinct, there will be precisely two elements $[a_{i + l}, a_{j + l}]\gamma_3(A_\Gamma)$ that are mapped to $-[a_{i + l + 1}, a_{j + l + 1}]\gamma_3(A_\Gamma)$. Moreover, $\phi_2(\Span_\mathbb{Z}(B)) = \Span_\mathbb{Z}(B) =: V$, hence we can consider the matrix of $\phi_2$ restricted to $V$ with respect to this basis and find a matrix $P(e_1, \ldots, e_m)$ where precisely two of the $e_i$ are equal
\chapter{Introduction} The remarkable solution of Seiberg and Witten [\SW] for the complete non-perturbative chiral part of the effective action of $SU(2)$ $N=2$ Yang-Mills theory spontaneously broken to $U(1)$ represents one of the few examples in quantum field theory where the complete quantum effects are known. The result was derived using essentially two inputs: the first was the complete perturbative result for this action, which had been known for many years [\HStelleW,\Seiberg], and the second was an application of electromagnetic duality [\MO]. Since only the first of these results is well established, there have been a number of attempts to give alternative derivations of the Seiberg-Witten effective action. Some of the terms in the effective action have been confirmed using instanton corrections [\Confirm]. It has also been observed that the Seiberg-Witten effective action obeys a particular non-trivial relation [\Matone], which has been shown to be a consequence of the anomalous $N=2$ superconformal Ward identities of a spontaneously broken $N=2$ Yang-Mills theory [\HW]. It has since been argued [\MOS] that this relation implies the full effective action. One of the most mysterious features of the Seiberg-Witten effective action is the way that it is related to an associated Riemann surface. In particular, it was shown how the effective action could be constructed from the Riemann surface using a specific recipe. However, how this surface arises naturally in the theory was not apparent. Shortly after the appearance of references [\SW], Riemann surfaces that were thought to correspond to the spontaneously broken gauge groups $SU(N)$ were proposed [\AF]. An alternative approach has been to attempt to derive the Seiberg-Witten results from a string theory. It was found in [\KV] that the Seiberg-Witten curves occurred in Calabi-Yau compactifications of type IIB string theory and furthermore, from the M theory perspective, these could be interpreted as a single fivebrane wrapped on a Riemann surface. More recently, Witten [\Witten] considered configurations of intersecting NS-fivebranes and D-fourbranes in IIA string theory. He argued that the spontaneously broken $N=2$ Yang-Mills theory appeared on the parallel D-fourbranes. By embedding this picture in M theory, he argued that the configuration could be represented by a single M-fivebrane and was able to show that the Riemann surfaces which arise not only in the $SU(2)$ theory, but also in its $SU(N)$ generalisations, appear in a very natural way. However, in this work the connection between the M-fivebrane dynamics and the final Seiberg-Witten effective action remained obscure. In this paper, we shall use the classical dynamics of the M-theory fivebrane to derive the Seiberg-Witten effective action. In particular, we use the M-fivebrane dynamics as formulated in [\HSW], although there are other formulations [\others]. It was recently shown that the M-fivebrane admits onebrane [\one] and threebrane solutions [\three] within it. We will consider a single M-fivebrane on which some threebranes are moving. The zero modes of this configuration correspond to the modes of the spontaneously broken $N=2$ Yang-Mills theory, which can be viewed as living on the four-dimensional worldvolume of a threebrane. It is a consequence of the Bogomol'nyi condition for the threebranes that the M-fivebrane can be viewed as being wrapped on a Riemann surface which is itself embedded in a four-dimensional space.
This latter space is composed of the two dimensions of the M-fivebrane transverse to the threebranes and the two active directions among the five dimensions transverse to the fivebrane in M theory. We consider the classical fivebrane action for threebrane configurations in which the zero modes are allowed to depend on the threebrane coordinates. We show that it is precisely the same as the full non-perturbative Seiberg-Witten effective action for spontaneously broken $SU(N)$ Yang-Mills gauge theory. In practice we only carry out this calculation for the scalars, but it follows from $N=2$ supersymmetry that it holds for the full action involving the fermions and vectors. \chapter{The Threebrane Effective Action} The M theory fivebrane has a six-dimensional $(2,0)$ tensor multiplet of massless fields on its worldvolume. The classical equations of motion in the absence of fermions and background fields are [\HSW] $$\eqalign{ G^{\hat m\hat n}\nabla_{\hat m}\nabla_{\hat n} X^{a'} &= 0 \ , \cr G^{\hat m\hat n}\nabla_{\hat m}H_{\hat n\hat p\hat q} & = 0 \ ,\cr} \eqn\eqom $$ where the worldvolume indices are $\hat m,\hat n=0,1,...,5$ and the transverse indices are $a',b'=6,7,8,9,10$. In \eqom\ $\nabla$ is the Levi-Civita connection of the metric $g_{\hat m\hat n}=\eta_{\hat m\hat n} + \partial_{\hat m}X^{a'}\partial_{\hat n} X^{a'}$. The three form $H_{\hat m\hat n\hat p}$ is closed but not self-dual. Rather, in the vielbein frame defined by $E_{\hat a}^{\ \hat m}$, $\hat a,\hat b=0,1,...,5$, it is related to the self-dual three form $h_{\hat a\hat b\hat c}$ via $H_{\hat a\hat b\hat c} = m_{\hat a}^{\ \hat d}m_{\hat b}^{\ \hat e}h_{\hat d\hat e\hat c}$, where $m_{\hat a}^{\ \hat b} = \delta_{\hat a}^{\ \hat b} -2h_{\hat a\hat c\hat d}h^{\hat b\hat c\hat d}$. The vielbein $E_{\hat a}^{\ \hat m}$ is associated to the metric $G^{\hat m\hat n}$ through $G^{\hat m\hat n} = E_{\hat a}^{\ \hat m}E_{\hat b}^{\ \hat n}\eta^{\hat a\hat b}$, but the metric $g^{\hat m\hat n}$ is obtained from the vielbein $e_{\hat a}^{\ \hat m} =(m^{-1})_{\hat a}^{\ \hat b}E_{\hat b}^{\ \hat m}$. We refer the reader to references [\HSW,\one] for more details of the formalism and notation. Using the solution of [\three] and adopting the notation of [\Witten], we consider a multi-threebrane solution which lies in the $(x^0,x^1,x^2,x^3)$ plane. We take only two transverse fields $X^{6}$ and $X^{10}$ to be active and assume that $X^{10}$ is a compact dimension of radius $R$. The other three scalars are constant and the three form vanishes. First, let us consider the dependence of the fields on the M-fivebrane coordinates transverse to the threebranes, $x^4$ and $x^5$. This solution preserves half of the six-dimensional supersymmetries if [\three] $$ \partial_4 X^{6} = \partial_5 X^{10} \ ,\ \ \ \ \ \partial_5 X^{6} = -\partial_4 X^{10}\ . \eqn\halfsusy $$ This leads to an $N=2$ vector multiplet on the four-dimensional worldvolume of the threebranes [\three]. Adopting the complex notation $s = (X^{6} + iX^{10})/R$ and $z = \Lambda^2(x^4+ ix^5)$, where $\Lambda$ is a mass scale, we recognise \halfsusy\ as the Cauchy-Riemann equations. Thus $s$ is a complex function of $z$ only. Furthermore, any choice of this function will solve the field equations [\three]. The threebranes can then be thought of as an M-fivebrane in an eight-dimensional space with coordinates $x^0,...,x^5,X^6,X^{10}$, in which $x^0,...,x^3$ are flat ${\bf R}^4$ and $x^4,x^5,X^6,X^{10}$ form a non-trivial four-dimensional space $Q$.
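As a quick check of the identification of \halfsusy\ with the Cauchy-Riemann equations made above, note that with $s=(X^{6}+iX^{10})/R$ and $z=\Lambda^{2}(x^{4}+ix^{5})$ one has, up to the overall positive constants $R$ and $\Lambda^{2}$ (which do not affect the conclusion), $$ \bar\partial s \ \propto\ (\partial_4+i\partial_5)\left(X^{6}+iX^{10}\right) = \left(\partial_4 X^{6}-\partial_5 X^{10}\right) + i\left(\partial_5 X^{6}+\partial_4 X^{10}\right)\ , $$ so the two conditions in \halfsusy\ are precisely the vanishing of the real and imaginary parts of $\bar\partial s$, i.e. the statement that $s$ is a holomorphic function of $z$.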
The M-fivebrane field equations in the presence of these threebranes then imply that the M-fivebrane is wrapped on a Riemann surface $\Sigma$, which is embedded in the four-dimensional space $Q$. The volume of this Riemann surface is set by the scale $R^2$. Given the geometrical construction of the Riemann surface presented here, it is perhaps more natural to assign $z$ the dimensions of length. However, we have assigned $z$ the dimensions of mass in order to make contact with the literature on the Seiberg-Witten solution. Following [\Witten], we define $t=e^{-s}$ and consider a threebrane configuration defined by $$ F(t,z) = 0 \ , \eqn\Fdef $$ where $F$ is a complex polynomial. In order to make contact with [\Witten], let us consider the IIA picture in ten dimensions obtained by taking the small $R$ limit. In this case there are D-fourbranes in the $(x^0,x^1,x^2,x^3,X^6)$ plane, located at the roots of $F(\ {\cdot},z)=0$, and also NS-fivebranes in the $(x^0,x^1,x^2,x^3,x^4,x^5)$ plane, located at the roots of $F(t,{\cdot}\ )=0$ [\Witten]. The threebrane is the intersection of the D-fourbranes with the NS-fivebranes. The restriction to a polynomial $F$ then ensures that there are only a finite number of branes. A simple example of such a configuration is a collection of $N$ threebranes at the positions $d_i$, $i=1,...,N$, with charges $q_i$ [\three] $$ s = s_0-\sum_i q_i\ln (z-d_i)\ , \eqn\sexample $$ where $s_0$ is an arbitrary constant which we set to zero. The corresponding surface is $$ F(t,z)\equiv t^2 - \prod_i (z-d_i)^{2q_i} = 0 \ , \eqn\Fexample $$ where $t$ has been suitably rescaled by $\Lambda$. For integer values of $q_i$ this is a singular Riemann surface, with degenerate roots, while if $q_i = {1\over2}$ it is non-singular. In fact, we will use a more general threebrane configuration below. The scalar fields in the resulting four-dimensional theory in ${\bf R}^4$ are the positions of the threebranes $d_i$. We therefore allow $s$ to be a function of $x^{\mu},\ \mu=0,1,2,3$ by letting the locations of the threebranes become $x^{\mu}$ dependent. As seen from the M-fivebrane, this corresponds to letting the moduli of the Riemann surface depend on
\frac{P_{q,g}^1(\epsilon; {\bf s}_\perp, |{\bf b}_\perp | )}{1-f_{q,g}^{\rm 1\; loss}(R;s_\perp, |{\bf b}_\perp | )\, \epsilon} \int_0^1d\epsilon' \frac{P_{q,g}^2(\epsilon'; {\bf s}_\perp, |{\bf b}_\perp | )}{1-f_{q,g}^{\rm 2\; loss}(R;s_\perp, |{\bf b}_\perp | )\, \epsilon'} \nonumber \\ &\times \frac{d\sigma^{NN}_{q,g} \left( p_{1T}/ [1-f_{q,g}^{\rm 1\, loss}(R; s_\perp, |{\bf b}_\perp | )\, \epsilon], p_{2T}/ [1-f_{q,g}^{\rm2\, loss}(R; s_\perp, |{\bf b}_\perp | )\, \epsilon'] \right)}{dp_{1T}dp_{2T}}\, , \label{eq:master} \eea where $|{\bf b}_\perp |$ is the mean impact parameter for a given collision centrality. For the $b$-tagged dijet case, we further include the contributions from $b$-quarks. In Eq.~\eqref{eq:master}, $T_A\left({\bf s}_\perp \right) = \int_{-\infty}^{\infty} \rho_A({\bf s}_\perp,z ) dz$ is the so-called thickness function in the usual optical Glauber model, where we choose the inelastic nucleon-nucleon scattering cross section $\sigma_{\rm in} = 70$ mb ($42$ mb) to obtain the average number of binary collisions at $\sqrt{s_{NN}} =5.02 $~TeV (200~GeV)~\cite{Miller:2007ri}, respectively. $ P_{q,g}(\epsilon) $ is the probability density for the parent parton to redistribute a fraction $\epsilon$ of its energy through medium-induced soft gluon bremsstrahlung. For reconstructed jets, what matters is the out-of-cone energy loss fraction $f_{q,g}^{\rm loss}$~\cite{Kang:2017xnc} \bea f_{q,g}^{\rm loss}(R;{\rm rad+coll}) = 1- \left( \int_0^{R } dr \int_{\omega_{\rm min}}^E d\omega \, \frac{dN^g_{q,g}(\omega,r)}{d\omega dr} \right) \Bigg/ \left( \int_0^{R_{\rm max} } dr \int_0^E d\omega \, \frac{dN^g_{q,g}(\omega,r)}{d\omega dr} \right) \;, \eea which includes both radiative and collisional energy loss effects, with $\omega_{\rm min}$ being a parameter that controls the energy dissipated by the medium-induced parton shower into the QGP due to collisional processes~\cite{Neufeld:2014yaa}. On the other hand, $\frac{dN^g_{q,g}(\omega,r)}{d\omega dr}$ is the medium-induced gluon distribution~\cite{Vitev:2007ve}, which is the soft emission limit of the complete in-medium splitting functions~\cite{Ovanesyan:2011kn}. The splitting functions themselves are calculated using the formula derived in the $\textrm{SCET}_\textrm{M,G}$ framework \cite{Kang:2016ofv}. They have been independently obtained in the lightcone wavefunction approach~\cite{Sievert:2018imd} for both massless and massive partons, and are evaluated in the QGP medium simulated by the iEBE-VISHNU code package \cite{Shen:2014vra}. The same model of the medium has been recently used to calculate quarkonium suppression at the LHC~\cite{Aronson:2017ymv} and soft-drop groomed momentum sharing distributions~\cite{Li:2017wwc}. Numerical evaluation of the splitting functions requires multi-dimensional integration over the jet production point, the propagation of the jet in matter, and the transverse momentum dependence of the jet-medium cross section. Since the dimension of the integral is larger than 4, we use numerical integration based on the Monte Carlo method. In particular, the VEGAS algorithm \cite{PETERLEPAGE1978192} implemented in the CUBA multidimensional numerical integration library \cite{Hahn:2004fe} is used, because its adaptive importance sampling is efficient for integrands with localized peaks. The splitting function calculation code is written in C++.
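To illustrate why importance sampling pays off for such localized integrands, here is a small self-contained sketch (a toy Gaussian peak in place of the actual in-medium splitting kernels, and plain Python in place of the C++/CUBA implementation; VEGAS builds a comparable proposal adaptively rather than being handed one):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 5, 200_000
mu, sig = 0.3, 0.02                      # toy integrand peaked near (0.3, ..., 0.3)

def integrand(x):
    # Stand-in for a sharply peaked multi-dimensional integrand.
    return np.exp(-np.sum((x - mu) ** 2, axis=1) / (2.0 * sig ** 2))

# Plain uniform Monte Carlo over the unit hypercube: huge variance,
# since almost no samples land on the peak.
x = rng.random((n, dim))
est_uniform = integrand(x).mean()

# Importance sampling from a Gaussian proposal matched to the peak.
y = rng.normal(mu, 2.0 * sig, size=(n, dim))
q = np.prod(
    np.exp(-((y - mu) ** 2) / (8.0 * sig ** 2)) / np.sqrt(8.0 * np.pi * sig ** 2),
    axis=1,
)                                         # proposal density, per sample
inside = np.all((y > 0.0) & (y < 1.0), axis=1)
est_is = np.mean(np.where(inside, integrand(y) / q, 0.0))

print(f"uniform MC: {est_uniform:.3e}   importance sampling: {est_is:.3e}")
```

With the same number of samples, the importance-sampled estimate has a far smaller variance, which is the property the adaptive VEGAS grid exploits for the splitting-function integrals.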
The integrals are evaluated on a Xeon cluster with task parallelization over the different kinematic variables, such as energy, momentum, quark mass, or the splitting channel, utilizing multiple CPU cores. Integration ranges are determined following the study presented in Ref.~\cite{Ovanesyan:2011kn}. Once we obtain the medium-modified differential cross section $d\sigma^{AA}/dp_{1T}dp_{2T}$, we then use Eqs.~\eqref{eq:mass} and \eqref{eq:zJ} to compute the dijet invariant mass distribution $d\sigma^{AA}/dm_{12}$ and imbalance distribution $d\sigma^{AA}/dz_J$ in heavy ion collisions. Such a procedure is perfectly fine for $d\sigma^{AA}/dz_J$, but is an approximation for $d\sigma^{AA}/dm_{12}$, where we assume that the medium modifications of the single jet mass distributions $\langle m_1^2\rangle$ and $\langle m_2^2\rangle$ are much smaller than those of the transverse momenta $p_{1T}$ and $p_{2T}$. Thus, starting from Eq.~\eqref{eq:mass}, we obtain \bea \frac{d\sigma^{AA}}{dm_{12}} = \int dp_{1T} dp_{2T} \frac{d\sigma^{AA}}{dp_{1T}dp_{2T}} \delta\left(m_{12} - \sqrt{\langle m_1^2\rangle_{pp} + \langle m_2^2\rangle_{pp} + 2p_{1T}p_{2T}\langle\mathrm{cosh(\Delta \eta)} - \mathrm{cos}(\Delta \phi)\rangle_{pp}} \right), \label{eq:massAA} \eea where we have used the same values for $\langle m_1^2\rangle$, $\langle m_2^2\rangle$, and $\langle\mathrm{cosh(\Delta \eta)} - \mathrm{cos}(\Delta \phi)\rangle$ as those in p+p collisions, as denoted by the subscript $pp$. Such an approximation is well-justified. For example, mass distributions for single inclusive jets are indeed not significantly modified, as observed by the ALICE collaboration at the LHC~\cite{Acharya:2017goa}. \section{Phenomenological results at RHIC and the LHC} In this section we first present our phenomenological results for both inclusive and $b$-tagged dijet production in A+A collisions at the LHC, as well as for the future sPHENIX experiment at RHIC. To investigate dijet production in heavy ion collisions and quantify its deviation from the baseline results in elementary p+p reactions, we start with the two-dimensional nuclear modification factor \bea R_{AA}(p_{1T}, p_{2T}, |{\bf b}_\perp|) = \frac{1}{\langle N_{\rm bin}\rangle}\frac{d\sigma^{AA}(|{\bf b}_\perp|)/dp_{1T}dp_{2T}}{d\sigma^{pp}/dp_{1T}dp_{2T}}, \eea where $|{\bf b}_\perp|$ is the corresponding impact parameter and $\langle N_{\rm bin}\rangle$ is the average number of nucleon-nucleon scatterings for a given centrality class. In this paper, we focus on the most central collisions. In Fig.~\ref{fig:RAA_3D_LHC}, we show 3D plots of the nuclear modification factor $R_{AA}$ as a function of the jet transverse momenta $p_{1T}$ and $p_{2T}$ simultaneously. The calculations are done for the production of dijets with radius $R=0.4$ in central ($0-10\%$) Pb+Pb collisions at the LHC energy $\sqrt{s_{NN}} = 5.02$ TeV. We integrate the rapidities of both jets over the interval $|y|<2$. For the medium effects, we choose the coupling between the jet and the medium to be $g=1.8$. This is consistent with the value used in our previous studies for single inclusive jets~\cite{Kang:2017frl}, vector-boson-tagged jets~\cite{Kang:2017xnc}, jet substructure~\cite{Chien:2015hda,Chien:2016led}, and single inclusive hadrons~\cite{Kang:2014xsa,Chien:2015vja,Kang:2016ofv} in A+A collisions. The left figure is for $b$-tagged dijet production, while the right is for inclusive dijets.
We note that while we plot the full symmetric range in $p_{1T}$ and $p_{2T}$, we do have in mind that the first jet (1) will be the trigger or leading jet and the second jet (2) will be the recoil or subleading jet. Thus, we incorporate the average path-length and color-charge bias effects in our calculation. \begin{figure}[hbt] \includegraphics[width=3.0in]{figures-new/CMS/CMS-b-tagged-RAA-3D.pdf} \hskip 0.2in \includegraphics[width=3.0in]{figures-new/CMS/CMS-inclusive-RAA-3D.pdf} \caption{Nuclear modification factor for $b$-tagged (left) and inclusive (right) dijet production in Pb+Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV. Kinematic cuts are implemented in our simulations as in the CMS measurements~\cite{Sirunyan:2018jju}.} \label{fig:RAA_3D_LHC} \end{figure} \begin{figure}[hbt] \includegraphics[width=3.0in]{figures-new/sPHENIX/sPHENIX-b-tagged-RAA-3D.pdf} \hskip 0.2in \includegraphics[width=3.0in]{figures-new/sPHENIX/sPHENIX-inclusive-RAA-3D.pdf} \caption{Nuclear modification factor for $b$-tagged (left) and inclusive (right) dijet production in A+A collisions at $\sqrt{s_{NN}}=200$ GeV. Kinematic cuts implemented in our simulations are the same as those from the sPHENIX collaboration~\cite{sPHENIX}.} \label{fig:RAA_3D_sPHENIX} \end{figure} As one can clearly see, the largest suppression occurs along the diagonal $p_{1T} = p_{2T}$, consistent with our expectation. In the region away from the diagonal, there is a striking enhancement. As the future sPHENIX~\cite{Adare:2015kwa} experiment will have good sensitivity for measuring both inclusive and $b$-tagged dijet production, it is an opportune time to make predictions for sPHENIX kinematics. In Fig.~\ref{fig:RAA_3D_sPHENIX} we make similar 3D plots of $R_{AA}$ for $b$-tagged (left) and inclusive (right) dijet production at the sPHENIX energy $\sqrt{s_{NN}} = 200$ GeV. Kinematic cuts implemented in our simulations are the same as those from the sPHENIX collaboration~\cite{sPHENIX}. Obviously, the kinematic coverage for the jet transverse momenta is much smaller than that of the jets at the LHC, due to a much smaller center-of-mass energy. However, the suppression is even stronger along the diagonal $p_{1T} = p_{2T}$. This is simply because the cross sections at RHIC energies fall much faster as functions of the jet transverse momenta due to the limited phase space, and thus jet quenching effects are amplified~\cite{Vitev:2004gn,Adil:2004cn,Wang:2004tt,Abelev:2007ra,Adare:2008ad}. If such two-dimensional nuclear modification ratios could be measured in detail, they would provide the most information and insight into jet quenching and heavy flavor dynamics in the medium. However, the statistics necessary to perform such measurements make this, at present, quite difficult. In practice, one usually integrates out one of the differential variables and thus obtains a one-dimensional nuclear modification ratio. In this respect, the conventional dijet momentum imbalance $z_J$ and asymmetry $A_J$ distributions have been extensively studied in the literature. The medium modification of these traditional distributions emphasizes the difference in the quenching of the dijet production, which has been observed to be relatively small. We will present such studies toward the end of this section.
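Schematically, the two-dimensional ratio amounts to an element-wise division of the binned spectra; a toy sketch (made-up spectra and numbers, only to fix the conventions of the $R_{AA}$ definition above):

```python
import numpy as np

# Toy binned dijet spectra on a (p1T, p2T) grid; in the real calculation
# dsigma_AA comes from Eq. (master) and dsigma_pp from the p+p baseline.
p1T = p2T = np.linspace(20.0, 200.0, 19)              # GeV, illustrative binning
P1, P2 = np.meshgrid(p1T, p2T, indexing="ij")
dsigma_pp = (P1 * P2) ** -3.0                          # steeply falling toy pp spectrum
N_bin = 1600.0                                         # assumed <N_bin> for this centrality
quench = 0.4 + 0.3 * np.tanh(np.abs(P1 - P2) / 40.0)   # toy: strongest quenching at p1T = p2T
dsigma_AA = N_bin * quench * dsigma_pp                 # toy quenched A+A spectrum

R_AA = dsigma_AA / (N_bin * dsigma_pp)                 # two-dimensional R_AA, as defined above
print(R_AA.min(), R_AA.max())                          # suppression strongest on the diagonal
```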
\begin{figure}[htb]\centering \includegraphics[width=3.0in]{figures-new/CMS/CMS-RAA-inclusive.pdf} \hskip 0.2in \includegraphics[width=3.0in]{figures-new/CMS/CMS-RAA-b-tagged-mass-band.pdf} \caption{The nuclear modification factor $R_{AA}$ is plotted as a function of the dijet invariant mass $m_{12}$ for inclusive (left) and $b$-tagged (right) dijet production in Pb+Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV at the LHC. Left: the band corresponds to a range of the coupling strength between the jet and the medium, $g_{\rm med}=1.8-2.0$. Right: we fix $g_{\rm med}=1.8$, and the band corresponds to
\section{Introduction} \setcounter{equation}{0} Over the past two years several issues in black hole physics have been successfully addressed within the framework of string theory (see \cite{Youm,Peet} for reviews and complete lists of references). The black hole dynamics may be recovered from an effective string description \cite{StroVafa,Mald1,DasMath}. In the dilute gas approximation \cite{MaldStro}, i.e.\ when the left- and right-moving modes on the effective string are free and when anti-branes are suppressed, the Bekenstein-Hawking entropy is correctly reproduced. Further, assuming that this effective string couples to the bulk fields with a Dirac-Born-Infeld type action, it has been possible to find agreement with the classical cross section calculations for scalar and fermionic bulk fields [5-18]. These calculations provide a highly non-trivial test of the effective string model. However, the derivation of the effective string action, including its coupling to the bulk fields, requires several assumptions. In other words, we would like to deduce this action from first principles, as is the case for similar calculations involving the D-3-brane \cite{Kleb,Kleb..,KlebGubs}. One of the purposes of this paper is to fill in this gap. We shall consider the D-5-brane bound state with a constant self-dual worldvolume field strength on a compact $T^4$ studied in \cite{CostaPerry}. This configuration includes as a special case the D5-D1 brane bound state used in the original derivation of the Bekenstein-Hawking entropy formula \cite{StroVafa}. The gauge theory fluctuating spectrum associated with this bound state was found to agree with the spectrum derived from open strings ending on the D-5-brane bound state \cite{Polc,CostaPerry,CostaPerry1}. For this reason the modes associated with the worldvolume fields should be regarded as fundamental excitations of the D-brane system. This includes some modes of the gauge field that are self-dual on $T^4$ and may be called instantons, but should not be interpreted as solitons. We shall see that in the limit where the brane dynamics decouples from the bulk we may define two supersymmetric branches of the theory on the brane, corresponding to the self-dual modes and to the modes associated with the movement of the brane system in the transverse directions. They define the Higgs and Coulomb branches of the theory, which are shown to decouple in the above limit. The Higgs branch is the one associated with the dynamics of the black hole. We derive from first principles the action for the bosonic fields in the Higgs branch, which we call the instanton string action rather than the effective string action. We also consider the coupling of these instanton strings to a minimally coupled scalar in the black hole background, finding agreement with the scattering cross section calculation on the supergravity side. This agreement follows because both the string and the classical calculations have an overlapping domain of validity (this will be our analogue of the double scaling limit introduced by Klebanov \cite{Kleb}), giving a rationale for why both descriptions yield the same result. A deeper explanation is uncovered by Maldacena's duality proposal \cite{Mald2} and subsequent works \cite{Gubs..,Witt3}. We shall elaborate on this proposal. In particular, we argue that {\em the Higgs branch of the large $N$ limit of 6-dimensional super Yang-Mills theory with a 't Hooft twist on a compact $T^4$ is dual to supergravity on $AdS_3\times S^3\times T^4$}.
Based on this interpretation of the duality conjecture, the effective string action should be associated with this large $N$ limit of the theory. We shall give a heuristic derivation of the effective string tension which agrees with previous results \cite{Math,Gubs,HassWadia}. The paper is organised as follows: In section two we shall review the model studied in \cite{CostaPerry} and analyse the brane dynamics when it decouples from the bulk. The regions of validity of both the D-brane and supergravity approximations are explained. In section three we shall find a minimally coupled scalar in our black hole background and derive the corresponding coupling to the instanton strings. Section four is devoted to the supergravity calculation of the scattering cross section, as well as the corresponding D-brane absorption probability rate. In section five we shall describe the double scaling limit where both calculations are expected to agree. After analysing the near horizon geometry associated with our black hole, we consider Maldacena's duality proposal. We give our conclusions in section six. \section{The model} \setcounter{equation}{0} In this section we shall review the D-brane model associated with our five-dimensional black hole. The dynamics of the D-brane system will be derived by starting from the super Yang-Mills (SYM) action. We shall comment on the validity of such an approximation. We review the fluctuating spectrum, study the decoupling of the Higgs and Coulomb branches of the theory when the brane dynamics decouples from the bulk, and derive the action for the instanton strings determining the black hole dynamics. We then write the supergravity solution describing the geometry of our black hole and comment on the validity of the supergravity approximation. Because we are claiming that our model also describes the D5-D1 brane bound state, we shall keep referring to this special case as we proceed. \subsection{D-brane phase} We consider a bound state of two D-5-branes wrapped on $S^1\times T^4$ with coordinates $x^1,...,x^5$ (the generalisation to the case of $n$ D-5-branes is straightforward). Each D-5-brane has winding numbers $N_i$ along $S^1$, $p_i$ along the $x^2$-direction and $\bar{p}_i$ along the $x^4$-direction. Thus, the worldvolume fields take values in the $U(N_1p_1\bar{p}_1+N_2p_2\bar{p}_2)$ Lie algebra \cite{Witt4}. In order to have a non-trivial D-5-brane configuration, we turn on the worldvolume gauge field such that the corresponding field strength is diagonal and self-dual on $T^4$. The non-vanishing components are taken to be (we assume without loss of generality that $\tan{\th_1}>\tan{\th_2}$) \eqna G^0_{23}=G^0_{45}=\frac{1}{2\pi\alpha'} {\rm diag}\left(\underbrace{\tan{\th_1},..., \tan{\th_1}},\underbrace{\tan{\th_2},...,\tan{\th_2}}\right)\ , \nonumber\\ {\footnotesize N_1p_1\bar{p}_1 \ {\rm times}\ \ \ \ \ \ N_2p_2\bar{p}_2\ {\rm times}\ \ \ \ \ \ \ \ } \label{2.1} \eeqna where \eqn \frac{1}{2\pi\alpha'}\tan{\th_i}=\frac{2\pi}{L_2L_3}\frac{q_i}{p_i}= \frac{2\pi}{L_4L_5}\frac{\bar{q}_i}{\bar{p}_i}\ , \label{2.2} \end{equation} with $q_i$ and $\bar{q}_i$ integers and $L_{\hat{\alpha}}=2\pi R_{\hat{\alpha}}$ the length of each $T^4$ circle ($\hat{\alpha}=2,...,5$). This vacuum expectation value for the gauge field breaks the gauge invariance to $U(N_1p_1\bar{p}_1)\otimes U(N_2p_2\bar{p}_2)$. Because the branes are wrapped along the $x^1$-, $x^2$- and $x^4$-directions, the gauge invariance is further broken to $U(1)^{N_1p_1\bar{p}_1+N_2p_2\bar{p}_2}$.
Each D-5-brane carries $Q_{5_i}=N_ip_i\bar{p}_i$ units of D-5-brane charge. Thus, the total D-5-brane charge is \eqn Q_5=N_1p_1\bar{p}_1+N_2p_2\bar{p}_2\ . \label{Q5} \end{equation} Each brane carries fluxes in the $x^2x^3$ and $x^4x^5$ 2-tori. The total fluxes are \eqn \arr{l} {\cal F}_{23}={\displaystyle \frac{1}{2\pi}\int_{T^{^{2}}_{_{(23)}}}} {\rm tr}\ G^0= \left( N_1q_1\bar{p}_1+N_2q_2\bar{p}_2\right)\ ,\\ {\cal F}_{45}={\displaystyle \frac{1}{2\pi}\int_{T^{^{2}}_{_{(45)}}}} {\rm tr}\ G^0= \left( N_1p_1\bar{q}_1+N_2p_2\bar{q}_2\right)\ . \end{array} \label{fluxes} \end{equation} These fluxes induce a 't Hooft twist on the fields [30-33], i.e. the worldvolume fields obey twisted boundary conditions on $T^4$. Also, due to this vacuum expectation value for the field strength, the D-5-branes carry other D-brane charges. There are $Q_3={\cal F}_{45}$ D-3-brane charge units associated with D-3-branes parallel to the $(123)$-directions, and $Q_{3'}={\cal F}_{23}$ D-3-brane charge units associated with D-3-branes parallel to the $(145)$-directions. Furthermore, the instanton number associated with the background field strength is non-zero. As a consequence, the bound state carries the D-string charge \cite{Doug} \eqn Q_1=N_{ins}=\frac{1}{16\pi^2}\int_{T^4}{\rm tr}\ (G^0\wedge G^0)= N_1q_1\bar{q}_1+N_2q_2\bar{q}_2\ . \label{Q1} \end{equation} It is now clear how we can obtain a bound state with the same charges as the D5-D1 brane system. We just have to set the fluxes in (\ref{fluxes}) to zero and the charges $Q_5$ and $Q_1$ are given by (\ref{Q5}) and (\ref{Q1}), respectively. For example, if we set $q_1=\bar{q}_1=1$ and $q_2=\bar{q}_2=-1$, then $N_1p_1=N_2p_2$, $N_1\bar{p}_1=N_2\bar{p}_2$ and the D-string charge is $Q_1=N_1+N_2$. Now we consider the region of validity of the D-brane description of our bound state. Throughout this paper we shall always assume that $g\ll 1$ so closed string effects beyond tree level are suppressed. Also, we assume that the size of $T^4$ is small, i.e. $L_{\hat{\alpha}}\sim\sqrt{\alpha'}$. The effective coupling constant for D-brane string perturbation theory is usually $gN$ for $N$ D-branes on top of each other. However, the presence of a condensate on the D-brane worldvolume induces a factor $\sqrt{1+(2\pi\alpha'G^0)^2}$ in the effective coupling \cite{Abou..}. Thus, in our case the effective string coupling reads \eqn g_{eff}=gN_ip_i\bar{p}_i \sqrt{1+(2\pi\alpha'G^0_{23})^2}\sqrt{1+(2\pi\alpha'G^0_{45})^2} \equiv\frac{r_i^2}{\alpha'}\ . \label{2.3} \end{equation} The length scales $r_i$ will enter the supergravity solution below and we assume for simplicity $r_1\sim r_2$. D-brane perturbation theory is valid for \cite{MaldStro} \eqn r_i\ll 1\ , \label{2.4} \end{equation} where the $r_i$ are now written in string units. In this region open string loop corrections may be neglected and the dynamics for the low lying modes on the brane is determined by the Dirac-Born-Infeld (DBI) action. Our tool to study this region of parameters is the ten-dimensional SYM action reduced to six dimensions. The corresponding bosonic action is \eqn S_{YM}= -\frac{1}{g_{YM}^2}\int d^6x\ {\rm tr}\left\{\frac{1}{4}(G_{\alpha\b})^2+ \frac{1}{2}(\p_{\alpha}\phi_m+i[B_{\alpha},\phi_m])^2- \frac{1}{4}[\phi_m,\phi_n]^2\right\}\ , \label{2.5} \end{equation} where $\alpha,\b=0,...,5$ and $m,n=6,...,9$. We are taking
macro declaration has to be ignored. This is done by the expansion of the \romannumeral-`\. -- this primitive expands to nothing and the possible space is consumed during this expansion. (0067) -- P. O. 06. 2014 ## Removing the last space in the parameter For example the parameters of \chap, \sec and \secc include the unwanted space at the end. But not every time. For example \sec Damned \LaTeX <empty line> doesn't generate the space at the end. OPmac removes this space with \unskip when the parameter is used. This example removes the last space at macro level. It is sufficient to save the parameter to the \tmpb macro by \def\tmpb{#1} and to do: \addto\tmpb\end \replacestrings{ \end}{}\replacestrings{\end}{} Now, the \tmpb contains the parameter without the last possible space. (0068) -- P. O. 06. 2014 ## Macro with a parameter to the end of line The macro \eoldef gives the possibility to define a macro with one parameter separated by the end of the current line. For example \eoldef\foo#1{\message{param: "#1"}} \foo this is parameter and this is next text. We can see that \eoldef has the same syntax as \def for macros with one unseparated parameter, but the declared macro takes its parameter up to the end of the line. In the example above, the parameter #1 is the text "this is parameter". Warning. Since OPmac version Apr. 2016, the \eoldef macro is a part of the OPmac macros and the macros \tit, \chap, \sec and \secc are defined by it. The previous versions of these macros were simply separated by \par. This is not 100% backward compatible but I hope that the advantages beat the disadvantages. Advantages: The "last-space" in the parameter disappears. The user can write "\sec something" as the last line in the file. Disadvantages: if your source file divides the title text into more lines then you must hide the end of each line (but not the last line) by %. If you are using \sec inside your own macro then you cannot write ... \sec#1\par ... because the error message occurs: ! Paragraph ended before \eoldefA was complete. You can use \bracedparam from OPmac trick 0036 and write ... \bracedparam\sec{#1} ... in your macro. If you want to deactivate the EOL separation of parameters of the \tit, \chap, \sec, \secc macros and to return to the original behavior (separation by an empty line), you can use: \def\eoldefA{\endgroup\eoldefB} \def\eoldefB#1#2\par{\csname\string#1:M\endcsname{#2}} (0121) -- P. O. 08. 2015 ## The key=value dictionaries We want to set data in the form keyA=valueA, keyB=valueB etc., i.e. as a comma-separated list of assignments. The macro \kv{key} expands to the appropriate value or to \kvunknown if the key is not set. The list of key=value pairs is read by the \kvscan macro and it has to be terminated by comma, comma, equal sign, comma. The macros \kv and \kvscan are usable for a macro programmer. For example: \def\mymacrodefault {color={}, width=0.4pt} \optdef\mymacro [] {\bgroup \expandafter \kvscan\mymacrodefault,,=,% default values \expandafter \kvscan\opt,,=,% values given by user \if^\kv{color}^\else \localcolor\kv{color}\fi % color setting \let\vruleprimitive=\vrule \def\vrule{\vruleprimitive width\kv{width}}% rule width setting ... \egroup } The \optdef macro from OPmac trick 0067 is used here. The user can write \mymacro without parameters or (for example) \mymacro[width=.7pt] or \mymacro[width=.8pt, color=\Red]. 
The macros \kv and \kvscan can be implemented by the code: \def\kv#1{\expandafter\ifx\csname kv:#1\endcsname \relax \expandafter\kvunknown \else \csname kv:#1\expandafter\endcsname\fi } \def\kvunknown{???} \def\kvscan #1#2=#3,{\ifx#1,\else \kvdef{kv:#1#2}{#3}\expandafter\kvscan\fi} \let\kvdef=\sdef The \kvscan macro reads the key in two parameters #1#2 in order to give the possibility to ignore the spaces after commas in the comma-separated list. But spaces around the equal sign are not allowed. If you need to allow them then you can pre-process the list of key=value pairs by the code: ... \let\tmpb=\opt \replacestrings{ =}{=}\replacestrings{= }{=}% \expandafter \kvscan\tmpb,,=,% You can use the following code instead of \let\kvdef=\sdef \def\kvdef#1{\expandafter\edef\csname#1\endcsname} if you wish to expand the values at the time of the setting. If you need to ask whether a key is set already, you can use \isdefined{kv:key}\iftrue. (0069) -- P. O. 06. 2014 ## The options in key=value dictionaries We want to mix comma-separated key=value items with single options, i.e. single words not followed by the equal sign and a value. For example: \mymacroset {width=0.8pt, draft, silent} The \replacestrings can help with this task, as in the following example: \def\mymacroset#1{\def\tmpb{#1,}\replacestrings{ =}{=}\replacestrings{= }{=}% \replacestrings{draft,}{my-final=0,}% \replacestrings{final,}{my-final=1,}% \replacestrings{silent,}{my-message=0,}% \replacestrings{verbose,}{my-message=1,}% \expandafter\kvscan\tmpb,=,% \if1\kv{my-message}\let\mymessage=\message \else \def\mymessage##1{}\fi ... } \mymacroset {width=0.7pt, final, silent} % default values The single options are transformed internally to the key=value format and then the \kvscan from the previous OPmac trick 0069 is used. We can ask whether an option was used by \if1\kv{...} as follows: \if1\kv{my-final}The "final" option was used.\else The "draft" option was used.\fi (0114) -- P. O. 30. 6. 2014 ## Nested brackets of another type than {} TeX checks the nesting of brackets only for one type of brackets: {}. OPmac sometimes uses macros with parameters surrounded by [] brackets, but they cannot be simply nested. If you write (for example) \label[a[b]c] then you get the label "a[b" and the text "c]" is printed. You can check the pairs and nesting of another type of brackets than {} by the \ensurebalanced macro: \def\macro[#1]{\ensurebalanced[]\macroA{#1}} \def\macroA#1{the parameter "#1" has balanced brackets [].} for example: \macro[a[b]c] prints: the parameter "a[b]c" has balanced brackets []. The \label macro can be redefined so that balanced [] brackets can be nested: \def\tmp{\def\labelA##1} \expandafter\tmp\expandafter{\label[#1]} \def\label[#1]{\ensurebalanced[]\labelA{#1}} The \ensurebalanced macro is defined by: \def\ensurebalanced#1#2#3#4{% \isbalanced#1#2{#4}\iftrue #3{#4}% \else \def\ensurebalancedA##1##2#2{% \isbalanced#1#2{##1#2##2}\iftrue #3{##1#2##2}% \else \def\next{\ensurebalancedA{##1#2##2}}\expandafter\next\fi }% \def\next{\ensurebalancedA{#4}}\expandafter\next\fi } \def\isbalanced#1#2#3\iftrue{\tmpnum=0 \isbalancedA#1#2#3\isbalanced} \def\isbalancedA#1#2#3{% \ifx\isbalanced#3\def\next{\csname ifnum\endcsname\tmpnum=0 }% \else \def\next{\isbalancedA#1#2}% \isonetoken#3\iftrue \fi\fi\next } \def\isonetoken#1#2\iftrue{\ifx\isbalanced#2\isbalanced} The macro \ensurebalanced checks that the text is balanced by \isbalanced with the brackets [=#1 and ]=#2. If the text is balanced then \macroA is executed, i.e. #3 followed by the read parameter. 
Else the \ensurebalancedA macro is executed (maybe recursively). It reads the next part of the parameter. (0077) -- P. O. 08. 2014 ## Reading parameter text token per token We create a macro \readtoks{parameter} which reads its parameter token by token and does arbitrary processing for each token. The default version of the macro simply reads the tokens and saves them to the \readtoksO token list. The usage of a slight modification of this principle can be found in the next OPmac trick 0079, where \readtoks scans the list of tokens, doubles each hash token (category 6) and converts the control sequence \internalXparam to one hash token. For example: \readtoks{aha # uff {\internalXparam1 {a}} \line} is converted to: \toks1={aha ## uff {#1 {a}} \line} A slight modification of this macro is used in answers at tex.stackexchange.com. The main problem of the \readtoks macro is the fact that we cannot simply read token by token with a macro with one unseparated parameter, because this processing consumes spaces and behaves badly when braces occur in the parameter. These situations need to be managed by \futurelet. \newtoks\readtoksT \newif\ifreadtoksG (the full definition of \readtoks is omitted here) The macro scans single tokens and runs the user-defined macro \readtoksX where the processing for each token can be programmed. If this macro isn't defined then \readtoks only saves the tokens to the output register \readtoksO (which is defined as \toks1) and the result is the same as the simple setting \toks1={parameter}. Note how the braces are processed. If there is an open brace then we open a new group by \begingroup and start filling \readtoksT from an empty beginning. The tokens between braces are saved to \readtoksT in such a case. When the closing brace is encountered then the actual \readtoksT is expanded, surrounded by braces and put after the contents of the \readtoksT from the previous level of grouping. And the group is finished, of course. (0088) -- P. O. 01. 2015 ## Expandable reading text token per token The previous OPmac trick reads the given text token per token at the main processor level. Now, we'll do the same at the expand processor level. There is a problem with spaces, which are ignored by a macro with a non-separated parameter. So a naive loop \def\apply#1{[#1]} applied token by token would treat \readtokens This is some text.\end the same as \readtokens Thisissometext.\end. We need to create a macro which respects the spaces and is fully expandable. Moreover, the macro must respect the braces. Braces are the second problem of macros with a non-separated parameter: {abc} is read as one parameter and the braces are removed. Both these problems are solved by the \etoks{tokens} macro which is fully expandable, executes \eapply on each token of its parameter and respects spaces and braces. For
## Continuous graph (definition, algebra)

A function f(x) is continuous at a point x = a if the following three conditions are satisfied: f(a) is defined, the limit of f(x) as x approaches a exists, and this limit equals f(a). If the same values work, the function meets the definition. In other words, a function is continuous if its graph has no holes or breaks in it, no sudden jumps. Continuous functions are in some sense the "nicest" functions possible, and many proofs in real analysis rely on approximating arbitrary functions by continuous functions. Continuity also lays the foundational groundwork for the intermediate value theorem and the extreme value theorem: if we pick any value M between f(a) and f(b) of a continuous function and draw a horizontal line out from this value, the line will hit the graph in at least one point.

More precisely, f is continuous at a if for every real ε > 0 (ε is called epsilon) there exists a positive real δ > 0 (δ is called delta) such that whenever x is less than δ away from a, f(x) is less than ε away from f(a); that is, |x - a| < δ guarantees that |f(x) - f(a)| < ε. A function whose graph has a closed dot and an open dot stacked at x = a (a jump) is not continuous there: if ε is less than the distance between the closed dot and the open dot, there is no δ > 0 for which the condition |x - a| < δ guarantees |f(x) - f(a)| < ε. In this dot notation, a closed dot at (2, 3) means that the function value is actually 3 at x = 2, while an open dot at (2, 2) means that the function value approaches 2 as you draw the graph from the left but is not actually 2 at x = 2 (f(2) ≠ 2).

So what is not continuous (also called discontinuous)? Look out for holes, jumps or vertical asymptotes (where the function heads up or down towards infinity). A function with vertical asymptotes at x = ±1, for example, is only continuous on the intervals (-∞, -1), (-1, 1), and (1, ∞). The graph of y = 1/(x - 1) is likewise a discontinuous graph. A jump example: the function approaches ½ as x gets close to 1 from the right and the left, but suddenly jumps to 1 when x is exactly 1; this can be written as f(1) = 1 ≠ ½. On a close look, the graph of the floor function resembles a staircase of such jumps. An important but subtle point on discontinuities: a function that is not continuous at a certain point is not necessarily discontinuous at that point. If the function is simply undefined at some points but meets all of the other requirements (for example, the graph is continuous between the undefined points), it is still considered piecewise continuous; by contrast, functions with jumps are truly discontinuous when they are defined at the jump. (The quadratic function, for its part, is defined for all real numbers and may be evaluated at any positive or negative number or ratio thereof.)

Discrete and continuous graphs: this will be a very basic definition but an understandable one. A discrete graph shows isolated points, for instance the points (1, 3), (2, 6), (3, 9), and (4, 12), and the values of the function are not connected with each other. A continuous domain means that all values of x included in an interval can be used in the function: notice how any number of pounds could be chosen between 0 and 1, 1 and 2, 2 and 3, 3 and 4, so we want to say that such an f(x) is a continuous function. The definition given by NCTM in The Common Core Mathematics Companion defines a linear function as a relationship whose graph is a straight line, but a physicist and mathematics teacher is saying linear functions can be discrete.

Rules for combining continuous functions: let f(x) and g(x) be continuous at x = a. Then f(x) + g(x) is continuous at x = a, and f(x) - g(x) is continuous at x = a. (Proof: we have to check the three conditions above for (f(x) + g(x)) at x = a.)

A side note on terminology: in graph theory and statistics, a graphon (also known as a graph limit) is a symmetric measurable function W : [0, 1]² → [0, 1] that is important in the study of dense graphs. Graphons arise both as a natural notion for the limit of a sequence of dense graphs and as the fundamental defining objects of exchangeable random graph models.
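The jump example above can be checked numerically. A minimal sketch in Python (the function f below is a hypothetical stand-in matching the description in the text, not taken from the original lesson):

def f(x):
    # f(x) -> 1/2 as x -> 1 from either side, but f(1) = 1: a jump at x = 1
    return 1.0 if x == 1 else 0.5

for h in [0.1, 0.01, 0.001]:
    print(f(1 - h), f(1 + h))  # the one-sided samples stay at 0.5
print(f(1))                    # 1.0, so f(1) = 1 != 1/2 and f is not continuous at x = 1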
import matplotlib.pyplot as plt
import numpy as np


def gaussian(x, x0, xsig):
    '''
    function to calculate the gaussian probability
    INPUT:
       x = the data point or parameter value
       x0 = mu
       xsig = sigma
    '''
    factor = 1. / (np.sqrt(xsig * xsig * 2. * np.pi))
    exponent = -np.divide((x - x0) * (x - x0), 2 * xsig * xsig)
    return factor * np.exp(exponent)


def likelihood_evaluation(model_error, star_error_list, abundance_list, star_abundance_list):
    '''
    This function evaluates the Gaussian for the prediction/observation comparison
    and returns the resulting log likelihood. The model error and the observed error
    are added quadratically.
    INPUT:
       model_error = the error coming from the model's side
       star_error_list = the error coming from the observations
       abundance_list = the predictions
       star_abundance_list = the observations
    OUTPUT:
       likelihood = the summed log likelihood
    '''
    # model and observational errors are added in quadrature
    error = np.sqrt(model_error * model_error + star_error_list * star_error_list)
    list_of_likelihoods = gaussian(star_abundance_list, abundance_list, error)
    log_likelihood_list = np.log(list_of_likelihoods)
    likelihood = np.sum(log_likelihood_list)
    return likelihood


def sample_stars(weight, selection, element1, element2, error1, error2, nsample):
    '''
    This function samples stars along a chemical evolution track, properly taking into
    account the SFR and the selection function of the stellar population (e.g. red-clump
    stars). It can be used to produce mock observations which can be compared to survey data.
    INPUT:
       weight = the SFR of the model
       selection = the age-distribution of a stellar population (e.g. red-clump stars);
                   the time intervals need to be the same as for 'weight'
       element1 = the values of one element for the ISM of the model (same time-intervals as SFR)
       element2 = the values of the other element for the ISM
       error1 = the measurement error of the first element
       error2 = the measurement error of the second element
       nsample = number of stars that should be realized
    '''
    # cumulative star-formation weight folded with the selection function, normalised to 1
    weight = np.cumsum(weight * selection)
    weight /= weight[-1]
    sample = np.random.random(nsample)
    sample = np.sort(sample)
    # count how many of the uniform draws fall into each time bin
    stars = np.zeros_like(weight)
    for i, item in enumerate(weight):
        if i == 0:
            count = len(sample[np.where(np.logical_and(sample > 0., sample <= item))])
            stars[i] = count
        else:
            count = len(sample[np.where(np.logical_and(sample > weight[i - 1], sample <= item))])
            stars[i] = count
    # assign the ISM abundances of each time bin to the stars born in it
    sun_feh = []
    sun_mgfe = []
    for i in range(len(weight)):
        if stars[i] != 0:
            for j in range(int(stars[i])):
                sun_feh.append(element1[i])
                sun_mgfe.append(element2[i])
    sun_feh = np.array(sun_feh)
    sun_mgfe = np.array(sun_mgfe)
    # perturb the sampled abundances with the observational errors
    perturbation = np.random.normal(0, error1, len(sun_feh))
    sun_feh += perturbation
    perturbation = np.random.normal(0, error2, len(sun_feh))
    sun_mgfe += perturbation
    return sun_feh, sun_mgfe
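A minimal usage sketch of sample_stars (our own toy example; the star-formation history and abundance tracks below are invented for illustration and are not part of the original module):

sfr = np.ones(5)                        # toy SFR: 5 equal time bins
selection = np.ones(5)                  # flat selection function
feh_track = np.linspace(-1.0, 0.0, 5)   # ISM [Fe/H] per time bin
mgfe_track = np.linspace(0.4, 0.0, 5)   # ISM [Mg/Fe] per time bin
mock_feh, mock_mgfe = sample_stars(sfr, selection, feh_track, mgfe_track,
                                   error1=0.05, error2=0.05, nsample=100)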
def sample_stars_all_elements(weight, selection, elements, errors, nsample, random_seed=None):
    '''
    This function samples stars along a chemical evolution track, properly taking into
    account the SFR and the selection function of the stellar population (e.g. red-clump
    stars). It can be used to produce mock observations which can be compared to survey data.
    *This is an updated version of sample_stars() and can return mock observations with
    abundances for as many elements as are tracked*
    INPUT:
       weight = the SFR of the model
       selection = the age-distribution of a stellar population (e.g. red-clump stars);
                   the time intervals need to be the same as for 'weight'
       elements = the ISM abundance of all tracked elements of the model (same time-intervals as SFR)
       errors = the measurement error of each element
       nsample = number of stars that should be realized
    '''
    if random_seed:
        np.random.seed(random_seed)
    # cumulative star-formation weight folded with the selection function, normalised to 1
    weight = np.cumsum(weight * selection)
    weight /= weight[-1]
    sample = np.random.random(nsample)
    sample = np.sort(sample)
    stars = np.zeros_like(weight)
    for i, item in enumerate(weight):
        if i == 0:
            count = len(sample[np.where(np.logical_and(sample > 0., sample <= item))])
            stars[i] = count
        else:
            count = len(sample[np.where(np.logical_and(sample > weight[i - 1], sample <= item))])
            stars[i] = count
    # one row per element, one column per mock star
    abundances = np.zeros((len(elements), nsample))
    n = 0
    for i in range(len(weight)):
        if stars[i] != 0:
            for j in range(int(stars[i])):
                for k in range(len(elements)):
                    abundances[k][n] = elements[k][i]
                n += 1
    abundances = np.array(abundances)
    # perturb each element with its observational error
    for i, element in enumerate(elements):
        perturbation = np.random.normal(0, errors[i], len(abundances[i]))
        abundances[i] += perturbation
    return abundances


def mock_abundances(a, nsample, abundances, elements_to_sample, element_error='solar',
                    tracer='red_clump', random_seed=None):
    '''
    This function provides a convenient wrapper for the SampleStars() function.
    1) Loads the selection function and interpolates to the time steps of the Chempy run.
    2) Compiles and formats abundances and errors for each element.
    3) Passes abundances, errors, selection function, and SFR to SampleStars().
    INPUT:
       a = the model parameters used for the Chempy run
       nsample = number of stars that should be realized
       abundances = abundance output from Chempy()
       elements_to_sample = list of strings corresponding to element symbols that you'd like to sample
       element_error = observational error to provide scatter to the sample:
          - 'solar': use the observational errors of the solar abundances
          - float: uniform observational error across all abundances
          - np.ndarray: array of observational errors for each element
            (must be the same length as elements_to_sample)
          - dict: of the form {<element symbol>: <observational error for element>}
       tracer = stellar tracer to sample; looks for the age distribution in
                inputs/selection/<tracer>.npz
    OUTPUT:
       sampled_abundances = dictionary with an array of nsample abundances for each key in
          elements_to_sample. All abundances are given as [X/Fe] except for iron, which is
          given as [Fe/H], i.e. [Al/Fe] for star0 is sampled_abundances['Al'][0] and [Fe/H]
          for star0 is sampled_abundances['Fe'][0]. Also includes the abundance error for
          each star under the element key + '_err', i.e. sigma[Fe/H] for star0 is
          sampled_abundances['Fe_err'][0].
    '''
    from .solar_abundance import solar_abundances
    from . import localpath
    basic_solar = solar_abundances()
    getattr(basic_solar, a.solar_abundance_name)()
    # Red Clump Selection Criteria
    try:
        temp = np.load(localpath + "input/selection/{}.npz".format(tracer))
    except:
        raise Exception('Valid age distribution file not found for {}'.format(tracer))
    selection_raw = temp['age_dist']
    time_selection_raw = temp['time']
    selection = np.interp(abundances['time'], time_selection_raw[::-1], selection_raw)
    elements = []
    errors = []
    for i, element in enumerate(elements_to_sample):
        if element == 'Fe':
            # iron is sampled as [Fe/H]
            elements.append(abundances[element][1:])
        else:
            # all other elements are sampled as [X/Fe]
            elements.append(abundances[element][1:] - abundances['Fe'][1:])
        # the same error lookup applies to every element
        if element_error == 'solar':
            errors.append(float(basic_solar.table['error'][np.where(basic_solar.table['Symbol'] == element)]))
        elif type(element_error) == float:
            errors.append(element_error)
        elif type(element_error) == np.ndarray:
            assert len(element_error) == len(elements_to_sample), "Length of element_error array must match length of elements_to_sample"
            errors.append(element_error[i])
        elif type(element_error) == dict:
            assert element in list(element_error.keys()), '{} not found in element_error dictionary'.format(element)
            errors.append(element_error[element])
        else:
            assert 1 == 0, "Improper element_error provided"
    sampled_abundances = sample_stars_all_elements(abundances['weights'][1:], selection[1:],
                                                   elements, errors, nsample, random_seed)
    sampled_abundances = {y: z for y, z in zip(elements_to_sample, sampled_abundances)}
    for i, element in enumerate(elements_to_sample):
        sampled_abundances[element + '_err'] = np.ones(nsample) * errors[i]
    return sampled_abundances


#def gaussian_1d_log(x,x0,xsig):
#    return -np.divide((x-x0)*(x-x0),2*xsig*xsig)
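A hedged usage sketch for mock_abundances (left as comments because it presupposes a completed Chempy run; the variables `a` and `abundances` are assumptions standing in for that run's output):

# a, abundances = ...  # model parameters and abundance output of a Chempy run
# sampled = mock_abundances(a, nsample=500, abundances=abundances,
#                           elements_to_sample=['Fe', 'Mg', 'Al'],
#                           element_error=0.05, tracer='red_clump', random_seed=42)
# sampled['Fe']     -> 500 mock [Fe/H] values
# sampled['Mg']     -> 500 mock [Mg/Fe] values
# sampled['Fe_err'] -> the assumed error of each mock star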
def yield_plot(name_string, yield_class, solar_class, element):
    '''
    This function plots [X/Fe] for the complete mass and metallicity range of a yield class.
    INPUT:
       name_string = a string which is included in the saved file name
       yield_class = a Chempy yield class (e.g. 'Nomoto2013' from SN2_Feedback, see tutorial)
       solar_class = a Chempy solar class (needed for normalisation)
       element = the element for which to plot the yield table
    OUTPUT:
       the figure will be saved into the current directory
    '''
    elements = np.hstack(solar_class.all_elements)
    solar_fe_fraction = float(solar_class.fractions[np.where(elements == 'Fe')])
    solar_element_fraction = float(solar_class.fractions[np.where(elements == element)])
    plt.clf()
    fig = plt.figure(figsize=(13, 8), dpi=100)
    ax = fig.add_subplot(111)
    ax.set_title('Yields of %s' % (name_string))
    ax.set_xlabel(r'metallicity in $\log_{10}\left(\mathrm{Z}/\mathrm{Z}_\odot\right)$')
    ax.set_ylabel('[%s/Fe]' % (element))
    for item in yield_class.metallicities:
        for j, jtem in enumerate(list(yield_class.table[item]['Mass'])):
            ejecta_fe = yield_class.table[item]['Fe'][j]
            ejecta_element = yield_class.table[item][element][j]
            # guard against log10(0) for the zero-metallicity table
            if item == 0:
                metallicity = np.log10(float(1e-7) / solar_class.z)
            else:
                metallicity = np.log10(float(item) / solar_class.z)
            alpha_enhancement = np.log10(ejecta_element / solar_element_fraction) - np.log10(ejecta_fe / solar_fe_fraction)
            mass = jtem
            ax.scatter(metallicity, alpha_enhancement, s=20, marker=u'o')
            ax.text(metallicity, alpha_enhancement, mass)
    plt.savefig('yields_%s.png' % (name_string), bbox_inches='tight')
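A hedged usage sketch for yield_plot (the constructor and loader names below follow Chempy tutorial conventions but are assumptions here, so the calls are left as comments):

# from Chempy.yields import SN2_feedback
# from Chempy.solar_abundance import solar_abundances
# basic_sn2 = SN2_feedback(); basic_sn2.Nomoto2013()
# basic_solar = solar_abundances(); basic_solar.Lodders09()
# yield_plot('Nomoto2013', basic_sn2, basic_solar, 'Mg')   # saves yields_Nomoto2013.png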
def yield_comparison_plot(yield_name1, yield_name2, yield_class, yield_class2, solar_class, element):
    '''
    a function to plot a comparison between two yield sets. It is similar to 'yield_plot',
    only that a second yield set can be plotted.
    INPUT:
       yield_name1 = name of the first yield class
       yield_name2 = name of the second yield class
       yield_class = a Chempy yield class (e.g. 'Nomoto2013' from SN2_Feedback, see tutorial)
       yield_class2 = a Chempy yield class (e.g. 'Nomoto2013' from SN2_Feedback, see tutorial)
       solar_class = a Chempy solar class (needed for normalisation)
       element = the element for which to plot the yield table
    OUTPUT:
       the figure will be saved into the current directory
    '''
    elements = np.hstack(solar_class.all_elements)
    solar_fe_fraction = float(solar_class.fractions[np.where(elements == 'Fe')])
    solar_element_fraction = float(solar_class.fractions[np.where(elements == element)])
    plt.clf()
    fig = plt.figure(figsize=(13, 8), dpi=100)
    ax = fig.add_subplot(111)
    ax.set_title('Yields of %s in blue vs %s in red' % (yield_name1, yield_name2))
    ax.set_xlabel(r'metallicity in $\log_{10}\left(\mathrm{Z}/\mathrm{Z}_\odot\right)$')
    ax.set_ylabel('[%s/Fe]' % (element))
    for item in yield_class.metallicities:
        for j, jtem in enumerate(list(yield_class.table[item]['Mass'])):
            ejecta_fe = yield_class.table[item]['Fe'][j]
            ejecta_element = yield_class.table[item][element][j]
            # guard against log10(0) for the zero-metallicity table
            if item == 0:
                metallicity = np.log10(float(1e-7) / solar_class.z)
            else:
                metallicity = np.log10(float(item) / solar_class.z)
            alpha_enhancement = np.log10(ejecta_element / solar_element_fraction) - np.log10(ejecta_fe / solar_fe_fraction)
            mass = jtem
            # marker size scales with stellar mass
            ax.scatter(metallicity, alpha_enhancement, s=20 * mass, c='b', marker=u'o', alpha=0.5)
            ax.annotate(xy=(metallicity, alpha_enhancement), s=mass, color='b')
    for item in yield_class2.metallicities:
        for j, jtem in enumerate(list(yield_class2.table[item]['Mass'])):
            ejecta_fe = yield_class2.table[item]['Fe'][j]
            ejecta_element = yield_class2.table[item][element][j]
            if item == 0:
                metallicity = np.log10(float(1e-7) / solar_class.z)
            else:
                metallicity = np.log10(float(item) / solar_class.z)
            alpha_enhancement = np.log10(ejecta_element / solar_element_fraction) - np.log10(ejecta_fe / solar_fe_fraction)
            mass = jtem
            ax.scatter(metallicity, alpha_enhancement, s=20 * mass, c='r', marker=u'o', alpha=0.5)
            ax.annotate(xy=(metallicity, alpha_enhancement), s=mass, color='r')
    plt.savefig('yields_comparison_%s_vs_%s_for_%s.png' % (yield_name1, yield_name2, element), bbox_inches='tight')


def fractional_yield_comparison_plot(yield_name1, yield_name2, yield_class, yield_class2, solar_class, element):
    '''
    a function to plot a comparison between the fractional yield of two yield sets. The fractional yield is the
$ L $ the length of the system and $ a $ a short-length cutoff. In particular, for $ \alpha=\beta $ one gets $ D_{\alpha,\alpha}(\xi;t,\tau)\equiv D_{\alpha,\alpha}(\xi;\tau) $, i.e., breaking of time-translational invariance does not affect auto-correlators, although they are different from their equilibrium counterparts. On the other hand, cross-correlators exhibit an explicit time dependence, \begin{multline} D_{\alpha,-\alpha}(\xi;t,\tau)=\frac{\theta_{+}\theta_{-}}{2\pi}\\ \times\ln\left\{\frac{[a^2+ 4 u_f^2 t^2][a^2+4 u_f^2 (t-\tau)^2]}{[a^2+ (-\xi-\alpha u_f (2t-\tau))^2]^2}\right\}, \label{eq:crosscorrelators} \end{multline} encoding the entanglement, and its decay in time, between the bosonic fields $ \phi_{f,+}(x) $ and $ \phi_{f,-}(x)$. Note that cross-correlators are different from zero at any finite time $ t $, while $ D_{\alpha,-\alpha}(\xi;t,\tau)\rightarrow 0 $ for $ t\rightarrow \infty $. By expanding Eq.~\eqref{eq:crosscorrelators} in a Taylor series in the long-time limit $ t\gg \tau,\, \xi/u_f $ one obtains a decay with integer power laws only, whose exponents are thus independent of the quench parameters. In particular, in the local case $ \xi=0 $ on which we will focus in the following one has \begin{equation} D_{\alpha,-\alpha}(0;t,\tau)=\sum_{n=2}^{\infty}\frac{d_n(\tau)}{t^n}, \label{eq:Dabexpansionlocal} \end{equation} with coefficients $ d_n(\tau) $ independent of the chirality. Therefore, in the long-time limit, cross-correlators decay with a leading power-law behavior $ \propto t^{-2} $. Finite cross-correlators $ D_{\alpha,-\alpha}(\xi;t,\tau) $ are a hallmark of the quench-induced entanglement between the two counter-propagating bosonic fields and will determine the long-time relaxation of the system towards its steady state. Moreover, due to their algebraic long-ranged behavior, one would expect observable signatures of their decay in the system properties. \section{Time-dependent spectral function}\label{sec:spectral} We now discuss the influence of quench-induced cross-correlations on a specific example of a fermionic spinless LL with repulsive interactions ($K_\mu \leq 1$) and $K_i>K_f$. Using bosonization~\cite{Voit:1995,vonDelft:1998,Giamarchi:2004}, the system is described in terms of bosonic fields by Eq.~\eqref{eq:H} and the fermionic operator decomposes into right ($ R $) and left ($ L $) channels as $ \psi(x)=e^{i q_F x}\psi_R(x)+e^{-i q_F x} \psi_L(x) $. Here, $q_F$ is the Fermi wave-vector and \begin{equation} \psi_r(x)=\frac{1}{\sqrt{2\pi a}} \exp[-i\sqrt{2\pi}\Phi_r(x)] , \label{eq:bosonization} \end{equation} with $ r=R,L $. The bosonic field $ \Phi_r(x) $ can be expressed in terms of \emph{final} chiral fields as \begin{equation} \Phi_r(x)=\sum_{\eta}A_{(\epsilon_r\eta)}\phi_{f,\eta}(x), \end{equation} with $ 2A_{(\epsilon_r\eta)}=K_f^{-1/2}+\epsilon_r\eta K_f^{1/2} $ and $ \epsilon_{R/L}=\pm1 $. To show how the decay of entanglement between opposite chiral excitations affects the dynamics of the system, we focus on the behavior of the local lesser Green function, \begin{equation} G^{<}(t,t-\tau)\equiv i \langle\psi^\dagger(x,t-\tau)\psi(x,t)\rangle_i, \end{equation} in the regime $t>\tau$. Since the particle number is conserved, we can write \begin{equation} G^<(t,t-\tau)=G_R^{<}(t,t-\tau)+G_L^{<}(t,t-\tau),\label{eq:Glesserdef} \end{equation} where $ G_r^{<}(t,t-\tau) $ denotes the $ r- $channel lesser Green function. 
Using the bosonization identity of Eq.~\eqref{eq:bosonization} and recalling Eqs.~\eqref{eq:2points} and~\eqref{eq:Daq}, we obtain \begin{equation} G^{<}_r(t,t-\tau)=G_{r,\infty}^{<}(\tau)\mathcal{U}(t,\tau). \label{eq:Glesseraq} \end{equation} Here, \begin{align} G_{r,\infty}^{<}(\tau)&=\frac{i}{2\pi a} e^{\pi \left[A^2_{\epsilon_r}D_{+,+}(0;\tau)+A^2_{-\epsilon_r}D_{-,-}(0;\tau)\right]}\nonumber\\ &=\left[\frac{a}{a + i u_f \tau}\right]^{\nu_+} \left[\frac{a}{a - i u_f \tau} \right]^{\nu_-} \end{align} represents the steady-state $ r- $channel lesser Green function while \begin{align} \mathcal{U}(t,\tau)&= e^{\pi A_+A_-[D_{+,-}(0;t,\tau)+D_{-,+}(0;t,\tau)]}\nonumber\\ &=\left\{\frac{[a^2+u_f^2(2t-\tau)^2]^2}{(a^2+4u_f^2t^2)[a^2+4u_f^2(t-\tau)^2]}\right\}^{\gamma} \label{eq:U} \end{align} features the explicit time dependence encoded in the cross-correlators $ D_{\alpha,-\alpha}(0;t,\tau) $. Note that $ \mathcal{U}(t,\tau) $ does not depend on the channel index $ r $ and $ \mathcal{U}(t,\tau)\rightarrow1 $ for $ t\rightarrow\infty $. Here, $ \nu_\pm=\theta^2_\mp(A^2_++A^2_-) $ and $ \gamma=-A_+A_-\theta_+\theta_- $. In particular, one has $\gamma>0$ for the quench protocols with $K_i>K_f$ we are considering. Importantly, the presence of the cross-correlators $D_{\alpha,-\alpha}(0;t,\tau)$ in the function $ \mathcal{U}(t,\tau) $ leads to a universal power-law decay of $ G_r^{<}(t,t-\tau) $ in the long-time limit. Indeed, since $(2t-\tau)^2=4t(t-\tau)+\tau^2$, for $u_f t\gg a$ Eq.~\eqref{eq:U} reduces to $\mathcal{U}(t,\tau)\simeq[1+\tau^2/(4t(t-\tau))]^{2\gamma}$; expanding in a Taylor series for $ \tau/t\ll 1 $ we obtain \begin{equation} G^{<}_r(t,t-\tau)= G_{r,\infty}^{<}(\tau)\left[1+\sum_{n=2}^{\infty}\frac{g_n(\tau)}{t^n}\right]. \label{eq:Glesserr} \end{equation} Since in the local case we address here $ G^{<}_r(t,t-\tau) $ does not explicitly depend on the index $ r $, one readily obtains the long-time limit expansion of the full lesser Green function \begin{align} \label{eq:Glesser} G^{<}(t,t-\tau)&=G^{<}_\infty(\tau)\left[1+\sum_{n=2}^{\infty}\frac{g_n(\tau)}{t^n}\right]\nonumber\\ &\approx G^{<}_\infty(\tau)\left(1+\frac{\gamma \tau^{2}}{2t^2}\right), \end{align} with $ G_\infty^{<}(\tau)=2G^{<}_{r,\infty}(\tau) $. Therefore, in the long-time limit, $ G^{<}(t,t-\tau) $ approaches its asymptotic value $ G^{<}_\infty(\tau) $ with a power-law decay $ \propto t^{-2} $, directly induced by the relaxation of the cross-correlators $ D_{\alpha,-\alpha}(\xi;t,\tau) $ found in Eq.~\eqref{eq:Dabexpansionlocal}~\cite{footnote:nonlocal}. The long-time behavior of Eq.~\eqref{eq:Glesser} is directly reflected in spectral properties, as one can see by inspecting the long-time limit of the local (lesser) NESF~\cite{Meden:1992,Kennes:2014,Nghiem:2017} \begin{equation} A^{<}(\omega,t)\equiv \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\omega \tau}(-i)G^{<}(t,t-\tau)\,d\tau. \label{eq:spectral} \end{equation} Indeed, as shown in Appendix~\ref{app:appendix}, we find \begin{equation} A^{<}(\omega,t)= \bar{A}_0\bigg[ \bar{A}_\infty^{<}(\omega) + \sum_{n=2}^{\infty}\frac{\mathcal{A}_n(\omega)}{t^n}+\frac{\mathcal{M}^A(\omega,t)}{t^\nu}\bigg],\label{eq:Alesser} \end{equation} with $ \bar{A}_0=(2\pi^2 v)^{-1} $ and all terms inside the square brackets dimensionless. Here, $ A_\infty^{<}(\omega)=\bar{A}_0\bar{A}_\infty^{<}(\omega) $ is the steady-state value of the NESF, already discussed in Refs.~\cite{Kennes:2014,Calzona:2017}. In this work, we focus on the time-decay of $ A^{<}(\omega,t) $ towards this asymptotic value. In particular, two distinct contributions emerge. 
The first one contains only integer power laws $ \propto t^{-n} $ (with $ n\geq2 $) and is entirely due to the decay of $ G^{<}(t,t-\tau) $ found in Eq.~\eqref{eq:Glesser}. Here, the coefficients present in the sum are given by $ \mathcal{A}_n(\omega)=2\pi^2 v\int_{-\infty}^{\infty}G^{<}_\infty(\tau) g_n(\tau)\, d\tau $, with $ g_n(\tau) $ defined in Eq.~\eqref{eq:Glesser}. We therefore obtain that the leading contribution of this term is a \emph{universal} power-law decay $ \propto t^{-2} $, regardless of, e.g., quench parameters. On the other hand, the second contribution contains the function $ \mathcal{M}^A(\omega,t) $ which, to the leading order in $ 1/t $, is an oscillating function with constant amplitude (see Appendix~\ref{app:appendix} for details). Thus, in the long-time limit, it decays with a LL-like \emph{non-universal} power law $ \propto t^{-\nu} $, with \begin{equation} \nu=\frac{K_f^4+K_i^2+3K_f^2(1+K_i^2)}{8K_f^2K_i}\geq1 \label{eq:nu} \end{equation} strongly dependent on quench parameters $ K_i $ and $ K_f $. It turns out that the universal power-law behavior, which directly derives from the decay of entanglement between bosonic excitations $ \phi_{f,+}(x) $ and $ \phi_{f,-}(x) $, is hardly visible in the transient of the NESF. Indeed, for any reasonable quench one finds $ 1\leq\nu<2 $. Thus, the long-time decay of $ A^{<}(\omega,t) $ is governed by the non-universal contribution $ \propto t^{-\nu} $, with the universal one being a sub-leading term. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig1.pdf} \caption{(Color online) Plot of $ |\text{Re}[\Delta A^{<}(\omega,t)]| $ [units $ \bar{A}_0=(2\pi^2 v)^{-1}$] as a function of time $ t $ [units $ (v q_F)^{-1} $] with $ \omega=-0.1\, vq_F $ for the quenches $ K_i=0.9\rightarrow K_f=0.7$ (blue, dashed) and $ K_i=0.8 \rightarrow K_f=0.4$ (red, solid). Here, solid black lines represent the power-law behavior $ \propto t^{-\nu} $ for the two different cases.} \label{fig:spectral} \end{figure} This is illustrated in Fig.~\ref{fig:spectral}, which shows the deviation of the lesser NESF from its steady-state value, $ \Delta A^{<}(\omega,t)=A^{<}(\omega,t)-A^{<}_\infty(\omega) $, at large times and for two different interaction quenches~\cite{footnote:cutoff}. Here, the oscillating behavior due to $ \mathcal{M}^A(\omega,t) $ decays with non-universal power law $ \propto t^{-\nu} $ (see solid black lines) while no evidence of the universal behavior $ \propto t^{-2} $ is present. Despite the sub-leading character of the universal contribution to the behavior of the NESF in Eq.~\eqref{eq:Alesser}, in the next Section we will demonstrate that it controls the long-time behavior of charge and energy currents in a transport setup. \section{Transient dynamics of transport properties}\label{sec:trasport} Assume now that immediately after the quench, the LL (hereafter dubbed \emph {the system}) is locally tunnel-coupled to a non-interacting 1D \emph{probe}, as sketched in Fig.~\ref{fig:setup}, described by the Hamiltonian \begin{equation} H_p=-iv\int_{-\infty}^{\infty}\chi^\dagger(x)\partial_x\chi(x)\,dx, \end{equation} with $\chi(x)$ its fermionic field. The probe is subject to a bias voltage $ V $ measured with respect to the Fermi level of the system. 
We assume a local tunneling at $x_0$ which breaks inversion parity, focusing, e.g., on the injection in the system $ R- $channel only~\cite{Calzona:2017,Chevallier:2010,Dolcetto:2012,Vannucci:2015}, \begin{equation} H_t(t)=\vartheta(t)\lambda \; \psi_R^\dagger(x_0)\chi(x_0)+\mathrm{h.c.}, \label{eq:Ht} \end{equation} where $ \lambda $ is the tunneling amplitude and $ \vartheta(t) $ is the Heaviside step function. The whole setup is assumed to be in thermal equilibrium before the quench, with $ \rho(0) $ the associated zero-temperature density matrix. \begin{figure} \centering \includegraphics[width=1\linewidth]{Fig2} \caption{(Color online) Scheme of the system, modeled as a pair of counter-propagating channels, and the probe, biased with a dc voltage $ V $. At $ x=x_0 $, the probe injects $ R- $moving particles only.} \label{fig:setup} \end{figure} We concentrate on chiral charge and energy currents, defined as \begin{align} I_\eta(V,t) &= e\partial_t \int_{-\infty}^{\infty} \langle\delta n_\eta(x,t)\rangle \,dx,\label{eq:chiralI}\\ P_\eta(V,t) &= \partial_t \int_{-\infty}^{\infty} \langle\delta \mathcal{H}_\eta(x,t)\rangle \,dx. \label{eq:chiralP} \end{align} Here, \begin{align} n_\eta(x,t) &= -\eta\sqrt{\frac{K_f}{2\pi}}\partial_x\phi_{f,\eta}(x-\eta u_f t),\label{eq:chiraldensity}\\ \mathcal{H}_\eta(x,t)&=\frac{u_f}{2}[\partial_x\phi_{f,\eta}(x-\eta u_f t)]^2\label{eq:chiralhamiltonian} \end{align} are the chiral particle and Hamiltonian densities, respectively, while \begin{equation} \langle\delta \mathcal{O}(x,t)\rangle = \text{Tr}\{ \mathcal{O}(x,t)[\rho(t)-\rho(0)] \} \end{equation} represents the average variation, induced by the tunneling, of a given operator $ \mathcal{O}(x,t) $. The time-dependent full density matrix $ \rho(t) $ is evaluated in the interaction picture
for the old leader, so it won't count the syncing server as accepting the message and being a part of the quorum. As a result, if the data from the old-termed sync source was committed, the syncing server has received it and will eventually receive the commit notification from the new leader. If that data is uncommitted by the old leader (i.e., a split-brain situation), then no harm is done since the syncing server does not contribute to the quorum. The syncing server will eventually learn of proper operations instead. Now speaking of performance, the paper does not provide any comparison between push- and pull-based solutions, so we are left wondering about the impact of such a drastic change. However, some experiments illustrate the comparison of star topology and chained replication in a 2-region WAN setup. While chaining does not seem to increase the performance (unless the cross-region bandwidth is severely restricted), it predictably lowers the WAN bandwidth requirements. As far as maximum performance goes, the paper mentions that the limiting factor is handling client requests and not replication, and this is why one fewer server pulling from the leader does not impact throughput. I am not sure I am very comfortable with this explanation, to be honest. The paper talks about a handful of other optimizations. I will mention just one that seemed the most interesting — speculative execution. With speculative execution, the nodes do not wait for a commitment notification to apply an operation, and speculatively apply it to the store right away. Since the underlying storage engine is multi-version, the system can still return strongly consistent reads by keeping track of the latest committed version. The multi-version store also allows rollbacks in case some operation fails to commit. You can see my presentation of the paper on YouTube: ## Discussion 1) Cost of pull-based replication. The performance cost of the pull-based solutions was the first big discussion topic that I raised in my presentation. As I mentioned, there is an extra communication step needed to allow the primary to learn of quorum acceptance. This step either adds an additional message round/RPC call or adds latency if piggybacked to the next pull request. Another concern is pulling data when there is no new data, as this wastes communication cycles. Luckily, we had MongoDB people, including Siyuan Zhou, one of the paper authors, to shed some light on this. To make things a little better and not waste the communication cycles, the pulls have a rather long "shelf life" — if the source has no data to ship, it will hold on to the pull request for up to 5 seconds before replying with a blank batch. Another big optimization in the MongoDB system is a gradual shift to the push-based style! This somewhat makes the entire premise of the paper obsolete; however, this new "push-style" replication is still initiated by the syncing server with a pull, but after the initial pull, the source can push the data to the syncing server as it becomes available. So this allows building these complicated replication topologies while reducing the latency impact of a purely pull-based solution. Another aspect of cost is the monetary cost, and this is where chained replication topologies can help a lot. Apparently, this was a big ask from clients initially and shaped a lot of the system architecture. 2) Evolution vs Revolution. 
So, since the original unproven replication approach was pull-based to allow chained topologies, the new improved and model-checked solution had to evolve from the original replication. One might think that it would have been easier to slap on a regular push-based Raft, but that would have been a drastic change for all other components (not to mention the features). This would have required a lot more engineering effort than trying to reuse as much of the existing code as possible. This brings an interesting point on how production software gradually evolves and almost never drastically revolves. 3) Evaluation. The evaluation is the biggest problem of the paper. It lacks any comparison with other replication approaches except MongoDB's old primary-backup scheme. This makes it hard to judge the impact of the changes. Of course, as we have seen from the discussion with the authors, the actual implementation is even more complicated and evolved. It tries to bridge the gap between pull and push replication styles, so a clear comparison based on MongoDB's implementation may not have been possible at all. That being said, I had some questions even about the provided self-comparisons. See, I would have expected to see a bit of throughput increase from chaining, similar to what I observe in PigPaxos, but there is none. The paper explains it by saying that replication at the sync source takes only 5% of a single core per syncing server, which would amount to just 20% of a core in the star-topology leader. Roughly, given the VMs used, this is around 5% of the total CPU used on replication, with the paper claiming that all remaining CPU is used to handle client requests. Assuming there is a sync every 2 ms, we have about 2000 syncs per second at the leader for 12000 client requests. Doing some napkin math, we can see that there are 6 times more requests than replications per second, yet requests use 19 times the CPU, making each request roughly 3 times more expensive than each replication. Given that replication messages are not super cheap and require querying the log and serializing the data, the performance cost difference sounds excessive to me, even considering that client requests have a few writes in them (i.e., writing to the log and executing the operation). Siyuan explained that there is a bit more work on the request path as well. Not only is the log written (and persisted for durability), but there are also some additional writes for session maintenance and indexes. Moreover, the writes in the underlying WiredTiger engine are transactional, costing some performance as well. 4) PigPaxos? Throwing this out here, but there are push-based solutions that can have more interesting replication topologies than a star. Our PigPaxos, in fact, solves a similar issue of cross-regional replication at a low network cost. 5) Other pull-based systems. Finally, we tried to look at other solutions that may use pull-based replication, but there are not many. Pub-Sub systems fit the pull model naturally, as subscribers consume the data by pulling it from the queue/log/topic. Pull-based replication can also be handy in disaster recovery, when nodes try to catch up to the current state and must ask for updates/changes starting from some cursor or point in history. 6) Reliability/Safety. As the protocol makes one important change, not rejecting the data from the old-termed source, it is important to look at the safety of this change. The paper claims to have modeled the protocol in TLA+ and model-checked it. 
Intuitively, however, we know that even though the node takes the data from the old source, it actively rejects the old leadership by replying with its higher term. This, intuitively, should be enough to ensure that the old leader does not reach a quorum of accepts (even though a majority of nodes may have copied the data) and does not confirm the commitment to a client. The rest is taken care of by Raft's commit procedure upon the term change. # Reading Group Our reading group takes place over Zoom every Wednesday at 2:00 pm EST. We have a Slack group where we post papers, hold discussions and, most importantly, manage Zoom invites to the papers. Please join the Slack group to get involved! # Scalable but Wasteful or Why Fast Replication Protocols are Actually Slow In the last decade or so, quite a few new state machine replication protocols have emerged in the literature and on the internet. I am
$X_{T-}=X^\Delta_T+\Lambda^\Delta_T-\Lambda_{T-}$ (since $X_{T-}+\Lambda_{T-}=X_{0-}+B_T=X^\Delta_T+\Lambda^\Delta_T$) and $X^\Delta\ge X$, so \[ \mathbb{P}\Big( 0 < X^\Delta_T+\Lambda^\Delta_T-\Lambda_{T-} \le x,\,\inf_{0\le s < T} X^{\Delta}_s > 0\Big) \ge \frac{x}{\alpha}, \quad x \in [0, \La_T - \La_{T-}). \] We use this assertion upon setting $\ell=\Lambda^\Delta_T-\Lambda_{T-}$ to obtain for all $y\in[-\ell,\La_T - \La_{T-}-\ell)$, \begin{eqnarray} && \mathbb{P}\Big(X^\Delta_T \le y,\,\inf_{0\le s <T} X^{\Delta}_s > 0 \Big) = \mathbb{P}\Big( X^\Delta_T+\ell \le y+\ell,\,\inf_{0\le s < T} X^{\Delta}_s > 0 \Big) \ge \frac{y+\ell}{\alpha},\quad\mathrm{and\;thus} \nonumber \\ &&\mathbb{P}\Big(X^\Delta_T \le y,\,\inf_{0\le s <T} X^{\Delta}_s > 0 \Big)\ge\frac{\min(y+\ell,\Lambda_T-\Lambda_{T-})-\min(y+\ell,0)}{\alpha}, \quad y\in\mathbb{R}. \label{eq::distrib_of_jump} \end{eqnarray} \smallskip Next, we turn to $X^\Delta_{T+\theta}$, let $\widetilde{x}:=x+\La^{\Delta}_{T+\theta}-\Lambda_{T-}$, and observe \begin{equation*} \begin{split} &\mathbb{P}\Big(X^{\Delta}_{T + \theta} \le x,\,\inf_{0\le s < T} X_s^{\Delta} > 0\Big) = \mathbb{P}\Big( X^{\Delta}_T + B_{T+\theta} - B_T - \La^{\Delta}_{T+\theta} + \La^{\Delta}_{T} \le x,\,\inf_{0\le s < T} X_s^{\Delta} > 0 \Big) \\ &\qquad\qquad\qquad\quad\;\;\; = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{z^2}{2}}\, \mathbb{P}\Big(X^{\Delta}_T - \La^{\Delta}_{T+\theta} + \La^{\Delta}_T \le x + \sqrt{\theta}z,\,\inf_{0\le s < T} X_s^{\Delta} > 0 \Big)\,\mathrm{d}z \\ &\qquad\qquad\qquad\quad\;\;\; \ge \frac{1}{\sqrt{2\pi}\alpha} \int_{-\infty}^{\infty} e^{-\frac{z^2}{2}}\,\big(\min(\widetilde{x}+\sqrt{\theta}z,\Lambda_T-\Lambda_{T-})-\min(\widetilde{x}+\sqrt{\theta}z,0)\big)\,\mathrm{d}z \\ &\qquad\qquad\qquad\quad\;\;\; \ge \frac{1}{\sqrt{2\pi}\alpha} \int_{-\widetilde{x}/\sqrt{\theta}}^{(\Lambda_T-\Lambda_{T-}-\widetilde{x})/\sqrt{\theta}} e^{-\frac{z^2}{2}}\, (\widetilde{x}+\sqrt{\theta}z)\,\mathrm{d}z, \end{split} \end{equation*} where we have used \eqref{eq::distrib_of_jump} to get the first inequality. With Definition \ref{defn::discrete} and $(T+\theta)/\Delta\in\mathbb{N}$, \begin{equation*} \mathbb{P}\Big(X^{\Delta}_{T+\theta}\le x,\inf_{0\le s<T+\theta} X_s^{\Delta} > 0\Big) \ge\frac{1}{\sqrt{2\pi}\alpha} \int_{-\widetilde{x}/\sqrt{\theta}}^{(\Lambda_T-\Lambda_{T-}-\widetilde{x})/\sqrt{\theta}} \!\! e^{-\frac{z^2}{2}}(\widetilde{x}+\sqrt{\theta}z)\,\mathrm{d}z-\frac{\Lambda^\Delta_{T+\theta}\! -\!\La^{\Delta}_{T-}}{\alpha}. \end{equation*} Thus, given a $\kappa>0$, whenever $\theta>0$ is sufficiently small we have \begin{equation}\label{numer_blowup} \begin{split} \mathbb{P}\Big(X^\Delta_{T+\theta} \le x,\,\inf_{0\le s< T+\theta} X_s^{\Delta} > 0\Big) \ge \frac{x-\kappa/2}{\alpha}-\frac{\La_{T-} - \La^{\Delta}_{T-}}{\alpha},\\ \La_{T-} - \La^{\Delta}_{T+\theta}+\kappa/2\le x\le\La_T - \La^{\Delta}_{T+\theta}-\kappa, \end{split} \end{equation} with the bounds on $x$ ensuring that the latter integration interval contains $[-\kappa/(2\sqrt{\theta}),\kappa/\sqrt{\theta}]$. \medskip We now claim that $\Lambda_{T-}-\Lambda^\Delta_{T-}\le\kappa/2$ for all $\Delta>0$ small enough. Indeed, this is obvious if $T=0$. For $T>0$, we let $t\in(0,T)$ be a continuity point of $\Lambda$ with the property $\Lambda_{T-}-\Lambda_t\le\kappa/4$. 
For all $\Delta>0$ small enough, $\Lambda_t-\Lambda^\Delta_t\le\kappa/4$, hence \begin{equation*} \Lambda_{T-}-\Lambda^\Delta_{T-}=(\Lambda_{T-}-\Lambda_t)+(\Lambda_t-\Lambda^\Delta_t)+(\Lambda^\Delta_t-\Lambda^\Delta_{T-})\le\frac{\kappa}{4}+\frac{\kappa}{4}+0=\frac{\kappa}{2}. \end{equation*} In particular, $\La_{T-} - \La^{\Delta}_{T+\theta}\le\kappa/2$, and we deduce \eqref{eq::distrib_of_jump2} for $\kappa\le x\le\La_T - \La^{\Delta}_{T+\theta}-\kappa$ from \eqref{numer_blowup}. For $\La_T - \La^{\Delta}_{T+\theta}-\kappa<x\le\La_T - \La^{\Delta}_{T+\theta}$, we use $\mathbb{P}(X^{\Delta}_{T + \theta} \le x,\,\ldots\,) \ge \mathbb{P}(X^{\Delta}_{T + \theta} \le x - \kappa,\,\ldots\,)$, apply \eqref{eq::distrib_of_jump2} with $x$ replaced by $x-\kappa$, and relabel $2\kappa$ as $\kappa$. \hfill \ensuremath{\Box} \medskip Next, we build on Lemma \ref{lem:jump-dens} to verify that the time-stepping scheme ``catches up'' with the physical solution. \begin{lem}\label{lem::special_case} In the situation of Proposition \ref{prop_jump}, for any $\eta,\theta>0$, there exists some $\overline{\Delta}>0$ such that $\La_T - \La^\Delta_{T+\theta}\le\eta$, $\Delta\in(0,\overline{\Delta}]$. \end{lem} \noindent\textbf{Proof.} We establish the lemma by proving the following statement: If, for some $\eta> 0$, it holds that for all $\theta>0$ we can find some $\overline{\Delta}>0$ such that $\La_T - \La^{\Delta}_{T+\theta} \le \eta$, $\Delta\in(0,\overline{\Delta}]$, then the same holds for $\frac23\eta$ in place of $\eta$. To this end, we fix such an $\eta>0$ and any $\theta>0$. We seek a value of $\overline{\Delta}>0$ for which $\La_T - \La^\Delta_{T+\theta} \le \frac23 \eta$, $\Delta\in(0,\overline{\Delta}]$. For any $\theta_0>0$, the hypothesis of the statement yields a $\overline{\Delta}_0>0$ such that $\La_T - \La^{\Delta_0}_{T+\theta_0} \le \eta$, $\Delta_0\in(0,\overline{\Delta}_0]$. By Lemma \ref{lem:jump-dens}, we may select $\theta_0$ and $\Delta_0$ to which \eqref{eq::distrib_of_jump2} also applies, with a $\kappa=\kappa(\eta,\theta)>0$ to be determined below. Our aim is to show that if $\theta_0$ is picked from a suitable interval $(0,\overline{\theta}_0]$, then $\widetilde{\Delta}>0$ can be chosen such that for all $\Delta=\Delta_0/1,\,\Delta_0/2,\,\ldots\in(0,\widetilde{\Delta}]$, \begin{eqnarray}\label{taueqn} \La_T - \La^{\CTT}_{T+\theta} \le \frac23 \eta,\;\text{with}\; \CTT := (0,\Delta_0,2\Delta_0,\ldots, T+\theta_0,T+\theta_0+\Delta,T+\theta_0+2\Delta,\dots). \end{eqnarray} The claim then follows, since $\Lambda^\Delta_{T+\theta}\ge\La^{\CTT}_{T+\theta}$ by Remark \ref{rmk::discrete_general_prelim} (thus, $\Lambda_T-\Lambda^\Delta_{T+\theta}\le \frac23 \eta$) and the range $\bigcup_{\theta_0\in(0,\overline{\theta}_0]} \{(T+\theta_0)/1,(T+\theta_0)/2,\ldots\}\cap(0,\overline{\Delta}_0]$ of suitable $\Delta_0$ contains the interval $(0,\min(\overline{\theta}_0,\overline{\Delta}_0)]$ allowing us to take $\overline{\Delta}=\min(\widetilde{\Delta},\overline{\theta}_0,\overline{\Delta}_0)>0$. \medskip Fix any $\Delta=\Delta_0/1,\,\Delta_0/2,\,\ldots$ and let $\ell_n:=\La^{\CTT}_{T+\theta_0+n\Delta}-\La^{\Delta_0}_{T+\theta_0}$, $n=0,\,1,\,\ldots\,$. 
Then, \[ \begin{split} \ell_{n+1} &= \alpha\mathbb{P}\Big(\inf_{0\le s<T+\theta_0} X_s^{\Delta_0} > 0,\,\min_{0\le m \le n} X^{\CTT}_{T +\theta_0+m\Delta} < 0\Big) \\ & \ge \alpha\mathbb{P}\Big(\inf_{0\le s<T+\theta_0} X_s^{\Delta_0} > 0,\,X^{\CTT}_{T+\theta_0+n\Delta} < 0\Big) \\ & = \frac{\alpha}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{z^2}{2}}\, \mathbb{P}\Big(\inf_{0\le s<T+\theta_0} X_s^{\Delta_0} > 0,\,X^{\Delta_0}_{T+\theta_0} < \sqrt{n\Delta} z + \ell_n \Big)\,\mathrm{d}z. \end{split} \] Next, we set $\widetilde{\ell}=\Lambda_T-\Lambda^{\Delta_0}_{T+\theta_0}$ and apply \eqref{eq::distrib_of_jump2} in the form \[ \mathbb{P}\Big(X^{\Delta_0}_{T+\theta_0} < x,\,\inf_{0\le s<T+\theta_0} X_s^{\Delta_0} > 0\Big) \ge\frac{\min(x-\kappa,\widetilde{\ell}-\kappa)-\min(x-\kappa,0)}{\alpha},\quad x\in\mathbb{R} \] to obtain the explicit recursive inequality \[ \ell_{n+1}\ge\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{z^2}{2}}\, \Big(\!\min\big(\sqrt{n\Delta}z + \ell_n - \kappa,\widetilde{\ell}-\kappa\big) - \min\big(\sqrt{n\Delta}z + \ell_n - \kappa,0\big)\!\Big)\,\mathrm{d}z. \] We rewrite the latter using the standard Gaussian cumulative distribution function $\Phi$: \[ \ell_{n+1}\ge\frac{1}{\sqrt{2\pi}} \int_{(\kappa-\ell_n)/\sqrt{n\Delta}}^{(\widetilde{\ell} - \ell_n)/\sqrt{n\Delta}} e^{-\frac{z^2}{2}}\,\big(\sqrt{n\Delta}z+\ell_n-\kappa\big)\,\mathrm{d}z + (\widetilde{\ell}-\kappa)\bigg(1-\Phi \bigg(\frac{\widetilde{\ell}-\ell_n}{\sqrt{n\Delta}}\bigg)\!\bigg). \] Integrating by parts and introducing \[ f(x):=\int_{-\infty}^x \Phi(z)\,\mathrm{d}z=x\Phi(x) + \frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}},\quad x\in\mathbb{R} \] we further deduce \[ \ell_{n+1} \ge \widetilde{\ell} - \kappa - \sqrt{n\Delta}\,\bigg(f\bigg(\frac{\widetilde{\ell}-\ell_n}{\sqrt{n\Delta}} \bigg) - f\bigg(\frac{\kappa-\ell_n}{\sqrt{n\Delta}} \bigg)\!\bigg). \] In view of $\Phi(x)+\Phi(-x)=1$, $x\in\mathbb{R}$, we have $f(x)=f(-x)+x$, $x\in\mathbb{R}$, so \[ \ell_{n+1} \ge \ell_n - \kappa + \sqrt{n\Delta}\,\bigg(f\bigg(\frac{\kappa-\ell_n}{\sqrt{n\Delta}}\bigg) -f\bigg(\!-\frac{\widetilde{\ell}-\ell_n}{\sqrt{n\Delta}}\bigg)\!\bigg). \] Since $f'=\Phi$ is increasing, we arrive at \begin{equation}\label{eq::jump_recursion} \ell_{n+1} \ge \ell_n - \kappa + \Phi\bigg(\!-\frac{\widetilde{\ell}-\ell_n}{\sqrt{n\Delta}} \bigg) \cdot(\kappa-2\ell_n+\widetilde{\ell}), \end{equation} provided that $\kappa-2\ell_n+\widetilde{\ell}\ge 0$. \medskip To prove \eqref{taueqn}, and thus the lemma, we assume $\widetilde{\ell}>2\eta/3$ (otherwise \eqref{taueqn} holds with $\theta_0=\theta$ and $\widetilde{\Delta}=\overline{\Delta}_0$) and show that if $\theta_0\in(0,\theta)$ is taken small enough and the integers $(T+\theta_0)/\Delta_0$ and $\Delta_0/\Delta$ are made large enough, then one can find an $n\ge0$ with $\theta_0+n\Delta\le\theta$ and $\widetilde{\ell}/3\le \ell_n$. The inequality $\La_T - \La^{\CTT}_{T+\theta} \le 2\eta/3$ in \eqref{taueqn} then follows from \[ \La_T-\La^{\CTT}_{T+\theta_0+n\Delta} =(\La_T-\La^{\Delta_0}_{T+\theta_0})+(\La^{\Delta_0}_{T+\theta_0}-\La^{\CTT}_{T+\theta_0+n\Delta}) \le \frac{2(\La_T-\La^{\Delta_0}_{T+\theta_0})}{3}\le\frac{2\eta}{3}. \] Arguing by contradiction, suppose that $\ell_n<\widetilde{\ell}/3$ for all $n=0,\,1,\,\ldots,\,\lfloor(\theta - \theta_0)/\Delta\rfloor=:n_0$. 
Summing \eqref{eq::jump_recursion} over $0\le n\le n_0-1$ upon noting that $\kappa-2\ell_n+\widetilde{\ell}>\kappa+\widetilde{\ell}/3>\kappa+\ell_0=\kappa>0$ for such $n$ we get \begin{eqnarray*} \ell_{n_0} \ge -\kappa n_0 + \sum_{n = 0}^{n_0-1} \Phi\bigg(\!- \frac{\widetilde{\ell}-\ell_n}{\sqrt{n\Delta}}\bigg) \cdot(\kappa-2\ell_n+\widetilde{\ell}). \end{eqnarray*} Using $\kappa-2\ell_n+\widetilde{\ell}>\widetilde{\ell}/3$ and $\widetilde{\ell}-\ell_n\le\widetilde{\ell}$ we deduce \[ \begin{split} \ell_{n_0} \ge -\kappa n_0 +\frac{\widetilde{\ell}}{3}\,\sum_{n = 0}^{n_0-1} \Phi\bigg(\!-\frac{\widetilde{\ell}}{\sqrt{n\Delta}} \bigg) &\ge n_0\bigg(\!-\kappa+\frac{\widetilde{\ell}}{6}\, \Phi\bigg(\!-\frac{\sqrt2\widetilde{\ell}}{\sqrt{\theta-\theta_0-2\Delta}} \bigg)\!\bigg) \\ &\ge n_0\bigg(\!-\kappa+\frac{\eta}{9}\, \Phi\bigg(\!-\frac{\sqrt2\eta}{\sqrt{\theta-\theta_0-2\Delta}} \bigg)\!\bigg), \end{split} \] where we have dropped the summands with $n<n_0-\lceil n_0/2\rceil$ and have employed a common lower bound for the remaining summands. We choose $\kappa=\kappa(\eta,\theta)>0$ such that the latter bracket is positive, observing that the smaller $\theta_0+2\Delta>0$ is, the larger this bracket becomes. Lastly, we pick $\Delta>0$ dividing $\Delta_0$ so that $n_0$ is large and the final lower bound on $\ell_{n_0}$ is at least $\Lambda_T/3\ge\widetilde{\ell}/3$, giving us the desired contradiction. The proof is complete. \hfill \ensuremath{\Box} \medskip We are now ready to show Proposition \ref{prop_jump}. \medskip \noindent\textbf{Proof of Proposition \ref{prop_jump}.} We start by using Propositions \ref{thm1} and \ref{prop:density_est}(a) to find an $\epsilon>0$ such that \eqref{loc_rate_conv} applies to all $T\le t\le T+\epsilon$ with this $\epsilon$. In particular, upon fixing $s\in(T,T+\epsilon]$ and $\eta>0$ we can select some $\overline{\Delta}_0>0$ so that \begin{equation}\label{eta/2bnd} \Lambda_s-\Lambda^{t;\Delta_0}_s\le\frac{\eta}{2},\quad t\in[T,s],\quad\Delta_0\in(0,\overline{\Delta}_0]. \end{equation} Next, we recall the quantity $C_\alpha(\overline{\Delta}_0/2,s-T)\ge0$ from Corollary \ref{cor::discrete_compare} and choose $\theta_0\in(0,s-T]$ with the property \begin{equation}\label{rightcont} \Lambda_{T+\theta_0}-\Lambda_T\le\frac{\eta}{4C_\alpha(\overline{\Delta}_0/2,s-T)}. \end{equation} Finally, we rely on Lemma \ref{lem::special_case} to pick $\widetilde{\Delta}>0$ so that \begin{equation}\label{catchup} \Lambda_T-\Lambda^\Delta_{T+\theta_0/2}\le\frac{\eta}{4C_\alpha(\overline{\Delta}_0/2,s-T)},\quad\Delta\in(0,\widetilde{\Delta}]. 
\end{equation} Then, for $\Delta\in(0,\min(\widetilde{\Delta},\overline{\Delta}_0/2,\theta_0/2)]$ and $\Delta_0\in[\overline{\Delta}_0/2,\overline{\Delta}_0]$ such that $\Delta_0/\Delta$ is an integer, we get with $t=\lfloor (T+\theta_0)/\Delta\rfloor\Delta\in[T+\theta_0/2,T+\theta_0]$ and $\CTT=(0,\Delta,2\Delta,\ldots,t,t+\Delta_0,t+2\Delta_0,\ldots)$, \begin{equation} \begin{split} \Lambda_s-\Lambda^\Delta_s\le \Lambda_s-\Lambda^{\CTT}_s &=(\Lambda_s-\Lambda^{t;\Delta_0}_s)+(\Lambda^{t;\Delta_0}_s-\Lambda^{\CTT}_s) \\ &\le\frac{\eta}{2}+C_\alpha(\overline{\Delta}_0/2,s-T)(\Lambda^{t;\Delta_0}_t-\Lambda^{\CTT}_t) \\ &=\frac{\eta}{2}+C_\alpha(\overline{\Delta}_0/2,s-T)\big((\Lambda_t-\Lambda_T) +(\Lambda_T-\Lambda^\Delta_t)\big)\le\eta, \end{split} \end{equation} where we have used Remark \ref{rmk::discrete_general_prelim}, \eqref{eta/2bnd}, Corollary \ref{cor::discrete_compare}, Definition \ref{defn::discrete_general}, \eqref{rightcont}, and \eqref{catchup} in this order. Since $s\in(T,T+\epsilon]$ and $\eta>0$ were arbitrary, we conclude that $\lim_{\Delta\downarrow0} \Lambda^\Delta_s=\Lambda_s$, $s\in(T,T+\epsilon]$, thus obtaining the proposition. \hfill \ensuremath{\Box} \subsection{Proof of Theorem \ref{thm::main}} \label{sec::post_jump} Theorem \ref{thm::main} follows directly from Propositions \ref{pro::no_jump}~and~\ref{prop_jump}. Indeed, we can argue by contradiction and assume the existence of continuity points $s\in(0,\infty)$ of $\Lambda$ for which the convergence $\Lambda^\Delta|_{[0,s]}\underset{\Delta\downarrow0}{\overset{\text{M1}}{\longrightarrow}}\Lambda|_{[0,s]}$ does not hold. Let $T\ge0$ be the infimum of such continuity points. If $T$ is a continuity point of $\Lambda$, we rely on Proposition~\ref{pro::no_jump} to infer $\Lambda^\Delta|_{[0,T+\epsilon]}\underset{\Delta\downarrow0}{\overset{\text{M1}}{\longrightarrow}}\Lambda|_{[0,T+\epsilon]}$ for all $\epsilon\ge0$ small enough. If $T$ is a discontinuity point of~$\Lambda$, we apply Proposition \ref{prop_jump} to get $\Lambda^\Delta|_{[0,T+\epsilon]}\underset{\Delta\downarrow0}{\overset{\text{M1}}{\longrightarrow}}\Lambda|_{[0,T+\epsilon]}$ for all $\epsilon>0$ small enough. This contradiction to the definition of $T$ yields the theorem. \hfill \ensuremath{\Box} \begin{rmk} It is worth noting that the proof of Theorem \ref{thm::main} remains intact if instead of Assumption \ref{ass}(a), the boundedness of $f$ on $[0,\infty)$ together with the conclusions of Lemma~\ref{lemma:a priori} and Corollary \ref{cor:mod_of_cont} are assumed and $\psi:=\Psi':\,(0,\delta]\to\big(0,\frac{1}{\alpha}\big]$ is a strictly increasing function. Indeed, the proof of Theorem \ref{thm::main} can be then repeated word by word, and \eqref{loc_rate_conv}, used in the proofs of Propositions \ref{pro::no_jump} and \ref{prop_jump}, can be obtained via Remark \ref{alt_ass}. \end{rmk} \section{Numerical simulations} \label{sec::numerical} In this last section, we examine the convergence of the time-stepping scheme (recall Definition \ref{defn::discrete}) numerically, for various initial densities $f$ and resulting functions $\Lambda$ with and without discontinuities. We write $L^\Delta$ for $\Lambda^\Delta/\alpha$, so that $L^\Delta_{n\Delta} = \mathbb{P}\big(\!\min_{0\le m\le n-1} X_{m\Delta}^\Delta<0\big)$, $n=1,\,2,\,\ldots\,$, and simulate the latter probabilities by a Monte Carlo particle method with $N=10^7$ particles, following \cite[Algorithm 1]{KR}. 
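To make the particle method concrete, the following Python sketch is our own illustrative reading of the scheme in Definition \ref{defn::discrete}; the sampling routine, the treatment of absorbed particles, and the Gamma shape/scale convention are assumptions on our part, and the precise method used below is \cite[Algorithm 1]{KR}.
\begin{verbatim}
import numpy as np

def particle_scheme(sample_f, alpha, T, dt, N=10**5, seed=0):
    # X^Delta_t = X_{0-} + B_t - Lambda^Delta_t, with Lambda^Delta updated
    # at the gridpoints as alpha * P(min over past gridpoints < 0).
    rng = np.random.default_rng(seed)
    X = sample_f(N, rng)               # draws from the initial density f
    lost = np.zeros(N, dtype=bool)     # running gridpoint minimum below zero
    Lam = [0.0]
    for _ in range(int(round(T / dt))):
        lost |= X < 0                  # absorption checked at gridpoints only
        new_Lam = alpha * lost.mean()  # so L^Delta = Lambda^Delta / alpha
        X[~lost] += np.sqrt(dt) * rng.standard_normal((~lost).sum()) \
                    - (new_Lam - Lam[-1])
        Lam.append(new_Lam)
    return np.array(Lam)

# e.g. the Gamma(3/2,1/2) initial condition of Section 7.1 (shape/scale
# convention assumed):
# Lam = particle_scheme(lambda n, r: r.gamma(1.5, 0.5, n),
#                       alpha=1.0, T=0.8, dt=1e-3)
\end{verbatim}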
The full, implementable specification of this Monte Carlo scheme is given for completeness in the appendix, where we also derive the convergence as $N\rightarrow \infty$. This extends the convergence result from \cite{KR}, which is restricted to the regular case studied there, and additionally shows the order $O(1/\sqrt{N})$ for fixed $\Delta$. \subsection{Initial density vanishing at zero and no discontinuity}\label{sim1/2} We first consider an example with $\lim_{x\downarrow0} f(x)=0$. To this end, we let $X_{0-}$ be $\Gamma(3/2,1/2)$-distributed and note that $f(x)\le Cx^{1/2}$, $x\ge0$, for a suitable constant $C<\infty$. Further, we fix the time interval $[0,T]=[0,0.8]$ and
and newStatus != 'stop' and DisplayTechnology != 'i2c1306':
    global ScrollArtistTag
    global ScrollArtistNext
    global ScrollArtistFirstRound
    global ScrollArtistNextRound
    self.image.paste(('black'), [0, 0, image.size[0], image.size[1]])
    if DisplayTechnology == 'Braun':
        logoImage = Image.open('/home/volumio/NR1-UI/img/vu0.png').convert('RGB')
        self.image.paste(logoImage, (34, 0))
    else:
        logoImage = Image.open('/home/volumio/NR1-UI/img/vu.png').convert('RGB')
        self.image.paste(logoImage, (0, 0))
    cava2_fifo = open("/tmp/cava2_fifo", 'r')
    data2 = cava2_fifo.readline().strip().split(';')
    TextBaustein = oled.activeArtist + ' - ' + oled.activeSong
    self.ArtistWidth, self.ArtistHeight = self.draw.textsize(TextBaustein, font=font6)
    self.ArtistStopPosition = self.ArtistWidth - self.width + ArtistEndScrollMargin
    if self.ArtistWidth >= self.width:  # marquee scrolling for the artist/title line
        if ScrollArtistFirstRound == True:
            ScrollArtistFirstRound = False
            ScrollArtistTag = 0
            self.ArtistPosition = (Screen6text01)
        elif ScrollArtistFirstRound == False and ScrollArtistNextRound == False:
            if ScrollArtistTag <= self.ArtistWidth - 1:
                ScrollArtistTag += ArtistScrollSpeed
                self.ArtistPosition = (-ScrollArtistTag, Screen6text01[1])
                ScrollArtistNext = 0
            elif ScrollArtistTag == self.ArtistWidth:
                ScrollArtistTag = 0
                ScrollArtistNextRound = True
                ScrollArtistNext = self.width + ArtistEndScrollMargin
        if ScrollArtistNextRound == True:
            if ScrollArtistNext >= 0:
                self.ArtistPosition = (ScrollArtistNext, Screen6text01[1])
                ScrollArtistNext -= ArtistScrollSpeed
            elif ScrollArtistNext == -ArtistScrollSpeed and ScrollArtistNextRound == True:
                ScrollArtistNext = 0
                ScrollArtistNextRound = False
                ScrollArtistFirstRound = False
                ScrollArtistTag = 0
                self.ArtistPosition = (Screen6text01)
    if self.ArtistWidth <= self.width:  # center text
        self.ArtistPosition = (int((self.width - self.ArtistWidth) / 2), Screen6text01[1])
    self.draw.text((self.ArtistPosition), TextBaustein, font=font6, fill='white')
    self.draw.text((Screen6text28), oled.playstateIcon, font=labelfont, fill='white')
    if len(data2) >= 3:  # VU needle lines from the cava FIFO samples
        leftVU = data2[0]
        if leftVU != '':
            leftVU1 = int(leftVU)
            self.draw.line(Screen6leftVUcoordinates[leftVU1], fill='white', width=2)
        rightVU = data2[1]
        if rightVU != '':
            rightVU1 = int(rightVU)
            self.draw.line(Screen6rightVUcoordinates[rightVU1], fill='white', width=2)
    image.paste(self.image, (0, 0))

if NowPlayingLayout == 'VU-Meter-2' and newStatus != 'stop' and DisplayTechnology != 'i2c1306':
    if newStatus != 'stop' and oled.duration != None:
        self.image.paste(('black'), [0, 0, image.size[0], image.size[1]])
        logoImage = Image.open('/home/volumio/NR1-UI/img/vu2.png').convert('RGB')
        self.image.paste(logoImage, (0, 0))
        cava2_fifo = open("/tmp/cava2_fifo", 'r')
        data2 = cava2_fifo.readline().strip().split(';')
        TextBaustein = oled.activeArtist + ' - ' + oled.activeSong
        self.ArtistWidth, self.ArtistHeight = self.draw.textsize(TextBaustein, font=font6)
        self.ArtistStopPosition = self.ArtistWidth - self.width + ArtistEndScrollMargin
        if self.ArtistWidth >= self.width:
            if ScrollArtistFirstRound == True:
                ScrollArtistFirstRound = False
                ScrollArtistTag = 0
                self.ArtistPosition = (Screen7text01)
            elif ScrollArtistFirstRound == False and ScrollArtistNextRound == False:
                if ScrollArtistTag <= self.ArtistWidth - 1:
                    ScrollArtistTag += ArtistScrollSpeed
                    self.ArtistPosition = (-ScrollArtistTag, Screen7text01[1])
                    ScrollArtistNext = 0
                elif ScrollArtistTag == self.ArtistWidth:
                    ScrollArtistTag = 0
                    ScrollArtistNextRound = True
                    ScrollArtistNext = self.width + ArtistEndScrollMargin
            if ScrollArtistNextRound == True:
                if ScrollArtistNext >= 0:
                    self.ArtistPosition = (ScrollArtistNext, Screen7text01[1])
                    ScrollArtistNext -= ArtistScrollSpeed
                elif ScrollArtistNext == -ArtistScrollSpeed and ScrollArtistNextRound == True:
                    ScrollArtistNext = 0
                    ScrollArtistNextRound = False
                    ScrollArtistFirstRound = False
                    ScrollArtistTag = 0
                    self.ArtistPosition = (Screen7text01)
        if self.ArtistWidth <= self.width:  # center text
            self.ArtistPosition = (int((self.width - self.ArtistWidth) / 2), Screen7text01[1])
        self.draw.text((self.ArtistPosition), TextBaustein, font=font6, fill='white')
        self.playbackPoint = oled.seek / oled.duration / 10
        self.bar = Screen7barwidth * self.playbackPoint / 100
        self.draw.text((Screen7text28), oled.playstateIcon, font=labelfont, fill='white')
        self.draw.text((Screen7text06), oled.activeFormat, font=font8, fill='white')
        self.draw.text((Screen7text07), oled.activeSamplerate, font=font8, fill='white')
        self.draw.text((Screen7text08), oled.activeBitdepth, font=font8, fill='white')
        self.draw.text((Screen7ActualPlaytimeText), str(timedelta(seconds=round(float(oled.seek) / 1000))), font=font8, fill='white')
        self.draw.text((Screen7DurationText), str(timedelta(seconds=oled.duration)), font=font8, fill='white')
        self.draw.rectangle((Screen7barLineX, Screen7barLineThick1, Screen7barLineX + Screen7barwidth, Screen7barLineThick2), outline=Screen7barLineBorder, fill=Screen7barLineFill)
        self.draw.rectangle((self.bar + Screen7barLineX - Screen7barNibbleWidth, Screen7barThick1, Screen7barX + self.bar + Screen7barNibbleWidth, Screen7barThick2), outline=Screen7barBorder, fill=Screen7barFill)
        if len(data2) >= 3:
            leftVU = data2[0]
            if leftVU != '':
                leftVU1 = int(leftVU)
                self.draw.line(Screen7leftVUcoordinates[leftVU1], fill='white', width=2)
            rightVU = data2[1]
            if rightVU != '':
                rightVU1 = int(rightVU)
                self.draw.line(Screen7rightVUcoordinates[rightVU1], fill='white', width=2)
        image.paste(self.image, (0, 0))
    if newStatus != 'stop' and oled.duration == None:  # no duration info: same screen without the progress bar
        self.image.paste(('black'), [0, 0, image.size[0], image.size[1]])
        logoImage = Image.open('/home/volumio/NR1-UI/img/vu2.png').convert('RGB')
        self.image.paste(logoImage, (0, 0))
        cava2_fifo = open("/tmp/cava2_fifo", 'r')
        data2 = cava2_fifo.readline().strip().split(';')
        TextBaustein = oled.activeArtist + ' - ' + oled.activeSong
        self.ArtistWidth, self.ArtistHeight = self.draw.textsize(TextBaustein, font=font6)
        self.ArtistStopPosition = self.ArtistWidth - self.width + ArtistEndScrollMargin
        if self.ArtistWidth >= self.width:
            if ScrollArtistFirstRound == True:
                ScrollArtistFirstRound = False
                ScrollArtistTag = 0
                self.ArtistPosition = (Screen7text01)
            elif ScrollArtistFirstRound == False and ScrollArtistNextRound == False:
                if ScrollArtistTag <= self.ArtistWidth - 1:
                    ScrollArtistTag += ArtistScrollSpeed
                    self.ArtistPosition = (-ScrollArtistTag, Screen7text01[1])
                    ScrollArtistNext = 0
                elif ScrollArtistTag == self.ArtistWidth:
                    ScrollArtistTag = 0
                    ScrollArtistNextRound = True
                    ScrollArtistNext = self.width + ArtistEndScrollMargin
            if ScrollArtistNextRound == True:
                if ScrollArtistNext >= 0:
                    self.ArtistPosition = (ScrollArtistNext, Screen7text01[1])
                    ScrollArtistNext -= ArtistScrollSpeed
                elif ScrollArtistNext == -ArtistScrollSpeed and ScrollArtistNextRound == True:
                    ScrollArtistNext = 0
                    ScrollArtistNextRound = False
                    ScrollArtistFirstRound = False
                    ScrollArtistTag = 0
                    self.ArtistPosition = (Screen7text01)
        if self.ArtistWidth <= self.width:  # center text
            self.ArtistPosition = (int((self.width - self.ArtistWidth) / 2), Screen7text01[1])
        self.draw.text((self.ArtistPosition), TextBaustein, font=font6, fill='white')
        if len(data2) >= 3:
            leftVU = data2[0]
            if leftVU != '':
                leftVU1 = int(leftVU)
                self.draw.line(Screen7leftVUcoordinates[leftVU1], fill='white', width=2)
            rightVU = data2[1]
            if rightVU != '':
                rightVU1 = int(rightVU)
                self.draw.line(Screen7rightVUcoordinates[rightVU1], fill='white', width=2)
        image.paste(self.image, (0, 0))

if NowPlayingLayout == 'VU-Meter-Bar' and newStatus != 'stop' and DisplayTechnology != 'i2c1306':
    global ScrollArtistTag
    global ScrollArtistNext
    global ScrollArtistFirstRound
    global ScrollArtistNextRound
    global ScrollSongTag
    global ScrollSongNext
    global ScrollSongFirstRound
    global ScrollSongNextRound
    global spectrumPeaksL
    global spectrumPeaksR
    if newStatus != 'stop' and oled.duration != None:
        self.image.paste(('black'), [0, 0, image.size[0], image.size[1]])
        logoImage = Image.open('/home/volumio/NR1-UI/img/vudig.png').convert('RGB')
        self.image.paste(logoImage, (0, 0))
        spec_gradient = np.linspace(Screen8specGradstart, Screen8specGradstop, Screen8specGradSamples)
        cava2_fifo = open("/tmp/cava2_fifo", 'r')
        data2 = cava2_fifo.readline().strip().split(';')
        # print(data2)
        self.playbackPoint = oled.seek / oled.duration / 10
        self.bar = Screen8barwidth * self.playbackPoint / 100
        self.ArtistWidth, self.ArtistHeight = self.draw.textsize(oled.activeArtist, font=font9)
        self.ArtistStopPosition = self.ArtistWidth - self.width + ArtistEndScrollMargin
        if self.ArtistWidth >= self.width - 60:  # leave 60 px for the icon area on the left
            if ScrollArtistFirstRound == True:
                ScrollArtistFirstRound = False
                ScrollArtistTag = 60
                self.ArtistPosition = (Screen8text01[0] + 60, Screen8text01[1])
            elif ScrollArtistFirstRound == False and ScrollArtistNextRound == False:
                if ScrollArtistTag <= self.ArtistWidth - 60:
                    ScrollArtistTag += ArtistScrollSpeed
                    self.ArtistPosition = (-ScrollArtistTag, Screen8text01[1])
                    ScrollArtistNext = 60
                elif ScrollArtistTag == self.ArtistWidth - 59:
                    ScrollArtistTag = 60
                    ScrollArtistNextRound = True
                    ScrollArtistNext = self.width + ArtistEndScrollMargin
            if ScrollArtistNextRound == True:
                if ScrollArtistNext >= 61:
                    self.ArtistPosition = (ScrollArtistNext, Screen8text01[1])
                    ScrollArtistNext -= ArtistScrollSpeed
                elif ScrollArtistNext == 60 and ScrollArtistNextRound == True:
                    ScrollArtistNext = 60
                    ScrollArtistNextRound = False
                    ScrollArtistFirstRound = False
                    ScrollArtistTag = 60
                    self.ArtistPosition = (Screen8text01[0] + 60, Screen8text01[1])
        if self.ArtistWidth <= self.width - 60:  # center text
            self.ArtistPosition = (int(((self.width - 59 - self.ArtistWidth) / 2) + 60), Screen8text01[1])
        self.draw.text((self.ArtistPosition), oled.activeArtist, font=font9, fill='white')
        self.SongWidth, self.SongHeight = self.draw.textsize(oled.activeSong, font=font10)
        self.SongStopPosition = self.SongWidth - self.width + SongEndScrollMargin
        if self.SongWidth >= self.width - 60:
            if ScrollSongFirstRound == True:
                ScrollSongFirstRound = False
                ScrollSongTag = 60
                self.SongPosition = (Screen8text02[0] + 60, Screen8text02[1])
            elif ScrollSongFirstRound == False and ScrollSongNextRound == False:
                if ScrollSongTag <= self.SongWidth - 60:
                    ScrollSongTag += SongScrollSpeed
                    self.SongPosition = (-ScrollSongTag, Screen8text02[1])
                    ScrollSongNext = 60
                elif ScrollSongTag == self.SongWidth - 59:
                    ScrollSongTag = 60
                    ScrollSongNextRound = True
                    ScrollSongNext = self.width + SongEndScrollMargin
            if ScrollSongNextRound == True:
                if ScrollSongNext >= 61:
                    self.SongPosition = (ScrollSongNext, Screen8text02[1])
                    ScrollSongNext -= SongScrollSpeed
                elif ScrollSongNext == 60 and ScrollSongNextRound == True:
                    ScrollSongNext = 60
                    ScrollSongNextRound = False
                    ScrollSongFirstRound = True
                    ScrollSongTag = 60
                    self.SongPosition = (Screen8text02[0] + 60, Screen8text02[1])
        if self.SongWidth <= self.width - 60:  # center text
            self.SongPosition = (int(((self.width - 59 - self.SongWidth) / 2) + 60), Screen8text02[1])
        self.draw.text((self.SongPosition), oled.activeSong, font=font10, fill='white')
        self.draw.rectangle((0, 0, 59, 34), fill='black', outline='black')
        self.draw.text((Screen8text28), oled.playstateIcon, font=labelfont, fill='white')
        self.draw.text((Screen8text06), oled.activeFormat, font=font8, fill='white')
        self.draw.text((Screen8text07), str(oled.activeSamplerate), font=font8, fill='white')
        self.draw.text((Screen8text08), oled.activeBitdepth, font=font8, fill='white')
        self.draw.text((Screen8ActualPlaytimeText), str(timedelta(seconds=round(float(oled.seek) / 1000))), font=font8, fill='white')
        self.draw.text((Screen8DurationText), str(timedelta(seconds=oled.duration)), font=font8, fill='white')
        self.draw.rectangle((Screen8barLineX, Screen8barLineThick1, Screen8barLineX + Screen8barwidth, Screen8barLineThick2), outline=Screen8barLineBorder, fill=Screen8barLineFill)
        self.draw.rectangle((self.bar + Screen8barLineX - Screen8barNibbleWidth, Screen8barThick1, Screen8barX + self.bar + Screen8barNibbleWidth, Screen8barThick2), outline=Screen8barBorder, fill=Screen8barFill)
        if len(data2) >= 3:  # spectrum bars with falling peak indicators
            leftVU = data2[0]
            rightVU = data2[1]
            if leftVU != '':
                leftVU1 = int(leftVU)
                topL = leftVU1
                if oled.prevFallingTimerL == 0:
                    spectrumPeaksL = leftVU1
                if ((time() - oled.prevFallingTimerL) > Screen8fallingTime):
                    spectrumPeaksL = topL
                for i in range(leftVU1):
                    try:
                        self.draw.line(((Screen8leftVUDistance + i * Screen8leftVUWide1, Screen8leftVUYpos1), (Screen8leftVUDistance + i * Screen8leftVUWide1, Screen8leftVUYpos2)), fill=(int(spec_gradient[i]), int(spec_gradient[i]), int(spec_gradient[i])), width=Screen8leftVUWide2)
                    except:
                        continue
                if oled.prevFallingTimerL == 0:
                    oled.prevFallingTimerL = time()
                if topL > spectrumPeaksL:
                    spectrumPeaksL = topL
                if ((time() - oled.prevFallingTimerL) > Screen8fallingTime):
                    oled.fallingL = True
                if spectrumPeaksL > topL:
                    spectrumPeaksL = topL
                if oled.fallingL:
                    oled.prevFallingTimerL = time()
                oled.prevFallingTimerL = time()
                self.draw.line(((Screen8leftVUDistance + spectrumPeaksL * Screen8leftVUWide1, Screen8leftVUYpos1), (Screen8leftVUDistance + spectrumPeaksL * Screen8leftVUWide1, Screen8leftVUYpos2)), fill='white', width=2)
            if rightVU != '':
                rightVU1 = int(rightVU)
                topR = rightVU1
                if oled.prevFallingTimerR == 0:
                    spectrumPeaksR = rightVU1
                if ((time() - oled.prevFallingTimerR) > Screen8fallingTime):
                    spectrumPeaksR = topR
                for i in range(rightVU1):
                    try:
                        self.draw.line(((Screen8rightVUDistance + i * Screen8rightVUWide1, Screen8rightVUYpos1), (Screen8rightVUDistance + i * Screen8rightVUWide1, Screen8rightVUYpos2)), fill=(int(spec_gradient[i]), int(spec_gradient[i]), int(spec_gradient[i])), width=Screen8rightVUWide2)
                    except:
                        continue
                if oled.prevFallingTimerR == 0:
                    oled.prevFallingTimerR = time()
                if topR > spectrumPeaksR:
                    spectrumPeaksR = topR
                if ((time() - oled.prevFallingTimerR) > Screen8fallingTime):
                    oled.fallingR = True
                if spectrumPeaksR > topR:
                    spectrumPeaksR = topR
                if oled.fallingR:
                    oled.prevFallingTimerR = time()
                oled.prevFallingTimerR = time()
                self.draw.line(((Screen8rightVUDistance + spectrumPeaksR * Screen8rightVUWide1, Screen8rightVUYpos1), (Screen8rightVUDistance + spectrumPeaksR * Screen8rightVUWide1, Screen8rightVUYpos2)), fill='white', width=Screen8PeakWidth)
        image.paste(self.image, (0, 0))
    if newStatus != 'stop' and oled.duration == None:
        self.image.paste(('black'), [0, 0, image.size[0], image.size[1]])
        logoImage = Image.open('/home/volumio/NR1-UI/img/vudig.png').convert('RGB')
        self.image.paste(logoImage, (0, 0))
        spec_gradient = np.linspace(Screen8specGradstart,
\log \tilde p(e^{i\kappa h})\sim - \lambda \kappa^2 h^2 \, \log \left(1/(|\kappa|h)\right)\ \ \hbox{if}\ \ \alpha = 2,\ \kappa\not = 0\,. \leqno(5.12) $$ \vsp Recalling that it suffices to prove (3.11) for $\kappa\not = 0$ and observing that there the parameter $\kappa$ can be treated like a constant, we see that $\log (1/(|\kappa|h))\sim \log{(1/ h)}\,,$ where, because $h\to 0\,,$ we can assume $0<h<1\,. $ \vsp Hence we can replace (5.12) by $$ \log \tilde p(e^{i\kappa h})\sim -\lambda \kappa^2 h^2 \log {1\over h}\ \ \hbox{if}\ \ \alpha = 2,\ \kappa\not = 0\,. \leqno(5.13) $$ Then the limit relation (3.11) (equivalently (3.10)) holds if we scale by $$ \tau = \sigma(h) = {\lambda \pi \over \Gamma(\alpha +1) \,\sin{(\alpha \pi/ 2)}} \, h^\alpha \quad \hbox{if}\quad 0<\alpha<2\,, \leqno(5.14) $$ $$ \tau = \sigma(h) =\lambda h^2 \log {1\over h} \quad \hbox{if}\quad \alpha = 2\,. \leqno(5.15) $$ \vsp Putting $\mu = \lambda \pi /(\Gamma(\alpha +1)\, \sin(\alpha \pi/2)) = \lambda/b(\alpha)$ in (5.9) with $b(\alpha)$ defined in (2.17) we obtain from (5.4) the regular scaling law $$ \tau = \sigma(h) = \mu \,h^\alpha\ \ \hbox{for}\ \ 0<\alpha <2 \leqno(5.16) $$ with the restriction (5.6) for $\mu \,.$ As a result we have \vsp {\bf Theorem 5.1.} {\it Distinguish the cases {(i)} $\;0<\alpha<2\,,$ {(ii)} $\;\alpha = 2\,.$ Define the probabilities $p_k= P(Y= k)$ in case {(i)} by {(5.4)} with restriction {(5.6)}, in case {(ii)} by $$ p_0 = 1-2\lambda \zeta(3),\ \ p_k = \lambda |k|^{-3} \quad \hbox{for} \quad k \ne 0 $$ with restriction $0<\lambda \le {1/(2\zeta (3))}\,.$ Let the scaling relation $$ \tau = \mu h^\alpha\ \ \hbox{in case {(i)}}, \qquad \tau = \lambda h^2 \log{1\over h}\ \ \hbox{in case {(ii)}} \leqno(5.17) $$ hold and let for fixed $t>0$ the index $n= t/\tau$ run through $\hbox{\bf N}$ towards $\infty\,.$ Then the random variable $S_n$ of {(3.3)} converges in distribution to the random variable $S(t)$ whose probability density is given by {(1.15)} as $g_\alpha(x,t;0) \,. $} \vsp {\bf Remark 5.2.} We can use throughout $0<\alpha \le 2$ the parameter $\lambda$ and then have in (5.1) under the restriction (5.2) a unified representation of the transition probabilities. Here, in contrast to the Gr\"unwald-Letnikov random walk, the value $\alpha =1$ no longer plays a special role. With $\mu = \lambda/b(\alpha)$ we have for $0<\alpha <2$ the regular scaling law $\tau = \mu h^\alpha\,.$ However, the price to be paid for this unified representation is the non-regular scaling $\tau = \lambda h^2\,\log (1/h)$ for $\alpha= 2\,.$ Another price is that the generating function $\tilde p(z)$ in (5.8) is non-elementary, requiring considerable efforts in its asymptotic analysis. \section{A globally binomial random walk} The random walk model discussed in Section 4 has the disadvantage that the case $\alpha =1$ is excluded and the representation of the transition probabilities $p_k$ for $1<\alpha \le 2$ is different from that for $0<\alpha <1\,. $ However, for all admissible values of $\alpha$ we have the regular scaling law $\tau = \mu h^\alpha$.
The method treated in Section 5 has the advantage of a unified representation of the transition probabilities in the whole interval $0<\alpha \le 2\,,$ but the scaling law $\tau= \mu \,h^\alpha$ holds only for $0<\alpha <2\,, $ it breaks down at $\alpha = 2\,.$ In this section we present a model that in the whole interval $0<\alpha \le 2$ admits a unified representation of the $p_k$ via binomial coefficients and has there a scaling law of the form $\tau= \mu \,h^\alpha\,.$ Moreover, the generating function $\tilde p (z)$ is elementary for all $\alpha \in (0,2]\,.$ \vsp The use of the binomial coefficients ${\alpha \choose j}$ in the Gr\"unwald-Letnikov random walk has caused singular behaviour for $\alpha=1\,.$ One reason for this sad fact is that ${1\choose j} = 0$ for integer $j\ge 2\,.$ We can remove this singular behaviour by removing the factor $\alpha -1$. \vsp For $0<\alpha\le 2\,,\ \alpha \not = 1\,$ let us define $$ p_0= 1-2\lambda,\ p_k = (-1)^{k+1} {\lambda\over \alpha -1} \,{\alpha \choose |k|+1}\ \ \hbox{for}\ \ k\not = 0\,. \leqno(6.1) $$ Observing that here the singularity at $\alpha = 1$ is removable, let us for $\alpha = 1$ define (via $\alpha \to 1$ in (6.1)) $$ p_0= 1-2\lambda\,,\quad p_k = {\lambda\over |k|(|k|+1)}\ \ \hbox{for}\ \ k\not = 0\,. \leqno(6.2) $$ In (6.1) and (6.2) $\sum_{k\in\hbox{\bf Z}}p_k = 1$ and if $0<\lambda\le 1/2$ all $p_k\ge 0\,.$ In the special case $\alpha = 2$ we get $$ p_0= 1-2\lambda\,,\quad p_1= p_{-1}= \lambda\,,\quad p_k= 0 \ \ \hbox{for}\ \ |k|\ge 2\,, $$ the familiar random walk for approximation of the classical process governed by the equation ${\partial u\over \partial t}= {\partial^2 u\over \partial x^2}\,.$ \vsp The generating function $\tilde p(z)= \sum_{k\in \hbox{\bf Z}}p_k z^k$ has in the case $\alpha \not = 1$ the form $$ \tilde p(z) = 1-\lambda \{ q(z)+q(z^{-1})\} \leqno(6.3) $$ with $$ q(z) = {1\over \alpha -1}\,(1-z^{-1})\,\{ (1-z)^{\alpha -1}-1\}\,. $$ By passing here to the limit or directly from (6.2) we get for $\alpha = 1$ the representation $$ \tilde p(z) = 1 -\lambda \{(1-z^{-1})\log (1-z) +(1-z)\log(1-z^{-1})\} \,, \quad \tilde p(1) = 1\,. \leqno(6.4) $$ We have proposed and investigated the particular random walk so generated (its transition probabilities given in (6.2)) in [GM99, Section 5]. \vsp In the special case $\alpha = 2$ we find $$ \tilde p(z) = 1 +\lambda (z-2+z^{-1})\,. \leqno(6.5) $$ \vsp We will now show that for all $\alpha \in (0,2]$ there exists a finite positive number $c(\alpha)$ so that, with $$ \mu = c(\alpha)\,\lambda \,, \leqno(6.6) $$ we arrive for $\kappa \in \hbox{\bf R}\setminus \{0\}$ at the small $h$ asymptotics $$ \tilde p(e^{i\kappa h}) = 1-\mu (|\kappa| h)^\alpha + o\left( (|\kappa|\, h)^\alpha \right) \leqno(6.7) $$ which implies (3.11). As in Sections 4 and 5 we can ignore the value $\kappa = 0$ as trivial. \vsp Referring to [GM99] for detailed treatment of the case $\alpha =1\,, $ let now be $0\not = \kappa \in \hbox{\bf R}$ and $ 0<\alpha \le 2,\ \alpha \not = 1\,,\, z= e^{i\kappa h}$. In view of (6.3) we investigate the asymptotics of $q(z)+q(z^{-1})$ for $h\to 0$.
From $z^{-1} = \bar z$ and $$ (1-\alpha)q(z) = z^{-1}(1-z)^\alpha - z^{-1} +1 = e^{-i\kappa\,h}(1-e^{i\kappa\, h})^\alpha - e^{-i\kappa\, h} +1\,, $$ we conclude $$ \psi(z):= (1-\alpha)\{q(z)+q(z^{-1})\} = 2\Re\left\{e^{-i\kappa h}(1-e^{i\kappa h})^\alpha\right\} + 2(1-\cos(\kappa h))\,, \leqno(6.8) $$ and here $$ \Re\left\{e^{-i\kappa \,h}(1-e^{i\kappa\, h})^\alpha\right\} \sim \Re\left((-i\kappa\, h)^\alpha\right) = (|\kappa|\,h)^\alpha \cos{(\alpha\pi/2)}, \leqno(6.9) $$ $$ 1-\cos(\kappa h)\sim {1\over 2}(|\kappa |h)^2\,. \leqno(6.10) $$ \vsp We distinguish three cases: $ \hbox{(i)}\ \ 0<\alpha<1\,, \ \ \hbox{(ii)}\ \ 1<\alpha<2\,,\ \ \hbox{(iii)} \ \alpha = 2\,. $ \vsp In cases (i) and (ii) the leading term in the asymptotics of $\psi(z)$ turns out to be $$\psi(z)\sim 2\,(|\kappa|\,h)^\alpha\,\cos{(\alpha\pi/ 2)}\,.$$ \vsp In case (iii) where $\alpha = 2$ however, this term is matched in order of magnitude by (6.10) so that we obtain $$ \psi(z)\sim 2(|\kappa|\,h)^2(-1)+(|\kappa|\,h)^2= -(|\kappa|\,h)^2\,. $$ \vsp Collecting results and dividing (6.8) by $1-\alpha$ we get (with $z= e^{i\kappa h}$) $$ \lambda\{ q(z)+q(z^{-1})\}\sim \cases{ \lambda {\displaystyle{2 \cos{(\alpha\pi/ 2)}\over 1-\alpha}}\, (|\kappa|\,h)^\alpha & if $0<\alpha<2\,,\ \alpha \not = 1\,,$\cr\cr \lambda (|\kappa|\,h)^2 & if $\alpha = 2\,.$\cr} \leqno(6.11) $$ Hence, in view of (6.3), we obtain (6.7) with (6.6) by putting $$ c(\alpha)= \cases{ {\displaystyle {2 \cos{(\alpha\pi/2)}\over 1-\alpha}} & if $\quad 0<\alpha<2,\ \alpha \not = 1\,,$\cr\cr 1 & if $\quad \alpha = 2\,.$\cr} \leqno(6.12) $$ \vsp The scaling coefficient $c(\alpha)$ allows continuous extension to the value $\alpha = 1\,,$ giving $\lim\limits_{\alpha \to 1} c(\alpha) = \pi$ in accordance with [GM99, formula (5.1)]. At $\alpha = 2\,,$ however, $c(\alpha)$ is discontinuous. In fact $$ c(2)= 1\not = 2= \lim_{\alpha \to 2} c(\alpha)\,. \leqno(6.13) $$ \vsp Let us finally display the transition probabilities with $\mu$ instead of $\lambda$ as parameter. \vsp For $0<\alpha<2,\ \alpha \not = 1$: $$ \cases{p_0 = 1-2\mu \, {\displaystyle {1-\alpha\over 2\,\cos{(\alpha\pi/ 2)}}}\,,&\cr\cr p_k={\displaystyle{(-1)^k\,\mu\over 2\cos{(\alpha\pi/2)}}}\,{\displaystyle{\alpha\choose |k|+1}} & for $\;k\not= 0\,,$\cr\cr 0<\mu\le {\displaystyle{\cos{(\alpha\pi/2)}\over 1-\alpha}}\,,& \cr } \leqno(6.14) $$ for $\alpha = 1$ (see [GM99, formula (5.1)]): $$ p_0 = 1-{\displaystyle{2\mu \over \pi}}\,,\quad p_k= {\displaystyle {\mu\over \pi |k|(|k|+1)}} \quad \hbox{for} \quad k\not = 0\,,\quad 0<\mu\le {\pi/ 2}\,, \leqno(6.15) $$ for $\alpha = 2$: $$ p_0 = 1-2\mu\,,\quad p_1= p_{-1}= \mu,\quad p_k= 0 \quad \hbox{for} \quad |k|\ge 2\,,\quad 0<\mu \le 1/2\,. \leqno(6.16) $$ The discontinuity at $\alpha = 2$ has thus been transferred to the upper bound for $\mu\,.$ \vsp We summarize the result in \vsp {\bf Theorem 6.1.} {\it Take the probabilities $p_k= P(Y= k)$ and the restrictions for $\mu$ as in formulas {(6.14)}, {(6.15)}, {(6.16)}, and use the scaling relation $\tau = \mu \,h^\alpha\,.$ Let for fixed $t>0$ the index $n=t/\tau$ run through $\hbox{\bf N}$ towards $\infty\,. $ Then the random variable $S_n$ of {(3.3)} converges in distribution to the random variable $S(t)$ whose probability density is given by {(1.15)} as $g_\alpha(x,t;0) \,.$}
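\vsp As a quick numerical sanity check (our own illustration, not part of the original derivation; the truncation level {\tt kmax} is an arbitrary choice), one may verify that the probabilities (6.1) sum to one and that the asymptotics (6.7), with $c(\alpha)$ from (6.12), indeed emerge as $h\to 0$:
\begin{verbatim}
import numpy as np
from scipy.special import binom

alpha, lam, kappa, kmax = 1.5, 0.4, 1.0, 20000
k = np.arange(1, kmax + 1)
if alpha == 1.0:
    tail = lam / (k * (k + 1))                                # Eq. (6.2)
else:
    tail = (-1.0)**(k + 1) * lam / (alpha - 1) * binom(alpha, k + 1)  # Eq. (6.1)
p0 = 1 - 2 * lam
print(p0 + 2 * tail.sum())            # total probability, should be ~1

mu = 2 * np.cos(alpha * np.pi / 2) / (1 - alpha) * lam        # mu = c(alpha)*lam
for h in (0.1, 0.05, 0.01):
    ptilde = p0 + 2 * (tail * np.cos(k * kappa * h)).sum()    # ptilde(e^{i kappa h})
    print(h, (1 - ptilde) / (abs(kappa) * h)**alpha, mu)      # should approach mu
\end{verbatim}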
\section{The Chechkin-Gonchar random walk} In this section we adapt the considerations of Chechkin and Gonchar [ChG] to the framework of our Section 3, restricting attention to the parameter range $ 0 <\alpha <2\,.$ In so doing we exclude the well-known case of the classical Gaussian process. We will obtain a {\it random walk} which is {\it discrete in time} but {\it continuous in space}; in more precise words, its jumping width (at the instants $t_n= n \tau$) can assume any real value, with an everywhere positive probability density. We
function $D_{\rm LO}^{(8)}(z)$ in the second term of the factorization formula in Eq.~\eqref{eq:DNLOeta} differs from the LO fragmentation function $D_{\rm LO}^{(1)}(z)$ in the first term by the product of that color factor and the ratio $2N_c/(N_c^2-1)$ of the perturbative NRQCD matrix elements in Eqs.~\eqref{eq:<O1>QQbar} and \eqref{eq:<O8>QQbar}. Thus the LO color-octet $^1S_0$ fragmentation function is \begin{equation} \label{eq:dLO8-z} D_{\rm LO}^{(8)}(z) = \frac{ N_c^2-4}{4 N_c(N_c^2-1)m^3} \left[2(1-z) \log(1-z) +3z-2z^2 \right]. \end{equation} In the calculation of the NLO fragmentation function for $g \to Q \bar Q_8$, it is useful to have the LO fragmentation function for $g \to Q \bar Q_8$ expressed as an integral over the gluon phase space in $D=4-2 \epsilon$ dimensions: \begin{equation} \label{eq:fragBorn} D_1(z) = N_{\rm CS} \int d\phi_{\rm Born} (p,q) \mathcal{A}_{\rm Born}(p,q), \end{equation} where $N_{\rm CS}$ is the Collins-Soper prefactor in Eq.~\eqref{eq:overalfac}. The {\it Born phase-space measure} $d\phi_{\rm Born}$ is the product of the differential phase space for the final-state gluon of momentum $q$ and a factor $2\pi \delta(K.n-(2p+q).n)$ from the cut through the eikonal line. It can be reduced to a single differential in the invariant mass $s$ of the $Q \bar Q g$ system: \begin{equation} d\phi_{\rm Born}(p,q) = \frac{z^{-1+\epsilon} (1-z)^{-\epsilon}}{2 (4\pi)^{1-\epsilon} \Gamma(1-\epsilon) (2p+q).n} \left( s-\frac{4m^2}{z} \right)^{-\epsilon} ds, \label{eq:BornPSpq} \end{equation} where $s$ and $z$ expressed as functions of $p$ and $q$ are \begin{subequations} \begin{eqnarray} s &=& (2p+q)^2, \label{eq:s-2p+q} \\ z &=& \frac{(2p).n}{(2p+q).n}. \label{eq:z-pq} \end{eqnarray} \label{eq:z,s-pq}% \end{subequations} There is an implied Heaviside theta function that imposes the constraint $s>4m^2/z$. The {\it Born squared amplitude} $\mathcal{A}_{\rm Born}$ is obtained by multiplying the right side of Eq.~(2.15) in Ref.~\cite{Artoisenet:2014lpa} by the color factor $(N_c^2-4)/2$: \begin{eqnarray} \mathcal{A}_{\rm Born}(p,q) &=& \frac{2 (1-2 \epsilon) (N_c^2-1)(N_c^2-4) g_s^4 [(2p+q).n]^2}{N_c m s^2 (s-4m^2)^2} \nonumber\\ && \times \left[ (1-2z+2z^2 -\epsilon)s^2 - 8 (z - \epsilon) m^2 s + 16(1-\epsilon) m^4 \right], \label{eq:ABorn-pq} \end{eqnarray} where $z$ and $s$ are expressed as functions of $p$ and $q$ in Eqs.~\eqref{eq:z,s-pq}. The factors of $K.n = (2p+q).n$ cancel between $N_{\rm CS}$, $d\phi_{\rm Born}$, and $\mathcal{A}_{\rm Born}$ in Eqs.~\eqref{eq:overalfac}, \eqref{eq:BornPSpq}, and \eqref{eq:ABorn-pq}. The LO fragmentation function for $g \to Q \bar Q_8$ in $D$ dimensions is obtained by inserting the three factors in Eqs.~\eqref{eq:overalfac}, \eqref{eq:BornPSpq}, and \eqref{eq:ABorn-pq} into Eq.~\eqref{eq:fragBorn}. Setting $\epsilon =0$ and integrating over $s$, we obtain the final result for the LO fragmentation function for $g \to Q \bar Q_8$ in 4 dimensions: \begin{equation} \label{eq:dLO} D^{\rm (LO)}_{g \rightarrow Q\bar Q_8} (z) = \frac{ (N_c^2-4) \alpha_s^2}{4 N_c m^3} \left[2(1-z) \log(1-z) +3z-2z^2 \right]. \end{equation} Dividing by the perturbative NRQCD matrix element in Eq.~\eqref{eq:<O8>QQbar}, we obtain the LO fragmentation function $D_{\rm LO}^{(8)}(z)$ in Eq.~\eqref{eq:dLO8-z} multiplied by $\alpha_s^2$.
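As a quick cross-check of Eq.~\eqref{eq:dLO}, the $\epsilon=0$ reduction of the $s$-integral that produces it can be evaluated numerically and compared with the closed form. The short Python script below is our own illustration, not part of the calculation; we strip off the overall prefactor and set $m=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m = 1.0  # heavy-quark mass, set to 1 for the check

def bracket(z):
    # closed-form bracket of Eq. (dLO): 2(1-z)log(1-z) + 3z - 2z^2
    return 2 * (1 - z) * np.log(1 - z) + 3 * z - 2 * z**2

def integrand(s, z):
    # epsilon = 0 integrand, overall prefactor stripped off
    return (1 - 2 * z * (1 - z) * s * (s - 4 * m**2 / z)
            / (s - 4 * m**2) ** 2) / s**2

for z in (0.2, 0.5, 0.8):
    val, _ = quad(integrand, 4 * m**2 / z, np.inf, args=(z,))
    print(z, 4 * m**2 * val, bracket(z))  # the two columns should agree
\end{verbatim}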
\subsection{Born tensors} \label{sec:Borntensors} To facilitate the calculation of the real NLO corrections to the fragmentation function, it is convenient to generalize the integration measure $d\phi_{\rm Born}(p,q)$ for the LO fragmentation function in Eq.~\eqref{eq:BornPSpq} by allowing $q$ to be a more general light-like 4-vector. It could be the momentum $q_1$ or $q_2$ of a massless final-state parton, or it could be another light-like 4-vector constructed from $q_1$ and $q_2$. The variables $s=(2p+q)^2$ and $z=(2p.n)/(2p+q).n$ defined in Eqs.~\eqref{eq:z,s-pq} can be regarded as functions of this more general light-like 4-vector $q$. The longitudinal momentum of the fragmenting gluon is $(2p+q_1+q_2).n$. The Collins-Soper prefactor in Eq.~\eqref{eq:overalfac} can be generalized to a function of $p$ and $q$: \begin{equation} \label{eq:NBorn} N_{\rm Born}(p,q) = \frac{1}{(N_c^2-1)(2-2\epsilon)} \frac{1}{2\pi (2p+q_1+q_2).n} \left( \frac{2p.n}{(2p+q).n} \right)^{1-2\epsilon}. \end{equation} The Born phase-space measure in Eq.~\eqref{eq:BornPSpq}, with the factor of $1/(2p+q).n$ replaced by $1/(2p+q_1+q_2).n$ and with its coefficient expressed as a function of $s$ and $z$, will be denoted by $d\phi_{\rm Born}(s,z)$: \begin{equation} d\phi_{\rm Born}(s,z) = \frac{z^{-1+\epsilon} (1-z)^{-\epsilon}}{2 (4\pi)^{1-\epsilon} \Gamma(1-\epsilon) (2p+q_1+q_2).n} \left( s-\frac{4m^2}{z} \right)^{-\epsilon} ds. \label{eq:BornPSsz} \end{equation} Similarly, the Born squared amplitude $\mathcal{A}_{\rm Born}(p,q)$ in Eq.~\eqref{eq:ABorn-pq}, with the factor of $[(2p+q).n]^2$ replaced by $[(2p+q_1+q_2).n]^2$ and with its coefficient expressed as a function of $s$ and $z$, will be denoted by $\mathcal{A}_{\rm Born}(s,z)$: \begin{eqnarray} \mathcal{A}_{\rm Born}(s,z) &=& \frac{2 (1-2 \epsilon) (N_c^2-1)(N_c^2-4) g_s^4 [(2p+q_1+q_2).n]^2}{N_c m s^2 (s-4m^2)^2} \nonumber\\ && \times \left[ (1-2z+2z^2 -\epsilon)s^2 - 8 (z - \epsilon) m^2 s + 16(1-\epsilon) m^4 \right]. \label{eq:ABorn-sz} \end{eqnarray} The product of $N_{\rm Born}$, $d\phi_{\rm Born}$, and $\mathcal{A}_{\rm Born}$ in Eqs.~\eqref{eq:NBorn}, \eqref{eq:BornPSsz}, and \eqref{eq:ABorn-sz} depends only on $s$ and $z$. We introduce a more concise notation for this product: \begin{eqnarray} \label{eq:NphiABorn} N d\phi \mathcal{A}_{\rm Born}(s,z) &=& \frac{(1-2 \epsilon) (N_c^2-4) (4 \pi)^\epsilon\alpha_s^2}{\Gamma(2-\epsilon) N_c m} [z (1-z)]^{-\epsilon} \frac{(s-4m^2/z)^{-\epsilon}}{s^2} \nonumber \\ && \times \left[ 1 - \epsilon - 2 z(1-z) \frac{s (s-4m^2/z)} {(s-4m^2)^2} \right] \theta(s-4m^2/z) ds. \end{eqnarray} If this measure is multiplied by a function of $s$ and integrated over $s$ from $4m^2/z$ to $\infty$, it defines a function of $z$. If the weight function is simply 1, the integral is the LO fragmentation function for $g \to Q \bar Q_8$ in $D$ dimensions defined in Eq.~\eqref{eq:fragBorn}: \begin{eqnarray} \label{eq:D1-z} D_1(z) &=& \frac{(1-2 \epsilon)(N_c^2-4)(4\pi)^\epsilon \alpha_s^2}{\Gamma(2-\epsilon) N_c m} [z (1-z)]^{-\epsilon} \nonumber \\ && \times \int_{4m^2/z}^\infty \!\!\!\!\!ds \frac{(s-4m^2/z)^{-\epsilon}}{s^2} \left[ 1 - \epsilon - 2 z(1-z) \frac{s (s-4m^2/z)} {(s-4m^2)^2} \right]. \end{eqnarray} In the calculation of the real NLO corrections to the fragmentation function, it is convenient to have expressions for the Born squared amplitude with a pair of uncontracted Lorentz indices. 
They will be used to construct subtraction terms that cancel the ultraviolet and infrared divergences in the real NLO corrections point-by-point in the phase space. There are two useful choices for the uncontracted indices $\mu$ and $\nu$. One choice is the Lorentz indices associated with the ends of the eikonal line. The other choice is the Lorentz indices associated with the polarization vectors of the cut gluon line. We will refer to those expressions as the {\it Born tensors}. The Born tensor with Lorentz indices associated with the eikonal line is obtained by multiplying the right side of Eq.~(2.24) in Ref.~\cite{Artoisenet:2014lpa} by the color factor $(N_c^2-4)/2$: \begin{equation} \mathcal{A}_{\rm eikonal}^{\mu \nu}(p,q) = \frac{(1-2\epsilon)(N_c^2-1)(N_c^2-4) g_s^4 [(2p+q).n]^2}{4N_c m [(2p+q)^2]^2 (p.q)^2} \left[ (2p.q)^2 T^{\mu \nu} - (2p+q)^2 l^\mu l^\nu \right] , \label{Borntensor-eikonal} \end{equation} where $l^\mu$ and $T^{\mu \nu}$ are \begin{subequations} \begin{eqnarray} l^\mu &= & 2p^\mu - \frac{2p.n}{(2p+q).n} (2p+q)^\mu , \\ T^{\mu \nu} & = & - g^{\mu \nu} + \frac{n^\mu (2p+q)^\nu + (2p+q)^\mu n^\nu}{(2p+q).n}. \end{eqnarray} \end{subequations} The tensor $\mathcal{A}_{\rm eikonal}^{\mu \nu}$ is orthogonal to $n_\mu$ and $n_\nu$. Its contraction with $-g_{\mu \nu}$ is \begin{equation} \mathcal{A}_{\rm eikonal}^{\mu \nu}(p,q)\; \left( -g_{\mu \nu} \right) = \left(\frac{(2p+q).n}{(2p+q_1+q_2).n} \right)^2 \mathcal{A}_{\rm Born}(s,z) , \label{Borntensor-eikonal-gmunu} \end{equation} where $\mathcal{A}_{\rm Born}$ is given in Eq.~\eqref{eq:ABorn-sz}. The Born tensor with Lorentz indices associated with the final-state gluon is obtained by multiplying the right side of Eq.~(2.27) in Ref.~\cite{Artoisenet:2014lpa} by the color factor $(N_c^2-4)/2$: \begin{equation} \mathcal{A}_{\rm gluon}^{\mu \nu}(p,q) = \frac{ (N_c^2-1)(N_c^2-4) g_s^4 [(2p+q).n]^2}{N_c m [(2p+q)^2]^2 (p.q)^2} \sum_{i=1}^{4} C_i(z,p.q) T_{i}^{\mu \nu}(p,q), \label{Borntensor-gluon} \end{equation} where the tensors are \begin{subequations} \begin{eqnarray} T_{1}^{\mu \nu} (p,q)& = & -g^{\mu \nu} +\frac{q^\mu n^\nu + n^\mu q^\nu}{q.n}, \\ T_{2}^{\mu \nu} (p,q)& = & -g^{\mu \nu} +\frac{q^\mu p^\nu + p^\mu q^\nu }{p.q}, \\ T_{3}^{\mu \nu} (p,q)& = & \left(p^\mu - \frac{p.q}{q.n} n^\mu \right) \left(p^\nu - \frac{p.q}{q.n} n^\nu \right), \\ T_{4}^{\mu \nu} (p,q)& = & q^\mu q^\nu. \end{eqnarray} \label{eq:Timunu} \end{subequations} Their coefficients are \begin{subequations} \begin{eqnarray} C_1(z,p.q) & = & -2(1-z)(m^2 + p.q)\left[z p.q -2 (1-z)m^2 \right], \\ C_2(z,p.q) & = & \left[1-2\epsilon -2z(1-z) \right](p.q)^2-2 z (1-z)m^2p.q, \\ C_3(z,p.q) & = & 4(1-z)^2(m^2 + p.q),\\ C_4(z,p.q) & = & z^2 p.q + (-1+2\epsilon +z^2)m^2 , \end{eqnarray} \end{subequations} where $z$ is the momentum fraction in Eq.~\eqref{eq:z-pq}. The tensor $\mathcal{A}_{\rm gluon}^{\mu \nu}$ is orthogonal to $q_\mu$ and $q_\nu$. Its contraction with $-g_{\mu \nu}$ is \begin{equation} \mathcal{A}_{\rm gluon}^{\mu \nu}(p,q)\; \left( -g_{\mu \nu} \right) = \left(\frac{(2p+q).n}{(2p+q_1+q_2).n} \right)^2 \mathcal{A}_{\rm Born}(s,z) , \label{Borntensor-gluon-gmunu} \end{equation} where $\mathcal{A}_{\rm Born}$ is given in Eq.~\eqref{eq:ABorn-sz}. \section{Real NLO corrections} \label{sec:NLOreal} The real NLO corrections to the perturbative fragmentation function for $g \to Q \bar Q$, with the $Q \bar Q$ pair in a color-octet $^1S_0$ state, come from cut diagrams with two real partons in the final state. 
The two partons can be two gluons ($gg$) or a light quark-antiquark pair ($q \bar q$). Cut diagrams with two real gluons can be obtained from the four LO cut diagrams with a single real gluon, such as the diagram in Figure~\ref{fig:cutdiagram}, by adding a gluon line that crosses the cut and runs from any of the 6 colored lines on the left side of the cut to any of the 6 colored lines on the right side of the cut. The additional gluon line can also be attached to the operator vertex, in which case the fragmenting gluon is attached to the eikonal line. The cut diagrams with a light $q \bar q$ pair can be obtained from the four LO cut diagrams by replacing the real gluon line that crosses the cut by a virtual gluon that produces a $q \bar q$ pair that crosses the cut. Each of the cut diagrams involves an integral over the phase space of the two real partons in the final state. We denote the equal momenta of the $Q$ and $\bar Q$
phase (shown as the second box of Figure \ref{fig:region_formation}) ensures that every register used by a store (we will say store register for short when there is no ambiguity) remains live until the region end. In this way, \mbox{{ReplayCache}}\xspace keeps all store registers intact during region execution so that the correct register values can be used to re-execute the stores of the region. However, the register allocator might generate spill code in a region, which can break the region formation semantics if a register that has been spilled to the stack is redefined afterwards. To handle this case, the \mbox{{ReplayCache}}\xspace compiler introduces a post-register-allocation phase---\texttt{Lifetime of Spill Store Registers Preservation} (fourth box of Figure \ref{fig:region_formation})---that places a boundary right before the redefinition of such a store register so that the region semantics are preserved. \begin{figure*}[ht!] \centering \setkeys{Gin} \subfloat[Original program with initial boundary]{\includegraphics[width=0.2\textwidth,angle=0]{figures/region-partition-0.pdf}} \hfill \subfloat[Partitioned program by live interval extension]{\includegraphics[width=0.2\textwidth,angle=0]{figures/region-partition-1.pdf}} \hfill \subfloat[Partitioned program with shadow interval]{\includegraphics[width=0.2\textwidth,angle=0]{figures/region-partition-2.pdf}} \hfill \subfloat[Partitioned program with spill store (after register allocation)] {\includegraphics[width=0.2\textwidth,angle=0]{figures/region-partition-3.pdf}} \caption{Example partitioned program with $y$'s live interval annotated: (a) shows an initial region boundary at the beginning of the function (basic block $A$) and the live intervals of variables $x$, $y$, and $z$; (b) shows the second boundary, inserted in basic block $D$, to which the live intervals of $x$ and $y$ are extended when the partitioning threshold is set to 2; (c) shows the shadow intervals of all three variables towards the second region boundary; and (d) shows the case of redefining the register $r2$ of a spill store in basic block $C$ after variables have been assigned to physical registers} \label{fig:live_intervals} \end{figure*} \paragraph{Register Pressure-aware Region Construction} \mbox{{ReplayCache}}\xspace's region construction algorithm works as follows. It traverses the CFG of a program from the beginning and accumulates the number of live variables, carrying along the liveness of store registers; it then places a region boundary when the number of overlapping live variables exceeds a certain threshold. Here, we set the threshold to the number of available physical registers in the underlying architecture, e.g., 16 for the 32-bit ARM~\cite{ltd02} architecture. Figure~\ref{fig:live_intervals}(a) shows an example where the live interval of the branch variable $x$ ends at the bottom of basic block $B$ and, similarly, that of the store variable $y$ ends in basic block $C$. Assuming we have only 2 available physical registers, when the region construction algorithm traverses the left path going through basic blocks $C$ to $E$, it places a cut in the middle of basic block $E$ because the live intervals of variables $x$ and $y$ are carried into basic block $E$. \paragraph{Register Renaming} With the region boundaries inserted by the region construction phase, \mbox{{ReplayCache}}\xspace simply extends the liveness of the operand variables of stores and branches towards the region ends so that those variables never get assigned to the same physical register as other variables.
Consequently, \mbox{{ReplayCache}}\xspace guarantees that store and branch registers never get overwritten by subsequent definitions in the same region. To implement the extension of the live intervals of store and branch registers, \mbox{{ReplayCache}}\xspace computes a shadow interval for each store register and branch register. The shadow interval starts at the last use point of a store or branch register and ends at the end of the region that contains this last use. Figure~\ref{fig:live_intervals}(b) shows that the \mbox{{ReplayCache}}\xspace compiler generates a shadow interval for the store variable $y$ (the shadow interval of the branch variable $x$ is not shown in the figure). With this technique, the \mbox{{ReplayCache}}\xspace compiler can guarantee that variables $x$ and $y$ each get a unique physical register in the region. \paragraph{Lifetime of Spill Store Registers Preservation} The aforementioned region construction and register renaming only prevent overwriting of application store and branch registers. When the compiler inserts a stack spill in some region, subsequent definitions might still overwrite the registers of spill stores. Figure~\ref{fig:live_intervals}(c) demonstrates an example in which a spill store is inserted in basic block $D$ to break the live interval of variable $y$; however, variable $y$ is then redefined by the last instruction in basic block $D$. Since spill stores are inserted after register allocation finishes, one might want to perform register renaming again, which would require running region formation iteratively. Rather than doing so, \mbox{{ReplayCache}}\xspace opts for the so-called lifetime of spill store registers preservation, which cuts the region before a redefinition of a spill store register, i.e., $y=y<<2$, provided that the redefinition is not itself used to write data to the same stack location as the spill store. \subsection{\mbox{{ReplayCache}}\xspace Failure Recovery Protocol} \begin{figure}[ht!] \centering \includegraphics[width=1\columnwidth]{figures/failure-recovery-protocol.pdf} \caption{Failure recovery for region R1 when a power failure happens at the bottom of R1. When the power failure happens, reload register values from NVFF (\whitecircle{1}); when \emph{\PSHR} is 0 (\whitecircle{3}), continue execution from the power failure point; otherwise (\whitecircle{2}) reset the \emph{PC} to the value of the \emph{region register} (\whitecircle{4}) and jump back to the region beginning (\whitecircle{5})} \label{fig:failure_recovery_protocol} \end{figure} \begin{figure*}[htb!] \centering \subfloat[Store level persistence without ILP]{ \centering \includegraphics[width=0.55\textwidth,height=0.14\textwidth]{figures/store-level-persistence.pdf}} \subfloat[Region level persistence with ILP]{ \centering \includegraphics[width=0.35\textwidth,height=0.14\textwidth]{figures/region-level-persistence.pdf}} \caption{Performance improvement enabled by ILP-based region level persistence} \label{fig:ilp_enabled} \end{figure*} As the Introduction (\cref{sec:intro}) states, \mbox{{ReplayCache}}\xspace dedicates a non-volatile register, the \emph{region register}, to keep track of the PC address of the last region end instruction for failure recovery, as shown in Figure~\ref{fig:failure_recovery_protocol}. When the processor gets to a region boundary, \mbox{{ReplayCache}}\xspace writes its PC address to the \emph{region register}.
During program execution, the value of \emph{\PSHR} is incremented/decremented by 1 when \mbox{{ReplayCache}}\xspace starts/finishes the persistence of a store. Therefore, the \emph{\PSHR} register holds the number of stores that are currently being persisted. When \emph{\PSHR} becomes 0, the \mbox{{ReplayCache}}\xspace hardware knows that all stores of the region have been persisted. Thus, \mbox{{ReplayCache}}\xspace does not jump back to the region beginning when \emph{\PSHR} is 0 upon power failure. When \emph{\PSHR} is not 0, \mbox{{ReplayCache}}\xspace uses a selective replay mechanism to recover from the inconsistent data left by the power failure. Figure~\ref{fig:failure_recovery_protocol} shows how \mbox{{ReplayCache}}\xspace recovers the program status after a power failure. As shown in the figure, region R1 contains three basic blocks, \textit{e.g.,}\xspace $B$, $C$, and the first portion of $D$. Thanks to the region formation property, all store registers and branch registers of region R1 are \emph{live out} to the end of R1; thus the register values, e.g., $r2$, $r3$, and $sp$, are guaranteed to remain the same up to the region end. When a power failure happens in the middle of basic block $D$, \mbox{{ReplayCache}}\xspace first reloads all volatile register values from NVFF (\whitecircle{1}); then, when the \emph{\PSHR} register indicates that there are unpersisted stores in flight (\whitecircle{2}), it resets the \emph{PC} to the value of the \emph{region register} (\whitecircle{4}) and finally jumps back to the beginning of region $R1$ to selectively re-execute (\whitecircle{5}) the branches and stores up to the failure point. In this way, \mbox{{ReplayCache}}\xspace can correct inconsistent NVM state and continue executing the following regions. It is worth noting that recovery correctness is not hurt if the program takes the right path through basic block $C$, even though register $r3$ gets redefined in BB3; the reason is that the redefinition of $r3$ is spilled to the same stack slot as the first store of basic block $C$. \subsection{Optimization---High Efficiency Instruction Level Parallelism}\label{sec:ilp} As the Background (\cref{sec:background}) states, \mbox{{ReplayCache}}\xspace enables a volatile cache with a write-back policy, with some additional requirements. It needs to treat stores specially: \mbox{{ReplayCache}}\xspace must write each store through to the underlying NVM, which is an expensive operation, and wait for the acknowledgement of the store, ensuring that the store is persisted in NVM before it moves on to execute the next instructions. However, this per-store write-through policy incurs significant performance overhead, consuming hard-won energy on expensive NVM write operations that would otherwise be used to move the program further forward. To address this overhead, \mbox{{ReplayCache}}\xspace optimizes the per-store persistence mechanism by exploiting instruction level parallelism (ILP): rather than waiting for the acknowledgement of a single store, it speculatively executes all following instructions up to the region end. In other words, \mbox{{ReplayCache}}\xspace opts for a per-region persistence mechanism, hiding the long NVM write latency by overlapping it with the execution of the following code up to the region boundary. Figure \ref{fig:ilp_enabled} shows how this ILP works when multiple stores are executed in a region.
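The recovery decision just described can be condensed into a short sketch (ours, not from the paper; \texttt{cpu}, \texttt{nvff}, and the field names are hypothetical stand-ins for the hardware state in Figure~\ref{fig:failure_recovery_protocol}):
\begin{verbatim}
# Sketch (ours) of the recovery decision; cpu/nvff and field names are
# hypothetical stand-ins for the hardware state described in the text.
def recover_from_power_failure(cpu, nvff):
    cpu.registers = nvff.load_all()   # (1) reload volatile registers from NVFF
    if cpu.pshr == 0:                 # (3) all stores of the region persisted:
        return cpu.pc                 #     continue from the failure point
    cpu.pc = cpu.region_register      # (2),(4) unpersisted stores in flight:
    cpu.replay_mode = True            # (5) jump back and selectively re-execute
    return cpu.pc                     #     stores/branches up to the failure point
\end{verbatim}
With this recovery path in place, we return to the timelines of Figure~\ref{fig:ilp_enabled}.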
Figure~\ref{fig:ilp_enabled}(a) describes the non-ILP case: while the processor is persisting a store, it must wait until the acknowledgement of the store is received before continuing to execute the following instructions. Figure~\ref{fig:ilp_enabled}(b) shows how \mbox{{ReplayCache}}\xspace hides the latency of persisting the store. By overlapping the NVM writing with execution of following
SCIENTIFIC MEMOIRS. VOL. I.— PART I. Article I. Memoir on the Free Transmission of Radiant Heat through different Solid and Liquid Bodies; presented to the Royal Academy of Sciences of Paris, on the 4th of February, 1833, by M. Melloni. From the Annales de Chimie et de Physique, t. liii. p. 1. Mariotte was the first, so far as I am aware, who attempted to appretiate the action of diaphanous substances in transmitting or intercepting the calorific rays which emanate from terrestrial sources. After having observed that solar heat concentrated at the focus of a metallic mirror, suffered no sensible diminution of intensity by being made to pass through a glass plate, he took and placed his apparatus before the fire of a stove, and found, that at the distance of five or six feet the temperature of the reflected image at the focus, when the rays were allowed to meet there without impediment, was such as the hand could not bear; but that when the plate of glass was interposed there was no longer any sensible heat, although the image had lost none of its brilliancy. Whence he concluded that none[1], or certainly but a very small portion, of the heat of terrestrial fire passes through glass. About a century after Mariotte's time, the same experiment was repeated by Scheele, who, instead of imitating the cautious reserve of his predecessor, asserted that from the moment when the glass was interposed there was no longer any heat whatever at the focus of the mirror[2]. Pictet, however, corrected the mistake by means of the apparatus known by the name of conjugate mirrors. A very transparent square of glass was placed between a thermometer and the heat of a lighted candle concentrated by the apparatus; the mercury in some moments rose several degrees; there was a perceptible elevation of temperature also when the candle was removed and a small jar filled with boiling water put in its place[3]. Some years later Herschel undertook a very extensive series of experiments on the same subject. They are described in the volume of the Philosophical Transactions for 1800. The author employs no artifice to increase the action of the rays of heat, and contents himself with the direct measurement of their effect by placing the thermometer at a very short distance from the diaphanous body. But doubts were started as to the conclusions drawn from these different results. It was objected that part of the radiant heat was first stopped at the nearer surface of the glass, that it was gradually accumulated there and afterwards propagated from layer to layer, until it reached the further surface whence it began again to radiate on the thermometer. It was maintained even that nearly the whole of the effect was produced by this propagation. In short, some went so far as to deny altogether that the heat emitted by terrestrial bodies can be freely transmitted through any other diaphanous substance than atmospheric air. M. Prevost, by means of a very ingenious contrivance, demonstrated the erroneousness of this opinion. Having attached to the pipe of a fountain a spout consisting of two parallel plates, he obtained a strip of water about a quarter of a line in thickness. On one side of this he placed an air thermometer and on the other a lighted candle or a hot iron. The thermometer rose, almost always, some fraction of a degree[4].
Now it is quite evident that, in this case, a successive propagation through the several layers of the screen, which was in a state of perpetual change, could not take place. It was admitted, therefore, that other diaphanous media besides atmospheric air sometimes transmit the rays of heat as instantaneously as they always transmit those of light. M. Prevost's process could not however be applied to solid bodies. It was therefore impossible to determine, by means of it, whether caloric was immediately transmitted through screens of glass. Delaroche completely solved this problem by employing a method invented by Maycock[5]. The method consists in observing the thermometer as in the preceding cases; that is, when the caloric rays fall upon it after having passed through the plate of glass. We thus obtain a complex measure of the effects produced by immediate transmission and by that conducting power of the layers to which we have given the name of successive propagation. If we know the value of either of these, we have that of the other. Now it is easy to determine the influence of the conducting power by repeating the experiment after having blackened with Indian ink that surface of the plate which is turned towards the calorific source. In this case, the immediate radiation being intercepted, it is clear that the elevation of the temperature at the other side must be attributed only to the conducting power of the layers. Should the elevation be now found less than it was at first, it will be a decisive proof of immediate transmission. And such was the fact in almost all the experiments of Delaroche; I say almost all, because it was found that the quantity of heat freely transmitted varied with the temperatures of the source. For temperatures lower than that of boiling water it was nothing, and when an Argand lamp[6] was employed, it was found to be more than half of the whole quantity. No doubt can be raised as to the truth of this beautiful discovery of Delaroche; and yet the method which he has employed to measure the quantities of heat freely transmitted is by no means exact, especially in respect to high temperatures. In order to understand this seeming paradox two things are to be observed; 1st, the difference produced by change of surface between the two quantities of heat which penetrate the glass by reason of its conducting power; 2nd, the difference produced between those two quantities by the total or partial interception of the calorific rays. It is fully proved by the experiments of Leslie and others, that glass, when blackened with Indian ink, absorbs all the rays of heat, though, in its natural state, it reflects a certain number of them. The quantity of heat which penetrates the screen will therefore be greater in the former than in the latter case. However, as polished glass reflects but a very small portion of caloric rays, the error arising from a difference in the state of the surface will be reduced to a very inconsiderable quantity and may be safely disregarded. But the case is different when we examine the error produced by the total or partial interception of the caloric radiation. In some of the experiments of Delaroche one half, at least, of the incident rays immediately passed through the screen. Thus it was evident that it was the other only which was stopped at the first surface of the glass. The effect of conduction must therefore be limited to this latter half. 
But as the screen, when blackened, stops the whole radiation, it is then exposed to a heat twice as strong, and therefore exhibits a far greater effect of conduction. Hence it follows that when we deduct from the observation furnished by the transparent glass the observation furnished by the glass blackened, the result obtained will be lower than the true temperature of the rays transmitted freely. But the error will not be the same in all cases. Being of no account when boiling water is employed, it will increase in proportion as the temperature of the source is raised. The measures of the free radiations which suffer the greatest diminution will be those furnished by the highest temperatures.
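(To illustrate the source of the error with round numbers of our own choosing: suppose one hundred rays fall upon the screen and fifty pass freely through the transparent glass. The transparent glass then receives conducted heat from only the fifty rays stopped at its first surface, while the blackened glass receives it from the whole hundred. Deducting the blackened reading therefore removes twice the conduction actually present in the first measurement, and the inferred free transmission falls below its true value; the larger the freely transmitted share, as at the highest temperatures, the larger this deficit. When, as with boiling water, nothing passes freely, both readings measure the same conduction, and the error vanishes.)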
Radon transform on $\bbh^{k+1}$. Thus, the existence of $\tilde \H_k f$ is equivalent to the existence of $(\H f_v)(\z)$. The latter is characterized by Theorem \ref{hyptag31th}, which should be applied to $f_v$. To reformulate the conditions of that theorem in terms of $f$, we need the following \begin{lemma} The equality \be\label {hoyiii} \intl_{S^{n-k-1}}\!\! \!dv\!\intl_{\bbh^{k+1}}\!\! f_v (\eta)\,d\eta\!=\!2 \intl_{\bbh^n} \frac{f(x)}{|x'|^{n-k-1}}\,dx, \quad x'\!=\!(x_1, \ldots, x_{n-k}),\ee holds provided that either side of it is finite when $f$ is replaced by $|f|$. \end{lemma} \begin{proof} Let $\eta=\eta_{n-k} \, e_{n-k}+\tilde \eta$, $\tilde \eta=(\eta_{n-k+1}, \ldots, \eta_{n+1})$. Then $\tilde \gam_v \eta=v\, \eta_{n-k}+\tilde \eta$ and (\ref {hfoeehhh4}) yields \bea l.h.s &=& \intl_{S^{n-k-1}} dv \intl_{\bbh^{k+1}} f(v\, \eta_{n-k}+\tilde \eta)\,d\eta\nonumber\\ &=&\intl_{S^{n-k-1}} dv \intl_{-\infty}^\infty \cosh^k r\, dr \intl_{\bbh^{k}} f(v\,\sinh\, r +u\,\cosh\, r)\,du\nonumber\\ &=&2\intl_0^\infty \frac{d\nu (r)}{\sinh^{n-k-1} r} \intl_{S^{n-k-1}} dv \intl_{\bbh^{k}} f(v\,\sinh\, r +u\,\cosh\, r)\,du,\nonumber\eea $d\nu(r)=\sinh^{n-k-1}\, r\,\cosh^k\, r \, dr$. By (\ref{hfohhh4}), the result follows. \end{proof} \begin{theorem} Let $1\le k\le n-1$. The integral (\ref{hfos4609bu4}) is finite for almost all $(v,w)\in \tilde \bbh^n$ provided that \be\label{hyptag31f} \intl_{\bbh^{n}}|f(x)|^p \,\frac{dx}{|x'|^{n-k-1}} <\infty, \qquad 1 \le p < k/ (k-1).\ee \end{theorem} \begin{proof} If $f$ satisfies (\ref{hyptag31f}), then, by (\ref{hoyiii}), $f_v\in L^p (\bbh^{k+1})$. Hence, by Theorem \ref{hyptag31th} and (\ref{hoyyyru9}), $(\tilde \H_k f)(v,\tilde \gam_v \z)=(\H f_v)(\z)$ is finite for almost all $v\in S^{n-k-1}$ and $\z\in \overset {*}{\bbh}{}^{k+1}$. It follows that $(\tilde \H_k f)(v,w)$ is finite for almost all $v\in S^{n-k-1}$ and $w\in \overset {*}{\bbh}{}^{k+1}_v$. \end{proof} \begin{remark} {\rm The restriction $1 \le p < k/ (k-1)$ is sharp, as in Theorem \ref{hyptag31th}, and the bound $k/ (k-1)$ is smaller than $(n-1)/ (k-1)$ if $k<n-1$; cf. (\ref{hfohhh5fr}).} \end{remark} \subsection{Inversion formulas} To reconstruct an arbitrary function $f$ satisfying (\ref{hyptag31f}) from $\vp(v,w)= (\tilde \H_k f)(v,w)$, it suffices to invert the usual hyperbolic Radon transform $(\H f_v)(\z)$ from (\ref{hoopmvyyy}). Specifically, fix $v\in S^{n-k-1}$ and let $\vp_v(\z)=\vp(v,\tilde \gam_v \z)$. Using any known inversion formula for $\H$ (see, e.g., \cite{BR99a, He11, Ru02a, Ru02c}), we get \be\label {dddwbu71w} f_v(\eta)\equiv f (\tilde \gam_v \eta)=(\H^{-1} \vp_v)(\eta).\ee Since $f_v\in L^p (\bbh^{k+1})$, $1 \le p < k/ (k-1)$, it can be evaluated at almost all points of almost all hyperboloids $\bbh^{k+1}_v$. If, in addition to (\ref{hyptag31f}), $f$ is continuous, then, to find the value of $f$ at a point $x\in \bbh^n$, we regard $x$ as a column vector $x=(x_{1}, \ldots, x_{n+1})^T$ and set \[x'=(x_1, \ldots, x_{n-k})^T\in \bbr^{n-k},\qquad x''=(x_{n-k+1}, \ldots, x_{n+1})^T\in \bbr^{k+1},\] \be\label {dddwbu71} v=x'/|x'|\in S^{n-k-1}\subset\bbr^{n-k}, \ee \be\label {dddwbu72} \eta=( 0, \ldots, 0, |x'|, x'')^T\in \bbh^{k+1}\subset \bbr^{k+2}.\ee Then $x=\tilde \gam_v \eta$ and we get $f(x)= (\H^{-1} \vp_v)(\eta)$. \section{The Range of the Restricted $k$-plane Transform} \subsection{Definitions and the main result} We will be using the same notation as in Section \ref {222222}.
In the following $\bbz_+=\{0,1,2, \ldots\}$, $\bbz^n_+=\bbz_+ \times \cdots \times \bbz_+ $ ($n$ times). The Schwartz space $S (\rn)$ is defined in a standard way with the topology generated by the sequence of norms \[ ||f||_m= \sup\limits_{|\a|\le m}\, \sup\limits_{x} \,(1+|x|)^m |(\partial^\a f)(x)|, \qquad m=0,1,2, \ldots.\] The Fourier transform of $f\in S (\rn)$ has the form \be \label{ft}(Ff)(y) \equiv \hat f (y) = \intl_{\bbr^{ n}} f(x)\, e^{ i x \cdot y} \,dx. \ee The corresponding inverse Fourier transform will be denoted by $\check f$. \begin{definition} \label{iiuuyg5a} A function $g$ on the sphere $S^{k} \subset \bbr^{k+1}$ is called differentiable if the homogeneous function $\tilde g(x)= g(x/|x|)$ is differentiable in the usual sense on $\bbr^{k+1} \setminus \{0\}$. The derivatives of $g$ will be defined as restrictions to $S^{k}$ of the corresponding derivatives of $\tilde g(x)$: \be\label{iiuuyg} (\partial_\theta^\a g)(\theta)=(\partial^\a \tilde g)(x)\big|_ {x=\theta},\qquad \a \in \bbz_+^{k+1}, \quad \theta\in S^{k}. \ee \end{definition} \begin{definition} We denote by $S_e (\tilde Z_{n,k})$ the space of functions $\vp (\th, s; x'')$ on $ \tilde Z_{n,k}= S^k \times \bbr \times \bbr^{n-k-1}$, which are infinitely differentiable in $\theta$, $s$ and $ x''$, rapidly decreasing as $|s|+|x''| \to \infty$ together with all derivatives, and satisfy \be \label {65679z90swu}\vp (-\th, -s; x'') = \vp (\th, s; x'') \qquad \forall \; (\th, s; x'')\in \tilde Z_{n,k}.\ee The topology in $S_e (\tilde Z_{n,k})$ is defined by the sequence of norms \be\label{knnzwe35} ||\vp ||_m=\!\sup\limits_{|\mu|+j+|\gam|\le m} \,\sup\limits_{\theta,s, x''} \ \!(1\!+\!|s|\!+\!|x''|)^m |(\partial_\theta^\mu \partial^j_{s} \partial_{x''}^\gam \vp)(\th, s; x'')|. \ee The space $S_e (Z_{n})$ of rapidly decreasing even smooth functions $\tilde\vp (\th, s)$ on $Z_{n}= S^{n-1} \times \bbr$ is defined similarly. \end{definition} \begin{definition} \label{iiuuyg5a1kka2} Let $S_H (\tilde Z_{n,k})$ denote the subspace of all functions $\vp \in S_e (\tilde Z_{n,k})$ satisfying the {\bf moment condition}: {\it For every $m\in \bbz_+$ there exists a homogeneous polynomial \[P_{m} (\th, x'')=\sum\limits_{|\a|=m} c_\a (x'') \,\th ^\a\] with coefficients $c_\a (x'')$ in $S(\bbr^{n-k-1})$ such that} \be\label {7wer34} \intl_{\bbr} \vp (\th, s; x'')\, s^m\, ds =P_{m} (\th, x'').\ee We equip $S_H (\tilde Z_{n,k})$ with the induced topology of $S_e (\tilde Z_{n,k})$. \end{definition} The main result of this section is the following \begin{theorem} \label {657390sw} The restricted $k$-plane transform $\tilde R_k$ acts as an isomorphism from $S(\rn)$ onto $S_H (\tilde Z_{n,k})$. \end{theorem} \subsection{Auxiliary statements} \begin{lemma}${}$\hfill {\rm (i)} If $f\in C^k (\bbr^n)$, $t\in \bbr$, then for $|\a| \le k$ and $j\le k$, \be\label{aqqc} \partial^\a_x [f (tx/|x|)]=|x|^{-|\a|}\sum\limits_{|\gam |=1}^{|\a|} t^{|\gam|} \,h_{\a,\gam} (x/|x|)\, (\partial^\gam f)(tx/|x|),\ee \be\label{aqqc1} \frac{\partial^j}{\partial t^j}\, [f (tx/|x|)]=\sum\limits_{|\gam|=j} h_{\gam} (x/|x|)\, (\partial^\gam f)(tx/|x|), \ee \noindent where $h_{\a,\gam}$ and $ h_{\gam}$ are homogeneous polynomials independent of $f$. {\rm (ii)} If $g\in C^k (\bbr_+)$, $\bbr_+=(0,\infty)$, then for $1\le |\b| \le k$ and $x\neq 0$, \be\label{aqqc1hr} \partial^\b_x [g (|x|)] =\sum\limits_{k=1}^{|\b|} |x|^{k-|\b|} \,h_{\b,k} (x/|x|)\, g^{(k)} (|x|),\ee \noindent where $h_{\b,k}$ are homogeneous polynomials independent of $g$. \end {lemma} \begin{proof} We proceed by induction.
Let $|\a|=1$, that is, $\partial^\a_x =\partial/\partial x_j$ for some $j \in \{1, 2,\ldots , n\}$. Then \[\frac{\partial}{\partial x_j}\, [f (tx/|x|)]=t\sum\limits_{k=1}^n (\partial_k f)(tx/|x|)\, p_{j,k}(x),\] \[ p_{j,k}(x)=\frac{\partial}{\partial x_j} \left [\frac{x_k}{|x|}\right ]=\frac{1}{|x|}\left \{\begin{array} {ll} \displaystyle{-\frac{x_k x_j}{|x|^2}} & \mbox{if $ j\neq k$,}\\ \displaystyle{1-\frac{x_k^2}{|x|^2}} & \mbox{if $ j= k$.}\\ \end{array} \right.\] This gives (\ref{aqqc}) for $|\a|=1$. Now a routine calculation shows that if (\ref{aqqc}) holds for all $|\a|=\ell$, then it is true for $|\a|=\ell +1$. The proof of (\ref{aqqc1}) is easier. For $j=1$, \[\frac{\partial}{\partial t}\, [f (tx/|x|)]=\sum\limits_{k=1}^n (\partial_k f)(tx/|x|)\,\frac{x_k}{|x|}.\] The general case follows by iteration. The proof of (\ref{aqqc1hr}) is straightforward by induction. \end{proof} \begin{corollary}\label {iokl} Let $f\in S(\bbr^n)$, $\tilde f(\theta, t)= f(t\theta)$, where $t\in \bbr$, $\theta \in S^{n-1}$. Then for any $m\in \bbz_+$ there exist $N\in \bbz_+$ and a constant $ c_{m,N}$ independent of $f$ such that \bea \label{jiko} ||\tilde f||_m &\equiv& \sup\limits_{|\a|+j\le m} \sup\limits_{\theta,t} (1+|t|)^m |(\partial_\theta^\a \partial^j_t \tilde f)(\theta,t)|\nonumber\\ &\le& c_{m,N} ||f||_N \equiv c_{m,N}\,\sup\limits_{|\gam|\le N} \sup\limits_{y} \ \!(1+|y|)^N |(\partial^\gam f)(y)|.\nonumber\eea In other words, $f \to \tilde f$ is a continuous mapping from $S(\bbr^n)$ to $S_e(Z_n)$. \end{corollary} \begin{corollary}\label {iokl2} The map $F_1$, which assigns to a function $w(\theta, t) \in S_e(Z_n)$ its Fourier transform in the $t$-variable, is an automorphism of the space $S_e(Z_n)$. \end{corollary} \subsection{Proof of Theorem \ref{657390sw}} We split the proof into several steps. \begin{proposition} \label {657390sw1} If $f \in S(\rn)$, then $\tilde R_k f \in S_H (\tilde Z_{n,k})$ and the map $f \to \tilde R_k f$ is continuous. \end{proposition} \begin{proof} By (\ref{kkmm4539a1}) and (\ref{durtkkaz}), the function \be \label{k4539a1az} \vp (\th, s; x'')=\intl_{\th^\perp \cap \bbr^{k+1}} f(s\th +u, x'')\, d_\th u= (Rf_{x''})(\th, s), \ee is the usual hyperplane Radon transform in $\bbr^{k+1}$ of $f_{x''} (x')= f(x',x'')$. Hence, (\ref{7wer34}) follows from the equalities \[ \intl_{\bbr} \vp (\th, s; x'')\, s^m\, ds = \intl_{\bbr}(Rf_{x''})(\th, s)\, s^m\, ds= \intl_{\bbr^{k+1}} f(x',x'') (x' \cdot \theta)^m\, dx'.\] The evenness property (\ref{65679z90swu}) is a consequence of (\ref{k4539a1az}). Furthermore, by the Projection-Slice Theorem, \[ [ \vp (\th, \cdot; x'')]^\wedge (\eta)=[(Rf_{x''})(\th, \cdot)]^\wedge (\eta)= [f(\cdot,x'')]^\wedge (\eta \th).\] Hence, $A: f \to \vp\!=\!\tilde R_k f$ is a composition of three mappings, specifically, $A\!=\!A_3 A_2 A_1$, where \[ \begin{array} {llll} A_1 : \;f(x) &\to & [f(\cdot,x'')]^\wedge (\xi')&\equiv g(\xi', x'');\\ A_2 : \; g(\xi', x'') &\to & g(\th \eta, x'')&\equiv w(\th, \eta; x'');\\ A_3 : \; w(\th, \eta; x'') &\to & [w(\th, \cdot; x'')]^\vee (s) &\equiv \vp (\th, s; x'').\end{array} \] The continuity of the operators \[ A_1: S(\rn) \! \to \!S(\bbr^{k+1} \times \bbr^{n-k-1}), \qquad A_3:\; S_e (\tilde Z_{n,k}) \to S_e (\tilde Z_{n,k})\] is a consequence of the isomorphism property of the Fourier transform. The continuity of $A_2$ from $S(\bbr^{k+1} \times \bbr^{n-k-1})$ to $S_e (\tilde Z_{n,k})$ follows from Corollary \ref{iokl} applied in the $\xi'$-variable. This gives the result. \end{proof}
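As a brief aside, the Projection-Slice Theorem just used is easy to confirm numerically. The following sketch is our own illustration; NumPy's discrete Fourier conventions differ from (\ref{ft}) by sign and normalisation, which does not affect the identity being checked. It verifies, in $\bbr^2$ and for a projection along a coordinate axis, that the one-dimensional Fourier transform of the projection equals the corresponding slice of the two-dimensional Fourier transform.
\begin{verbatim}
import numpy as np

# Projection-slice check in R^2 for theta = e_1:
# FT_1d[ p(s) ] with p(s) = sum_y f(s, y) equals the slice F(k, 0)
# of the 2-D Fourier transform of f.
n = 64
x = np.linspace(-4, 4, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + 2 * Y**2))        # smooth, rapidly decreasing test function

proj = f.sum(axis=1)                   # discrete projection onto the x-axis
slice_1d = np.fft.fft(proj)            # 1-D FT of the projection
full_2d = np.fft.fft2(f)[:, 0]         # k_y = 0 slice of the 2-D FT

assert np.allclose(slice_1d, full_2d)  # identical up to round-off
\end{verbatim}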
The next proposition is the most technical. \begin{proposition} \label {mnubhrkk} If $ \vp\in S_H (\tilde Z_{n,k})$, then the function \be\label {pp12vdkkk} \psi (x)\equiv \psi(x', x'')=\intl_\bbr \vp (x'/|x'|, s; x'') \, e^{is|x'|}\, ds \ee belongs to $S(\rn)$ and the map $ \vp \to \psi$ is continuous. \end{proposition} \begin{proof} We have to show that for every $m\in \bbz_+$ there exist $M=M(m)\in \bbz_+$ and a constant $C_m>0$ independent of $\vp$ such that $||\psi||_m \le C_m \, ||\vp||_M$.
\section{Introduction} The classification of nilpotent Lie algebras is a classical problem. Several classifications of nilpotent Lie algebras of dimension at most $ 7 $ over various ground fields are available in the literature (see \cite{cic, Gr2, Gr}). It is not easy to classify nilpotent Lie algebras of arbitrary dimension. Hence we are interested in classifying nilpotent Lie algebras by focusing on invariants other than the dimension. For a given Lie algebra $ L $ with $ \dim L^2=1, $ the structure of $ L $ is given in \cite{ni54}. When $ \dim L^2=2, $ we gave the structure of $ L $ for class $ 3 $, and with some restrictions for class $ 2 $, in \cite{ni41}. The purpose of this paper is to describe a classification of all nilpotent Lie algebras of class $ 4 $ with the derived subalgebra of dimension $ 3. $ Moreover, within this class, we determine which algebras are capable. \section{Preliminaries} This section is devoted to giving some elementary and known results that will be needed in the subsequent investigations. All Lie algebras in this paper are finite dimensional over an arbitrary field. First we recall the concept of a central product of two Lie algebras $A$ and $B.$ \begin{defn}\label{cent} A Lie algebra $L$ is a central product of $A$ and $B,$ if $ L=A+B,$ where $A$ and $B$ are ideals of $ L $ such that $ [A,B]=0$ and $A\cap B\subseteq Z(L).$ We denote the central product of two Lie algebras $A$ and $B$ by $A\dotplus B.$ \end{defn} The following lemma shows that the Heisenberg Lie algebras are in fact central products of some of their ideals. \begin{lem}\cite[Lemma 3.3]{pair}\label{fr} Let $ L $ be a Heisenberg Lie algebra of dimension $2m+1.$ Then $ L $ is a central product of its ideals $B_j$ for all $ j, $ $1\leq j\leq m$ such that each $B_j$ is the Heisenberg Lie algebra of dimension $3.$ \end{lem} A Lie algebra $L$ is called capable provided that $L \cong H/Z(H)$ for some Lie algebra $H.$ The notion of the epicenter $Z^*(L)$ for a Lie algebra $L$ was defined in \cite{alam}. It is shown that $L$ is capable if and only if $Z^*(L) = 0.$ Another notion related to capability is the concept of the exterior square of Lie algebras, $L \wedge L,$ which was introduced in \cite{el}. Our approach is based on the concept of the exterior center $Z^{\wedge}(L),$ the set of all elements $l$ of $L$ for which $l\wedge l' = 0_{L\wedge L}$ for all $l' \in L.$ Niroomand et al. in \cite{ni3} showed that $Z^{\wedge}(L) = Z^*(L)$ for any finite dimensional Lie algebra $L.$\\ It is not an easy matter to determine the capability of a central product of Lie algebras in general, but the next result gives the answer to this question in a particular case. \begin{prop}\label{Hi}\cite[Proposition 2.2]{ni4} Let $L$ be a Lie algebra such that $L=A\dotplus B$ with $ A^2\cap B^2\neq 0.$ Then $ A^2\cap B^2\subseteq Z^{\wedge}(L)$. Moreover, $ L $ is non-capable. \end{prop} Let $ cl(L)$ denote the nilpotency class of a Lie algebra $L$.
The following theorem gives the classification of all capable nilpotent Lie algebras of class $3$ with the derived subalgebra of dimension $2.$ \begin{thm}\cite[Theorem 5.3]{ni41}\label{26117} Let $ L $ be an $n$-dimensional Lie algebra such that $cl(L)=3$ and $\dim L^2=2.$ Then $L $ is capable if and only if $L\cong L_{4,3}\oplus A(n-4) $ or $L\cong L_{5,5}\oplus A(n-5).$ \end{thm} From \cite{Gr}, the only Lie algebra of maximal class of dimension $4$ is isomorphic to \[L_{4,3}=\langle x_1,\ldots,x_4|[x_1, x_2] = x_3, [x_1, x_3] = x_4\rangle,\] and there are two Lie algebras of maximal class of dimension $5$, isomorphic to \[L_{5,6}=\langle x_1,\ldots,x_5|[x_1, x_2] = x_3, [x_1, x_3] = x_4, [x_1, x_4] = x_5, [x_2, x_3] = x_5\rangle\] and \[L_{5,7}=\langle x_1,\ldots,x_5|[x_1, x_2] = x_3, [x_1, x_3] = x_4, [x_1, x_4] = x_5\rangle,\] respectively. We say a Lie algebra $L$ is a semidirect sum of an ideal $I$ by a subalgebra $K$ if $L=I+K$ and $ I\cap K=0. $ The semidirect sum of an ideal $I$ by a subalgebra $K$ is denoted by $K\ltimes I.$\newline \begin{lem}\cite[Lemma 4.1]{ni41}\label{rr11} Let $L $ be a $ 5$-dimensional nilpotent stem Lie algebra of class $ 3 $ and $ \dim L^2=2. $ Then \[ L\cong L_{5,5}=\langle x_1,\ldots,x_5| [x_1, x_2] = x_3, [x_1, x_3] = x_5, [x_2, x_4] = x_5\rangle.\] Moreover, $L_{5,5}=I\rtimes \langle x_4\rangle $ in which \[ I=\langle x_1,x_2,x_3,x_5| [x_1, x_2] = x_3, [x_1, x_3] = x_5\rangle\cong L_{4,3},~\text{and}~[I, \langle x_4\rangle]=\langle x_5\rangle=Z(L_{5,5}).\] \end{lem} \begin{lem}\label{rr112} Let $L $ be a $ 6$-dimensional nilpotent stem Lie algebra of class $ 4 $ and $ \dim L^2=3. $ Then $L $ is isomorphic to one of the Lie algebras listed below. \begin{itemize} \item[$(1)$] $ L_{6,11}=\langle x_1,\ldots,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4, [x_1, x_4] = [x_2, x_3] =[x_2, x_5]= x_6\rangle =I_1\rtimes \langle x_5\rangle, $ in which $ I_1=\langle x_1,x_2,x_3,x_4,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4,[x_1, x_4] = [x_2, x_3] = x_6\rangle\cong L_{5,6}$ and $[I_1, \langle x_5\rangle]=\langle x_6\rangle=Z(I_1).$ \item[$(2)$] $ L_{6,12}=\langle x_1,\ldots,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4, [x_1, x_4] =[x_2, x_5]=x_6\rangle =I_2\rtimes \langle x_5\rangle, $ in which $ I_2=\langle x_1,x_2,x_3,x_4,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4,[x_1, x_4] = x_6\rangle\cong L_{5,7}$ and $ [I_2,\langle x_5\rangle]=\langle x_6\rangle =Z(I_2). $ \item[$(3)$] $ L_{6,13}=\langle x_1,\ldots,x_6| [x_1, x_2] = x_3, [x_1, x_3] = [x_2, x_4]=x_5, [x_1,x_5]=[x_3, x_4]=x_6\rangle =I_3\rtimes \langle x_4\rangle, $ in which $ I_3=\langle x_1,x_2,x_3,x_5,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_5,[x_1, x_5] = x_6\rangle\cong L_{5,7}$ and $ [I_3,\langle x_4\rangle]=\langle x_5,x_6\rangle.$ \end{itemize} \end{lem} \begin{proof} By looking at the classification of nilpotent Lie algebras of dimension at most $ 6 $ in \cite{cic,Gr}, we get $L\cong L_{6,11},$ $L\cong L_{6,12}$ or $L\cong L_{6,13}.$ Let $L\cong L_{6,11}.$ It is easy to check that $ L_{6,11}=I_1\rtimes \langle x_5\rangle, $ in which $ I_1=\langle x_1,x_2,x_3,x_4,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4,[x_1, x_4] = [x_2, x_3] = x_6\rangle\cong L_{5,6}$ and $[I_1, \langle x_5\rangle]=\langle x_6\rangle=Z(I_1).$ Similarly, we can see that $ L_{6,12}=\langle x_1,\ldots,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4, [x_1, x_4] =[x_2, x_5]=x_6\rangle =I_2\rtimes \langle x_5\rangle, $ in which $ I_2=\langle x_1,x_2,x_3,x_4,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4,[x_1, x_4] = x_6\rangle\cong L_{5,7}$ and $ [I_2,\langle x_5\rangle]=\langle x_6\rangle=Z(I_2). $ Now, let $L\cong L_{6,13}.$ Clearly $Z(L) = \langle x_6\rangle$ and $L_{6,13} = I_3+\langle x_4\rangle,$ where $ I_3=\langle x_1,x_2,x_3,x_5,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_5,[x_1, x_5] = x_6\rangle\cong L_{5,7}$ and $ [I_3,\langle x_4\rangle]=\langle x_5,x_6\rangle,$ as required. \end{proof}
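Presentations like these can be sanity-checked mechanically from their structure constants. The following minimal sketch (our own illustration, not part of the classifications in \cite{cic,Gr}) computes the dimensions of the lower central series of $L_{5,7}$ from its defining brackets, confirming $\dim L^2=3$ and $cl(L)=4$.
\begin{verbatim}
import numpy as np

# Lower central series of L_{5,7} from its defining brackets
# [x1,x2]=x3, [x1,x3]=x4, [x1,x4]=x5 (0-based indices below).
DIM = 5
BRK = {(0, 1): 2, (0, 2): 3, (0, 3): 4}    # nonzero structure constants, all +1

def bracket(u, v):
    w = np.zeros(DIM)
    for (i, j), k in BRK.items():
        w[k] += u[i] * v[j] - u[j] * v[i]  # antisymmetry built in
    return w

def lcs_dims():
    basis = list(np.eye(DIM))
    span, dims = basis, [DIM]              # spanning set of L^1 = L
    while dims[-1] > 0:
        rows = [bracket(e, v) for e in basis for v in span]
        dims.append(int(np.linalg.matrix_rank(np.array(rows))))
        span = rows                        # spanning set of the next term
    return dims

print(lcs_dims())  # [5, 3, 2, 1, 0]: dim L^2 = 3 and L^5 = 0, so cl(L) = 4
\end{verbatim}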
$ \end{itemize} \end{lem} \begin{proof} By looking at the classification of nilpotent Lie algebras of dimension at most $ 6 $ in \cite{cic,Gr}, we get $L\cong L_{6,11},$ $L\cong L_{6,12}$ or $L\cong L_{6,13}.$ Let $L\cong L_{6,11}.$ It is easy to check that $ L_{6,11}=I_1\rtimes \langle x_5\rangle, $ in which $ I_1=\langle x_1,x_2,x_3,x_4,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4,[x_1, x_4] = [x_2, x_3] = x_6\rangle\cong L_{5,6}$ and $[I, \langle x_5\rangle]=\langle x_6\rangle=Z(L_{5,6}).$ Similarly, we can see that $ L_{6,12}=\langle x_1,\ldots,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4, [x_1, x_4] =[x_2, x_5]=x_6\rangle =I_2\rtimes \langle x_5\rangle, $ in which $ I_2=\langle x_1,x_2,x_3,x_4,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_4,[x_1, x_4] = x_6\rangle\cong L_{5,7}$ and $ [I_2,\langle x_5\rangle]=\langle x_6\rangle=Z(L_{5,7}). $ Now, let $L\cong L_{6,13}.$ Clearly $Z(L) = \langle x_6\rangle$ and $L_{6,13} = I_3+\langle x_4\rangle,$ where $ I_3=\langle x_1,x_2,x_3,x_5,x_6| [x_1, x_2] = x_3, [x_1, x_3] = x_5,[x_1, x_5] = x_6\rangle\cong L_{5,7}$ and $ [I_3,\langle x_4\rangle]=\langle x_5,x_6\rangle,$ as required. \end{proof} We need the following lemma. \begin{lem}\label{z}\cite[Lemma 1]{zac} Let $L$ be a nilpotent Lie algebra and $H$ be a subalgebra of $L$ such that $L^2 = H^2 + L^3.$ Then $L^i = H^i$ for all $i \geq 2.$ Moreover, $H$ is an ideal of $L.$ \end{lem} \section{Main results} We are going to give the structure of all nilpotent Lie algebras of class $ 4$ with the derived subalgebra of dimension $ 3. $ Moreover, we determine which one of these are capable. The next two results which are stated for Lie algebras have a group theoretical reason for $p$-groups in \cite[Lemma 2.3 and Theorem 2.4]{bl}. Here, we give a proof for them. \begin{lem}\label{12} Let $ L $ be an $n$-dimensional nilpotent Lie algebra of class $ c $ such that $ \dim L^2=c-1 $ and $ I $ be an ideal of dimension $i$ $( 0\leq i\leq c-1)$ contained in $ L^2 .$ Then $ I=L^{c-i+1}.$ \end{lem} \begin{proof} Clearly, $ \dim L^j = c-j+1,$ where $ 1\leq j \leq c $. We proceed by induction on $ c-i+1 $. If $ i=c-1, $ the result follows easily. Let $ c-i>1$ and $ M/I $ be an ideal of dimension $ 1 $ such that $ M/I \subseteq (L/I)^2 \cap Z(L/I).$ So $ M $ is an $( i+1)$-dimensional ideal of $ L $ such that $ I\subsetneqq M\subseteq L^2. $ By using the induction hypothesis, $ M=L^{c-i }.$ Since $M/I\subseteq Z( L/I),$ we have $L^{c-i+1}= [L,L^{c-i}]=[L,M]\subseteq I.$ Now, both $ I $ and $ L^{c-i+1}
\section{Introduction} Vision is an important sensory modality for humans. Many activities of daily living (ADLs), such as cooking and eating, can be difficult without visual support. Jones et al. \cite{jones2019analysis} revealed that people with visual impairments tend to have poor nutritional status, which is often linked to problems with buying, preparing, and eating healthy food. People with visual impairments may have an aversion to cooking due to difficulty accessing visual information and cues during the cooking process \cite{bilyk2009food,kostyra2017food}. As a result, people with visual impairments more frequently eat out at restaurants or prepare frozen food that may be calorie-rich. According to the aforementioned Canadian study \cite{bilyk2009food}, eight out of nine participants stated they “disliked or hated cooking” because of the time it takes to cook without vision. Christine Ha, the first blind contestant of MasterChef, won the third season of the show in 2012 and described the importance of cooking to people with visual impairments \cite{Christin65:online,TheBlind26:online}. To assist people with visual impairments in cooking independently, blind communities have released training guidelines \cite{SafeCook91:online}, and cooking-related assistive technologies are commercially available \cite{HowDoesa82:online}. For example, people with visual impairments can use voice commands to set timers or use a speaking kitchen thermometer to check the temperature of a steak. However, little research has explored the cooking practices of people with visual impairments and how they leverage different assistive devices while cooking. Furthermore, there has been little to guide HCI researchers on which stages or steps in the cooking process may benefit most from technological support. In our research, we explore the following research questions: \begin{itemize} \item RQ1: What are current cooking approaches and techniques employed by people with visual impairments? \item RQ2: What are the key challenges, concerns, and risks encountered while preparing meals? \item RQ3: What are potential opportunities for assistive technologies to support people with visual impairments in cooking independently? \end{itemize} To first understand the current cooking experiences of people with visual impairments (RQ1), we conducted a content analysis of 122 YouTube videos that feature visually impaired individuals preparing meals. We describe 12 different activities essential to cooking that were identified in the video analysis. Based on the findings from the video analysis, we then conducted semi-structured interviews with 12 visually impaired people who have experience cooking to better understand RQ2 and RQ3. The interview findings further illuminate challenges encountered before, during, and after cooking, including: utilizing tools, information access, touching and feeling, safety and consequence, precision and ambiguity, organizing and tracking, item and quality inspection, and collaborative cooking and communication. We then discuss potential opportunities to support people with visual impairments while cooking (e.g., zero-touch interactions for cooking, status tracking and safety monitoring, and collaborative cooking).
\section{Background and Related Work} \subsection{Eating and Cooking for People with Visual Impairments} The loss of vision affects people with visual impairments' Activities of Daily Living (e.g., eating and mobility) and Instrumental Activities of Daily Living (e.g., preparing and making food) \cite{bhowmick2017insight}. Jones et al. \cite{jones2019analysis} conducted a survey study with 101 visually impaired people and found that 65\% of the participants stated that their visual impairments made cooking difficult. Due to this difficulty, Bilyk et al. \cite{bilyk2009food} found that people with visual impairments tend to eat out or rely on prepared food, which affects healthy eating behaviors. Kostyra et al. \cite{kostyra2017food} further suggested that assistive technologies, such as equipment with a voice editor, devices informing about the cooking process, and sensors supporting pouring fluids, may enable more efficient meal preparation for people with visual impairments. It is therefore important to explore the existing cooking practices and challenges of people with visual impairments and to understand which specific cooking processes or steps assistive technologies may best support. \subsection{Enabling Technology for People with Visual Impairments} Cooking usually requires people with visual impairments to interact with different interfaces or devices. The traditional way to enable people with visual impairments to interact with electronic appliances is to add tactile markers to them. Beyond this traditional method, prior research has also explored using computer vision \cite{guo2016vizlens,fusco2014using,morris2006clearspeech,tekin2011real}, voice interactions \cite{abdolrahmani2018siri,branham2019reading}, and 3D-printed tactile markings \cite{guo2017facade,he2017tactile} to better support people with visual impairments in interacting with different interfaces. For example, VizLens leveraged computer vision and crowdsourcing to enable people with visual impairments to interact with different interfaces, such as a microwave oven \cite{guo2016vizlens}. Guo et al. \cite{guo2017facade} further introduced a crowdsourced fabrication pipeline that helps blind people independently make physical interfaces accessible by adding 3D-printed tactile button augmentations that overlay the original panel. Beyond making appliance interfaces accessible, prior research has also explored various approaches to improve the accessibility of mobile devices for people with visual impairments, which might help with the cooking process, such as gestural interactions (e.g., \cite{kane2008slide,azenkot2012passchords,li2017braillesketch}) and screen readers (e.g., \cite{rodrigues2015getting,Accessib51:online,leporini2012interacting,Getstart6:online}). For example, Talkback \cite{Getstart6:online} and VoiceOver \cite{Accessib51:online} enable people with visual impairments to explore interface elements on mobile devices through audio feedback. The feasibility of using mobile devices further allows people with visual impairments to interact with other IoT devices \cite{zhou2017iot,saquib2017blindar}. However, it remains unknown how people with visual impairments interact with mobile devices during cooking, how they use mobile devices to operate different kitchen appliances, and what the associated challenges and barriers are.
\subsection{Technology for Cooking} Regarding cooking processes, prior research has explored procedures for learning to cook \cite{kato2013interactive} and different cooking techniques \cite{kusu2017calculating,hamada2005cooking}. For example, Kato and Hasegawa \cite{kato2013interactive} introduced an interactive sauteed-cooking simulator that can visualize different cooking states (e.g., temperature changes, browning from burns). This system could help users better develop their cooking skills, such as how to cook medium-rare meat \cite{kato2013interactive}. Kusu et al. \cite{kusu2017calculating} further proposed a method to calculate a recipe's difficulty level during search and to recommend recipes that match the user's cooking skills. Although prior research has explored how to help people with cooking activities, there is a lack of research on what cooking-related learning procedures people with visual impairments have adopted and what challenges exist during these learning processes. In our work, we show cooking practices across 12 different cooking procedures through a YouTube video analysis and uncover eight themes of cooking challenges through interviews with people with visual impairments. \section{YouTube Video Analysis: Cooking Practices for People with Visual Impairments} To understand existing cooking practices and potential risks for people with visual impairments, we conducted a YouTube video analysis---searching, filtering, and analyzing YouTube videos related to cooking practices of people with visual impairments---inspired by prior research leveraging the richness of YouTube video content to understand accessibility needs \cite{anthony2013analyzing}. Our video analysis consisted of two main steps: 1) searching for YouTube videos related to cooking practices for people with visual impairments; and 2) analysis and coding procedures. \begin{table}[ht] \caption{Searching Keywords} \centering \begin{tabular}{|p{8cm}|} \hline \textbf{Searching Keywords} \\ \hline Blind Cooking, Blind Person Cooking, Blind Chef, Legally Blind Cooking, Blind Cooking Food, Blind Cooking Dinner, Blind in the Kitchen, Visually Impaired Cooking, Visually Impaired Person Cooking, Visually Impaired Chef, Visual Impairment Cooking\\ \hline \end{tabular} \label{table:searchterms} \end{table} \subsection{Search Protocol} In the video search process, we looked for videos focused on cooking practices for people with visual impairments. To search for relevant videos, three researchers independently combined visual-impairment-related keywords (e.g., blind, visually impaired, visual impairment) and cooking-related keywords (e.g., cook, cooking, chef, kitchen). To develop these, the researchers first started with basic searches (e.g., blind cooking) and gradually included other keyword combinations from candidate video titles or descriptions. Because each search may generate hundreds of results, we followed the same approach as Komkaite et al. \cite{komkaite2019underneath}, stopping our search for videos once a whole page of results was irrelevant. In total, we initially created a dataset of 136 relevant videos found by March 28th, 2021. We then filtered out videos
colour evolution for different functions of the SFR (as a function of time $t$) with the metallicity fixed at $Z=0.02$. Synthetic stellar populations with a constant SFR or a power-law SFR $\sim t^{-2}$ never cross the GV within a cosmological time. Only functions with a strong decline, such as the sudden drop of a constant SFR to zero, a starburst, or an SFR $\sim \exp(-t)$, cross the GV, indicating that a quick, abrupt change in the SFR, i.e. some sort of SF quenching, is necessary to cross the GV. We note that the normalization of the SF histories does not change the U-R colour, but it has an effect on the GV boundaries, as those are a function of stellar mass. Thus, strictly speaking, each of the SF histories presented has its own GV boundaries. However, the different boundaries are very similar and would be barely distinguishable in the plot; therefore, we only plot the boundaries of the SFR $\sim \exp(-t\,/\,1\,\mathrm{Gyr})$ history (orange lines). \begin{figure} \includegraphics[width=\columnwidth]{plots/lfile_sfr.png} \caption{U-R colour versus time for different star formation histories: SFR~$\sim$ const (brown), SFR~$\sim \exp(-t/1\,\mathrm{Gyr})$ (orange), SFR~$\sim t^{-2}$ (green), starburst (i.e. the SFR is a delta function) at $t=4\,\mathrm{Gyr}$ (pink), constant SFR dropping to zero at $t=5\,\mathrm{Gyr}$ (purple). The red and blue line are the upper and lower boundary of the GV for the orange SF history. The metallicity is $Z=0.02$ for all SF histories.} \label{fig:lfile_sfr} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{plots/lfile_tgreen.png} \caption{GV crossing time for the synthetic stellar populations, for a constant SFR that changes to $\mathrm{SFR}\sim \exp(-t/T_{\mathrm{SFQ}})$ at $t_{\mathrm{q}}$, as a function of: Left panel: SF quenching timescale $T_{\mathrm{SFQ}}$, middle panel: quenching time $t_{\mathrm{q}}$, right panel: metallicity $Z$.} \label{fig:lfile_tgreen} \end{figure*} Fig.~\ref{fig:lfile_tgreen} shows the GV crossing time for an SFR that is first constant and then, at a {\it quenching time} $t_{\mathrm{q}}$, changes to an exponentially declining SFR with {\it SF quenching timescale} ($e$-folding timescale) $T_{\mathrm{SFQ}}$ (orange line in Fig.~\ref{fig:lfile_sfr}). The left panel shows the crossing time for different SF quenching timescales, the middle panel for different quenching times, and the right panel for different metallicities. Quenching time and metallicity have very little effect on the GV crossing time and give values of around 2~Gyr. Only the SF quenching timescale has a large effect on the crossing time, showing variations from 0.2 to more than 12 Gyr. Furthermore, for crossing times up to 2~Gyr, the slope of this function is about two, meaning the crossing time is about twice the SF quenching timescale (a back-of-envelope illustration of this scaling is sketched below). In previous work, \citet{2014_Schawinski_Urry_Simmons} explored exponentially declining SFRs with different quenching timescales, which lead galaxies to take different paths through the GV, with longer quenching timescales leading to longer crossing times. \citet{2015_Smethurst_Lintott_Simmmons} also explored such SFRs and showed that early- and late-type galaxies take different pathways through the GV.
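A back-of-envelope rationalisation of this slope (our own toy model, which replaces the colour-based GV definition by a fixed band in specific SFR and neglects stellar-mass growth): for $\mathrm{SFR}\propto\exp(-t/T_{\mathrm{SFQ}})$ the sSFR declines by the same exponential factor, so crossing a band spanning a factor $R$ takes $T_{\mathrm{SFQ}}\ln R$; a band of width $R\approx e^2$ then reproduces a slope of two.
\begin{verbatim}
import numpy as np

# Toy model: time for an exponentially declining SFR to traverse a fixed
# band in specific SFR. The band width (a factor of e^2 here) is an
# illustrative assumption, not a fit to the GV boundaries.
band_ratio = np.exp(2.0)
for T_sfq in [0.1, 0.5, 1.0, 2.0]:        # e-folding timescales in Gyr
    t_cross = T_sfq * np.log(band_ratio)  # = 2 * T_sfq
    print(f"T_SFQ = {T_sfq:.1f} Gyr -> crossing time = {t_cross:.1f} Gyr")
\end{verbatim}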
\subsection{Simulated galaxies}\label{sec:sims} After looking at SSPs, we now investigate the colour evolution and GV crossing times of the NIHAO galaxies and compare them with values from the SSPs introduced in the last subsection. The GV crossing time of a galaxy consists of three contributions: (i) the age and metallicity of the galaxy's stellar population when it enters the GV (called the {\it simplified model}), (ii) star formation while in the GV (called the {\it SF-only model}), and (iii) ex-situ stars, i.e. stars that form outside and then enter the galaxy or stars that leave the galaxy (although the latter contribution is very small). Thus the {\it fiducial crossing time} as seen in the simulations can be expressed as \begin{equation} \tau_{\mathrm{fid}} = \tau_{\mathrm{simp}} + \Delta \tau_{\mathrm{SFON}} + \Delta \tau_{\mathrm{XS}}, \label{eq:tgreen} \end{equation} where $\tau_{\mathrm{simp}}$ is the {\it simplified crossing time}, $\Delta \tau_{\mathrm{SFON}}$ is the {\it SF-only overtime}, and $\Delta \tau_{\mathrm{XS}}$ is the {\it ex-situ overtime}. We also define the {\it SF-only crossing time} $\tau_{\rmn{SFON}}=\tau_{\rmn{simp}}+\Delta \tau_{\rmn{SFON}}$. In this subsection, we investigate these three contributions independently of each other: contribution (i) in subsection \ref{sec:tgreen_simple}, (ii) in subsection \ref{sec:sfon}, and (iii) in subsection \ref{sec:exsitu}. In subsection \ref{sec:combined}, we investigate all contributions combined. \subsubsection{Simplified model}\label{sec:tgreen_simple} To investigate the sole effect of stellar age and metallicity on the GV crossing time, we make some simplifications: after the galaxy has entered the GV, we disregard all stars that subsequently form within the galaxy and all stars that enter or leave the galaxy. The change in colour is therefore driven only by the ageing of the stars already present, and not by new stars forming or other stellar populations merging into or leaving the galaxy. Examples of this evolution are shown in Fig.~\ref{fig:mag_time_tracks_comp2} as green lines: for example, galaxy g2.79e12 shows only a barely noticeable difference between the simplified (green) and fiducial (purple) colour evolution, leading to a very small difference of 30~Myr in crossing time. For galaxy g6.70e12, the difference is much larger, and the fiducial crossing time of 1.7\,Gyr is reduced to 150\,Myr in the simplified model. Galaxy g1.33e13 is the only one where the simplified crossing time is larger than the fiducial crossing time, caused by a merger that significantly reduces the fiducial crossing time. \begin{figure*} \includegraphics[width=\textwidth]{plots/mag_time_tracks_comp2.png} \caption{U-R colour (purple) and SFR (orange) versus time for six galaxies. The red and blue line are the upper and lower boundary, respectively, of the GV. We also show the colour evolution for the simplified model (green, section \ref{sec:sims}) and the model only considering star formation (but no merger) in the GV (brown, section \ref{sec:sfon}). We furthermore show a linear fit (pink) to the logarithmic SFR in the GV.} \label{fig:mag_time_tracks_comp2} \end{figure*} Fig.~\ref{fig:tg_simp} shows the simplified crossing time as a function of stellar mass (left panel), mean age of the stellar population (middle panel), and mean stellar metallicity (right panel), with the NIHAO simulations as blue dots and the SSPs as orange lines. The SSPs have a constant SFR of $60\,\mathrm{M}_{\sun}\,\mathrm{yr}^{-1}$ until $t_{\rm q} = 5\,\mathrm{Gyr}$ and are then quenched to zero.
For the crossing time as a function of stellar mass (left panel), we vary the constant SFR between 1 and 300\,$\mathrm{M}_{\sun}\,\mathrm{yr}^{-1}$, and for the crossing time as a function of age, we vary the quenching time $t_{\mathrm{q}}$ between 1 and 13\,Gyr. \begin{figure*} \includegraphics[width=\textwidth]{plots/tg_simp.png} \caption{GV crossing time (simplified) as a function of stellar mass (left panel), mean stellar age (middle panel), and the mean stellar metallicity (right panel). Blue circles: NIHAO simulations, orange lines: SSP.} \label{fig:tg_simp} \end{figure*} With stellar mass, the crossing time increases only very slightly for the SSPs; for the NIHAO simulations, this effect is washed out by the scatter of the relation. With mean stellar age, the crossing time for the SSPs first decreases slightly and then increases. For the NIHAO simulations, the slight decrease is reproduced; the increase, however, is followed by only one galaxy, with a crossing time of about 1.2\,Gyr. Otherwise, for large mean stellar ages, the NIHAO simulations give a constant crossing time of around 200\,Myr. This discrepancy between the SSPs and the NIHAO simulations arises because a constant SFR quenched at $t_{\mathrm{q}}$ is a poor approximation for NIHAO galaxies with large mean stellar ages. With metallicity, the crossing time for the SSPs generally decreases. The NIHAO galaxies mostly follow the slight decrease and give a crossing time of around 200\,Myr. Generally, all three quantities (stellar mass, mean stellar age, and mean stellar metallicity) have a negligible effect on the crossing time. The average crossing time is 180\,Myr. \subsubsection{SF-only model}\label{sec:sfon} Star formation within the GV has the effect of extending the time a galaxy spends there. In this subsection, we take a closer look at the SF-only model, whose crossing time $\tau_{\rmn{SFON}}$ considers only the ageing of the stellar population as it exists at the time the galaxy enters the GV and any subsequent star formation that occurs while traversing the GV, but not stars merging into or out of the galaxy. Examples of this evolution are shown in Fig.~\ref{fig:mag_time_tracks_comp2} as brown lines: they coincide with the fiducial colour evolution (purple line) for most galaxies; only a few show significant deviations due to ex-situ stars, which we will elaborate on in section
result is, $$ \frac{n_F(n_F-1)}{2} \times 2 + n_F \times 4 \times \frac{1}{4} = n_F^2. $$ One factor of $n_F$ is absorbed into the three parton matrix elements $\Big |\widehat{{\cal S}}^3_\mu V^\mu\Big |^2$, while the other appears as an explicit factor. \subsection{Triple collinear factorisation} If three collinear particles are colour ``unconnected'' then there is no singularity. So if $a$, $b$ and $c$ all become collinear, \begin{equation} |{\cal A}(\ldots,a,\ldots,b,\ldots,c,\ldots)|^2 \to {\rm finite}, \end{equation} and there is no singular contribution involving the invariants $s_{ab}$, $s_{bc}$ or $s_{abc}$. As before, because the region of phase space where the triple collinear limit is valid is extremely small, this gives a negligible contribution to the cross section. When two of the three collinear particles are colour ``connected'' we find a singular result, \begin{equation} |{\cal A}(\ldots,a,\ldots,b,c,\ldots)|^2 \to 1/s_{bc}. \end{equation} However, when integrated over the triple collinear region of phase space that requires $s_{ab}$, $s_{bc}$ and $s_{abc}$ all to be small, we again obtain a negligible contribution that is proportional to the small parameter defining the extent of the triple collinear phase space. We therefore ignore contributions of this type. \subsection{Soft/collinear factorisation} Two particles may be unresolved if one of them is a soft gluon and another pair is collinear. When the soft gluon $g$ is not colour connected to either of the colour ``connected'' collinear particles $c$ and $d$, factorisation is straightforward, \begin{equation} |{\cal A}(\ldots,a,g,b,\ldots,c,d,\ldots)|^2 \to S_{agb}(s_{ab},s_{ag},s_{bg})P_{cd\to P}(z,s_{cd})\, |{\cal A}(\ldots,a,b,\ldots,P,\ldots)|^2. \end{equation} \subsubsection{Soft/collinear limit of $e^+e^- \to 5$~partons} In the soft/collinear limit, the five parton matrix elements again factorise into a singular factor multiplying the squared two-quark current relevant for three parton production, \begin{eqnarray} \Big |{\cal S}_\mu(Q_1;1,2,3;\overline{Q}_2) V^\mu\Big |^2 & \to & \left(S_{Q_1 1 2}P_{3\overline{Q}_2 \to \overline{Q}} + P_{Q_1 1 \to Q}S_{23 \overline{Q}_2} \right) \Big |{\cal S}^3_\mu V^\mu\Big |^2, \nonumber \\ \Big |{\cal S}_\mu(Q_1;1,2,\tilde{3};\overline{Q}_2) V^\mu\Big |^2 & \to & \left(S_{Q_1 1 2}P_{3\overline{Q}_2 \to \overline{Q}} + P_{Q_1 3 \to Q}S_{12 \overline{Q}_2} + P_{12 \to G} S_{Q_1 3 \overline{Q}_2} \right) \Big |{\cal S}^3_\mu V^\mu\Big |^2. \nonumber \\ \end{eqnarray} Note that for $\Big |{\cal S}_\mu(Q_1;\tilde{1},\tilde{2},\tilde{3};\overline{Q}_2) V^\mu\Big |^2 $, the soft and collinear limits are considered to be overlapping and will be dealt with in section~\ref{subsec:softcol}. In the four-quark current case, the soft/collinear limit has only two colour-unconnected contributions. The first is given by, \begin{equation} \Big | {\cal T}^B_\mu(Q_1,\overline{Q}_4;Q_3,\overline{Q}_2;1)V^{\mu} \Big |^2 \to S_{Q_1 1 \overline{Q}_2}P_{Q_3\overline{Q}_4 \to G} \, \Big | {\cal S}^3_\mu V^\mu \Big |^2, \end{equation} whilst the limit of $\Big | \overline{{\cal T}}_\mu(Q_1,\overline{Q}_2;Q_3,\overline{Q}_4;1) V^{\mu} \Big |^2$ again involves both unconnected and connected factors; discussion of this will therefore also be deferred until section~\ref{subsec:softcol}. The other subamplitudes vanish in the unconnected soft/collinear limit.
Applying these limits to the full five parton matrix elements is straightforward and, after removing identical particle factors where necessary, we find, \begin{eqnarray} \lefteqn{\frac{1}{3!} \Big |\widehat{{\cal S}}^5_\mu V^\mu\Big |^2 + \Big |\widehat{{\cal T}}^5_\mu V^\mu\Big |^2 = \left(\frac{g^2N}{2}\right)^2 \Big |\widehat{{\cal S}}^3_\mu V^\mu\Big |^2}\nonumber \\ &\times &\Biggl[ \left(\frac{N^2-1}{N^2}\right) \left(S_{Q_1 1 2}P_{3\overline{Q}_2 \to \overline{Q}} + P_{Q_1 1 \to Q}S_{23 \overline{Q}_2} \right) - \frac{1}{N^2} P_{12 \to G} S_{Q_1 3 \overline{Q}_2} \nonumber \\ && + \frac{n_F}{N^3} S_{Q_1 1 \overline{Q}_2}P_{Q_3\overline{Q}_4 \to G} \Biggr]. \end{eqnarray} \subsection{Two soft gluons} When two unconnected gluons are soft, the factorisation is again simple \cite{multsoft}. For gluons $g_1$ and $g_2$ soft we find, \begin{eqnarray} |{\cal A}(\ldots,a,g_1,b,\ldots,c,g_2,d,\ldots)|^2 & \to & S_{ag_1b}(s_{ab},s_{ag_1},s_{bg_1})S_{cg_2d}(s_{cd},s_{cg_2},s_{dg_2}) \nonumber \\ && \times |{\cal A}(\ldots,a,b,\ldots,c,d,\ldots)|^2, \end{eqnarray} so that the singular factor is merely the product of two single soft gluon emission factors given by eq.~(\ref{eq:ssoft}). Note that $b=c$ is allowed. \subsubsection{Double soft limit of $e^+e^- \to 5$~partons} The sum over the unconnected double soft limits of the colour ordered subamplitudes can be easily read off, \begin{eqnarray} \Big |{\cal S}_\mu(Q_1;1,2,3;\overline{Q}_2) V^\mu\Big |^2 & \to & S_{Q_1 1 2}S_{23\overline{Q}_2} \Big |{\cal S}^3_\mu V^\mu\Big |^2, \nonumber \\ \Big |{\cal S}_\mu(Q_1;1,2,\tilde{3};\overline{Q}_2) V^\mu\Big |^2 & \to & \left(S_{Q_1 1 2}S_{Q_1 3 \overline{Q}_2} + S_{Q_1 3 \overline{Q}_2}S_{12 \overline{Q}_2} \right) \Big |{\cal S}^3_\mu V^\mu\Big |^2, \nonumber \\ \Big |{\cal S}_\mu(Q_1;\tilde{1},\tilde{2},\tilde{3};\overline{Q}_2) V^\mu\Big |^2 & \to & \frac{1}{2} \sum_{P(1,2,3)} S_{Q_1 1 \overline{Q}_2}S_{Q_1 2 \overline{Q}_2} \Big |{\cal S}^3_\mu V^\mu\Big |^2. \end{eqnarray} There is no contribution from the four-quark matrix elements. Inserting these limits into the full five parton matrix elements yields, \begin{eqnarray} \lefteqn{\frac{1}{3!} \Big |\widehat{{\cal S}}^5_\mu V^\mu\Big |^2 + \Big |\widehat{{\cal T}}^5_\mu V^\mu\Big |^2 = \left(\frac{g^2N}{2}\right)^2 \Big |\widehat{{\cal S}}^3_\mu V^\mu\Big |^2}\nonumber \\ &\times& \Bigg[ S_{Q_1 1 2}S_{23\overline{Q}_2} - \frac{1}{N^2} \left(S_{Q_1 1 2}S_{Q_1 3 \overline{Q}_2} + S_{Q_1 3 \overline{Q}_2}S_{12 \overline{Q}_2} \right) \nonumber \\ &&\hspace{1cm}+ \left(\frac{N^2+1}{2N^4}\right) S_{Q_1 1 \overline{Q}_2}S_{Q_1 2 \overline{Q}_2} \Bigg ], \end{eqnarray} where once again the sum over permutations is eliminated by the identical particle factor. \newpage \section{Colour connected double unresolved} \setcounter{equation}{0} \label{sec:connected} The factorisation that occurs when the two unresolved particles are colour ``connected'' is necessarily more involved than that in section~\ref{sec:unconnected}. In particular, we will need to introduce new functions to describe this factorisation.
\subsection{Triple collinear factorisation} \label{subsec:triple} When three colour ``connected'' particles cluster to form a single parent parton there are four basic clusterings, \begin{eqnarray*} && ggg \rightarrow G, \qquad qgg \rightarrow Q, \\ && g{\bar q}q \rightarrow G, \qquad q{\bar q}q \rightarrow Q, \end{eqnarray*} and the colour ordered sub-amplitude squared for an $n$-parton process then factorises in the triple collinear limit, \begin{equation} |{\cal A}(\ldots,a,b,c,\ldots)|^2 \rightarrow P_{abc \rightarrow P} |{\cal A}(\ldots,P,\ldots)|^2. \end{equation} As before, partons able to undergo antenna pinching are considered to be colour connected, so that there may be contributions from amplitudes such as ${\cal A}(\ldots,a,b|c,\ldots)$. The triple collinear splitting function for partons $a$, $b$ and $c$ clustering to form the parent parton $P$ is generically, \begin{equation} P_{abc \rightarrow P}(w,x,y,s_{ab},s_{ac},s_{bc},s_{abc}), \end{equation} where $w$, $x$ and $y$ are the momentum fractions of the clustered partons, \begin{equation} \label{momfrac} p_a=wp_P, \qquad p_b=xp_P, \qquad p_c=yp_P, \qquad \mbox{with } w+x+y=1. \end{equation} In addition to depending on the momentum fractions carried by the clustering partons, the splitting function also depends on the invariant masses of parton-parton pairs and the invariant mass of the whole cluster. In this respect, they are different from the splitting functions derived in the jet-calculus approach \cite{jetcalculus}, and implemented in the shower Monte Carlo NLLJET \cite{NLLJET}, which depend only on the momentum fractions. The triple collinear splitting functions $P_{abc \rightarrow P}$ are obtained by retaining terms in the full matrix element squared that possess two of the `small' denominators $s_{ab}$, $s_{ac}$, $s_{bc}$ and $s_{abc}$. As before, we consider the explicit forms of the $\gamma^* \to $ four and five parton squared matrix elements and work in conventional dimensional regularisation, with all external particles in $d=4-2\epsilon$ dimensions. Similar results could be derived using helicity methods or by examining the on-shell limits of the recursive gluonic and quark currents of ref.~\cite{current}. Although the splitting functions are universal, and apply to any process involving the same three colour connected particles, for processes involving spin-1 particles there are additional (non-universal) azimuthal correlations due to rotations of the polarisation vectors. These angular correlations do not contribute to the underlying infrared singularity structure and vanish after all azimuthal integrations have been carried out; we therefore systematically omit them. A further check on our results is provided by the strong-ordered limit, where the particles become collinear sequentially rather than at the same time. In this limit these functions factorise into the product of two usual collinear splitting functions plus azimuthal terms, agreeing with the results of~\cite{Knowles}.
\subsubsection{Three collinear gluons} Firstly, examining the sub-amplitudes for multiple gluon scattering, we find that the colour-ordered function $P_{ggg \rightarrow G}$ is given by, \begin{eqnarray} \lefteqn{P_{abc \rightarrow G}(w,x,y, s_{ab},s_{bc},s_{abc}) = 8 \times \Biggl\{ } \nonumber \\ &+& \frac{(1-\epsilon)}{s_{ab}^2s_{abc}^2}\frac{(xs_{abc}-(1-y)s_{bc})^2}{(1-y)^2} +\frac{2(1-\epsilon)s_{bc}}{s_{ab}s_{abc}^2} +\frac{3(1-\epsilon)}{2s_{abc}^2}\nonumber \\ &+& \frac{1}{s_{ab}s_{abc}} \left( \frac{(1-y(1-y))^2}{yw(1-w)}-2\frac{x^2+xy+y^2}{1-y} +\frac{xw-x^2y-2}{y(1-y)} +2\epsilon \frac{x}{(1-y)} \right)\nonumber \\ &+& \frac{1}{2s_{ab}s_{bc}} \left( 3x^2 - \frac{2(2-w+w^2)(x^2+w(1-w))}{y(1-y)} + \frac{1}{yw} + \frac{1}{(1-y)(1-w)} \right) \Biggr\} \nonumber \\ &+& ( s_{ab} \leftrightarrow s_{bc}, w \leftrightarrow y) + {\rm azimuthal~terms}. \end{eqnarray} This splitting function is symmetric under the exchange of the outer gluons, and contains poles only in $s_{ab}$ and $s_{bc}$. \subsubsection{Two gluons with a collinear quark or antiquark} There are two distinct splitting functions representing the clustering of two gluons and a quark which depend on whether or not the gluons are symmetrised over. In the unsymmetrised case, there will be poles in $s_{g_1g_2}$, due to contributions from the triple gluon vertex which are not present in the QED-like case. For the pure QCD splitting we find, \begin{eqnarray} \lefteqn{P_{qg_1g_2 \rightarrow Q} (w,x,y,s_{qg_1},s_{qg_2},s_{g_1g_2},s_{qg_1g_2}) = 4 \times \Biggl\{ } \nonumber \\ &+&\frac{1}{s_{qg_1}s_{g_1g_2}} \left( (1-\epsilon) \left( \frac{1+w^2}{y}+\frac{1+(1-y)^2}{(1-w)} \right) +2\epsilon \left( \frac{w}{y}+\frac{1-y}{1-w} \right) \right) \nonumber \\ &+&\frac{1}{s_{qg_1}s_{qg_1g_2}} \left( (1-\epsilon) \left( \frac{ (1-y)^3+w(1-x)-2y}{y(1-w)} \right) - \epsilon \left( \frac{2(1-y)(y-w)}{y(1-w)} -x \right) -\epsilon^2 x \right) \nonumber \\ &+&\frac{1}{s_{g_1g_2}s_{qg_1g_2}} \left( (1-\epsilon) \left( \frac{ (1-y)^2 (2-y)+x^3+2xw-2-y}{y(1-w)} \right) +2\epsilon \frac{(xw-y-2yw)}{y(1-w)} \right) \nonumber \\ &+&(1-\epsilon) \left( \frac{2\left( x{s_{qg_1g_2}}-(1-w)s_{qg_1} \right)^2} {s_{g_1g_2}^2s_{qg_1g_2}^2(1-w)^2} +\frac{1}{s_{qg_1g_2}^2} \left( 4\frac{s_{qg_1}}{s_{g_1g_2}} +(1-\epsilon) \frac{s_{g_1g_2}}{s_{qg_1}} + (3-\epsilon) \right) \right) \Biggr\}, \end{eqnarray} while for the QED-like splitting where one or other or both gluons in the colour ordered amplitude are symmetrised over, \begin{eqnarray} \lefteqn{P_{q\tilde{g_1}\tilde{g_2} \rightarrow Q} (w,x,y,s_{qg_1},s_{qg_2},s_{qg_1g_2}) = 4 \times \Biggl\{ } \nonumber \\ &+&\frac{1}{2s_{qg_1}s_{qg_2}} \frac{w}{xy} \left( 1+w^2-\epsilon(x^2+xy+y^2)-\epsilon^2 xy \right) \nonumber \\ &+&\frac{1}{s_{qg_1}s_{qg_1g_2}} \frac{1}{xy} \left( w(1-x+\epsilon^2 xy)+(1-y)^3-\epsilon(1-y) (x^2+xy+y^2)+\epsilon^2 xy \right) \nonumber \\ &-&\frac{(1-\epsilon)}{s_{qg_1g_2}^2} \left( (1-\epsilon) \frac{s_{qg_1}}{s_{qg_2}} - \epsilon \right) \Biggr\} + ( s_{qg_1} \leftrightarrow s_{qg_2}, x \leftrightarrow y) . \end{eqnarray} The function $P_{q\tilde{g_1}\tilde{g_2} \rightarrow Q}$ can be interpreted as the relevant triple collinear splitting function with one or both of the gluons replaced by photons.
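For readers who wish to experiment with these expressions numerically, the QED-like splitting function above transcribes directly into code. The following sketch is our own illustration: the sample momentum fractions and invariants are arbitrary, subject to $w+x+y=1$, and the check simply confirms the manifest $(s_{qg_1}\leftrightarrow s_{qg_2},\,x\leftrightarrow y)$ symmetrisation. Since $s_{g_1g_2}$ never enters, the absence of a $1/s_{g_1g_2}$ pole in the QED-like case is also manifest.
\begin{verbatim}
# Transcription of the QED-like triple collinear splitting function
# P_{q g~1 g~2 -> Q} displayed above; eps is the dimensional
# regularisation parameter. Sample kinematics are illustrative only.

def half(w, x, y, s_qg1, s_qg2, s_qg12, eps):
    # the unsymmetrised bracket {...}
    t1 = (1.0 / (2 * s_qg1 * s_qg2)) * (w / (x * y)) * (
        1 + w**2 - eps * (x**2 + x * y + y**2) - eps**2 * x * y)
    t2 = (1.0 / (s_qg1 * s_qg12)) * (1.0 / (x * y)) * (
        w * (1 - x + eps**2 * x * y) + (1 - y)**3
        - eps * (1 - y) * (x**2 + x * y + y**2) + eps**2 * x * y)
    t3 = -((1 - eps) / s_qg12**2) * ((1 - eps) * s_qg1 / s_qg2 - eps)
    return t1 + t2 + t3

def P_qgg_QED(w, x, y, s_qg1, s_qg2, s_qg12, eps=0.0):
    # bracket plus its image under (s_qg1 <-> s_qg2, x <-> y)
    return 4 * (half(w, x, y, s_qg1, s_qg2, s_qg12, eps)
                + half(w, y, x, s_qg2, s_qg1, s_qg12, eps))

w, x, y = 0.5, 0.3, 0.2                  # momentum fractions, w + x + y = 1
s_qg1, s_qg2, s_qg12 = 0.4, 0.5, 1.0     # illustrative invariants
a = P_qgg_QED(w, x, y, s_qg1, s_qg2, s_qg12)
b = P_qgg_QED(w, y, x, s_qg2, s_qg1, s_qg12)
assert abs(a - b) < 1e-12                # symmetric under the stated exchange
\end{verbatim}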
Edges and non-edges of those swaps can change their role in realisations, and so do not belong to the static edge and non-edge set $F'$ of $S.$ These edges do not need to be considered in the Gale-Ryser test. The general result for $H$-fixed realisations is given in the following Theorem~\ref{theorem:connectedness_F_F'_Fixed_realizations_swaps}. \begin{theorem}\label{theorem:connectedness_F_F'_Fixed_realizations_swaps} Let $G=(V,U,E)$ and $G'=(V,U,E')$ be two different $H$-fixed realisations of a sequence $S$ where $H$ can be partitioned into sets $F$ and $F^{*}$ such that \begin{enumerate} \item $F$ does not contain a matching of size $3$, \item $F^{*} \subset F'$ where $F'$ is the static edge and non-edge set of $S$, and \item $F \cap F^{*} = \emptyset$. \end{enumerate} Then the $H$-ignoring swap and Curveball algorithms sample an $H$-fixed realisation uniformly at random for $t \to \infty.$ \end{theorem} \begin{proof} Each $F$-fixed realisation is equivalent to an $H$-fixed realisation. Hence, Corollary~\ref{Cor:Curveball_F_fixed} ensures that an $F$-ignoring Curveball algorithm samples an $H$-fixed realisation uniformly at random. Since suitable trades cannot contain edges from $F^*,$ an $H$-fixed swap chain samples all $H$-fixed realisations. \qed\end{proof} \section{General $F$-fixed Markov chains, and chains with swap cycles of length six} We start with a result which will be needed in our main theorem. \begin{proposition}\label{prop:cycles_in_F} Given an even vertex-disjoint cycle $C=(v_0,\dots,v_{2n-1},v_0)$ of length $2n \geq 8.$ Let $A$ be the edge set of all vertex pairs $\{v_i,v_j\}$ on $C$ such that $v_i$ and $v_j$ have an odd distance on $C$ of at least length three. Then $A$ contains each cycle of length $8 \leq 2 \ell \leq 2n.$ \end{proposition} \begin{proof} We prove the claim by induction on the length $2n$ of the cycle $C$. We start with the smallest possible length $8.$ Then $C'=(v_0,v_3,v_6,v_1,v_4,v_7,v_2,v_5,v_0)$ is a cycle in $A$ of length $8.$ Let us assume cycle $C=(v_0,\dots,v_{2n+1},v_0)$ has length $2n+2.$ We construct the cycle $C'=(v_0,\dots,v_{2n-1},v_0)$ with $\{v_{2n-1},v_0\}\in A.$ The set $A':=A \setminus \{\{v_{2n-1},v_0\},\{v_{2n},v_1\},\{v_{2n+1},v_2\},\{v_{2n-3},v_{2n}\},\{v_{2n-2},v_{2n+1}\}\}$ is the edge set of all vertex pairs $\{v_i,v_j\}$ on $C'$ such that $v_i$ and $v_j$ connect an odd path on $C'$ of at least length three. With the induction hypothesis it follows that $A'$ contains each cycle of length $8 \leq 2 \ell \leq 2n.$ Since $A' \subset A$, $A$ contains all these cycles too. It remains to prove that $A$ contains a cycle of length $2n+2.$ We define the function $f:\{0,\dots,2 n+1\} \mapsto \{0,\dots,2 n+1\}$ with $f(i)= (t \cdot i) \pmod{2n+2}$ such that $2n-1 \geq t \geq 3$, $t$ is odd, and the pair $t$, $2 n+2$ is relatively prime.\\ Such a pair exists, as we prove using Euler's $\phi$-function, which counts the number $\phi(2n+2)$ of elements of $M:=\{1,\dots,2n+1\}$ that are relatively prime to $2n+2.$ If $\phi(2n+2) \geq 3$ for all $n \geq 4$, we are done: the number $1$ is always relatively prime to $2n+2$, so there are two other elements in $M$ which are relatively prime to $2n+2$, and since $2n+2$ is even these elements must be odd.
This leads to the existence of a $t \in M$ with $2n-1 \geq t \geq 3$ which is relatively prime to $2n+2.$ Assume now that $\phi(2n+2) \leq 2.$ By the definition of $\phi$, we have $\phi(2n+2)=\displaystyle\prod_{p} p^{k_p-1}(p-1)$ where $2n+2=\displaystyle\prod_{p} p^{k_p}$ is the prime factorisation of $2n+2.$ If $2n+2=2^k$ we get $\phi(2n+2)=2^{k-1}\leq 2.$ It follows that $k \leq 2$, and so $2n+2 \leq 4$, in contradiction to $n \geq 4.$ Hence, $2n+2$ must be of the form $2n+2=2 \cdot\displaystyle\prod_{p\neq 2} p^{k_p}$, leading to $\phi(2n+2)=\phi(n+1) \leq 2.$ The condition $\phi(n+1)=\displaystyle\prod_{p\neq 2} p^{k_p-1}(p-1)\leq 2$ can only be fulfilled if $p-1 \leq 2$ and $k_p=1.$ This is only possible for $n+1=3$, in contradiction to $n \geq 4.$\\ The function $f$ must be bijective: otherwise we would find $f(i)=f(j)$ with $i<j$, that is, $j$ would be of the form $j=i+(2n+2)k$ for an integer $k >0$. This leads to $j\geq 2n+2$ for $i \in \{0,\dots,2n+1\}.$ Hence, $j$ does not lie in the domain of $f$, in contradiction to the definition of $f.$ We construct the cycle $C^*=(v_{f(0)},v_{f(1)},v_{f(2)},\dots,v_{f(2n+1)},v_{f(0)})$ of length $2 n+2.$ All edges of $C^*$ lie in $A$, because it alternates between odd and even labelled vertices, i.e. for even $i$ the value $f(i)$ is even, and vice versa for odd $i$. Furthermore, two adjacent vertices in $C^*$ have an odd distance $t$ on $C$ with $2n-1 \geq t \geq 3.$ Since $f$ is bijective, the cycle $C^*$ must be vertex-disjoint. \qed\end{proof} Figure~\ref{fig:circle} shows a vertex-disjoint cycle $C$ with 12 vertices. Edge set $A$ is given by all vertex pairs $\{v_i,v_j\}$ on $C$ such that $v_i$ and $v_j$ connect an odd path on $C$ of at least length three. We construct cycles in $A$ of lengths $8$, $10$ and $12$ by setting $t=3$ for the first two cases and $t=5$ for the last case. \begin{figure}\label{fig:circle} \centering \includegraphics[scale=0.3]{Circle} \caption{Construction of $A$-cycles with lengths $8$, $10$ and $12.$ } \end{figure} \begin{theorem}\label{Theorem:connectedness_F_Fixed_realizations_cycles_length_2k} Let $G=(V,U,E)$ and $G'=(V,U,E')$ be two different $F$-fixed realisations of a sequence $S$ such that $F$ does not contain a cycle of length $2 \ell$ where $\ell \geq 4$. Then there exist realisations $G_1,\dots,G_k$ with $G_1:=G$, $G_k:=G'$ such that (i) $|G_i \triangle G_{i+1}|\leq 2\ell-2$, where $G_i \triangle G_{i+1}$ corresponds to a $j$-swap with $j \leq 2 \ell-2$, and (ii) $k\leq \frac{1}{2}|G \triangle G'|.$ \end{theorem} \begin{proof} We prove the statement by induction on the size of the symmetric difference $\kappa:= \frac{1}{2}|G \triangle G'|.$ \paragraph{Induction basis.} For each fixed $\ell$ the cases with $2 \leq \kappa \leq \ell-1$ are simple. $G \triangle G'$ decomposes into alternating vertex-disjoint cycles between $G$ and $G'$ of size at most $2\ell-2.$ Hence, condition (i) is fulfilled by the sequence $G_1,\dots,G_k$ with $G_1:=G,$ $G_k:=G'$ such that each $G_i \triangle G_{i+1}$ corresponds to a $j$-swap with $j \leq 2 \ell-2.$ In the case where $G \triangle G'$ decomposes into alternating $4$-cycles, $k$ is maximal with $k=\frac{1}{4}|G \triangle G'|$, fulfilling (ii). To continue the induction basis we set $\kappa=\ell,$ i.e.
$|G \triangle G'|=2 \ell.$ If $G \triangle G'$ is not connected, each of the components is an alternating cycle of length at most $2 \ell -4$, decomposing into $j$-swaps with $j \leq 2\ell-4.$ Moreover, the index $k$ cannot be larger than $k=\frac{1}{4}|G \triangle G'|.$ We now assume that $G \triangle G'$ is one alternating $2 \ell$-cycle $C:=(v_0,\dots,v_{2 \ell-1},v_0)$ between $G$ and $G'$ which can (a) be vertex-disjoint, or (b) contain at least one vertex, say $v_1,$ twice. W.l.o.g. we assume that $C$ starts with edge $\{v_0,v_1\}\in E(G).$ \emph{We consider case (a)}, see Figure~\ref{fig:induction_beginning}, and assume that all vertex pairs $\{v_i,v_j\}$ on $C$ which have an odd distance of at least $3$ are edges in the set $F$. With Proposition~\ref{prop:cycles_in_F} we find a cycle of length $2 \ell$ in $F$, in contradiction to our condition. Hence, there must be a vertex pair $\{v_i,v_j\}$ with an odd distance of at least $3$ on $C$ which does not belong to $F.$ We assume $i<j$, denote the alternating sub-path on $C$ from $v_i$ to $v_j$ by $P_1=(v_i,v_{i+1},\dots, v_j)$ and its length by $\ell_1,$ and denote the remaining alternating sub-path on $C$ from $v_j$ back to $v_i$ by $P_2=(v_j,v_{j+1 \pmod{2\ell}},\dots,v_{j+(2\ell-\ell_1) \pmod{2\ell}})$, with odd length $\ell_2:=2\ell-\ell_1.$ Since $\{v_i,v_j\} \notin G \triangle G'$, we have either $\{v_i,v_j\} \in G \cap G'$ or $\{v_i,v_j\} \notin G \cup G'.$ We find that either $C':=(v_i,P_2)$ or $C'':=(P_1,v_i)$ is an $(\ell_2+1)$-swap or an $(\ell_1+1)$-swap, respectively. Moreover, we have $\ell_1+1,\ell_2+1 \leq 2 \ell-2$ and obtain an $F$-fixed realisation $G^{*}.$ Furthermore, we find that $G^{*} \triangle G'$ corresponds to a $j$-swap $C^*=(P_1,v_i)$ or $C^{**}=(v_i,P_2)$, respectively, with $j\leq 2 \ell-2.$ We thus find a sequence $G_1,G_2,G_3$ of $F$-fixed realisations such that $G_1:=G$, $G_2:=G^{*}$, $G_3:=G'$, and (i) is fulfilled. Condition (ii) is fulfilled because $k=3\leq \ell.$ \begin{figure} \centering \includegraphics[scale=0.25]{induction_beginning} \caption{Induction basis $\kappa= \ell$: case (a) with $2\ell=12$ and $|G \triangle G'|=12.$} \label{fig:induction_beginning} \end{figure} \emph{We consider case (b)}, see Figure~\ref{fig:induction_beginning_2}, where the alternating cycle $C$ contains at least one vertex twice. Then $C$ decomposes into at most $\frac{1}{2} \ell$ $j$-swaps with $4 \leq j \leq 2 \ell-4.$ Hence, (i) in our Theorem is fulfilled with the sequence $G_1,G_2,\dots,G_{k}$ where $G_1:=G$, $G_{k}:=G'$ and each $|G_i \triangle G_{i+1}|\leq 2 \ell-4$ corresponds
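(The construction used in the proof of Proposition~\ref{prop:cycles_in_F} is easy to verify by computer. The following minimal sketch, with hypothetical helper names, builds $C^*$ from $f(i)=t\cdot i \bmod (2n+2)$ and checks that every edge of $C^*$ joins vertices of odd distance at least three on $C$, and that $f$ is a bijection:)

from math import gcd

def cycle_from_t(n, t):
    # vertex order of C* under f(i) = t*i mod (2n+2)
    m = 2 * n + 2
    assert t % 2 == 1 and 3 <= t <= 2 * n - 1 and gcd(t, m) == 1
    return [(t * i) % m for i in range(m)]

def check_cycle(order):
    m = len(order)
    for i in range(m):
        a, b = order[i], order[(i + 1) % m]
        d = min((a - b) % m, (b - a) % m)   # distance on the original cycle C
        assert d % 2 == 1 and d >= 3        # edge lies in the set A
    assert len(set(order)) == m             # f is a bijection

for n, t in [(5, 5), (6, 3), (8, 5), (8, 7)]:
    check_cycle(cycle_from_t(n, t))
print("all constructed cycles lie in A")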
Two Sigma engineer Thomas Walker, introduce safeboot, an open source package that helps improve the usable security of the boot process for modern Debian-based x86 computers. Ralph Pantozzi is the winner of the 2014 Rosenthal Prize and a math educator. Steve Sherman is the Chief Imagination Officer and Executive Daydreamer, Living Maths. Daniel Rose-Levine will demonstrate how he solves the Rubik’s Cube with his feet in under 20 seconds. Daniel Rose-Levine is the former Rubik’s-cube-with-feet world record holder. Smart phone, tablet, or personal computer with internet access, Several sheets of 8.5″ by 11″ printer paper, 5′ by 5′ area in which to move (non-carpeted area preferred), Several sheets of 8.5″ x 11″ printer paper, 2 sheets of 8.5” x 11” (size A4) card stock paper, ideally in 2 different colors; manila folders cut to 8.5” x 11” will also work, Optional: This project can also be built from, 1 box of traditional rounded toothpicks with points at both ends, Something bendy, such as a tie, shoelace, or piece of string. Mark Saul is the Executive Director of the Julia Robinson Mathematics Festival. Dr. Arthur Benjamin will amaze you with some mathematical magic, and then teach you how to do it. Bring a calculator! But when he noticed that very few math pages existed on Instagram, he sought to change that by starting @daily_math, a page dedicated to intriguing problems and ideas about algebra, geometry, calculus, number theory, and other parts of math. John Urschel played professional football for the Baltimore Ravens from 2014 to 2017 before retiring to focus on his career in mathematics. He is currently a PhD candidate at MIT, where he studies spectral graph theory, numerical linear algebra, and machine learning. Second, the presentation could be understood even by people without a significant knowledge of math. The smaller size is only two pages and is great if you are going to print off individual copies for students to practice with in class or at home. Let’s discover the magic of Euler’s Polyhedral Formula while creating structures out of toothpicks and marshmallows. Who doesn’t love a limerick?” So Sarah created a series of short rhyming poems to list some basic properties of linear, quadratic, trigonometric, polynomial, rational, and other types of functions encountered in algebra and precalculus, and illustrated the pages with examples. A MoMath retail specialist will be on hand to answer questions and offer expert shopping advice for all your mathematical gift needs. Then they work as a group to assemble the pieces into a number of challenge shapes, including a 3x3x3 cube, a sphinx, a tunnel to climb through, and a fourteen-foot-tall skyscraper. That’s a good practice in any form of communication. Math meets art in this creative application of the popular Rubik’s Cube. But with sigma notation (sigma is the 18th letter of the Greek alphabet), the sum is much more condensed and efficient, and you’ve got to admit it looks pretty cool: This notation just tells you to plug 1 in for the i in 5i, then plug 2 into the i in 5i, then 3, then 4, … Especially notable was the esthetic of minimalism — in how the video is shot, and the choice of clothing, background, and colors — all of which mesh perfectly with the minimal esthetic of group theory.
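(A minimal illustration of the sigma-notation example above, assuming, as the text suggests, that the index i runs from 1 to 4:)

total = sum(5 * i for i in range(1, 5))   # 5*1 + 5*2 + 5*3 + 5*4
print(total)                              # 50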
See below for detailed instructions. Founded in 2001, Two Sigma manages approximately $20 billion of assets with headquarters in New York City and additional offices in Houston, TX, London, UK and Hong Kong. Mark Saul is the Senior Scientist at the Julia Robinson Mathematics Festival. David Reimann is an Albion College math and computer science professor and artist who uses symmetry in his work. The dancing and music were artfully minimal too. Jonah Yoshida’s project is a pencil-and-paper infographic on graph theory. Lauren Rose is a mathematician and math professor at Bard College. Bruce Bayly is a math professor at the University of Arizona and bus driver for the Arizona Mathematics Road Show. Giant Soma Puzzle. Bring along something bendy — a tie, a shoelace, a piece of string — and let’s have fun exploring some of the curious mathematics of folding. (The candy bar is a real prop but eating is pantomimed so enjoyment is calorie-free.). Tim Chartier is a mathematical mime performer and math professor at Davidson College. Jazz saxophonist and mathematician Marcus G. Miller will share a reflection on how math and music can make us whole. Watch as Lauren Rose builds a Rubik’s Cube mosaic, and try to figure out what the picture will be. Mike Andrejkovics is a high school math teacher from Long Island, NY who creates and performs raps about mathematics based on popular hip-hop tracks. Each poem became a problem to solve as I tried to figure out words to make each function type’s properties rhyme neatly.”  The poems illuminate the distinctive properties of the various kinds of functions, and draw readers in through a unique, creative, and memorable way of communicating mathematical ideas. Overall, this project is modest but extremely well done and produces a very pleasurable “Aha!” moment for many viewers; indeed, it led one of the judges to understand the “sum of squares formula” in a whole new way! Wendy Zeichner is an origami expert and president of OrigamiUSA. See our selection of Geometiles® and browse for books by Festival presenters Peter Winkler and Art Benjamin. But here, the suggestion of the infinite is magical and otherworldly rather than scientific and literal, and so may appeal to audiences not normally attracted to math. John Urschel, current MIT math PhD candidate and former NFL pro, shares his favorite logic puzzle. His explanations of mathematical concepts are clear and insightful, and he is very interactive with his followers, even inviting them to post. Origami expert Wendy Zeichner is the Executive Director of Origami USA. Join us online for a math-and-paper engineering adventure!  Godwyn Morris, Director of Dazzling Discoveries STEM Education Center, will demonstrate some Engineering with Paper challenges.  Together we will explore proportion, ratio, and scale as Godwyn shows you how to create structures, furniture, and characters from simple supplies. Choose from a variety of Museum puzzle options with different sizes, number of pieces, and board material. Manjul Bhargava will demonstrate an interactive magic trick that exhibits how one can create surprising complexity from extreme simplicity. 
Viewers are encouraged to participate from home! 2 strings of different colors (wires or shoelaces are also okay), Surface to lay the strings on (e.g. The judges were impressed with the creativity of Hamza’s entry, expressed through its skillful use of visuals, history, and puzzles, all presented in attractive ways. Bruce Bayly is a singer, violinist, and math professor at the University of Arizona. The project submitted by Kyna Airriess is a “zine” based on a quote from A Mathematician’s Lament, a polemical essay by high school teacher Paul Lockhart. Daniel Rose-Levine is the former Rubik’s-Cube-with-feet world record
vertex cover problem is NP-hard on bipartite graphs, which answers an open problem of B. Simeone. ### Improving Vertex Cover as a Graph Parameter Parameterized algorithms are often used to efficiently solve NP-hard problems on graphs. In this context, vertex cover is used as a powerful parameter for dealing with graph problems which are hard to solve even when parameterized by tree-width; however, the drawback of vertex cover is that bounding it severely restricts admissible graph classes. We introduce a generalization of vertex cover called twin-cover and show that FPT algorithms exist for a wide range of difficult problems when parameterized by twin-cover. The advantage of twin-cover over vertex cover is that it imposes a lesser restriction on the graph structure and attains low values even on dense graphs. Apart from introducing the parameter itself, this article provides a number of new FPT algorithms parameterized by twin-cover with a special emphasis on solving problems which are not in FPT even when parameterized by tree-width. It also shows that MSO1 model checking can be done in elementary FPT time parameterized by […] ### On substitution tilings of the plane with n-fold rotational symmetry A method is described for constructing, with computer assistance, planar substitution tilings that have n-fold rotational symmetry. This method uses as prototiles the set of rhombs with angles that are integer multiples of pi/n, and includes various special cases that have already been constructed by hand for low values of n. An example constructed by this method for n = 11 is exhibited; this is the first substitution tiling with elevenfold symmetry appearing in the literature. ### Output sensitive algorithms for covering many points In this paper we devise some output sensitive algorithms for a problem where a set of points and a positive integer, m, are given and the goal is to cover a maximal number of these points with m disks. We introduce a parameter, ρ, as the maximum number of points that one disk can cover and we analyse the algorithms based on this parameter. At first, we solve the problem for m=1 in O(nρ) time, which improves the previous O(n^2) time algorithm for this problem. Then we solve the problem for m=2 in O(nρ + ρ^3 log ρ) time, which improves the previous O(n^3 log n) algorithm for this problem. Our algorithms outperform the previous algorithms because ρ is much smaller than n in many cases. Finally, we extend the algorithm for any value of m and solve the problem in O(mnρ + (mρ)^(2m-1) log mρ) time. The previous algorithm for this problem runs in O(n^(2m-1) log n) time and our algorithm usually runs faster than the previous algorithm because mρ is smaller than n in many cases. We obtain output […] ### Cost-effectiveness of algorithms In this paper we discuss how to assess the performance of algorithms for optimisation problems in a way that balances solution quality and time. We propose measures of cost-effectiveness for such algorithms. These measures give the gain in solution quality per time unit over a sequence of inputs, and give a basis for deciding which algorithm to use when aiming for best accumulated solution quality for a given time investment over such an input sequence. Cost-effectiveness measures can be defined for both average-case and worst-case performance. We apply these ideas to three problems: maximum matching, graph colouring and Kolmogorov complexity. 
For the latter, we propose a cost-effectiveness measure for the time-bounded complexity Kτ(x), and argue that it can be used to measure the cost-effectiveness both of finding a short program to output x and of generating x from such a program. Under mild assumptions, we show that (roughly speaking) if the time-bounded complexity Kτ(x) is to be a […] ### A randomized algorithm for finding a maximum clique in the visibility graph of a simple polygon We present a randomized algorithm to compute a clique of maximum size in the visibility graph G of the vertices of a simple polygon P. The input of the problem consists of the visibility graph G, a Hamiltonian cycle describing the boundary of P, and a parameter δ∈(0,1) controlling the probability of error of the algorithm. The algorithm does not require the coordinates of the vertices of P. With probability at least 1-δ the algorithm runs in O(|E(G)|^2 / ω(G) log(1/δ)) time and returns a maximum clique, where ω(G) is the number of vertices in a maximum clique in G. A deterministic variant of the algorithm takes O(|E(G)|^2) time and always outputs a maximum size clique. This compares well to the best previous algorithm by Ghosh et al. (2007) for the problem, which is deterministic and runs in O(|V(G)|^2 |E(G)|) time. ### Determining pure discrete spectrum for some self-affine tilings By the algorithm implemented in the paper by Akiyama-Lee [Adv. Math. 226(4):2855–2883, 2011] and some of its predecessors, we have examined the pure discreteness of the spectrum for all irreducible Pisot substitutions of trace less than or equal to 2, and some cases of planar tilings generated by boundary substitutions due to Kenyon [Geom. Func. Anal. 6:471–488, 1996]. ### An exact algorithm for the generalized list T-coloring problem The generalized list T-coloring is a common generalization of many graph coloring models, including classical coloring, L(p,q)-labeling, channel assignment and T-coloring. Every vertex from the input graph has a list of permitted labels. Moreover, every edge has a set of forbidden differences. We ask for a labeling of vertices of the input graph with natural numbers, in which every vertex gets a label from its list of permitted labels and the difference of labels of the endpoints of each edge does not belong to the set of forbidden differences of this edge. In this paper we present an exact algorithm solving this problem, running in time O*((τ+2)^n), where τ is the maximum forbidden difference over all edges of the input graph and n is the number of its vertices. Moreover, we show how to improve this bound if the input graph has some special structure, e.g. a bounded maximum degree, no big induced stars or a perfect matching. ### A Parameterized Measure-and-Conquer Analysis for Finding a k-Leaf Spanning Tree in an Undirected Graph The problem of finding a spanning tree in an undirected graph with a maximum number of leaves is known to be NP-hard. We present an algorithm which finds a spanning tree with at least k leaves in time O*(3.4575^k) which improves the currently best algorithm. The estimation of the running time is done by using a non-standard measure. The present paper is one of the still few examples that employ the Measure & Conquer paradigm of algorithm analysis in the area of Parameterized Algorithmics. ### The Price of Mediation We study the relationship between correlated equilibria and Nash equilibria. 
In contrast to previous work focusing on the possible benefits of a benevolent mediator, we define and bound the Price of Mediation (PoM): the ratio of the social cost (or utility) of the worst correlated equilibrium to the social cost (or utility) of the worst Nash. We observe that in practice, the heuristics used for mediation are frequently non-optimal, and from an economic perspective mediators may be inept or self-interested. Recent results on computation of equilibria also motivate our work. We consider the Price of Mediation for general games with small numbers of players and pure strategies. For two player, two strategy games we give tight bounds in the non-negative cost model and the non-negative utility model. For larger games (either more players, or more pure strategies per player, or both) we show that the PoM can be arbitrary. We also have many results on symmetric congestion games (also known as […] ### A note on contracting claw-free graphs A graph containment problem is to decide whether one graph called the host graph can
doesn’t care about. I guess that is what it boils down to. On this site, I don’t care if you’re grateful, I don’t care if you’re cute with your words. All I care about is a clear and concise explanation of what’s wrong, and the question you have regarding that. Everything else is superfluous to a search engine, and is superfluous to me as well. toast Mar 6 2009 @Eddie: It’s not the majority of edits. It’s the majority of Rich B’s edits, which consist mainly of removing “Hi!” from the beginning. His justification for which is that it clutters the bylines. But he’ll also remove “Cheers” from the end, with no justification. He’s had one edit where the only change he’s made is to change a period into a question mark. As I’ve asked multiple times: Is this really necessary? I’ve pointed to specific examples, but he never tries to actually justify his edits. He just makes a pithy remark at my expense. But he never engages in ad hominem. Well, except when he does. @Rich B: You keep saying you understand and that such is reasonable (unless I’m the one asking, that is), but you never change your behavior. So clearly you don’t. BTW, I wasn’t participating last night because I went out of town to watch a midnight showing of Watchmen with a friend. Pesto Mar 6 2009 I find that the rep system has some similarities to the Peter Principle, in that we assume that because somebody has knowledgeable answers about, say, C#’s list implementation (or simply enough time to provide hundreds of marginal answers), this somehow empowers them as an editor, a moderator, and an organizer. I’m not judging Rich B or anyone else in particular, but clearly there isn’t any evidence to support that somebody who is a good question answerer also makes a good editor. @toast: Though it may be petty or tedious to change a period into a question mark, it’s not your time Rich B is wasting; it’s his own. And really, is changing a period into a question mark actually unnecessary? Heaven forbid we should encourage proper grammar. Also — not that I necessarily value your opinion — was Watchmen any good? I’m getting trepidatious about it. Someone kept editing one of my questions on stackoverflow, changing the tags. Not only were they not appropriate in my view, but he kept doing it as soon as I changed them back. Highly annoying! As long as it’s grammar or other similar changes I think it’s ok, but that’s about it. TheTXI Mar 6 2009 @Eric: Retagging is a common practice on StackOverflow. Unless you have a specific example to show us, we can’t really say whether or not there was any abuse. Rich B Mar 6 2009 @Eric: Show us the post you are referring to so we can see these ‘inappropriate’ tags. That is the only way we can help you. Rich B Mar 6 2009 @GateKiller: I like your logic. Not a single valid case of abuse has been shown about me, so I should be banned. You, on the other hand, admit to abusing, have been consistently abusive in your rollbacks and edits, and have even had your posts locked by admins because of this abuse. You are still partaking in this abuse despite the obvious community opinion that disagrees with your edits, and the OP not even agreeing with you apparently: http://stackoverflow.com/revisions/560329/list So yes, I like your logic. Ban me. GK is obviously a better contributor to this site. Bruce Banner Mar 6 2009 Just ban RichB. That will be the smartest course of action. He is a troublemaker. Even though his intentions seem to be to actually improve SO, his ways just do the opposite. 
May the almighty Banhammer fall upon those who ruin the interwebs for everyone else TheTXI Mar 6 2009 While we’re at it, let’s just ban everyone we don’t like. TheTXI’s ban list – GateKiller, GregD, Toast, Kev, Hitler, Stalin, Mussolini, Mao, Kim Jong Il, Chris Brown, Kevin Federline, and Taylor Swift. Lots of fun stats:
* 25% (76) of comments on this page are by RichB
* The @ symbol has been used 196 times
* I have posted 7 times
* XKCD has only been referenced 5 times inclusive
* The word “shit” has only been used by Kev and belgariontheking
* Godwin’s Law occurred on March 4th, 2009 at 10:39 am
* Jeff Atwood has responded 3 times
Eddie Mar 6 2009 @George (aka Gortok): well said. If RichB made the exact edits he does, but defended them differently, I think there’s a big chance we wouldn’t be having this discussion. @toast: Yes, editing a single character to change a ‘.’ to ‘?’ is entirely appropriate if it is more grammatically correct. This is not a simple Q&A site. If the goal were for the OP to get an answer, and that’s it, then there would be little reason to edit posts unless they were very unclear. However, each question and answer will be read many, many times, long after the OP has benefited from the answer. Rewriting things in strong, clear English prose is a positive. @toast: And yes, RichB has attempted to justify removing salutations and signatures. He has explained why he feels this adds value. You clearly disagree with him on this, and that’s OK. We don’t all have to agree. But that doesn’t mean he hasn’t tried to explain himself. And I get your point that Jeff Atwood explicitly said, way above, that just removing “hi” and “cheers” is generally not justification for an edit. On the other hand, I agree and take to heart what @nobody said, above. This is not a pure wiki. The OP’s name is associated with the post and we don’t want to put words into that person’s mouth that don’t fit. I try, in my edits, to always respect the tone and character of the OP, even when I change passive to active voice and make other large editorial changes. @all: I’ve tried three times now to start a *productive* discussion on solving the problem by reaching a community consensus. RichB agrees with my proposal and the anti-RichB crowd has so far entirely ignored my suggestions. Do we need a “RichB sux” page so you can vent all of your spleen, or are you just not interested in solving this conflict? Banning any one user will not solve this problem. Nor will simple limit changes, unfortunately. Perhaps meta-moderation tools are needed. I definitely agree that it’s a good idea to penalize (-10 rep? -20 rep? Loss of edit priv for 1 day?) *anyone* who is not the OP who causes a rollback/edit cooldown period by doing *many* (not just one) edits/rollbacks. And if the OP insists on being incorrect or unreadable … oh well. They will then contribute more to “noise” than to “signal.” We can’t fix every problem. And hey, my captcha phrase was “tempers” plus some digits and stuff. Pretty funny. Pesto Mar 6 2009 @Stephen Hill: Shit, I was going to use the word “shit” earlier, but I took it out. I didn’t realize there was going to be someone counting shits, otherwise I’d have spread shit all over my posts! TheTXI Mar 6 2009 How can you have a list of fun stats without listing how many pony references? TheTXI Mar 6 2009 Stats…shutup! Rich B Mar 6 2009 @eddie: I could set up a site about me like I set up for this guy: http://www.thestupidestmanonearth.com I have no problem with people using me as a punching bag. 
I bet it would be very entertaining. If only I
\section{Introduction} Investigating occurrences of rotation in the Universe and their possible consequences is one of the major issues in contemporary astronomy. It involves the question of rotation of individual cosmic structures as well as the question of a possible rotation of the entire Universe. Rotating objects in the Universe are found on various scales - from subatomic particles to stars and galaxies - so it is reasonable to ask whether the Universe as a whole rotates too. If there is a global, or just a large-scale, rotation of the Universe, its effects should be observationally detectable. In this work, the possible rotational effects at various astronomical scales are discussed and observational data that could possibly be related to rotation are presented. Attempts to observationally confirm that the Universe actually rotates have always been regarded with much skepticism. The first significant evidence for the global rotation of the Universe was provided by Birch (1982). Birch considered position angles and polarization of classic bright double radio sources and found that the differences between their position angles and polarization are correlated with their position on the sky. That study was immediately criticized by Phinney and Webster (1983), who accused Birch of an invalid application of statistical methods and stated that his data were inadequate for drawing such far-reaching conclusions. In reply to them, Birch (1983) found that the effect observed by him had been present in the original data of Phinney and Webster, while the validity of Birch's statistics was proved by Kendall and Young (1984). In a subsequent work by Phinney et al. (1984), the data were reanalyzed using novel statistical methods and taking into account possible errors and observational uncertainties. The authors concluded that the effect indicated by Birch was confirmed by observations; however, its nature was not clear. Bietenholz and Kronberg (1984) and Bietenholz (1986), having extended the sample of analyzed objects, did not find any evidence to confirm Birch's effect. Another attempt to empirically confirm the rotation of the Universe was undertaken by Nodland and Ralston (1997a), who studied correlations between the direction and distance to galaxies and the angle $\beta$ between the polarization direction and their major axis, and found an effect which they interpreted as a rotation of the polarization plane dependent on the distance. The study immediately provoked a discussion on the validity of the statistical methods used (Carroll and Field 1997, Loredo et al. 1997, Eisenstein and Bunn 1997, Nodland and Ralston 1997b; see also Ralston and Jain 2004 for a later review); it was also indicated that the effect had not been confirmed by the analysis of new, better observational data (Wardle et al. 1997; note, however, that Jain and Ralston (1999) reached the opposite conclusion), and that even if Birch's effect were to be regarded as evidence for a large-scale rotation of the Universe, the value obtained by Birch is too large compared to the anisotropy found in the cosmic microwave background radiation (CMBR). Potential difficulties in the empirical confirmation of rotation of the Universe were also pointed out in a number of theoretical works. Silk (1970) already drew attention to the fact that at present, unlike in the early Universe, any dynamical effects of global rotation should be negligible, and the rotational period must exceed the Hubble time, which is a direct consequence of the small anisotropy of the CMBR. 
Later on, Barrow, Juszkiewicz and Sonoda (1985) obtained strict constraints on the value of the rotation scalar by investigating some classes of Bianchi models which include the Friedmann models as special cases. They demonstrated that for a flat universe there is a limit of $\omega/H_0\sim 2 \times 10^{-5}$ ($\omega\sim 1.5 \times 10^{-15} \, {\rm rad \, yr}^{-1}$). Recently Pontzen and Challinor (2007), examining the effects of CMBR polarization induced by global rotation, demonstrated that these effects could be used to determine constraints on the amount of global rotation. Su and Chu (2009) obtained a limit of this kind from an analysis of the 2nd-order Sachs-Wolfe effect, namely that the angular velocity of shear-free rotation of a $\Lambda$CDM universe is less than $\omega \sim 10^{-9} \, {\rm rad \, yr}^{-1}$ at the last scattering surface. This constraint is weaker than that obtained by Barrow, Juszkiewicz and Sonoda (1985) for the flat Bianchi models. Chechin (2010) investigated the rotational effects of the cosmic vacuum. Considering the global rotation and the induced rotation of elliptical galaxies, he estimated the value of the angular velocity of the Universe in his model as $\omega \sim 10^{-19} \, {\rm rad \, s}^{-1}$ ($3 \times 10^{-11} \, {\rm rad \, yr}^{-1}$). In the light of the above results the global rotation of the Universe cannot be regarded as confirmed by observations; thus, instead of an observational confirmation of the rotation of the Universe, one should rather talk about rotational limits within a given cosmological model. Relativistic models with rotation, and geodesics in spacetimes behaving according to general relativity, were investigated already in the first half of the 20th century, beginning with the works of Lanczos (1924), Gamow (1946) and G\"{o}del (1949). Ellis and Olive (1983) as well as Gron and Soleng (1987) demonstrated that the global rotation in inflationary models of the Universe - if any - should be small. Demia{\'n}ski and Griszczuk (1972) discussed solutions of the Einstein equations for the case of a flat homogeneous anisotropic space filled with an expanding and rotating ideal fluid with non-zero shear, addressing primarily the question of how rotation influences the behaviour of matter near the initial singularity. The Einstein equations with a rotating fluid were also analyzed in a number of studies by Krasi{\'n}ski (Krasi{\'n}ski 1975, 1997, 1998a,b,c, 1999, 2001a,b). Fil'chenkov (2003, 2005) considered the possibility of a cosmological origin of the rotation of astronomical objects due to a tunneling effect in quantum cosmology. The dynamics of Friedmann-Robertson-Walker (FRW) models with global rotation was discussed in Szyd{\l}owski and God{\l}owski (2005). The role of rotation of objects in the Universe and its significance for astronomical measurements was analyzed in Vishvakarma (2006). Recently Bratek, Jalocha and Kutschera (2007) found a class of explicit solutions with cylindrical symmetry for a differentially rotating gas. A discussion of anisotropic cosmological models with magnetic field and rotation can be found in Demia{\'n}ski and Doroshkevich (2007). Jain, Modgil and Ralston (2007) examined the Type Ia supernova data in order to determine whether they show any signal of large-scale anisotropy. The anisotropy was modeled by an extended G\"{o}del metric (the G\"{o}del-Obukhov metric), which incorporates expansion along with rotation. 
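As a quick consistency check of the quoted numbers (a minimal sketch; the value $H_0 \approx 70 \, {\rm km \, s^{-1} \, Mpc^{-1}}$ is our assumption, not stated above), the limit $\omega/H_0\sim 2 \times 10^{-5}$ indeed corresponds to $\omega\sim 1.5 \times 10^{-15} \, {\rm rad \, yr}^{-1}$:

KM_PER_MPC = 3.0857e19       # kilometres per megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0 / KM_PER_MPC * SECONDS_PER_YEAR   # Hubble constant in 1/yr
omega = 2e-5 * H0                           # the Barrow et al. (1985) limit
print(f"omega ~ {omega:.1e} rad/yr")        # ~1.4e-15, consistent with the text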
Sousa, Pereira and Silva (2008) considered the relation of energy and momentum density in G\"{o}del's universes, while Akarsu and Kilinc (2010) analyzed locally rotationally symmetric (LRS) Bianchi Type I cosmological models in the presence of dynamically anisotropic dark energy and a perfect fluid. Iorio (2010) provided a detailed account of the influence of large-scale rotation on the Solar System. The main problem in testing models with non-vanishing rotation is that to date there are no accepted observables for this purpose. It is also worth noting that while some upper limits for rotation have been established from analyses of the CMBR and primeval nucleosynthesis (G\"{o}del 1949, Hawking 1969, Collins and Hawking 1973, Barrow, Juszkiewicz and Sonoda 1985, Ciufolini and Wheeler 1995, Bunn, Ferreira and Silk 1996, Kogut, Hinshaw and Banday 1997), all these works were based on models involving both shear and rotation, which makes discussing rotation a rather complicated task due to the intertwined effects of shear and rotation. Thus it is easier to study a Newtonian equivalent of this model, which allows for rotation with no shear. \section{Cosmological models with dark radiation} In looking for limits on the amount of rotation it is convenient to use Senovilla's formulation of Newtonian cosmology. According to Senovilla, Sopuerta and Szekeres (1998), the density and pressure in a Newtonian universe filled with a homogeneous fluid do not explicitly depend on the spatial variables, while they do depend on time. It is, however, assumed that the fluid velocity depends linearly on the spatial variables (Szekeres and Rankin 1977, Senovilla, Sopuerta and Szekeres 1998). In this case, contrary to the general relativity solutions, there are shear-free solutions that fulfill the Heckmann-Sch\"{u}cking equations with expansion and rotation. This situation has no equivalent in general relativity, where homogeneous, non-tilted, rotating and expanding universes with ideal fluid have to possess a non-vanishing shear (models where the 4-velocity vectors $u^{\mu}$ are not orthogonal to the surfaces of homogeneity are called ``tilted''; see Obukhov 2000 and Obukhov, Chrobok and Scherfner 2002) (Ellis 1966, King and Ellis 1973, Raychaudhuri 1979, Collins 1985). In a Newtonian universe one can readily derive the observables required for cosmological tests. On the other hand, the Newtonian approximation
ccn = np.zeros((temperatures.shape[0], chempots.shape[0], 3, 3)) ccp = np.zeros((temperatures.shape[0], chempots.shape[0], 3, 3)) # set transport tensor scheme for analytick in python # which is slow, so we can choose only to calculate certain elements # TODO: incorporate this for all methods pylint: disable=fixme # loop temperatures for indext, temp in np.ndenumerate(temperatures): # fetch eta for each band etas = self.fetch_etas(chempots, temp).T # fetch tau0 for a given temperature tau0 = self.scattering_tau0[indext] # loop chemical potentials and calculate closed integrals for indexe in range(chempots.shape[0]): sigma_tensor, seebeck_tensor, lorenz_tensor, \ hall_tensor, cc_tensor_n, cc_tensor_p = lbtecoeff.parabolice(self, etas[indexe], temp, bs, tau0, method) sigma[indext, indexe] = sigma_tensor seebeck[indext, indexe] = seebeck_tensor lorenz[indext, indexe] = lorenz_tensor hall[indext, indexe] = hall_tensor ccn[indext, indexe] = cc_tensor_n ccp[indext, indexe] = cc_tensor_p # fully numerick evaluation, for the purpose of speed, the loop # over temperature and chemical potential is done internally. # The return is (temperature,chempot,3,3) arrays else: sigma, seebeck, lorenz = lbtecoeff.numerick( self, chempots, temperatures, bs) # TODO: FIX THE HALL TENSOR ASAP (MAYBE ALSO THE NERNST) pylint: disable=fixme hall = sigma ccn = np.zeros((temperatures.shape[0], chempots.shape[0], 3, 3)) ccp = np.zeros((temperatures.shape[0], chempots.shape[0], 3, 3)) # calculate the carrier concentration if numerick: # calculate the carrier concentration # check if dos exists if ((self.bs.dos_partial is None) or (not self.param.carrier_dos_analytick)): self.bs.calc_density_of_states() # loop temperatures for indext, temperature in np.ndenumerate(temperatures): # loop chempots for indexe, chempot in np.ndenumerate(chempots): ptype, ntype, _ = self.calc_carrier_concentration( temperature, chempot) ccp[indext, indexe, 0, 0] = ptype ccn[indext, indexe, 0, 0] = ntype self.sigma = sigma self.seebeck = seebeck self.lorenz = lorenz self.hall = hall self.ccn = ccn self.ccp = ccp def fetch_relevant_bands(self, tr=None): """ Locate bands that will be included in the transport integrals. Parameters ---------- tr : object, optional A `Transport()` object. Returns ------- None Notes ----- The included bands are located by considering the input range of chemical potentials from `transport_chempot_min` and `transport_chempot_max` padded with the value `transport_energycutband` on each side (see the general configuration file). """ # set logger logger = logging.getLogger(sys._getframe().f_code.co_name) # pylint: disable=protected-access logger.debug("Running fetch_relevant_bands.") if tr is None: energies = self.bs.energies param = self.param else: energies = tr.bs.energies param = tr.param # check if user supplied specific bands for calculation and # let them know if they have supplied this (easy to make mistakes) if param.transport_include_bands: # first check that we actually have all the necessary bands band_index = np.amax(param.transport_include_bands) if band_index > energies.shape[0]: logger.error("User requested a band that is not included in " "the original dataset. 
Exiting.") sys.exit(1) logger.info( "User supplied specific bands so we are only performing " "transport calculation on those.") # shift index to zero transport_included_bands = [ x - 1 for x in param.transport_include_bands ] else: e_min = param.transport_chempot_min - param.transport_energycutband e_max = param.transport_chempot_max + param.transport_energycutband transport_included_bands = [] # loop bands, later add vectorize on band as well for band in range(energies.shape[0]): if energies[band][(energies[band] > e_min) & (energies[band] < e_max)].size != 0: transport_included_bands.append(band) if tr is None: self.included_bands = np.array(transport_included_bands, dtype='intc') else: tr.included_bands = np.array(transport_included_bands, dtype='intc') def calc_carrier_concentration( # pylint: disable=too-many-locals self, temperature, chempot, dos=None, dos_energies=None, band_decomp=False, defect_ionization=False): r""" Returns the charge carrier concentration. Parameters ---------- temperature : float The temperature in K. chempot : float The chemical potential in eV. dos : ndarray, optional | Dimension: (N,M) Contains the band decomposed density of states for each band N and energy M. If not supplied, set to the `dos_partial` parameter of the current `Bandstructure()` object. dos_energies : ndarray, optional | Dimension: (M) The energies in eV where the density of states are sampled. band_decomp : boolean Return a band decomposed carrier concentration or not. defect_ionization : boolean Selects if defect ionization compensation should be included. The `donor_number`, `donor_energy`, `donor_degen_fact`, `acceptor_number`, `acceptor_energy` and `acceptor_degen_fact` need to be set in the general configuration file. Returns ------- n_type : ndarray | Dimension: (N) Contains the n-type carrier concentration for each band index N in units of :math:`10^{21} \mathrm{cm}^{-3}`. p_type : ndarray | Dimension: (N) Contains the p-type carrier concentration for each band index N in units of :math:`10^{21} \mathrm{cm}^{-3}`. 
""" # set logger logger = logging.getLogger(sys._getframe().f_code.co_name) # pylint: disable=protected-access logger.debug("Running calc_carrier_concentration.") if dos is None: dos = self.bs.dos_partial if dos_energies is None: dos_energies = self.bs.dos_energies num_bands = self.bs.bandparams.shape[0] n_type = np.zeros(num_bands) p_type = np.zeros(num_bands) ntype_index = np.where( dos_energies > self.param.carrier_conduction_energy) ptype_index = np.where( dos_energies < self.param.carrier_valence_energy) dos_energies_ntype = dos_energies[ntype_index] dos_energies_ptype = dos_energies[ptype_index] intrinsic = np.zeros(num_bands) beta = 1e5 / (constants.kb * temperature) for band in range(num_bands): if dos_energies_ntype.size > 0: # n-type, use only energies from carrier_conduction_energy # to the end of the array set in param.yml, slice integrand = dos[band][ntype_index] * \ fermi_dist(dos_energies_ntype, chempot, beta) n_type[band] = scipy.integrate.trapz(integrand, dos_energies_ntype) # p-type, use only energies from start of array to # carrier_valence_energy set in param.yml, slice if dos_energies_ptype.size > 0: integrand = dos[band][ptype_index] * \ fermi_dist(-dos_energies_ptype, -chempot, beta) p_type[band] = scipy.integrate.trapz(integrand, dos_energies_ptype) # make sure units of carrier concentration is 10^21 cm^-3 n_type = 1e3 * n_type p_type = 1e3 * p_type # calculte intrinsic^2 (sum for each band first) intrinsic = np.multiply(n_type.sum(-1), p_type.sum(-1)) if defect_ionization: donor_number = self.param.donor_number donor_degen_fact = self.param.donor_degen_fact donor_energy = self.param.donor_energy acceptor_number = self.param.acceptor_number acceptor_degen_fact = self.param.acceptor_degen_fact acceptor_energy = self.param.acceptor_energy donor_ion_number = donor_ionization(donor_number, donor_energy, donor_degen_fact, chempot, beta) acceptor_ion_number = acceptor_ionization(acceptor_number, acceptor_energy, acceptor_degen_fact, chempot, beta) n_type = 0.5 * (donor_ion_number - acceptor_ion_number) + \ np.sqrt(np.power(0.5 * (donor_ion_number - acceptor_ion_number), 2.0) + intrinsic) p_type = 0.5 * (acceptor_ion_number - donor_ion_number) + \ np.sqrt(np.power(0.5 * (acceptor_ion_number - donor_ion_number), 2.0) + intrinsic) if not band_decomp: p_type = p_type.sum(-1) n_type = n_type.sum(-1) return p_type, n_type, np.sqrt(intrinsic) def fetch_temperatures(self, store=True): """ Set up the temperatures. Parameters ---------- store : boolean, optional If given and set to True, the temperature array is in addition to being returned also stored in the active `Transport()` object. Returns ------- temperature : (N) ndarray Contains N temperature linear samplings in units of K. The parameters `temperature_min`, `temperature_max` and `temperature_steps` in param.yml set the maximum and minimum temperature and its number of steps. """ # set logger logger = logging.getLogger(sys._getframe().f_code.co_name) # pylint: disable=protected-access logger.debug("Running fetch_temperatures.") temperature = np.linspace(self.param.temperature_min, self.param.temperature_max, self.param.temperature_steps) if store: self.temperature = temperature return temperature return temperature def fetch_chempots(self, store=True): """ Set up the chemical potential. Parameters ---------- store : boolean, optional If given and set to True, the chempot array is in addition to being returned also stored in the current `Transport()` object. 
Returns ------- chempot : ndarray | Dimension: (N) Contains N chemical potential linear samplings in units of eV. The parameters `transport_chempot_min`, `transport_chempot_max` and `transport_chempot_samples` in param.yml set the maximum and minimum chemical potential and its number of samples. """ # set logger logger = logging.getLogger(sys._getframe().f_code.co_name) # pylint: disable=protected-access logger.debug("Running fetch_chempots.") chempots = np.linspace(self.param.transport_chempot_min, self.param.transport_chempot_max, self.param.transport_chempot_samples) if store: self.chempots = chempots return chempots return chempots def fetch_etas(self, chempot, temperature): """ Calculate the reduced chemical potential Parameters ---------- chempot : ndarray | Dimension: (N) Contains N samples of the chemical potential in units of eV. temperature : float The temperature in K. Returns ------- eta : ndarray | Dimension: (N) Contains N samples of the reduced chemical potential """ # set logger logger = logging.getLogger(sys._getframe().f_code.co_name) # pylint: disable=protected-access logger.debug("Running fetch_etas.") # convert to eta, shift and loop eta = np.zeros((self.bs.e0.shape[0], chempot.shape[0])) # valence bands: eta=e_shift-chempot bandtype = np.where(self.bs.status == 'v') eta[bandtype] = 1e5 * (self.bs.e0[bandtype, np.newaxis] - chempot[np.newaxis, :]) / \ (constants.kb * temperature) band = bandtype[0].shape[0] # conduction bands: eta=chempot-e_shift bandtype = np.where(self.bs.status == 'c') eta[bandtype] = 1e5 * (chempot[np.newaxis, :] - self.bs.e0[bandtype, np.newaxis]) / \ (constants.kb * temperature) band += bandtype[0].shape[0] # check that there are no funny bands not marked as v or c if band != self.bs.e0.shape[0]: logger.error("Some bands are not marked as a conduction or \ valence band. Please correct input files. Exiting.") sys.exit(1) return eta def fetch_chempot_from_etas(temperature, etas): r""" Calculate the chemical potential from eta and the temperature.
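To make the carrier-concentration quadrature used in calc_carrier_concentration concrete, here is a standalone sketch (toy square-root density of states and synthetic units; not part of the module above) of the same integral n = ∫ DOS(E) f(E; μ, T) dE:

import numpy as np
from scipy.integrate import trapezoid

def fermi_dist(energy, chempot, beta):
    # Fermi-Dirac occupation
    return 1.0 / (np.exp(beta * (energy - chempot)) + 1.0)

KB_EV = 8.617333e-5                        # Boltzmann constant in eV/K
energies = np.linspace(0.0, 2.0, 2001)     # eV above the band edge
dos = np.sqrt(energies)                    # toy parabolic-band density of states
beta = 1.0 / (KB_EV * 300.0)               # 1/(kB*T) at 300 K

n = trapezoid(dos * fermi_dist(energies, 0.1, beta), energies)
print(f"toy n-type carrier concentration: {n:.3e} (arbitrary units)")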
$(\what{\mathcal{F}}_L)_{ij}(q,\what{q})$ which only depends on the value $q$ inside the cell $K$ and the field $\what{q}$ on its boundary. To see this, recall that for hyperbolic conservation laws the numerical upwind flux can be constructed by solving a Riemann problem on the interface between two cells (see e.g. section 2.4 of \cite{Hesthaven2007}). The solution contains both the field $q$ inside the cell and an intermediate state $q^*$, which is usually eliminated with a jump condition. Since this jump condition involves the value of fields on both sides of the interface, the resulting flux in Eq. \eqref{eqn:flux_linear_upwind} depends on $q^+$ and $q^-$. As argued in \cite{BuiThanh2015,BuiThanh2016}, the HDG method represents the intermediate state as $\what{q}=q^*$ in the space $\what{\Lambda}_h$ instead of eliminating $q^*$; the jump condition is enforced weakly. $\what{\mathcal{F}}_L$ replaces the numerical flux $\mathcal{F}^*_L$ in Eq. \eqref{eqn:SWEweak_form_linear} and is given by either $\what{\mathcal{F}}_L^{(\text{LF})}$ (Lax-Friedrichs) or $\what{\mathcal{F}}_L^{(\text{up})}$ (upwind) defined on the boundary $\partial K$ of each cell as \begin{subequations} \begin{align} (\what{\mathcal{F}}_L^{(\text{LF})})_{ij}(q,\what{q})n_j &= (\mathcal{F}_L)_{ij}(q)n_j + c_g\sqrt{\phi_B}(q_i-\what{q}_i)\label{eqn:flux_hybrid_linear_LaxFriedrichs}\\ (\what{\mathcal{F}}_L^{(\text{up})})_{ij}(q,\what{q})n_j &= (\mathcal{F}_L)_{ij}(q)n_j + c_g\sqrt{\phi_B}B_{ik}(q_k-\what{q}_k). \label{eqn:flux_hybrid_linear_upwind} \end{align} \end{subequations} To close the system of equations, continuity of the flux is enforced weakly by requiring that on each facet $e\in \mathcal{E}_h$ \begin{equation} \langle \favg{(\what{\mathcal{F}}_L)_{ij}(q,\what{q})n_j},\what{v}_i\rangle_e = 0\qquad\text{for all test functions $\what{v}\in\what{\Lambda}_h$}. \label{eqn:flux_condition} \end{equation} To simplify notation, introduce the following abbreviations: \begin{equation} \begin{aligned} \mathcal{L}(q,v) &=\sum_{K\in\Omega_h}\left\{\left((\mathcal{F}_L)_{ij}(q),\partial_jv_i\right)_K - \langle (\mathcal{F}^*_L)_{ij}(q^+,q^-)n_j,v_i\rangle_{\partial K}\right\}\\ \what{\mathcal{L}}(q,\what{q},v) &= \sum_{K\in\Omega_h}\left\{\left((\mathcal{F}_L)_{ij}(q), \partial_jv_i\right)_K - \langle (\what{\mathcal{F}}_L)_{ij}(q,\what{q})n_j,v_i\rangle_{\partial K}\right\}\\ \mathcal{N}_0(q,v) &= \sum_{K\in\Omega_h}(s_{i}(q),v_i)_K\quad\text{(see Eq. \eqref{eqn:SWEweak_form_linear})}\\ \mathcal{M}(q,v) &= \sum_{K\in\Omega_h}(q_i,v_i)_K\\ \Xi(q,\what{q},\what{v}) &= \sum_{e\in\mathcal{E}_h}\langle \favg{(\what{\mathcal{F}}_L)_{ij}(q,\what{q})n_j},\what{v}_i\rangle_{e} \end{aligned} \label{eqn:L_definitions} \end{equation} With the definitions in Eq. \eqref{eqn:L_definitions}, after summing over all cells $K\in\Omega_h$ the weak form of the linear SWEs discretised with DG in Eq. \eqref{eqn:SWEweak_form_linear} can be written in condensed form as \begin{equation} \frac{\partial \mathcal{M}(q,v)}{\partial t} = \mathcal{N}_0(q,v)+\mathcal{L}(q,v)\qquad\text{for all $v\in W_h$}.\label{eqn:linear_non_hybridized} \end{equation} The equivalent HDG discretisation of Eq. \eqref{eqn:SWEweak_form_linear} is obtained by replacing $\mathcal{F}^*_L$ by $\what{\mathcal{F}}_L$ in Eq. \eqref{eqn:linear_non_hybridized} and enforcing the continuity condition in Eq. \eqref{eqn:flux_condition}. 
This results in \begin{xalignat}{2} \frac{\partial \mathcal{M}(q,v)}{\partial t} &= \mathcal{N}_0(q,v) + \what{\mathcal{L}}(q,\what{q},v), &\text{subject to}\qquad \Xi(q,\what{q},\what{v}) &= 0\qquad\text{for all $v\in W_h$, $\what{v}\in\what{\Lambda}_h$}. \label{eqn:linear_hybridized} \end{xalignat} As has been shown in \cite{BuiThanh2016}, the solution $q$ of the original, non-hybridized system in Eq. \eqref{eqn:linear_non_hybridized} is identical to the solution $q$ of the HDG problem given in Eq. \eqref{eqn:linear_hybridized}. In general, the additional field $\what{q}=(\what{\phi},\what{\vec{u}})=(\what{\phi},\what{u},\what{v})\in\what{\Lambda}_h$ introduced in Eq. \eqref{eqn:linear_hybridized} on the facets has three components. As will be discussed in Section \ref{sec:Schur_MG_linear_eqn} below, $\what{\vec{u}}$ can be eliminated for the upwind flux, whereas $\what{\phi}$ does not enter the equations if the Lax-Friedrichs method is used. \pparagraph{Case 2: non-linear SWEs} Similarly, the weak form of the non-linear SWEs in Eq. \eqref{eqn:SWE_continuum_compact} can be written for each cell $K\in\Omega_h$ as \begin{equation} \left(\frac{\partial q_i}{\partial t},v_i\right)_K - \left(\mathcal{F}_{ij}(q),\partial_j v_i\right)_K + \langle \mathcal{F}^*_{ij}(q^+,q^-)n_j,v_i\rangle_{\partial K} = (s_i(q),v_i)_K\qquad\text{for all $v\in W_h$}. \label{eqn:SWEnonlinear_weak_form} \end{equation} where $\mathcal{F}^*=\mathcal{F}^{(\text{LF})}$ is the non-linear Lax-Friedrichs flux \begin{equation} \mathcal{F}^{\text{(LF)}}_{ij}(q^+,q^-)n_j = \favg{\mathcal{F}_{ij}(q)}n_j + \frac{c_g}{2}\tau^*\fdiff{q_i}\label{eqn:flux_LaxFriedrichs} \end{equation} with $\tau^* = \max\left\{|\vec{n}\cdot\vec{u}^+|+\sqrt{\phi_B+\phi^+},|\vec{n}\cdot\vec{u}^-|+\sqrt{\phi_B+\phi^-}\right\}$. Further define \begin{equation} \mathcal{N}(q,v) = \sum_{K\in\Omega_h}\left\{ \left((\mathcal{F}-\mathcal{F}_L)_{ij}(q),\partial_j v_i\right)_K - \langle (\mathcal{F}^*-\mathcal{F}^*_L)_{ij}(q^+,q^-)n_j,v_i\rangle_{\partial K} + (s_i(q),v_i)_K\right\} \label{eqn:NL_definitions} \end{equation} With the definitions in Eqs. \eqref{eqn:L_definitions} and \eqref{eqn:NL_definitions}, after summing over all cells $K\in\Omega_h$ the weak form of the DG discretisation in Eq. \eqref{eqn:SWEnonlinear_weak_form} can be written in condensed form as \begin{equation} \frac{\partial \mathcal{M}(q,v)}{\partial t} = \mathcal{N}(q,v)+\mathcal{L}(q,v)\qquad\text{for all $v\in W_h$}.\label{eqn:non_linear_non_hybridized} \end{equation} As above, the equivalent HDG discretisation of Eq. \eqref{eqn:SWEnonlinear_weak_form} is obtained by replacing $\mathcal{F}^*_L$ by $\what{\mathcal{F}}_L$ in the linear term $\mathcal{L}$ in Eq. \eqref{eqn:non_linear_non_hybridized} and enforcing the continuity condition in Eq. \eqref{eqn:flux_condition}. This results in \begin{xalignat}{2} \frac{\partial \mathcal{M}(q,v)}{\partial t} &= \mathcal{N}(q,v) + \what{\mathcal{L}}(q,\what{q},v), &\text{subject to}\quad \Xi(q,\what{q},\what{v}) &= 0\qquad\text{for all $v\in W_h$, $\what{v}\in\what{\Lambda}_h$}. \label{eqn:nonlinear_hybridized} \end{xalignat} By splitting the right hand side into two terms, we have isolated the fast dynamics due to gravity waves in $\what{\mathcal{L}}(q,\what{q},v)$, whereas any slower modes are described by $\mathcal{N}(q,v)$. This observation is crucial for the construction of suitable semi-implicit time integrators, which avoid numerical instabilities due to the fast gravity waves. 
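Before turning to the concrete schemes, the following minimal sketch (with toy matrices standing in for the discretised operators, not the HDG system itself) illustrates the implicit-explicit idea: the stiff linear term is advanced implicitly, the slow term explicitly, so the time step is not limited by the fast waves:

import numpy as np

rng = np.random.default_rng(0)
n = 8
L = -100.0 * np.eye(n) + rng.standard_normal((n, n))   # stiff linear operator

def N(q):
    # slow, non-stiff term, treated explicitly
    return -0.1 * q**3

q = rng.standard_normal(n)
dt = 0.1        # far above the explicit stability limit ~2/100 for L
for _ in range(100):
    # IMEX Euler step: (I - dt*L) q_new = q + dt*N(q)
    q = np.linalg.solve(np.eye(n) - dt * L, q + dt * N(q))
print(np.linalg.norm(q))   # stays bounded despite the large time step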
\subsection{Time discretisation}\label{sec:IMEXmethods} To integrate Eq. \eqref{eqn:nonlinear_hybridized} (Eq. \eqref{eqn:linear_hybridized} can be dealt with in exactly the same way) in time, the term $\mathcal{N}(q,v)$ is integrated explicitly, whereas $\what{\mathcal{L}}(q,\what{q},v)$ is treated implicitly. Denoting the solution at time $t=n\Delta t$ by $q^{(n)}$, a simple scheme is, for example, the following ``\textit{$\theta$-method}'': \begin{equation} \frac{\mathcal{M}(q^{(n+1)},v)-\mathcal{M}(q^{(n)},v)}{\Delta t} = \mathcal{N}(q^{(n)},v) + \theta \what{\mathcal{L}}(q^{(n+1)},\what{q}^{(n+1)},v) + (1-\theta) \mathcal{L}(q^{(n)},v)\qquad\text{for all $v\in W_h$} \label{eqn:theta_method} \end{equation} with $\theta\in[0,1]$ and subject to the flux condition Eq. \eqref{eqn:flux_condition}. The equivalent expression for the linear SWEs in Eq. \eqref{eqn:linear_hybridized} can be obtained by replacing $\mathcal{N}\rightarrow\mathcal{N}_0$ in Eq. \eqref{eqn:theta_method}. At each time step, calculating the field $q^{(n+1)}$ requires the solution of the \textit{linear} system \begin{equation} \begin{aligned} \mathcal{M} (q^{(n+1)},v) - \theta\Delta t \what{\mathcal{L}}(q^{(n+1)},\what{q}^{(n+1)},v) &= \mathcal{R}(q^{(n)},\what{q}^{(n)},v) \\ &:= \mathcal{M}(q^{(n)},v) + \Delta t\left(\mathcal{N}(q^{(n)},v)+(1-\theta)\mathcal{L}(q^{(n)},v)\right)\\ \text{subject to}\qquad\Xi(q,\what{q},\what{v}) &= 0\qquad\text{for all $v\in W_h$, $\what{v}\in\what{\Lambda}_h$.} \end{aligned} \label{eqn:theta_linear_system} \end{equation} Note that for a purely explicit method ($\theta=0$), the system matrix arising from the left hand side of the system in Eq. \eqref{eqn:theta_linear_system} is simply the mass matrix (obviously, in this case Eq. \eqref{eqn:theta_method} reduces to the explicit Euler integrator). On the other hand, $s$-stage IMEX methods \cite{Ascher1995,Ascher1997,Pareschi2000,Kennedy2003, Weller2013} are a generalisation of the method in Eq. \eqref{eqn:theta_method}. They require the construction of $s$ intermediate states $\{(Q^{(i)},\what{Q}^{(i)})\}_{i=1}^s$ which are obtained by solving the system of linear equations \begin{equation} \begin{aligned} \mathcal{M}(Q^{(i)},v) &= \mathcal{M}(q^{(n)},v) + \Delta t\sum_{j=1}^{i-1} a_{ij} \mathcal{N}(Q^{(j)},v) + \Delta t \sum_{j=1}^{i}\widetilde{a}_{ij}\what{\mathcal{L}}(Q^{(j)},\what{Q}^{(j)},v),\\ \text{with}\qquad\Xi(Q^{(i)},\what{Q}^{(i)},\what{v})&=0\qquad\text{for each stage $i=1,\dots,s$ and all $v\in W_h$, $\what{v}\in\what{\Lambda}_h$}.\label{eqn:IMEX_HDG_I} \end{aligned} \end{equation} The state $q^{(n+1)}$ at the next timestep satisfies \begin{equation} \mathcal{M}(q^{(n+1)},v) = \mathcal{M}(q^{(n)},v) + \Delta t\sum_{i=1}^s b_i \mathcal{N}(Q^{(i)},v) + \Delta t\sum_{i=1}^s \widetilde{b}_i \what{\mathcal{L}}(Q^{(i)},\what{Q}^{(i)},v)\label{eqn:IMEX_HDG_II} \end{equation} and is obtained by an additional mass-solve. Each particular IMEX method is determined by a choice of coefficients $a_{ij}$, $\widetilde{a}_{ij}$, $b_i$, $\widetilde{b}_i$ which are commonly known as the Butcher tableau coefficients for the explicit and implicit terms. The equations in Eq. \eqref{eqn:IMEX_HDG_I} can be solved stage-by-stage. At every stage a \textit{linear} system of the abstract general form \begin{equation} \what{\mathcal{A}}(q,\what{q},v,\what{v})=\mathcal{M}(q,v) - \alpha\Delta t \what{\mathcal{L}}(q,\what{q},v) + \Xi(q,\what{q},\what{v}) = \mathcal{R}(v) \qquad\text{for all $v\in W_h$, $\what{v}\in\what{\Lambda}_h$}
\label{eqn:general_linear_system} \end{equation} with some positive constant $\alpha\in \mathbb{R}^+$ has to be solved\footnote{ Note that the \textit{``$\theta$-method''} in Eq. \eqref{eqn:theta_method} could be written as a 2-stage IMEX method with \begin{xalignat*}{4} a &= \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} & \widetilde{a} &= \begin{pmatrix} 0 & 0 \\ 1-\theta & \theta \end{pmatrix} & b &= \begin{pmatrix} 1 \\ 0 \end{pmatrix} & \widetilde{b} &= \begin{pmatrix} 1-\theta\\\theta \end{pmatrix} \end{xalignat*} but this would be inefficient since it requires an additional mass-solve in the first stage.}. Typically the value of $\alpha$ is different in each stage of the IMEX method and the right hand side $\mathcal{R}$ depends on previously calculated fields. To see that $\what{\mathcal{A}}$ in Eq. \eqref{eqn:general_linear_system} is indeed linear in the combined field $(q,\what{q})\in W_h\times \what{\Lambda}_h$, use the definitions of $\mathcal{M}$, $\what{\mathcal{L}}$ and $\Xi$ in Eqs. \eqref{eqn:flux_hybrid_linear_LaxFriedrichs}, \eqref{eqn:flux_hybrid_linear_upwind}, \eqref{eqn:L_definitions} and observe that \begin{equation*} \what{\mathcal{A}}(c_1 q_1 + c_2 q_2,c_1 \what{q}_1+c_2\what{q}_2,v,\what{v}) = c_1 \what{\mathcal{A}}(q_1,\what{q}_1,v,\what{v})+c_2\what{\mathcal{A}}(q_2,\what{q}_2,v,\what{v})\qquad\text{for all $v\in W_h$, $\what{v}\in\what{\Lambda}_h$} \end{equation*} and for all $c_1,c_2\in\mathbb{R}$, $(q_1,\what{q}_1),(q_2,\what{q}_2)\in W_h\times\what{\Lambda}_h$. \section{Schur-complement multigrid solver}\label{sec:IMEXHDG} Constructing efficient solvers for Eq. \eqref{eqn:general_linear_system} is the topic of this paper. In the following we give explicit expressions for $\what{\mathcal{A}}$ for the Lax-Friedrichs and upwind flux. We then describe a Schur complement approach which reduces the problem to solving an elliptic system in the flux variable $\what{q}$ on the facets. This flux system is solved with a new non-nested multigrid preconditioner based on ideas in \cite{Cockburn2014}. \subsection{Linear equation}\label{sec:Schur_MG_linear_eqn} Recall that $q=(\phi,\vec{u})\in W_h$ in the domain, $\what{q}=(\what{\phi},\what{\vec{u}})\in\what{\Lambda}_h$ on the facets, and write the corresponding test-functions as $v=(\psi,\vec{w})\in W_h$, $\what{v}=(\what{\psi},\what{\vec{w}})\in\what{\Lambda}_h$. Using the definitions in Eqs. \eqref{eqn:flux_linear_LaxFriedrichs} and \eqref{eqn:flux_linear_upwind}, the bilinear form $\what{\mathcal{A}}(q,\what{q},v,\what{v})$ on the left hand side of Eq. \eqref{eqn:general_linear_system} can be written down explicitly for the upwind- and Lax-Friedrichs flux. As shown in \cite{BuiThanh2016}, the resulting bilinear form $\what{\mathcal{A}}^{(\text{up})}$ for the upwind flux does not depend on $\what{\vec{u}}$. For the Lax-Friedrichs flux the form $\what{\mathcal{A}}^{(\text{LF})}$ does not depend on $\what{\phi}$.
Explicit expressions for those forms are given as follows: \begin{description} \item{\textbf{Upwind flux:}} \begin{equation} \begin{aligned} \what{\mathcal{A}}^{(\text{up})}(q,\what{q},v,\what{v})= \what{\mathcal{A}}^{(\text{up})}(\phi,\vec{u},\what{\phi},\psi,\vec{w},\what{\psi}) &:= \left(\phi\psi + \vec{u}\cdot\vec{w}\right)_{\Omega_h} - c_g\alpha\Delta t \Big[ \left(\vec{u}\cdot\nabla\psi + \phi_B\phi\nabla\cdot\vec{w}\right)_{\Omega_h}\\ &\quad-\;\;\left(\fjump{\vec{u}\psi}+2\sqrt{\phi_B}\left(\favg{\phi\psi}-\what{\phi}\favg{\psi}\right)+\phi_B\what{\phi}\fjump{\vec{w}}\right)_{\mathcal{E}_h} \Big]\\ &\quad+\;\;\left(\what{\psi}\left[\fjump{\vec{u}}+2\sqrt{\phi_B}\left(\favg{\phi}-\what{\phi}\right)\right]\right)_{\mathcal{E}_h} \end{aligned}\label{eqn:bilinear_HDG_upwind} \end{equation} \item{\textbf{Lax-Friedrichs flux:}} \begin{equation} \begin{aligned} \what{\mathcal{A}}^{(\text{LF})}(q,\what{q},v,\what{v})= \what{\mathcal{A}}^{(\text{LF})}(\phi,\vec{u},\what{\vec{u}},\psi,\vec{w},\what{\vec{w}}) &:=\left(\phi\psi + \vec{u}\cdot\vec{w}\right)_{\Omega_h} - c_g\alpha\Delta t \Big[ \left(\vec{u}\cdot\nabla\psi + \phi_B\phi\nabla\cdot\vec{w}\right)_{\Omega_h}\\ &\quad-\;\;\left( \what{\vec{u}}\fjump{\psi}+2\sqrt{\phi_B}\left( \favg{\vec{u}\cdot\vec{w}}-\what{\vec{u}}\cdot\favg{\vec{w}} \right) +\phi_B\fjump{\phi\vec{w}} \right)_{\mathcal{E}_h} \Big]\\ &\quad+\;\;\left(\what{\vec{w}}\cdot\left[ \phi_B\fjump{\phi}+2\sqrt{\phi_B}\left( \favg{\vec{u}}-\what{\vec{u}} \right) \right]\right)_{\mathcal{E}_h} \end{aligned}\label{eqn:bilinear_HDG_lax} \end{equation} \end{description} The system in Eq. \eqref{eqn:general_linear_system} can be solved with a Schur-complement approach by eliminating the field $q$ defined on the elements and leaving a problem for $\what{q}$ defined on the facets. For
Time series analysis focuses on sequences of random variables. In this article, I briefly review the basic definitions and properties of time series, the typical models, and the prediction of time series. The concepts and formulas are from Applicational Time Series Analysis by Shuyuan He, published by Peking University Press. Note that this is only a memo containing key knowledge. If you are a beginner, please refer to other tutorials. I picked up most of my statistics knowledge at university in Mandarin, so there might be slight mistakes or imperfections in the terms and grammar of this article. If you have any suggestions, you are more than welcome to email me and help me correct them. # Basic Definitions In this section, the basic definitions related to time series, including the stationary time series, the correlation function, white noise, and the spectrum function, will be reviewed, along with some of their useful properties. ## Time Series A sequence of random variables ordered by time index is called a time series, represented as $X_1, X_2, \ \dots$ For a time series, its observed samples are called a realization of the time series, represented as $x_1, x_2, \ \dots$ Most of the time, a time series is considered to be the sum of 3 components: the trend component $$T_t$$, the seasonal component $$S_t$$, and the random component $$R_t$$. $X_t = T_t + S_t + R_t$ Usually, $$T_t$$ and $$S_t$$ are considered non-random functions and can be estimated by methods such as regression. This article mainly focuses on the random component, $$R_t$$. ## Stationary Time Series A time series $$\{X_t\}$$ is called a stationary time series if it satisfies the following. • For any $$t \in \mathbb{N}$$, $$EX_t^2 < \infty$$ • For any $$t \in \mathbb{N}$$, $$EX_t = \mu$$ • For any $$t, s \in \mathbb{N}$$, $$E[(X_t - \mu)(X_s - \mu)] = \gamma_{t - s}$$ That is, for a stationary time series, every term has a finite second moment, all terms share the same expectation, and the covariance of any two terms is determined by the difference of their time indices instead of the indices themselves. ## Correlation Function For a stationary time series $$\{X_t\}$$ with mean value $$\mu$$, the correlation is defined by the following formula. $\gamma_{k} = E[(X_t - \mu)(X_{t + k} - \mu)]$ The sequence $$\{\gamma_k\}$$ is called the correlation function of the stationary time series. And the correlation matrix $$\Gamma_n$$ is defined as follows. $\Gamma_n = (\gamma_{k - j})_{k, j = 1}^n = \left(\begin{matrix} \gamma_0 & \gamma_1 & \dots & \gamma_{n - 1} \\ \gamma_1 & \gamma_0 & \dots & \gamma_{n - 2} \\ \vdots & \vdots & & \vdots \\ \gamma_{n - 1} & \gamma_{n - 2} & \dots & \gamma_0 \end{matrix}\right)$ The correlation function $$\{\gamma_k\}$$ and the correlation matrix $$\Gamma_n$$ satisfy the following properties. • $$\gamma_k = \gamma_{-k}$$ • $$\Gamma_n$$ is non-negative definite for any $$n \in \mathbb{N}_+$$ • $$|\gamma_k| \leq \gamma_0$$ for any $$k \in \mathbb{Z}$$ ### Estimation of Correlation Function Given $$N$$ samples from a stationary time series, the correlation function can be estimated as follows. $\hat{\gamma}_k = \frac{1}{N} \sum_{j = 1}^{N - k} (x_j - \bar{x}_N)(x_{j + k} - \bar{x}_N)$ ## Partial Correlation Coefficient For a stationary time series with positive definite $$\Gamma_{n + 1}$$ and $$1 \leq k \leq n$$, the Levinson recursive formula holds.
$\begin{cases} a_{1, 1} = \frac{\gamma_1}{\gamma_0} \\ \sigma_0^2 = \gamma_0 \\ \sigma_k^2 = \sigma_{k - 1}^2 (1 - a_{k, k}^2) \\ a_{k + 1, k + 1} = \frac{\gamma_{k + 1} - \sum_{j = 1}^k a_{k, j}\gamma_{k - j + 1}}{\gamma_0 - \sum_{j = 1}^k a_{k, j} \gamma_j} \\ a_{k + 1, j} = a_{k, j} - a_{k + 1, k + 1} a_{k, k - j + 1}, \quad 1 \leq j \leq k \end{cases}$ with $\sigma_k^2 = E(X_{k + 1} - \textbf{a}_k^T\textbf{X}_k)^2 \\ \textbf{X}_n = (X_n, X_{n - 1}, \dots, X_1)^T$ ## White Noise Let $$\{\epsilon_t\}$$ be a stationary time series. If for any $$s, t \in \mathbb{N}$$, $E\epsilon_t = \mu \\ cov(\epsilon_t, \epsilon_s) = \begin{cases} \sigma^2 \qquad t = s \\ 0 \qquad \ \ t \neq s \end{cases}$ then $$\{\epsilon_t\}$$ is called white noise, usually represented as $$WN(\mu, \sigma^2)$$. ### White Noise Test Given the estimation of the correlation coefficient for samples from a time series $\hat{\rho}_k = \frac{\hat{\gamma}_k}{\hat{\gamma}_0}$ If the samples come from white noise, the following random vector approximately obeys an $$m$$-dimensional standard normal distribution. $\sqrt{N} (\hat{\rho}_1, \hat{\rho}_2, \dots, \hat{\rho}_m)$ Thus, the following statistic approximately obeys the chi-square distribution with $$m$$ degrees of freedom. $\chi_m^2 = N \sum_{i = 1}^m \hat{\rho}_i^2$ By applying the chi-square test, it is possible to decide whether the samples are from white noise. ## Spectrum Function For a stationary time series $$\{X_t\}$$ with correlation function $$\{\gamma_k\}$$, if there exists a non-decreasing and right continuous function $$F(\lambda)$$ defined on $$[-\pi, \pi]$$ such that: $\gamma_k = \int_{-\pi}^\pi e^{ik\lambda} d F(\lambda) \\ F(- \pi) = 0$ then $$F(\lambda)$$ is called the spectrum distribution function for $$\{X_t\}$$. If there exists a non-negative function $$f(\lambda)$$ such that: $\gamma_k = \int_{-\pi}^\pi f(\lambda) e^{ik\lambda} d\lambda$ then $$f(\lambda)$$ is called the spectrum density function for $$\{X_t\}$$. For a stationary time series, the spectrum distribution function exists and is unique. The spectrum density function of a real-valued stationary time series is an even function. ### Spectrum Density Function for Stationary Time Series If the correlation function $$\{\gamma_k\}$$ of a stationary time series $$\{X_t\}$$ is absolutely summable, then $$\{X_t\}$$ has the spectrum density function $f(\lambda) = \frac{1}{2 \pi} \sum_{k = - \infty}^\infty \gamma_k e^{-ik\lambda}$ ### Spectrum Density Function for Moving Average Let $$\{\epsilon_t\}$$ be a $$WN(0, \sigma^2)$$, and let the sequence $$\{a_j\}$$ be square summable; then the moving average over $$\{\epsilon_t\}$$ $X_t = \sum_{j = -\infty}^\infty a_j \epsilon_{t - j}$ has spectrum density function $f(\lambda) = \frac{\sigma^2}{2 \pi} \left| \sum_{j = - \infty}^\infty a_j e^{ij\lambda} \right|^2$ ## Hilbert Space for Stationary Time Series Let $$L^2(X)$$ be the set containing all finite linear combinations of random variables from the stationary time series $$\{X_t\}$$: $L^2(X) = \left\{ \sum_{j = 1}^k a_j X(t_j) | a_j \in \mathbb{R}, t_j \in \mathbb{Z}, 1 \leq j \leq k, k \in \mathbb{N}_+ \right\}$ Then $$L^2(X)$$ is a linear space. With the inner product of elements $$X$$ and $$Y$$ defined as $$E(XY)$$, the closure of $$L^2(X)$$ is a complete inner product space, that is, a Hilbert space. # Time Series Models In this section, three typical time series models (AR, MA, and ARMA) and their related properties will be reviewed.
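As a worked example connecting the Levinson recursion above with the $$AR$$ models below (my own illustration, not from the book), suppose the correlation function has the geometric form $$\gamma_k = a^k \gamma_0$$ with $$|a| < 1$$, which is exactly the form of an $$AR(1)$$ correlation function. The recursion gives

$a_{1, 1} = \frac{\gamma_1}{\gamma_0} = a, \qquad \sigma_1^2 = \gamma_0 (1 - a^2), \qquad a_{2, 2} = \frac{\gamma_2 - a_{1, 1} \gamma_1}{\gamma_0 - a_{1, 1} \gamma_1} = \frac{a^2 \gamma_0 - a^2 \gamma_0}{\gamma_0 (1 - a^2)} = 0$

and the same cancellation repeats at every later step, so the partial correlation coefficients $$a_{k, k}$$ vanish for all $$k \geq 2$$. Moreover, for an $$AR(1)$$ series driven by $$WN(0, \sigma^2)$$ we have $$\gamma_0 = \sigma^2 / (1 - a^2)$$, so $$\sigma_1^2 = \sigma^2$$: the one-step prediction error variance equals the variance of the white noise.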
## AR(p) Model Let $$\{\epsilon_t\}$$ be a $$WN(0, \sigma^2)$$, and $$A(z)$$ be a polynomial that satisfies $A(z) = 1 - \sum_{j = 1}^p a_j z^j \neq 0, \qquad |z| \leq 1$ The difference equation $X_t = \sum_{j = 1}^p a_j X_{t - j} + \epsilon_t$ is called an autoregression model, or $$AR(p)$$ model. If a stationary time series satisfies the model, it is called an $$AR(p)$$ series. ### Stable Solution of AR(p) Model The $$AR(p)$$ model has a unique stable solution, given as follows. The sequence $$\{\psi_j\}$$ is called the Wold coefficients of the time series. $X_t = A^{-1}(\mathscr B) \epsilon_t = \sum_{j = 0}^\infty \psi_j \epsilon_{t - j}$ The Wold coefficients can be solved by the following recursive formula. $\psi_k = \begin{cases} 0 \qquad \qquad \qquad k < 0 \\ 1 \qquad \qquad \qquad k = 0 \\ \sum_{j = 1}^p a_j \psi_{k - j} \qquad k \geq 1 \end{cases}$ ### Spectrum Density Function of AR(p) Series For an $$AR(p)$$ series $$\{X_t\}$$, its spectrum density function is $f(\lambda) = \frac{\sigma^2}{2 \pi} \left| \sum_{j = 0}^\infty \psi_j e^{ij\lambda}\right|^2 = \frac{\sigma^2}{2 \pi} \left| \frac{1}{A(e^{i\lambda})} \right|^2$ ### Yule-Walker Equation For an $$AR(p)$$ series with $$n \geq p$$, the following equation holds. $\begin{cases} \boldsymbol{\gamma}_n = \Gamma_n \boldsymbol{a}_n \\ \gamma_0 = \boldsymbol{\gamma}_n^T \boldsymbol{a}_n + \sigma^2 \end{cases}$ with $\boldsymbol{a}_n = (a_1, a_2, \dots, a_p, 0, \dots, 0)^T \\ \boldsymbol{\gamma}_n = (\gamma_1, \gamma_2, \dots, \gamma_n)^T$ The Yule-Walker equation can be used to calculate the correlation function given $$\boldsymbol{a}_n$$, or to estimate $$\boldsymbol{a}_n$$ given an estimated correlation function. ## MA(q) Model Let $$\{\epsilon_t\}$$ be a $$WN(0, \sigma^2)$$, and $$B(z)$$ be a polynomial that satisfies $B(z) = 1 + \sum_{j = 1}^q b_j z^j, \qquad
basis sets: (5s4p) for Na$^+$, \cite{Cle74} (11s9p5d) for I$^-$, \cite{Cle74} (11s9p5d) for Cs$^+$, \cite{Mcl81} and (13s9p7d3f) for Tl$^+$. \cite{Mcl81} \section{Results and discussion} \label{aguado:results} Before presenting the results for the lattice distortions around the Tl$^+$ impurity we test the consistency of the embedding method for the case of pure (undoped) crystals. For this purpose we compare the results of two sets of calculations for NaI. In the first one we study the clusters labelled A, B, C, D in Section \ref{aguado:theory}, with the embedding scheme indicated there for each case, but with a Na$^+$ cation instead of the Tl$^+$ impurity. This is, of course, just the case of the pure NaI crystal treated by the embedded cluster method. One can then compare the results with those of a standard PI calculation for the perfect crystal. \cite{Lua90a} We can anticipate some differences between the two methods since in the usual PI calculation for a perfect crystal all the cations (or anions) are equivalent, while in the embedded-cluster description of the same system the cation acting as a fictitious impurity and the other cations of the crystal are not described in the same way. This systematic error is, in fact, what we want to remove in the analysis of the distortions around the true Tl$^+$ impurities. If we call R$_1$ the distance between the Na$^+$ cation and its first I$^-$ neighbors, Table I gives R$_1$ for the four cluster models A, B, C, D, together with the difference $\Delta R_1$ = [R$_1$(cluster) - R$_1^{PI}$(crystal)] between the embedded-cluster result and that of the perfect crystal at equilibrium and the relative deviation $\Delta R_1$/R$_1^{PI}$(crystal). The PI model predicts for the crystal an equilibrium value R$_1^{PI}$(crystal) of 3.237 \AA, in very good agreement with the experimental value R$_1^{exp}$ = 3.240 \AA. The first two cluster schemes, A and B, give large distortions $\Delta R_1$. In contrast, a contraction of R$_1$ smaller than 1 \% occurs for cluster models C and D. The conclusions for CsI are similar. A good self-embedding is only achieved for clusters C and D (C$^*$ and D$^*$). Comparison of B and C shows the importance of a smooth interface between the inner cluster core, where ions are allowed to move, and the frozen environment around the cluster. Comparison of C and D establishes the necessity of allowing for relaxation of the radii of several shells around the impurity. The systematic error that the embedded-cluster scheme makes has to be taken into account in order to interpret properly the distortions induced by the Tl$^+$ impurity. Now we present the results for the lattice distortions induced by the Tl$^+$ impurity in NaI and CsI. In this case we have calculated \begin{equation} \Delta R_i = R_i(NaI:Tl^+) - R_i(NaI:Na^+), \label{aguado:distortion} \end{equation} where R$_i$ (i=1, 2, 3, ...) refer to the radii of the first, second, ... shells around the impurity and both calculations have been performed in the embedded cluster scheme. In this way the systematic errors of the cluster method analyzed in Table I tend to cancel. As before, R$_1$ is the distance between the central cluster cation, Tl$^+$ or Na$^+$, and the I$^-$ anions in its first neighbor shell. For the reasons given above we only trust the results from clusters C, D (or C$^*$, D$^*$) and the results for R$_1$ and $\Delta R_1$ are given in Table II, which also contains results for Tl$^+$ in CsI.
Although we discourage the use of models A and B, just for the purposes of comparison with Berrondo {\em et al.} \cite{Ber96a} we have also calculated $\Delta R_1$ for NaI:Tl$^+$ in model A and for CsI:Tl$^+$ in model A$^*$. Berrondo used a similar embedded-cluster scheme, although his calculational method was different, based on the linear combination of atomic orbitals (LCAO). Taking as reference the experimental nearest-neighbor Na$^+$--I$^-$ distance in the bulk, Berrondo obtained $\Delta R_1$ = 0.54 \AA, and taking the same reference we obtain $\Delta R_1$ = 0.50 \AA. The good agreement between the two calculations provides a check of our theoretical method. However, this large value of $\Delta R_1$ is partly due to the fact that the calculated R$_1$(NaI:Tl$^+$) and the reference R$_1^{exp}$(pure crystal) are not fully consistent with each other, for the reasons discussed above. If we calculate R$_1$(pure NaI) also by using model A (with Na$^+$ instead of Tl$^+$), that is, as in eqn. \ref{aguado:distortion}, then a corrected ``smaller'' value $\Delta R_1$ = 0.20 \AA \ (or 5.6 \%) results. Returning to models C and D, we predict, using eq. \ref{aguado:distortion}, a much smaller expansion of R$_1$. In fact, $\Delta R_1$ is equal to 0.039 \AA \ (1.2 \% expansion) for model D. However, the distortions are not restricted to the first shell. The second and fourth shells also expand a little, while the third shell suffers a contraction. The expansion of the first and second shells is consistent with the fact that the ionic radius of Tl$^+$, equal to 1.40 \AA, is larger than the ionic radius of Na$^+$ (0.95 \AA). In summary we find that the structural relaxation around the Tl$^+$ impurity in NaI is small, although more complicated than usually assumed by previous workers. In fact our calculations suggest that the relaxation could even go beyond the fourth shell. Let us now turn to CsI:Tl$^+$. The calculation with model A$^*$ would give a small expansion $\Delta R_1$ of the first shell if we take as reference R$_1^{exp}$(pure crystal), and our calculation would then be in agreement with that of Berrondo {\em et al.}. \cite{Ber96a} However, the ionic radius of Tl$^+$ is smaller than that of Cs$^+$, so an expansion of R$_1$ is not to be expected. By using, instead, eq. \ref{aguado:distortion} we obtained a corrected result $\Delta R_1$ = $-0.062$ \AA, that is, a contraction. Cluster model C$^*$, in which only the first shell around Tl$^+$ is allowed to distort, gives a small expansion of R$_1$, and finally model D$^*$ again recovers the expected contraction. Beyond the first shell, $\Delta R_i$ oscillates ($\Delta R_2$ and $\Delta R_4$ are also negative, while $\Delta R_3$, $\Delta R_5$ and $\Delta R_6$ are positive), although the values of $\Delta R_i$ are very small, in fact smaller than in NaI:Tl$^+$. The conclusion from our calculations is that the distortions affect ions in several coordination shells around the impurity, and that consideration of both the atomic and the electronic relaxation of the ions in those shells is required. For each active cluster we have also calculated the absorption energy corresponding to the intra-atomic transition from the singlet ground state to the triplet excited state of the thallium ion (6s$^2$($^1$S) $\rightarrow$ 6s$^1$6p$^1$($^3$P)). Following the Franck-Condon principle, this absorption has to be calculated at the frozen ground state geometrical configuration.
Thus, to obtain the energy of the localized excitation, we solve the HFR equations \cite{Roo63} for that geometrical configuration of the active cluster, with the Tl$^+$ ion in the excited electronic state 6s$^1$6p$^1$($^3$P), and evaluate the absorption energy as a difference of effective ionic energies:\cite{Mar92} \begin{equation} E_{abs} = E_{eff}[Tl^+(^3P)] - E_{eff}[Tl^+(^1S)]. \label{aguado:absorption} \end{equation} To calculate the contribution of an electronic open shell to the intra-atomic energy of an LS electronic configuration, the Hartree-Fock-Roothaan formalism requires the coupling constants of that specific LS term as input. For the description of the open shell ion we have used the coupling constants of the $^3$P term as given in ref. \onlinecite{Mal62} and the same basis set as for Tl$^+$(6s$^2$). The absorption spectra of these doped crystals are more structured (there are several absorption bands); our calculated transitions should be identified with the A band of the absorption spectrum of NaI:Tl$^+$ (4.25 eV), \cite{Jac91} and with the first absorption band of CsI:Tl$^+$ (4.27 eV). \cite{Nag95} Both in NaI:Tl$^+$ and CsI:Tl$^+$, the absorption energies we obtain become closer to experiment as we improve the description of the local geometry around the impurity, as Table II shows. In particular the absorption energies obtained with models D and D$^*$ are remarkably accurate, and this can be ascribed to the accurate representation achieved for the lattice distortion around Tl$^+$. Treatment of other absorption bands would require departing from some of the basic assumptions of the PI model. \section{Summary} \label{aguado:summary} We have reported a study of the local lattice distortions induced by a Tl$^+$ impurity in NaI:Tl$^+$ and CsI:Tl$^+$ scintillators. To that end, the {\em ab initio} Perturbed Ion (PI) model has been used. Large active clusters, embedded in accurate quantum environments representing the rest of the crystal, have been studied. The importance of
- Pressure: pressure is the force exerted on an object per unit area, P = F/A; equivalently, it is the quantity that depends upon the force and increases with a decrease in the area on which the force acts. Its unit is the pascal (Pa), i.e. N/m² in SI units.
- Density: to work out density, p = m / V; to work out the mass, m = p × V; to work out the volume, V = m / p.
- Ratio: a ratio is an ordered pair of numbers, written a:b, with b ≠ 0. For example, if you have 6 pencils and 2 pens, the ratio of pens to pencils is 2:6. Commonly listed financial ratios include the current ratio, the quick ratio, the debt service coverage ratio, and the preferred dividend coverage ratio.
- Elasticity: whether demand for an item or service is elastic or inelastic is measured by its percent change in demand divided by its percent change in price, if all other factors remain the same.
- Molecular formula: a chemical formula that gives the total number of atoms of each element in each molecule of a substance. To obtain it from an empirical formula, calculate the empirical formula mass and then divide the compound molar mass by the empirical formula mass.
- Formula unit: the simplest collection of atoms from which an ionic compound's formula can be established; it is the lowest whole number ratio of ions represented in an ionic compound. Formula units are basically empirical formulas for ionic compounds, but in most questions just represent molecules in their simplest, empirical form; a chemical formula is another way to represent a formula unit. Regardless of how many ions of sodium chloride are being used in a chemical equation, the formula unit for sodium chloride is the simplified expression of one sodium ion to one chloride ion: the ratio is 1:1. The number of formula units (Z) and the dimensions of the crystallographic axes are used in defining the unit cell. For example, E.9 (b) asks for the number of formula units of magnesium sulfate heptahydrate in
- Geometry: the volume of a cylinder is πr²h, where r and h are radius and height respectively; the volume of a sphere is (4/3)πr³; the volume of a cuboid is length × width × height. For a square of side length a, the perimeter is P = a + a + a + a = 4a and the area is a × a = a². If the length and width of a rectangle are a units and b units respectively, its perimeter is 2(a + b).
- Profit and loss: Profit = selling price - cost price; Loss = cost price - selling price.
- Unitary method: finding the value of a single unit, i.e. the cost per item, per liter or per kilogram. Example: if 20 apples cost $50, then the price of one apple is 50 / 20 = $2.50.
- Unit conversion: we know 1 ft = 0.305 m, so 5 ft = 5 × 0.305 = 1.53 m.
- Vectors: a unit vector is a vector that has a magnitude of 1, also known as a direction vector; for any vector v, its unit vector is parallel to v. Example: a = (1, 0). Vectors are often written in xyz coordinates, and the length of a vector is found by squaring the components, summing them, and taking the square root of the sum.
- z-score: a z-score describes the position of a raw score in terms of its distance from the mean, measured in standard deviation units; it is positive if the value lies above the mean, and negative if it lies below the mean. In sample size calculation, your confidence level corresponds to a z-score.
- Systematic sampling: select the members who fit the criteria, which in this case will be 1 in 10 individuals.
- Transmittance: the ratio of the light intensity that passes through an object (I) to the incident intensity (I0).
- Capacitance: a 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates.
- Current and charge: I = Q/t, where I is current in amperes, Q is charge in coulombs, and t is time in seconds.
- Sound level: 1 bel is equal to 10 dB.
- Stress: the SI unit of shearing stress is N/m² or Pa (pascal); the SI unit of tensile stress is also the pascal.
- Impulse: when we hit a ball with a bat for a brief period of time, an impulse is generated.
- Thrust: an example of thrust is a fish being expelled from the ocean by a strong wave.
- Mass: mass is defined as the amount of matter present in a body; the SI unit of mass is the kilogram (kg).
- Molality: molality does not change with a change in the pressure or temperature of the system.
- Physical chemistry: the branch of chemistry which deals with applying the theories of physics to the study of reactions and properties of matter.
- Carbohydrate: a class of naturally occurring compounds and derivatives formed from them.
- Phoneme: a material, real and objective unit that performs the constitutive function.
- Production: a commonly quoted formula is (Good Count × Ideal Cycle Time) / Planned Production Time.
- Cost accounting: Cost of Capital is the rate of return the firm expects to earn from its investment in order to increase the value of the firm in the market place. An indirect cost (definition extracted from FAR Part 31.2) is any cost not directly identified with a single, final cost objective, but identified with two or more final cost objectives or an intermediate cost objective; it is not subject to treatment as a direct cost. As an example of absorption costing, a company produces 10,000 units of its product in one month, with each unit requiring $5 of direct materials and labor, from which the total variable cost of the operation can be calculated.
- ppm: 1 ppm = 1/1,000,000 = 0.0001%.
- Formula: a formula is a fact or a rule written with mathematical symbols; it usually connects two or more quantities with an equals sign, and when you know the value of one quantity, you can find the value of the other using the formula.
How to use empirical formula in
# Tag Info 29 This follows from the uniqueness theorem for solutions of ordinary differential equations, which states that for a homogeneous linear ordinary differential equation of order $n$, there are at most $n$ linearly independent solutions. The upshot of that is that if you have a second-order ODE (like, say, the one for the harmonic oscillator) and you can ... 24 You need to be more precise about exactly what problem you're solving and what the inputs are. But if you're considering the general problem of what electromagnetic fields are produced by a given configuration of electric charge and current over spacetime, then the general solution is given by Jefimenko's equations. 9 There are two logical options when you vary $t$: either the value of the left-hand side changes, or it doesn't. If it changes, then the right side must change as well, since they are equal. But the right-hand side can't change when you vary $t$, since it is not a function of $t$! Therefore, since varying $t$ produces no change in the left-hand side, then ... 9 Look at it as an initial-value problem. If you know the electric and magnetic field throughout space at one instant, and the positions and velocities of all charged particles at that instant, then you can numerically evolve the system forward in time. Two of Maxwell’s equations tell you how fast the fields are changing at each point (and thus their new ... 7 These are all good and correct answers, but I will answer from a different perspective. Any linear differential equation of order $n$ has $n$ linearly independent solutions, i.e. these $n$ solutions span a vector space, with any set of $n$ independent solutions forming a basis. For simple harmonic motion, the differential equation is: $$m(\dfrac{d^2x}{dt^2})+kx = 0$$ As ... 6 What they have done is focus exclusively on the long-time solution when the system has reached "oscillatory steady state." This solution does not feature any exponentially decaying terms in time. So their solution for the temperature is taken to be of the form: $$T(x,t)-T_0=\alpha(x)\cos(\omega t-\phi)+\beta(x)\sin(\omega t-\phi)+\frac{A}{k}(L-x)$$ where ... 5 The equation can be integrated once; then we have the second-order equation $$x''+x'+\frac {x^2}{2}=C_1$$ Make the substitution $x'=y(x)$; then $x''=y\frac {dy}{dx}$, and as a result we obtain the Abel equation, for which an analytical solution is known, see https://www.hindawi.com/journals/ijmms/2011/387429/#sec2 $$y\frac {dy}{dx}+y=C_1-x^2/2$$ The numerical ... 4 The derivation based on Hooke's law given on Wikipedia is for the 1D case, so it's assumed that the particles can only move horizontally - hence the absence of the square root. In that derivation, $u(x+2h)$ simply means the horizontal displacement of the mass that was initially at $x+2h$. The wave equation then comes from the "usual" trick of considering ... 4 One way of deriving it is using Taylor series (although to be fully rigorous, this requires further justification for restricting to analytic functions). We have that $f(t) = \sum a_n t^n$, so $f''(t)=\sum (n+1)(n+2)a_{n+2}t^n$. If $f''(t)=-\frac k m f(t)$, then $(n+1)(n+2)a_{n+2} = -\frac k m a_n$, so $a_{n+2} = \frac {-k a_n}{(n+1)(n+2)m}$. So $a_0$ ... 4 The linear combination will only solve Laplace's equation. So no. If each $\phi_i$ solves Laplace's equation, then $\Delta\phi_i=0$, and $$\Delta\left(\sum_ia_i\phi_i\right)=\sum_ia_i\Delta\phi_i=0$$ 4 Unlike attenuation, scattering is difficult because energy which is scattered out of the beam can be scattered back in.
If the scattering coefficient is very weak over the path length of interest, or the scattering strongly favours small angular deviations (green path in diagram), you can add the scattering and attenuation coefficients together to estimate the ... 3 This quasi-linear 1st-order PDE $$\left(\frac{\partial u}{\partial t}\right)_{\!x} + u\left(\frac{\partial u}{\partial x}\right)_{\!t}~=~0\tag{1}$$ is the inviscid Burgers' equation. It can be solved via the method of characteristics. The ODE IVP $$\frac{dx}{dt}~=~u, \qquad x(t\!=\!0)~=~\xi, \tag{2}$$ (where $u$ is treated as an external parameter and $\... 3 Consider a boundary condition of the form $$\alpha \psi + \beta \frac{\partial \psi}{\partial x} + \gamma \frac{\partial \psi}{\partial t} =0$$ on the boundary, where $\alpha$, $\beta$, and $\gamma$ are real coefficients. Standard Dirichlet boundary conditions correspond to $\beta = \gamma = 0, \alpha \neq 0$, while Neumann boundary conditions correspond ... 3 The integral forms of Maxwell's equations are fairly useless unless you have situations with very high degrees of symmetry and/or fields aligned along co-ordinate axes, e.g. the beloved examples of undergraduate physics everywhere of spherical and cylindrical charge and current distributions. Once you move away from these situations then the integral forms ... 3 The simplest model that fits is potential flow around a cylinder (or a circle in 2D). This assumes an inviscid, incompressible fluid with no vorticity, which is too simple to model the backflow. The backflow occurs because viscosity produces boundary layer separation. I think the second simplest model possible would be to solve the steady-state ... 3 The equations that describe the flow of fluids like water are the Navier-Stokes equations. These are notoriously intractable. So much so that they are currently the subject of one of the Millennium prizes in mathematics. If you're interested in finding out more about this I recommend Terence Tao's article on the subject. 3 Define $\langle y | E\rangle = \psi(y)$ and study the asymptotics of your ODE, $$(\partial_y^2-y^2)~\psi (y)=0.$$ That means the leading behavior of $\psi$ for large $y$; so, for example, if you had a polynomial in $y$, you would just keep the highest order thereof: if you have an order-$m$ polynomial, you'd just keep the $y^m$ monomial, since it dominates all lower ... 3 This is easiest to see in 1D (where the 'volume' becomes an interval $[a,b]$ and the boundary consists of 2 points $a$ and $b$). Poisson's equation becomes a 2nd-order ODE, which means that the full solution has 2 integration constants. Imposing both Dirichlet and Neumann boundary conditions (BCs) would lead to 4 conditions (2 at $a$ and 2 at $b$), which ... 3 This is why cosmic censorship is considered to be so important -- you are saved from this conclusion if all of the infinite curvature points are hidden behind horizons, and therefore, the exterior of the black holes can still be globally hyperbolic. 3 Assuming from the notation $$\dot{x}^i~=~f^i(x,p,t), \qquad \dot{p}_i~=~g_i(x,p,t), \tag{1}$$ that the symplectic structure is the standard canonical symplectic structure $$\omega = \sum_{i=1}^n\mathrm{d}p_i\wedge \mathrm{d}x^i,\tag{2}$$ we get that \begin{align}\mathrm{d}H(x,p,t)- \frac{\partial H(x,p,t)}{\partial t}\mathrm{d}t ~=~&\sum_{i=1}^n\...
3 The equation comes from adding a source term to the diffusion equation: $$k \frac{\partial^2 T}{\partial x^2} + r (T_a -T(x)) = \frac{\partial T}{\partial t}$$ and then assumes steady state: $$\frac{\partial^2 T}{\partial x^2} + h^\prime (T_a - T(x)) = 0$$ where $h^\prime = r/k$. It is basically the addition of two models (diffusion, cooling) and ... 3 Re-write for legibility: $$u''(\theta)+\alpha u(\theta)-\beta=0$$ Make a substitution: $$y=\alpha u(\theta)-\beta$$ So: $$y'=\alpha u'$$ And: $$y''=\alpha u''\Rightarrow u''=\frac{y''}{\alpha}$$ Substituting back gives $$\frac{y''}{\alpha}+y=0,$$ i.e. $$y''(\theta)+\alpha y(\theta)=0,$$ which is the classic ODE of the SHM. Solve and back-substitute. Don't neglect the BCs! ... 2 The hydrogen hamiltonian can be written as $$H = \frac{p_r^2}{2m} + \frac{L^2}{2mr^2} - \frac{e^2}{r}$$ where $p_r$ is the radial component of the momentum. Since $H$ depends on $L^2$, $p_r$ and $r$, the hamiltonian commutes with every component of $\mathbf L$: $$[H, \mathbf L] = 0,$$ which in turn means we can simultaneously diagonalize $H$, $L^2$ and ... 2 One cannot transform a p orbital into a d orbital by a rotation. This is because the generators of rotations are the angular momentum operators, and their action can only connect states with the same value $\ell$ of the angular momentum quantum number. Thus, rotations can only connect states with the same $\ell$. In the case of the parity example that ... 2 If you think about the initial conditions in terms of physics, and connect that to your specific DE, you will see that when the velocity is zero ($x'(0)=0$) then your acceleration is zero: $$x''=\frac{\beta}{m}x'.$$ If the acceleration and the velocity are zero, the system won't change position, based on your DE. You need a new DE, or a new initial ... 2 I think that the answer and comments (including references to previous related questions),
divide by two (or multiply it by a half if you like) and you have the area of the triangle! The perimeter of the triangle is easy: just add up all the sides and voilà, you have the perimeter. You can multiply one side of an equilateral triangle by three as well. As for isosceles triangles, simply multiply one of the equal sides by two and add the remaining side. There we go. A quadrilateral is a shape with four sides. You will spend a lot of time with these. They can be classified into many different categories: Parallelograms are shapes where opposite sides and angles are equal. The opposite sides are parallel, hence the name. Rectangles are parallelograms where the angles are all 90°. Its width or breadth refers to the shorter sides, while its length refers to its longer ones. Rhombuses are parallelograms where all the sides are equal, and opposite angles are equal. Squares are parallelograms that are both rectangles and rhombuses, i.e. all angles are right and all sides are equal. Trapeziums, called trapezoids in American English, have two opposite sides that are parallel. The parallel sides are sometimes called the upper and lower bases. Right-angled trapeziums are trapeziums with a right angle. Isosceles trapeziums are trapeziums where the lateral sides are equal but not parallel. Scalene trapeziums are trapeziums that fall into neither category. Kites are quadrilaterals where two pairs of adjacent sides are equal and one pair of opposite angles is equal. Irregular quadrilaterals are any quadrilaterals that do not fit into one of the groups above. Calculating the area of these shapes can be very easy. For parallelograms, simply multiply the base with the height, the way we do with triangles, except we don't need to divide by two. The square is especially easy: just square one of the sides, which would be the length. For the others, we can cut them up into bite-sized pieces before we calculate. For example, we can dissect a right-angled trapezium into a right-angled triangle and a rectangle. The perimeters of these shapes are just as easy. For rectangles, we simply add up the length and the width, then multiply by two. You can simply multiply the length of a square by four. The isosceles trapeziums are just as easy: multiply one of the lateral sides by two, then add the other two sides. The kite is easy as well: just add up the two different sides and multiply that by two. For the rest, you can just add up everything. == Other polygons == Many other polygons have a name. The following are the ones you need to know in elementary school: Pentagons have five sides. Hexagons have six sides. Heptagons or septagons have seven sides. Octagons have eight sides. Nonagons have nine sides. Decagons have ten sides. And here are two more extras: Hendecagons (also known as undecagons) have eleven sides. Dodecagons have twelve sides. Calculating the perimeter and area of these shapes can be more difficult. Sometimes you have to come up with ways of doing it yourself. When you come across an equilateral polygon, you can of course multiply one of the sides by the number of sides of the shape. In other cases, you may need to find some dimensions yourself. Keep your eyes peeled for equivalences, and the problems cannot be that difficult. When calculating the area of these shapes, there are two main ways of doing so: dissecting and filling. With dissecting, you cut up the figure into many pieces, such as parallelograms, squares and triangles.
Then you can simply add up all those areas to find out the total. With filling, you add extra bits to shapes so as to make them look like the shapes you usually come across. For example, when you don't know the altitude of a triangle, you can put three surrounding triangles around it. Then you can calculate the area of the rectangle formed and the surrounding triangles, thereby finding the area of the triangle. == Circles and other plane figures == Apart from polygons, there are other shapes that have wavy sides, round corners or other peculiarities that disqualify them as polygons. Among them, the most famous are the circle, the ellipse, and the semicircle. These shapes are different from polygons, and have their special formulae that you must learn by heart. Let's start with the most basic: the circle. A circle is the shape formed by all the points at a fixed distance around its centre. Its perimeter is called the circumference. The line running from one side of the circle, through the centre and to the other side is called the diameter. The line running from the centre to any point on the circumference is called the radius. Any other line running from one point of the circumference to another is called a chord. An arc is any part of the circumference. For thousands of years, mathematicians have been trying to find out the relationship between the circumference and the diameter. When we divide the circumference by the diameter, we get a number that is slightly larger than 3. That number is called π (spelt pi and pronounced pie). Supercomputers have discovered millions of digits of π, but you only need to remember that π is roughly 3.14 or 22/7. That is close enough. If you know the circumference of a circle, dividing that by π will result in the diameter; multiplying the diameter by π will result in the circumference. To find out the area of a circle, calculate πr². You don't really get to know much about ellipses and semicircles in elementary school. Ellipses look like ovals, except they are constructed in a stricter way than simply crushing a circle. They have two 'centres' called foci. Semicircles are circles cut along the diameter, and if you draw a line from one end to a point on the circumference, then to another end, you always get a right angle. These two shapes are seldom taught in elementary school, and aside from knowing their names you don't need to study them. = Constructing an Equilateral Triangle = == Introduction == In this chapter, we will show you how to draw an equilateral triangle. What does "equilateral" mean? It simply means that all three sides of the triangle are the same length. Any triangle whose vertices (points) are A, B and C is written like this: △ABC. And if it's equilateral, it will look like the one in the picture. == The construction == Using your ruler, draw a line segment of whatever length you want the sides of your triangle to be. Call one end of the line segment A and the other end B. Now you have a line segment called AB. It should look something like the drawing below. Using your compass, draw the circle ∘A, whose centre is A and whose radius is AB. Again using your compass, draw the circle ∘B, whose centre is B and whose radius is AB. Can you see how the circles intersect (cross over each other) at two points? The points are shown in red on the picture below.
Choose one of these points and call it C. We chose the upper point, but you can choose the lower point if you like. If you choose the lower point, your triangle will look "upside-down", but it will still be an equilateral triangle. Draw a line segment between A and C and get line segment AC. Draw a line segment between B and C and get line segment BC. Construction of △ABC is completed. == Claim == The triangle △ABC is an equilateral triangle. == Proof == The points B and C are both on the circumference of the circle ∘A and point A is at the centre. So the line segment AB is the same length as the line segment AC. Each is a radius of circle ∘A, or more simply AB = AC. We
extra measures may be unnecessary. #### Prediction types for probabilistic responses In the case of Probabilistic models with univariate targets, yhat must be an AbstractVector or table whose elements are distributions. In the common case of a vector (single target), this means one distribution per row of Xnew. A distribution is some object that, at the least, implements Base.rand (i.e., is something that can be sampled). Currently, all performance measures (metrics) defined in MLJBase.jl additionally assume that a distribution is either: • An instance of some subtype of Distributions.Distribution, an abstract type defined in the Distributions.jl package; or • An instance of CategoricalDistributions.UnivariateFinite, from the CategoricalDistributions.jl package, which should be used for all probabilistic classifiers, i.e., for predictors whose target has scientific type <:AbstractVector{<:Finite}. All such distributions implement the probability mass or density function Distributions.pdf. If your model's predictions cannot be objects of this form, then you will need to implement appropriate performance measures to buy into MLJ's performance evaluation apparatus. An implementation can avoid CategoricalDistributions.jl as a dependency by using the "dummy" constructor MLJModelInterface.UnivariateFinite, which is bound to the true one when MLJBase.jl is loaded. For efficiency, one should not construct UnivariateFinite instances one at a time. Rather, once a probability vector, matrix, or dictionary is known, construct an instance of UnivariateFiniteVector <: AbstractArray{<:UnivariateFinite,1} to return. Both UnivariateFinite and UnivariateFiniteVector objects are constructed using the single UnivariateFinite function. For example, suppose the target y arrives as a subsample of some ybig and is missing some classes: ybig = categorical([:a, :b, :a, :a, :b, :a, :rare, :a, :b]) y = ybig[1:6] Your fit method has bundled the first element of y with the fitresult to make it available to predict for purposes of tracking the complete pool of classes. Let's call this an_element = y[1]. Then, supposing the corresponding probabilities of the observed classes [:a, :b] are in an n x 2 matrix probs (where n is the number of rows of Xnew), then you return yhat = MLJModelInterface.UnivariateFinite([:a, :b], probs, pool=an_element) This object automatically assigns zero-probability to the unseen class :rare (i.e., pdf.(yhat, :rare) works and returns a zero vector). If you would like to assign :rare non-zero probabilities, simply add it to the first vector (the support) and supply a larger probs matrix. In a binary classification problem it suffices to specify a single vector of probabilities, provided you specify augment=true, as in the following example, and note carefully that these probabilities are associated with the last (second) class you specify in the constructor: y = categorical([:TRUE, :FALSE, :FALSE, :TRUE, :TRUE]) an_element = y[1] probs = rand(10) yhat = MLJModelInterface.UnivariateFinite([:FALSE, :TRUE], probs, augment=true, pool=an_element) The constructor has a lot of options, including passing a dictionary instead of vectors. See [CategoricalDistributions.UnivariateFinite](@ref) for details. See LinearBinaryClassifier for an example of a Probabilistic classifier implementation. Important note on binary classifiers. There is no "Binary" scitype distinct from Multiclass{2} or OrderedFactor{2}; Binary is just an alias for Union{Multiclass{2},OrderedFactor{2}}.
The target_scitype of a binary classifier will generally be AbstractVector{<:Binary} and, according to the MLJ scitype convention, elements of y have type CategoricalValue, and not Bool. See BinaryClassifier for an example. ### The predict_joint method Experimental The following API is experimental. It is subject to breaking changes during minor or major releases without warning. MMI.predict_joint(model::SomeSupervisedModel, fitresult, Xnew) -> yhat Any Probabilistic model type SomeModel may optionally implement a predict_joint method, which has the same signature as predict, but whose predictions are a single distribution (rather than a vector of per-observation distributions). Specifically, the output yhat of predict_joint should be an instance of Distributions.Sampleable{<:Multivariate,V}, where scitype(V) = target_scitype(SomeModel) and samples have length n, where n is the number of observations in Xnew. If a new model type subtypes JointProbabilistic <: Probabilistic then implementation of predict_joint is compulsory. ### Training losses MLJModelInterface.training_losses (Function) MLJModelInterface.training_losses(model::M, report) If M is an iterative model type which calculates training losses, implement this method to return an AbstractVector of the losses in historical order. If the model calculates scores instead, then the sign of the scores should be reversed. The following trait overload is also required: MLJModelInterface.supports_training_losses(::Type{<:M}) = true. Trait values can also be set using the metadata_model method, see below. ### Feature importances MLJModelInterface.feature_importances (Function) feature_importances(model::M, fitresult, report) For a given model of model type M supporting intrinsic feature importances, calculate the feature importances from the model's fitresult and report as an abstract vector of feature::Symbol => importance::Real pairs (e.g. [:gender => 0.23, :height => 0.7, :weight => 0.1]). The following trait overload is also required: MLJModelInterface.reports_feature_importances(::Type{<:M}) = true If for some reason a model is sometimes unable to report feature importances then feature_importances should return all importances as 0.0, as in [:gender => 0.0, :height => 0.0, :weight => 0.0]. Trait values can also be set using the metadata_model method, see below. ### Trait declarations Two trait functions allow the implementer to restrict the types of data X, y and Xnew discussed above. The MLJ task interface uses these traits for data type checks but also for model search. If they are omitted (and your model is registered) then a general user may attempt to use your model with inappropriately typed data. The trait functions input_scitype and target_scitype take scientific data types as values. We assume here familiarity with ScientificTypes.jl (see Getting Started for the basics).
For example, to ensure that the X presented to the DecisionTreeClassifier fit method is a table whose columns all have Continuous element type (and hence AbstractFloat machine type), one declares MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = MMI.Table(MMI.Continuous) or, equivalently, MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = Table(Continuous) If, instead, columns were allowed to have either: (i) a mixture of Continuous and Missing values, or (ii) Count (i.e., integer) values, then the declaration would be MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = Table(Union{Continuous,Missing},Count) Similarly, to ensure the target is an AbstractVector whose elements have Finite scitype (and hence CategoricalValue machine type) we declare MMI.target_scitype(::Type{<:DecisionTreeClassifier}) = AbstractVector{<:Finite} #### Multivariate targets The above remarks continue to hold unchanged for the case of multivariate targets. For example, if we declare target_scitype(SomeSupervisedModel) = Table(Continuous) then this constrains the target to be any table whose columns have Continuous element scitype (i.e., AbstractFloat), while target_scitype(SomeSupervisedModel) = Table(Continuous, Finite{2}) restricts to tables with continuous or binary (ordered or unordered) columns. For predicting variable length sequences of, say, binary values (CategoricalValues with some common size-two pool) we declare target_scitype(SomeSupervisedModel) = AbstractVector{<:NTuple{<:Finite{2}}} The trait functions controlling the form of data are summarized as follows:

| method | return type | declarable return values | fallback value |
| --- | --- | --- | --- |
| input_scitype | Type | some scientific type | Unknown |
| target_scitype | Type | some scientific type | Unknown |

Additional trait functions tell MLJ's @load macro how to find your model if it is registered, and provide other self-explanatory metadata about the model:

| method | return type | declarable return values | fallback value |
| --- | --- | --- | --- |
| load_path | String | unrestricted | "unknown" |
| package_name | String | unrestricted | "unknown" |
| package_uuid | String | unrestricted | "unknown" |
| package_url | String | unrestricted | "unknown" |
| package_license | String | unrestricted | "unknown" |
| is_pure_julia | Bool | true or false | false |
| supports_weights | Bool | true or false | false |
| supports_class_weights | Bool | true or false | false |
| supports_training_losses | Bool | true or false | false |
| reports_feature_importances | Bool | true or false | false |

Here is the complete list of trait function declarations for DecisionTreeClassifier, whose core algorithms are provided by DecisionTree.jl, but whose interface actually lives at MLJDecisionTreeInterface.jl. MMI.input_scitype(::Type{<:DecisionTreeClassifier}) = MMI.Table(MMI.Continuous) MMI.target_scitype(::Type{<:DecisionTreeClassifier}) = AbstractVector{<:MMI.Finite} MMI.package_name(::Type{<:DecisionTreeClassifier}) = "DecisionTree" MMI.package_uuid(::Type{<:DecisionTreeClassifier}) = "7806a523-6efd-50cb-b5f6-3fa6f1930dbb" MMI.is_pure_julia(::Type{<:DecisionTreeClassifier}) = true Alternatively these traits can also be declared using the MMI.metadata_pkg and MMI.metadata_model helper functions as: MMI.metadata_pkg( DecisionTreeClassifier, name="DecisionTree", package_uuid="7806a523-6efd-50cb-b5f6-3fa6f1930dbb", is_pure_julia=true ) MMI.metadata_model( DecisionTreeClassifier, input_scitype=MMI.Table(MMI.Continuous), target_scitype=AbstractVector{<:MMI.Finite}, ) Important. Do not omit the load_path specification. If unsure what it should be, post an issue at MLJ. MLJModelInterface.metadata_pkg (Function) metadata_pkg(T; args...) Helper function to write the metadata for a package providing model T.
Use it with broadcasting to define the metadata of the package providing a series of models.

Keywords

• package_name="unknown" : package name
• package_uuid="unknown" : package uuid
• package_url="unknown" : package url
• is_pure_julia=missing : whether the package is pure julia
• package_license="unknown" : package license
• is_wrapper=false : whether the package is a wrapper

Example

metadata_pkg.((KNNRegressor, KNNClassifier),
    package_name="NearestNeighbors",
    package_uuid="b8a86587-4115-5ab1-83bc-aa920d37bbce",
    package_url="https://github.com/KristofferC/NearestNeighbors.jl",
    is_pure_julia=true,
    is_wrapper=false)

MLJModelInterface.metadata_model — Function

metadata_model(T; args...)

Helper function to write the metadata for a model T.

Keywords

• input_scitype=Unknown : allowed scientific type of the input data
• target_scitype=Unknown : allowed scitype of the target (supervised)
• output_scitype=Unknown : allowed scitype of the transformed data (unsupervised)
• supports_weights=false : whether the model supports sample weights
• supports_class_weights=false : whether the model supports class weights
• load_path="unknown" : where the model is (usually PackageName.ModelName)
• human_name=nothing : human name of the model
• supports_training_losses=nothing : whether the (necessarily iterative) model can report training losses
• reports_feature_importances=nothing : whether the model reports feature importances

Example

metadata_model(KNNRegressor,
    input_scitype=MLJModelInterface.Table(MLJModelInterface.Continuous),
    target_scitype=AbstractVector{MLJModelInterface.Continuous},
    supports_weights=true,
    load_path="NearestNeighbors.KNNRegressor")

### Iterative models and the update! method

An update method may optionally be overloaded to enable a call by MLJ to retrain a model (on the same training data) without repeating computations unnecessarily.

MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y) -> fitresult, cache, report
MMI.update(model::SomeSupervisedModel, verbosity, old_fitresult, old_cache, X, y, w=nothing) -> fitresult, cache, report
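To illustrate the intended pattern, here is a rough sketch only: SomeEnsemble, its iteration hyperparameter n, and the helper add_iterations are all hypothetical, not part of the API. The point is that update inspects the old cache to decide whether a cheap incremental retrain is possible:

    # Minimal sketch for a hypothetical iterative model `SomeEnsemble`
    # whose only retraining-relevant hyperparameter is the iteration count `n`.
    function MMI.update(model::SomeEnsemble, verbosity, old_fitresult, old_cache, X, y)
        old_n = old_cache.n  # iteration count at the time of the last fit/update
        if model.n >= old_n
            # cheap path: grow the existing ensemble by the missing iterations
            fitresult = add_iterations(old_fitresult, X, y, model.n - old_n)
        else
            # anything else: fall back to training from scratch
            fitresult, _, _ = MMI.fit(model, verbosity, X, y)
        end
        cache = (n = model.n,)
        report = (n_iterations = model.n,)
        return fitresult, cache, report
    end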
pathlength for collisions will increase and the duration of the signal will be extended for a period estimated in Ref.~\cite{GHS} as $\sim 10$ years, with a gradual decrease in intensity. If the accelerated protons are not confined in the shell, the duration of the signal would be 1--2 years \cite{Yamada}. All these considerations concern the target for inelastic interactions. The other ingredient is the abundance of accelerated protons at this stage of remnant evolution. The pulsar wind model~\cite{GHS} utilizes the pulsar spin-down energy to create a shock inside the contact discontinuity of the shell. The proton luminosity is bounded by the magnetic dipole luminosity of the pulsar given in Eq.~(\ref{L_d}). The efficiency for producing a signal in such a model depends on the efficiency for accelerating protons and on the degree of mixing between the accelerated particles and the expanding shell. The latter depends on mixing the pulsar wind region with the shell through Rayleigh-Taylor instabilities. Although it is clear now that SN1987A does not contain a strong pulsar, it is still of interest to discuss the signal that could be expected from a young galactic supernova ($\sim$10~kpc) with a rapidly spinning, strongly magnetized pulsar. The answer is extremely sensitive to the magnetic field and pulsar period assumed. Both parameters enter into the pulsar power and into the maximum energy. For example, for $P=10$~ms and $B_{\rm surface}=10^{12}$~Gauss and a 25\% efficiency for particle acceleration and interaction, the model of Ref.~\cite{GHS} gives $10^{39}$~erg/s and $E_p^{\rm max}\approx 10^5$~TeV. The corresponding neutrino luminosity would be sufficient to produce a signal of $\sim 100$ upward muons in $10^5$~m$^2$ for several years. For a longer pulsar period and/or a smaller surface magnetic field, both the maximum energy and the available power rapidly decrease. Berezinsky \& Ptuskin~\cite{BerPtus} argued that acceleration at the supernova blast wave could also produce an observable signal, even though in this case the accelerated particles are not deep inside the expanding shell. When a supernova expands into the surrounding medium it drives a blast wave ahead. There is also a reverse shock in the supernova ejecta. Particle acceleration occurs at both shocks, with the accelerated particles injected into the respective downstream regions, which are contained between the two shocks. The kinetic energy of the expanding shell is huge (of order $10^{51}$~erg) but the rate at which it is dissipated is limited by the rate at which matter is swept up by the expanding shell. Thus the luminosity from accelerated particles in this region is quite sensitive (quadratically~\cite{BerPtus}) to the mass loss rate of the progenitor star, which was relatively low for SN1987A. For what is considered a ``typical'' mass loss rate of $10^{-5}\,M_\odot\,{\rm yr}^{-1}$~\cite{BerPtus}, the estimated neutrino flux for a supernova at 10~kpc corresponds to several hundred upward muons with $E_\mu>100$~GeV in the first 100 days \cite{BerPtus}. The rate falls off slightly faster than $1/t$. The big disadvantage of young supernova remnants as potential neutrino sources is, of course, that supernova explosions are rare events. The one that we were lucky to observe, SN1987A, was not only quite distant, in the LMC, but also shows no signs of pulsar activity at a level above $\sim$10$^{37}$\,erg/s.
\section{Possible Extragalactic Sources} Active galactic nuclei are the most luminous objects in the Universe and have long been recognized as possible sources of high energy signals \cite{BlaBla}. These first estimates were mostly based on the total AGN power and number density. More recent calculations \cite{BierStrit,SikBeg} developed the idea in two important ways. They first identified the potential importance of hadrons (especially neutrons) for transporting energy in active galactic nuclei. Secondly, shock acceleration models were at least crudely incorporated into the AGN models, and the photoproduction process was shown to be the most important one for proton energy loss. This led to estimates of the maximum proton energy achievable in acceleration at AGN shocks and to the prediction of high energy neutrino fluxes. Active galactic nuclei have luminosities ranging from 10$^{42}$ to 10$^{48}$ erg/s, which corresponds to black hole masses from 10$^4$ to 10$^{10}$ $M_\odot$ \cite{mjrees} on the natural assumption that they are powered by Eddington-limited accretion onto a black hole. AGN's have generally flat emission spectra with a luminosity up to $\sim$3$\times$10$^{46}$ erg/s per decade of energy. In the IR band a steady dust emission is observed, most probably coming from a large region far away from the core. The main thermal feature is the UV bump, which is variable on a timescale of days and weeks \cite{UVvar}. Its energy source is either X-ray heating \cite{UV_X} or viscous heating of the accretion disk \cite{UVvar}, either of which would be closely related to the central engine. X-rays have a hard, nonthermal spectrum, variable on even shorter timescales \cite{Xvar}, which often cuts off at several MeV. AGN's have been extensively studied at radio frequencies, where the most general classification is as radio-{\it loud} or radio-{\it quiet}, depending on the fraction of energy in the radio portion of the spectrum \cite{Sanders}. Roughly 10\% of all observed AGN's are classified as radio-loud \cite{fraction}. Blandford \cite{Blandford} suggests that radio-loud AGN's have rapidly spinning black holes and therefore also strong jets. The UV bump is not always easy to see in radio-loud AGN's. Two possible sources within AGN's of intense, high energy neutrino fluxes have been identified. The first is associated with the central engine and the second with production in jets associated with blazars, which are radio-loud AGN's in which the observer is illuminated by the beam of a jet. We first discuss central emission. \subsection{Generic AGN} To introduce most of the parameters important for the production of neutrinos, we briefly describe the spherical accretion model used in most of the calculations of the neutrino production in central regions of AGN's \cite{SikBeg,Stecketal,BegSik,SzaboPro92}. Some of the limitations of this model will be mentioned in \S7.4 below. The model is based on work performed by Kazanas, Protheroe and Ellison \cite{Protheroe,KazEll}. They assume that close to the black hole the accretion flow becomes spherical and a shock is formed where the ram pressure of the accretion flow is balanced by radiation pressure near the black hole. The shock radius is parameterized by $R = x_1 \times R_S$, where $R_S$ is the gravitational (Schwarzschild) radius of the black hole, and $x_1$ is estimated to be in the range $10$ to $100$ \cite{SzaboPro92}.
The continuous emission is dominated by the ultraviolet and X-ray radiation, which are assumed to emanate from inside the radius enclosed by the shock. Since the region inside the shock is optically thick, the radiation density at the shock can be estimated from the surface brightness of the AGN. This leads to the relation \begin{equation}\label{radiation} U_{rad} \simeq L \times (\pi R^2 c)^{-1} \end{equation} between luminosity and radiation density in the central region. Since $R=x_1\times R_S\propto L_{\rm Eddington}$, it follows from Eq.~(\ref{radiation}) that $U_{rad}\propto L^{-1}$. Numerically, \begin{equation}\label{numeric} U_{rad}\sim 2\times 10^6\ {\rm erg/cm^3}\times {1\over L_{45}} \times \left({30\over x_1}\right)^2, \end{equation} where $L_{45}$ is the luminosity divided by $10^{45}$~erg/s. The radiation energy density also defines the magnetic field value $B$ at the shock under the assumption of equipartition of the radiation and magnetic energy. For the numerical example above $B\sim 7000\;{\rm Gauss} \times (L_{45})^{-{1\over 2}} \times {30\over x_1}$. Acceleration of protons is assumed to occur by the first order diffusive Fermi mechanism at the shock, resulting in an $E^{-2}$ differential spectrum that extends up to $E_{\rm max}$. Energy loss processes occur during acceleration, including $p\gamma\rightarrow N\pi$ and $p\gamma\rightarrow p+e^++e^-$ in the dense radiation fields as well as $pp$ collisions in the gas. All three processes contribute an energetic electromagnetic component, either through $\pi^0\rightarrow\gamma\gamma$ or by production of electrons. Both photo-meson production and $pp$ collisions also give rise to neutrinos via the $\pi^\pm\rightarrow\mu^\pm\rightarrow e^\pm$ decay chain. In the astrophysical environment all unstable particles (except quasi-stable neutrons) decay practically without energy loss. An important detail is that photoproduction of charged pions by protons is dominated by the $n\pi^+$ channel \cite{Stecker68}. Although high energy neutrinos escape directly from the core, the electromagnetic component does not. The core is optically thick to photons with energies greater than $\sim 5$~MeV. All $\gamma$-rays generated in the dense photon field immediately lose energy in $\gamma\gamma\rightarrow e^+e^-$ collisions. Inverse Compton/pair-production cascades downscatter all electrons and photons to X-ray and lower energies. The essential ingredient of these models is that the observed X-ray spectrum is produced as the end product of the electromagnetic cascades initiated by high energy photons and electrons produced by the accelerated protons. Thus, estimates of expected neutrino fluxes from individual AGN's are normalized through the model to their observed X-ray luminosities. The proton density at the shock, $n_p(R)$, can be estimated from the accretion rate needed to support the black hole luminosity, and from the radius and accretion velocity at the shock. It is \begin{equation} n_p \simeq 1.3 \times 10^8\, x_1^{1/2} R^{-1.5} L^{1/2} Q^{-1}\ {\rm cm}^{-3}, \end{equation} where $Q$ is the efficiency for converting accretion power into accelerated particles at the shock. Such
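To make the equipartition step explicit (a consistency check added here, not part of the original derivation), the quoted field follows directly from the energy density in Eq.~(\ref{numeric}):
\begin{equation*}
\frac{B^2}{8\pi}=U_{rad} \quad\Longrightarrow\quad B=\sqrt{8\pi\,U_{rad}} \simeq \sqrt{8\pi\times 2\times 10^{6}\ {\rm erg/cm^3}} \approx 7\times 10^{3}\ {\rm Gauss}
\end{equation*}
for $L_{45}=1$ and $x_1=30$, in agreement with the value given above.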
methods. The principle of the proposed real-time network calculation algorithm is discussed, the calculation method for each sensor layout is provided, and the robustness is analyzed accordingly. Simulation experiment results demonstrate significant improvements in the accuracy of the data obtained using the proposed data correction and network calculation algorithm. Thus, the designed real-time calculation algorithm can be practically applied to realize the accurate collection of ventilation data. Article The effects of Reynolds number and submergence ratio on mean velocity, separation length, Reynolds shear stresses (RSS), correlation coefficient, anisotropy, and turbulent kinetic energy (TKE) are evaluated by modifying the free-stream velocity and pipe diameter to understand the fluid-structure interaction in the presence of a rough channel bed. Acoustic Doppler velocimetry is used to acquire three-dimensional velocity data in the experiments. The experimental data demonstrate that the non-dimensional separation length varies linearly with the Reynolds number up to a critical Reynolds number, after which it becomes independent of the Reynolds number. The vertical profiles of the mean velocity, RSS, correlation coefficient, anisotropy, and TKE are not affected by the Reynolds number; however, they are affected by the submergence ratio. The RSS and TKE are found to be larger for the higher submergence ratio below the top level of the pipe throughout the wake region. The reattachment point is found to be the most turbulent, as the most pronounced peaks of RSS and TKE occur at this point. The quadrant analysis reveals that the RSS contribution of ejection events is higher for a lower submergence ratio, whereas the RSS contribution of the sweep events is higher for a higher submergence ratio. In conclusion, this study provides an in-depth understanding of the turbulent flow characteristics in the wake region of a bed-mounted horizontal circular pipe located on a fixed sand bed; in particular, the modification of the turbulence properties due to changes in Reynolds numbers and submergence ratios. Article Full-text available ABSTRACT Turbulence characteristics in a fully developed flow over a gradually varied bed roughness are investigated. The results of the Reynolds stress profiles indicate that they increase with an increase in bed roughness height. Their peaks occur within the wall-shear layer close to the bed. Besides, the bed shear stress rises in accordance with the roughness height. The roughness-induced layer grows as the roughness height increases with the streamwise distance. The velocity profiles fitted with the logarithmic law reveal that the zero-velocity level is elevated as the roughness height increases, but the zero-plane displacement is not influenced by the roughness. The turbulent kinetic energy (TKE) flux results indicate that an inrush of faster moving fluid parcels composing the sweep event is the dominant mechanism in the near-bed flow zone. The magnitude of the sweep event escalates as the roughness height increases. On the other hand, a process of slowly moving fluid parcels forming the ejection event prevails in the outer flow layer. The TKE flux results agree with those obtained from the bursting analysis.
Concerning the TKE budget, the peaks of the TKE production, dissipation, and pressure energy diffusion rates, being positive, appear near the bed and grow as the roughness height increases, whereas the peak of the TKE diffusion rate, being negative, behaves in a similar way to the other terms of the TKE budget. Article Full-text available Direct numerical simulations (DNS) are conducted for turbulent flow through pipes with three-dimensional sinusoidal roughnesses explicitly represented by body-conforming grids. The same viscous-scaled roughness geometry is first simulated at a range of different Reynolds numbers to investigate the effects of low Reynolds numbers and low $R_0/h$, where $R_0$ is the pipe radius and $h$ is the roughness height. Results for the present class of surfaces show that the Hama roughness function $\Delta U^+$ is only marginally affected by low Reynolds numbers (or low $R_0/h$), and observations of outer-layer similarity (or lack thereof) show no signs of sensitivity to Reynolds number. Then, building on this, a systematic approach is taken to isolate the effects of roughness height $h^+$ and wavelength $\lambda^+$ in a turbulent wall-bounded flow in both transitionally rough and fully rough regimes. Current findings show that while the effective slope $ES$ (which for the present sinusoidal surfaces is proportional to $h^+/\lambda^+$) is an important roughness parameter, the roughness function $\Delta U^+$ must also depend on some measure of the viscous roughness height. A simplistic linear-log fit clearly illustrates the strong correlation between $\Delta U^+$ and both the roughness average height $k_a^+$ (which is related to $h^+$) and $ES$ for the surfaces simulated here, consistent with published literature. Various definitions of the virtual origin for rough-wall turbulent pipe flow are investigated and, for the surfaces simulated here, the hydraulic radius of the pipe appears to be the most suitable parameter, and indeed is the only virtual origin that can ever lead to collapse in the total stress. First- and second-order statistics are also analysed and collapses in the outer layer are observed for all cases, including those where the largest roughness height is a substantial proportion of the reference radius (low $R_0/h$). These results provide evidence that turbulent pipe flow over the present sinusoidal surfaces adheres to Townsend's notion of outer-layer similarity, which pertains to statistics of relative motion. Article Full-text available The friction factor relationship for high-Reynolds-number fully developed turbulent pipe flow is investigated using two sets of data from the Princeton Superpipe in the range $31\times 10^3 \leq Re_D \leq 35\times 10^6$. The constants of Prandtl's 'universal' friction factor relationship are shown to be accurate over only a limited Reynolds-number range and unsuitable for extrapolation to high Reynolds numbers. New constants, based on a logarithmic overlap in the mean velocity, are found to represent the high-Reynolds-number data to within 0.5%, and yield a value for the von Kármán constant that is consistent with the mean velocity profiles themselves. The use of a generalized logarithmic law in the mean velocity is also examined. A general friction factor relationship is proposed that predicts all the data to within 1.4% and agrees with the Blasius relationship for low Reynolds numbers to within 2.0%.
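For reference, the classical Prandtl relationship referred to in this abstract has the standard smooth-pipe form (quoted here from the general literature, not from the paper itself):
\[
\frac{1}{\sqrt{\lambda}} = 2.0\,\log_{10}\!\big(Re_D\sqrt{\lambda}\big) - 0.8,
\]
where $\lambda$ is the Darcy friction factor and $Re_D$ the diameter-based Reynolds number; the "new constants" mentioned above are re-fits of the 2.0 and 0.8 in this relation.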
Article Full-text available In this paper, it is demonstrated by DNS of turbulent rough channels that a proportionality between $\tilde{u}_2'|_w$ (the wall-normal Reynolds stress $\tilde{u}_2'=\langle u_2'^2\rangle^{1/2}$ at the top of the roughness elements) and the roughness function does exist. This observation, confirmed by experiments, allows the derivation of a simple expression for the velocity profile in the log-region. This parameterization of rough walls through $\tilde{u}_2'|_w$ is suggested by the direct link between wall structures and $\partial^2\tilde{u}_2'^2/\partial x_2^2$. Identification of the wall structures, near smooth and different kinds of rough surfaces, demonstrates flow isotropization near rough walls, which is corroborated by profiles of $\partial^2\tilde{u}_2'^2/\partial x_2^2$ and depicted by visualizations of $\nabla^2 p$. The relationship between the roughness function and $\tilde{u}_2'|_w$ allows the derivation of a new kind of Moody diagram, useful in the prediction of friction factors of rough flows at high Reynolds numbers. Chapter Turbulent free shear flows occur if there are no walls directly at the flow. Figure 22.1 shows some examples: a free jet, a buoyant jet, a mixing layer with the free jet-boundary flow as a special case, and a wake flow. The corresponding laminar flows are treated in Sects. 7.2, 7.5, 10.5.4 and 12.1.5. The flow of a turbulent wall jet, which is a jet bounded on one side by a wall, is treated in Sect. 22.8 (the laminar wall jet is discussed in Sect. 7.2.7). Article The streamwise mean velocity profile in a turbulent boundary layer is classically described as the sum of a log law extending all the way to the edge of the boundary layer and a wake function. While there is theoretical support for the log law, the wake function, defined as the deviation of the measured velocity profile from the log law, is essentially an
by the qc straight away.

#693 posted by necros [174.113.119.133] on 2012/03/10 19:39:50
right, i totally forgot about that! i still had that .doNotRemove trick from before and i found that the entity is indeed being removed by something else. thanks, i'm off to track that down. :P

And  #694 posted by necros [174.113.119.133] on 2012/03/10 19:55:18
it's done. thanks! turns out i had some sloppy linked list clean up. i was clearing out the list but forgot to clean up the head link. :P

Not Qc...  #695 posted by necros [174.113.119.133] on 2012/03/17 06:23:27
this is java, but it's more of a conceptual thing... finally started working on the inheritance aspect of fgd files for my fgd<>def converter thingy... essentially, i have multiple lists of objects which represent key/val pairs. these lists can have duplicate entries from base classes that an entity inherits from. eg: you have 'monster_army' which inherits target/targetname fields from the base 'monster' class. what i'd need to do is, on demand, collapse the multiple lists into a single one. ie: when i 'get' a list element, i should iterate through a single apparent list:

monster_army has the fields:
0: B
1: C
2: D

monster has the fields:
0: E
1: C
2: A

should look like:
0: B
1: C
2: D
3: E
4: A

(duplicate C in 'monster' is ignored because 'monster_army' already had that field)

i can work through the logic of building a new list myself, but the problem is that there are many times where i need to iterate through the (final) list, so every time i .get(i), i'll have to rebuild this list which feels wrong. is there maybe some more complex data structure that would work better for this? i'm thinking maybe trees due to their non-linear nature? unfortunately, i've never really learnt data structures more complex than a simple linked list. :S or maybe there's some absurdly easy solution and i'm just not seeing it.

#696 posted by ericw [23.17.185.198] on 2012/03/17 07:13:59
you should be able to get away without building the final list. I'd store the key/value pairs in a hash table instead of a list of key/value pair objects, and have an object for each "quakec class". then set up a recursive get() function which first checks the object-being-called's key/value pair mapping (e.g. Grunt), and if the requested key wasn't found, call the get() method on the superclass (e.g. Monster) (or return null if there is no superclass - so for a key requested that isn't in either Grunt or Monster). something like this:

class EntityClass {
    EntityClass superclass;
    HashMap<String, Object> fields;

    Object get(String key) {
        if (fields.containsKey(key)) {
            // the requested key is stored directly in our class
            return fields.get(key);
        } else {
            // the requested key is not in our class, try searching
            // our superclass.
            if (superclass != null) {
                return superclass.get(key);
            } else {
                return null;
            }
        }
    }
}

hope that helps.. btw here's the javadoc for HashMap. http://docs.oracle.com/javase/1.5.0/docs/api/java/util/HashMap.html

#697 posted by czg [83.253.15.82] on 2012/03/17 08:55:39
If you're only storing the keys and no value with them, I'd recommend using a Set instead of a Map. You should read up on the Java collections framework, as in most cases it already solves a problem for you.

To Lob Or Not To Lob, That Is The Question  #698 posted by Mike Woodham [86.174.73.229] on 2012/03/17 10:30:47
I have a Lavaman in my map (FMB_BDG) and he does a good job except I notice that there is a point where I can stand and the missiles go over my head.
If I move closer, they hit me and if I move further away they hit me up to a point where they drop in front of me. I can see that MOVETYPE_BOUNCE seems to affect the 'lobiness', and velocity_z seems to affect the height of the lob. What is affecting the gap between the hit range and the no-hit range? I am not too worried about the far distance as the missile falling short is not an issue because I can contain the player, but the over-me-'ead-john lob is not good. If I change to MOVETYPE_MISSILE and ignore velocity_z I get a straight line missile a la roquette, but this is not good where the player is higher or lower than the Lavaman as dodging becomes too easy. Are there any good web-sites that explain weapon behaviour (in short sentences with no long words, and mathematics that any four year old can understand)?

Ericw++  #699 posted by SleepwalkR [85.178.55.133] on 2012/03/17 12:58:34
This is also how I would do it. Note though that the keys will be unordered. If you need them to be sorted in some way, consider a sorted map implementation, e.g. TreeMap.

Hash Maps  #700 posted by Preach [77.98.165.95] on 2012/03/17 13:43:35
I think everyone is right on Necros' problem that using sets/maps is the way to go, but I think that the implementations so far have been backwards: So far we've had how to check if a specific element is a member of one or more sets, but what necros is looking for is the ability to list all the unique elements in a collection of sets, omitting duplicates. The key to doing this is set union. Have a set of properties for the monster, a set of properties for the grunt. When it comes time to output the combined list, use the addAll method to build the union set, which omits duplicates.

Lobbed Projectiles  #701 posted by Preach [77.98.165.95] on 2012/03/17 14:15:16
The most important mathematical idea for understanding projectiles is being able to imagine the movement in just one direction, to ignore all the other motion. In particular imagining just how the height changes is important, but it's also the most difficult. It's also quite hard to deal with the idea of projectiles that can go north-south and east-west. So let's just fix our boss facing north, and think about the horizontal movement of the projectile first. It turns out that the northwards movement of the projectile is exactly the same for MOVETYPE_MISSILE and MOVETYPE_BOUNCE. We fire it with some speed to the north, and it keeps going at a constant speed. Most importantly it takes the same amount of time to come in line with the player. Now we can think about how the height changes. The MOVETYPE_BOUNCE has exactly the same behaviour as a falling player. Measure the amount of time the MOVETYPE_MISSILE takes to hit the player. If a player could fall from the height the projectile is thrown at down to the height of the target player in that time, then the MOVETYPE_BOUNCE will score a hit; otherwise the shot will sail over his head. So what can we do if it does sail over his head? Well, one trick is to start the projectile with a negative velocity_z, so it is already moving downwards. This will make it fall further during the flight and hopefully hit the player. Alternatively we can reduce the speed it travels north at. This will give a longer flight time and so longer for it to fall. We could combine these two ideas if needed. Calculating exactly how much adjustment is needed is where you have an unavoidable level of fairly grisly maths.
I'm sure I wrote something about this particular problem but I can't remember if I finished it and ever posted it. You can probably get a good enough function by just measuring the threshold at which projectiles start going overhead, and applying one of the two above fixes when the player is that close (using trial and improvement to get
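Following up on the falling-time argument in the two posts above with rough numbers (an illustrative worked example; the figures assume Quake's default sv_gravity of 800 units/s^2):

t = d / v            (flight time to cross the horizontal distance d at launch speed v)
drop = 0.5 * 800 * t^2   (how far a MOVETYPE_BOUNCE projectile falls in that time)

So a lob fired at 600 units/s towards a player 400 units away flies for t = 2/3 s and falls about 0.5 * 800 * (2/3)^2 = 178 units; it sails overhead whenever the launch point is more than roughly 178 units above the target. Reducing the launch speed or starting with a negative velocity_z both increase the effective drop, exactly as described above.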
\section{Introduction} A posteriori error estimation has been an indispensable tool in handling computationally challenging problems. For elliptic partial differential equations (PDEs) consisting of terms characterizing diffusion, reaction and convection, the solution may display a strong interface singularity, interior or boundary layers, etc., due to discontinuous coefficients or terms at significantly different scales. Consequently, it is a challenging task to design a \emph{general} a posteriori error estimator that is accurate and robust enough to resolve local behaviors of the exact solution without expending excessive computational resources. Explicit residual estimators are directly related to the error and have been well studied since the 1970s (cf. \cite{babuska1978,babuska1987,verf1994,bernardi2000,petzoldt2002,verf1998reaction,verf1998confusion,verf2005confusion,verf2015confusion}). They are easy to compute, applicable to a large class of problems, valid for higher order elements, etc. More importantly, \emph{robust} residual estimators have been proposed for various problems. For diffusion problems with discontinuous coefficients, \cite{bernardi2000,petzoldt2002} established robustness with respect to the coefficient jump under the monotonicity or quasi-monotonicity assumption on the diffusion coefficient. For singularly perturbed reaction-diffusion problems, Verf{\"u}rth \cite{verf1998reaction} pioneered a residual estimator that is robust with respect to the size of the reaction. For convection-dominated problems, there is still debate on how to choose a suitable norm to measure the approximation error (cf. \cite{stynes2005confusion, verf1998confusion, sangalli2001confusion, sangalli2005norm, verf2005confusion, sangalli2008confusion}). Sangalli \cite{sangalli2005norm,sangalli2008confusion} proposed a norm incorporating the standard energy norm and a seminorm of order $1/2$ and developed the a posteriori error analysis in the one-dimensional setting. Verf{\"u}rth \cite{verf2005confusion} introduced a norm incorporating the standard energy norm and a dual norm of the convective derivative. With respect to this norm, the explicit residual estimator in \cite{verf2005confusion} was proved to be robust. Moreover, it was shown in \cite{verf2015confusion} that the framework of residual estimators is applicable to various stabilization schemes. Those developments make residual estimators competitive when dealing with challenging problems. One drawback, however, is that residual estimators tend to overestimate the true error by a large margin (cf. \cite{carstensen2010competition,localL2}). This calls for an estimator as general as the residual estimator but with improved accuracy. Recovery-based estimators, e.g., the Zienkiewicz--Zhu (ZZ) estimator and its variations (cf. \cite{ZZ1987,ZZ1992,purdue1972,carstensen2004,nagazhang2005,bankxuzheng,ZZsafeguard}, etc.), are quite popular in the engineering community. However, unlike residual estimators, their robustness with respect to issues like coefficient jumps and dominated convection or reaction has not been emphasized or studied in detail yet (cf. \cite{ovall2006}). On coarse meshes, it is known that ZZ-type estimators are in general unreliable, and counterexamples can easily be constructed where the estimator vanishes but the true error is large (cf. \cite{ainsworthoden2000book,localL2}).
For linear elements, \cite{ZZsafeguard,ZZ2006} adds two additional terms (one of them is the element residual) to the ZZ estimator to ensure reliability on coarse meshes. For higher order elements, however, a straightforward extension of the original ZZ estimator \cite{ZZ1987,ZZ1992} usually fails, and developing a viable estimator is nontrivial. For example, Bank, Xu, and Zheng in \cite{bankxuzheng} recently introduced a recovery-based estimator for Lagrange triangular elements of degree $p$, and their estimator requires recovery of all partial derivatives of $p^{\rm th}$ order instead of the gradient. Recently, the so-called \emph{hybrid estimator} was introduced in \cite{localL2,caizhang2010} for diffusion problems with discontinuous coefficients. The explicit hybrid estimator shares all advantages of the robust residual estimator \cite{bernardi2000,petzoldt2002}, and numerical results indicate that the hybrid estimator is more accurate than the residual estimator (cf. \cite{localL2}). This opens the door to finding an alternative to the residual estimator with improved accuracy. Thus one may ask whether it is possible to construct hybrid estimators for more general problems and whether the hybrid estimator remains more accurate than the residual estimator. In this manuscript, we introduce the hybrid estimator as well as flux recoveries for convection-diffusion-reaction equations. In the diffusion-dominated regime, the flux recovery as well as the hybrid estimator is a natural extension of the one in \cite{localL2}. In the convection/reaction-dominated regime, the flux recovery in each element depends on the size of the diffusion. Roughly speaking, in elements with resolved diffusion (see Section \ref{sub:flux1}), the recovered flux is the same as in the diffusion-dominated case; in elements where the diffusion is not resolved, inspired in part by the method of Ainsworth and Vejchodsk{\'y} \cite[Section 3.4]{ainsworth2011confusion}, the recovered flux is defined piecewise in each element (see Section \ref{sub:flux2}). The hybrid estimator in the convection/reaction-dominated regime is defined analogously to the one in \cite{localL2}, with proper weights from \cite{verf2005confusion}. In each regime, we prove that the hybrid estimator is equivalent to the robust residual estimator (for example, \cite{verf2005confusion} for the convection/reaction-dominated regime), and the robustness then follows immediately from that of the residual estimator. The hybrid estimator is explicit and valid for higher order elements. Various numerical results show that, compared to the explicit residual estimator, the hybrid estimator is more accurate and the corresponding effectivity index is less sensitive to the size of the reaction. The rest of the manuscript is organized as follows. The model problems and finite element discretizations are introduced in Section \ref{sec:modelConfusion}. Section \ref{sec:etaConfusion} collects results on robust residual estimators. After the flux recovery presented in Section \ref{sec:fluxConfusion}, the hybrid estimator is defined in Section \ref{sec:hybridConfusion}, along with robust a posteriori error estimates. Section \ref{sec:proofConfusion} gives the proof of the local equivalence between the residual estimator and the hybrid estimator. Numerical results are shown in Section \ref{sec:numericalConfusion}.
\section{Problems and Discretizations} \label{sec:modelConfusion} Let $\Omega$ be a polygonal domain in $\mathbb{R}^d\,$ ($d=2,\, 3$) with Lipschitz boundary $\partial\Omega$ consisting of two disjoint components $\Gamma_D$ and $\Gamma_N$. By convention, assume that $\text{diam}(\Omega) = O(1)$. Consider the stationary convection-diffusion-reaction equation: \begin{equation} \label{eq:modelConfusion} \left\{ \begin{alignedat}{2} -\text{div}(\alpha\nabla u) + \vect{a}\cdot\nabla u + bu &= f,\quad && \text{in} \;\; \Omega,\\ u &= 0, \quad && \text{on} \;\; \Gamma_D,\\ -\alpha\nabla u\cdot\boldsymbol{n} &= g_{_N}, \quad && \text{on} \;\; \Gamma_N, \end{alignedat} \right. \end{equation} with $\alpha(x) \geq \delta$, for almost all $x \in\Omega$ and for some constant $\delta>0$. Assume that: \begin{enumerate}[({A}1)] \item $\vect{a}\in W^{1,\infty}(\Omega)^d$ and $b\in L^{\infty}(\Omega)$; \item there are two constants $\beta\geq 0$ and $c_b\geq 0$, independent of $\alpha$, such that \[ b-\frac{1}{2}\text{div}\,\, \vect{a} \geq \beta \;\text{in}\; \Omega \quad \text{and} \quad \twonorm{b}_{\infty} \leq c_b\beta; \] \item $\text{meas}(\Gamma_D)>0$ and $\Gamma_D$ contains the inflow boundary \[ \{ x\in\partial\Omega: \vect{a}(x)\cdot \boldsymbol{n}(x) < 0 \}. \] \end{enumerate} Depending on the magnitude of $\alpha$ (with respect to $\vect{a}$ and $b$), two regimes are studied in this paper: \begin{enumerate} \item \textbf{diffusion-dominated regime}: there exists a constant $C_b \geq 0$ such that \[ |\vect{a}(x)/\alpha(x)|\leq C_b \quad\text{and}\quad |b(x)/\alpha(x)|\leq C_b \quad \text{for almost all } x\in\Omega; \] \item \textbf{convection/reaction-dominated regime}: $\alpha(x)\equiv\epsilon\ll 1$ for a constant $\epsilon>0$. This is the so-called singularly perturbed problem. \end{enumerate} Let \[ H^1_D(\Omega):= \{v\in H^1(\Omega):v|_{\Gamma_D}=0\}. \] Define the bilinear form on $H^1_D(\Omega)$ by \[ B(u,v) := (\alpha\nabla u, \nabla v) + (\vect{a}\cdot\nabla u, v) + (bu,v) ,\quad \forall\, u,v\in H^1_D(\Omega), \] where $(\cdot,\cdot)_S$ denotes the $L^2$ inner product on set $S$ and the subscript $S$ is omitted when $S=\Omega$. The $L^2$ norm on $S$ is denoted by $\twonorm{\cdot}_S$. The weak formulation of \eqref{eq:modelConfusion} is to find $u\in H^1_D(\Omega)$ such that \begin{equation} \label{eq:weakform} B(u,v) = (f,v) - (g_{_N},v)_{\Gamma_N}, \quad \forall\, v\in H^1_D(\Omega). \end{equation} It follows from integration by parts and the assumptions in (A2) and (A3) that, for any $v\in H^1_D(\Omega)$, \begin{equation} \label{eq:bgraduu} (\vect{a}\cdot\nabla v,v)+(bv,v) = \frac{1}{2}(v^2, \vect{a}\cdot\boldsymbol{n})_{\Gamma_N} + (v^2,b-\frac{1}{2}\text{div}\,\,\vect{a}) \geq \beta\twonorm{v}^2, \end{equation} where $\boldsymbol{n}$ denotes the unit outward vector normal to $\Gamma_N$. The energy norm induced by $B(\cdot,\cdot)$ is defined by \[ \enorm{v} = \left(\twonorm{\alpha^{1/2}\nabla v}^2 + \beta\twonorm{v}^2\right)^{1/2}, \quad \forall\, v\in H^1_D(\Omega), \] where $\enorm{\cdot}_S$ denotes the energy norm over $S$ and the subscript $S$ is omitted when $S=\Omega$. In particular, \eqref{eq:bgraduu} implies the coercivity $B(v,v) \geq \twonorm{\alpha^{1/2}\nabla v}^2 + \beta\twonorm{v}^2 = \enorm{v}^2$ for all $v\in H^1_D(\Omega)$. Let ${\mathcal{T}}$ be a regular triangulation of $\Omega$ (see, e.g., \cite{ciarletbook}).
Define the following sets associated with the triangulation ${\mathcal{T}}$: \[ \begin{aligned} \mathcal{N} &: \text{ the set of all vertices},\\ \mathcal{E} &: \text{ the set of all edges} (d=2) / \text{faces} (d=3),\\ \mathcal{E}_I &: \text{ the set of all interior edges} (d=2) / \text{faces} (d=3),\\ \mathcal{E}_D&: \text{ the set of all edges} (d=2) / \text{faces} (d=3) \text{ on } \Gamma_D,\\ \mathcal{E}_N&: \text{ the set of all edges} (d=2) / \text{faces} (d=3) \text{ on } \Gamma_N,\\ \mathcal{E}_K &: \text{ the set of edges} (d=2) / \text{faces} (d=3) \text{ in an element } K\in{\mathcal{T}}. \end{aligned} \] For a simplex $S\in{\mathcal{T}}\cup\mathcal{E}$, denote by $|S|$ and $h_S$ its measure and diameter, respectively. Denote by $R_K$ the inradius of $K\in{\mathcal{T}}$. The shape regularity of the triangulation requires the existence of a generic constant $C_0 > 1$ such that \begin{equation} \label{eq:shaperegular} h_K\leq C_0 R_K,\quad \forall\, K\in{\mathcal{T}} \end{equation} holds true for each mesh ${\mathcal{T}}$ in the adaptive mesh refinement procedure. We associate each $e\in\mathcal{E}$ with a unit normal $\boldsymbol{n}_e$, which is chosen as the unit outward normal if $e\subset \partial\Omega$. Denote by
that can be expressed as the product of polynomials in the creation and annihilation operators referring to the two wells. As shown earlier, with respect to this bipartition, a generic separable mixed state $\rho$ is diagonal in the Fock basis (\ref{7}), and can thus be written as in (\ref{9}). For such a state, the quantum Fisher information can be explicitly computed: \begin{equation} F\big[\rho, J_n\big]= (n_x^2+n_y^2)\bigg[N+2\sum_{k=0}^N p_k\, k(N-k) -4\sum_{k=0}^N {p_k p_{k+1}\over p_k+p_{k+1}} (k+1)(N-k)\bigg]\ . \label{29} \end{equation} In particular, in the case of a pure state, $\rho_k=|k, N-k\rangle\langle k,N-k|$, this expression turns out to be proportional to the variance of $J_n$ ({\it cf.} (\ref{25})), \begin{equation} F\big[\rho_k, J_n\big]= (n_x^2+n_y^2)\big[N+2 k(N-k)\big]\ , \label{30} \end{equation} and can always be made greater than $N$ with a suitable choice of $k$. More specifically, when $\vec n$ lies in the plane orthogonal to the $z$ direction, so that $n_x^2+n_y^2=1$, one finds that \hbox{$F\big[\rho_k, J_n\big]>N$} for all $0<k<N$. Recalling (\ref{24}), this implies that in this case the phase uncertainty is smaller than $1/\sqrt{N}$, thus beating the shot-noise-limit. Actually, when the two wells are filled by the same number of particles, so that the system is in the state $\rho_{N/2}=|N/2, N/2\rangle\langle N/2,N/2|$, one can even get close to the Heisenberg limit, since in this case: \begin{equation} F\big[\rho_{N/2}, J_n\big]= {N^2\over2}+N\ . \label{31} \end{equation} Therefore, unlike in the case of distinguishable particles, the quantum Fisher information can attain a value greater than $N$ even with initial states that are separable with respect to the spatial bipartition. As a consequence, in general, the inequality (\ref{26}) no longer plays the role of a separability condition when dealing with systems of identical particles. In spite of this, we have just seen that the accuracy with which the phase change can be determined in interferometers fed with such separable states can still beat the shot-noise-limit, provided that the rotation involved in the apparatus is not directed along the $z$ axis.% \footnote{When ${\vec n}=(0,0,1)$, the quantum Fisher information (\ref{29}) vanishes. From the physical point of view, this result follows from the fact that the Fock states in (\ref{7}) are eigenstates of the operator $J_z$, so that separability is preserved by rotations generated by it: $e^{i\theta J_z} |k, N-k\rangle= e^{i\theta(2k-N)} |k, N-k\rangle$. In other words, in order to take advantage of the improvement in the accuracy of the phase determination, one has to use an experimental setup for which ${\vec n}\neq(0,0,1)$.} As observed before, such rotations are realized by operators that are non-local with respect to the spatial bipartition.% \footnote{In fact, only the exponential of $J_z$ happens to be a $({\cal A}_1, {\cal A}_2)$-local operator, as shown in (\ref{17}).} Therefore, given the $({\cal A}_1, {\cal A}_2)$ bipartition, it is not the entanglement of the states fed into the interferometer that helps overcome the shot-noise-limit in the phase estimation accuracy; rather, it is the non-local character of the rotations operated by the apparatus on initially separable states that allows $\Delta\theta$ to be smaller than $1/\sqrt{N}$, with the possibility of closely approaching the Heisenberg $1/N$ limit.
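As a quick consistency check (immediate from the formulas above), setting $k=N/2$ and $n_x^2+n_y^2=1$ in (\ref{30}) indeed reproduces (\ref{31}):
\begin{equation*}
F\big[\rho_{N/2}, J_n\big]= N+2\cdot\frac{N}{2}\left(N-\frac{N}{2}\right)= N+\frac{N^2}{2}\ .
\end{equation*}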
This result can be physically interpreted in another, equivalent way, which will shed further light on the notion of separability when dealing with identical particles. The idea is to change description (and thus bipartition) through a suitable Bogolubov transformation, following the discussion at the end of the previous Section. Take the unit vector $\vec n$ to lie in the plane orthogonal to the $z$ axis, so that one can write $\vec{n}=(\cos\varphi,\sin\varphi,0)$, $\varphi\in[0,2\pi]$. Then, the generator $J_n$ in (\ref{23}) assumes the form: \begin{equation} J_n={1\over 2}\Big( e^{-i\varphi}\, a_1^\dag a_2 +e^{i\varphi}\, a_1 a_2^\dag\Big)\ , \label{32} \end{equation} which clearly shows that its exponential is non-local in the $({\cal A}_1, {\cal A}_2)$ bipartition. Nevertheless, it can become local in a different, suitably chosen bipartition. To this aim, let us introduce a new set of mode operators $b_i^\dag$, $b_i$, $i=1,2$ through the following Bogolubov transformation, that slightly generalizes the one in (\ref{18}): \begin{equation} b_1={a_1+e^{-i\varphi}a_2\over\sqrt{2}}\ ,\qquad b_2={a_1-e^{-i\varphi}a_2\over\sqrt{2}}\ , \label{33} \end{equation} together with their hermitian conjugates. Substituting (\ref{33}) into (\ref{32}), one finds that in this new representation: \begin{equation} J_n={1\over 2}\big( b_1^\dag b_1 - b_2^\dag b_2\big)\ , \label{34} \end{equation} so that the unitary operator that implements the rotation around $\vec n$, \begin{equation} e^{i\theta J_n}=e^{i\theta b_1^\dag b_1/2}\ e^{-i\theta b_2^\dag b_2/2}\ , \qquad e^{i\theta b_1^\dag b_1/2}\in {\cal B}_1\ ,\quad e^{-i\theta b_2^\dag b_2/2}\in {\cal B}_2\ , \label{35} \end{equation} is indeed local with respect to the new bipartition $({\cal B}_1, {\cal B}_2)$, where ${\cal B}_1$ is the subalgebra of polynomials in $b_1^\dag$, $b_1$, while ${\cal B}_2$ is the one of polynomials in $b_2^\dag$, $b_2$. In this new language, the state $|N/2,N/2\rangle$, representing the situation of equal filling of the two wells, is no longer separable with respect to this new bipartition; in fact, one finds ({\it cf.} (\ref{21})): \begin{equation} |N/2,N/2\rangle={e^{iN\varphi /2}\over 2^{N/2} (N/2)!}\sum_{k,l=0}^{N/2} {N/2 \choose k} {N/2 \choose l} (-1)^{N/2-l}\, \big({b_1^\dag}\big)^{k+l}\, \big({b_2^\dag}\big)^{N-k-l}|0\rangle\ , \label{36} \end{equation} while, as seen in the previous Section, any pure $({\cal B}_1, {\cal B}_2)$-separable state must be a Fock state of the form ${b_1^\dag}^m\, {b_2^\dag}^n |0\rangle$. Despite these changes, the value of the quantum Fisher information for the initial state $|N/2,N/2\rangle$ and the observable $J_n$ is unchanged and still given by (\ref{31}), since it does not depend on the representation used to compute it. This means that if one is able to build an experimental setup, together with a suitable measure procedure, which can be modelled in terms of the ``energy'' modes $b_i^\dag$, $b_i$ instead of the original ``spatial'' modes $a_i^\dag$, $a_i$, then the accuracy $\Delta\theta$ with which the phase $\theta$ may be determined can still approach the Heisenberg limit. In such a case, the improvement in sensitivity with respect to the standard shot-noise-limit is due to the $({\cal B}_1, {\cal B}_2)$-entanglement of the initial state $|N/2,N/2\rangle$ and not to the non-locality of the transformation that takes place inside the apparatus.
\section{Discussion} The standard notion of separability for a many-body system made of $N$ distinguishable particles is based on the natural tensor product structure of the Hilbert space in terms of the single-particle Hilbert spaces: a state (density matrix) of the $N$-body system is separable if it can be written as a convex combination of products of single-particle density matrices. In the case of a system of identical particles, due to the symmetrization (or antisymmetrization, for fermions) principle, this definition becomes meaningless. It can be replaced by a generalized one, that makes use of a ``dual'' language, focusing on the algebra $\cal A$ of operators of the system instead of the set of its quantum states. One fixes a partition of $\cal A$ in terms of a set of commuting subalgebras and defines as separable those states for which the associated expectation values of any factorized element of this partition can be written as a convex combination of products of expectation values. The notion of separability is thus linked to a specific partition of $\cal A$, so that a given many-body state can be separable with respect to one partition, but entangled with respect to a different one. Nevertheless, this generalized definition of separability reduces to the familiar one expressed in terms of the single-particle tensor product structure in the case of a system of distinguishable particles. We have applied these considerations to the specific case of a system of $N$ ultracold atoms trapped in an optical double-well potential, whose dynamics is very well captured by a two-mode Bose-Hubbard Hamiltonian. As we have seen, the second quantized language makes the application to this case of the new, generalized notion of separability very transparent and further allows the discussion of various related issues in quantum metrology. In fact, through state preparation and trapping potential control, this system has been shown to realize a highly sensitive Mach-Zehnder interferometer, able to measure phase differences with a very high accuracy. Quite in general, the square error $(\Delta\theta)^2$ in the determination of the phase difference $\theta$ accumulated inside the interferometer is bounded by the inverse of the quantum Fisher information $F$, whose value cannot exceed $N^2$. This gives the smallest possible error in the estimation of the phase, $\Delta\theta\geq1/N$, the Heisenberg limit, which, for large $N$, is a huge improvement with respect to the standard shot-noise-limit, $\Delta\theta\geq1/\sqrt{N}$. In the case of a system of distinguishable particles, it has been proven that in order to beat the shot-noise-limit in the accuracy of the phase determination one needs to feed the interferometer with suitably $N$-body entangled states. Indeed, one can show that for all separable states one has: $F\leq N$; as a consequence, the condition $F>N$ signals the presence of entanglement and at the same time allows $\Delta\theta$ to be
on Calabi-Yau 3-folds I give an introduction to Donaldson-Thomas type curve counting invariants on Calabi-Yau 3-folds, and explain the recent developments. MOSW06 5th May 2011 16:30 to 17:30 Moduli space of bundles and Kloosterman sums The relation between analytic properties of modular forms and arithmetic results has led to many famous results and conjectures. In the geometric analogue of this conjectural relation - called the geometric Langlands correspondence - quotients of the upper half plane are replaced by moduli spaces of bundles on the curve. We will try to motivate this analogy. Since the geometry of these spaces is complicated in general, very few explicit examples of such modular forms are known. In joint work with B.C. Ngô and Z. Yun - which was motivated by work of Gross and Frenkel - we found an explicit series of such forms which turn out to be closely related to classical Kloosterman sums. This gives an example of the (wild) geometric Langlands correspondence. MOS 12th May 2011 10:30 to 11:30 Partially positive line bundles Define a line bundle L on a projective variety to be q-ample, for a natural number q, if tensoring with high powers of L kills coherent sheaf cohomology above dimension q. Thus 0-ampleness is the usual notion of ampleness. Intuitively, a line bundle is q-ample if it is positive "in all but at most q directions". We prove some of the basic properties of q-ample line bundles. Related ideas have been used by Ottem to define what an "ample subvariety" of any codimension should mean. MOS 12th May 2011 15:30 to 16:30 Metric properties of spaces of stability conditions The space of Bridgeland stability conditions Stab(X) on the derived category D(X) of a variety X has a natural metric with respect to which the actions of the complex numbers C and of the automorphisms Aut(D(X)) of the category are isometries. Under mild assumptions this metric is complete. For example, when X is a complex projective curve with genus >0 one can compute directly that the quotient Stab(X)/C is isometric to the hyperbolic plane. I will discuss these results and some elementary consequences. MOS 24th May 2011 10:00 to 11:00 A new approach for the Toledo invariant I shall explain a new approach for the Toledo invariant from the Higgs bundle point of view, coming from group theoretic properties of Hermitian symmetric spaces. This approach covers all cases uniformly, including the exceptional ones. I shall give some applications. Joint work with O. García-Prada and R. Rubio. MOS 24th May 2011 11:30 to 12:30 Vector bundles on the algebraic 5-sphere and punctured affine 3-space The 5-dimensional complex sphere X is isomorphic to SL_3/SL_2, which admits a fibration p: X-->Y to the 3-dimensional punctured affine space Y=C^{3}\{0} with C^{2} fibres. It was shown by Fabien Morel that vector bundles on a smooth affine variety are determined by A^{1}-homotopy classes of maps to BGL. The fibration p is an A^{1}-homotopy weak equivalence but the Y above is not affine, so it is natural to look for non-isomorphic vector bundles on Y with isomorphic pull-backs to X. We give interesting examples of such bundles of any rank bigger than 1. The examples are produced from vector bundles on the projective space P=P^2. MOS 24th May 2011 15:30 to 16:30 I Cheltsov On simple finite subgroups in the Cremona group of rank 3 The Cremona group of rank N is the group of birational selfmaps of the projective space of dimension N.
Recently Yura Prokhorov (Moscow) classified all finite simple subgroups in the Cremona group of rank 3 (this answers a question of Serre). I will show how to apply Nadel-Shokurov vanishing and Kawamata subadjunction to study conjugacy classes of the subgroups classified by Prokhorov. In particular, I give a partial answer to another question of Serre on normalizers of finite simple subgroups in the Cremona group of rank 3. This is a joint work with Costya Shramov (Moscow). MOSW07 26th May 2011 11:00 to 12:00 Algebraic approach to tensor product theorems The aim of the talk is to give the general geometric invariant theoretic approach to proving tensor product theorems of semi-stable objects as highlighted in the work of Bogomolov and Ramanan & Ramanathan. This approach will be used to give algebraic proofs of the tensor product theorem for semi-stable Hitchin pairs over arbitrary ground fields. Towards this, one needs to develop a purely algebraic notion of a Hitchin scheme, an object dual in a certain sense to a Hitchin pair. MOSW07 26th May 2011 13:45 to 14:45 Representations of the fundamental group and geometric loci of bundles Starting with Weil's seminal work on vector bundles, the relation between representations of the fundamental group and vector bundles has been studied from various points of view. The work of Narasimhan and Seshadri made clear the relation of unitary representations to polystable bundles. One of Nigel Hitchin's results pertains to representations into the split real forms of semi-simple groups. I will be giving an overview of these, ending up with my effort, in collaboration with Oscar Garcia-Prada, to understand these as relating to fixed point varieties under some natural involutions on the moduli of Higgs pairs. MOSW07 26th May 2011 15:15 to 16:15 On the modular interpretation of the Nagaraj-Seshadri locus We will survey constructions of moduli spaces for principal bundles on nodal curves over the complex numbers. This includes a moduli space for torsion free sheaves A of rank r and degree zero on an irreducible nodal curve X which are endowed with a homomorphism d: ∧^r A → O_X which is an isomorphism away from the node. It is a degeneration of the moduli space of SL_r(C)-bundles on a smooth curve. In many cases, this moduli space puts a scheme structure on the "Nagaraj-Seshadri locus" inside the moduli space of semistable torsion free sheaves of rank r and degree zero. MOSW07 26th May 2011 16:30 to 17:30 New results in higher rank Brill-Noether theory In the last 12 months, many new examples of rank 2 bundles (and some of rank 3 bundles) have been discovered. I will describe briefly the links with Koszul cohomology and the moduli spaces of curves, but most of the talk will be devoted to the construction of the bundles and their relevance to higher rank Brill-Noether theory. MOS 31st May 2011 10:00 to 11:00 Orthogonal and symplectic parabolic bundles In this work with S. Majumder and M. L. Wong, we investigate orthogonal and symplectic bundles, with parabolic structure, over a curve. MOS 31st May 2011 11:30 to 12:30 A Bertram Birational models of the Hilbert scheme of points on $P^2$ are moduli of Bridgeland-stable complexes The minimal model program applied to the Hilbert scheme of points on $P^2$ yields a series of birational models, followed by a Fano fibration. These birational models are themselves moduli spaces, but not (generally) of sheaves. Rather, they are moduli spaces of Bridgeland-stable objects in the derived category.
Moreover, each of them may be identified with moduli of quiver representations of the quiver associated to $P^2$ and each wall-crossing is a GIT wall-crossing for a particular representation. This is joint work with Izzet Coskun and Daniele Arcara. MOS 2nd June 2011 14:00 to 15:00 U Bruzzo Uhlenbeck-Donaldson compactification for framed sheaves We study moduli spaces of framed sheaves on projective surfaces and introduce a "partial compactification" a la Uhlenbeck-Donaldson for them. MOS 2nd June 2011 15:30 to 16:30 Stable Schottky relations The Schottky problem is to find ways of distinguishing Jacobians from arbitrary principally polarized abelian varieties. From a classical viewpoint (that of this talk) the aim is to find Siegel modular forms that vanish along the Jacobian locus. In this talk we discuss what happens in the stable situation, that is, when the genus increases arbitrarily. MOS 6th June 2011 15:30 to 16:30 S Keel Mirror symmetry for affine Calabi-Yaus I will explain my recent conjecture joint with Hacking and Gross, and theorem in dimension two, which gives the mirror to an affine CY manifold of any dimension as the Spec of an explicit algebra: The
        self.gridwarper = UniformBoxWarp(0.24)  # Don't worry about this, it was added to ensure compatibility with another model. Shouldn't affect performance.

    def forward(self, input, z_geo, z_app, ray_directions, **kwargs):
        frequencies_geo, phase_shifts_geo = self.geo_mapping_network(z_geo)
        frequencies_app, phase_shifts_app = self.app_mapping_network(z_app)
        return self.forward_with_frequencies_phase_shifts(
            input, frequencies_geo, frequencies_app,
            phase_shifts_geo, phase_shifts_app, ray_directions, **kwargs)

    def forward_with_frequencies_phase_shifts(self, input, frequencies_geo, frequencies_app,
                                              phase_shifts_geo, phase_shifts_app,
                                              ray_directions, **kwargs):
        frequencies_geo = frequencies_geo * 15 + 30
        frequencies_app = frequencies_app * 15 + 30
        # TODO: why is this transformation applied?
        input = self.gridwarper(input)
        x = input
        for index, layer in enumerate(self.network):
            start = index * self.hidden_dim
            end = (index + 1) * self.hidden_dim
            x = layer(x, frequencies_geo[..., start:end], phase_shifts_geo[..., start:end])
        sigma = self.final_layer(x)
        labels = self.label_layer_linear(x)
        # rbg = torch.cat([ray_directions, input, labels], dim=-1)
        rbg = torch.cat([ray_directions, x], dim=-1)
        for index, layer in enumerate(self.color_layer_sine):
            start, end = index * self.hidden_dim, (index + 1) * self.hidden_dim
            rbg = layer(rbg, frequencies_app[..., start:end], phase_shifts_app[..., start:end])
        rbg = torch.sigmoid(self.color_layer_linear(rbg))
        return torch.cat([labels, rbg, sigma], dim=-1)


class SIRENBASELINESEMANTICDISENTANGLE(nn.Module):
    """Same architecture as the TALLSIREN baseline, but uses two latent codes (geometry and
    appearance) and additionally renders semantic maps."""

    def __init__(self, input_dim=2, z_geo_dim=100, z_app_dim=100, hidden_dim=256, output_dim=1, device=None):
        super().__init__()
        self.device = device
        self.input_dim = input_dim
        self.z_geo_dim = z_geo_dim
        self.z_app_dim = z_app_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.network = nn.ModuleList([
            FiLMLayer(3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.final_layer = nn.Linear(hidden_dim, 1)
        self.color_layer_sine = nn.ModuleList([
            FiLMLayer(hidden_dim + 3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.color_layer_linear = nn.Sequential(nn.Linear(hidden_dim, 3))
        self.geo_mapping_network = CustomMappingNetwork(z_geo_dim, 256, len(self.network) * hidden_dim * 2)
        self.app_mapping_network = CustomMappingNetwork(z_app_dim, 256, len(self.color_layer_sine) * hidden_dim * 2)
        self.label_layer_linear = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Linear(hidden_dim, self.output_dim - 4))
        self.network.apply(frequency_init(25))
        self.final_layer.apply(frequency_init(25))
        self.color_layer_sine.apply(frequency_init(25))
        self.color_layer_linear.apply(frequency_init(25))
        self.label_layer_linear.apply(frequency_init(25))
        self.network[0].apply(first_layer_film_sine_init)
        self.gridwarper = UniformBoxWarp(0.24)  # Don't worry about this, it was added to ensure compatibility with another model. Shouldn't affect performance.
    def forward(self, input, z_geo, z_app, ray_directions, **kwargs):
        frequencies_geo, phase_shifts_geo = self.geo_mapping_network(z_geo)
        frequencies_app, phase_shifts_app = self.app_mapping_network(z_app)
        return self.forward_with_frequencies_phase_shifts(
            input, frequencies_geo, frequencies_app,
            phase_shifts_geo, phase_shifts_app, ray_directions, **kwargs)

    def forward_with_frequencies_phase_shifts(self, input, frequencies_geo, frequencies_app,
                                              phase_shifts_geo, phase_shifts_app,
                                              ray_directions, **kwargs):
        frequencies_geo = frequencies_geo * 15 + 30
        frequencies_app = frequencies_app * 15 + 30
        input = self.gridwarper(input)
        x = input
        for index, layer in enumerate(self.network):
            start = index * self.hidden_dim
            end = (index + 1) * self.hidden_dim
            x = layer(x, frequencies_geo[..., start:end], phase_shifts_geo[..., start:end])
        rbg = torch.cat([ray_directions, x], dim=-1)
        sigma = self.final_layer(x)
        labels = self.label_layer_linear(x)
        for index, layer in enumerate(self.color_layer_sine):
            start, end = index * self.hidden_dim, (index + 1) * self.hidden_dim
            rbg = layer(rbg, frequencies_app[..., start:end], phase_shifts_app[..., start:end])
        rbg = torch.sigmoid(self.color_layer_linear(rbg))
        return torch.cat([labels, rbg, sigma], dim=-1)


class SIRENBASELINESEMANTICDISENTANGLE_debug(nn.Module):
    """Same architecture as SIRENBASELINESEMANTICDISENTANGLE, except that a sigmoid is
    applied to the label output."""

    def __init__(self, input_dim=2, z_geo_dim=100, z_app_dim=100, hidden_dim=256, output_dim=1, device=None):
        super().__init__()
        self.device = device
        self.input_dim = input_dim
        self.z_geo_dim = z_geo_dim
        self.z_app_dim = z_app_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.network = nn.ModuleList([
            FiLMLayer(3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.final_layer = nn.Linear(hidden_dim, 1)
        self.color_layer_sine = nn.ModuleList([
            FiLMLayer(hidden_dim + 3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.color_layer_linear = nn.Sequential(nn.Linear(hidden_dim, 3))
        self.geo_mapping_network = CustomMappingNetwork(z_geo_dim, 256, len(self.network) * hidden_dim * 2)
        self.app_mapping_network = CustomMappingNetwork(z_app_dim, 256, len(self.color_layer_sine) * hidden_dim * 2)
        self.label_layer_linear = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Linear(hidden_dim, 19))  # 19 semantic labels
        self.network.apply(frequency_init(25))
        self.final_layer.apply(frequency_init(25))
        self.color_layer_sine.apply(frequency_init(25))
        self.color_layer_linear.apply(frequency_init(25))
        self.label_layer_linear.apply(frequency_init(25))
        self.network[0].apply(first_layer_film_sine_init)
        self.gridwarper = UniformBoxWarp(0.24)  # Don't worry about this, it was added to ensure compatibility with another model. Shouldn't affect performance.
    def forward(self, input, z_geo, z_app, ray_directions, **kwargs):
        frequencies_geo, phase_shifts_geo = self.geo_mapping_network(z_geo)
        frequencies_app, phase_shifts_app = self.app_mapping_network(z_app)
        return self.forward_with_frequencies_phase_shifts(
            input, frequencies_geo, frequencies_app,
            phase_shifts_geo, phase_shifts_app, ray_directions, **kwargs)

    def forward_with_frequencies_phase_shifts(self, input, frequencies_geo, frequencies_app,
                                              phase_shifts_geo, phase_shifts_app,
                                              ray_directions, **kwargs):
        frequencies_geo = frequencies_geo * 15 + 30
        frequencies_app = frequencies_app * 15 + 30
        input = self.gridwarper(input)
        x = input
        for index, layer in enumerate(self.network):
            start = index * self.hidden_dim
            end = (index + 1) * self.hidden_dim
            x = layer(x, frequencies_geo[..., start:end], phase_shifts_geo[..., start:end])
        rbg = torch.cat([ray_directions, x], dim=-1)
        sigma = self.final_layer(x)
        labels = torch.sigmoid(self.label_layer_linear(x))
        for index, layer in enumerate(self.color_layer_sine):
            start, end = index * self.hidden_dim, (index + 1) * self.hidden_dim
            rbg = layer(rbg, frequencies_app[..., start:end], phase_shifts_app[..., start:end])
        rbg = torch.sigmoid(self.color_layer_linear(rbg))
        return torch.cat([labels, rbg, sigma], dim=-1)


class SPATIALSIRENSEMANTICHD(nn.Module):
    """Same architecture as SPATIALSIRENSEMANTIC, but operating at a higher resolution."""

    def __init__(self, input_dim=2, z_dim=100, hidden_dim=256, output_dim=1, device=None):
        super().__init__()
        self.device = device
        self.input_dim = input_dim
        self.z_dim = z_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.max_batch_size = 2500
        self.network = nn.ModuleList([
            FiLMLayer(3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.final_layer = nn.Linear(hidden_dim, 1)
        self.label_layer_sine = FiLMLayer(hidden_dim, hidden_dim)
        self.label_layer_linear = nn.Sequential(nn.Linear(hidden_dim, 64))  # 64-dim label features
        self.color_layer_sine = FiLMLayer(hidden_dim + 3, hidden_dim)
        self.color_layer_linear = nn.Sequential(nn.Linear(hidden_dim, 64))
        self.mapping_network = CustomMappingNetwork(z_dim, 256, (len(self.network) + 2) * hidden_dim * 2)
        self.network.apply(frequency_init(25))
        self.final_layer.apply(frequency_init(25))
        self.label_layer_sine.apply(frequency_init(25))
        self.label_layer_linear.apply(frequency_init(25))
        self.color_layer_sine.apply(frequency_init(25))
        self.color_layer_linear.apply(frequency_init(25))
        self.network[0].apply(first_layer_film_sine_init)
        self.activation = nn.Softmax(dim=-1)
        self.gridwarper = UniformBoxWarp(0.24)  # Don't worry about this, it was added to ensure compatibility with another model. Shouldn't affect performance.
    def forward(self, input, z, ray_directions, **kwargs):
        frequencies, phase_shifts = self.mapping_network(z)
        n_batch, n_pixel = input.shape[:2]
        return self.forward_with_frequencies_phase_shifts(
            input, frequencies, phase_shifts, ray_directions, **kwargs)

    def forward_with_frequencies_phase_shifts(self, input, frequencies, phase_shifts,
                                              ray_directions, **kwargs):
        frequencies = frequencies * 15 + 30
        input = self.gridwarper(input)
        x = input
        for index, layer in enumerate(self.network):
            start = index * self.hidden_dim
            end = (index + 1) * self.hidden_dim
            x = layer(x, frequencies[..., start:end], phase_shifts[..., start:end])
        # The mapping network emits two extra frequency/phase blocks beyond the trunk:
        # one for the label branch and one for the color branch.
        start += self.hidden_dim
        end += self.hidden_dim
        sigma = self.final_layer(x)
        labels = self.label_layer_sine(x, frequencies[..., start:end], phase_shifts[..., start:end])
        # TODO: with / without softmax activation on the labels
        labels = self.label_layer_linear(labels)
        start += self.hidden_dim
        end += self.hidden_dim
        rbg = self.color_layer_sine(torch.cat([ray_directions, x], dim=-1),
                                    frequencies[..., start:end], phase_shifts[..., start:end])
        rbg = self.color_layer_linear(rbg)
        # rbg = torch.sigmoid(self.color_layer_linear(rbg))
        return torch.cat([labels, rbg, sigma], dim=-1)


class EmbeddingPiGAN128SEMANTICDISENTANGLE(nn.Module):
    """Smaller architecture with an additional cube of embeddings. Often gives better fine details."""

    def __init__(self, input_dim=2, z_geo_dim=100, z_app_dim=100, hidden_dim=128, output_dim=1, device=None):
        super().__init__()
        self.device = device
        self.input_dim = input_dim
        self.z_geo_dim = z_geo_dim
        self.z_app_dim = z_app_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.network = nn.ModuleList([
            FiLMLayer(32 + 3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.final_layer = nn.Linear(hidden_dim, 1)
        # self.color_layer_sine = FiLMLayer(hidden_dim + 3, hidden_dim)
        self.color_layer_sine = nn.ModuleList([
            FiLMLayer(hidden_dim + 3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.color_layer_linear = nn.Sequential(nn.Linear(hidden_dim, 3))
        self.geo_mapping_network = CustomMappingNetwork(z_geo_dim, 256, len(self.network) * hidden_dim * 2)
        self.app_mapping_network = CustomMappingNetwork(z_app_dim, 256, len(self.color_layer_sine) * hidden_dim * 2)
        self.label_layer_linear = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Linear(hidden_dim, hidden_dim),
            nn.Linear(hidden_dim, self.output_dim - 4),
        )
        self.network.apply(frequency_init(25))
        self.final_layer.apply(frequency_init(25))
        self.color_layer_sine.apply(frequency_init(25))
        self.color_layer_linear.apply(frequency_init(25))
        self.label_layer_linear.apply(frequency_init(25))
        self.network[0].apply(modified_first_sine_init)
        self.spatial_embeddings = nn.Parameter(torch.randn(1, 32, 96, 96, 96) * 0.01)
        # !! Important !! Set this value to the expected side-length of your scene. E.g. for faces,
        # heads usually fit in a box of side-length 0.24, since the camera has such a narrow FOV.
        # For other scenes with higher FOV, this probably needs to be bigger.
        self.gridwarper = UniformBoxWarp(0.24)

    def forward(self, input, z_geo, z_app, ray_directions, **kwargs):
        frequencies_geo, phase_shifts_geo = self.geo_mapping_network(z_geo)
        frequencies_app, phase_shifts_app = self.app_mapping_network(z_app)
        return self.forward_with_frequencies_phase_shifts(
            input, frequencies_geo, frequencies_app,
            phase_shifts_geo, phase_shifts_app, ray_directions, **kwargs)

    def forward_with_frequencies_phase_shifts(self, input, frequencies_geo, frequencies_app,
                                              phase_shifts_geo, phase_shifts_app,
                                              ray_directions, **kwargs):
        frequencies_geo = frequencies_geo * 15 + 30
        frequencies_app = frequencies_app * 15 + 30
        input = self.gridwarper(input)
        # Look up the learned per-point embedding from the 3D grid and concatenate it
        # with the (warped) input coordinates.
        shared_features = sample_from_3dgrid(input, self.spatial_embeddings)
        x = torch.cat([shared_features, input], -1)
        for index, layer in enumerate(self.network):
            start = index * self.hidden_dim
            end = (index + 1) * self.hidden_dim
            x = layer(x, frequencies_geo[..., start:end], phase_shifts_geo[..., start:end])
        rbg = torch.cat([ray_directions, x], dim=-1)
        sigma = self.final_layer(x)
        labels = self.label_layer_linear(x)
        for index, layer in enumerate(self.color_layer_sine):
            start, end = index * self.hidden_dim, (index + 1) * self.hidden_dim
            rbg = layer(rbg, frequencies_app[..., start:end], phase_shifts_app[..., start:end])
        rbg = torch.sigmoid(self.color_layer_linear(rbg))
        return torch.cat([labels, rbg, sigma], dim=-1)


class TextureEmbeddingPiGAN128SEMANTICDISENTANGLE(nn.Module):
    """Smaller architecture with an additional cube of embeddings. Often gives better fine details.
    The embeddings feed the color-prediction branch instead of the density network."""

    def __init__(self, input_dim=2, z_geo_dim=100, z_app_dim=100, hidden_dim=128, output_dim=1, device=None):
        super().__init__()
        self.device = device
        self.input_dim = input_dim
        self.z_geo_dim = z_geo_dim
        self.z_app_dim = z_app_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.network = nn.ModuleList([
            FiLMLayer(3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.final_layer = nn.Linear(hidden_dim, 1)
        # self.color_layer_sine = FiLMLayer(hidden_dim + 3, hidden_dim)
        self.color_layer_sine = nn.ModuleList([
            FiLMLayer(hidden_dim + 32 + 3, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
            FiLMLayer(hidden_dim, hidden_dim),
        ])
        self.color_layer_linear = nn.Sequential(nn.Linear(hidden_dim, 3))
        self.geo_mapping_network = CustomMappingNetwork(z_geo_dim, 256, len(self.network) * hidden_dim * 2)
        self.app_mapping_network = CustomMappingNetwork(z_app_dim, 256, len(self.color_layer_sine) * hidden_dim * 2)
        self.label_layer_linear = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Linear(hidden_dim, hidden_dim),
            nn.Linear(hidden_dim, self.output_dim - 4),
        )
        self.network.apply(frequency_init(25))
        self.final_layer.apply(frequency_init(25))
        self.color_layer_sine.apply(frequency_init(25))
        self.color_layer_linear.apply(frequency_init(25))
        self.label_layer_linear.apply(frequency_init(25))
        self.network[0].apply(modified_first_sine_init)
        self.spatial_embeddings = nn.Parameter(torch.randn(1, 32, 96, 96, 96) * 0.01)
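# The models above lean on several helpers defined elsewhere in the repository
# (FiLMLayer, CustomMappingNetwork, UniformBoxWarp, sample_from_3dgrid, and the
# init functions). For orientation, here is a minimal sketch of what the first
# three typically look like in pi-GAN-style code; this is an illustrative
# reconstruction, not necessarily this repository's exact definitions.

import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    """Linear layer followed by a sine activation whose frequency and phase are
    modulated per-sample (FiLM conditioning), as in pi-GAN."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.layer = nn.Linear(input_dim, hidden_dim)

    def forward(self, x, freq, phase_shift):
        x = self.layer(x)
        # Broadcast (batch, hidden) modulation over the points axis of (batch, n_points, hidden).
        freq = freq.unsqueeze(1).expand_as(x)
        phase_shift = phase_shift.unsqueeze(1).expand_as(x)
        return torch.sin(freq * x + phase_shift)

class CustomMappingNetwork(nn.Module):
    """MLP mapping a latent code z to concatenated (frequencies, phase_shifts)."""
    def __init__(self, z_dim, map_hidden_dim, map_output_dim):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(z_dim, map_hidden_dim), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(map_hidden_dim, map_hidden_dim), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(map_hidden_dim, map_output_dim))

    def forward(self, z):
        out = self.network(z)
        half = out.shape[-1] // 2
        return out[..., :half], out[..., half:]  # frequencies, phase_shifts

class UniformBoxWarp(nn.Module):
    """Rescale coordinates so a cube of the given side length maps to [-1, 1]^3."""
    def __init__(self, sidelength):
        super().__init__()
        self.scale_factor = 2 / sidelength

    def forward(self, coordinates):
        return coordinates * self.scale_factor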
computation shows that $\mathcal{E} = \omega_{\mathcal{S}}/(2 \pi q T)$ (Eq.~(\ref{SQE1a})) is indeed the same parameter appearing in Eqs.~(\ref{Azeta}) and (\ref{aads2}). In this AdS$_2$ computation, the scaling dimension $\Delta$ is related to the bulk spinor mass by
\begin{equation}
\Delta = \frac{1}{2} - \sqrt{m^2 R_2^2 - q^2 \mathcal{E}^2}. \label{nu}
\end{equation}

\subsection{Black hole thermodynamics}
\label{blackhole}

We close this section by noting a significant property of the above solution of classical general relativity at all $T$ and $\mathcal{Q}$. From the laws of black hole thermodynamics \cite{Bardeen73}, we deduce that the horizon area and the chemical potential must obey the thermodynamic Maxwell relation
\begin{equation}
\left( \frac{\partial \mathcal{S}_{\rm BH}}{\partial \mathcal{Q}} \right)_{T} = - \left( \frac{\partial \mu}{\partial T} \right)_{\mathcal{Q}},
\label{maxwell2}
\end{equation}
which is the analog of that in the fermion model computation in Eq.~(\ref{maxwell}). And indeed we do find from Eqs.~(\ref{liueqs}) and (\ref{Ah1}) that Eq.~(\ref{maxwell2}) is obeyed with
\begin{equation}
\left( \frac{\partial \mu}{\partial T} \right)_{\mathcal{Q}} = -\frac{4 \pi (d-1) g_F \Theta r_0^d}{ c_d (d+1) r_0^{2d} + c_d (d-1)(2d-1) \Theta^2}.
\label{maxwell3}
\end{equation}
In determining the value of $(\partial \mu /\partial T)_\mathcal{Q}$ as $T \rightarrow 0$, rather than explicitly evaluating Eq.~(\ref{maxwell3}), it is instructive to use a more general argument which does not rely on the explicit form of the solution in Eqs.~(\ref{fA}) and (\ref{liueqs}). From the original action in Eq.~(\ref{EM}) and the metric in Eq.~(\ref{metric1}), Gauss's law for the scalar potential in the bulk is
\begin{equation}
\frac{2 R^2}{\kappa^2 g_F^2} \frac{d}{dr}\left( \frac{r^d}{R^d} \frac{d A_t}{d r} \right) = 0 ,
\label{gauss1}
\end{equation}
and the constant of integration is the boundary charge density
\begin{equation}
\frac{2 R^2}{\kappa^2 g_F^2} \left( \frac{r^d}{R^d} \frac{d A_t}{d r} \right) = \mathcal{Q}.
\label{gauss2}
\end{equation}
We can write the solution of Eq.~(\ref{gauss2}) as
\begin{equation}
A_t (r) = \mu (T) - \left( \frac{R^{d-2} \kappa^2 g_F^2}{2 (d-1)} \right) \frac{\mathcal{Q}}{r^{d-1}},
\label{At}
\end{equation}
where the $r$-dependent term in Eq.~(\ref{At}) is independent of $T$ at fixed $\mathcal{Q}$, and the chemical potential $\mu$ equals $A_t (r \rightarrow \infty)$ when we choose $A_t=0$ on the horizon. Now we transform to the near-horizon AdS$_2$ geometry by making a $T$-independent change of variables from $r$ to $\zeta$ as in Eq.~(\ref{rzeta}), $r = r_{\ast} + 1/\zeta$, where $r=r_{\ast}$ is the position of the horizon at $T=0$; we will not need the actual value of $r_{\ast}$. Then Eq.~(\ref{At}) implies that, as $\zeta \rightarrow \infty$, the near-horizon scalar potential must be of the form in Eq.~(\ref{aads2}), where $\mathcal{E}$ is a parameter independent of $T$, $\zeta = \zeta_0$ now denotes the position of the horizon at non-zero $T$, and
\begin{equation}
\left( \frac{\partial \mu}{\partial T} \right)_{\mathcal{Q}} = \mathcal{E} \frac{\partial}{\partial T} \left( - \frac{1}{\zeta_0} \right)_{\mathcal{Q}}.
\end{equation}
The $T$-dependence of $\zeta_0$ in Eq.~(\ref{Tzeta}) follows from the conformal mapping between the $T=0$ AdS$_2$ metric in Eq.~(\ref{factor}) and the $T>0$ metric in Eq.~(\ref{aads2}) \cite{Faulkner11}.
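Making the last step explicit (a short derivation sketch, assuming the conformal mapping of Eq.~(\ref{Tzeta}) takes its standard form $\zeta_0 = 1/(2\pi T)$):
\begin{equation*}
-\frac{1}{\zeta_0} = -2\pi T \quad \Rightarrow \quad \left( \frac{\partial \mu}{\partial T} \right)_{\mathcal{Q}} = \mathcal{E}\, \frac{\partial}{\partial T}\left( -2\pi T \right) = -2\pi \mathcal{E}.
\end{equation*}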
So we find by this general argument that
\begin{equation}
\left( \frac{\partial \mu}{\partial T} \right)_{\mathcal{Q}} = - 2 \pi \mathcal{E} \quad, \quad T \rightarrow 0,
\label{muT2}
\end{equation}
which is the same as the fermion model result in Eq.~(\ref{muT1}). It can be verified that Eq.~(\ref{muT2}) also holds in the spherical geometry of Appendix~\ref{app:sphere}. Combining Eq.~(\ref{muT2}) with Eq.~(\ref{maxwell2}), we obtain Eq.~(\ref{SQE1b}), which is a special case of results obtained from the Wald formalism \cite{Sen05,Sen08,Wald93,Myers93,Wald94,Myers94,Shahin13}. We note that the above derivation of Eq.~(\ref{muT2}) relied only on Gauss's law and the conformal invariance of the AdS$_2$ near-horizon geometry: this implies that such results hold for a wide class of black hole solutions \cite{Sen05,Sen08,Wald93,Myers93,Wald94,Myers94,Shahin13}.

\section{Discussion}
\label{sec:conc}

In our discussion of the SY state of the infinite-range fermion model in Eq.~(\ref{H}), we noted that the fermion Green's function was almost completely determined by the emergent conformal and gauge invariances in Eq.~(\ref{eq:conf}). These conformal and gauge invariances also essentially uniquely determine the holographic theory of matter moving in curved space in the presence of an electric field. So, with the benefit of hindsight, we can understand the equivalence of the fermion Green's functions obtained in Sections~\ref{sec:inf} and~\ref{sec:holo}. However, we have gone beyond the identification of Green's functions, and have also shown that the zero temperature entropy of the SY state can be mapped onto that of the AdS$_2$ theory (see Fig.~\ref{fig:summary}). Specifically, we chose an appropriate combination of observables in Eqs.~(\ref{ws1},\ref{ws2}) that allowed us to define a common frequency $\omega_{\mathcal{S}}$ in general terms, and we showed that this frequency was related to precisely the same derivative of the entropy in both the SY state and in charged black holes (where the entropy was the Bekenstein-Hawking entropy). In both cases, establishing this relationship required an analysis of the details of the model, and it did not follow from general symmetry arguments alone. In particular, for the SY state, the entropy computation required a careful treatment of the manner in which the emergent gauge and conformal invariances, present at low energies, are broken by the on-site canonical fermions, present at high energies. This common relationship between $\omega_{\mathcal{S}}$ and the entropy indicates an equivalence between the low-energy degrees of freedom of the two theories in Sections~\ref{sec:inf} and~\ref{sec:holo}, and strongly supports the existence of a gravity dual of the SY state with an AdS$_2$ horizon. The present results also imply that the $c_i$ fermion, with $q=1$, of the theory in Eq.~(\ref{H}) is holographically dual to the $\psi$ fermion, with $q=1$, of Eq.~(\ref{Sads2}) \cite{Faulkner09,TFJP11}. As the microscopic $c_i$ fermion carries all of the $\mathcal{Q}$ charge of the theory in Eq.~(\ref{H}), we expect that $\psi$ also carries a non-negligible fraction of the charge (in the large $N$ limit) behind the AdS$_2$ horizon. Both models likely also have higher-dimension operators, but these have not been analyzed so far (see however Ref.~\onlinecite{AK15}). Note that the above discussion refers to the near-horizon AdS$_2$ geometry.
The larger Reissner-Nordstr\"om-AdS solution is to be regarded here as a convenient (and non-universal) embedding space which provides a UV regularization of the gravitational theory. With such an embedding, we are able to compute well-defined values for $\mathcal{S}$ and $\mathcal{Q}$. Presumably, other gravitational UV embeddings will have different `equations of state' between $\mathcal{E}$ and $\mathcal{Q}$, but they will nevertheless obey the fundamental relation in Eq.~(\ref{SQE1b}) provided they contain an AdS$_2$ horizon. We explicitly tested this independence of the UV embedding in Appendix~\ref{app:sphere} by comparing the cases of planar and spherical black holes. The above identification between the $c_i$ and $\psi$ fermions differs from that made previously by the author in Refs.~\onlinecite{SS10,SS10b}. There, $\psi$ was argued to be dual to a higher-dimension composite fermion operator of the original model of SY \cite{SY92}. This previous identification was based upon the requirement that local bulk operators must be dual to gauge-invariant operators on the boundary, and the original model \cite{SY92} had a microscopic gauge invariance which did not allow the choice of $c_i$ as dual to $\psi$. However, in the present model in Eq.~(\ref{H}), there is no microscopic gauge invariance, and so we are free to use $c_i$ as the dual of the bulk $\psi$ field. It turns out that the low-energy boundary theory for $c_i$ does have a gauge invariance (as in Eq.~(\ref{eq:conf})), but this is an emergent gauge invariance which is broken by the UV terms needed to regularize the theory. The present situation is analogous to the theory of the Ising-nematic quantum critical point in metals, where the regularized model for the electrons is not gauge-invariant, but the low-energy theory defined on two Fermi surface patches does have an emergent gauge structure \cite{MS10,MMLS10}. And the present situation is different from that in the `slave particle' theories of condensed matter, where the gauge structure emerges from fractionalizing particles into partons, which influenced the reasoning of Refs.~\onlinecite{SS10,SS10b}. Instead, the same particle can be gauge-invariant in the underlying theory, and acquire an emergent gauge charge in the low-energy theory. There is some similarity between this interpretation and ideas in Ref.~\onlinecite{Gubser11}. Finally, we note recent work \cite{Shenker13,Shenker15,AK15} on `a bound on chaos' which also related characteristic times of the real-time dynamics of strongly-coupled quantum systems to thermodynamics, $\hbar$, and black hole horizons.

\section*{Acknowledgments}

I thank T.~Banks, A.~Dabholkar, Wenbo Fu, S.~Hartnoll, A.~Kitaev, Hong Liu, J.~McGreevy, R.~Myers, A.~Sen, A.~Strominger, and W.~Witczak-Krempa for valuable discussions, and especially A.~Georges and O.~Parcollet for inspiring discussions on these topics over many years. This research was supported by the NSF under Grant DMR-1360789, and also partially by the Templeton Foundation. The research at KITP Santa Barbara was supported by the Simons Foundation and NSF Grant PHY11-25915. Research at Perimeter
a big thank you to the creators & everybody that contributes>>568863 0210b4 No.579202 https:// www.wikihow.com/Write-a-Bill-for-the-United-States-Congress 99af08 No.581563 File: 9f452a46567abcc⋯.png (554.34 KB, 1288x892, 322:223, Screen Shot 2018-03-07 at ….png) File: 3dcf8dc75f5429f⋯.png (349.98 KB, 1263x766, 1263:766, Screen Shot 2018-03-07 at ….png) File: 03b00bfecc6562c⋯.png (2.68 MB, 1000x700, 10:7, Trump_Lion_Transparent (1).png) Hey!!! I see that people care now about IBOR. Thanks for being here those of you who held the line and battled over the past 5+ weeks. Maybe people are starting to "Get it"?? God I hope!! b9fb60 No.581695 >>581563 only took them 35 days to figure out what i was going ballistic about from the get go. >these people are stupid applies to more than the enemy thanks for pushing it and taking the beating. i gave up and went back to figuring out who the actual pieces are on the MAP(chessboard) the anons dont listen to me about that either even though i have basically solved 1/2 of the shit since day one. 99af08 No.581809 File: 20bb1c6f7f5b095⋯.png (92.11 KB, 487x473, 487:473, Screen Shot 2018-02-13 at ….png) >>581695 It's nonsense. They are just as wrapped up in "THEIR" investigation and BS as my "liberal" friends are in "American Horror Story" and THAT shit. Thank YOU for holding the line. Some of us are figuring it out. Have you been to the Generation 2 board?? I was sorta "invited" the other day, and I was in there last night (Who needs sleep right??) and THEY have it going on. Again, I THANK YOU for your efforts. We'll get this. All of those posts at the top of the new bread should help. I KNOW that at least ONE BV had it out for me, and again, you KNOW the attacks we took. And NOW…LOOK what ! says…they FEAR it BADLY. Well, at least WE know the efforts we have put in. God is watching. DRIVE ON!!! b9fb60 No.581826 >>581809 not at gen2 /qlivesmatter/ is a board i set up in reserve after the fucking BV banned me for showing the illogic of 111 day mirror theory 99af08 No.581937 File: 03b00bfecc6562c⋯.png (2.68 MB, 1000x700, 10:7, Trump_Lion_Transparent (1).png) >>581826 I got banned as well one night in the middle of a SHIT SHOW about garbage for pushing IBOR. Our people know who we are. I'll check out that board, and let me get the link to Gen2. Maybe THIS is the kick in the ass The Anons NEED. Jesus…Q had to SPELL IT OUT for Us. So humiliating. /qresearch2gen/ b9fb60 No.581995 >>581937 bookmarked he spelled it out loud and clear multiple times people just refuse to obey this is not a game? bullshit. it has always been a game. a preplanned attack, with 100% chance of success the only thing that has ever been in question is the parameters of each move 8264e2 No.582116 >>253926 Who control the online petitions webpage to the White House? Are they removing the signatures? Are they giving people false hope? Is this a way for them (Deep State) to dampens the people voice, while (Deep State) pretending to be listening to the people concerns. 99af08 No.582153 >>582116 If the White House is comped, we're ALL dead. I doxxed myself on purpose in early February, and while it's gotten rough for me (Like Q said, "there's NOTHING (((they))) fear more") I'm still alive. This is WAR. "One man with courage makes a majority." Drive on. 04355e No.586172 >>349357 Easy Anons, did Q ask us to start a petition? Wonder why if so. Check this out, Snowden supported a Magna Carta (a kind of bill or rights) for the Internet as far back as 2014. 
I wouldn't worry too much about signatures/petition, I think we will get our internet freedom if the administration get Snowfagden to work on it. Let's hope he's been redpilled and is being brought back for just such a mission. https:// youtu.be/yVwAodrjZMY b9fb60 No.592267 File: f1026c36be22ae2⋯.jpg (384.81 KB, 939x743, 939:743, f1026c36be22ae20d63c861085….jpg) File: 163d13ad6841a05⋯.png (102.95 KB, 279x202, 279:202, ClipboardImage.png) >>581937 anon, this bud's for you complete and utter vindication of the hours of dedication. cheers! ba95a6 No.600779 >>599686 research the difference between INalienable and UNalienable. There is a very important difference. INalienable can be bought sold transferred or willingly given up. UNalienable means it exists and that's that you cant fuck with it. ba95a6 No.601188 Sorry if that's in the wrong spot, first time posting here. I'm not the type to say anything unless I have something worthwhile to say. Hey Q, after "we've won," are we going to examine the corporate document known as "The Constitution of the United States" and start asking questions about why before the civil war we had "The Constitution For These United States of America" - as any legal analyst will tell you, changing but a single word in the naming of a document winds up meaning that it is an entirely new document. Big implications and its a big issue, because it basically means that all "Amendments" to the "Constitution" past #12 would not have been made to the original true Constitution. This is why Sessions restated the assertion that a state cannot secede - it is based entirely upon there being a parallel Constitution and it is the straw man entities thereof that are spelled in all caps, that are part of the corporation of the united states (all caps) and the corporate bylaws prevent secession of a "member strawman." One thing at a time though, the traitors need to go first. Hail Anons, great fucking work here. 4b8ec2 No.602627 Queen Elizabeth may have no longer been Queen of the UK since the Maastricht Treaty… http:// gerardbattenmep.co.uk/wp-content/uploads/2015/06/Queen-and-Constitution-Aug-2012.pdf b84b45 No.624500 Umm IBOR stuck on 14,685 signed? cd7c24 No.626561 Total signatures right now 14,973 … 62a9d7 No.627883 >>621807 We went too deep. Attempted a pullback. Is this why posts about the video are missing? Or is it an error in the code at qanonposts.com ? 36fc2f No.634122 Networking to make this viral nationwide now…. 0cfe4b No.656313 File: 25292359b4ef9a9⋯.png (136.18 KB, 530x600, 53:60, ClipboardImage.png) New graphic to share out 99f87a No.688083 >>253926 It would help if whitehouse.gov had a link to the petitions.whitehouse.gov page. Petitions aren't even mentioned on whitehouse.gov. They should be listed in the "get involved" section. fcdcb4 No.701693 25,000 signed. Need 75,000 I believe by early April!!! I signed it sent to all FBI friends and twitter friends! https:// 8ch.net/qresearch/res/253926.html#q688083 fcdcb4 No.701714 https:// petitions.whitehouse.gov/petition/internet-bill-rights-2 fcdcb4 No.701955 Sign This Petition Needs 74,964 signatures by April 3, 2018 to get a response from the White House 25,036 SIGNED 100,000 GOAL 2cac44 No.703684 >>701955 2cac44 No.703695 i hope dc is the target…….f…those faggots 2cac44 No.703703 for usmc 2cac44 No.703715 this site is being attacked……..hugely ae88eb No.703797 Q tells us to listen carefully. He has been asking us. What is the keystone? At minute mark 2:20 the POTUS during his campaign tells us what the keystone is.
“The Clinton machine is at the center of this power structure” remove the center keystone in a structure and it will fall upon itself. The Clinton machine’s greatest weapon against us is MSM. “The MSM is no longer involved in journalism they are a political special interest”. We must make it Rain on these special interest. We must raise our voice of truth making their illusions obsolete. 13fb37 No.716971 Hey anons does anyone know if we have a Chemtrail petition going? We did it may have expired. Anyway I've gone onto the different "End Chemtrail" sites to get their help with the petition(s) including IBOR petition some of them got back to me wanting the a link to the chem petition and I have a flyer I made for the IBOR to give. But I can't find the chem petition b81629 No.717302 >>716971 I havent seen one - but for your petition - government acknowledged geo engineering via aerosols 2 yrs ago - >>Here is the published report https:// ehjournal.biomedcentral.com/articles/10.1186/s12940-016-0089-0 de5563 No.717800 File: 8aa98ce3b72f2b4⋯.jpg (445.2 KB, 1830x1565, 366:313, finally-thumbs-down.jpg) File: 61bd7bd893559f7⋯.jpg (263.78 KB, 1445x1012, 1445:1012, facebook11.jpg) 13fb37 No.720397 File: fbb4ecafa1c65e2⋯.png (956.1 KB, 1234x1912, 617:956, IMG_3107.PNG) File: 048b57bccfcb6ff⋯.png (512.36 KB, 1221x1612, 1221:1612, IMG_3111.PNG) >>717302 Thanks, I read a big portion of it, it is just someone's assessment on the subject, in my opinion. It isn't legislation and the Chemtrails continues. I started another last night, would really love the help:) I made these flyers I've been trying to get out to peeps. The Chemtrails petition won't show on the website until we have 150 signatures, here's that link https:// petitions.whitehouse.gov/petition/put-stop-chemtrails-and-cloud-seeding-cia-and-all-other-participants-involved-poisoning-our-home-earth f024f8 No.722955 File: 3c2f6232f2a1d82⋯.png (6.11 KB, 277x211, 277:211, Screenshot-2018-3-19 Inter….png) Needs 73,398 signatures by April 3 !!! https:// petitions.whitehouse.gov/petition/internet-bill-rights-2 c2c5a8 No.723871 Here is a petition to address illegal surveillance. If people could sign and share this: https:// 8ch.net/qresearch/res/253926.html#q688083 c2c5a8 No.723883 I meant this petition for illegal surveillance: https:// petitions.whitehouse.gov/petition/stop-human-experimentationblacklisting aa0489 No.733534 White House Petition to BUILD THE WALL d4bbef No.734470 File: c785aa0d0b0971b⋯.jpg
\section{Introduction} We assume that the reader is familiar with the basic concepts of graph theory. In particular, we use $V(G)$ and $E(G)$ to denote the vertex set and edge set, respectively, of the graph $G$. A {\it $k$-factor} of $G$ is a $k$-regular spanning subgraph of $G$. Thus a {\em 1-factor} of $G$ (also called a {\em perfect matching}) is a collection of independent edges whose end-vertices partition $V(G)$, and a {\em 2-factor} of $G$ is a collection of vertex-disjoint cycles in $G$ whose vertex sets partition $V(G)$. If the cycles in a given 2-factor all have the same length, we say that the 2-factor is {\em uniform}. We will use the notation $C_{\ell}$ to denote a cycle of length $\ell$, and refer to a uniform 2-factor whose cycles have length $\ell$ as a {\em $C_{\ell}$-factor}. If $G_1, G_2, \ldots, G_r$ are subgraphs of $G$ whose edge sets partition $E(G)$, then we speak of a {\em decomposition} of $G$ into its subgraphs $G_1, \ldots, G_r$, and write $G=G_1 \oplus G_2 \oplus \ldots \oplus G_r$. In particular, a {\em 2-factorization} of $G$ is a decomposition of $G$ into 2-factors. If $\mathcal{F}=\{F_1, \ldots, F_t\}$ is a set of 2-factors of $G$, then we refer to a 2-factorization in which every factor is isomorphic to an element of $\mathcal{F}$ as an $\mathcal{F}$-factorization. If $\mathcal{F}=\{F\}$, then we speak of an $F$-factorization; if, moreover, $F$ is a $C_{\ell}$-factor, then we refer to a $C_{\ell}$-factorization. We are particularly interested in 2-factorizations of $K_v$, the complete graph of order $v$. Note that if $v$ is even, then $K_v$ has no 2-factorization, as its vertices have odd valency. Thus we define $K_v^*$ to denote $K_v$ if $v$ is odd and $K_v-I$, the complete graph with the edges of a 1-factor $I$ removed, if $v$ is even. The question of whether $K_v^*$ admits a 2-factorization in which each 2-factor is isomorphic to $F$ is known as the {\em Oberwolfach Problem} $\mathrm{OP}(F)$, and has been the subject of much study. The Oberwolfach Problem has been solved in the case that $F$ is uniform~\cite{ASSW, Hoffman Schellenberg 91}, bipartite~\cite{BryantDanziger, Haggkvist 85} or contains exactly two components~\cite{Traetta 13}. The solution of the Oberwolfach Problem for uniform factors will be useful to us later, so we state it here for future reference. \begin{theorem}[\cite{ASSW, Hoffman Schellenberg 91}] \label{uniform OP} Let $v, \ell \geq 3$ be integers. There is a $C_{\ell}$-factorization of $K_v^*$ if and only if $\ell \mid v$ and $(v,\ell) \notin \{(6,3), (12,3)\}$. \end{theorem} Complete solutions to the Oberwolfach Problem are also known for certain infinite families of orders~\cite{ABHMS, Bryant Schar 09} and asymptotic solutions can be found in~\cite{DukesLing, GJKKO}; however, the problem is still open in general. The Oberwolfach Problem has been extended to finding 2-factorizations of regular graphs other than $K_v^*$, notably certain classes of lexicographic product. We use $G[n]$ to denote the lexicographic product of the graph $G$ with the empty graph on $n$ vertices, so that $V(G[n]) = V(G) \times \mathbb{Z}_n$, with $(u,x)(v,y) \in E(G[n])$ if and only if $uv \in E(G)$ and $x,y \in \mathbb{Z}_n$. Of particular note, $K_m[n]$ is the complete equipartite graph with $m$ parts of size $n$; the existence of uniform 2-factorizations of $K_m[n]$ was settled by Liu~\cite{Liu00, Liu03}. \begin{theorem}[\cite{Liu00,Liu03}] \label{liu} Let $\ell, m, n$ be positive integers with $\ell \geq 3$. 
There is a $C_{\ell}$-factorization of $K_{m}[n]$ if and only if the following conditions are all satisfied: \begin{enumerate} \item $\ell \mid mn$; \item $(m-1)n$ is even; \item if $m=2$, then $\ell$ is even; \item $(\ell,m,n) \notin \{(3,3,2), (3,6,2), (3,3,6), (6,2,6)\}$. \end{enumerate} \end{theorem} A related question is the \emph{Hamilton-Waterloo Problem} $\mathrm{HWP}(G;F_1,F_2;\alpha,\beta)$. Here, we seek a 2-factorization of the graph $G$ in which $\alpha$ 2-factors are isomorphic to $F_1$ and $\beta$ 2-factors are isomorphic to $F_2$. In the case that $G=K_v^*$, we denote this problem by $\mathrm{HWP}(v;F_1,F_2;\alpha,\beta)$, while if $F_1$ and $F_2$ are uniform 2-factors, say $F_1$ is a $C_m$-factor and $F_2$ is a $C_n$-factor, we use the notation $\mathrm{HWP}(G;m,n;\alpha,\beta)$. Thus, $\mathrm{HWP}(v;m,n;\alpha,\beta)$ asks whether $K_v^*$ has a 2-factorization into $\alpha$ $C_m$-factors and $\beta$ $C_n$-factors. We have the following obvious necessary conditions. \begin{theorem} \label{Necessary} Let $G$ be a $2r$-regular graph, and let $F_1$ and $F_2$ be 2-factors of $G$. If there is a solution to $\mathrm{HWP}(G;F_1, F_2; \alpha,\beta)$, then $\alpha, \beta \geq 0$ and $\alpha+\beta = r$. In particular, there can be a solution to $\mathrm{HWP}(G;m,n;\alpha,\beta)$ only if $m$ and $n$ both divide $|V(G)|$, $\alpha, \beta \geq 0$ and $\alpha+\beta = r$. \end{theorem} Note that when one of $\alpha$ or $\beta$ is 0, or when $m=n$, $\mathrm{HWP}(v;m,n;\alpha,\beta)$ is equivalent to an instance of the uniform Oberwolfach Problem, so we will generally assume that $\alpha$ and $\beta$ are positive. In addition, when considering $\mathrm{HWP}(G;m,n;\alpha,\beta)$, we will generally assume without loss of generality that $m < n$. The Hamilton-Waterloo Problem $\mathrm{HWP}(v;m,n;\alpha,\beta)$ has been the subject of much recent study; see, for instance, the following papers, which have all appeared since 2013~\cite{AsplundEtAl, BonviciniBuratti, BryantDanzigerDean, BurattiDanziger, BDT3, BDT2, BDT1, KeranenOzkan, KeranenPastine, MerolaTraetta, OdabasiOzkan, WangCao, WangChenCao, WangLuCao}. An asymptotic existence result is given in~\cite{GJKKO}. In the case that $m$, $n$ and $v$ are all odd, the current authors have solved $\mathrm{HWP}(v;m,n;\alpha,\beta)$ (recalling that $m < n$) except possibly if $\alpha=1$, $\beta \in \{1,3\}$ or $v=mn/\gcd(m,n)$~\cite{BDT2}. When $m$ and $n$ have opposite parities, less is known. The paper~\cite{BDT3} solves this problem when $m \mid n$, $v>6n>36m$ and $\beta \geq 3$; further results for cycle lengths of opposite parities can be found in~\cite{KeranenPastine2}. The case $(m,n)=(3,4)$ is completely solved~\cite{BonviciniBuratti, DanzigerQuattrocchiStevens, OdabasiOzkan, WangChenCao}. Other cases which have been considered include $(m,n) \in \{(3,v), (3,3s), (4,n), (8,n)\}$~\cite{AsplundEtAl, KeranenOzkan, LeiShen, OdabasiOzkan, WangCao}. In this paper, we consider the Hamilton-Waterloo Problem $\mathrm{HWP}(v;m,n;\alpha,\beta)$ for even $m$ and $n$. More generally, factorization into bipartite factors has been considered in~\cite{BryantDanziger, BryantDanzigerDean, Haggkvist 85}. \begin{theorem}[\cite{BryantDanziger, Haggkvist 85}] \label{known results} Let $v$ be a positive even integer and let $F_1$ and $F_2$ be bipartite 2-regular graphs of order $v$.
\begin{enumerate} \item If $v \equiv 0 \pmod{4}$, then there is a solution to $\mathrm{HWP}(v;F_1,F_2;\alpha,\beta)$ if and only if $\alpha+\beta= \frac{v-2}{2}$, except possibly if $\alpha=1$ or $\beta=1$. \item If $v \equiv 2 \pmod{4}$, then there is a solution to $\mathrm{HWP}(v;F_1,F_2;\alpha,\beta)$ whenever $\alpha+\beta= \frac{v-2}{2}$ and, in addition, $\alpha$ and $\beta$ are both even. \end{enumerate} \end{theorem} In fact,~\cite{BryantDanziger} actually proves a more general result. \begin{theorem}[\cite{BryantDanziger}]\label{BD} Let $\mathcal{F}=\{F_1, F_2, \ldots, F_t\}$ be a collection of bipartite 2-regular graphs of order $v$ and let $\alpha_1, \alpha_2, \ldots, \alpha_t$ be nonnegative integers satisfying $\alpha_1+\alpha_2+\ldots+\alpha_t = \frac{v-2}{2}$. If $\alpha_1 \geq 3$ is odd and $\alpha_i$ is even for each $i \in \{2,3,\ldots,t\}$, then $K_v$ admits an $\mathcal{F}$-factorization in which $\alpha_i$ factors are isomorphic to $F_i$, $i \in \{1,\ldots,t\}$. \end{theorem} Bryant, Danziger and Dean~\cite{BryantDanzigerDean} gave a complete solution to the Hamilton-Waterloo Problem with bipartite factors $F_1$ and $F_2$ in the case that $F_1$ is a {\em refinement} of $F_2$, i.e.\ $F_1$ can be obtained from $F_2$ by replacing each cycle of $F_2$ with a bipartite 2-regular graph on the same vertex set. \begin{theorem}[\cite{BryantDanzigerDean}] \label{refinement} Let $\alpha, \beta \geq 0$ and $v>0$ be integers with $v$ even, and let $F_1$ and $F_2$ be bipartite 2-regular graphs of order $v$ such that $F_1$ is a refinement of $F_2$. There is a solution to $\mathrm{HWP}(v;F_1,F_2;\alpha,\beta)$ if and only if $\alpha+\beta=\frac{v-2}{2}$. \end{theorem} Note that a $C_m$-factor is a refinement of a $C_n$-factor if and only if $m \mid n$. Thus, in the uniform case, Theorems~\ref{known results} and~\ref{refinement} yield the following: \begin{theorem}[\cite{BryantDanziger, BryantDanzigerDean, Haggkvist 85}] \label{known uniform} Let $v$ be a positive even integer, and let $n>m \geq 2$ and $\alpha, \beta \geq 0$ be integers. There is a solution to $\mathrm{HWP}(v;2m,2n;\alpha,\beta)$ if and only if $2m$ and $2n$ are both divisors of $v$ and $\alpha+\beta=\frac{v-2}{2}$, except possibly when \begin{enumerate} \item $v \equiv 0$ (mod 4), $m \nmid n$, and $1 \in \{\alpha, \beta\}$; \item $v \equiv 2$ (mod 4), $m \nmid n$, and $\alpha$ and $\beta$ are both odd. \end{enumerate} \end{theorem} In this paper, we improve upon these results for uniform bipartite factors. Since we assume the cycle
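As a quick sanity check of Theorem~\ref{liu} on a concrete instance (an illustrative example, not drawn from the cited papers): for $(\ell,m,n)=(8,6,4)$ we have $\ell \mid mn$ since $mn=24$, $(m-1)n=20$ is even, $m \neq 2$, and $(8,6,4)$ is not among the four exceptions, so $K_6[4]$ admits a $C_8$-factorization; by contrast, $(\ell,m,n)=(3,3,2)$ satisfies conditions (1)--(3) but appears in the exception list, so $K_3[2] \cong K_{2,2,2}$ has no $C_3$-factorization.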
\section{Introduction} Miniaturization of devices based on atomic vapor has set the stage for the development of various compact light-matter interfaces\,\cite{Wasilewski2010,Petelski2003}. However, to reach the high-precision performance required for use as atomic sensors\,\cite{Knape2005,Schaeffer2015}, integrated optical chips immersed in alkali vapors demand meticulous characterization. In this perspective, vapor nano-systems\,\cite{Sarkisyan2004,Sarkisyan2001,Low2018} have recently been used to clarify fundamental problems in optics such as the role of dipole interactions in resonant light scattering\,\cite{Peyrot2018,Low2018}, to control the coherent excitation of Rydberg atoms\,\cite{Pfau2010}, or to evidence the non-local response of an atomic gas\,\cite{PeyrotArxiv}. Another issue inherent to the system size reduction is the growing influence of the long-range atom-surface interaction, which triggered the development of new glass nano-cells\,\cite{Baluktsian2010,Whittaker2015,Whittaker2017}. Also, by combining a short-range repulsive and a long-range attractive potential, the possibility of trapping atoms in bound states has been predicted theoretically\,\cite{Lima2000}, an ability that could pave the way for hybrid nanoscale atom-surface meta-materials\,\cite{Whittaker2014}. \begin{figure}[b!] \includegraphics[width=\linewidth]{FIG1.pdf} \caption{(a) Photograph of the cell. The white-light color fringes reflect the thickness variations in the wedge of the nano-cell. (b) Schematic of the cell.} \label{fig:Fig1} \end{figure} In this article, we report on the fabrication of a glue-free all-glass nano-cell with a thickness varying linearly between (exactly) $0$ and $\sim1\,\mu$m, and with a surface roughness of $1${\AA} rms. Filled with Cesium (Cs), the cell was initially built for the interrogation of thin atomic vapor layers of alkali atoms and to investigate the hypothetical existence of an atom-surface bound state that would depend on the surface corrugation\,\cite{Vargas1998}. Developing surfaces with ultra-low roughness is also an asset to reduce the parasitic scattering that limits the signal-to-noise ratio of light-matter interaction with nanometric atomic ensembles. Besides, the super-polished surfaces presented here are ideal candidates for the fabrication of glass micro-cavities for QED experiments \cite{Barrett2011}. The techniques that we present may therefore have a large panel of applications and find an audience in domains as diverse as surface science, optics and atomic physics. We will first describe the cell fabrication process and how we filled the cell with Cs. We subsequently present the methods used to check the requirements in terms of surface roughness and cell thickness. Finally, we bring forward promising examples of spectra in transmission and off-axis fluorescence of the Cs D1 line. \section{Fabrication and characterization} \label{sec:Fabrication and characterization} The cell was fabricated at Laboratoire Charles Fabry of Institut d'Optique (Palaiseau, France) and filled with a cesium vapor at the Laboratoire GEPI of Observatoire de Paris (Paris, France). It is made of four parts that are assembled by optical contact bonding, leading to a monolithic ensemble (see Fig.~\ref{fig:Fig1}a). Using a single material for all parts avoids differential thermal expansion that could damage the optical bonding.
Borofloat glass was chosen for its good optical properties in the visible and near-infrared spectrum, its low cost, and the ease with which it can be super-polished in comparison to other materials used in previous cells (sapphire, for instance). This glass was also chosen to facilitate the sealing to the Pyrex side arm that will contain the Cs reservoir, as the thermal expansion coefficients and the softening temperatures of both glasses are similar. However, unlike sapphire, and similar to fused silica, it reacts with alkali metals at temperatures exceeding $\sim 200^\circ$C. The fabricated cell is therefore a good candidate for operation at moderate temperatures. The central part (A) (see Fig.~\ref{fig:Fig1}b) is machined using boring-bits so as to let a $6$\,mm external diameter, $4$\,mm internal diameter and $25$\,mm long tube protrude from the front face to allow for connection to a Pyrex loading manifold containing the Cs reservoir. Glass is then removed from the inside of part (A) using milling-bits. In this hollow piece, a thick plate (B), carefully angled on one side, is introduced. The cell is closed by two thick plates (C$_L$) and (C$_R$), wedged by $1^{\circ}$ to suppress unwanted reflections. Plate (B) is optically contacted on one side to plate (C$_R$). On the other side, the gap formed by the angle between plate (B) and plate (C$_L$) forms the nano-enclosure, and its realization is therefore crucial. The two main manufacturing difficulties that we will detail are: (i) the control of the nano-gap thickness, and (ii) the polishing of the different surfaces to reach excellent surface roughness. The thin wedge of plate (B) is realized by polishing iterations, with checks of the wedge thickness and flatness between two polishing steps. This is done interferometrically, using a He-Ne laser and a Fizeau interferometer. The wedge is realized such that $4$ fringes appear in the interferogram, parallel to the $y$ axis (see Fig.~\ref{fig:Fig1}) and with equal interfringe spacing. Since each fringe corresponds to a thickness change of half the He-Ne wavelength ($\lambda/2 \approx 316$\,nm), this corresponds to a thickness variation of $\sim 1.2\,\mu$m. Also, the absolute thickness is controlled such that, along its thickest edge, the plate thickness exceeds the thickness of plate (A) by $\sim 300$\,nm (see Fig.~\ref{fig:Fig2}a), forming a ridge that will eventually be removed. This is controlled mechanically, using an electronic depth gauge with a resolution of $100$\,nm. Parts (A) and (B) are then optically contacted on a parallel plate (P$_1$) and polished simultaneously so as to remove the $300$\,nm thick ridge and bring parts (A) and (B) to equal height for closing purposes. This height equalization is controlled interferometrically, using a flat etalon (P$_2$) as shown in Fig.~\ref{fig:Fig2}. This procedure ensures that the final assembly brings the closing plate (C$_L$) in optical contact with both parts (A) and (B). The wedge thickness therefore varies between (exactly) $0$ and about $900$\,nm. \begin{figure} \includegraphics[width=\columnwidth]{FIG2.pdf} \caption{Procedure to realize a thin wedge with thickness varying between (exactly) $0$ and $900$\,nm. Parts (A) and (B) are optically contacted on a parallel plate (P$_1$). Left: the thickness of part (B) first exceeds that of part (A) by $300$\,nm. The flat etalon (P$_2$) therefore sits on the ridge of part (B), leading to interference fringes (between part (A) and the flat etalon) with different white-light colours on each side of the ridge.
Right: after flattening the ridge and equalizing the heights of parts (A) and (B), the flat etalon sits equally on parts (A) and (B), leading to fringes of equal colour.} \label{fig:Fig2} \end{figure} To realize surfaces with very low roughness, such as those inside the thin wedge, we first grind them finely using alumina abrasives and a brass grinding wheel. The surfaces are then manually polished on a pitched wheel using an aqueous solution of rare-earth-oxide abrasives with fine particle size ($<1\,\mu$m). The final polish is performed with an increasingly diluted solution, leading to a super-polish with a surface roughness of $1${\AA} rms or less. Figure~\ref{fig:Fig3} shows a typical roughness profile, measured using an optical heterodyne profiler (ZYGO 5500) with a sensitivity of $0.2${\AA} rms~\cite{Sommargren1981}. The spatial resolution of the profiler is $1\,\mu$m, on the order of the distance travelled by the atomic dipoles of the vapor before they reach steady state (due to collisions inside the gas or radiative decay). To investigate atom-surface interactions over shorter travelling distances and account for the transient response of the vapor to finer details of the surface~\cite{PeyrotArxiv}, we characterize the surface on a smaller scale. To do so, we also acquired images of the super-polished surfaces using an atomic force microscope (AFM) in tapping mode, equipped with an n-type silicon tip~\cite{AFM_model}. The spatial resolution of the AFM imaging is given by the radius of the tip, typically $10$\,nm or less. Figure~\ref{fig:Fig3} compares typical results obtained for a super-polished surface and a surface with a standard polish, for which the abrasives are rougher and the polishing procedure was stopped before the final dilution step. The roughness is lower for the super-polish than for the standard polish, as expected, and it decreases at coarser spatial resolution, as surface height fluctuations are better averaged~\cite{Polishing}. \begin{figure} \includegraphics[width=\columnwidth]{FIG3.pdf} \caption{(Top) AFM imaging of (a) a standard polish, and (b) a super-polished surface of our cell. (Bottom) Roughness profiles acquired with (c) the AFM at a spatial resolution of $20$\,nm and (d) the optical heterodyne profiler (spatial resolution: $1\,\mu$m). Red (blue) traces correspond respectively to the super-polish (standard polish)
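For reference (a standard definition, added here for completeness), the rms roughness values quoted above correspond to
\begin{equation*}
\sigma_{\rm rms} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( h_i - \bar{h} \right)^2 },
\end{equation*}
where $h_i$ are the $N$ sampled surface heights of the measured profile and $\bar{h}$ is their mean.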
certificate support & \cmark & \cmark \\ & QC\_SEC\_2 & SHA-2 certificate support & \cmark & \cmark \\ & QC\_SEC\_3 & RFC proxy support & \xmark & \cmark \\ & QC\_SEC\_4 & ARGUS auth integration & \xmark & \cmark \\ & QC\_SEC\_5 & World writable files & \cmark & \cmark \\ & QC\_SEC\_6 & Passwords in world readable files & \cmark & \cmark \\ \hline \multirow{ 3}{*}{Information Model} & QC\_INFO\_1 & GLUE schema 1.3 support & \xmark & \cmark \\ & QC\_INFO\_2 & GLUE schema 2.0 support & \cmark & \cmark \\ & QC\_INFO\_3 & Middleware version & \xmark & \cmark \\ \hline \multirow{ 2}{*}{Operations} & QC\_MON\_1 & Service probes & \xmark & \cmark \\ & QC\_ACC\_1 & Accounting records & \cmark & \cmark \\ \hline \multirow{ 1}{*}{Support} & QC\_SUPPORT\_1 & Bug tracking system & \cmark & \cmark \\ \hline \multirow{ 2}{*}{Specific QC} & QC\_FUNC\_1 & Basic functionality test & \cmark & \cmark \\ & QC\_FUNC\_2 & New feature/bug fixes test & \xmark & \cmark \\ \hline \end{tabular*} \caption{Quality Criteria (QC) requirements.} \label{table:qc} \end{table*} \subsection{Automation assessment of the EGI Quality Criteria requirements} \label{sec:automation:qc} As introduced earlier, the Quality Criteria (QC) document drives the validation of software products within the SWPP workflow. It defines the quality requirements that a given product has to fulfill in order to be considered ready for the subsequent \textit{staged rollout} phase. The document is continuously evolving and is currently in its 7th release \cite{egi-qc-web}. Table \ref{table:qc} lists the quality requirements, their associated criticality and the possibilities for automation. The requirements cover the minimum criteria for EGI acceptance, and are grouped into seven broad categories: i) \textit{documentation}; ii) \textit{installation}, covering the full deployment of the product; iii) \textit{security}; iv) \textit{information model}, which validates the outbound data published by the information service; v) \textit{operations}, which groups probes related to the EGI e-Infrastructure; vi) \textit{support} channels; and vii) \textit{other specific criteria}, useful to extend the functionality and integration testing coverage. As depicted in the table, the only requirements that need human interaction are those related to the analysis of the documentation (\texttt{QC\_DOC\_x}): one could programmatically check the existence of the required documentation, but not the suitability of its content. Nevertheless, the \texttt{QC\_DOC\_x} requirements seldom involve major changes --only when products are included for the first time--, commonly appearing as minimal improvements when it comes to software updates. Once the requirements suitable for automation are identified and defined, the process to tackle them has to be implemented. From the requirement list, the deployment- and testing-related tasks are the most complex and as such will be covered thoroughly in the next sections. \section{Related work} \label{section:related-work} Free and open-source operating systems, such as Linux distributions, rely on packages to distribute software. Packages are archives containing the binaries, configuration files and dependency information, accessible through online repositories.
Software packages come in different formats tied to specific Linux distributions, although there are recent solutions that containerise software applications, bundling their dependencies, to make them installable across all major Linux distributions \cite{AppImage,snap,flatpak}. Most quality-aware distributions have quality-control policies for package creation \cite{debian-policy} and dependency resolution \cite{debian-piuparts}. As in Linux operating systems, the software distributed through UMD and CMD releases comes in the form of packages, which also pass through a quality control process. As the latter are lighter distributions, they can afford to go a step further in the software validation, imposing deployment and testing requirements. Software validation is the process that checks that the software satisfies its intended use, in conformance with the requirements coming from the end users. Tightly related to and complemented by the software verification process, the two together address \textit{"all software life cycle processes including acquisition, supply, development, operation and maintenance"}, as defined in the IEEE Standard for Software Verification and Validation (V\&V) \cite{IEEE-VV-standard}. V\&V are commonplace concepts in the software engineering literature, but the terms are often used interchangeably in practice \cite{ryan2017use}. Indeed, the two processes serve different purposes: verification is linked to the early stages of the software development life cycle, focusing on building the software correctly, while validation is commonly placed at the end of the development process, providing \textit{"evidence that the software and its associated products satisfy system requirements allocated to software at the end of each life cycle, solve the right problem, and satisfy intended use and user needs"}. The V\&V distinction is consistent with major systems engineering processes for software development, such as the Capability Maturity Model Integration \cite{CMMI-development-2010,CMMI-services-2010,CMMI-acquisition-2010}, organized in maturity levels, where software V\&V practices are addressed at the higher levels of the process \cite{Monteiro2009}. A practical way to put V\&V into action is to refer to the type of testing associated with each process. Software verification implies the static analysis of the source code, requirements and design documents for defect detection via inspections, walkthroughs and reviews \cite{german2003software}. Conversely, software validation requires the software to be in operation mode to be tested, so it is identified with the dynamic behaviour of the source code. There are different test-case design methodologies to tackle the dynamic analysis of a software component, but all fall into the category of so-called \textit{black-box testing}. In this type of testing the test cases are data- or input/output-driven, as the internal structure of the software is not of interest at this stage. In this regard, Myers et al. \cite{Myers2012} group under the term \textit{higher-order testing} the black-box testing methods --function, system, installation, integration, acceptance-- that aim to detect defects, from the user's perspective, by categorizing the test cases to which the software shall be exposed. The outcome is a set of quality criteria that guide the software validation. The ultimate goal of software validation is to increase the reliability of the systems being delivered to the users.
Nevertheless, in software validation the economics of testing must be carefully considered. On the one hand, inadequate investment may imply solving defects at later stages. Quoting from Perry's book \cite{perry2007effective}, \textit{"it is at least 10 times as costly to correct an error after coding as before, and 100 times as costly to correct a production error"}. On the other hand, an excessive effort may lead to increased project costs \cite{kit1995software}, not estimated in the project design, and to delays in the release dates \cite{huang2005optimal}. Therefore, measuring the cost-effectiveness of the testing process does not only imply stopping at the optimum point where the cost of testing does not exceed the value obtained from the defects uncovered, but also focusing on the valuable features first within the appropriate testing phase in the life cycle \cite{bullock2000calculating}.

Test automation is gaining momentum as a way to decrease the costs and time associated with software testing tasks. Process efficiency improves as automation reduces the execution time of testing, maximizing the test coverage since more testing can be performed in less time \cite{saglietti2010automated}. The increased test coverage strengthens the quality and reliability of the end product, reducing the number of defects present. Automation also increases the overall effectiveness, avoiding the risk of human error and achieving repeatability. This is particularly useful to reduce the regression risk by finding defects in the modified, but previously working, functionalities of the system \cite{dustin1999automated}. However, test automation does not always supersede manual testing. According to a number of studies \cite{rafi2012benefits,wiklund2017impediments,taipale2011trade}, not all testing tasks can be easily automated --for instance, those requiring extensive knowledge of a specific domain-- and automated suites may require costly maintenance. In some cases, manual testing can complement automation since, owing to its unstructured nature, it can expose unexpected defects not considered in the previous stages of the software life cycle.

\begin{figure*}[t] \centerline{\includegraphics[scale=0.5]{umd_verification_flowchart.png}} \caption{Product validation workflow in \texttt{umd-verification}. \label{fig_umd_verification_flowchart}} \end{figure*}

\section{\texttt{umd-verification}: an automated tool for the software validation process} \label{section:implementation}

In order to automate the software validation process within EGI, the essential component would be a general-purpose tool to manage the QC execution for each product validation. This tool would execute the appropriate tasks for each requirement analysis and eventually evaluate the obtained outputs to judge whether the given requirement has been fulfilled, allowing the process to stop depending on the requirement's criticality.

\subsection{Design considerations} \subsubsection*{Infrastructure as Code deployments}

With the advent of Infrastructure as Code (IaC) tools, the automated maintenance and provision of services in an infrastructure is powered through a series of definition files, which enforce the desired configuration of such services. Applying the IaC model to drive the deployment part of the SWPP
# TrainKerasTensorFlowForTaggedSounds - Loads a set of wav files and their tags,
# analyzes them using Mel-Frequency Cepstral Coefficients and related derivatives,
# splits the wav set into training and validation sets, and trains a Keras/TensorFlow
# machine learning model to recognize the sounds.
#
# The resulting model can be used to scan a stream of similarly-calculated MFCC
# rows from a longer or continuous wav stream, performing recognition within
# each overlapped window of MFCCs, predicting whether each instrument is at the
# start of that window.
#
# Usage:
#   python TrainKerasTensorFlowForTaggedSounds.py <sound-directory> [<results-file-path>]
#
# sound-directory: Path to folder where wav files and TaggedSoundData.json reside.
# results-file-path: Optional path to output log file. Defaults to 'results.out' in the current directory.

import collections
from datetime import datetime
import glob
from InstrumentLoader import InstrumentLoader, mfccMaxRangeHz, wavMinAllowedHz
from keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.models import Sequential
import keras.optimizers
import keras.utils
from MfccWavLoader import MfccWavLoader
import os
import numpy
import random
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
from SoundModelParams import SoundModelParams
from SoundTagJsonReader import SoundTagJsonReader
from StayAwake import preventComputerFromSleeping
import sys
from TrainAndValidateModel import TrainAndValidateModel

if len(sys.argv) < 2:
    print('First param must be a directory containing wav files and ' + SoundTagJsonReader.fileName)
    exit(1)
samplesDirPath = sys.argv[1]

# Avoid loss of experimental results during long runs.
resultFileName = "results.out"
if len(sys.argv) >= 3:
    # The optional second argument (argv[2], per the usage above) overrides the default log path.
    resultFileName = sys.argv[2]
resultFile = open(resultFileName, "w")

# Tee specific outputs to both the result file and stdout for safekeeping.
def Log(*objects):
    print(*objects)
    print(*objects, file=resultFile)

startDateTime = datetime.now()
Log("Start:", startDateTime)

# Keras / TensorFlow model: A Time-Delay Neural Network in the same vein as those used
# for finding speech phonemes in a stream of Mel-Frequency Cepstral Coefficients
# (along with other information like the first and second derivatives of the MFCCs
# over one or more MFCC window widths). https://en.wikipedia.org/wiki/Time_delay_neural_network
# We treat all of the tagged wav samples as potential phonemes in a large "alphabet."
#
# We model a TDNN as a Convolutional Neural Network using Temporal Pooling.
# https://hal.archives-ouvertes.fr/file/index/docid/287426/filename/A_convolutional_neural_network_approach_for_objective_video_quality_assessment_completefinal_manuscript.pdf
# http://www.cs.toronto.edu/%7Efritz/absps/waibelTDNN.pdf
# http://isl.anthropomatik.kit.edu/cmu-kit/downloads/CP_1991_Review_of_TDNN_(Time-Delay_Neural_Network)_Architectures_for_Speech_Recognition.pdf
# https://www.microsoft.com/en-us/research/wp-content/uploads/2017/08/ms_swbd17-2.pdf
# https://d1ge0kk1l5kms0.cloudfront.net/images/G/01/amazon.jobs/Interspeech_2017_4._CB503635227_.pdf
# https://github.com/carpedm20/lstm-char-cnn-tensorflow/blob/master/models/TDNN.py
#
# MFCCs and derivatives (M = # of coefficients) are computed at intervals (T) and a set of
# MFCCs are selected into a frame (W = number of MFCC rows covering (W * T) time span).
#
# Per TDNN patterns, all the W * M coefficients in a frame are fully connected to a row of
# perceptrons with a chosen activation function. We use N-neuron rows where the time span
# covers the longest comparison sample (instrument) we have; hence N = ceil(maxSampleSpan / T) - W + 1.
#
# The resulting N rows of perceptrons are themselves framed in sets of V rows and fully connected
# each to a row of I perceptrons where I is the number of different "instruments" we are trying
# to detect. Number of rows R = N - V + 1
#
# The resulting R rows are then selected into I columns with each column fully connected to an
# output neuron corresponding to the probability of that instrument.
#
# |M|M|   |
# |F|F|   |
# |C|C|   |   M coefficients per row
# |C|C|   |
# |s|s|   |
# |1|2|3|4|5|6|... |
# ---W---
#   ---W---
#
# |   conv1 layer of varying kernel shapes and filter counts
# V
#
# +-+-+-+-+   +-+-+
# | | | | |   | | |
# | | | | |...| | |   Reduced sets of rows, one version per conv filter
# | | | | |   | | |
# +-+-+-+-+   +-+-+
# ---V---
#   ---V---
#
# |   conv2 layer of varying kernel shapes and filter counts
# V
#
# +-+-+-+-+   +-+
# | | | | |   | |
# | | | | |...| |
# +-+-+-+-+   +-+
# ---------------
#        |
#        V
#
# ********   Fully connected layer
#        |
#        V
#
#   ABC      Per-instrument classification neurons (outputs)
#
# We vary the following parameters to find the best accuracy.
# - MFCC computation time intervals: 5ms, 10ms, 20ms, 25ms (matching various intervals in the papers above)
# - MFCC intervals in a sliding window presented to the neural network: 3, 4, 5
#   (i.e. 15ms up to 125ms when multiplied by the time intervals).
# - Training batch size

instruments = InstrumentLoader(samplesDirPath, [])
Log("Max, min MFCC rows across all instruments: ", instruments.maxMfccRows, instruments.minMfccRows)
Log("Number of instruments by length in MFCC rows:")
for k, v in sorted(instruments.mfccLenToSamplesMap.items()):
    suffix = ''
    if len(v) == 1:
        suffix = '(' + os.path.basename(v[0].wavPath) + ')'
    Log("  ", k, ": ", len(v), suffix)

if instruments.minWavHz < wavMinAllowedHz:
    print("ERROR: One or more wav files found with rate in Hz less than configured minimum. Min found:",
          instruments.minWavHz, " allowed min:", wavMinAllowedHz)
    exit(1)

# Zero-pad all sounds to the max number of rows. Assumes layout of (rows, cols, channels) where channels
# can be just the MFCCs (dimension height of 1) or the MFCCs plus its derivatives (dimension height of 2 or more).
# TODO: Or do we create multiple TDNNs trained at each row length?
numMfccLayers = 1
numMfccColumns = 12
def zeroPad(mfccLayers):
    shape = numpy.shape(mfccLayers)
    numMfccRows = shape[0]
    numMfccColumns = shape[1]
    numMfccLayers = shape[2]
    if (numMfccRows < instruments.maxMfccRows):
        mfccLayers = numpy.append(
            mfccLayers,
            numpy.zeros(((instruments.maxMfccRows - numMfccRows), numMfccColumns, numMfccLayers)),
            axis=0)
    return (mfccLayers, numMfccLayers, numMfccColumns)

allInstrumentMfccData = []
for i in range(len(instruments.allInstrumentMfccWavs)):
    zeroPaddedMfccData, numMfccLayers, numMfccColumns = zeroPad(instruments.allInstrumentMfccWavs[i].fullFeatureArray)
    allInstrumentMfccData.append(zeroPaddedMfccData)

# Binarize the labels: Convert to a 1-hot array from text labels/tags.
# Text labels for each array position are in the classes_ list on the binarizer.
labelBinarizer = MultiLabelBinarizer()
oneHotLabels = labelBinarizer.fit_transform(instruments.allInstrumentLabels)
numInstruments = oneHotLabels.shape[1]
Log("Num instruments:", numInstruments, ":", labelBinarizer.classes_)

soundModelParams = SoundModelParams(instruments.maxMfccRows, labelBinarizer.classes_.tolist())

# Partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing.
# Fixed random seed for repeatability.
(instrumentMfccData, testInstrumentMfccData, instrumentOneHotLabels, testInstrumentOneHotLabels) = train_test_split(
    allInstrumentMfccData, oneHotLabels, test_size=0.2, random_state=42)

# Reformat the resulting lists of training and test data into a 4D tensor
# required by the Conv2D Keras layers. This is "channels_last" format,
# (batch, height, width, channels). channels is the number of MFCC layers (just the coefficients
# or the coefficients and their derivatives), width is the number of
# MFCC columns, height is the number of rows, batch is the total set of
# training or test.
mfccTensors = numpy.stack(instrumentMfccData)
print("Training tensor shape:", numpy.shape(mfccTensors))
testMfccTensors = numpy.stack(testInstrumentMfccData)
print("Testing tensor shape:", numpy.shape(testMfccTensors))

# For the first convolutional layer, the number of convolutional filters
# that are trained to find patterns amongst the input MFCCs.
# Experimentation shows that high numbers of these produce the best results.
numConv1FiltersValues = [ numInstruments * 32 ]

# For the first convolutional layer, the size of the kernel that implies the size of the filters.
# Other values are valid but 5x5 seems pretty good based on experimentation.
conv1KernelSizeValues = [ 5 ]
#conv1KernelSizeValues = [ 3, 5, 7, (3,5), (5,3), (2, numMfccColumns), (3, numMfccColumns), (4, numMfccColumns) ]

# For the second convolutional layer, the number of convolutional filters
# that are trained to find patterns amongst the results of the first conv layer.
# A tip at https://www.pyimagesearch.com/2018/04/16/keras-and-convolutional-neural-networks-cnns/
# recommends more filters here than in the conv1 layer.
# Experimentation however showed good results with a higher number of conv1 filters and about half as many for conv2.
numConv2FiltersValues = [ numInstruments * 16 ]

# For the second convolutional layer, the size of the kernel that implies the size of the filters.
# Experimentation showed 5x5 and 3x6 (3 rows by whole width) having good results. We keep 5x5 for now.
conv2KernelSizeValues = [ 5 ]
#conv2KernelSizeValues = [ 3, 5, (3,5), (5,3), (3,6), (5,6), (7,6) ]

# Other values can be more optimal but setting this value based on experimentation.
numFullyConnectedPerceptronsLastLayerValues = [ numInstruments * 16 ]
#numFullyConnectedPerceptronsLastLayerValues = [ numInstruments * 2, numInstruments * 3, numInstruments * 4, numInstruments * 8, numInstruments * 16, numInstruments * 32, numInstruments * 64, numInstruments * 128 ]

# Settings here based on experiments (see results\ directory).
conv1DropoutValues = [ 0 ]
conv2DropoutValues = [ 0.25 ]
fullyConnectedDropoutValues = [ 0.5 ]

results = []
saveModelToPath = "./Model.h5"

preventComputerFromSleeping(True)
try:
    for numConv1Filters in numConv1FiltersValues:
        for conv1KernelSize in conv1KernelSizeValues:
            for numConv2Filters in numConv2FiltersValues:
                for conv2KernelSize in conv2KernelSizeValues:
                    for numFullyConnectedPerceptronsLastLayer in numFullyConnectedPerceptronsLastLayerValues:
                        for conv1Dropout in conv1DropoutValues:
                            for conv2Dropout in conv2DropoutValues:
                                for fullyConnectedDropout in fullyConnectedDropoutValues:
                                    # The source is truncated at this call; the argument list below is an
                                    # assumed reconstruction from the loop variables and data splits
                                    # (TrainAndValidateModel is imported above, but its signature is not
                                    # shown in this file).
                                    result = TrainAndValidateModel(
                                        mfccTensors, instrumentOneHotLabels,
                                        testMfccTensors, testInstrumentOneHotLabels,
                                        numConv1Filters, conv1KernelSize,
                                        numConv2Filters, conv2KernelSize,
                                        numFullyConnectedPerceptronsLastLayer,
                                        conv1Dropout, conv2Dropout, fullyConnectedDropout)
                                    results.append(result)
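# Design note (a sketch, not from the original script): the eight nested loops
# above implement a grid search by hand. itertools.product yields the same
# parameter combinations in a single loop, which keeps the indentation flat
# when hyperparameters are added or removed:
#
#   import itertools
#   for (numConv1Filters, conv1KernelSize, numConv2Filters, conv2KernelSize,
#        numFullyConnectedPerceptronsLastLayer, conv1Dropout, conv2Dropout,
#        fullyConnectedDropout) in itertools.product(
#            numConv1FiltersValues, conv1KernelSizeValues,
#            numConv2FiltersValues, conv2KernelSizeValues,
#            numFullyConnectedPerceptronsLastLayerValues,
#            conv1DropoutValues, conv2DropoutValues, fullyConnectedDropoutValues):
#       ...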
\cite{Wang-Yau2}. \begin{proposition}\label{conservation} For any surface $\Sigma$ in the reference spacetime, we have the following conservation law: \[ \int \Omega \widehat H d \widehat \Sigma = \int \left[ - \Omega\sqrt{1+\Omega^2| \nabla \tau|^2} \langle H_0, \breve e_3 \rangle - \Omega^2 \langle D_{\nabla \tau} \breve e_3 , \breve e_4 \rangle \right] d \Sigma. \] \end{proposition} \begin{proof} Multiply \eqref{relation_mean_curvature} by $\Omega$ and integrate over $\Sigma$. By \eqref{relation_lower_metric}, the two area forms satisfy \begin{equation}\label{area_form} d \widehat \Sigma = \sqrt{1+\Omega^2| \nabla \tau|^2} d \Sigma . \end{equation} \end{proof} To define the quasi-local energy, the right hand side of the conservation law is rewritten in terms of the mean curvature gauge in the following proposition. \begin{proposition} \label{total_mean_mean_gauge} In terms of the connection one-form in mean curvature gauge $\alpha_{H_0}$, the conservation law in Proposition \ref{conservation} reads \[ \begin{split} & \int \Omega \widehat H d \widehat \Sigma = \int \Big [ \sqrt{(1+\Omega^2| \nabla \tau|^2) |H_0|^2 \Omega^2 + div(\Omega^2 \nabla \tau)^2 } + div(\Omega^2 \nabla \tau) \theta - \alpha_{H_0} (\Omega^2 \nabla \tau) \Big ] d \Sigma, \end{split} \] where \begin{equation}\label{gauge_angle} \theta = - \sinh^{-1} \frac{ div(\Omega^2 \nabla \tau) }{|H_0|\Omega \sqrt{1+\Omega^2| \nabla \tau|^2} }. \end{equation} \end{proposition} \begin{proof} Let $\theta$ be the angle between the oriented frames $\{ -\frac{H_0}{|H_0|}, \frac{J_0}{|H_0|} \}$ and $\{ \breve e_3,\breve e_4\}$, i.e. \begin{equation} \begin{split} \label{gauge} - \frac{H_0}{|H_0|} = & \cosh \theta \breve e_3 + \sinh \theta \breve e_4 \\ \frac{J_0}{|H_0|} = & \sinh \theta \breve e_3 + \cosh \theta \breve e_4. \\ \end{split} \end{equation} In particular, we have \begin{equation} \label{gauge_change} \langle H_0 , \breve e_4 \rangle = |H_0| \sinh \theta, \quad - \langle H_0 , \breve e_3 \rangle = |H_0| \cosh \theta, \text{ \,\, and \,\,} \alpha_{H_0} = \alpha_{\breve e_3} + d \theta .\end{equation} To compute $\langle H_0 , \breve e_4 \rangle$, we start with $\langle D_{e_a} \frac{\partial}{\partial t} , e_a \rangle =0$ and then use \eqref{decompose_t} to derive \[ \Omega \sqrt{1+\Omega^2| \nabla \tau|^2} \langle D_{e_a} \breve e_4, e_a \rangle = \langle D_{e_a} \Omega^2 \nabla \tau , e_a \rangle. \] The right hand side is precisely $ div(\Omega^2 \nabla \tau)$. As a result, \[ - \langle H_0 , \breve e_4 \rangle = \frac{ div(\Omega^2 \nabla \tau) }{\Omega \sqrt{1+\Omega^2| \nabla \tau|^2} }\] and $\theta$ is given by \eqref{gauge_angle}. The proposition now follows from a direct computation. \end{proof} \section{Definition of the Quasi-local energy} Now we consider a surface $\Sigma$ in a general spacetime $N$. As in \cite{Wang-Yau2,Wang-Yau3}, a quasi-local energy is assigned to each pair of an isometric embedding $X$ of $\Sigma$ into the reference spacetime, and an observer $T_0$ (a future timelike Killing field). Isometric embeddings into the dS spacetime and the AdS spacetime are studied in \cite{Lin-Wang}. The set of observers is simply the orbit of $\frac{\partial}{\partial t}$ under the isometry group of the reference spacetime. See Section 7.2 for more details in the AdS case. Let $\Sigma$ be a surface in a spacetime $N$. We assume the mean curvature vector $H$ of $\Sigma$ is spacelike and the normal bundle of $\Sigma$ is oriented.
The data we use for defining the quasi-local energy is the triple $(\sigma,|H|,\alpha_H)$ where $\sigma$ is the induced metric, $|H|$ is the norm of the mean curvature vector, and $\alpha_H$ is the connection one-form of the normal bundle with respect to the mean curvature vector \[ \alpha_H(\cdot )=\langle \nabla^N_{(\cdot)} \frac{J}{|H|}, \frac{H}{|H|} \rangle. \] Here $J$ is the reflection of $H$ through the incoming light cone in the normal bundle. For an isometric embedding $X$ into the reference spacetime, we write $X= (\tau ,X^1,X^2,X^3)$ with respect to a fixed static chart of the reference spacetime. The quasi-local energy associated to the pair $(X,\frac{\partial}{\partial t})$ is defined to be \begin{equation}\label{energy_fix_chart_base} \begin{split} E(\Sigma, X,\frac{\partial}{\partial t}) = & \frac{1}{8 \pi} \Big \{ \int \Omega \widehat H d \widehat \Sigma - \int \Big [ \sqrt{(1+\Omega^2| \nabla \tau|^2) |H|^2 \Omega^2 + div(\Omega^2 \nabla \tau)^2 } \\ & \qquad - div(\Omega^2 \nabla \tau) \sinh^{-1} \frac{ div(\Omega^2 \nabla \tau) }{\Omega |H|\sqrt{1+\Omega^2| \nabla \tau|^2} } - \Omega^2 \alpha_{H} (\nabla \tau) \Big ] d \Sigma \Big \}. \end{split} \end{equation} Using Proposition \ref{total_mean_mean_gauge}, we have \begin{equation}\label{energy_fix_chart_graph} \begin{split} E(\Sigma, X,\frac{\partial}{\partial t}) =& \frac{1}{8 \pi} \Big \{ \int \Big [ \sqrt{(1+\Omega^2| \nabla \tau|^2) |H_0|^2 \Omega^2 + div(\Omega^2 \nabla \tau)^2 } \\ & \qquad - div(\Omega^2 \nabla \tau) \sinh^{-1} \frac{ div(\Omega^2 \nabla \tau) }{\Omega |H_0|\sqrt{1+\Omega^2| \nabla \tau|^2} } - \Omega^2 \alpha_{H_0} (\nabla \tau) \Big ] d \Sigma\\ & - \int \Big [ \sqrt{(1+\Omega^2| \nabla \tau|^2) |H|^2 \Omega^2 + div(\Omega^2 \nabla \tau)^2 } \\ & \qquad - div(\Omega^2 \nabla \tau) \sinh^{-1} \frac{ div(\Omega^2 \nabla \tau) }{\Omega |H|\sqrt{1+\Omega^2| \nabla \tau|^2} } - \Omega^2 \alpha_{H} (\nabla \tau) \Big ] d \Sigma \Big \}. \end{split} \end{equation} \begin{remark} \label{Brown-York_positive} For an isometric embedding into the static slice of the AdS spacetime, \[ E(\Sigma,X,\frac{\partial}{\partial t}) = \int \Omega(H_0-|H|) d\Sigma.\] Such an expression was studied in \cite{Wang-Yau07, Shi-Tam}. In particular, the positivity of the above expression was proved in \cite{Shi-Tam}. \end{remark} While the above expression seems to depend on the choice of the static chart, we can rewrite it purely in terms of the isometric embedding $X$ and the observer $T_0$. In fact, $\Omega^2 = - \langle T_0,T_0 \rangle$ and $- \Omega^2 \nabla \tau= T_0^\top $, the tangential component of $T_0$ to $X(\Sigma)$. Thus \begin{definition}\label{energy_invariant} The quasi-local energy $E(\Sigma, X,T_0)$ of $\Sigma$ with respect to the pair $(X,T_0)$ of an isometric embedding $X$ and an observer $T_0$ is \[ \begin{split} & 8 \pi E(\Sigma, X,T_0) \\ =& \int_{\Sigma} \Big [ \sqrt{ - \langle T_0^\perp,T_0^\perp \rangle |H_0|^2 + div(T_0^\top)^2 } - div(T_0^\top) \sinh^{-1} \frac{ div(T_0^\top) }{|H_0|\sqrt{- \langle T_0^\perp,T_0^\perp \rangle} } + \alpha_{H_0} (T_0^\top) \Big ] d\Sigma\\ & - \int_{\Sigma} \Big [ \sqrt{ - \langle T_0^\perp,T_0^\perp \rangle |H|^2 + div(T_0^\top)^2 } - div(T_0^\top) \sinh^{-1} \frac{ div(T_0^\top) }{|H|\sqrt{- \langle T_0^\perp,T_0^\perp \rangle} } + \alpha_{H} (T_0^\top) \Big ]d\Sigma . \end{split} \] where $T_0^\perp$ is the normal part of $T_0$ to $X(\Sigma)$. 
\end{definition} The quasi-local energy is invariant with respect to the isometry of the reference spacetime if an isometry is applied to both $X$ and $T_0$. As a result, in studying the variation of $E$, it suffices to consider the quasi-local energy with respect to a fixed $T_0 = \frac{\partial}{\partial t}$. The quasi-local energy is expressed in terms of the difference of two integrals. We refer to the first integral in \eqref{energy_fix_chart_base} as the reference Hamiltonian and the second integral in \eqref{energy_fix_chart_base} as the physical Hamiltonian. \section{First variation of the quasi-local energy} In this section, we compute the first variation of the quasi-local energy. It suffices to consider the variation of the isometric embedding $X$ while fixing $T_0=\frac{\partial}{\partial t}$. \begin{definition} An optimal isometric embedding for the data $(\sigma, |H|, \alpha_H)$ is an isometric embedding $X_0$ of $\sigma$ into the reference spacetime (dS or AdS) that is a critical point of the quasi-local energy $E(\Sigma, X, \frac{\partial}{\partial t})$ among all nearby isometric embeddings $X$ of $\sigma$. \end{definition} For the Wang-Yau quasi-local energy with the Minkowski reference, the first variation of the quasi-local energy is computed in Section 6 of \cite{Wang-Yau2}. The computation of the variation of the physical Hamiltonian is straightforward and the main difficulty is to evaluate the variation of the reference Hamiltonian. In \cite{Wang-Yau2}, this is done by computing the variation of the total mean curvature of a surface in $\mathbb R^3$ with respect to a variation of the metric. This becomes more complicated here since the isometric embedding equation also involves the static potential when the reference is either the dS or AdS spacetime. Instead of following the approach in \cite{Wang-Yau2}, we derive the first variation by an alternative approach used in \cite{Chen-Wang-Yau1}. The idea there is to consider the image $X(\Sigma)$ in the reference spacetime as a new physical surface and show that it is naturally a critical point of the quasi-local energy with respect to other isometric embeddings into the reference spacetime. We first derive the following result for surfaces in the reference spacetime. \begin{theorem} \label{thm_own_critical} The identity isometric embedding for a surface $\Sigma$ in the reference spacetime is a critical point of its own quasi-local energy. Namely, suppose $\Sigma$ is in the reference spacetime defined by an embedding $X_0$. Consider a family of isometric embeddings $X(s)$, $-\epsilon<s<\epsilon$ such that $X(0)=X_0$. Then we have \[ \frac{d}{ds}|_{s=0} E(\Sigma, X(s), \frac{\partial}{\partial t})= 0. \] \end{theorem} \begin{proof} Denote $\frac{d}{ds}|_{s=0}$
parameters. \end{lemma} The optimization landscape of the inner loop is similar to that of the standard infinite-horizon LQR problem studied in \cite{fazel2018global, bu2019LQR, malik2020derivative}. We note that in our finite-horizon setting, independent process noises (together with the random initial state) with positive-definite covariances are essential for $\Sigma_{\bK,\bL}$ to be full-rank (in contrast to only requiring a random initial state with positive-definite covariance in \cite{fazel2018global}). The full-rankness of $\Sigma_{\bK,\bL}$ further guarantees that the stationary point of the inner-loop objective function $\cG(\bK, \bL)$ is also the unique optimal solution, as suggested in Lemma \ref{lemma:station}. Furthermore, it also ensures that the inner-loop objective function $\cG(\bK, \bL)$ is PL, as shown in Lemma \ref{lemma:landscape_L}. These two properties together are the key enablers to establish the global convergence of our double-loop method, despite the problem being nonconvex-nonconcave. Subsequently, we analyze the optimization landscape of the outer loop in the following lemma (proved in \S\ref{proof:landscape_K}), subject to the feasible set $\cK$ defined by \eqref{eqn:set_ck}. Note that the set $\cK$ is critical, as, by Lemma \ref{lemma:assumption_L}, membership in $\cK$ is a sufficient and almost necessary condition for the solution to the associated inner-loop subproblem to be well defined. More importantly, from a robust control perspective, the set $\cK$ represents the set of control gains that enjoy a certain level of \emph{robustness}, which is in the same vein as the $\cH_{\infty}$-norm constraint for the infinite-horizon LTI setting \cite{zhang2019policymixed}. Indeed, both constraints force the gain matrix to \emph{attenuate} a prescribed level of disturbance. This level of robustness also corresponds to the level of \emph{risk-sensitivity} of the controllers in LEQG problems. For more discussion of both the finite- and infinite-horizon discrete-time disturbance attenuation problem, we refer the reader to \cite{doyle1989state, whittle1990risk, bacsar2008h}. \begin{lemma}(Outer-Loop Landscape)\label{lemma:landscape_K} There exist zero-sum LQ dynamic games such that $\cG(\bK, \bL(\bK))$ is nonconvex and noncoercive on $\cK$. Specifically, as $\bK$ approaches $\partial\cK$, $\cG(\bK, \bL(\bK))$ does not necessarily approach $+\infty$. Moreover, the stationary point of $\cG(\bK, \bL(\bK))$ in $\cK$, denoted as $(\bK^*, \bL(\bK^*))$, is unique and constitutes the unique Nash equilibrium of the game. \end{lemma} \begin{figure}[t] \begin{minipage}{.48\textwidth} \centering \includegraphics[width=1\textwidth]{plots/landscape.png} \caption{\textit{Left:} Optimization landscape of LQR, where the dashed line represents the boundary of the stabilizing controller set. \textit{Right:} Optimization landscape of the outer loop, with the dashed line representing the boundary of $\cK$. The solid lines represent the contour lines of the objective function, $K$ denotes the control gain of one iterate, and $\bigstar$ is the global minimizer.} \label{fig:landscape} \end{minipage} \hfill \begin{minipage}{.48\textwidth} \centering \includegraphics[width=0.8\textwidth]{plots/ir_plot.png} \caption{Illustrating the proof idea for Theorem \ref{theorem:ir_K}. Starting with any $\bK \hspace{-0.1em} \in \hspace{-0.1em} \cK$ that induces a $\bP_{\bK, \bL(\bK)}$, denote the gain matrix after one step of the updates \eqref{eqn:npg_K} or \eqref{eqn:gn_K} as $\bK'$.
We construct an iterative argument backward in time to find a constant stepsize such that $P_{K'_t, L(K'_t)} \hspace{-0.1em}\geq\hspace{-0.1em} 0$ exists and satisfies $P_{K'_t, L(K'_t)} \hspace{-0.1em}\leq\hspace{-0.1em} P_{K_t, L(K_t)}$ for all $t$. Specifically, for any $t \in \{0, \hspace{-0.1em}\cdots\hspace{-0.1em}, N\hspace{-0.1em}-\hspace{-0.1em}1\}$, a $K'_t$ satisfying \textcolor{mysquare}{$\blacksquare$} also satisfies \textcolor{mytriangle}{\large$\blacktriangle$\normalsize}. Moreover, \textcolor{mybigcube}{$\blacklozenge$} is automatically enforced by Assumption \ref{assumption_game_recursive}. Combined, $\bK' \hspace{-0.1em}\in \hspace{-0.1em}\cK$ is guaranteed.} \label{fig:proof1} \end{minipage} \end{figure} The lack of coercivity raises challenges for the convergence analysis, as a decrease in the objective value cannot ensure the feasibility of the updated gain matrix, in contrast to the standard LQR problem \cite{fazel2018global,bu2019LQR}. We illustrate the difficult landscape of the outer loop in Figure \ref{fig:landscape}. To address this challenge, we will show next that the natural PG (NPG) and Gauss-Newton (GN) updates, two specific policy search directions, can automatically preserve the feasibility of the iterates on-the-fly, a property referred to as {implicit regularization} in \cite{zhang2019policymixed} for infinite-horizon LTI systems. \subsection{Update Rules and Global Convergence}\label{sec:double_update} In this section, we introduce three PG-based update rules. We use $l, k \geq 0$ to represent the iteration indices of the inner- and outer-loop updates, respectively, and we additionally define \begin{align}\label{eqn:gh_def} \bH_{\bK, \bL} := \bR^w - \bD^{\top}\bP_{\bK, \bL}\bD, \quad \bG_{\bK, \bL(\bK)} := \bR^u + \bB^{\top}\tilde{\bP}_{\bK, \bL(\bK)}\bB, \end{align} where $\tilde{\bP}_{\bK, \bL(\bK)}$ is defined in \eqref{eqn:tilde_pk}. The update rules, motivated by \cite{fazel2018global, bu2019LQR, zhang2019policymixed}, can be written as follows: \vspace{-0.9em} \hspace{-2.55em} \begin{minipage}[b]{0.5\linewidth} \centering\begin{align} \text{PG: } \quad \bL_{l+1} &= \bL_l + \eta\nabla_{\bL}\cG(\bK_k, \bL_l), \label{eqn:pg_L}\\ \text{NPG: } \quad \bL_{l+1} &= \bL_l + \eta\nabla_{\bL}\cG(\bK_k, \bL_l)\Sigma^{-1}_{\bK_k, \bL_l}, \label{eqn:npg_L} \\ \text{GN: } \quad \bL_{l+1} &= \bL_l + \eta\bH_{\bK_k, \bL_l}^{-1}\nabla_{\bL}\cG(\bK_k, \bL_l)\Sigma^{-1}_{\bK_k, \bL_l},\label{eqn:gn_L} \end{align} \end{minipage} \hspace{0.5em} \begin{minipage}[b]{0.5\linewidth} \centering \begin{align} \bK_{k+1} &= \bK_k - \alpha\nabla_{\bK}\cG(\bK_k, \bL(\bK_k)), \label{eqn:pg_K}\\ \bK_{k+1} &= \bK_k - \alpha\nabla_{\bK}\cG(\bK_k, \bL(\bK_k))\Sigma^{-1}_{\bK_k, \bL(\bK_k)}, \label{eqn:npg_K} \\ \bK_{k+1} &= \bK_k - \alpha\bG_{\bK_k, \bL(\bK_k)}^{-1}\nabla_{\bK}\cG(\bK_k, \bL(\bK_k))\Sigma^{-1}_{\bK_k, \bL(\bK_k)},\label{eqn:gn_K} \end{align} \end{minipage} where $\eta, \alpha >0$ are constant stepsizes for the inner loop and the outer loop, respectively. For a fixed $\bK \in \cK$, $\bH_{\bK, \bL_l}$ is invertible for any $l \geq 0$, because $\bH_{\bK, \bL} \geq \bH_{\bK, \bL(\bK)} >0$ for all $\bL$. Also, $\bG_{\bK_k, \bL(\bK_k)}$ is invertible for any $\bK_k \in \cK$, due to $\bG_{\bK_k, \bL(\bK_k)} \geq \bR^u$ in the p.s.d. sense at such iterates.
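For concreteness, the following is a minimal sketch (in Python/numpy) of one inner-loop step under each update rule, assuming oracle access to the gradient $\nabla_{\bL}\cG$, the covariance $\Sigma_{\bK,\bL}$, and $\bH_{\bK,\bL}$; the three callables are placeholders for the quantities defined above, not an actual implementation:

\begin{verbatim}
import numpy as np

def inner_step(K, L, eta, grad_L, Sigma, H, method="pg"):
    # grad_L(K, L): gradient of the inner-loop objective w.r.t. L.
    # Sigma(K, L):  state correlation matrix Sigma_{K,L} (full rank).
    # H(K, L):      the matrix H_{K,L} from the display above.
    G = grad_L(K, L)
    if method == "pg":    # plain gradient ascent
        return L + eta * G
    S_inv = np.linalg.inv(Sigma(K, L))
    if method == "npg":   # precondition by Sigma^{-1}
        return L + eta * G @ S_inv
    # Gauss-Newton: additionally precondition by H_{K,L}^{-1}
    return L + eta * np.linalg.inv(H(K, L)) @ G @ S_inv
\end{verbatim}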
We note that the PG \eqref{eqn:pg_L}, \eqref{eqn:pg_K} and NPG \eqref{eqn:npg_L}, \eqref{eqn:npg_K} updates can be estimated using samples, as shown in \cite{fazel2018global,malik2020derivative}. To prove the global convergence of our algorithms to the Nash equilibrium, we first present the convergence results for three inner-loop PG updates. \begin{theorem}(Inner-Loop Global Convergence)\label{theorem:conv_L} For a fixed $\bK\in \cK$ and an arbitrary $\bL_0$ that induces a finite $\cG(\bK, \bL_0)$, we define a superlevel set $\cL_{\bK}(a)$ as in \eqref{eqn:levelset_L}, where $a < \cG(\bK, \bL_0)$ is an arbitrary constant. Then, for $l \geq 0$, the iterates $\bL_l$ following \eqref{eqn:pg_L}-\eqref{eqn:gn_L} with stepsizes satisfying \begin{align*} \text{PG: } \ \eta \leq \frac{1}{\psi_{\bK, a}},\quad \text{Natural PG: } \ \eta \leq \frac{1}{2\|\bH_{\bK, \bL_0}\|}, \quad\text{Gauss-Newton: } \ \eta \leq \frac{1}{2}, \end{align*} converge to $\bL(\bK)$ at globally linear rates, where $\psi_{\bK, a}$ is the smoothness constant of the objective over $\cL_{\bK}(a)$, and $\bH_{\bK, \bL}$ is as defined in \eqref{eqn:gh_def}. Moreover, with $\eta = 1/2$, GN \eqref{eqn:gn_L} converges to $\bL(\bK)$ with a locally Q-quadratic rate. \end{theorem} The proof of Theorem \ref{theorem:conv_L} is deferred to \S\ref{proof:conv_L}. For the outer loop, we require the iterates of $\bK$ to stay within $\cK$ in order for the solution to the associated inner-loop subproblem to be well defined. To meet this requirement, we introduce the \emph{implicit regularization} property for the NPG \eqref{eqn:npg_K} and GN \eqref{eqn:gn_K} updates in Theorem \ref{theorem:ir_K}, with its proof being provided in \S\ref{proof:ir_K}. \begin{theorem}(Implicit Regularization)\label{theorem:ir_K} {Let $\bK_0 \in \cK$ and let the stepsizes satisfy \begin{align*} \text{Natural PG: } \alpha \leq 1/\|\bG_{\bK_0, \bL(\bK_0)}\|, \quad \text{Gauss-Newton: }\alpha &\leq 1. \end{align*} Then, the iterates $\bK_k \in\cK$ for all $k \geq 0$. In other words, the sequence of solutions to the Riccati equation \eqref{eqn:DARE_black_L}, $\{\bP_{\bK_k, \bL(\bK_k)}\}$, exists, and for all $k \geq 0$, $\bP_{\bK_k, \bL(\bK_k)}$ always satisfies the conditions in \eqref{eqn:set_ck}. Furthermore, the sequence $\{\bP_{\bK_k, \bL(\bK_k)}\}$ is monotonically non-increasing and bounded below by $\bP_{\bK^*, \bL(\bK^*)}$, in the p.s.d. sense.} \end{theorem} We note that one key step of the proof for Theorem \ref{theorem:ir_K} is to ensure the \emph{existence} of a solution to \eqref{eqn:DARE_black_L} along the iterations, by carefully controlling the sizes of update steps along certain descent directions. We provide an illustration of the proof idea in Figure \ref{fig:proof1}. Specifically, the implicit regularization property holds for NPG and GN directions because they can ensure \emph{matrix-wise} decrease of $\bP_{\bK_k,\bL(\bK_k)}$ (more than just that of the objective value $\cG(\bK,\bL(\bK))$), while other descent directions (e.g., vanilla PG) can only decrease $\cG(\bK,\bL(\bK))$, which is a \emph{scalar-wise} decrease and is not sufficient to ensure that the next iterate stays in $\cK$. Note that the intuition for implicit regularization here is thus more explicit than that in \cite{zhang2019policymixed} for infinite-horizon LTI settings, where some linear matrix inequalities have to be delicately designed. We highlight the importance of the implicit regularization property as follows. 
\begin{remark}(Preserving the Robustness of $\bK_0$)\label{remark:ir} Suppose that the initial control gain matrix satisfies $\bK_0 \in \cK$. Then Lemma \ref{lemma:equivalent} shows that $\bK_0$ is the control gain matrix that attains a $\gamma$-level of disturbance attenuation. By the implicit regularization property in Theorem \ref{theorem:ir_K}, every iterate $\bK_k\in\cK$ for all $k\geq 0$ following the NPG \eqref{eqn:npg_K} or the GN \eqref{eqn:gn_K} update rules will thus preserve this $\gamma$-level of disturbance attenuation throughout the
# GSEB Solutions Class 11 Chemistry Chapter 12 Organic Chemistry: Some Basic Principles and Techniques

## Gujarat Board Textbook Solutions Class 11 Chemistry Chapter 12 Organic Chemistry: Some Basic Principles and Techniques

### GSEB Class 11 Chemistry Organic Chemistry: Some Basic Principles and Techniques Text Book Questions and Answers

Question 1.
What are the hybridisation states of each carbon atom in the following compounds?
CH2=C=O, CH3-CH=CH2, (CH3)2C=O, CH2=CHCN, C6H6

• CH2=C=O : sp², sp
• CH3-CH=CH2 : sp³, sp², sp²
• (CH3)2C=O : sp³, sp³, sp²
• CH2=CHCN : sp², sp², sp
• C6H6 : all carbons sp²

Question 2.
Indicate the σ and π bonds in the following molecules:
C6H6, C6H12, CH2Cl2, CH2=C=CH2, CH3NO2, HCONHCH3.

Question 3.
Write bond line formulas for: Isopropyl alcohol, 2,3-Dimethylbutanal, Heptan-4-one.

Question 4.
Give the IUPAC names of the following compounds:
(a) Propylbenzene
(b) 3-Methylpentanenitrile
(c) 2,5-Dimethylheptane
(d) 3-Bromo-3-chloroheptane
(e) 3-Chloropropan-1-al
(f) 2,2-Dichloroethanol.

Question 5.
Which of the following represents the correct IUPAC name for the compounds concerned?
(a) 2,2-Dimethylpentane or 2-Dimethylpentane
(b) 2,4,7-Trimethyloctane or 2,5,7-Trimethyloctane
(c) 2-Chloro-4-methylpentane or 4-chloro-2-methylpentane
(d) But-3-yn-1-ol or But-4-ol-1-yne.

(a) 2,2-Dimethylpentane
(b) 2,4,7-Trimethyloctane
(c) 2-Chloro-4-methylpentane
(d) But-3-yn-1-ol.

Question 6.
Draw formulas for the first five members of each homologous series beginning with the following compounds:
(a) H-COOH
(b) CH3COCH3
(c) H-CH=CH2.

(a) H-COOH; CH3COOH; CH3CH2COOH; CH3CH2CH2COOH; CH3-CH2-CH2-CH2-COOH
(b) CH3COCH3; CH3COCH2CH3; CH3COCH2CH2CH3; CH3COCH2CH2CH2CH3; CH3COCH2CH2CH2CH2CH3
(c) H2C=CH2; CH3-CH=CH2; CH3-CH2-CH=CH2; CH3-CH2-CH2-CH=CH2 and CH3-CH2-CH2-CH2-CH=CH2.

Question 7.
Give condensed and bond line structural formulas and identify the functional group(s) present, if any, for:
(a) 2,2,4-Trimethylpentane
(b) 2-Hydroxy-1,2,3-propanetricarboxylic acid
(c) Hexanedial.

Question 8.
Identify the functional groups in the following compounds.
(a) -CHO aldehyde group; -OH hydroxy group; -OMe methoxy group (ether)
(b)
(c) -CH=CH-NO2 : -HC=CH- alkene group (ethylenic double bond); -NO2 nitro group.

Question 9.
Which of the two: O2N-CH2-CH2-O⁻ or CH3-CH2-O⁻ is expected to be more stable and why?

O2N-CH2-CH2-O⁻ is expected to be more stable than CH3-CH2-O⁻. In O2N-CH2-CH2-O⁻, due to the -I effect exerted by the -NO2 group, there occurs a dispersal of the negative charge, whereas in CH3→CH2-O⁻, due to the +I effect exerted by CH3, there occurs an intensification of the negative charge. Dispersal of the charge leads to greater stability of the ion.

Question 10.
Explain why alkyl groups act as electron donors when attached to a π system.

Due to the +I effect, alkyl groups release electrons towards the carbon atom of the π system to which they are attached, and hence act as electron donors. (By contrast, the shifting of a π electron pair in a multiple bond at the demand of an attacking reagent is termed the electromeric effect.)

Question 11.
Draw the resonance structures for the following compounds. Show the electron shift using curved-arrow notation.
(a) C6H5OH
(b) C6H5NO2
(c) CH3CH=CHCHO
(d) C6H5-CHO
(e) C6H5-CH2⁺
(f) CH3CH=CHCH2⁺

(a) Resonance structures of phenol: there are 5 canonical structures for phenol. Phenol is a resonance hybrid of these 5 structures.
(b) Resonance structures of nitrobenzene: the following are the four resonating structures of nitrobenzene.
(c) CH3-CH=CH-CHO
(d) C6H5CHO
(e) C6H5-CH2⁺
(f) CH3CH=CHCH2⁺

Question 12.
What are electrophiles and nucleophiles? Explain with examples.

Electrophiles: A reagent which is in search of electrons is called an electrophile. It is an electron-seeking reagent and accepts a pair of electrons.
Neutral electrophiles: BF3, AlCl3, FeCl3, carbenes (:CR2)
Positive electrophiles: H⁺, H3O⁺, Cl⁺, Br⁺, I⁺, NO2⁺, NO⁺, R⁺
They are Lewis acids.
Nucleophiles: A reagent which is in search of a relatively positive centre (nucleus-loving). They are electron-rich species containing at least one lone pair of electrons.
Neutral nucleophiles: H2O:, :NH3, RNH2, ROH, RSH, ROR
Negative nucleophiles: H⁻, Cl⁻, Br⁻, I⁻, R⁻, OH⁻, OR⁻, SR⁻, NH2⁻, CN⁻, RCOO⁻ etc.

Question 13.
Identify the reagents shown in bold in the following equations as nucleophiles or electrophiles.
(a) CH3COOH + HO⁻ → CH3COO⁻ + H2O
(b) CH3COCH3 + CN⁻ → (CH3)2C(CN)(OH)
(c) C6H6 + CH3CO⁺ → C6H5COCH3.

(a) HO⁻ is a nucleophile
(b) CN⁻ is a nucleophile
(c) CH3CO⁺ is an electrophile.

Question 14.
Classify the following reactions in one of the reaction types studied in this unit:
(a) CH3CH2Br + HS⁻ → CH3CH2SH
(b) (CH3)2C=CH2 + HCl → (CH3)2ClC-CH3
(c) CH3CH2Br + HO⁻ → CH2=CH2 + H2O
(d) (CH3)3C-CH2OH + HBr → (CH3)2CBr-CH2CH3

(a) Nucleophilic substitution reaction
(b) Electrophilic addition reaction
(c) β-Elimination reaction
(d) Substitution reaction with rearrangement.

Question 15.
What is the relationship between the members of the following pairs of structures? Are they structural or geometrical isomers or resonance contributors?

(a) They differ in the position of the functional group and are position isomers (structural isomers).
(b) They are geometrical isomers.
(c) They are resonance contributors.

Question 16.
For the following bond cleavages, use curved arrows to show the electron flow and classify each as homolysis or heterolysis. Identify the reactive intermediate produced as free radical, carbocation or carbanion.

(a) It is a case of homolytic fission. Free radicals are formed.
(b) It is a case of heterolysis (heterolytic bond fission). The reaction intermediate formed is a carbanion.
(c) It is a case of heterolysis (heterolytic bond fission). The reaction intermediate formed is a carbocation.
(d) It is a case of heterolysis (heterolytic bond fission). The reaction intermediate formed is a carbocation.

Question 17.
Explain the terms inductive and electromeric effects. Which electron displacement effect explains the following correct orders of acidity of the carboxylic acids?
(a) Cl3CCOOH > Cl2CHCOOH > ClCH2COOH
(b) CH3CH2COOH > (CH3)2CHCOOH > (CH3)3C·COOH

Inductive effect: The displacement of the σ electron cloud along a saturated carbon chain, whenever an electron-withdrawing or electron-donating group is present at the end of the chain, is called the inductive effect or the I-effect. This effect weakens steadily with increasing distance from the substituent (electron-withdrawing or electron-donating group) and practically dies down after three carbon atoms. There are two types of inductive effects:
(i) If the substituent attached to the end of the chain is electron-withdrawing, the effect is called the -I effect.
(ii) If the substituent attached to the end of the carbon chain is electron-donating, the effect is called the +I effect.
The +I effect of some of the atoms or groups in decreasing order is:

Inductive effect is a permanent effect operating in the ground state of organic molecules and hence is responsible for the high melting point, boiling point and dipole moment of polar compounds.

Electromeric effect: It involves the complete transfer of the π electrons of a multiple bond (double or triple bond) to one of the bonded atoms (usually the more electronegative one) in the presence of an attacking reagent. It is called the E-effect. This effect is temporary and takes place only in the presence of a reagent. As soon as the reagent is removed, the molecule reverts back to its original position. The electromeric effect is also of two types: the +E effect and the -E effect. If the electrons of the π bond are transferred to that atom of the double bond to which the reagent finally gets attached, the effect is called the +E effect; for example, the addition of acids to alkenes. If, on the other hand, the electrons of the double bond are transferred to an atom of the double bond other than the one to which the reagent finally gets attached, the effect is called the -E effect.

The given acidity orders are explained by the inductive effect: in (a) the increasing number of electron-withdrawing Cl atoms (-I effect) disperses the negative charge of the carboxylate anion and increases acidity, while in (b) the increasing number of electron-donating alkyl groups (+I effect) intensifies the negative charge and decreases acidity.

Question 18.
Give a brief description of the principles of the following techniques taking an example in each case.
(a) Crystallisation
(b) Distillation
(c) Chromatography.

(a) Crystallisation: The process by which an impure compound is converted into its crystals is known as crystallisation. Crystals are the purest form of a substance, having definite geometrical shapes. The impure substance is dissolved in a suitable solvent in which it is sparingly soluble at room temperature but appreciably soluble at higher temperature. The solution is concentrated to get a nearly saturated solution. When this saturated solution is cooled, crystals of the pure substance separate out. Impure sugar is purified by this method.

(b) Distillation: Distillation involves the conversion of a liquid into vapours by heating, followed by condensation of the vapours thus produced by cooling. This method is commonly used for those liquids which are sufficiently stable at their boiling points and which contain non-volatile impurities. If a mixture of two liquids having different boiling
(see Figure \ref{fig:SPIN-B-FIELD-BUNCH} and \ref{fig:SPIN-B-LINE-IOSS-MAG}). Contours of $b^2/\rho$ with selected values are also displayed on each panel of Figure \ref{fig:SPIN-B-LINE-IOSS-MAG} for reference. In the vicinity of the black hole at $r/r_{\rm g} \lesssim 20$, the value of $b^{2}/\rho$ decreases monotonically (approximately independent of the colatitude angle $\theta$ inside the funnel) as $r$ increases. Again, this can be interpreted as a consequence of the absence of visible (only very weak) bunching of poloidal magnetic flux at $r/r_{\rm g} \simeq 5$--10 (around the stagnation surface edges in the case of $a=0.5$--0.99), as shown in Figures \ref{fig:FID-B-FIELD-BUNCH}, \ref{fig:SPIN-B-FIELD-BUNCH}, and \ref{fig:SPIN-B-LINE-IOSS-MAG}. Thus, we may not expect a significant concentration of the mass density toward the polar axis at a few $\lesssim r/r_{\rm g} \lesssim 20$, as examined with our fiducial run in Section \ref{sec:RESULTS-FID} (see also Figures \ref{fig:FID-MAG-EVO} and \ref{fig:FID-B-LINE-EVO}). Depending on the black hole spin ($a=0.5$, 0.7, 0.9, and 0.99), $b^{2}/\rho \simeq 2$, 5, 10, and 20 are identified at around the closest part (near the funnel edge) of the jet stagnation surface. This is located between the two outermost streamlines ($z \propto R^{2}$ and $z \propto R^{1.6}$) of the semi-analytical solution of the FFE jet. As seen in Figure \ref{fig:SPIN-LORENTZ}, a high value of $\Gamma$ is distributed throughout an outer layer of the funnel between the two outermost streamlines ($z \propto R^{2}$ and $z \propto R^{1.6}$), which are anchored to the event horizon. A high value of $b^{2}/\rho$ at the jet launching point suggests that the flow will undergo bulk acceleration to relativistic velocities, as seen in Equation (\ref{eq:TOTAL-TO-MATTER-ENG.FLUX}). This is a necessary, but not sufficient, condition, as the magnetic nozzle effect is also needed, which can be triggered by a differential bunching of poloidal flux toward the polar axis. As suggested in \citet[]{TNTT90, PNHMWA15}, the location of the jet stagnation surface, where the magneto-centrifugal force is balanced by the gravity of the black hole, is independent of the flow properties, such as the rate of mass loading, because it is solely determined by $a$ and $\Omega$ (Equation \ref{eq:Omega_F}). We point out that a departure of the jet stagnation surface from the black hole at colatitudes approaching the polar axis ($\theta \rightarrow 0$) gives a prospective reason for the lateral stratification of $\Gamma$ at large distances (where the sufficient condition for the bulk acceleration may be applied). The above issue could be associated with the so-called limb-brightened feature in the M87 jet. Note that we identified the value of $b^{2}/\rho \simeq 0.5$--1 as the physical boundary at the funnel edge along the outermost BP82-type parabolic streamline, which is anchored to the event horizon. Thus, if this boundary condition holds even further downstream ($r/r_{\rm g} \gg 100$), the funnel edge is unlikely to be a site exhibiting a relativistic flow, as examined in Figure \ref{fig:SPIN-LORENTZ}. Highly Doppler-boosted emission may not be expected there, but an alternative process, such as in situ particle acceleration, may be considered at the edge of the jet sheath (as a boundary shear layer) under energy equipartition between the relativistic particles and the magnetic field \citep[e.g.][]{SO02}.
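As a schematic reminder (a textbook cold, ideal-MHD relation with notation adapted to ours, not an equation taken from our simulations), the total-to-matter energy flux ratio $\mu$ bounds the asymptotic Lorentz factor along a streamline,
\[
\Gamma_{\infty} \;\lesssim\; \mu \;\simeq\; \Gamma \left( 1 + \frac{b^{2}}{\rho} \right),
\]
so $b^{2}/\rho \gg 1$ near the stagnation surface (where $\Gamma \simeq 1$) leaves room for subsequent bulk acceleration, provided the Poynting flux is actually converted into kinetic energy by the magnetic nozzle effect.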
On the other hand, limbs in the M87 jet have a finite width $\delta R$ inside their edges, and $\delta R$ seems to increase in the downstream direction \citep[e.g.][]{ANP16}, suggesting a differential bunching of streamlines \citep[e.g.][]{K11}. In this paper, we identify the outer jet structure (limbs) as the jet sheath, while the inner jet structure (inside limbs) is identified as the jet spine. In the next section, our results are compared with VLBI observations, followed by related discussions in Section \ref{sec:DIS}. In particular, we devote Section \ref{sec:SPINE-SHEATH} to the discussion of the limb-brightened feature in the context of MHD jets and Section \ref{sec:VLBI-CORES2} to the origin of knotty structures. \section{Comparison with VLBI observations} \label{sec:COMP} \subsection{Jet Morphology} \label{sec:OBS-MOR} \begin{figure*} \centering\includegraphics[scale=0.72, angle=0]{f15} \caption{\label{fig:R-z} Distribution of the jet radius $R$ as a function of the jet axial distance $z$ (de-projected with $M=6.2 \times 10^{9} M_{\odot}$ and $\theta_{\rm v}=14^{\circ}$) from the SMBH in units of $r_{\rm g}$ \citep[cf.][labeled as AN12, NA13, and H13, respectively]{AN12, NA13, H13}. Additional data points are taken from \citet[]{D12, A15, H16} (labeled as D12, A15, and H16, respectively). The (vertical) dashed-dotted line denotes the Bondi radius $r_{\rm B}$, located at $\simeq 6.9 \times 10^{5}\, r_{\rm g}$, and the HST-1 complex is around $10^{6}\,r_{\rm g}$. The filled black region denotes the black hole (inside the event horizon), while the hatched area represents the ergosphere for the spin parameter $a = 0.99$. The light gray area denotes the approximate solution (e.g. NMF07, TMN08) of the FFE genuine parabolic jet (outermost BZ77-type streamline: $z \propto R^{2}$ at $R/r_{\rm g} \gg 1$), while the dark gray area is the case of the parabolic jet (outermost BP82-type streamline: $z \propto R^{1.6}$ at $R/r_{\rm g} \gg 1$), respectively. In both of the outermost streamlines, which are anchored to the event horizon with $\theta_{\rm fp}=\pi/2$, a variation from $a=0.5$ (upper edge) to $a=0.99$ (lower edge) is represented as a shaded area. The solid line is the linear least-squares fit to the data points of MERLIN 1.8 GHz, indicating the conical stream $z \propto R$ \citep[]{AN12}.} \end{figure*} Figure \ref{fig:R-z} overviews the geometry of the M87 jet by compiling the data in the literature (see the caption for references). Multi-wavelength observations by \citet[][hereafter AN12]{AN12} revealed that the global structure of the jet sheath is characterized by the parabolic stream $z \propto R^{1.73}$ at $z/r_{\rm g} \sim 400$--$4 \times 10^{5}$ \citep[see also][]{H13}, while it changes into the conical stream $z \propto R^{0.96}$ beyond the Bondi radius of $r_{\rm B}/r_{\rm g} \simeq 6.9 \times 10^{5}$ \citep[$\sim 205$ pc:][]{R06}. \citet[]{H13} and \citet[]{NA13} examined the innermost jet region ($z/r_{\rm g} \gtrsim 10$) by utilizing the VLBI core shift \citep[]{H11}; \citet[]{H13} suggests a possible structural change further upstream, at around $z/r_{\rm g} \sim 300$, where the VLBA core at 5 GHz is located. The innermost jet sheath ($z/r_{\rm g} \gtrsim 200$) has recently been revealed with the HSA at 86 GHz \citep[]{H16}. Based on our theoretical examinations presented in previous sections, we overlay the outermost streamlines of the semi-analytical FFE jet model (NMF07, TMN08) with varying Kerr parameters ($a=0.5$--0.99) on the data points in Figure \ref{fig:R-z} for comparison.
There are notable findings: i) the inner jet radius (at $z/r_{\rm g}\lesssim 100$), which is represented by VLBI cores at 15--230 GHz, is traced by either the outer parabolic or the inner genuine parabolic streamlines of FFE jets, which are anchored to $r=r_{\rm H}$ with $\theta_{\rm fp}=\pi/2$. Within the uncertainties, we cannot distinguish the shape of the streamline, but there is a tendency for the mean values to shift towards the genuine parabolic streamlines inside the funnel. Therefore, we consider that the mm VLBI core at 230 GHz with EHT observations \citep[][hereafter the EHT core]{D12, A15} presumably shows the innermost jet to be in a highly magnetized (PFD) regime, $b^2/\rho\gg 1$ and $\Gamma \lesssim 1.5$ (see Figures \ref{fig:SPIN-MAG} and \ref{fig:SPIN-LORENTZ}), inside the funnel. Note that the dominant magnetic energy of the VLBI core at 230 GHz was originally proposed by \citet[]{K15}. This may, however, reflect the jet spine rather than the jet sheath at the funnel edge (see Section \ref{sec:VLBI-CORES1} for discussions). ii) At the scale of $100 \lesssim z/r_{\rm g} \lesssim 10^{4}$, we identify a clear coincidence between the radius of the jet sheath and the outermost BP82-type streamline of the FFE jet solution. We also confirm a reasonable overlap between the VLBI cores at 5 and 8 GHz and the extended emission of the jet sheath at VLBA 43 and HSA 86 GHz \citep[see also][]{H13}. Therefore, the hypothesis of the VLBI core as the innermost jet emission \citep[]{BK79} is presumably correct at this scale, although the highly magnetized state of VLBI cores suggested by \citet[]{K14, K15} and the frequency ($\nu$) dependent VLBI core shift $\Delta z (\nu) \propto 1/\nu$, taking place in the non-conical jet geometry of M87 \citep[]{H11}, may conflict with the original ideas in \citet[][where an equipartition between the magnetic and synchrotron particle energy densities and a constant opening angle and constant velocity jet is considered]{BK79}. We also remind readers of our recent result on the jet geometry of blazars, which examined VLBI cores \citep[]{ANAL17} and suggests that non-conical structures may exist inside the sphere of influence (SOI) $r_{\rm SOI} \sim 10^{5}$--$10^{6}\, r_{\rm g}$. iii) At around $z/r_{\rm g} \simeq 10^{4}$--$10^{5}$, the data points (the radius of the jet sheath) visibly start to deviate slightly from $z \propto R^{1.6}$, but a parabolic shape is sustained. This may indicate a new establishment of the lateral force-equilibrium between the funnel edge and the outer medium (wind/corona above the RIAF), {\em or} the jet sheath starts to be Doppler de-boosted
\section{Problem and Motivation}\label{sec:prob-and-motivation}\footnote{Extended abstract submitted to ICFP 2018 SRC. Research Advisor: J. Garrett Morris, garrett@ittc.ku.edu}

Managing resources---file handles, database connections, etc.---is a hard problem. Debugging resource leaks and runtime errors due to resource mis-management is difficult in evolving production code. Programming languages with static type systems are great tools to ensure that erroneous code is detected at compile time. However, modern static type systems do little in the way of resource management, as resources are treated as normal values. We propose a type system, \qub{}, based on the logic of bunched implications (\BI)~\cite{ohearn_logic_1999}, which models resources as first-class citizens. We distinguish two kinds of program objects---restricted and unrestricted---and two kinds of functions---sharing and separating. Our approach guarantees resource correctness without compromising existing functional abstractions.

For a concrete example, we consider the case of file handling. In Haskell, a file being closed twice or a file not being closed at all may cause run-time errors, but is not flagged as a type error. We represent separating functions, i.e., functions that do not share resources with their arguments, using $\sepimp$, and sharing functions, i.e., functions that share resources with their arguments, using $\shimp$. In \qub{}, the type signatures of the file handling API explicitly state that the functions are separating in nature. This addresses the error of closing the file handle more than once. Each program object needs to be explicitly dropped if it has to be treated as a resource, as in linear type systems \citep{ahmed_l3_2007, mazurak_lightweight_2010, bernardy_linear_2017}. This addresses the error of failing to close the file handles. Exception handling in Haskell can be done using \texttt{MonadError}~\citep{liang_monad_1995}. However, it does not give a systematic way of cleaning up resources in case of run-time exceptions. We consider the case where a critical section of the code throws an exception, as shown in \cref{fig:exception-handling-qub}. The type \texttt{IOF} indicates that a computation can throw exceptions, while \texttt{IO} indicates that it cannot. The \texttt{catch} function has a sharing argument, hence it can access the file handle \texttt{fh} declared in the part of the code that can throw exceptions and close it before exiting, to prevent a resource leak.

\begin{figure}[h] \begin{tabular}{c|c} \begin{haskell} openFile :: FilePath -* IO FileHandle closeFile :: FileHandle -* IO () readFile :: FileHandle -* IOF (String, FileHandle) writeFile :: String -* FileHandle -* IOF ((), FileHandle) throw :: Exception -* IO a catch :: IOF a -* (Exception -* IO a) ->> IO a \end{haskell} & \begin{haskell} readFromFile :: FilePath -* IO (Either String String) readFromFile fpath = do fh <- openFile fpath (do (s, fh) <- readFile fh let l = caps s closeFile fh return \dollar Right l) `catch` (\e -> do closeFile fh return \dollar Left "read file error") \end{haskell} \end{tabular} \caption{File and Exception Handling in \qub{}} \label{fig:exception-handling-qub} \end{figure}

\section{Background and Related Work}\label{sec:backgrond}

Type systems based on linear logic~\citep{girard_linear_1987, wadler_taste_1993, ahmed_l3_2007, mazurak_lightweight_2010, bernardy_linear_2017} provide one technique to solve the resource control problem. They restrict the structural rules of weakening and contraction to view all values as resources.
This changes the meaning of the connectives as well. Linear implication $A \rightspoon B$ means ``A is consumed to obtain B''. We also get multiplicative and additive fragments of conjunction ($A \otimes B$ means ``both A and B'' and $A \with B$ means ``choose between A and B''). There is, however, an awkward asymmetry in this system---while $\rightspoon$ is the right adjoint of $\otimes$, $\with$ has no such counterpart. The logic of \BI~\citep{pym_semantics_2002} repairs this asymmetry between implication and conjunction. It uses trees as contexts, where the internal nodes are either comma ($,$) or semicolon ($;$) and the leaf nodes are the propositions. The structural rules---weakening and contraction---are prohibited for propositions connected using ($,$): $\Gamma;\Delta \vdash \Gamma$ but $\Gamma,\Delta \nvdash \Gamma$. The multiplicative conjunction $\otimes$ gets a multiplicative implication $\sepimp$, and the additive conjunction $\with$ gets the additive implication $\shimp$, as their respective right adjoints. The Curry-Howard interpretation of \BI is in terms of sharing rather than linear logic's consumption. If a function does not share resources with its argument, $\sepimp$ is used, while if it shares resources with its argument, $\shimp$ is used instead. Jones~\citep{jones_theory_1994, jones_qualified_2003} introduces qualified types, a general framework for incorporating predicates into polymorphic types. The Hindley-Milner type system~\citep{milner_theory_1978} extended with qualified types~\citep{jones_simplifying_1995} can express type classes with functional dependencies~\citep{mark_type_2000} and first-class polymorphism~\citep{jones_first-class_1997}. Morris~\citep{morris_best_2016} uses qualified types to design Quill, a functional language with a linear calculus. In Quill, the predicate $\Un{\tau}$ specifies that the type $\tau$ is unrestricted, i.e., it can be duplicated or dropped at will because it does not contain any resources. Proof-theoretically, a type is tagged unrestricted whenever weakening and contraction are admissible for it. A binary predicate $\geq$ helps generalize function definitions in the presence of restricted types: $\tau \geq \tau'$ specifies that type $\tau$ admits more structural rules than type $\tau'$. \section{Approach and Uniqueness}\label{sec:approach} \qub{} is an extension of the standard call-by-name lambda calculus based on the logic of \BI. We introduce two kinds of lambdas associated with the two implications: $\lambda^{\sepimp}x.M$ introduces a separating function $\sepimp$, while $\lambda^{\shimp} x. M$ introduces a sharing function $\shimp$. We generalize the use of trees as contexts in \BI to graphs of sharing information. We represent sharing graphs as adjacency lists in the environment context. A triple $(x^{\vec{y}}:\tau) \in \Gamma$ means that $x$ of type $\tau$ is in sharing with the variables $\vec{y}$. The sharing relation is symmetric, reflexive, and non-transitive. We say that two contexts are in complete sharing---$\Gamma \oplus \Delta$---if all their variables are shared, and that they are disjoint---$\Gamma \circledast \Delta$---if none are shared. We formally define these in \cref{fig:aux-functions}, where $\mathbin{\#}$ means disjoint. The predicates $\ShFun{\phi}$ and $\SeFun{\phi}$ range over sharing and separating functions respectively. We include the predicates $\Un{\tau}$ and $\tau \geq \tau'$ as is from Quill. The complete type system is shown in \cref{fig:type-system}.
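To make the contrast concrete, here is a hypothetical \qub{}-style fragment (a sketch written against the API of \cref{fig:exception-handling-qub}, not part of the formal development): since \texttt{closeFile} is separating, the handle cannot be consumed by two calls.

\begin{haskell}
-- Rejected by qub: closeFile is separating (-*), so the
-- handle fh cannot feed two separate closeFile calls.
closeTwice :: FileHandle -* IO ()
closeTwice fh = do closeFile fh
                   closeFile fh

-- Accepted: fh is threaded through readFile and closed
-- exactly once.
readAndClose :: FileHandle -* IOF String
readAndClose fh = do (s, fh') <- readFile fh
                     closeFile fh'
                     return s
\end{haskell}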
{\small \begin{figure}[h]\centering
\begin{minipage}[h]{0.45\linewidth} \begin{flalign*} \texttt{Vars}(\Gamma, x^{\vec{y}}) &= \texttt{Vars}(\Gamma) \cup \{ x \}\\[-5pt] \texttt{Shared}(\Gamma, x^{\vec{y}}) &= \texttt{Shared}(\Gamma) \cup \{ \vec{y} \}\\[-5pt] \texttt{Used}(\Gamma) &= \texttt{Vars}(\Gamma) \cup \texttt{Shared}(\Gamma) \end{flalign*} \end{minipage}%
\begin{minipage}[ht]{0.45\linewidth} \begin{flalign*} (\Gamma, x^{\vec{y}})^{[a \mapsto \vec{b}]} &= \begin{cases} (\Gamma^{[a \mapsto \vec{b}]}, x^{\vec{y}}:\tau) & a \notin \vec{y}\\ (\Gamma^{[a \mapsto \vec{b}]}, x^{(\vec{y}\backslash a)\cup\vec{b}}:\tau) & a \in \vec{y} \end{cases}\\[-5pt] \Gamma^{[\vec{a} \mapsto \vec{b}]} &= (\dots((\Gamma^{[a_1 \mapsto \vec{b}]})^{[a_2 \mapsto \vec{b}]})^{\dots})^{[a_n \mapsto \vec{b}]} \end{flalign*} \end{minipage}\\[-5pt]
\begin{minipage}[ht]{0.45\linewidth} \begin{flalign*} \Gamma \circledast \Gamma' &= \Gamma \sqcup \Gamma'\ \ \ \textit{if}\ \texttt{Vars}(\Gamma) \mathbin{\#} \texttt{Used}(\Gamma') \wedge \texttt{Vars}(\Gamma') \mathbin{\#} \texttt{Used}(\Gamma) \\[-5pt] \Gamma \oplus \Gamma' &= \Gamma \sqcup \Gamma'\ \ \ \textit{if}\ \texttt{Used}(\Gamma) = \texttt{Used}(\Gamma') \end{flalign*} \end{minipage}
\caption{Auxiliary Functions} \label{fig:aux-functions} \end{figure}
\begin{figure}[h]\centering
\begin{minipage}{.35\textwidth} \begin{prooftree} \AxiomC{{\color{white}$\Gamma \circledast \Delta \circledast$}} \RightLabel{[ID]} \UnaryInfC{$P \mid x^{\vec{y}} : \sigma \vdash x : \sigma $} \end{prooftree} \end{minipage}%
\begin{minipage}{.50\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \circledast \Delta \circledast \Delta \vdash M : \sigma$} \AxiomC{$P \vdash \Delta\ \texttt{un}$} \RightLabel{[CTR-UN]} \BinaryInfC{$P \mid \Gamma \circledast \Delta \vdash M : \sigma$} \end{prooftree} \end{minipage}\\[3pt]
\begin{minipage}{.35\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \oplus \Delta \oplus \Delta\vdash M : \sigma$}\RightLabel{[CTR-SH]} \UnaryInfC{$P \mid \Gamma \oplus \Delta \vdash M : \sigma$} \end{prooftree} \end{minipage}%
\begin{minipage}{.50\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \vdash M : \sigma$} \AxiomC{$P \vdash \Delta\ \texttt{un}$} \RightLabel{[WKN-UN]} \BinaryInfC{$P \mid \Gamma \circledast \Delta \vdash M : \sigma$} \end{prooftree} \end{minipage}\\[3pt]
\begin{minipage}{.35\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \vdash M : \sigma$} \RightLabel{[WKN-SH]} \UnaryInfC{$P \mid \Gamma \oplus \Delta \vdash M : \sigma$} \end{prooftree} \end{minipage}%
\begin{minipage}{0.60\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \vdash M : \sigma$} \AxiomC{$P' \mid \Gamma'_{x} \sqcup x: \sigma \vdash N: \tau$} \RightLabel{[LET]} \BinaryInfC{$P \cup P' \mid \Gamma \sqcup \Gamma' \vdash (\Let{x}{M}{N}): \tau$} \end{prooftree} \end{minipage}\\[3pt]
\begin{minipage}{0.45\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \vdash M: \sigma$} \AxiomC{$t \notin \texttt{fvs}(\Gamma) \cup \texttt{fvs}(P)$}\RightLabel{[$\forall$ I]} \BinaryInfC{$P \mid \Gamma \vdash M: \forall t. \sigma$} \end{prooftree} \end{minipage}%
\begin{minipage}{0.45\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \vdash M: \forall t.\sigma$}\RightLabel{[$\forall$ E]} \UnaryInfC{$P \mid \Gamma \vdash M: [\tau \backslash t] \sigma $} \end{prooftree} \end{minipage}\\[3pt]
\begin{minipage}{0.45\textwidth} \begin{prooftree} \AxiomC{$P, \pi \mid \Gamma \vdash M : \rho$} \RightLabel{[$\Rightarrow$ I]} \UnaryInfC{$P \mid \Gamma \vdash M : \pi \Rightarrow \rho$} \end{prooftree} \end{minipage}%
\begin{minipage}{0.45\textwidth} \begin{prooftree} \AxiomC{$P \mid \Gamma \vdash M : \pi \Rightarrow \rho$} \AxiomC{$P \vdash \pi$} \RightLabel{[$\Rightarrow$ E]} \BinaryInfC{$P \mid \Gamma \vdash M: \rho$} \end{prooftree} \end{minipage}\\[3pt]
\begin{minipage}{0.35\textwidth} \begin{prooftree} \AxiomC{$P \Rightarrow \texttt{ShFun}\ \phi\ \ \ \ \ P \vdash \Gamma \geq \phi$}\noLine\def\extraVskip{-0.2pt} \UnaryInfC{$P \mid \Gamma^{[\emptyset\mapsto \{x\}]},x^{\text{Vars}(\Gamma)}: \tau \vdash M : \tau'$}\RightLabel{[$\shimp$ I]}\def\extraVskip{2pt} \UnaryInfC{$P \mid \Gamma \vdash \lambda^{\shimp}x. M : \phi \tau \tau'$} \end{prooftree} \end{minipage}%
\begin{minipage}{0.55\textwidth} \begin{prooftree} \AxiomC{$P \Rightarrow \texttt{ShFun}\ \phi$}\noLine\def\extraVskip{0pt}
the normalized $\kappa$-intertwining operator. \prop \label{compatibilitéinductionnormalisation} Let $L$ be a standard Levi subgroup of $G$, $\tau$ a generic $\kappa$-stable representation of $L$, and $A_{\tau}=A^{\text{gén}}(\tau, \psi)$ the normalized $\kappa$-intertwining operator of $\tau$. Then $\iota_L^{G}(\tau)$ has a unique irreducible generic subquotient $\pi_0$, which is $\kappa$-stable. If we denote by $A_{\tau,\kappa}(\pi_0)$ the operator on $\pi_0$ obtained from $A_\tau$ via the parabolic induction and multiplicity $1$ property (3.3.1), then $A_{\tau,\kappa}(\pi_0)=A^{\text{gén}}(\pi_0, \psi)$. \vskip3mm \dem To a Whittaker functional $\lambda$ for $\tau$ we associate a Whittaker functional for $\iota_L^{G}(\tau)$ via $\lambda\mapsto (f\in V_{\iota_L^{G}(\tau)}\mapsto \lambda \circ f)$. We know that $\tau$ is generic, so $\iota_L^{G}(\tau)$ has a unique line of Whittaker functionals by \cite{JS}, and hence there is a unique irreducible subquotient $\pi_0$ with non-zero Whittaker functionals, i.e. $\pi_0$ is generic and therefore $\kappa\pi_0$ is generic. We know that $\pi= \iota_L^{G}(\tau)$ is $\kappa$-stable, so by the multiplicity $1$ property we obtain that $\pi_0$ is $\kappa$-stable. Write $\pi_0=U/V$ with $V\subset U \subset V_\pi$ and $U$ maximal. Then $U$ and $V$ are stable under $A_{\tau,\kappa}(\pi)$, which therefore induces, by passage to the quotient, an operator $A_{\tau, \kappa}(\pi_0)$ on $\pi_0$. If $\Lambda$ is a non-zero Whittaker functional for $\pi$, then it induces by restriction a functional $\Lambda_U$ on $U$. In the same way as in the proof of the previous point, the operator $A_{\tau,\kappa}(\pi)$ fixes $\Lambda$, and hence its restriction to $U$ fixes $\Lambda_U$. Therefore $A_{\tau, \kappa}(\pi_0)=A^\text{gén}(\pi_0)$. \hfill \qed \section{Construction of elliptic representations} Let us recall the construction of elliptic representations. Since our theorem deals with the elliptic representations of $H$, we give here the construction of elliptic representations of $H$, so as to keep the same notation in Section 7. We start from an essentially square-integrable representation $\tau_E$ of $H$, to which we associate elliptic representations as follows. By the Bernstein-Zelevinsky classification (see 2.1), there exist an integer $k$ dividing $m$ and a cuspidal representation $\rho_E$ of $\text{GL}_{\frac{m}{k}}(E)$ such that $\tau_E$ is realized as the unique irreducible subrepresentation of the parabolically induced representation \[\nu^{k-1}\rho_E\times\nu^{k-2}\rho_E\times \dots \times \rho_E. \] For $I$ a subset of $\mathcal{K}=\lbrace 1,\dots, k-1\rbrace$, we define a Levi subgroup $L_{E,I}$ of $H$ containing $L_E=\text{GL}_{\frac{m}{k}}(E)\times \dots \times \text{GL}_{\frac{m}{k}}(E)$ in the following way: if $I$ is the complement of $\lbrace n_1,n_1+n_2, \dots, n_1+n_2+\dots +n_{t-1} \rbrace$ in $\lbrace 1, \dots, k-1 \rbrace$, then we set $$L_{E,I}=\text{GL}_{n_1\frac{m}{k}}(E)\times \dots \times \text{GL}_{n_t\frac{m}{k}}(E)$$ where $n_t$ is such that $n_1+n_2+\dots +n_t=k$. We then have $$L_{E,I}\subset L_{E,J} \text{ if } I\subset J$$ and in particular $L_{E,\emptyset}=L_E$ and $L_{E, \mathcal{K}}=H$.
For each subset $I$ of $\mathcal{K}$ we denote by: \begin{itemize} \item $\tau_{E,I}$ the unique irreducible subrepresentation of $\iota_{L_E}^{L_{E,I}}(\nu_E^{k-1}\rho_E \otimes \dots \otimes \rho_E)$; \item $\pi_{E,I}$ the Langlands quotient, i.e. the unique irreducible quotient, of $X_{E,I}=\iota_{L_{E,I}}^H(\tau_{E,I})$. \end{itemize} Thus $\tau_{E,I}$ is an irreducible essentially square-integrable representation of $L_{E,I}$. Observe that if $I\subset J$ then $X_{E,J}$ is a subrepresentation of $X_{E,I}$. Moreover, $\pi_{E,J}$ is a subquotient of $X_{E,I}$ if and only if $I \subset J$. The representations $\pi_{E,I}$, which are therefore the irreducible subquotients of $$X_{E, \emptyset}=\nu^{k-1}\rho_E\times \nu^{k-2}\rho_E\times \dots \times \rho_E,$$ appear with multiplicity 1 in the representation $X_{E,\emptyset}$. The elliptic representations of $H$ are then exactly the representations $\pi_{E,I}$ thus constructed from an essentially square-integrable representation $\tau_E$ of $H$. Note that an irreducible representation of $H$ is elliptic if and only if it has the same cuspidal support as an irreducible essentially square-integrable representation, that is, a Zelevinsky segment. \vskip4mm We have the same construction for the elliptic representations of $G=\text{GL}_n(F)$. \section{Known results on automorphic induction that we will use} We present here the results on automorphic induction that have already been proved. They concern essentially square-integrable representations. \vskip3mm We have the following proposition from \cite[p.148]{HLSMF}. \prop \label{inductionautomorpheconnu} \begin{enumerate} \item Let $\tau_E$ be an irreducible cuspidal representation of $H$. If the isomorphism class of $\tau_E$ has a stabilizer of order $d_1$ in $\Gamma=\text{Gal}(E/F)$, then its $\kappa$-lift $\pi$ is parabolically induced from $\pi_1\otimes \kappa\pi_1\otimes \dots \otimes \kappa^{d_1-1}\pi_1$ to $G$, where $\pi_1$ is an irreducible cuspidal representation of $\text{GL}_{n_1}(F)$, $n=n_1d_1$, whose stabilizer in $\kappa^{\mathbb{Z}}$ is $\kappa^{d_1\mathbb{Z}}$. \item If $\tau_E$ is essentially square-integrable, it is determined by its cuspidal support, which forms a "segment" $\{\rho_E,\nu_E\rho_E,\ldots , \nu_E^{k-1}\rho_E\}$ (cf. 2.1), where $\rho_E$ is an irreducible cuspidal representation of $\text{GL}_s(E)$, $sk=m$, and $\nu_E=\nu\circ N_{E/F}$. By the previous point we can write the $\kappa$-lift of $\rho_E$ as parabolically induced from $\pi_1\otimes \kappa\pi_1\otimes \dots \otimes \kappa^{d_1-1}\pi_1$ to $\text{GL}_{sd}(F)$, $sd=n_1d_1$. Then the $\kappa$-lift of $\tau_E$ is parabolically induced from $\pi_1'\otimes \kappa\pi_1'\otimes \dots \otimes \kappa^{d_1-1}\pi_1'$ to $G$, where $\pi_1'$ is the essentially square-integrable representation of $\text{GL}_{n_1k}(F)$ with cuspidal support $\{\pi_1, \nu\pi_1, \dots ,\nu^{k-1}\pi_1\}$. \end{enumerate} \section{Compatibility of automorphic induction with parabolic induction} As can be seen from the construction of elliptic representations, parabolic induction is ubiquitous there. We are therefore interested in the compatibility between parabolic induction and automorphic induction. \vskip3mm We have the following proposition from \cite[p.145]{HLSMF}; we first give the notation it uses.
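As a simple illustration (the case $k=2$; this is just the standard Zelevinsky picture specialized to the notation above): $\mathcal{K}=\{1\}$ has two subsets. For $I=\mathcal{K}$ we have $L_{E,\mathcal{K}}=H$, so $\tau_{E,\mathcal{K}}=\tau_E$ and $\pi_{E,\mathcal{K}}=\tau_E$ itself, while for $I=\emptyset$ the representation $\pi_{E,\emptyset}$ is the Langlands quotient of $$X_{E,\emptyset}=\nu_E\rho_E\times \rho_E.$$ These two subquotients of $X_{E,\emptyset}$ exhaust the elliptic representations attached to $\tau_E$.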
We are given strictly positive integers $m_1, \dots, m_t$ such that $\sum_{i=1}^tm_i=m$. For $i=1, \dots, t$, we choose an element $e_i$ of $E^{\times}$ such that $\sigma(e_i)=(-1)^{m_i(d-1)}e_i$, which allows us to consider the transfer factors $\tilde{\Delta}_i$ and $\Delta_i$ relative to automorphic induction from $H_i=\text{GL}_{m_i}(E)$ to $G_i=\text{GL}_{m_id}(F)$. For $i=1, \dots , t$ we choose a basis of the $F$-vector space $E^{m_i}$, which gives an embedding of $H_i$ into $G_i$. Viewing $E^m$ as $E^{m_1}\oplus \dots \oplus E^{m_t}$, we obtain a basis of the $F$-vector space $E^m$, hence an embedding of $H$ into $G$. The group $L=G_1\times \dots \times G_t$ appears as a Levi subgroup of $G$, $L_H=H_1\times \dots \times H_t$ as a Levi subgroup of $H$, and we have $L_H=L\cap H$. Let $P$ be the parabolic subgroup of $G$ consisting of the lower block-triangular matrices with blocks of size $m_1d, \dots , m_td$, and let $U_P$ be the unipotent radical of $P$. The group $P_H=P \cap H$ is a parabolic subgroup of $H$, with unipotent radical $U_{P,H}=U_P\cap H$, and $L_H$ is a Levi component of $P_H$. For $i=1, \dots, t$ we take a representation $\pi_i$ of $G_i$. \vskip3mm \prop \label{compatibilitéparaboliqueautomorphe} Suppose that for $i=1, \dots, t$, the (irreducible, $\kappa$-stable) representation $\pi_i$ of $G_i$ is a $\kappa$-lift of a smooth irreducible representation $\tau_i$ of $H_i$, and that the representations $\pi=\iota_P^G(\pi_1\otimes \dots \otimes \pi_t)$ of $G$ and $\tau=\iota_{P_H}^H(\tau_1\otimes \dots \otimes \tau_t)$ of $H$ are irreducible. Then $\pi$ is a $\kappa$-lift of $\tau$. Moreover, there exists a root of unity $\zeta$, depending neither on the $\pi_i$ nor on the $\tau_i$, such that if, for $i=1, \dots, t$, $A_i$ is an isomorphism from $\kappa\pi_i$ to $\pi_i$, and $A$ is the isomorphism from $\kappa\pi$ to $\pi$ associated to the $A_i$ as above, then \[c(\tau, \pi, A)=\zeta \Pi_{i=1}^tc(\tau_i,\pi_i,A_i).\] \section{Automorphic induction for elliptic representations} In this Section 7 we resume the notation introduced in Section 4. The following theorem is the main result of this article. \theo Every irreducible elliptic representation of $H$ admits a $\kappa$-lift. \vskip3mm \noindent \textbf{Proof} \begin{enumerate} \item We start from an essentially square-integrable representation $\tau_E$ of $H$ with cuspidal support $\{\rho_E, \nu_E\rho_E,\ldots , \nu_E^{k-1}\rho_E\}$, where $k\vert m$. As we saw in Section 4, this representation $\tau_E$ allows us to construct elliptic representations $\pi_{E,I}$ of $H$, where $I$ is a subset of $\mathcal{K}= \{1,\ldots ,k-1\}$. We are going to show that $\pi_{E,I}$ admits a $\kappa$-lift.
By Proposition \ref{inductionautomorpheconnu}, we know that there exist integers $n_1, d_1$ with $kn_1d_1=n$, that $\rho_E$ has a $\kappa$-lift of the form (an irreducible parabolically induced representation) $\rho \times \kappa\rho \times \cdots \times \kappa^{d_1-1}\rho$ for an irreducible cuspidal representation $\rho$ of ${\rm GL}_{n_1}(F)$, and that there exists a representation $\xi$ of $\text{GL}_{n_1k}(F)$ (the essentially square-integrable representation of ${\rm GL}_{kn_1}(F)$ with cuspidal support $\{\rho, \nu \rho , \ldots , \nu^{k-1}\rho\}$) such that the $\kappa$-lift of $\tau_E$ is of the form \[\pi:= \xi\times \kappa\xi\times \dots \times \kappa^{d_1-1}\xi.\] Let us show that the $\kappa$-lift of $\pi_{E,I}$ is \[\pi_I:=\sigma_I\times \kappa\sigma_I\times \dots \times \kappa^{d_1-1}\sigma_I\] where $\sigma_I$ is the irreducible elliptic representation of ${\rm GL}_{kn_1}(F)$ associated with $\xi$ and $I$. \vskip3mm For this we are going to show that there exists a constant $c$ such that for
Forum: Question regarding journal publications

Asked 19 months ago by K.Gee ▴ 40 • tags: journals, publications • 761 views

Hello Biostars. When you get deep into science, you realize that nobody cares about science as such anymore; it has turned into a business. To explain my point of view in simple words: if you want to publish, you HAVE TO PAY! In academia, things are even worse. If you want to obtain any level of diploma and improve your CV, you need publications. ANY IDEA is going to be filtered top-down based on professors' interests and the availability of funds. I have just realized how many ideas have vanished because of that. To get to the point: I am wondering whether there are journals that accept scientific manuscripts without a mandatory payment. Thanks in advance for the responses.

Reply (+1): Not a journal, but maybe it helps you: https://www.biorxiv.org/ As for "If you want to obtain any level of diploma": I think this is not true; you only need publications when you want to get a PhD. As for "I'm just realized how many ideas got vanished because of that": no one wants to read about mere ideas. People want to read about tested and proven ideas, and that costs money anyway (lab, supplies, salary, etc.); you need funding for that, and that funding is also there to help you publish. Also, don't forget that the papers get peer reviewed: someone needs to find reviewers, the website needs to stay up, etc., and that costs money. My opinions aside, I do agree that some publication costs are insane.

Reply (+2): "You only need publications when you want to get a PhD." I have no respect for any PhD program that actually requires publications. While I published relatively well as a grad student, I'm very glad my program didn't encourage taking on simple low-risk projects by making absurd publication demands.

Reply (+1): I'd add to this that in the UK it's actually pretty unusual to get papers during the PhD, as they're only about 3 years long, particularly in experimental biology. It's a bit more normal in chemistry and physics. It's certainly something academics will push for, as long as it adds to, and does not distract from, successfully completing a thesis, but having papers is not a requirement to graduate.

Reply (+1): In Mexico, all Bio-related Ph.D. programs I'm aware of demand a first-author publication or patent to graduate.

Reply (OP): First of all, thanks a lot for the response. By "any idea," I was referring to scientifically proven ideas. My point was that the concept of publishing is kind of weird. I will give you an example, and I am sure you will get my point. Let's say I found a job opportunity. On the interview day, the interviewer tells you: "you will pay your own salary, it is 12 hours of work daily, and to get hired, you have to pay me too". In the place of the salary, put what you mentioned (lab, supplies, salary, etc.), and the interviewer is the journal. Regarding peer reviewers, I do agree that they should be considered a cost, but don't forget that journals have subscriptions and advertisements as well.

Reply: Yep, it's a very backwards system. Your analogy is not quite right, though, as the expectation is not that the researchers themselves pay. It is most certainly a con that we often have to pay both to publish and to access.

Reply (+1): I think journals have converted themselves into brands these days; the more well-known a brand is, the more you pay for publishing. The ones that don't charge money are typically not well recognized (read: negligible impact factor). If you are okay with that, then they (low-impact-factor journals) are more desperate than you are to publish your content, with a very lenient peer-review process (in some cases no peer review), which is a red flag already.

Reply (+1): Agreed, it is a nasty business nowadays. But at least many publishers do not charge (or apply a reduced rate) if the authors are from low-income or lower-middle-income countries. For example, https://www.biomedcentral.com/getpublished/article-processing-charges or https://www.cell.com/cell-reports/authors.

Reply (OP): Thanks for the links @grant.hovhannisyan. The problem is that this situation exists not only in developing countries but in developed ones as well. @grant.hovhannisyan, @genomax: I have many examples from France, Greece, and Italy too; even if there is funding for publishing, it is not enough for more than 2-3 publications annually.

Reply: Welcome to the broken world of academic publishing. For the record, there are lots of journals that support open-access science (not that this always means you don't have to pay, just that you don't pay to access). Typically, you're going to be looking at online journals, since they aren't passing printing costs on to you. I can't think of any names right now, but I'm sure others will weigh in. At the moment, you will probably find that these journals have lower impact factors (not that I'm endorsing chasing impact factor, but it's something everyone thinks about). Hopefully this will change over time as the push for open access accelerates. I would also encourage you to look at the arXiv pre-print sites. This is a good (and free) way to disseminate early manuscripts and to get some peer review. One last thing: I'm not sure what your institution is, but many (at least here in the UK) have a specific budget from the government to fund/subsidise the costs of publishing; it should never come out of the researcher's own pocket. Some institutions may, however, dictate that it comes out of your own grant money.

Reply (OP): Thank you very much for the reply and the suggestion. I wish other institutes and universities covered the costs of publication as in the UK. Unfortunately, sometimes due to budget limitations, universities/institutes have to weigh paper costs against other expenses (lab materials, supplies, etc.), and you end up publishing what the director/supervisor finds more attractive and scientifically interesting. In bioinformatics, for instance, you don't need to produce data in order to publish, and you can work at any time of day from almost anywhere. Data are online, so if you have a question to answer, the answer is much more accessible than testing in the lab. Even in that case, you need "permission" from the person above you in the hierarchy. That is why I wrote this post.

Reply: I have not checked specifically of late, but many journals used to have reduced (or perhaps no) fees if you were from a developing country. If you are not, then grants certainly have enough money to pay for incidentals.

Answer (+2, 19 months ago): The Bioinformatics journal is free, but there are restrictions on what qualifies for a free publication with this journal. Most likely it is not suited to general research.

Reply (+1): Thanks a lot for the post @Kevin Blighe. It is a fair option, I guess.

Reply: Are you sure? I don't see anything in the Instructions to Authors.

Reply (+2): It's not explicit on there, but you only have to pay if you either (A) go over the page limits or (B) want it to be open access. Those are the only costs listed, at least, and my understanding has always been that those are then the only possible fees (we always paid for articles to be open access, since we're required to, so I've never personally tested this).

Reply: I had to read it all, but @kevin is right: only open-access papers have a declared fee, and if you don't exceed the number of pages it should be no-cost. Did anyone try this before?

Reply (+2): No costs, if the paper is behind the paywall.
covariant gauge and in \cite{lannes} using other methods. As compared to the calculation in the covariant gauge, our computation is much simpler. It is very unlikely that with the covariant-gauge methods utilized in \cite{hirsdos} one could obtain combinatorial expressions for higher-order invariants. It turns out that our procedure goes beyond and can be implemented at higher order. We will show in the next section how this is achieved at order four, obtaining combinatorial formulae for the two primitive Vassiliev invariants present at that order. \subsection{Vassiliev invariants of order four} \hskip .25cm In this section we will apply our reconstruction procedure to compute the two combinatorial expressions for the two primitive invariants at order four. Using our diagrammatic notation for the group factors and the choice of basis shown in fig. \ref{canonical}, the perturbative series expansion in the temporal gauge at this order takes the form: \begin{eqnarray} \hat v_4({\cal K}) &=& \hat\gamma_{41}({\cal K}) \times \ffak + \hat\gamma_{42}({\cal K}) \times \ffau + \hat\gamma_{43}({\cal K}) \times \ffay + \hat\alpha_{41}({\cal K}) \times \ffav \nonumber \\ \, \nonumber \\ &+& \hat\alpha_{42}({\cal K}) \times \ffaw + \hat\alpha_{43}({\cal K}) \times \ffax, \label{ordcuatro} \end{eqnarray} which, after writing down the geometrical contributions more explicitly, making use of the notation introduced above to separate the $D$ integrals from the $E$ integrals, becomes: \begin{eqnarray} \hat v_4({\cal K}) = \hskip 9cm &\,& \nonumber \\ \; \nonumber \\ \bigg( \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } + {1 \over 4} \sum_{i} \epsilon_i^2 \mbox{ {\Large {\it S}}} ^{\,Dii}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } + \sum_{i>j} \epsilon_i \epsilon_j \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } + \mbox{ {\Large {\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } \bigg) \times \ffak &+& \nonumber \\ \bigg( \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibu} \hskip 12pt }$ } } + {1 \over 4} \sum_{i} \epsilon_i^2 \mbox{ {\Large {\it S}}} ^{\,Dii}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibu} \hskip 12pt }$ } } + \sum_{i>j} \epsilon_i \epsilon_j \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibu} \hskip 12pt }$ } } + \mbox{ {\Large {\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibu} \hskip 12pt }$ } } \bigg) \times \ffau &+& \nonumber \\ \bigg( \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\diby} \hskip 12pt }$ } } + {1 \over 4} \sum_{i} \epsilon_i^2 \mbox{ {\Large {\it S}}} ^{\,Dii}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\diby} \hskip 12pt }$ } } + \sum_{i>j} \epsilon_i \epsilon_j \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\diby} \hskip 12pt }$ } } + \mbox{ {\Large {\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\diby} \hskip 12pt }$ } } \bigg) \times \ffay &+& \nonumber \\ \bigg( \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibv} \hskip 12pt }$ } } + {1 \over 4} \sum_{i} \epsilon_i^2 \mbox{ {\Large {\it S}}} ^{\,Dii}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibv} \hskip 12pt }$ } } + \sum_{i>j} \epsilon_i \epsilon_j \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibv} \hskip 12pt }$ } } + \mbox{ {\Large 
{\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibv} \hskip 12pt }$ } } \bigg) \times \ffav &+& \nonumber \\ \bigg( \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibw} \hskip 12pt }$ } } + {1 \over 4} \sum_{i} \epsilon_i^2 \mbox{ {\Large {\it S}}} ^{\,Dii}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibw} \hskip 12pt }$ } } + \sum_{i>j} \epsilon_i \epsilon_j \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibw} \hskip 12pt }$ } } + \mbox{ {\Large {\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibw} \hskip 12pt }$ } } \bigg) \times \ffaw &+& \nonumber \\ \bigg( \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibx} \hskip 12pt }$ } } + {1 \over 4} \sum_{i} \epsilon_i^2 \mbox{ {\Large {\it S}}} ^{\,Dii}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibx} \hskip 12pt }$ } } + \sum_{i>j} \epsilon_i \epsilon_j \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibx} \hskip 12pt }$ } } + \mbox{ {\Large {\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibx} \hskip 12pt }$ } } \bigg) \times \ffax. \label{ordfoura} \end{eqnarray} Notice that in this expression we have not included the Feynman integrals proportional to $\epsilon$ and $\epsilon^3$, as they do not contribute. Also it is worthwhile to point out that $D^{ii}_\bigcirc$ denotes an integral where two chords for the signature-dependent part of (\ref{prop}) are attached to the same crossing $i$, while $D^{ij}_\bigcirc$ corresponds to one in which the two chords are attached to two different crossings. In the latter, there are in fact two different sums: one for $i,j \in {\cal C}_a$, and another for $i,j \in {\cal C}_b$, where ${\cal C}_a$ and ${\cal C}_b$ are the sets which entered in (\ref{nomc}) and (\ref{nomd}). All these integrals are built out of products of two $f$-terms, while the ones of $D_\bigcirc$ contain four $f$-terms. At order four there are six independent group factors, but only two are primitive. The factorization theorem will allow us to obtain ways of relating all the $D$-integrals in terms of the second-order one $\mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt \usebox{\dibc} \hskip 12pt }$ } $. This will lead to an expression for the ones associated to the primitive group factors, $\mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibw} \hskip 12pt }$ } }$ and $\mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibx} \hskip 12pt }$ } }$, similar to that obtained at third order in (\ref{relatd}). As in previous orders, one easily finds, with the aid of the kernels (\ref{nucleos}) and of the factorization theorem, that the sum over all the signature contributions coming from the propagator (\ref{prop}), which are contained in $\mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } }$, builds up the whole regular invariant: \begin{eqnarray} \mbox{ {\Large {\it S}}} ^{\,E}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } &\equiv& \eek + \eel + \eem + \eez + \een + \eeo \nonumber \\ &+& \eep + \eeqq + \eer + \ees + \eet \nonumber \\ &=& {1 \over 4} \big( \sum_i \epsilon_i \big)^4 = \gamma_{41}(K). 
\label{linkfour} \end{eqnarray} This implies that the rest of the coefficients associated to that group factor vanish: \begin{eqnarray} \mbox{ {\Large {\it S}}} ^{\,Dij}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } &\equiv& \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibl} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibm} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibz} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibn} \hskip 12pt }$ } \nonumber \\ \, \nonumber \\ &+& \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibo} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibp} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibq} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibr} \hskip 12pt }$ } + \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibs} \hskip 12pt }$ } \nonumber \\ &+& \mbox{ {\Large {\it D$^{ij}$}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibt} \hskip 12pt }$ } = 0 \;\; \forall i,j\, , \, \nonumber \\ \, \nonumber \\ \mbox{ {\Large {\it S}}} ^{\,D}{\hskip -5pt\mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } } &\equiv& \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibk} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibl} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibm} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibz} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibn} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibo} \hskip 12pt }$ } \nonumber \\ \, \nonumber \\ &+& \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibp} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibq} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip -4pt\usebox{\dibr} \hskip 12pt }$ } + \mbox{ {\Large {\it D}}} \mbox{ $_{\, \hskip
## File: regress.c (chrony 1.21z-5)

/* $Header: /cvs/src/chrony/regress.c,v 1.32 2003/09/22 21:22:30 richard Exp$

   =======================================================================

   chronyd/chronyc - Programs for keeping computer clocks accurate.

   **********************************************************************
   * Copyright (C) Richard P. Curnow  1997-2003
   *
   * This program is free software; you can redistribute it and/or modify
   * it under the terms of version 2 of the GNU General Public License as
   * published by the Free Software Foundation.
   *
   * This program is distributed in the hope that it will be useful, but
   * WITHOUT ANY WARRANTY; without even the implied warranty of
   * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   * General Public License for more details.
   *
   * You should have received a copy of the GNU General Public License along
   * with this program; if not, write to the Free Software Foundation, Inc.,
   * 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA
   *
   **********************************************************************

   =======================================================================

   Regression algorithms.
*/

#include <math.h>      /* the original header names were lost in extraction;
                          these four are assumptions based on what the code uses */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "regress.h"
#include "logging.h"
#include "util.h"

#define MAX_POINTS 128

void
RGR_WeightedRegression
(double *x,   /* independent variable */
 double *y,   /* measured data */
 double *w,   /* weightings (large => data less reliable) */
 int n,       /* number of data points */
 /* And now the results */
 double *b0,  /* estimated y axis intercept */
 double *b1,  /* estimated slope */
 double *s2,  /* estimated variance of data points */
 double *sb0, /* estimated standard deviation of intercept */
 double *sb1  /* estimated standard deviation of slope */
 /* Could add correlation stuff later if required */
)
{
  double P, Q, U, V, W;
  double diff;
  double u, ui, aa;
  int i;

  if (n < 3) {
    CROAK("Insufficient points");
  }

  /* The body of this routine was lost in extraction; what follows is a
     reconstruction along standard weighted least-squares lines, using the
     variables declared above.  It is not the verbatim chrony source. */

  W = U = 0;
  for (i = 0; i < n; i++) {
    U += x[i] / w[i];
    W += 1.0 / w[i];
  }
  u = U / W;                     /* weighted mean of x */

  P = Q = V = 0.0;
  for (i = 0; i < n; i++) {
    ui = x[i] - u;
    P += y[i] / w[i];
    Q += y[i] * ui / w[i];
    V += ui * ui / w[i];
  }

  *b1 = Q / V;                   /* slope */
  *b0 = P / W - (*b1) * u;       /* intercept */

  *s2 = 0.0;
  for (i = 0; i < n; i++) {
    aa = *b0 + (*b1) * x[i];     /* fitted value */
    diff = y[i] - aa;
    *s2 += diff * diff / w[i];
  }
  *s2 /= (double)(n - 2);

  *sb1 = sqrt(*s2 / V);
  *sb0 = sqrt(*s2 * (1.0 / W + u * u / V));
}

/* ================================================== */
/* Count the runs of residuals of the same sign.  (The head of this
   function was lost in extraction; the name and loop header are
   assumptions, the loop body survived.) */

static int
n_runs_of_same_sign(double *resid, int n)
{
  int i, nruns;

  nruns = 1;
  for (i = 1; i < n; i++) {
    if (((resid[i-1] < 0.0) && (resid[i] < 0.0)) ||
        ((resid[i-1] > 0.0) && (resid[i] > 0.0))) {
      /* Nothing to do */
    } else {
      nruns++;
    }
  }
  return nruns;
}

/* ================================================== */
/* Return a boolean indicating whether we had enough points for
   regression */

#define RESID_SIZE 1024
#define MIN_SAMPLES_FOR_REGRESS 3

int
RGR_FindBestRegression
(double *x,      /* independent variable */
 double *y,      /* measured data */
 double *w,      /* weightings (large => data less reliable) */
 int n,          /* number of data points */
 /* And now the results */
 double *b0,     /* estimated y axis intercept */
 double *b1,     /* estimated slope */
 double *s2,     /* estimated variance of data points */
 double *sb0,    /* estimated standard deviation of intercept */
 double *sb1,    /* estimated standard deviation of slope */
 int *new_start, /* the new starting index to make the residuals pass the two tests */
 int *n_runs,    /* number of runs amongst the residuals */
 int *dof        /* degrees of freedom in statistics (needed to get confidence intervals later) */
)
{
  double P, Q, U, V, W; /* total */
  double resid[RESID_SIZE];
  double ss;
  double a, b, u, ui, aa;
  int start, nruns, npoints, npoints_left;
  int i;

  if (n < MIN_SAMPLES_FOR_REGRESS) {
    return 0;
  }

  start = 0;
  do {
    W = U = 0;
    for (i = start; i < n; i++) {
      /* ... (the regression pass, residual computation, and runs test
         were lost in extraction; critical_runs10[] is a table whose
         definition was also lost) ... */
    }

    if ((nruns > critical_runs10[npoints]) ||
        (npoints_left < MIN_SAMPLES_FOR_REGRESS)) {
      break;
    } else {
      /* Try dropping one sample at a time until the runs test passes. */
      ++start;
    }
  } while (1);

  /* Work out statistics from full dataset */
  *b1 = b;
  *b0 = a;

  ss = 0.0;
  for (i = start; i < n; i++) {
    /* ... (residual sum and the assignments to s2, sb0, sb1, dof,
       n_runs and new_start were lost in extraction, together with the
       EXCH macro and the head of find_ordered_entry_with_flags;
       the declarations below are assumptions) ... */
  }

#define EXCH(a, b) { double t; t = (a); (a) = (b); (b) = t; } /* assumed */

static double
find_ordered_entry_with_flags(double *x, int n, int index, int *flags)
{
  int u, v, l, r, pivind;   /* assumed declarations */
  double piv;

  u = v = index;            /* assumed initialisation; the loops below survived */
  while (u > 0 && !flags[u]) u--;
  if (flags[u]) u++;
  while (v < (n-1) && !flags[v]) v++;
  if (flags[v]) v--;

  do {
    if (v - u < 2) {
      if (x[v] < x[u]) {
        EXCH(x[v], x[u]);
      }
      flags[v] = flags[u] = 1;
      return x[index];
    } else {
      pivind = (u + v) >> 1;
      EXCH(x[u], x[pivind]);
      piv = x[u]; /* New value */
      l = u + 1;
      r = v;
      do {
        while (x[l] < piv) l++;
        while (x[r] > piv) r--;
        if (r <= l) break;
        EXCH(x[l], x[r]);
        l++;
        r--;
      } while (1);
      EXCH(x[u], x[r]);
      flags[r] = 1; /* Pivot now in correct place */
      if (index == r) {
        return x[r];
      } else if (index < r) {
        v = r - 1;
      } else if (index > r) {
        u = l;
      } else {
        CROAK("Impossible");
      }
    }
  } while (1);
}

/* ================================================== */
#if 0
/* Not used, but this is how it can be done */
static double
find_ordered_entry(double *x, int n, int index)
{
  int flags[MAX_POINTS];
  bzero(flags, n * sizeof(int));
  return find_ordered_entry_with_flags(x, n, index, flags);
}
#endif

/* ================================================== */
/* Find the median entry of an array x[] with n elements.
*/

static double
find_median(double *x, int n)
{
  int k;
  int flags[MAX_POINTS];

  memset(flags, 0, n * sizeof(int));
  k = n >> 1;
  if (n & 1) {
    return find_ordered_entry_with_flags(x, n, k, flags);
  } else {
    return 0.5 * (find_ordered_entry_with_flags(x, n, k, flags) +
                  find_ordered_entry_with_flags(x, n, k-1, flags));
  }
}

/* ================================================== */
/* This function evaluates the equation

   \sum_{i=0}^{n-1} x_i sign(y_i - a - b x_i)

   and chooses the value of a that minimises the absolute value of the
   result.  (See pp703-704 of Numerical Recipes in C). */

static void
eval_robust_residual
(double *x,  /* The independent points */
 double *y,  /* The dependent points */
 int n,      /* Number of points */
 double b,   /* Slope */
 double *aa, /* Intercept giving smallest absolute value for the above equation */
 double *rr  /* Corresponding value of equation */
)
{
  int i;
  double a, res, del;
  double d[MAX_POINTS];

  /* (The middle of this routine was lost in extraction; the two loops
     below are reconstructed from the comment above, following the
     Numerical Recipes approach.) */

  for (i = 0; i < n; i++) {
    d[i] = y[i] - b * x[i];
  }
  a = find_median(d, n);

  res = 0.0;
  for (i = 0; i < n; i++) {
    del = y[i] - a - b * x[i];
    if (del > 0.0) {
      res += x[i];
    } else if (del < 0.0) {
      res -= x[i];
    }
  }

  *aa = a;
  *rr = res;
}

/* ================================================== */
/* This routine performs a 'robust' regression, i.e. one which has low
   susceptibility to outliers amongst the data.  If one thinks of a
   normal (least squares) linear regression in 2D being analogous to the
   arithmetic mean in 1D, this algorithm in 2D is roughly analogous to
   the median in 1D.  This algorithm seems to work quite well until the
   number of outliers is approximately half the number of data points.

   The return value is a status indicating whether there were enough
   data points to run the routine or not. */

int
RGR_FindBestRobustRegression
(double *x,      /* The independent axis points */
 double *y,      /* The dependent axis points (which may contain outliers) */
 int n,          /* The number of points */
 double tol,     /* The tolerance required in determining the value of b1 */
 double *b0,     /* The estimated Y-axis intercept */
 double *b1,     /* The estimated slope */
 int *n_runs,    /* The number of runs of residuals */
 int *best_start /* The best starting index */
)
{
  int i;
  int start;
  int n_points;
  double a, b;
  double P, U, V, W, X;
  double resid, resids[MAX_POINTS];
  double blo, bhi, bmid, rlo, rhi, rmid;
  double s2, sb, incr;
  double mx, dx, my, dy;
  int nruns = 0;

  if (n < 2) {
    return 0;
  } else if (n == 2) {
    /* Just a straight line fit (we need this for the manual mode) */
    *b1 = (y[1] - y[0]) / (x[1] - x[0]);
    *b0 = y[0] - (*b1) * x[0];
    *n_runs = 0;
    *best_start = 0;
    return 1;
  }

  /* else at least 3 points, apply normal algorithm */

  start = 0;

  /* Loop to strip oldest points that cause the regression residuals to
     fail the number of runs test */
  do {
    n_points = n - start;

    /* Use standard least squares regression to get starting estimate */
    P = U = 0.0;
    for (i = start; i < n; i++) {
      /* ... (the least-squares starting estimate and the computation of
         b, s2 and the slope error bound sb were lost in extraction) ... */
    }

    if (sb > 0.0) {
      incr = 3.0 * sb;
    } else {
      incr = 3.0 * tol;
    }

    blo = b;
    bhi = b;
    do {
      blo -= incr;
      bhi += incr;

      /* We don't want 'a' yet */
      eval_robust_residual(x + start, y + start, n_points, blo, &a,
\section{Introduction} Quantum-classical correspondence for the case of classically chaotic systems has been much investigated recently (e.g., Refs.\cite{QC,Reichl,OdeA,Gutz,Haakebook,Houches}). Expectation values of corresponding dynamical variables begin to differ \cite{Casati,kickedtop}, as do classical and quantum phase space distributions \cite{Berry,Zurek1}, on extremely short timescales which are typically logarithmic in Planck's constant. If these studies are taken at face value, therefore, all chaotic dynamical systems, being fundamentally quantum in nature, should either be obeying quantum laws of evolution now or be expected to do so within an extremely short time. Observations tell us otherwise, however. Sarkar and Satchell \cite{Sarben}, a decade ago, already pointed out the possible role of the environment in the quantum evolution of chaotic systems. Recently \cite{Zurek2}, Zurek and Paz have conjectured an interesting quantitative relation between a classical chaotic system and its quantum version which is in contact with a bath. They have considered the Wigner representation of the quantum Liouville equation \begin{equation} \dot{W} = \left\{H,W\right\}_{PB} + \sum_{n=1}^{\infty} \frac{\hbar^{2n} (-1)^{n}}{2^{2n} (2n+1)!} \frac{\partial^{2n+1}V}{\partial x^{2n+1}} \frac{\partial^{2n+1}W}{\partial p^{2n+1}} \label{wigner1} \end{equation} for a particle in a potential $V(x)$ moving in a 2-dimensional phase space. Clearly the $\hbar$ terms are a singular perturbation of the classical Liouville equation, in that the order of the differential equation is changed. For chaotic systems, derivatives of the Wigner function with respect to momentum become large enough to render the quantum correction terms comparable in magnitude to the classical Poisson bracket after the {\em Ehrenfest time}, $\tau_{\hbar} \propto (1 / \lambda) \log(1/\hbar)$, where $\lambda$ is a Lyapunov exponent. Therefore, for $t > \tau_{\hbar}$ we expect significant differences between the classical and quantum descriptions of the same system. We will now consider an environment consisting of harmonic oscillators to be coupled to the particle. The Caldeira-Leggett model \cite{Leggett,Book} will be used for these oscillators. For the special case of a high-temperature, Ohmic environment the right hand side of eqn.(\ref{wigner1}) is modified by the addition of a dissipative and a decoherence term \cite{Zurek2}. When the temperature is high enough and the dissipation sufficiently low, the dissipative term may be considered unimportant and only the decoherence term survives. The non-unitary Wigner function evolution becomes \begin{equation} \dot{W} = \left\{H,W\right\}_{PB} + \sum_{n=1}^{\infty} \frac{\hbar^{2n} (-1)^{n}}{2^{2n} (2n+1)!} \frac{\partial^{2n+1}V}{\partial x^{2n+1}} \frac{\partial^{2n+1}W}{\partial p^{2n+1}} + D \frac{\partial^{2}W} {\partial p^{2}}. \label{wigner2} \end{equation} The decoherence term is in the form of a diffusive contribution to the dynamics with diffusion coefficient $D$. This is vital, since it is the diffusion resulting from the opening of the system which limits the development of the fine structure in the momentum direction to a critical momentum scale, $\sigma_{c}$. The timescale on which this process occurs is given by \cite{Zurek2} \begin{equation} \tau_{c} \approx \frac{1}{\lambda} \log \left( \frac{\sigma_{p}(0)} {\sigma_{c}} \right), \end{equation} where $\sigma_{p}(0)$ is the initial width of a Gaussian wavepacket in the momentum direction.
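The critical scale $\sigma_c$ itself follows from a simple balance argument (a sketch; prefactors of order unity are not tracked here). Chaotic contraction squeezes momentum-space structure at rate $\lambda$, while decoherence diffuses it at rate $D$, so for the squared width of a contracting structure
\begin{equation*}
\frac{d\sigma_p^2}{dt} \approx -2\lambda\,\sigma_p^2 + 2D \;\;\Longrightarrow\;\; \sigma_c^2 \approx \frac{D}{\lambda},
\end{equation*}
the scale at which the two effects balance; inserting this $\sigma_c$ into the logarithm above reproduces the estimate for $\tau_c$.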
Classical behaviour is recovered, therefore, provided the environmentally-induced diffusion process can prevent the development of fine structure, i.e. if \begin{equation} \tau_{c} \ll \tau_{\hbar}. \end{equation} However, opening a system to a thermal environment has other consequences as well. In particular, the von Neumann entropy of the system, given by $S(t) = - \mbox{Tr} \rho_{r}(t) \ln \rho_{r}(t)$, where $\rho_{r}(t)$ is the reduced density matrix of the system at time $t$, will increase; information is lost to the environment, and initially pure, superposition states of the quantum system become classical mixtures in a very short time. Since our ability to predict accurately the behaviour of a classical system exposed to an initial perturbation depends very much on the nature of the dynamics, it is natural to ask whether this is true for the rate of information loss of its quantum analogue due to a perturbing environment. As a first step towards answering this question (see also \cite{CavesC,CavesQ}), Zurek and co-workers have considered the inverted harmonic oscillator \cite{Barton} of unit mass with Hamiltonian \begin{equation} H_{S} = \frac{p^{2}}{2} - \frac{\lambda^{2} x^{2}}{2}. \label{hamiltonian} \end{equation} This is intended as a model of instability and, in fact, the dynamical behaviour in phase space is dominated by a hyperbolic point at the origin. The unstable and stable directions, and the rates at which initial phase space distributions expand and contract in these directions, respectively, are determined by $\lambda$. In this sense we call $\lambda$ an ``instability parameter'' analogous to a Lyapunov exponent in a classical chaotic system. Indeed, at any point on a trajectory the sum of the Lyapunov exponents is zero, so for a chaotic trajectory there must be at least one pair of non-zero Lyapunov exponents. However, there are a number of reasons why we should question any conclusions drawn as to the implications for a real chaotic system based on so simple a model. Firstly, there are no quantum corrections to the Wigner function evolution for this quadratic potential. The model does not allow for these influences on the dynamics, which, though small in the presence of an environment in comparison to the classical terms, nonetheless are generally {\em always present}. Secondly, the stable and unstable manifolds associated with all hyperbolic points in Hamiltonian chaotic systems intersect both one another and those associated with other hyperbolic points \cite{LL}. In this way homoclinic and heteroclinic points are formed. The stable and unstable manifolds of the inverted oscillator intersect only at the hyperbolic origin in phase space. Clearly, therefore, the effect that the complicated distribution of homoclinic points might have on the open dynamics is not taken into account. Neither, of course, is the effect of heteroclinic points. Notwithstanding these objections, however, the inverted oscillator remains a tractable model of instability both for a closed system and for an open system in the presence of an environment. As such, it deserves attention for the insights it might give regarding the qualitative and, maybe, quantitative behaviour of genuine, open quantum analogues of classically chaotic systems. The entropy production rate has been considered in the limit of high temperature and low dissipation. This entailed using the approximate Wigner function evolution in eqn.(\ref{wigner2}).
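For reference, the classical flow generated by eqn.(\ref{hamiltonian}) can be written in closed form (an elementary check, included to make the stretching and contraction rates explicit):
\begin{equation*}
x(t) = x(0)\cosh \lambda t + \frac{p(0)}{\lambda}\sinh \lambda t, \qquad
p(t) = p(0)\cosh \lambda t + \lambda x(0)\sinh \lambda t,
\end{equation*}
so phase space distributions stretch as $e^{\lambda t}$ along the unstable direction $p = \lambda x$ and contract as $e^{-\lambda t}$ along the stable direction $p = -\lambda x$, which is the precise sense in which $\lambda$ plays the role of a Lyapunov exponent.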
Zurek and Paz show \cite{Zurek2} that after a time determined by both $\lambda$ and the strength of the interaction with the environment, the rate of entropy increase approaches a constant, \begin{equation} \dot{{\cal S}} \to \lambda, \label{zurekrate} \end{equation} i.e., the {\em quantum} entropy production rate is determined, in this approximation, by the {\em classical} instability parameter. Given that the classical Lyapunov exponent to which $\lambda$ is analogous is equal to the Kolmogorov-Sinai (KS) entropy of the system \cite{Pesin}, this is indeed a remarkable characterisation \cite{Miller}. It suggests that after a time, a quantum, classically chaotic system loses information to the environment at a rate determined {\em entirely} by the rate at which the classical system loses information as a result of its dynamics, namely the KS entropy \cite{Beck}. In this paper we will examine once more the ``toy" model of Zurek and Paz. Apart from entropy production, we will consider the experimentally relevant survival probability function. We will not be restricted to the assumption of low dissipation made by others \cite{Zurek2}. As we will show, the asymptotic behaviour of both the survival probability function and the rate of entropy increase will {\em not} be determined by $\lambda$ alone but by $\lambda$ in a specific combination with the dissipation parameter (which determines the strength of interaction with the environment). Our approach does not rely on the master equation method of Zurek and Paz but rather on Feynman-Vernon influence functional techniques \cite{FV}. This allows a straightforward analysis of strong coupling of the system to the environment. The remainder of this paper is organised as follows: In Section 2 we define the initial state of both the system and the environment, determine the time evolution of the reduced density matrix of the system in general, and also describe how this allows us to calculate the von Neumann entropy, ${\cal S}(t)$. We specify the nature of the environment more explicitly in Section 3, leaving us in a position to consider finite temperature evolution. In Section 4 we shall define the survival probability function, $P(t)$, calculate it for the inverted oscillator and discuss its significance for quantum chaotic systems. In Section 5 we show analytically our generalisation of eqn.(\ref{zurekrate}) for the finite-temperature case. We state our conclusions in Section 6. Finally, the appendix contains the more tedious details
the $d_{x^2-y^2}$-orbital of Copper and the $p_x$ and $p_y$-orbitals on the Oxygen atoms in the x-direction and the y-direction from the Copper respectively, together with on-site and nearest neighbor interactions, two phases which are odd under time-reversal due to spontaneously generated orbital loop-currents, but which preserve translational symmetry, were shown to be locally stable in a mean-field calculation. The phase observed has the symmetry of one of these phases, but there are some other details \cite{kaminski, fauque, greven, mook} which are different and obtainable only in a more complicated model \cite{weber-prl09}. The observed as well as the predicted phase has a symmetry which can be characterized as the ordering of a polar time-reversal-odd vector ${\bf L}$. ${\bf L}$ describes the loop-current pattern shown in Fig.(\ref{loop-order}), which is even only under reflection on one of the four possible symmetry planes of the square lattice. ${\bf L}$ has four possible orientations, corresponding to the four possible domains of the ordered phase. \begin{figure}[tbh] \centering \includegraphics[width=0.5\textwidth]{loop-order.pdf} \caption{The four possible domains with the symmetry of the observed order in the underdoped Cuprates.} \label{loop-order} \end{figure} In the quantum-fluctuation regime, the instantaneous pattern of currents is described by vectors ${\bf L}_i$, which vary in direction among the four possible orientations. The quantum fluctuations are described by the correlation function \cite{aji-cmv-qcf, aji-cmv-qcf-pr} $\langle{\bf U}^+_i(t) {\bf U}_j(t')\rangle$, where ${\bf U}_i$ is the generator of rotations among the four configurations of ${\bf L}_i$. It turns out that in the fluctuation regime the discreteness of ${\bf L}_i$ can be relaxed to a continuous vector, so that the fluctuations are those of a quantum $2$-dimensional rotor. In a model including dissipation due to the coupling of the fluctuations of ${\bf U}_i$ to the fluctuations of the fermion-current, the spectral function $\chi"({\bf q}, \omega)$ of the fluctuations in the quantum-critical regime has been derived \cite{aji-cmv-qcf, aji-cmv-qcf-pr} to be of the form \begin{eqnarray} \label{eq:flucspec} \textrm{Im}\chi({\bf q},\omega) &=& \begin{cases} -\chi_0 \tanh(\omega/2T), &|\omega| \lesssim \omega_c; \\ 0, &|\omega| \gtrsim \omega_c. \end{cases} \end{eqnarray} This is precisely the form which was suggested \cite{mfl} to account for the singular transport properties in Region I of the phase diagram and is the basis of the marginal fermi-liquid. The single-particle scattering rate from such fluctuations can be calculated; the imaginary part of the self-energy for frequencies much larger than the temperature is given simply by \begin{eqnarray} \label{selfenergy} \textrm{Im}\Sigma(\omega, {\bf k}) &=& -\frac{\pi}{2} \lambda({\bf k})\begin{cases} |\omega|, & |\omega| \lesssim \omega_c \\ \omega_c, & |\omega| \gtrsim \omega_c. \end{cases} \end{eqnarray} Here $\lambda({\bf k})$ is a coupling function whose derivation is discussed below. Fig. (\ref{mdcwidth}) shows that in the $(\pi,\pi)$-direction this prediction is fulfilled in all Cuprates in which ARPES measurements have been carried out. The scattering rate at any energy $\omega$ is essentially $\propto \sum_{\bf q} \int_0^{\omega} d\omega'\, \chi"({\bf q}, \omega')$.
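To make the resulting frequency dependence explicit, one can carry out this integral for the spectrum of Eq.(\ref{eq:flucspec}) (a one-line check with a momentum-independent $\chi_0$; prefactors are suppressed):
\begin{equation*}
\textrm{Im}\Sigma(\omega) \propto \int_0^{|\omega|} d\omega'\, \tanh\!\left(\frac{\omega'}{2T}\right) \simeq
\begin{cases} \omega^2/(4T), & |\omega| \ll T; \\ |\omega|, & T \ll |\omega| \lesssim \omega_c; \\ \textrm{const}, & |\omega| \gtrsim \omega_c, \end{cases}
\end{equation*}
i.e. a rate set by $\max(|\omega|, T)$, saturating at the cut-off, which is exactly what Fig. (\ref{mdcwidth}) tests.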
Therefore the linearity in $\omega$ of the scattering rate up to some energy and constancy thereafter is a direct proof of the fluctuation spectra given by Eq. (\ref{eq:flucspec}). Earlier experiments with greater resolution at lower energies, providing evidence for the crossover from linearity in $T$ to linearity in $\omega$, have been reviewed \cite{abrahams-cmv-pnas}. Fig. (\ref{mdcwidth}) is also a proof that a distinct fluctuation spectrum with a sharp cut-off $\omega_c \approx 0.5$ eV exists universally in the cuprates. These critical fluctuations themselves for $q \to 0$ are directly observed in Raman scattering (but the experiments have not been carried out all the way to the cut-off energy), see Fig.(\ref{sugai-raman}), where evidence for the universality is presented through $S(\omega) = (1+n(\omega/T))\chi''(\omega)$ in the limit $q \to 0$. The spectrum of Eq.(\ref{eq:flucspec}) is quite unlike the Gaussian critical spectra discussed in Sec. IIC. The singularity at $(\omega, T) \to 0$ does not affect the bulk of the spectrum at all, which extends at all $T$ to $\omega_c$, which as we will infer from experiments is about $E_f/4$. Second, and most curiously, the critical spectrum has no spatial scale; the concept of a dynamical critical exponent $z$ is lost. \begin{figure}[tbh] \centering \includegraphics[width=0.7\textwidth]{mdcwidth.pdf} \caption{The linewidths of the Momentum Distribution curves for different cuprates as a function of the energy. The imaginary part of the self-energy is obtained by multiplying this linewidth with the bare fermi-velocity. The detailed references for each cuprate are given in Ref.(\onlinecite{lijun-prl08}).} \label{mdcwidth} \end{figure} Given that these singular fluctuations determine the properties above $T_c$, including the scattering rate of the fermions, it is natural to ask if they promote superconductive pairing with the observed d-wave symmetry and with the right order of magnitude of $T_c$. As has already been noted in Sec. II, the spectrum of the fluctuations, Eq.~(\ref{eq:flucspec}), is ideal for high $T_c$ on the basis of Eliashberg theory. It has a high upper cut-off, and it has the least inelastic scattering possible in a quantum-critical spectrum. However, the ${\bf q}$-independence of the spectrum makes one wonder if it can promote d-wave pairing. To investigate this, the momentum dependence of the coupling of the fermions to the fluctuations has been calculated \cite{shehter-aji-cmv}. In the continuum limit, ${\bf U}({\bf r})$ is the angular momentum operator generating rotations of ${\bf L}({\bf r})$. Therefore it can only couple to the local angular momentum of fermions. So the coupling is of the form \begin{equation} \label{coupl} H_{int} \propto \int d{\bf r} \sum_{\sigma}g \psi^+({\bf r},\sigma) ({\bf \hat{r} \times \hat{p}}) \psi({\bf r},\sigma) {\bf U}({\bf r}) + H.C. \end{equation} $H_{int}$ has also been derived for the fermions in a two-dimensional model of the Cuprate lattice, and the coupling constant $g$ estimated in terms of the same microscopic model which gives the symmetry of the observed order and its approximate magnitude. It is useful to note that Eq. (\ref{coupl}) is the natural orbital angular momentum analog of the familiar collective spin-fluctuation coupling to spin-flip excitations of fermions.
We may write Eq.(\ref{coupl}) in momentum space: \begin{equation} \label{coup-kspace} H_{int} = \sum_{{\bf k, k'}, \sigma} g\, i ({\bf \hat{k}} \times {\bf\hat{k'}}) \psi^+({\bf k},\sigma)\psi({\bf k'},\sigma){\bf U}({\bf k-k'}) + H.C. \end{equation} The coupling constant of the scattering of fermions to the fluctuation spectrum can be extracted from the ARPES data in the normal state, Fig. (\ref{mdcwidth}) \cite{lijun-prl08}. From such measurements, one deduces that the coupling constant $\lambda_0$ for all Cuprates measured by ARPES is between about $0.7$ and $1$ and the cut-off $\omega_c$ is between $0.4$ eV and $0.5$ eV. The lattice generalization of Eq. (\ref{coup-kspace}) also predicts that the scattering rate varies $\propto a + b\cos(4\theta)$, where $\theta$ is measured from the $(\pi,\pi)$-direction, with a variation of about a factor of 2 going from the $(\pi,\pi)$ to the $(\pi,0)$ direction. \begin{figure}[tbh] \centering \includegraphics[width=0.8\textwidth]{raman.pdf} \caption{Universality of Raman scattering in Cuprates and in $Ba_xK_{1-x}BiO_3$ near the compositions of doping with the highest $T_c$. The figure is taken from (\onlinecite{sugai-raman}).} \label{sugai-raman} \end{figure} The momentum dependence of the coupling, even though the spectrum itself is momentum independent, is crucial to the symmetry of the superconductivity promoted by the critical fluctuations. This is seen as follows: Integrating over the fluctuations in Eq.(\ref{coup-kspace}) gives an effective vertex for scattering of fermion-pairs: \begin{align} \label{hpair} H_{pairing} \approx & \sum_{{\bf k}\sigma{\bf k'}\sigma'} \Lambda({\bf k},{\bf k}') c^{\dagger}_{\sigma'}(-{\bf k}')c^{\dagger}_{\sigma}({\bf k}')c_{\sigma}({\bf k})c_{\sigma'}(-{\bf k}); \notag \\ \Lambda({\bf k},{\bf k}') = & \gamma(k,k')\gamma(-k,-k') \textrm{Re}\chi(\omega=\epsilon_{\bf k}-\epsilon_{\bf k}'). \end{align} This is exact to $O(\lambda \frac{\omega_c}{E_f})$, where the $\lambda$'s are the dimensionless coupling constants exhibited below. In the continuum approximation for fermions near the fermi-energy, $\gamma({\bf k}, {\bf k}') \propto i({\bf k} \times {\bf k}')$. The pairing vertex is then \begin{equation} \label{kxk'} \Lambda\left(\textbf{k},\textbf{k}'\right) \propto -({\bf k} \times {\bf k}')^2\,\textrm{Re}\chi({\bf{k}}-{\bf{k'}},\omega). \end{equation} Since $\textrm{Re} \chi({\bf k}-{\bf k'},\omega) < 0$ for $-\omega_c <\omega < \omega_c$, independent of momentum, the pairing symmetry is given simply by expressing $({\bf k} \times {\bf k}')^2$ in separable form: \begin{align} ({\bf k} \times {\bf k}') ^2 &= 1/2 \left[(k_x^2+k_y^2)(k_x^{'2}+k_y^{'2}) - (k_x^2-k_y^2)(k_x^{'2}-k_y^{'2})\right.\nonumber \\ &- \left. 4(k_xk_y) (k'_x k'_y)\right]. \end{align} The pairing interaction in the $s$-wave channel is repulsive, that in the two $d$-wave channels is equally attractive, and that in the odd-parity channels is zero. The factor $i$ in $\gamma({\bf k}, {\bf k}')$, present because the coupling is to fluctuations of time-reversal odd operators, is crucial in determining the sign of the interactions.
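As a quick algebraic check of the separable form (the expansion is left implicit in the text):
\begin{align*}
({\bf k} \times {\bf k}')^2 &= (k_x k'_y - k_y k'_x)^2 = k_x^2 k_y^{'2} + k_y^2 k_x^{'2} - 2 (k_x k_y)(k'_x k'_y),
\end{align*}
while $\frac{1}{2}\left[(k_x^2+k_y^2)(k_x^{'2}+k_y^{'2}) - (k_x^2-k_y^2)(k_x^{'2}-k_y^{'2})\right] = k_x^2 k_y^{'2} + k_y^2 k_x^{'2}$; subtracting $\frac{1}{2}\cdot 4(k_xk_y)(k'_xk'_y)$ reproduces the left-hand side. The three separable terms transform as $s$, $d_{x^2-y^2}$ and $d_{xy}$ respectively, which is the origin of the channel decomposition quoted above.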
# NonlinearSystem

The NonlinearSystem object holds the equation system created by the normal FEM process (e.g. the Matrix and RHS vector) to be solved. Normally MOOSE uses PETSc to store and solve this system. This object is where you will find the callback routines used by the PETSc solvers.

## Solving Non-linear Systems

Application of the finite element method converts PDE(s) into a system of non-linear equations, $R_i(u_h) = 0$, to solve for the coefficients $u_j$ of the discrete solution $u_h = \sum_j u_j \phi_j$.

• Newton's method has good convergence properties; we use it to solve this system of nonlinear equations.
• Newton's method is a "root finding" method: it finds zeros of nonlinear equations.
• Newton's Method in "Update Form" for finding roots of the scalar equation $f(x) = 0$: $f'(x_n)\,\delta x_{n+1} = -f(x_n)$, $x_{n+1} = x_n + \delta x_{n+1}$.
• We don't have just one scalar equation: we have a system of nonlinear equations $\vec{R}(\vec{u}) = \vec{0}$.
• This leads to the following form of Newton's Method: $\mathbf{J}(\vec{u}_n)\,\delta\vec{u}_{n+1} = -\vec{R}(\vec{u}_n)$, $\vec{u}_{n+1} = \vec{u}_n + \delta\vec{u}_{n+1}$.
• Where $\mathbf{J}(\vec{u}_n)$ is the Jacobian matrix evaluated at the current iterate: $J_{ij}(\vec{u}_n) = \frac{\partial R_i(\vec{u}_n)}{\partial u_j}$.
• Note that: $\frac{\partial u_h}{\partial u_j} = \phi_j$ and $\frac{\partial (\nabla u_h)}{\partial u_j} = \nabla \phi_j$.

## Jacobian Definition

An efficient Newton solve, e.g. one that requires few "non-linear" iterations, requires an accurate Jacobian matrix or an accurate approximation of its action on a vector. When no explicit matrix is formed for the Jacobian and only its action on a vector is computed, the algorithm is commonly referred to as matrix-free (PETSc jargon) or Jacobian-free (MOOSE jargon). The default solve algorithm in MOOSE is PJFNK, or Preconditioned Jacobian-Free Newton-Krylov. "Krylov" refers to the linear solution algorithm used to solve each non-linear iteration of the Newton algorithm. For more information on solving linear systems, please see Solving Linear Systems.

Even if a Jacobian-free non-linear algorithm is chosen, typically a good preconditioning matrix is still needed. Building the matrix can be accomplished automatically using automatic differentiation and/or manually.

One can elect to sacrifice some computing speed and calculate Jacobians automatically using automatic differentiation (AD). MOOSE employs the DualNumber class from the MetaPhysicL package in order to enable AD. If the application developer wants to make use of AD, they should inherit from ADKernel as opposed to Kernel. Additionally, when coupling in variables, the adCoupled* methods should be used. For example, to retrieve a coupled value, instead of using coupledValue("v") in the ADKernel constructor, adCoupledValue("v") should be used. adCoupledGradient should replace coupledGradient, etc. An example of coupling in an AD variable can be found in ADCoupledConvection.C and ADCoupledConvection.h. Moreover, material properties that may depend on the non-linear variables should be retrieved using getADMaterialProperty instead of getMaterialProperty. They should be declared in materials using declareADProperty. Example AD material source and header files can be found here and here; example kernel source and header files that use AD material properties can be found here and here. The type central to AD computing objects is ADReal, which is defined in MooseTypes.

Finite element shape functions are introduced in the documentation section Shape Functions. There we outline how our primary variables are summations of those shape functions multiplied by constant coefficients which are our degrees of freedom. At the end of Solving Non-linear Systems we gave an explicit illustration of how the derivative of a variable u with respect to its jth degree of freedom ($u_j$) is equal to the jth shape function $\phi_j$. Similarly, the derivative of $\nabla u$ with respect to $u_j$ is equal to $\nabla \phi_j$.
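Before looking at how these derivatives appear in kernel code, here is a minimal sketch of the update form of Newton's method for a tiny nonlinear system (plain NumPy, not MOOSE code; the residual and Jacobian below are illustrative stand-ins for $\vec{R}$ and $\mathbf{J}$):

import numpy as np

# Illustrative residual R(u) = 0 for a 2x2 nonlinear system.
def residual(u):
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

# Analytic Jacobian J_ij = dR_i/du_j for the residual above.
def jacobian(u):
    return np.array([[2.0 * u[0], 1.0],
                     [1.0,        2.0 * u[1]]])

u = np.array([2.0, 1.0])                   # initial guess
for it in range(20):
    R = residual(u)
    if np.linalg.norm(R) < 1e-12:          # nonlinear convergence check
        break
    du = np.linalg.solve(jacobian(u), -R)  # solve J(u_n) du = -R(u_n)
    u = u + du                             # update: u_{n+1} = u_n + du

print(it, u)                               # converges to (1, 2)

Each pass through the loop is one "non-linear iteration"; the np.linalg.solve call stands in for the Krylov linear solve discussed below.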
The code expression `_phi[_j][_qp]` represents $\phi_j$ in any MOOSE framework residual and Jacobian computing objects such as kernels and boundary conditions. Any MOOSE kernel may have an arbitrary number of variables coupled into it. If these coupled variables use the same shape function family and order, then their associated $\phi_j$'s will be equivalent. However, if u and v use different shape functions then $\phi_j^u \neq \phi_j^v$. As a developer, however, you do not in most cases have to worry about these differences in $\phi_j$. MOOSE automatically updates the object member variable `_phi` to use the shape functions of the variable for whom the Jacobian is currently being computed. However, if the primary variable u is a scalar-valued (single-component) finite element variable and the coupled variable v is a vector-valued (multi-component) finite element variable (or vice versa), then you must introduce an additional member variable to represent the shape functions of the vector-valued (scalar-valued) variable. The name of this variable is up to the developer, but we suggest perhaps a `_standard_` prefix for scalar-valued finite-element variables and `_vector_` for vector-valued finite-element variables. The `_standard_` prefix is suggested over `_scalar_` so as not to be confused with a MooseVariableScalar, which only has a single value over the entire spatial domain. An example constructor for a standard kernel that couples in a vector-valued FE variable is shown below (the class name `ExampleKernel` stands in for the one lost from the original listing):

ExampleKernel::ExampleKernel(const InputParameters & parameters)
  : Kernel(parameters),
    _efield_id(coupled("efield")),
    _efield(coupledVectorValue("efield")),
    _efield_var(*getVectorVar("efield", 0)),
    _vector_phi(_assembly.phi(_efield_var)),
    _mobility(getParam<Real>("mobility"))
{
}

The associated declarations are:

const unsigned int _efield_id;
const VectorVariableValue & _efield;
VectorMooseVariable & _efield_var;
const VectorVariablePhiValue & _vector_phi;
const Real _mobility;
Real _sgn;

Residual, on-diagonal, and off-diagonal Jacobian methods are respectively

Real
ExampleKernel::computeQpResidual()
{
  return -_grad_test[_i][_qp] * _sgn * _mobility * _efield[_qp] * _u[_qp];
}

and

Real
ExampleKernel::computeQpJacobian()
{
  return -_grad_test[_i][_qp] * _sgn * _mobility * _efield[_qp] * _phi[_j][_qp];
}

and

Real
ExampleKernel::computeQpOffDiagJacobian(unsigned int jvar)
{
  if (jvar == _efield_id)
    return -_grad_test[_i][_qp] * _sgn * _mobility * _vector_phi[_j][_qp] * _u[_qp];
  else
    return 0;
}

An example constructor for a vector kernel that couples in a scalar-valued FE variable is shown below (again with an illustrative class name; the `_standard_phi` member is a plausible reconstruction of a member lost from the listing):

ExampleVectorKernel::ExampleVectorKernel(const InputParameters & parameters)
  : VectorKernel(parameters),
    _v_id(coupled("v")),
    _v_var(*getVar("v", 0)),
    _standard_phi(_assembly.phi(_v_var))
{
}

The associated declarations are:

const unsigned _v_id;
MooseVariable & _v_var;
const VariablePhiValue & _standard_phi;

Residual and off-diagonal Jacobian methods are respectively (the bodies were elided in the original listing; the skeletons below are completed only enough to compile):

Real
ExampleVectorKernel::computeQpResidual()
{
  // residual body elided in the original listing
  return 0.;
}

and

Real
ExampleVectorKernel::computeQpOffDiagJacobian(unsigned int jvar)
{
  if (jvar == _v_id)
    return 0.; // coupling term elided in the original listing
  else
    return 0.;
}

note: Flexibility
Note that only one member is needed to represent shape functions for standard MooseVariables and VectorMooseVariables. For example, if the vector variables v and w are coupled into a standard kernel for u, only a single `_vector_phi` member needs to be added; there is no need for both a `_v_phi` and a `_w_phi`. `_vector_phi` will be automatically updated to represent the shape functions for whichever vector variable the Jacobian is being computed for.

### Newton for a Simple Equation

• Consider the convection-diffusion equation with nonlinear $k$, $\vec{\beta}$, and $f$: $-\nabla\cdot k\nabla u + \vec{\beta} \cdot \nabla u = f$.
• The $i$th component of the residual vector is: $R_i(u_h) = \left(\nabla\psi_i, k\nabla u_h\right) + \left(\psi_i, \vec{\beta} \cdot \nabla u_h\right) - \left(\psi_i, f\right)$.
• Using the previously-defined rules for $\frac{\partial u_h}{\partial u_j}$ and $\frac{\partial (\nabla u_h)}{\partial u_j}$, the $(i,j)$ entry of the Jacobian is then: $J_{ij} = \left(\nabla\psi_i, \frac{\partial k}{\partial u_j}\nabla u_h\right) + \left(\nabla\psi_i, k\nabla\phi_j\right) + \left(\psi_i, \frac{\partial \vec{\beta}}{\partial u_j} \cdot \nabla u_h\right) + \left(\psi_i, \vec{\beta} \cdot \nabla\phi_j\right) - \left(\psi_i, \frac{\partial f}{\partial u_j}\right)$.
• Note that even for this "simple" equation, the Jacobian entries are nontrivial: they depend on the partial derivatives of $k$, $\vec{\beta}$, and $f$, which may be difficult or time-consuming to compute analytically.
• In a multiphysics setting with many coupled equations and complicated material properties, the Jacobian might be extremely difficult to determine.

### Chain Rule

• On the previous slide, the term $\frac{\partial f}{\partial u_j}$ was used, where $f$ was a nonlinear forcing function.
• The chain rule allows us to write this term as $\frac{\partial f}{\partial u_j} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial u_j} = \frac{\partial f}{\partial u}\phi_j$.
• If a functional form of $f$ is known, e.g. $f(u) = \sin(u)$, this formula implies that its Jacobian contribution is given by $\cos(u_h)\,\phi_j$.

### Jacobian-Free Newton-Krylov

• $\mathbf{J}(\vec{u}_i)\,\delta\vec{u}_{i+1} = -\vec{R}(\vec{u}_i)$ is a linear system solved during each Newton step.
• For simplicity, we can write this linear system as $\mathbf{A}\vec{x} = \vec{b}$, where:
 - $\mathbf{A} \equiv \mathbf{J}(\vec{u}_i)$
 - $\vec{x} \equiv \delta\vec{u}_{i+1}$
 - $\vec{b} \equiv -\vec{R}(\vec{u}_i)$
• We employ an iterative Krylov method (e.g. GMRES) to produce a sequence of iterates $\vec{x}_k$, $k = 0, 1, \ldots$
• $\mathbf{A}$ and $\vec{b}$ remain fixed during the iterative process.
• The "linear residual" at step $k$ is defined as $\vec{\rho}_k \equiv \vec{b} - \mathbf{A}\vec{x}_k$.
• MOOSE prints the norm of this vector, $\|\vec{\rho}_k\|$, at each iteration, if you set print_linear_residuals = true in the Outputs block.
• The "nonlinear residual" printed by MOOSE is $\|\vec{R}(\vec{u}_i)\|$.
• By iterate $k$, the Krylov method has constructed the subspace $\mathcal{K}_k = \mathrm{span}\{\vec{r}_0, \mathbf{A}\vec{r}_0, \mathbf{A}^2\vec{r}_0, \ldots, \mathbf{A}^{k-1}\vec{r}_0\}$.
• Different Krylov methods produce the $\vec{x}_k$ iterates in different ways:
• Conjugate Gradients: $\vec{\rho}_k$ orthogonal to $\mathcal{K}_k$.
• GMRES/MINRES: $\vec{\rho}_k$ has minimum norm for $\vec{x}_k$ in $\mathcal{K}_k$.
• Biconjugate Gradients: $\vec{\rho}_k$ is orthogonal to $\mathcal{K}_k(\mathbf{A}^T)$.
• $\mathbf{A}$ is never explicitly needed to construct the subspace, only the action of $\mathbf{A}$ on a vector is required.
• This action can be approximated by: $\mathbf{A}\vec{v} \approx \frac{\vec{R}(\vec{u} + \epsilon\vec{v}) - \vec{R}(\vec{u})}{\epsilon}$.
• This form has many advantages:
 - No need to do analytic derivatives to form $\mathbf{J}$
 - No time needed to compute $\mathbf{J}$ (just residual computations)
 - No space needed to store $\mathbf{J}$

## Solving Linear Systems

You will commonly hear of two ways to solve an implicit linear system of equations: directly or iteratively. A typical direct solve will perform an LU factorization. Direct solves are a great tool for solving small- to medium-sized systems; however, they are extremely expensive when applied to large-scale problems. To solve large-scale systems, iterative methods must be used. The most successful iterative methods are
(table continued; the parameter label and the rows above R6 are truncated in the source)

Run | Strain 1 | Strain 2 | Strain 3 | Strain 4
R6  | 0.102 | 0.124 | 0.172 | 0.248
R7  | 0.115 | 0.197 | 0.186 | 0.272
R8  | 0.094 | 0.126 | 0.087 | 0.161
R9  | 0.127 | 0.204 | 0.188 | 0.236
R10 | 0.136 | 0.264 | 0.216 | 0.293

n
R1  | 0.347 | 0.662 | 0.581 | 0.562
R2  | 0.325 | 1.007 | 0.578 | 0.555
R3  | 0.307 | 0.742 | 0.558 | 0.423
R4  | 0.326 | 0.924 | 0.730 | 0.456
R5  | 0.123 | 0.641 | 0.551 | 0.276
R6  | 0.140 | 0.553 | 0.478 | 0.298
R7  | 0.079 | 0.501 | 0.412 | 0.263
R8  | 0.281 | 0.816 | 0.572 | 0.369

Different uppercase superscript letters show differences between strains within the same run (P<0.05); different lowercase superscript letters show differences between runs within the same strain (P<0.05).
""" 01/30/2021 - Teague Tomesh This file contains a set of functions for solving the maximum independent set (MIS) problem on a given graph using a variety of ansatzes. Each ansatz (qaoa, dqva, qlsa, dqva+cutting) has its own function which implements the variational algorithm used to find the MIS. """ import time, random, queue, copy, itertools import numpy as np import networkx as nx from networkx.algorithms.community.kernighan_lin import kernighan_lin_bisection from scipy.optimize import minimize #from cutqc.main import CutQC import qiskit from qiskit import * from qiskit.quantum_info import Statevector from ansatz import qaoa, dqv_ansatz, qls_ansatz, dqv_cut_ansatz import qsplit.qsplit_circuit_cutter as qcc import qsplit.qsplit_mlrecon_methods as qmm from utils.graph_funcs import * from utils.helper_funcs import * from utils.cutting_funcs import * def solve_mis_cut_dqva(init_state, graph, P=1, m=4, threshold=1e-5, cutoff=1, sim='aer', shots=8192, verbose=0, max_cuts=1): """ Find the MIS of G using the dqva and circuit cutting """ # Initialization # NOTE: the backend to use is very version dependent. # Qiskit 0.23.2 does not support the newer Aer_simulators that # are available in Qiskit 0.26.0. # For now, just use the statevector_simulator #backend = Aer.get_backend(name='aer_simulator', method='statevector') backend = Aer.get_backend('qasm_simulator') # Randomly permute the order of the partial mixers cur_permutation = list(np.random.permutation(list(graph.nodes))) history = [] # build circuit fragments and stitching data def _get_circuit_and_cuts(num_params, init_state, mixer_order): params = [qiskit.circuit.Parameter('var_{}'.format(num)) for num in range(num_params)] kwargs = dict(params=params, init_state=init_state, mixer_order=mixer_order, decompose_toffoli=1, verbose=0, P=P) circuit, cuts = dqv_cut_ansatz.gen_dqva(graph, partition, cut_nodes, hot_nodes, **kwargs) fragments, wire_path_map = qcc.cut_circuit(circuit, cuts) if verbose: print('Found cut locations:', cuts) print('Cut {}-qubit circuit into {} fragments with ({})-qubits'.format( circuit.num_qubits, len(fragments), [f.num_qubits for f in fragments])) return fragments, wire_path_map, cuts # strip a string of non-digit characters def _digit_substr(string): return "".join(filter(str.isdigit,string)) # bind numerical values to the parameters of a circuit def _bind(circuit, params): binding = { circuit_param : params[int(_digit_substr(circuit_param.name))] for circuit_param in circuit.parameters } return circuit.bind_parameters(binding) # get output (probability distribution) of a circuit def _get_circuit_output(params, var_fragments, wire_path_map, frag_shots): start_time = time.time() fragments = [ _bind(fragment, params) for fragment in var_fragments ] recombined_dist = sim_with_cutting(fragments, wire_path_map, frag_shots, backend, verbose=0) end_time = time.time() if verbose: print('\t\tsim_with_cutting elapsed time: {:.3f}'.format(end_time-start_time)) return recombined_dist # This function will be what scipy.minimize optimizes def avg_cost(params, *args): # get output probability distribution for the circuit start = time.time() probs = _get_circuit_output(params, *args) # Compute the average Hamming weight. # Have to check each string to ensure it is a valid IS because of the # noise introduced by the cutting process. 
avg_weight = sum([prob * hamming_weight(bitstr) for bitstr, prob \ in probs.items() if is_indset(bitstr, graph)]) end = time.time() if verbose: print('\t\t\tTotal time = {:.3f}, avg weight = {:.4f}'.format( end-start, avg_weight)) # we want to maximize avg_weight <--> minimize -avg_weight return -avg_weight # Begin outer optimization loop best_indset = init_state best_init_state = init_state cur_init_state = init_state best_params = None best_perm = copy.copy(cur_permutation) # Randomly permute the order of mixer unitaries m times for mixer_round in range(1, m+1): mixer_history = [] inner_round = 1 new_hamming_weight = hamming_weight(cur_init_state) # Attempt to improve the Hamming weight until no further improvements can be made #while True: # Try a single iteration for now while inner_round == 1: if verbose: print('Start round {}.{}, Initial state = {}'.format(mixer_round, inner_round, cur_init_state)) # Begin Inner variational loop # - build parameterized fragments and optimize # TODO: fix the num_params computation num_params = P * (graph.number_of_nodes() + 1) init_params = np.random.uniform(low=0.0, high=2*np.pi, size=num_params) # Partition the graph and find cut locations to split the circuit cut_start_time = time.time() # Sometimes the cutter will fail to find any cuts, in which case # the code will break down. Loop to prevent this fragments = [None] counter = 0 while len(fragments) == 1 or len(found_cuts) == 0: counter += 1 if counter > 100: print('Unable to find viable cuts after 100 iterations!') print('Returning current solution:', best_indset) return best_indset, best_params, best_init_state, best_perm, partition, cut_nodes, hot_nodes, history # Kernighan-Lin partitions a graph into two relatively equal subgraphs partition = kernighan_lin_bisection(graph) subgraphs, cut_edges = get_subgraphs(graph, partition) # identify nodes incident to a cut (cut_nodes), # and choose "hot nodes": a subset of cut_nodes to which we will # apply a partial mixer in the first mixing layer cut_nodes, hot_nodes = simple_choose_nodes(graph, subgraphs, cut_edges, max_cuts, init_state) if len(hot_nodes) == 0: # no hot nodes were selected -> will cause an assertion error in gen_dqva # repeat the cutting process to find a better selection of nodes continue uncut_nodes = list(set(graph.nodes).difference(set(cut_nodes))) fragments, wire_path_map, found_cuts = _get_circuit_and_cuts(num_params, cur_init_state, cur_permutation) if len(fragments) == 1 or len(found_cuts) == 0: cur_permutation = list(np.random.permutation(list(graph.nodes))) frag_shots = shots // qmm.fragment_variants(wire_path_map) cut_end_time = time.time() if verbose: print('kl bisection:', partition) print('Hot nodes:', hot_nodes) print('\tNum params =', num_params) print('\tCurrent Mixer Order:', cur_permutation) print('\tSplit circuit into {} subcircuits with {} qubits in {:.3f} s'.format( len(fragments), [len(frag.qubits) for frag in fragments], cut_end_time - cut_start_time)) args = (fragments, wire_path_map, frag_shots) out = minimize(avg_cost, init_params, args=args, method='COBYLA') opt_params = out['x'] opt_cost = out['fun'] if verbose: print('\tOptimal cost:', opt_cost) print('\t{} function evaluations'.format(out['nfev'])) # Get the results of the optimized circuit probs = _get_circuit_output(opt_params, *args) # Select the top [cutoff] probs top_probs = sorted([(key, val) for key, val in probs.items() if val > threshold], key=lambda tup: tup[1], reverse=True)[:cutoff] # Check if we have improved the Hamming weight 
best_hamming_weight = hamming_weight(best_indset) better_strs = [] for bitstr, prob in top_probs: this_hamming = hamming_weight(bitstr) if is_indset(bitstr, graph) and this_hamming > best_hamming_weight: better_strs.append((bitstr, this_hamming)) better_strs = sorted(better_strs, key=lambda t: t[1], reverse=True) # Save current results to history inner_history = {'mixer_round':mixer_round, 'inner_round':inner_round, 'cost':opt_cost, 'init_state':cur_init_state, 'mixer_order':copy.copy(cur_permutation), 'num_params':num_params, 'frag_qubits':[f.num_qubits for f in fragments]} mixer_history.append(inner_history) # If no improvement was made, break and go to next mixer round if len(better_strs) == 0: print('\tNone of the measured bitstrings had higher Hamming weight than:', best_indset) break # Otherwise, save the new bitstring and repeat best_indset, new_hamming_weight = better_strs[0] best_init_state = cur_init_state best_params = opt_params best_perm = copy.copy(cur_permutation) cur_init_state = best_indset print('\tFound new independent set: {}, Hamming weight = {}'.format( best_indset, new_hamming_weight)) inner_round += 1 # Save the history of the current mixer round history.append(mixer_history) # Choose a new permutation of the mixer unitaries cur_permutation = list(np.random.permutation(list(graph.nodes))) print('\tRETURNING, best hamming weight:', new_hamming_weight) return best_indset, best_params, best_init_state, best_perm, partition, cut_nodes, hot_nodes, history def solve_mis_qls(init_state, G, P=1, m=1, mixer_order=None, threshold=1e-5, cutoff=1, sim='aer', shots=8192, verbose=0, param_lim=None, threads=0): """ Find the MIS of G using Quantum Local Search (QLS), this ansatz is composed of two types of unitaries: the cost unitary U_C and the mixer unitary U_M. The mixer U_M is made up of individual partial mixers which are independently parametrized. QLS's key feature is the parameter limit which truncates the number of partial mixers that are applied at any one time, and its dynamic reuse of quantum resources (i.e. 
the partial mixers for qubits which are in the MIS are turned off and applied to other qubits not currently in the set) """ # Initialization if sim == 'statevector' or sim == 'qasm': backend = Aer.get_backend(sim+'_simulator', max_parallel_threads=threads) elif sim == 'aer': backend = Aer.get_backend(name='aer_simulator', method='statevector', max_parallel_threads=threads) elif sim == 'cloud': raise Exception('NOT YET IMPLEMENTED') else: raise Exception('Unknown simulator:', sim) # Select an ordering for the partial mixers if mixer_order == None: cur_permutation = list(np.random.permutation(list(G.nodes))) else: cur_permutation = mixer_order history = [] # This function will be what scipy.minimize optimizes def f(params): # Generate a circuit circ = qls_ansatz.gen_qlsa(G, P=P, params=params, init_state=cur_init_state, barriers=0, decompose_toffoli=1, mixer_order=cur_permutation, verbose=0, param_lim=param_lim) if sim == 'qasm' or sim == 'aer': circ.measure_all() # Compute the cost function result = execute(circ, backend=backend, shots=shots).result() if sim == 'statevector': statevector = Statevector(result.get_statevector(circ)) probs = strip_ancillas(statevector.probabilities_dict(decimals=5), circ) elif sim == 'qasm' or sim == 'aer': counts = result.get_counts(circ) probs = strip_ancillas({key: val/shots for key, val in counts.items()}, circ) avg_cost = 0 for sample in probs.keys(): x = [int(bit) for bit in list(sample)] # Cost function is Hamming weight avg_cost += probs[sample] * sum(x) # Return the negative of the cost for minimization #print('Expectation value:', avg_cost) return -avg_cost # Begin outer optimization loop best_indset = init_state best_init_state = init_state cur_init_state = init_state best_params = None best_perm = copy.copy(cur_permutation)
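# ----------------------------------------------------------------------
# For reference, minimal sketches of two helpers used throughout this file
# and imported from utils.helper_funcs / utils.graph_funcs above. These are
# assumed implementations for illustration only; the real ones live in
# those modules and may differ (e.g. in bit-ordering conventions).

def hamming_weight(bitstr):
    """Number of 1s in a bitstring such as '01011'."""
    return sum(int(b) for b in bitstr)

def is_indset(bitstr, G):
    """Check that the vertices selected by `bitstr` form an independent set
    of G, i.e. no two selected vertices share an edge. Assumes bit i
    (counting from the right, as Qiskit orders bits) corresponds to node i
    of G -- a convention guess, not necessarily the real mapping."""
    nodes = [i for i, b in enumerate(reversed(bitstr)) if b == '1']
    return all(not G.has_edge(u, v)
               for u in nodes for v in nodes if u < v)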
                           kv * dq)

        # connect up the input and output
        nengo.Connection(net.input[:dim], net.CB)
        nengo.Connection(net.CB, net.output[:arm.DOF],
                         function=CB_func, synapse=None)

I don't think there's anything noteworthy going on here, most of the relevant details have already been discussed…so we'll move on to the adaptation!

Implementing CB – non-linear dynamics adaptation

The final part of the model is the non-linear dynamics adaptation, modelled as a separate ensemble in the cerebellar sub-network (a separate ensemble so that it's more modular, the learning connection could also come off of the other CB population). I work through the details and proof of the learning rule in the paper, so I'm not going to discuss that here. But I will restate the learning rule here:

$\dot{\textbf{d}} = \textbf{L}_d \textbf{A} \otimes \textbf{u},$

where $\textbf{d}$ are the neural decoders, $\textbf{L}_d$ is the learning rate, $\textbf{A}$ is the neural activity of the ensemble, and $\textbf{u}$ is the joint space control signal sent to the arm. This is a basic delta learning rule, where the decoders of the active neurons are modified to push the decoded function in a direction that reduces the error.

The adaptive ensemble can be initialized either using saved weights (passed in with the learned_weights parameter) or as all zeros. It is important to note that initializing the decoders to all zeros does not mean zero neural activity, so learning is not hampered by this initialization.

# dynamics adaptation------------------------------------
if learning_rate is not None:
    # the ensemble-creation call was garbled in the source; the name
    # net.CB_adapt and the connection endpoints below are reconstructions
    net.CB_adapt = nengo.Ensemble(
        n_neurons=1000, dimensions=arm.DOF*2,
        # enforce spiking neurons
        neuron_type=nengo.LIF(),
        intercepts=AreaIntercepts(
            dimensions=arm.DOF,
            base=nengo.dists.Uniform(-.5, .2)))

    net.learn_encoders = nengo.Connection(
        net.input[:arm.DOF*2], net.CB_adapt, synapse=None)

    # if no saved weights were passed in start from zero
    weights = (
        learned_weights if learned_weights is not None
        else np.zeros((arm.DOF, net.CB_adapt.n_neurons)))

    net.learn_conn = nengo.Connection(
        # connect directly to arm so that adaptive signal
        # is not included in the training signal
        # (endpoint reconstructed: routed past the training relay)
        net.CB_adapt.neurons, net.output[arm.DOF:arm.DOF*2],
        transform=weights,
        learning_rule_type=nengo.PES(
            learning_rate=learning_rate),
        synapse=None)
    nengo.Connection(net.input[dim:dim+2],
                     net.learn_conn.learning_rule,
                     transform=-1, synapse=.01)
return net

We're able to implement that learning rule using Nengo's prescribed error-sensitivity (PES) learning on our connection from the adaptive ensemble to the output. With this set up the system will be able to learn to adapt to perturbations that are functions of the input (set here to be $[\textbf{q}, \dot{\textbf{q}}]$). The intercepts in this population are set to values I found worked well for adapting to a few different forces, but it's definitely a parameter to play with in your own scripts if you're finding that there's too much or not enough generalization of the decoded function output across the state space.

One other thing to mention is that we need to have a relay node to amalgamate the control signals output from M1 and the dynamics compensation ensemble in the CB. This signal is used to train the adaptive ensemble, and it's important that the adaptive ensemble's output is not included in the training signal, or else the system quickly goes off to positive or negative infinity.

Implementing S1 – a placeholder

The last sub-network in the REACH model is a placeholder for a primary sensory cortex (S1) model. This is just a set of ensembles that represent the feedback from the arm and relay it on to the rest of the model.
def generate(arm, direct_mode=False, means=None, scales=None):
    dim = arm.DOF*2 + 2  # represents [q, dq, hand_xy]
    means = np.zeros(dim) if means is None else means
    scales = np.ones(dim) if scales is None else scales
    scale_down, scale_up = generate_scaling_functions(
        np.asarray(means), np.asarray(scales))

    net = nengo.Network('S1')
    with net:
        # create / connect up S1 --------------------------------
        net.S1 = nengo.networks.EnsembleArray(
            n_neurons=50, n_ensembles=dim)

        # expecting input in form [q, dq, hand_xy]
        net.input = nengo.Node(output=scale_down, size_in=dim)
        net.output = nengo.Node(
            lambda t, x: scale_up(x), size_in=dim)

        # send in system feedback and target information
        # don't account for synapses twice
        nengo.Connection(net.input, net.S1.input, synapse=None)
        nengo.Connection(net.S1.output, net.output, synapse=None)

    return net

Since there's no function that we're decoding off of the represented variables we can use separate ensembles to represent each dimension with an EnsembleArray. If we were going to decode some function of, for example, q0 and dq0, then we would need an ensemble that represents both variables. But since we're just decoding out f(x) = x, using an EnsembleArray is a convenient way to decrease the number of neurons needed to accurately represent the input.

Creating a model using the framework

The REACH model has been set up to be as much of a plug and play system as possible. To generate a model you first create the M1, PMC, CB, and S1 networks, and then they're all hooked up to each other using the framework.py file. Here's an example script that controls the arm to trace a circle:

def generate():
    kp = 200
    kv = np.sqrt(kp) * 1.5

    center = np.array([0, 1.25])
    # set the initial position of the arm
    arm_sim.init_q = arm_sim.inv_kinematics(center)
    arm_sim.reset()

    net = nengo.Network(seed=0)
    with net:
        net.dim = arm_sim.DOF
        net.arm_node = arm_sim.create_nengo_node()
        net.error = nengo.Ensemble(1000, 2)
        net.xy = nengo.Node(size_in=2)

        # create an M1 model -------------------------------------
        net.M1 = M1.generate(arm_sim, kp=kp,
                             operational_space=True,
                             inertia_compensation=True,
                             means=[0.6, 2.2, 0, 0],
                             scales=[.5, .5, .25, .25])

        # create an S1 model -------------------------------------
        net.S1 = S1.generate(arm_sim,
                             means=[.6, 2.2, -.5, 0, 0, 1.25],
                             scales=[.5, .5, 1.7, 1.5, .75, .75])

        # subtract current position to get task space direction
        nengo.Connection(net.S1.output[net.dim*2:], net.error,
                         transform=-1)

        # create a trajectory for the hand to follow -------------
        x = np.linspace(0.0, 2.0*np.pi, 100)
        PMC_trajectory = np.vstack([np.cos(x) * .5,
                                    np.sin(x) * .5])
        PMC_trajectory += center[:, None]

        # create / connect up PMC --------------------------------
        net.PMC = PMC.generate(PMC_trajectory, speed=1)
        # send target for calculating control signal
        nengo.Connection(net.PMC.output, net.error)
        # send target (x,y) for plotting
        nengo.Connection(net.PMC.output, net.xy)

        net.CB = CB.generate(arm_sim, kv=kv,
                             means=[0.6, 2.2, -.5, 0],
                             scales=[.5, .5, 1.6, 1.5])

    model = framework.generate(net=net, probes_on=True)
    return model

In the last line of the script you can see the call to the framework code, which will hook up the most common connections that don't vary between the different scripts. The REACH model has assigned functionality to each area / sub-network, and you can see the expected input / output in the comments at the top of each sub-network file, but the implementations are open.
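As a side note on that EnsembleArray design choice, here is a minimal standalone sketch (not part of the REACH scripts; names are illustrative) contrasting a single multi-dimensional Ensemble with an EnsembleArray:

import nengo
import numpy as np

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])

    # One ensemble representing both dimensions jointly: necessary if you
    # want to decode functions that mix dimensions, e.g. f(x) = x[0] * x[1].
    joint = nengo.Ensemble(n_neurons=200, dimensions=2)
    nengo.Connection(stim, joint)

    # An EnsembleArray: one small ensemble per dimension. Cheaper for the
    # identity decode f(x) = x, but it cannot decode cross-dimensional
    # functions, since no single ensemble sees both dimensions.
    arr = nengo.networks.EnsembleArray(n_neurons=100, n_ensembles=2)
    nengo.Connection(stim, arr.input)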
You can create your own M1, PMC, CB, or S1 sub-networks and try them out in the context of a full model that generates high-level movement behaviour.

Running the model

To run the model you'll need Nengo, Nengo GUI, and NengoLib all installed. You can then pull open Nengo GUI and load any of the a# scripts. In all of these scripts the S1 model is just an ensemble that represents the output from the arm_node. Here's what each of the scripts does:

1. a01 has a spiking M1 and CB, dynamics adaptation turned off. The model guides the arm in reaching in a straight line to a single target and back.
2. a02 has a spiking M1, PMC, and CB, dynamics adaptation turned off. The PMC generates a path for the hand to follow that traces out a circle.
3. a03 has a spiking M1, PMC, and CB, dynamics adaptation turned off. The PMC generates a path for the joints to follow, which moves the hand in a straight line to a target and back.
4. a04 has a spiking M1 and CB, dynamics adaptation turned off. The model performs the centre-out reaching task, starting at a central point and reaching to 8 points around a circle.
5. a05 has a spiking M1 and CB, and dynamics adaptation turned on. The model performs the centre-out reaching task, starting at a central point and reaching to 8 points around a circle. As the model reaches, a forcefield is applied based on the joint velocities that pushes the arm as it tries to reach the target. After 100-150 seconds of simulation the arm has adapted and learned to reach in a straight line again.

Here's what it looks like when you pull open a02 in Nengo GUI:

I'm not going to win any awards for arm animation, but! It's still a useful visualization, and if anyone is proficient in JavaScript and wants to improve it, please do! You can see the network architecture in the top left, the spikes generated by M1 and CB to the right of that, the arm in the bottom left, and the path traced out by the hand just to the right of that. On the top right you can see the a02 script code, and below that the Nengo console.

Conclusions

One of the most immediate extensions (aside from any sort of model of S1) that comes to mind here is implementing a more detailed
decision version of \coloringVD{q} given with a path decomposition of width at most $t$ in time $(q+1-\epsilon)^t \cdot n^{\mathcal{O}(1)}$ for some $\epsilon >0$. Let $r$ be the value given by \cref{thm:csp} for $d=q+1$ and $\epsilon$. We reduce from $(d,r)$-\textsc{CSP}. Consider an instance $\mathcal{I}$ with variables $\mathcal{V}$ and constraints $\mathcal{C}$. Let $b$ be the smallest integer such that $(b+1)^{1/b} < 1 + \epsilon/d$; note that $b$ is a constant. Note that we can assume that the number $N$ of variables is divisible by $b$, as otherwise we can add at most $b-1$ new variables. Furthermore, by adding at most $N$ dummy constraints that are satisfied by every assignment, we can ensure that every variable appears in some constraint. Note that the lower bound from \cref{thm:csp} still holds after these modifications. We partition $\mathcal{V}$ into $M= N/b$ subsets $V_1,\ldots,V_M$, each of size at most $b$. We call these subsets \emph{blocks}. Let us construct an instance $\mathcal{I}' = (\mathcal{V},\mathcal{C}')$ of $(d,br)$-\textsc{CSP}. For each constraint $C \in \mathcal{C}$, let $C'$ be a constraint involving all variables in all blocks intersected by $C$, such that $C'$ is satisfied by some assignment if and only if its projection to the variables contained in $C$ satisfies $C$. Clearly $C'$ is of arity at most $br$. Let $\mathcal{C}'$ consist of all constraints $C'$, defined for all $C \in \mathcal{C}$. It is straightforward to verify that $\mathcal{I}'$ is satisfiable if and only if $\mathcal{I}$ is. Now we show that using our hypothetical algorithm for \coloringVD{q} we can solve $\mathcal{I}'$ (and thus $\mathcal{I}$) in time $(q+1-\epsilon')^N \cdot N^{\mathcal{O}(1)}$ for some $\epsilon' > 0$, which would contradict the SETH by \cref{thm:csp}. \medskip For an assignment $\varphi :\mathcal{V} \to [d]$, its \emph{signature} is the vector $\mathbf{f} = (f_1,\ldots,f_M) \in \{0,\ldots,b\}^M$, where for each $i \in [M]$ the value of $f_i$ is the number of variables from $V_i$ mapped by $\varphi$ to $d (=q+1)$. We exhaustively guess the signature of some fixed satisfying assignment of $\mathcal{I}'$. This results in at most \begin{equation} (b+1)^M = (b+1)^{N/b} = \left( (b+1)^{1/b} \right )^N < (1+\epsilon/d)^N \label{eq:branches} \end{equation} branches. In each branch we will look for a satisfying assignment with the prescribed signature. It is clear that $\mathcal{I}'$ is satisfiable if and only if in at least one branch we succeed in finding a solution. Let us focus on some branch related to the signature $\mathbf{f} = (f_1,\ldots,f_M)$. We construct an instance $\mathcal{I}_{\mathbf{f}}=(\mathcal{V}, \mathcal{C}_{\mathbf{f}})$ of $(d,br)$-\textsc{CSP} as follows. Consider a constraint $C' \in \mathcal{C}'$ and recall that for each block $V_i$, either all variables of $V_i$ are in $C'$ or none of them. We define a constraint $C_{\mathbf{f}}$ on the same variables as $C'$. An assignment of the variables of $C_{\mathbf{f}}$ satisfies $C_{\mathbf{f}}$ if and only if it satisfies $C'$ and, in each block $V_i$ contained in the variable set of $C_{\mathbf{f}}$, exactly $f_i$ variables have value $d$. Let $\mathcal{C}_{\mathbf{f}}$ consist of all constraints $C_{\mathbf{f}}$, defined for all $C' \in \mathcal{C}'$. It is clear that $\mathcal{I}_{\mathbf{f}}=(\mathcal{V},\mathcal{C}_{\mathbf{f}})$ is satisfiable if and only if $\mathcal{I}'$ is satisfied by some assignment with signature $\mathbf{f}$. Thus in the branch we are considering we aim to solve $\mathcal{I}_{\mathbf{f}}$.
\paragraph{Construction of $(G_{\mathbf{f}},L_{\mathbf{f}},k_{\mathbf{f}})$.} Now we proceed to the construction of an instance $(G_{\mathbf{f}},L_{\mathbf{f}},k_{\mathbf{f}})$ of \coloringVD{q}. We start with introducing the set $X = \{x_1,x_2,\ldots,x_N\}$ of vertices with lists $[q]$. Each vertex $x_i$ represents the variable $v_i$. The intended meaning is that coloring $x_i$ with a color $c \in [q]$ corresponds to assigning the value $c$ to $v_i$. On the other hand, deleting $x_i$ corresponds to assigning the value $q+1$ to $v_i$. For each $i \in [q+1]$ we define a $q$-element vector $\gamma(i)$: \begin{myitemize} \item if $i =1$, then $\gamma(i) = 2,2,3,\ldots,q$, \item if $i \in [2,q]$, then $\gamma(i) = 1,1,2,\ldots,i-1,i+1,\ldots,q$, \item if $i = q+1$, then $\gamma(i) = 1,2,\ldots,q$. \end{myitemize} The important property is that if $i \leq q$, then the set of values appearing in $\gamma(i)$ is exactly $[q] \setminus \{i\}$, and if $i = q+1$, then the set of values appearing in $\gamma(i)$ is exactly $[q]$. Now let us consider a constraint $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$. Let $v_1,v_2,\ldots,v_p$ be the variables of $C_{\mathbf{f}}$ (where $p \leq br$) and let $R \subseteq [q+1]^p$ be the relation enforced by $C_{\mathbf{f}}$ (i.e., the set of satisfying assignments). We define a relation $R' \subseteq [q]^{pq}$ as follows. For each $(a_1,a_2,\ldots,a_p) \in R$, we add to $R'$ the $pq$-element sequence $\gamma(a_1),\gamma(a_2),\ldots,\gamma(a_p)$ over $[q]$. We call \cref{prop:realize-coloring} to obtain a $pq$-ary gadget \[ \mathcal{F}(C_{\mathbf{f}}) = (F(C_{\mathbf{f}}),L,(z_{1,1},z_{1,2},\ldots,z_{1,q},z_{2,1},\ldots,z_{2,q},\ldots,z_{p,1},\ldots,z_{p,q})),\] where each list is equal to $[q]$. We introduce $N+1$ copies of $\mathcal{F}(C_{\mathbf{f}})$ and make the vertices $z_{i,j}$ from each copy adjacent to $x_i$. We repeat the above step for every constraint in $\mathcal{C}_{\mathbf{f}}$. Finally, we set $k_{\mathbf{f}} := \sum_{i=1}^M f_i$. This completes the construction of the instance of \coloringVD{q}. Note that the total number of vertices of $G_{\mathbf{f}}$ is at most \[ |X| + \sum_{C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}} |V(F(C_{\mathbf{f}}))| \cdot (N+1) = N + \sum_{C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}} \mathcal{O}(1) \cdot (N+1) = \mathcal{O}(N^{br+1}) = N^{\mathcal{O}(1)}, \] as $q,b,r$ are constants. \paragraph{Equivalence of instances.} We claim that $(G_{\mathbf{f}},L_{\mathbf{f}},k_{\mathbf{f}})$ is a yes-instance of \coloringVD{q} if and only if $\mathcal{I}_{\mathbf{f}}$ is satisfiable. First, suppose that there exists a satisfying assignment $\varphi : \mathcal{V} \to [q+1]$ of $\mathcal{I}_{\mathbf{f}}$. We assign colors to vertices of $X$ as described above: if $\varphi(v_i) \leq q$, then we color $x_i$ with color $\varphi(v_i)$. If $\varphi(v_i) = q+1$, then the vertex $x_i$ is deleted. Note that this way we delete exactly $k_{\mathbf{f}} = \sum_{i=1}^M f_i$ vertices from $X$. Consider a $p$-ary constraint $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$ and a copy of $\mathcal{F}(C_{\mathbf{f}})$ with portals $z_{1,1},z_{1,2},\ldots,z_{1,q},$ $z_{2,1},\ldots,z_{2,q},\ldots,z_{p,1},\ldots,z_{p,q}$. For each $i \in [p]$, we color the vertices $z_{i,1},\ldots,z_{i,q}$ according to the vector $\gamma(\varphi(v_i))$. If $\varphi(v_i) \leq q$, each of these vertices receives a color in $[q] \setminus \{\varphi(v_i)\}$; if $\varphi(v_i) = q+1$, the colors used are all of $[q]$, but then $x_i$ is deleted, so no conflict with $x_i$ arises. Finally, by \cref{prop:realize-coloring}, this partial coloring can be extended to a list $q$-coloring of the whole gadget.
We repeat this step for every copy of $\mathcal{F}(C_{\mathbf{f}})$, for every $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$. Note that we do not remove any vertices outside $X$, so the number of deleted vertices is exactly $k_{\mathbf{f}}$. Now suppose that there is a set $S \subseteq V(G_{\mathbf{f}})$ of size at most $k_{\mathbf{f}}$ and a proper list coloring $\psi$ of $G_{\mathbf{f}} - S$. Note that we can safely assume that for any $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$, all copies of $F(C_{\mathbf{f}})$ receive exactly the same coloring. Indeed, if this is not the case, we can recolor all copies in the same way as the one with the fewest deleted vertices, obtaining another solution to the problem. Now recall that for each $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$ we introduced $N+1$ copies of $\mathcal{F}(C_{\mathbf{f}})$. As $N+1 > N \geq \sum_{i=1}^M f_i = k_{\mathbf{f}}$, we conclude that no non-portal vertex from $\mathcal{F}(C_{\mathbf{f}})$ is deleted and thus $S \subseteq X$. We set the valuation of variables of $\mathcal{I}_{\mathbf{f}}$ according to the coloring of $X$: if $x_i \notin S$, we set the value of $v_i$ to $\psi(x_i)$, and otherwise we set the value of $v_i$ to $q+1$. Note that $|S|\leq k_{\mathbf{f}}$ variables were mapped to $q+1$. We claim that this valuation satisfies all constraints of $\mathcal{I}_{\mathbf{f}}$. Consider a constraint $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$; let $v_1,v_2,\ldots,v_p$ be its variables and let $R \subseteq [q+1]^p$ be the relation enforced by $C_{\mathbf{f}}$. Recall that $R' \subseteq [q]^{pq}$ is the relation consisting of the sequences $\gamma(a_1),\ldots,\gamma(a_p)$ for $(a_1,\ldots,a_p) \in R$. Consider one copy of $\mathcal{F}(C_{\mathbf{f}})$ with portals $z_{1,1},\ldots,z_{p,q}$. Recall that by \cref{prop:realize-coloring}, the colors assigned to $z_{1,1},\ldots,z_{p,q}$ form a sequence $\gamma(a_1),\ldots,\gamma(a_p)$ for some $(a_1,\ldots,a_p) \in R$. As each $x_i$ is adjacent to $z_{i,1},\ldots,z_{i,q}$, by the properties of $\gamma(a_i)$ we conclude that either $x_i \in S$ or the color of $x_i$ is $a_i$. Thus $v_i$ is mapped either to $a_i$ or to $q+1$. Consequently, by the definition of $C_{\mathbf{f}}$, we conclude that in every block $V_i$ contained in $C_{\mathbf{f}}$, at least $f_i$ variables are mapped to $q+1$. Since the total number of variables mapped to $q+1$ is at most $k_{\mathbf{f}} = \sum_{i=1}^M f_i$, we conclude that in each block $V_i$ exactly $f_i$ variables are mapped to $q+1$. Let us get back to analyzing $C_{\mathbf{f}} \in \mathcal{C}_{\mathbf{f}}$. As exactly $f_i$ variables from each block $V_i$ contained in $C_{\mathbf{f}}$ are mapped to $q+1$, we observe that for every $i$, the variable $v_i$ is mapped to $a_i$. Consequently, $C_{\mathbf{f}}$ is satisfied, as the sequence of values on its variables is $a_1,\ldots,a_p$ with $(a_1,\ldots,a_p) \in R$.
4) The oxidation number of chlorine in the product state: −1. According to the textbook, the oxidation number of Cl in KCl is −1. So what is the oxidation number of Cl in KClO3? (The same question arises with water, where the textbook assigns H an oxidation number of +1.)

a) Assign oxidation numbers to each atom in the reaction:

HCl + KMnO4 → KCl + MnCl2 + H2O + Cl2
with H = +1 and Cl = −1 in HCl; K = +1, Mn = +7 and O = −2 in KMnO4; K = +1 and Cl = −1 in KCl; Mn = +2 and Cl = −1 in MnCl2; H = +1 and O = −2 in H2O; and Cl = 0 in Cl2.

b) Identify and write out all redox couples in the reaction. Note that Cl− (chloride) in KCl is a spectator ion: K = +1 and Cl = −1 on both sides.

The chlorate ion (ClO3−) carries a 1− charge. Cl shows oxidation states from −1 to +7 owing to its vacant d orbitals. Learn how to calculate or find the oxidation number of elements along with the examples below. The substance potassium chlorate(V) above has an oxidation state of chlorine that is less common. Since the oxidation numbers have to add up to 0 for the neutral compound, chlorine in KClO3 must be +5. Since there are 3 O atoms, we have to multiply the charge of O by 3: (+1) + x + 3(−2) = 0, so x = +5. NOTE: the maximum positive oxidation number for chlorine is +7, the same as its group number. In the oxidation number change method the underlying principle is that the gain in the oxidation number (number of electrons) in one reactant must be equal to the loss in the oxidation number of the other reactant. For Cl2O7, the oxidation states satisfy 2x + 7(−2) = 0, so the oxidation state of chlorine in Cl2O7 is 14/2 = +7.
The oxidation number change method compares the oxidation state of each atom on the reactant side with its oxidation state on the product side. Some standard rules: the oxidation number of all alkali metal ions is always +1; of all alkaline earth metal ions, +2; of all boron family metal ions, +3; of hydrogen as the proton (H+), +1; of oxygen in the oxide ion (O2-), -2. An oxidation number can be positive, zero, or negative. For KClO3: +1 + 5 + 3(-2) = 0. Manganese in MnCl2 is +2, since manganese chloride has the chemical formula MnCl2 with two chlorides at -1 each. Oxidation state of chlorine in KCl = -1; oxidation number of Cl in HCl/KCl = -1. b) 2K + Cl2 → 2KCl. You need the oxidation number for Cl, so we will use only the ClO3- ion. Notwithstanding, Cl went from +3 to -1, which means it gained electrons and was reduced. We know that potassium (K) has an oxidation number of +1 since it is a group 1 element. The oxidation states have to sum to 0 when you do the math. Since K began with an oxidation number of +1 and finished with +1, it was neither reduced nor oxidized. The three oxygens contribute 3 × (-2) = -6 in total. Thus, the charge on potassium (K) in KCl is +1. In KClO3, -6 + 1 = -5 must be balanced by chlorine: the oxidation number of potassium is +1, that of chlorine is +5, and that of oxygen is -2, with 3 atoms of oxygen. For the decomposition of KClO2: on the reactant side, K in KClO2 has oxidation number +1; on the product side, K in KCl has oxidation number +1 and O in O2 has oxidation number 0. Then, chlorine goes from +3 to -1 (it is reduced) while oxygen goes from -2 to 0 (it is oxidized).
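Since all of these examples use the same rule (the oxidation states weighted by atom counts must add up to the overall charge), here is a small Python sketch that automates the bookkeeping; the helper name and the default table are my own illustration, not from any textbook:

```python
# Sketch: solve for a single unknown oxidation state in a species,
# using the rule that oxidation states sum to the overall charge.

KNOWN_STATES = {"K": +1, "H": +1, "O": -2}  # common defaults (illustrative)

def unknown_oxidation_state(atoms, charge=0, unknown="Cl"):
    """atoms: dict element -> count, e.g. {"K": 1, "Cl": 1, "O": 3} for KClO3."""
    total_known = sum(KNOWN_STATES[el] * n for el, n in atoms.items() if el != unknown)
    # n_unknown * x + total_known = charge  =>  x = (charge - total_known) / n_unknown
    return (charge - total_known) / atoms[unknown]

print(unknown_oxidation_state({"K": 1, "Cl": 1, "O": 3}))       # KClO3 -> +5.0
print(unknown_oxidation_state({"K": 1, "Cl": 1}))               # KCl   -> -1.0
print(unknown_oxidation_state({"Cl": 2, "O": 7}))               # Cl2O7 -> +7.0
print(unknown_oxidation_state({"Cl": 1, "O": 3}, charge=-1))    # ClO3- -> +5.0
```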
over ${\cal A}$ of the ${\cal A}-{\cal A}$ bimodule ${\cal A}\otimes {\cal A}$, ($\otimes$ always means the tensor product over $\mathbf k\ $). This is a $\mathbb N$-graded associative unital $\mathbf k\ $-algebra with $\mathfrak T^n({\cal A})=\otimes^{n+1}{\cal A}$ and product $(x_0\otimes \dots\otimes x_n)(y_0\otimes \dots \otimes y_m)=x_0\otimes\dots\otimes x_{n-1}\otimes x_ny_0\otimes y_1\otimes \dots \otimes y_m$ for $x_i,y_j\in {\cal A}$. One defines a structure of cosimplicial module for $(\mathfrak T^n({\cal A}))$ by setting\\ $\mathfrak f_0(x_0\otimes\dots \otimes x_n)=\mbox{\rm 1\hspace {-.6em} l} \otimes x_0\otimes\dots\otimes x_n$\\ $\mathfrak f_i(x_0\otimes\dots\otimes x_n)=x_0\otimes\dots \otimes x_{i-1}\otimes \mbox{\rm 1\hspace {-.6em} l} \otimes x_i\otimes\dots\otimes x_n\ \mbox{for}\ 1\leq i\leq n$\\ $\mathfrak f_{n+1}(x_0\otimes \dots\otimes x_n)=x_0\otimes\dots\otimes x_n\otimes \mbox{\rm 1\hspace {-.6em} l}$\\ and\\ $\mathfrak s_i(x_0\otimes\dots\otimes x_n)=x_0\otimes\dots\otimes x_ix_{i+1}\otimes \dots\otimes x_n$ for $0\leq i\leq n-1$.\\ One shows easily that with the above structures, $\mathfrak T({\cal A})$ is a cosimplicial algebra and therefore, $\mathfrak T({\cal A})$ equipped with the $N$-differential $d_1$ is a graded $q$-differential algebra. It follows (Theorem 4 (1)) that the $H^n_{(m)}(\mathfrak T({\cal A}),d_1)$ can be computed in terms of the cohomology $H^n(\mathfrak T({\cal A}))$ of the cosimplicial module $\mathfrak T({\cal A})$. However, we shall give later a direct proof of the triviality of these generalized cohomologies whenever ${\cal A}$ admits a linear form $\omega$ satisfying $\omega(\mbox{\rm 1\hspace {-.6em} l})=1$ by using Lemma 5.\\ In this case $d_1$ is given by \[ d_1(x_0\otimes\dots\otimes x_n)=\sum^n_{i=0} q^i x_0\otimes\dots \otimes x_{i-1}\otimes \mbox{\rm 1\hspace {-.6em} l}\otimes x_i\otimes \dots\otimes x_n - q^n x_0\otimes\dots \otimes x_n\otimes \mbox{\rm 1\hspace {-.6em} l} \] so it coincides with the $q$-differential of $\mathfrak T({\cal A})$ introduced in \cite{D-VK}, \cite{MD-V}.\\ As pointed out in \cite{D-VK}, the graded algebra $\mathfrak T({\cal A})$ can be characterized by a universal property. Indeed ${\cal A}\otimes {\cal A}$ is the free ${\cal A}-{\cal A}$ bimodule generated by $\tau=\mbox{\rm 1\hspace {-.6em} l}\otimes \mbox{\rm 1\hspace {-.6em} l}$, hence $\mathfrak T({\cal A})$ is the $\mathbb N$-graded associative $\mathbf k\ $-algebra generated by ${\cal A}$ in degree 0 and by a free generator $\tau$ of degree 1; $x_0\otimes\dots\otimes x_n=x_0\tau x_1\dots \tau x_n$. This implies the following result \cite{D-VK}. \begin{proposition} Let ${\mathfrak{A}}=\oplus_n{\mathfrak{A}}^n$ be a $\mathbb N$-graded associative unital $\mathbf k\ $-algebra. Then for any homomorphism of unital $\mathbf k\ $-algebras $\varphi:{\cal A}\rightarrow {\mathfrak{A}}^0$ and for any element $\alpha$ of ${\mathfrak{A}}^1$, there is a unique homomorphism of graded unital $\mathbf k\ $-algebras\linebreak[4] $\mathfrak T_{\varphi,\alpha}:\mathfrak T({\cal A})\rightarrow {\mathfrak{A}}$ which extends $\varphi$ and is such that $\mathfrak T_{\varphi,\alpha}(\tau)=\alpha,\linebreak[4](\tau=\mbox{\rm 1\hspace {-.6em} l}\otimes \mbox{\rm 1\hspace {-.6em} l} \in \mathfrak T^1({\cal A}))$.
\end{proposition} By applying this result to the case where ${\mathfrak{A}}=C({\cal A},{\cal A})$, (Example 3), where $\varphi$ is the identity mapping of ${\cal A}$ onto itself considered as an homomorphism of ${\cal A}$ into $C^0({\cal A},{\cal A})(={\cal A})$ and where $\alpha$ is again the identity mapping of ${\cal A}$ onto itself considered as an element of $C^1({\cal A},{\cal A})$, one obtains the canonical homomorphism $\Psi=\mathfrak T_{\varphi,\alpha}:\mathfrak T({\cal A})\rightarrow C({\cal A},{\cal A})$ of graded unital algebras defined in \cite{Mass} which reads \[ \Psi(x_0\otimes\dots\otimes x_n)(y_1,\dots,y_n)=x_0y_1x_1\dots y_nx_n, \ \ \ \mbox{for } x_i,y_j\in {\cal A}. \] \begin{proposition} The above homomorphism $\Psi$ is an homomorphism of cosimplicial algebras, i.e. one has $\Psi\circ \mathfrak f_i=\mathfrak f_i\circ \Psi$ and $\Psi\circ \mathfrak s_i=\mathfrak s_i\circ\Psi$ with obvious notations. \end{proposition} This statement is easy to check. This implies in particular that one has $\Psi\circ d_1=d_1\circ \Psi$ as well as $\Psi\circ d=d\circ \Psi$ where the usual cosimplicial differential $d$ is the ordinary Hochschild differential on $C({\cal A},{\cal A})$.\\ As pointed out in \cite{D-VK} (with a slightly different formulation), the $q$-differential $d_1$ on $\mathfrak T({\cal A})$ is the unique $q$-derivation of degree 1 of $\mathfrak T({\cal A})$ such that one has $d_1(x)=\mbox{\rm 1\hspace {-.6em} l} \otimes x - x\otimes \mbox{\rm 1\hspace {-.6em} l}$ for $x\in {\cal A}$ and $d_1(\mbox{\rm 1\hspace {-.6em} l} \otimes \mbox{\rm 1\hspace {-.6em} l})=\mbox{\rm 1\hspace {-.6em} l} \otimes \mbox{\rm 1\hspace {-.6em} l}\otimes \mbox{\rm 1\hspace {-.6em} l} (=(\mbox{\rm 1\hspace {-.6em} l} \otimes \mbox{\rm 1\hspace {-.6em} l})^2)$.\\ By induction on the integer $n$, one shows that \[ d^n_1(\mbox{\rm 1\hspace {-.6em} l}\otimes\mbox{\rm 1\hspace {-.6em} l})=[n]_q!(\mbox{\rm 1\hspace {-.6em} l}\otimes\mbox{\rm 1\hspace {-.6em} l})^{n+1}=[n]_q!\ \mbox{\rm 1\hspace {-.6em} l}^{\otimes(n+2)} \] and \[ d^n_1(x)=[n]_q!(\mbox{\rm 1\hspace {-.6em} l}\otimes \mbox{\rm 1\hspace {-.6em} l})^{n-1}d(x)=[n]_q!\ \mbox{\rm 1\hspace {-.6em} l}^{\otimes n}d(x) \] for $x\in {\cal A}$ with $d(x)=\mbox{\rm 1\hspace {-.6em} l}\otimes x-x\otimes \mbox{\rm 1\hspace {-.6em} l}=d_1(x)$.\\ \noindent {\bf Remark 5.} One can extend the setting of pre-cosimplicial algebras to a framework of monoidal categories of pre-cosimplicial modules. More precisely let $E$, $F$ and $G$ be three pre-cosimplicial modules with coface homomorphisms denoted by $\mathfrak f_i$ and assume that one has bilinear mappings $\cup:E^a\times F^b\rightarrow G^{a+b}$ such that \[ \mathfrak f_i(\alpha\cup \beta)=\left\{\begin{array}{ll} \mathfrak f_i(\alpha)\cup\beta & \mbox{if}\ \ i\leq a\\ \alpha\cup \mathfrak f_{i-a}(\beta) & \mbox{if}\ \ i>a \end{array}\right., \ \ i\in \{0,\dots,a+b+1\} \] and such that\\ \[ \mathfrak f_{a+1}(\alpha)\cup\beta = \alpha\cup \mathfrak f_0(\beta),\ \ \forall \alpha\in E^a\ \ \mbox{and}\ \ \forall \beta\in F^b. \] Then one has $d_1(\alpha\cup\beta)=d_1(\alpha)\cup \beta+q^a\alpha \cup d_1(\beta)$, with $d_1$ defined as in Section~3.
This applies in particular to a generalization of Example 2 by taking for $E$ the simplicial forms with coefficients in a $\mathbf k\ $-module $\cal E$, for $F$ the simplicial forms with coefficients in a $\mathbf k\ $-module ${\cal F}$ and for $G$ the simplicial forms with coefficients in the $\mathbf k\ $-module $\cal E \otimes \cal F$, $\cup$ being then the tensor product over $\mathbf k\ $ combined with the product of simplicial forms. This also applies to Hochschild cochains by taking $E=C({\cal A},{\cal M})$, $F=C({\cal A},{\cal N})$ and $G=C({\cal A},{\cal M}\otimes_{{\cal A}}{\cal N})$ where ${\cal M}$ and ${\cal N}$ are ${\cal A}-{\cal A}$ bimodules; the bilinear mapping $\cup$ is then the tensor product over ${\cal A}$ and corresponds to the usual cup product, (see \cite{D-VK}).\\ \noindent {\bf Construction 3: Universal $q$-differential envelopes.}\\ Let ${\cal A}$ be an associative unital $\mathbf k\ $-algebra and consider the following category $q_{{\cal A}}$. An object of $q_{{\cal A}}$ is a graded $q$-differential algebra $\Omega=\oplus_{n\in \mathbb N}\Omega^n$ together with an homomorphism of unital algebras $\varphi:{\cal A} \rightarrow \Omega^0$; a morphism of $(\Omega, \varphi)$ into $(\Omega',\varphi')$ is an homomorphism $\psi:\Omega\rightarrow \Omega'$ of graded $q$-differential algebras such that $\varphi'=\psi\circ \varphi:{\cal A}\rightarrow \Omega^{\prime 0}$. An object of $q_{{\cal A}}$ will be referred to as a $q$-{\sl differential calculus for} ${\cal A}$. An important property of $q_{{\cal A}}$ is that it possesses an initial universal object. Namely there exists a graded $q$-differential algebra $\Omega_q({\cal A})$ and an homomorphism of unital algebras $\varphi_q$ of ${\cal A}$ into $\Omega^0_q({\cal A})$ such that, for any $q$-differential calculus $(\Omega,\varphi)$ for ${\cal A}$, there is a unique homomorphism $\psi_\Omega:\Omega_q({\cal A})\rightarrow \Omega$ of graded $q$-differential algebras satisfying $\varphi=\psi_\Omega\circ \varphi_q$. Clearly $(\Omega_q({\cal A}),\varphi_q)$ is unique up to an isomorphism and will be referred to as {\sl the universal $q$-differential calculus for} ${\cal A}$ and the graded $q$-differential algebra $\Omega_q({\cal A})$ will be called {\sl the universal $q$-differential envelope of ${\cal A}$}. It is straightforward that $\varphi_q$ is an isomorphism and we shall identify ${\cal A}$ with $\Omega^0_q({\cal A})$. One constructs easily $\Omega_q({\cal A})$ by generators and relations \cite{D-VK}: it is the graded algebra generated by ${\cal A}$ in degree 0 and by the $d^\ell({\cal A})$ in degrees $\ell$ for $\ell\in \{1,\dots,N-1\}$ together with the relations of ${\cal A}$ and the relations \[ d^n(xy)=\sum^n_{m=0}\left[\begin{array}{c} n\\ m \end{array} \right]_q d^m(x)d^{n-m}(y)\hspace{1cm} \forall x,y\in {\cal A} \] and one defines $d$ on $\Omega_q({\cal A})$ by the graded $q$-Leibniz rule and $d^N=0$ (from $d$ on ${\cal A}$).
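For orientation, we record a small worked instance of these relations (added here purely for illustration): taking $N=3$, the case $n=2$ of the relation above reads
\[
d^2(xy)=d^2(x)\,y+(1+q)\,d(x)\,d(y)+x\,d^2(y), \qquad \mbox{since}\ \left[\begin{array}{c} 2\\ 1 \end{array}\right]_q=[2]_q=1+q.
\]
Moreover, for $q$ a primitive $N$-th root of unity one has $[N]_q=1+q+\dots+q^{N-1}=0$, hence $[n]_q!=0$ for all $n\geq N$; this is consistent with $d^N=0$ and with the formula $d^n_1(\mbox{\rm 1\hspace {-.6em} l}\otimes\mbox{\rm 1\hspace {-.6em} l})=[n]_q!\ \mbox{\rm 1\hspace {-.6em} l}^{\otimes(n+2)}$ of Example 4.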
One has $d(\mbox{\rm 1\hspace {-.6em} l})=0$ and the $\mathbf k\ $-modules $d^\ell({\cal A})$ are all isomorphic to ${\cal A}/\mathbf k\ \mbox{\rm 1\hspace {-.6em} l}$.\\ The graded $q$-differential algebra ($\mathfrak T({\cal A}),d_1$) of Example 4 together with the identity ${\cal A}=\mathfrak T^0({\cal A})$ is a graded $q$-differential calculus for ${\cal A}$ and therefore there is a unique homomorphism of graded $q$-differential algebras of $\Omega_q({\cal A})$ into $\mathfrak T({\cal A})$ which extends the identity mapping of ${\cal A}$ onto itself, and the following result is not very hard to prove \cite{D-VK}. \begin{theorem} The unique homomorphism of graded $q$-differential algebras of $\Omega_q({\cal A})$ into $\mathfrak T({\cal A})$ equipped with $d_1$ which induces the identity mapping of ${\cal A}$ onto itself is injective. \end{theorem} It follows that we can identify $\Omega_q({\cal A})$ with the graded $q$-differential subalgebra of ($\mathfrak T({\cal A}),d_1$) generated by ${\cal A}(\subset \mathfrak T({\cal A}))$. \begin{lemma} Let ${\cal A}^\ast$ be the dual module of ${\cal A}$. The following conditions $(i)$ and $(ii)$ are equivalent for the algebra ${\cal A}$.\\ $(i)$ The canonical bilinear mapping of ${\cal A}^\ast\times {\cal A}$ into $\mathbf k\ $ is surjective.\\ $(ii)$ There exists $\omega\in {\cal A}^\ast$ such that $\omega(\mbox{\rm 1\hspace {-.6em} l})=1$. \end{lemma} \noindent {\bf Proof.} $(ii) \Rightarrow (i)$ is obvious since the canonical image of ${\cal A}^\ast\times {\cal A}$ is an ideal of $\mathbf k\ $. Assume that $(i)$ is satisfied so that there exist $\omega_0\in {\cal A}^\ast$ and $x_0\in {\cal A}$ such that $\omega_0(x_0)=1$; then $\omega\in {\cal A}^\ast$ defined by $\omega(x)=\omega_0(x_0x)$ satisfies $\omega(\mbox{\rm 1\hspace {-.6em} l})=\omega_0(x_0)=1$, so $(ii)$ holds. $\square$
points comply with \textit{all} flavour bounds included in our study, as described in Section~\ref{sec:lqconstraints}. The lowest $\Delta\chi^2$ region (dark red ellipsoid) suggests that the best fit scenario corresponds to New Physics dominantly coupling to muons and taus. We stress that the patterns emerging from the $\Delta\chi^2$ distribution are not an artefact of some particular assumption imposed on the couplings, but rather the result of a very general scan over the full set of (mixing) parameters. \medskip The right panel of Fig.~\ref{fig:nuni} offers a projection of the viable points (displayed on the left panel) in the plane of the most constraining observables, CR($\mu-e$,N) and BR($K_L \to \mu^\pm e^\mp$). It is interesting to notice that, to a very good approximation, most of the currently phenomenologically viable points lie within the future reach of the upcoming dedicated muon-electron conversion facilities (COMET and Mu2e). In the near future, and should the $B$-meson decay anomalies be confirmed, an explanation in terms of such a minimal leptoquark framework could be probed via its impact on cLFV observables, in particular $\mu-e$ conversion in nuclei. However, the fact that this seems to be the preferred parameter space might be an artefact of the non-unitary parametrisation of leptoquark couplings. Therefore, in the next sections, we will explore the interplay of different constraints on the leptoquark couplings, taking them as independent parameters (recall that, due to the number of vector-like leptons $n\geq 2$, all entries in $K_L$ can be viewed as independent). We will further discuss in detail the impact of future negative searches concerning LFV observables. \mathversion{bold} \section{Towards a global fit of the vector leptoquark $V_1$ flavour structure} \mathversion{normal} \label{sec:lqfit} Having established that, in order to address the $B$-meson decay anomalies, the flavour structure of the leptoquark couplings is necessarily non-unitary, we now carry out a comprehensive fit of the relevant couplings of the vector leptoquark to the different generations of SM fermions. Relying on the simplified-model parametrisation (cf. Section~\ref{sec:simplifiedmodel}), our goal is thus to constrain the entries of the matrix $K_L$ (see Eq.~(\ref{eq:lagrangian:Vql_phys3})). Under the assumption that the relevant couplings are real, a total of nine free parameters will thus be subject to a large number of constraints stemming from data on several SM-allowed leptonic and semi-leptonic meson decays, SM-forbidden cLFV transitions and decays, as well as from an explanation of the (anomalous) observables in the $b \to s \ell \ell$ and $b \to c \tau\nu$ systems. \noindent \paragraph{Data relevant for the global fit} In particular, we take into account the data for the charged current $b\to c\ell\nu$ processes (see Appendix~\ref{app:BFCCC}). In addition to the LFUV ratios $R_{D^{(\ast)}}$~\cite{Abdesselam:2017kjf, Abdesselam:2018nnh,Aaij:2015yra,Aaij:2017uff,Huschle:2015rga,Hirose:2016wfn,Abdesselam:2019dgh,Lees:2013uzd}, we also include the binned branching fractions of $B\to D^{(\ast)} \ell\nu$ decays~\cite{Aubert:2008yv, Aubert:2007qs,Urquijo:2006wd, Aubert:2009qda}, as listed in Table~\ref{tab:binned_bcellnu}. Furthermore, we take into account a large array of $b\to s\ell\ell$ decays, as listed in Appendix~\ref{app:bsll}.
This includes the binned data of the angular observables in the optimised basis~\cite{Descotes-Genon:2013vna} (Table~\ref{tab:ang_data}), the differential branching ratios (Table~\ref{tab:br_bsll}), and the binned LFUV observables (Table~\ref{tab:lfuv_bsll}). In addition to the binned data, we also include the unbinned data of branching ratios in $B_{(s)}\to \ell\ell$~\cite{Chatrchyan:2013bka, Aaij:2017vad, Aaboud:2018mst, Sirunyan:2019xdu,Aaij:2020nol} and inclusive and exclusive branching ratio measurements of $b\to s\gamma$~\cite{Amhis:2014hma,Misiak:2017bgg,Dutta:2014sxo, Aaij:2012ita}. Beyond studying the contributions of the vector leptoquark in the ``anomalous'' channels, we aim to estimate the favoured ranges of all of its couplings to SM fermions. Consequently, we include a large number of additional observables into the likelihoods. Since most processes only constrain a product of at least two distinct leptoquark couplings, a successful strategy is to include an extensive set of processes, thus allowing us to constrain distinct combinations of couplings (as many as possible). In addition to the $b\to c\ell\nu$ transitions, we also include certain $b\to u\ell\nu$ decays such as $B^0\to\pi\tau\nu$, $B^+\to\tau\nu$ and $B^+\to\mu\nu$, which are listed in Table~\ref{tab:buellnu}. In many leptoquark models $B\to K^{(\ast)}\nu\bar\nu$ decays provide very stringent constraints. However, this is not the case for $V_1$ vector leptoquarks, due to the $SU(2)_L$-structure: the relevant operators for $B\to K^{(\ast)}\nu\bar\nu$ transitions are absent at the tree-level, and are only induced at higher order, thus leading to weaker constraints. Due to the leading operator being generated at the loop-level, a non-linear combination of leptoquark couplings is constrained by this process. Thus, despite the loop suppression, we include $B\to K\nu\bar\nu$ in the likelihoods, and use the data obtained by Belle~\cite{Grygier:2017tzo,Lutz:2013ftz} and BaBar~\cite{Lees:2013kla,delAmoSanchez:2010bk}. To constrain combinations of first and second generation couplings, we further include a large number of binned and unbinned leptonic and semi-leptonic charged current $D$ meson decays, charged and neutral current Kaon decays and SM-allowed $\tau$-lepton decays. The observables and corresponding data-sets can be found in Appendix~\ref{app:sctau} and are listed in Tables~\ref{tab:binned_charm} through \ref{tab:fcnc_strange}. Finally, as previously discussed, cLFV processes impose severe constraints on the parameter space of vector leptoquark couplings; in particular neutrinoless $\mu-e$ conversion in nuclei and the decay $K_L\to e^\pm\mu^\mp$ provide some of the most stringent constraints for vector leptoquark couplings to the first two generations of leptons~\cite{Hati:2019ufv}. Recall that in Table~\ref{tab:important_LFV} we present the current experimental bounds and future sensitivities for various cLFV observables yielding relevant constraints to our analysis. Depending on the fit set-up, either only a few or all of these observables are included in the global likelihood, as explicitly mentioned in the following paragraphs. \paragraph{Results for the simplified-model fit of the $V_1$ couplings} Firstly, it is important to emphasise that in our analysis we consider all the entries in the $K_L$ coupling matrix as (real) \textit{free parameters to be determined by the fit}.
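To make the fitting procedure more concrete, we sketch below how a global likelihood over the nine real entries of $K_L$ can be sampled with a random-walk Metropolis algorithm. This is our own minimal stand-in, not the analysis code behind the results quoted here: the three Gaussian pseudo-observables, all numerical values, and the function names are hypothetical placeholders for the full set of constraints described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: each "observable" is a function of the 3x3
# coupling matrix K_L with a made-up central value and uncertainty.
def predictions(KL):
    return np.array([KL[2, 2] * KL[1, 2],   # mimics a b -> c tau nu combination
                     KL[1, 1] * KL[2, 1],   # mimics a b -> s mu mu combination
                     KL[0, 0] * KL[0, 1]])  # mimics a cLFV-type product

measured = np.array([0.10, -0.05, 0.0])
sigma = np.array([0.02, 0.01, 0.001])       # the last imitates a tight LFV bound

def chi2(KL):
    return np.sum(((predictions(KL) - measured) / sigma) ** 2)

# Random-walk Metropolis over the nine free parameters.
KL, chain = np.zeros((3, 3)), []
current = chi2(KL)
for _ in range(20000):
    proposal = KL + 0.02 * rng.standard_normal((3, 3))
    new = chi2(proposal)
    if new < current or rng.random() < np.exp(-(new - current) / 2.0):
        KL, current = proposal, new
    chain.append(KL.copy())

samples = np.array(chain[5000:])            # drop burn-in
print("posterior mean of K_L:\n", samples.mean(axis=0).round(3))
\end{verbatim}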
For the leptoquark mass we choose three benchmark points, $m_{V_1} \in \{1.5,\,2.5,\,3.5\}\:\mathrm{TeV}$, which allow us to illustrate most of the vector leptoquark mass range of interest, while respecting the current bounds from direct searches at colliders~\cite{Khachatryan:2014ura,Aad:2015caa,Aaboud:2016qeg,Aaboud:2019jcc,Aaboud:2019bye,Aad:2020iuy,Sirunyan:2017yrk,Sirunyan:2018vhk,Sirunyan:2018kzh}. In particular, notice that masses significantly heavier than a few TeV preclude a successful explanation of the charged current anomalies, $R_{D^{(*)}}$. For each mass benchmark point we thus obtain best-fit points corresponding to an SM pull of around $6.4\,\sigma$ (with respect to the global likelihood including all lepton flavour conserving observables). \begin{figure}[h] \centering \includegraphics[width = 0.6\textwidth]{figs/LQ20/KL_CR_woLFV_scatter.pdf} \caption{Result of a random scan around the best-fit point (without the inclusion of cLFV bounds on $\mathrm{CR}(\mu - e, \mathrm{Au})$ and $\mathrm{BR}(K_L\to e^\pm\mu^\mp)$ as inputs to the fit). Following a sampling of the global likelihood(s) via MCMC, the sample points shown in the plot are drawn from the posterior distributions of the leptoquark couplings (cf. Appendix~\ref{app:stats}). The colour scheme reflects the mass benchmark points: blue, orange and green respectively associated with $m_V$=1.5~TeV, 2.5~TeV and 3.5~TeV. The dashed lines indicate the current bounds at $90\,\%$ C.L., while the dotted line denotes the envisaged future sensitivity of the COMET and Mu2e experiments (for Aluminium nuclei). Figure from~\cite{Hati:2020cyn}.} \label{fig:woLFV_KL_CR} \end{figure} In Fig.~\ref{fig:woLFV_KL_CR}, we present the results of a random scan around the best-fit points for the vector leptoquark scenario here considered, in the plane spanned by two of the most constraining cLFV observables, $\mathrm{CR}(\mu - e, \mathrm{N})$ and $\mathrm{BR}(K_L\to e^\pm\mu^\mp)$. The sample points are drawn from the posterior distributions of the leptoquark couplings, following Markov Chain Monte Carlo (MCMC) simulations, as described in Appendix~\ref{app:stats}. It can be easily seen that for the three mass benchmark choices (corresponding to the different colours in the plot) most of the randomly sampled points are excluded by the strong cLFV constraints. While the couplings involved are compatible with $0$, the constraints on first generation couplings derived from lepton flavour conserving low-energy data (as listed in Appendix~\ref{app:Obs}) are considerably weaker than those from LFV processes; this leads to several ``flat directions'' in the likelihood. The strongest LFV constraints are from $\mathrm{CR}(\mu - e, \mathrm{Au})$ and $\mathrm{BR}(K_L\to e^\pm\mu^\mp)$, while other LFV constraints on second and third generation couplings are weaker, or on par with constraints from lepton flavour conserving low-energy data. Therefore, we redefine the strategy of the global fit, and now directly include the upper bounds from $\mathrm{CR}(\mu - e, \mathrm{Au})$ and $\mathrm{BR}(K_L\to e^\pm\mu^\mp)$ as \textit{inputs} in the fitting procedure for the vector leptoquark couplings. The inclusion of the current upper limits on the observables $\mathrm{CR}(\mu - e, \mathrm{Au})$ and $\mathrm{BR}(K_L\to e^\pm\mu^\mp)$ as input to the fit will consequently
the proof of Sauer's lemma because we were running out of time. #### Class 26 (Mar 22, 2019) Finished the proof of Sauer's lemma. That is $\Pi_{\mathcal{H}}(m) \le \sum_{i=0}^d \binom{m}{i} = \Phi_d(m)$. (Sauer's lemma is proved in the Kearns-Vazirani book in Lemma 3.1.) Discussed adversarial machine learning (poisoning and evasion attacks) based on the slides that we have. #### Class 27 (Mar 25, 2019) Proof of the Fundamental Theorem of Learning Theory (upper bound on the number of examples needed for PAC learning in the consistency model, using a hypothesis class with finite VC dimension). In fact we proved that, with high probability (at least $1-\delta$), a consistent hypothesis $h\in\mathcal{H}$ on the training sample $S$ will have error $\varepsilon$ not more than $\frac{2}{m}\cdot\left(\lg\left(\Pi_{\mathcal{H}}(2m)\right) + \lg\left(\frac{2}{\delta}\right)\right)\,.$ (This is proved in Section 3.5 from the Kearns-Vazirani book; a short numerical sketch of this bound appears at the end of this entry.) A homework exercise will involve using Sauer's lemma in order to substitute $\Pi_{\mathcal{H}}(m)$ with the VC dimension $d$ of the hypothesis class $\mathcal{H}$. #### Class 28 (Mar 27, 2019) Discussion on different noise models. • Bob Sloan has a nice description of several cases of noise models in Four types of noise in data for PAC learning. (Up to and including Section 2.2 is mandatory reading; the rest is optional and in fact if you want you can present it as a 1-person project.) • Apart from the models mentioned in Bob Sloan's paper we also mentioned the nasty noise model as a bounded-budget version of the malicious noise model. We also made a related point here when discussing poisoning attacks (p-tampering being the analogue of malicious noise and strong "clean label" p-budget attacks being the analogue of nasty noise). Showed that the malicious noise rate that can be tolerated by PAC learning algorithms is less than $\varepsilon$, where $\varepsilon$ is the usual parameter for the required accuracy in PAC learning. The result that we showed was a simple variant of the result of Kearns and Li from their paper Learning in the Presence of Malicious Errors, using the method of induced distributions; in particular Theorem 1 from that paper. Here is what we discussed in class; this is what you need to know. The rest of the Kearns and Li paper is optional reading and perhaps you can present a few results from their paper as a 1-person or a 2-person project (after first agreeing with me). The malicious noise model was introduced by Valiant in the paper Learning Disjunctions of Conjunctions (optional reading). #### Class 29 (Mar 29, 2019) Homework 5 announced. Discussion on random classification noise. Proved a theorem for PAC learning in the presence of random classification noise by minimizing disagreements. This is Theorem 2 from the paper by Angluin and Laird, Learning From Noisy Examples. Next time we will discuss Theorem 4 from that paper, which is a hardness result related to the problem of minimizing disagreements. In other words, while this time we resolved positively the statistical question for PAC learnability based on minimizing disagreements, next time we will see a hardness result for a fairly simple concept class (monotone conjunctions). Here are some notes by Javed Aslam, who was discussing these two theorems in 1994 at MIT.
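As a quick numerical companion to the Class 27 bound (this sketch is my own addition, not part of the course materials), here is how to evaluate $\Phi_d(m) = \sum_{i=0}^d \binom{m}{i}$ and plug Sauer's lemma $\Pi_{\mathcal{H}}(2m) \le \Phi_d(2m)$ into the error bound:

```python
from math import comb, log2

def phi(d, m):
    """Sauer's bound Phi_d(m) = sum_{i=0}^{d} C(m, i)."""
    return sum(comb(m, i) for i in range(d + 1))

def error_bound(m, d, delta):
    """Error of a consistent hypothesis, using Pi_H(2m) <= Phi_d(2m)."""
    return (2.0 / m) * (log2(phi(d, 2 * m)) + log2(2.0 / delta))

# Example: VC dimension 10, 10,000 samples, confidence 1 - delta = 0.95.
print(error_bound(m=10_000, d=10, delta=0.05))  # roughly 0.025
```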
In closing, one idea for your in-class presentation would be to present something from this paper of Angluin and Laird: the discussion in Section 3 and the positive result that is presented there. Also, as a last remark, Theorem 3 in that paper waives the requirement of knowing an upper bound $\eta_b$ for the noise rate $\eta$: by requesting a few more examples, the learner is able to determine a good enough upper bound $\eta_b$ for the true noise rate $\eta$. #### Class 30 (Apr 1, 2019) As promised last time, we proved Theorem 4 from the Angluin & Laird paper Learning From Noisy Examples. In other words, we showed that finding a hypothesis that minimizes disagreements with a noisy sample is an NP-hard problem even for the case of monotone conjunctions. We discussed a few high-level ideas about statistical queries. We will use the notes by Javed Aslam again, which you can find here. (These notes are based on Sections 1-3 from the paper Improved Noise-Tolerant Learning and Generalized Statistical Queries, by Javed A. Aslam and Scott E. Decatur.) The Statistical Query model was introduced by Michael Kearns in the paper Efficient Noise-Tolerant Learning from Statistical Queries; this paper however is optional reading. #### Class 31 (Apr 3, 2019) Discussed on the board SQ-learning of monotone conjunctions. This is very close to the first analysis that we used as an introduction to PAC learning discrete concept classes in this course, after the axis-aligned rectangles. You may want to re-read Section 1.3 from the Kearns-Vazirani book to pinpoint differences and similarities in these two approaches. Note that Section 1.3 from the Kearns-Vazirani book is about general conjunctions whereas we discussed monotone conjunctions this time. Furthermore, the ideas from the analysis in the SQ model extend naturally to the case where we have random classification noise (though we have only just started to address this issue). Then we derived a formula for computing $P_{\chi}$, the probability that the statistical query $\chi$ is satisfied (and which will subsequently be returned within $\tau$ of its true value by the STAT oracle). $P_{\chi} = Pr_{EX(c, \mathcal{D})}[\chi = 1] = \frac{(1-\eta)Pr_{EX_{CN}^\eta(c,\mathcal{D})}[\chi = 1] - \eta Pr_{EX_{CN}^\eta(\overline{c}, \mathcal{D})}[\chi=1]}{1-2\eta}\,.$ The good thing about this last formula is that we can compute the noise-free estimate that we want based on noisy estimates.
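The formula inverts cleanly in code. A tiny sketch (my own, assuming the noise rate $\eta < 1/2$ is known):

```python
def denoise_sq(p_noisy, p_noisy_flipped, eta):
    """Recover Pr[chi = 1] under EX(c, D) from the estimates taken under the
    noisy oracles EX_CN(c, D) and EX_CN(c-bar, D), for noise rate eta < 1/2."""
    return ((1 - eta) * p_noisy - eta * p_noisy_flipped) / (1 - 2 * eta)

# Example: eta = 0.2, noisy estimates 0.58 (target c) and 0.46 (complement).
print(denoise_sq(0.58, 0.46, 0.2))  # (0.8*0.58 - 0.2*0.46) / 0.6 = 0.62
```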
Weighted Majority Algorithm. #### Class 35 (Apr 12, 2019) Randomized Weighted Majority Algorithm. Winnow on learning monotone disjunctions with few relevant variables. Our discussion on variants of the Weighted Majority Algorithm, as well as on Winnow (next class we will discuss an application of Winnow to drifting targets), can be found in the book chapter by Avrim Blum, On-Line Algorithms in Machine Learning. The relevant discussion is Section 2 (Weighted Majority) as well as Theorems 5 and 8 (Winnow) from Section 3. This book chapter by Avrim Blum appears in the book "Online Algorithms: The State of the Art". Chapter 7 of Tom Mitchell's book also has related information: Section 7.5 discusses the Mistake Bound model of learning; in particular, Section 7.5.2 is about bounding the number of mistakes of the Halving Algorithm and Section 7.5.3 presents the lower bound on the number of mistakes of deterministic algorithms that is obtained by the VC dimension of the class. #### Class 36 (Apr 15, 2019) We proved Theorem 8 from Avrim Blum's survey paper on online algorithms, covering the case of Winnow being $O(\log(n))$-competitive against drifting target concepts. I gave a very brief presentation of evolvability. Here are some notes so that you can have an understanding of the field quickly: Notes on Evolution and Learning. Ignore the second algorithm from the notes (the (1+1) EA) and the discussion in Section 8. Here are some slides
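To make the Randomized Weighted Majority discussion concrete, here is a minimal sketch (my own, following the standard algorithm rather than any code shown in class) with multiplicative penalty $\beta$:

```python
import random

def randomized_weighted_majority(expert_preds, outcomes, beta=0.5, seed=0):
    """expert_preds[t][i]: expert i's 0/1 prediction in round t; outcomes[t]:
    the true label. Follows an expert sampled proportionally to its weight and
    multiplies the weight of every erring expert by beta. Returns mistakes."""
    rng = random.Random(seed)
    n = len(expert_preds[0])
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        i = rng.choices(range(n), weights=w)[0]   # sample an expert ~ w
        mistakes += (preds[i] != y)
        w = [wi * beta if p != y else wi for wi, p in zip(w, preds)]
    return mistakes

# Toy run: expert 0 is always right, expert 1 always wrong.
labels = [0, 1, 1, 0, 1, 0, 0, 1]
preds = [(y, 1 - y) for y in labels]
print(randomized_weighted_majority(preds, labels))
```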
\overline{D_p}$ is the same as localization by $D_p$, as in Equation \ref{eqn: localization_isomorphism}. \end{remark} Finally, we prove the `only if' part of claims (2), (3). Suppose that $(B^{2n}_{std}, \Lambda_P)$ is a Weinstein subdomain of $(B^{2n}_{std}, \Lambda_Q)$ but $Q \not \subset P$ and $0 \not \in P$. There would be a localization functor from the Fukaya category of $(B^{2n}_{std}, \Lambda_Q)$ to that of $(B^{2n}_{std}, \Lambda_P)$ over any coefficient ring $R$. However, if we take $R = \mathbb{F}_q$ for any $q \in Q \backslash P$, we have $D_q \cong \mathrm{cone}(0_{T^*_0 D^n}) \cong T^*_0 D^n[1]\oplus T^*_0 D^n$ in $\mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_\varnothing; \mathbb{F}_q)$ since $q \equiv 0$ in $\mathbb{F}_q$. This object split-generates $\mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_\varnothing; \mathbb{F}_q)$ and so $$ \mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_Q; \mathbb{F}_q) \cong \mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_\varnothing; \mathbb{F}_q)/D_q \cong 0 $$ On the other hand, all $p\in P$ are invertible in $\mathbb{F}_q$ because $q \in Q \backslash P$ by assumption and $p \ne 0$. Therefore $D_p \cong \mathrm{cone}(p \cdot \mathrm{Id}_{T^*_0 D^n}) \cong 0$ in $\mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_\varnothing; \mathbb{F}_q)$ for all $p \in P$ and so $$ \mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_P; \mathbb{F}_q) \cong \mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_\varnothing; \mathbb{F}_q)/0\cong \mathrm{Tw}\, \mathcal{W}(B^{2n}_{std}, \Lambda_\varnothing; \mathbb{F}_q) \cong \mathrm{Tw}\, \mathbb{F}_q $$ which is non-trivial. Since there cannot be a localization functor from the trivial category to $\mathrm{Tw}\, \mathbb{F}_q$, $(B^{2n}_{std}, \Lambda_P)$ cannot be a Weinstein subdomain of $(B^{2n}_{std}, \Lambda_Q)$. This proves the `only if' part of claim (2). If there is a smoothly trivial regular Lagrangian cobordism from $\Lambda_P$ to $\Lambda_Q$ in $\partial B^{2n}_{std} \times [0,1]$, then $(B^{2n}_{std}, \Lambda_P)$ is a Weinstein subdomain of $(B^{2n}_{std}, \Lambda_Q)$ and so the `only if' part of claim (3) follows from that for claim (2). \end{proof} Now we show that Theorem \ref{thm: p-handles} implies Theorem \ref{thm: exotic_subdomains} concerning Weinstein subdomains of an \textit{arbitrary} Weinstein domain. Recall that an index $n$ Weinstein handle can be viewed as the stopped domain $(T^*D^n, \partial D^n) = (B^{2n}_{std}, \Lambda_{\varnothing})$. We will consider the stopped domains $(B^{2n}_{std}, \Lambda_P)$ in Theorem \ref{thm: p-handles} as generalized Weinstein handles. \begin{definition} A $P$-Weinstein handle of index $n$ is the stopped domain $(B^{2n}_{std}, \Lambda_P)$. \end{definition} Here our model for the $P$-Weinstein handle uses explicit embeddings of Moore spaces into $S^{n-1}$ and hence is well-defined. When attaching Weinstein handles, one implicitly uses the canonical parametrization of $\partial D^n \subset T^*D^n$. Via the construction in the proof of Theorem \ref{thm: p-handles}, this parametrization gives the Legendrians $\Lambda_P \subset \partial B^{2n}$ a parametrization as well. Therefore, given a parametrized Legendrian sphere $\Lambda$ in a contact manifold $(Y, \xi)$, we can attach a $P$-Weinstein handle $(B^{2n}_{std}, \Lambda_P)$ to it and produce a Weinstein cobordism, just like we do for usual Weinstein handles.
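As an aside, the cone computations used in the proof above reduce to elementary linear algebra over $\mathbb{F}_q$: the two-term complex $\mathbb{Z} \xrightarrow{\ p\ } \mathbb{Z}$ becomes multiplication by $p \bmod q$, which is an isomorphism (acyclic cone) when $p$ is invertible, and zero (the cone splits as a shift plus the identity) when $p \equiv 0$. The toy sketch below (ours, illustrating only this algebraic point and not the Fukaya-categorical statement) records the homology dimensions:
\begin{verbatim}
def cone_homology_dims(p, q):
    """Homology of cone(p: F_q -> F_q), i.e. the complex F_q --p--> F_q.
    Returns (dim H_1, dim H_0) over the field F_q (q prime)."""
    if p % q == 0:
        return (1, 1)  # cone(0): kernel and cokernel are both 1-dimensional
    return (0, 0)      # multiplication by a unit: the cone is acyclic

print(cone_homology_dims(p=5, q=3))  # (0, 0): 5 is invertible mod 3
print(cone_homology_dims(p=3, q=3))  # (1, 1): 3 = 0 mod 3, shift + identity
\end{verbatim}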
To prove Theorem \ref{thm: exotic_subdomains}, we replace all standard Weinstein $n$-handles $(B^{2n}_{std}, \Lambda_\varnothing)$ with Weinstein $P$-handles $(B^{2n}_{std}, \Lambda_P)$. \begin{proof}[Proof of Theorem \ref{thm: exotic_subdomains}] Let $X^{2n}$ be a Weinstein domain with $n \ge 5$ and $C_1^n, \dotsc, C_k^n \subset X^{2n}$ the Lagrangian co-core disks of its index $n$ handles $H^n_1, \dotsc, H^n_k$. Hence there is a subcritical Weinstein domain $X_0 \subset X$ and Legendrian spheres $\Lambda_1, \dotsc, \Lambda_k \subset \partial X_0$ so that $X = X_0 \cup H^n_{\Lambda_1} \cup \dotsb \cup H^n_{\Lambda_k}$ and the co-core of $H^n_{\Lambda_i}$ is $C_i \subset X$. That is, $X_0$ is obtained from $X$ by carving out the Lagrangian disks $C_1, \dotsc, C_k$. This gives the following decomposition of $X:$ \begin{equation}\label{eqn: decomposition_weinstein_handles} X = (X_0, \Lambda_1, \dotsc, \Lambda_k) \cup_{\Lambda_1 = \Lambda_\varnothing} (B^{2n}_{std}, \Lambda_\varnothing) \cup \dotsb \cup_{\Lambda_k = \Lambda_\varnothing }(B^{2n}_{std}, \Lambda_\varnothing) \end{equation} where the $i$th copy of $(B^{2n}_{std}, \Lambda_\varnothing)$ is glued to $X_0$ by identifying $\Lambda_\varnothing$ with $\Lambda_i$. Now we define $X_P$ to be the following Weinstein domain: \begin{equation}\label{eqn: decomposition_weinstein_P_handles} X_P:= (X_0, \Lambda_1, \dotsc, \Lambda_k) \cup_{\Lambda_1 = \Lambda_P} (B^{2n}_{std}, \Lambda_P) \cup \dotsb \cup_{\Lambda_k = \Lambda_P }(B^{2n}_{std}, \Lambda_P) \end{equation} Namely, we replace each standard Weinstein $n$-handle $(B^{2n}_{std}, \Lambda_\varnothing)$ by a $P$-Weinstein handle $(B^{2n}_{std}, \Lambda_P)$. \begin{remark}\label{rem: p-handles_isotropicsum} We note that attaching $P$-Weinstein handles $(B^{2n}_{std}, \Lambda_P)$ to $(X_0, \Lambda_1, \dotsc, \Lambda_k)$ is the same as attaching standard Weinstein handles $(B^{2n}_{std}, \Lambda_\varnothing)$ to $X_0$ with some modified attaching Legendrian $\Lambda_i^P \subset \partial X_0$. In fact, $\Lambda_i^P$ is the isotropic connected sum $\Lambda_i \natural \Lambda_P$ of $\Lambda_i \subset \partial X_0$ and $\Lambda_P \subset \partial B^{2n}_{std}$, which we place into a Darboux chart in $\partial X_0$ disjoint from $\Lambda_i$. To see this, note that gluing $(B^{2n}_{std}, \Lambda_P)$ to $(X_0, \Lambda_i)$ by identifying $\Lambda_P$ with $\Lambda_i \subset \partial X_0$ is the same as gluing a cylinder $T^*(S^{n-1} \times D^1)$ to $(X_0, \Lambda_i) \amalg (B^{2n}_{std}, \Lambda_P)$ by identifying $S^{n-1}\times 0$ with $\Lambda_i$ and $S^{n-1}\times 1$ with $\Lambda_P$. The cylinder can be decomposed into a standard Weinstein index $1$ handle and a standard Weinstein index $n$ handle. So we first do simultaneous index $1$ handle attachment to $(X_0, \Lambda_i)$ and $(B^{2n}_{std}, \Lambda_P)$, with attaching sphere a point in $\Lambda_i$ and a point in $\Lambda_P$, to produce $(X_0 \natural B^{2n}_{std}, \Lambda_i \natural \Lambda_P)$. If we identify $X_0 \natural B^{2n}$ with $X_0$, then $\Lambda_P$ becomes a Legendrian in $\partial X_0$ (in a Darboux chart disjoint from $\Lambda_i$) and $\Lambda_i\natural \Lambda_P$ is precisely the isotropic connected sum of $\Lambda_i$ and $\Lambda_P$ in $\partial X_0$. Then we attach the (standard) index $n$ Weinstein handle of the cylinder $T^*(S^{n-1}\times D^1)$ along $\Lambda_i \natural \Lambda_P$.
Thus, the decomposition of $X_P$ in Equation \ref{eqn: decomposition_weinstein_P_handles} can alternatively be described as \begin{equation}\label{eqn: decomposition_weinstein_handles_isotropic_sum} (X_0, \Lambda_1\natural \Lambda_P, \dotsc, \Lambda_k\natural \Lambda_P) \cup_{\Lambda_1\natural \Lambda_P = \Lambda_\varnothing} (B^{2n}_{std}, \Lambda_\varnothing) \cup \dotsb \cup_{\Lambda_k \natural \Lambda_P= \Lambda_\varnothing }(B^{2n}_{std}, \Lambda_\varnothing) \end{equation} In particular, the attaching spheres for the (standard) index $n$ handles for $X$ and $X_P$ differ by a purely local modification, namely an isotropic connected sum with $\Lambda_P$. \end{remark} Now Claims 1), 2), 3) in Theorem \ref{thm: exotic_subdomains} follow from the analogous claims in Theorem \ref{thm: p-handles}. For example, $X_\varnothing = X$ since $(B^{2n}_{std}, \Lambda_{\varnothing})$ is the standard Weinstein handle $(T^*D^n, \partial D^n)$. Also, since $(B^{2n}_{std}, \Lambda_P)$ is a Weinstein subdomain of $(B^{2n}_{std}, \Lambda_Q)$ for $Q \subset P$, $X_P$ is a Weinstein subdomain of $X_Q$ and this Weinstein embedding is also functorial with respect to inclusions of various subsets of primes. If $0 \in P$, then $X_P$ is flexible. To see this, recall that $\Lambda_P \subset \partial B^{2n}_{std}$ is loose by Theorem \ref{thm: p-handles}; this implies that the attaching spheres $\Lambda_i^P \subset \partial X_0$ for $X_P$ are also loose since by Remark \ref{rem: p-handles_isotropicsum}, $\Lambda_i^P$ is the isotropic connected sum of $\Lambda_i$ with $\Lambda_P$, which is a loose Legendrian loosely embedded in a Darboux chart disjoint from $\Lambda_i$. If $0 \in Q \subset P$, then the cobordism between $X_P$ and $X_Q$ is flexible since the cobordism between $(B^{2n}_{std}, \Lambda_P)$ and $(B^{2n}_{std}, \Lambda_Q)$ is also flexible (in the complement of $\Lambda_P$). Finally, we compute $\mathrm{Tw}\, \mathcal{W}(X_P; \mathbb{Z})$. Since $X_P$ is a Weinstein subdomain of $X$, there is a Viterbo transfer functor: $$ \mathrm{Tw}\, \mathcal{W}(X; \mathbb{Z}) \rightarrow \mathrm{Tw}\, \mathcal{W}(X_P; \mathbb{Z}) $$ As in the proof of Theorem \ref{thm: p-handles}, this functor is localization by $D_p \subset (T^*D^n, \partial D^n)$ (or equivalently by $D_p \natural \overline{D_p}$) and $D_p \cong \mathrm{cone}(p \cdot \mathrm{Id}_{T^*_0 D^n})$. On the other hand, $T^*_0 D^n \subset (T^*D^n, \partial D^n) = (B^{2n}_{std}, \Lambda_\varnothing)$ is precisely the co-core $C_i^n$ of $H^n_{\Lambda_i}$ under the decomposition of $X$ in Equation \ref{eqn: decomposition_weinstein_handles} and so $D_p$ is isomorphic to $\mathrm{cone}(p \cdot \mathrm{Id}_{C_i^n})$. By \cite{ganatra_generation, chantraine_cocores_generate}, the co-cores $C_i^n$ of all the $H^n_{\Lambda_i}$ generate $\mathrm{Tw}\, \mathcal{W}(X)$. So localizing by $\mathrm{cone}(p \cdot \mathrm{Id}_{C_i^n})$ for all $i$ is the same as localizing by $\mathrm{cone}(p \cdot \mathrm{Id}_{L})$ for all $L \in \mathrm{Tw}\, \mathcal{W}(X; \mathbb{Z})$. That is, $\mathrm{Tw}\, \mathcal{W}(X_P; \mathbb{Z})\cong \mathrm{Tw}\, \mathcal{W}(X; \mathbb{Z})[\frac{1}{P}]$ as desired. \end{proof} We observe that our construction of $X_P$ depends on many choices. For example, it depends on the choice of initial Weinstein presentation for $X$. 
There are Weinstein homotopic presentations for $X$ with different numbers of index $n$ handles; hence in this case, our construction would involve carving out different numbers of Lagrangian disks (and then attaching the appropriate flexible cobordism). There are also choices to be made in constructing the $P$-handles $(B^{2n}_{std}, \Lambda_P)$. We fixed a $p$-Moore space $U \subset S^{n-1}$ so that $\tilde{C}^*(U) = \mathbb{Z}[-2]
DMASS) or the mass itself (see MM).

## DPDGM and DPDGMASS: The mass difference to the nominal mass
Returns the difference with the PDG value of a particle mass. Useful to compute the difference with the measured mass. This calls the ParticlePropertySvc at every call and is therefore slow. Use DMASS('B0') rather than DPDGM if you know the PID.

## ID: Particle ID
Like ABSID but without the absolute value.

Requires that there are two electrons in the tree satisfying the given cut.

## NMASS: The nominal mass of a particle
Returns the PDG value of a particle mass. Useful to compute the difference with the measured mass. This calls the ParticlePropertySvc at every call and is therefore slow. Use ADMASS('B0') rather than abs(MM-NMASS) if you know the PID.

## P: Momentum
Gets the momentum of the particle.

# Recommended LoKi::Hybrid Functors
This lists the filters recommended for use in the HLT and the selections. See DaVinciTutorial4 for a hands-on tutorial. A longer, but not necessarily up-to-date, list can be found at LoKiParticleFunctions. It also contains examples on how to use these functors in C++ code. This page refers to DaVinci v20r0.

# Particle Functors
Some mnemonic rules:
• The functors with the prefix BPV deal with "the best primary vertex". The best primary vertex is extracted from the desktop-tool using the method IPhysDesktop::relatedVertex
• The functors with the suffix DV get some event data (e.g. the list of all vertices) through the desktop tool
• The functors with the suffix TES get the event data from the Transient Event Store
• The VD as a part of the functor name means that the functor deals with "vertex distance"

## ALL: All
Takes all particles. This is required if one wants to apply no cut.
%SYNTAX{ syntax="python" }% FilterDesktop.Filter.Code = "ALL" %ENDSYNTAX%

## ABSID: Absolute value of PID
Returns the absolute value of the PID. The following lines are equivalent:
%SYNTAX{ syntax="python" }% FilterDesktop.Filter.Code = "ABSID==211" FilterDesktop.Filter.Code = "ABSID=='pi+' " FilterDesktop.Filter.Code = "ABSID=='pi-' " %ENDSYNTAX%
Note the last line! The comparison (ABSID=='pi-') takes the absolute value on both sides. This avoids having to remember that the pi+ has a positive pid (211) while the mu+ has a negative pid (-13).

## ACHILD: Child of a combination
ACHILD(PT,2) returns the transverse momentum of the second particle in the combination. For example:
%SYNTAX{ syntax="python" }% CombinationCut = "ACHILD(PT,1)*ACHILD(PT,2)>1500000" %ENDSYNTAX%
See the functions AMAXCHILD or AMINCHILD to get the daughter particle by name.

## ADMASS: The absolute mass difference to the reference value
Calculates the absolute difference between the measured mass and the PDG reference value. It takes the pid of the reference particle as argument.
%SYNTAX{ syntax="python" }% FilterDesktop.Filter.Code = "(ADMASS('KS0')<50*MeV)" %ENDSYNTAX%

## AMINCHILD
E.g. the minimal PT for all negative kaons: AMINCHILD(PT,'K-'==ID)
%SYNTAX{ syntax="python" }% CombinationCut = "AMINCHILD(PT,'K-'==ID)*ACHILD(PT,'K+'==ID)>1500000" %ENDSYNTAX%
This is computationally expensive; using ACHILD is much quicker if possible.

## BPVDIRA: Direction angle
Computes the cosine of the angle between the momentum of the particle and the direction of flight from the best PV to the decay vertex.
%SYNTAX{ syntax="python" }% MotherCut = "(BPVDIRA>0.9999)" %ENDSYNTAX%

## BPVIPCHI2(): IP chi2 on related PV
Computes the chi2-IP on the related PV.
%SYNTAX{ syntax="python" }% MotherCut = "BPVIPCHI2()<25" %ENDSYNTAX%
TODO: So far it needs the ().

## BPVLTFITCHI2: the chi2 of the lifetime fit of the particle
The functor evaluates the chi2 of the lifetime fit of the particle using the ILifetimeFitter tool. This is probably the best measure of the consistency of the hypothesis that the particle originates from the given primary vertex. It is also probably the best "pointing" variable.
%SYNTAX{ syntax="python" }% MotherCut = "BPVLTFITCHI2('PropertimeFitter/properTime:PUBLIC')<16" %ENDSYNTAX%
The related primary vertex is extracted from the desktop; the fitter itself is specified by the tool name given in the argument.

## BPVLTIME: the proper lifetime of the particle
The functor evaluates the proper lifetime of the particle using the ILifetimeFitter tool. Unfortunately, due to very sad conventions adopted for LHCb, the proper time is measured in ns instead of the natural units.
%SYNTAX{ syntax="python" }% MotherCut = "BPVLTIME('PropertimeFitter/properTime:PUBLIC')>1.5" %ENDSYNTAX%
The related primary vertex is extracted from the desktop; the fitter itself is specified by the tool name given in the argument.

## BPVLTCHI2: the chi2-significance of the proper lifetime of the particle
The functor evaluates the chi2-significance of the proper lifetime of the particle using the ILifetimeFitter tool.
%SYNTAX{ syntax="python" }% MotherCut = "BPVLTCHI2('PropertimeFitter/properTime:PUBLIC')>9" %ENDSYNTAX%
The related primary vertex is extracted from the desktop; the fitter itself is specified by the tool name given in the argument.

## BPVLTSIGNCHI2: the signed chi2-significance of the proper lifetime of the particle
The functor evaluates the signed chi2-significance of the proper lifetime of the particle using the ILifetimeFitter tool.
%SYNTAX{ syntax="python" }% MotherCut = "BPVLTSIGNCHI2('PropertimeFitter/properTime:PUBLIC')>-4" %ENDSYNTAX%
The related primary vertex is extracted from the desktop; the fitter itself is specified by the tool name given in the argument.

## BPVVDCHI2: chi2-separation from related PV
Computes the chi2-distance from the related PV.
%SYNTAX{ syntax="python" }% MotherCut = "BPVVDCHI2>100" %ENDSYNTAX%

## BPVVDZ: z-distance between the end vertex of the particle and the related primary vertex
The functor computes the z-distance between the end vertex of the particle and the related primary vertex.
%SYNTAX{ syntax="python" }% MotherCut = "0<BPVVDZ" %ENDSYNTAX%
The concept and the name come from Sean Brisbane.

## BPVVDR: radial distance between the end vertex of the particle and the related primary vertex
The functor computes the radial (cylindrical) distance between the end vertex of the particle and the related primary vertex.
%SYNTAX{ syntax="python" }% MotherCut = "0.1<BPVVDR" %ENDSYNTAX%
The concept and the name come from Sean Brisbane.

## BPVVDRHO: rho-distance between the end vertex of the particle and the related primary vertex
The functor computes the rho-distance (cylindrical) between the end vertex of the particle and the related primary vertex.
%SYNTAX{ syntax="python" }% MotherCut = "0.1<BPVVDRHO" %ENDSYNTAX%
The concept and the name come from Sean Brisbane.

## CHILDCUT: Applies a cut to a given child
%SYNTAX{ syntax="python" }% FilterDesktop.Filter.Code = "CHILDCUT ( MIPCHI2DV ( PRIMARY ) > 1 , 2 )" %ENDSYNTAX%
In this example one applies an IP cut on the first daughter of the input particle. This requires knowing which is the first, second, etc. daughter. Can be useful when (N)INTREE won't work, like here for the slow pion in a D*+ decay, where searching for a pion in the tree would also return the daughters of the D0. Use the safer INTREE and NINTREE instead.

## INGENERATION: "in generation"
The predicate which checks the existence of a particle satisfying the requirements in the decay tree at the given depth.
%SYNTAX{ syntax="python" }% FilterDesktop.Code = "INGENERATION ( ( 'mu+'==ABSID) & ( PT > 2 * GeV ) , 2 ) " %ENDSYNTAX%
Requires the presence of at least one granddaughter muon for which the transverse momentum exceeds 2 GeV.

## INTREE: In tree
Requires there is a particle in the decay tree satisfying the requirements.
%SYNTAX{ syntax="python" }% FilterDesktop.Filter.Code = "(INTREE( (ID=='J/psi(1S)') & (BPVVDCHI2>25) ) )" %ENDSYNTAX%
Requires there is a J/psi(1S) in the tree more than 5 sigma away from the related PV.
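As a worked illustration (not part of the original functor list; the algorithm name and all thresholds below are placeholders, not a recommended selection), several of the functors above can be combined in a single FilterDesktop configuration:
%SYNTAX{ syntax="python" }%
from Configurables import FilterDesktop

# Hypothetical combination of the functors documented above.
myFilter = FilterDesktop("MyExampleFilter")
myFilter.Code = ("(ADMASS('B_s0') < 300*MeV)"                    # mass window around the nominal mass
                 " & (BPVDIRA > 0.9999)"                         # points back to the best PV
                 " & (BPVVDCHI2 > 100)"                          # well separated from the PV
                 " & (INTREE( ('mu+'==ABSID) & (PT > 1*GeV) ))") # a muon in the decay tree
%ENDSYNTAX%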
of NGC 253 ULX1 detect a mid-IR counterpart with median absolute magnitudes and $3\sigma$ limits of [3.6] = $-10.03 \pm 0.26$ and [4.5] $>-9.79$. Although the mid-IR counterpart is present in the 4.5 $\mu$m observations based on visual inspection, it does not pass the $3\sigma$ detection threshold since the extended mid-IR emission from the galaxy and point sources in the vicinity of NGC 253 ULX1 complicate the background subtraction and raise the RMS flux estimate of the background. Owing to the high photometric uncertainties caused by the crowded background, it is unclear if the mid-IR counterpart of NGC 253 ULX1 is variable. The 3.6 $\mu$m absolute magnitude and 4.5 $\mu$m limit of NGC 253 ULX1 are consistent with the RSGs on the LMC CMD (Fig.~\ref{fig:ULX_CMD}). Williams \& Bonanos (2016) arrive at a similar conclusion regarding the RSG-like color and magnitude of NGC 253 ULX1 in their independent analysis of \textit{Spitzer}~photometry on point sources in nearby galaxies. The close agreement of the IR SED of NGC 253 ULX1 with the RSG template corroborates the RSG interpretation by H14 and Heida et al. (2015). \subsubsection{Holmberg IX X-1} Holmberg IX X-1, hereafter referred to as Ho IX X-1, is another well-studied ULX located in the dwarf galaxy Holmberg IX ($d = 3.61$ Mpc; Dalcanton et al. 2009). Ho IX X-1 exhibits a high X-ray luminosity of $L_X\sim10^{40}$ erg s$^{-1}$ (e.g. Kong et al. 2010), where the 0.3-10.0 keV flux has been observed to vary by a factor of $\sim3$ while the 15-40 keV flux varied by only $\sim20\%$ (Walton et al. 2017). Ho IX X-1 is also surrounded by a $\sim300$ pc-sized nebula that is believed to be shock ionized (Pakull \& Mirioni 2002, Abolmasov \& Moiseev 2008). Previous near-IR $H$-band imaging by H14 did not detect a counterpart to Ho IX X-1 down to a $1\sigma$ limiting magnitude of $H > 20.25$, which corresponds to a limiting absolute magnitude of $>-8.0$. Archival \textit{Spitzer}/IRAC observations of Ho IX X-1 published by Dudik et al.~(2016) revealed robust ($>10\sigma$) detections of $\sim10$ $\mu$Jy at both 3.6 and 4.5 $\mu$m in only three epochs taken between 2007 Nov 13 and Nov 15 (MJD 54417.9 - 54419.33). However, the mid-IR counterpart of Ho IX X-1 in 2007 Nov is not detected in the subsequent observation taken $\sim5$ months later in 2008 Apr 9 (MJD 54565.6) down to a limiting $3\sigma$ flux of $\lesssim 3$ $\mu$Jy (see Fig.~\ref{fig:ULX_LCs}), which demonstrates the transient behavior of Ho IX X-1 in the mid-IR. Four follow-up \textit{Spitzer}/IRAC observations taken between 2017 Jun 10 - Jun 16 (MJD 57914 - 57920) do not detect the mid-IR counterpart of Ho IX X-1, and near-contemporaneous \textit{Swift}/XRT observations consistently show the ULX to be in a low luminosity state (Fig.~\ref{fig:ULX_LCs}). There was, however, no X-ray coverage during the detection of mid-IR emission in 2007 Nov. The limiting near and mid-IR absolute magnitudes of Ho IX X-1 in quiescence show that the mid-IR counterpart candidate is unlikely to be an RSG or an sgB[e] star (see Fig.~\ref{fig:ULX_SEDs}). \subsubsection{NGC 3031 ULX1} NGC 3031 ULX1, also known as M81 X-6 and NGC 3031 X-11, is a ULX located in the nearby galaxy M81 (d = 3.61 Mpc; Durrell et al. 2010). NGC 3031 ULX1 has an average X-ray luminosity of $\sim2\times10^{39}$ ergs s$^{-1}$ and exhibits variability at the $\sim40\%$ level (Roberts \& Warwick 2000; Liu et al. 2002). NGC 3031 ULX1 is located near a large $\sim300$ pc shell-like nebula that is believed to be a supernova remnant (Pakull \& Mirioni 2003; Ramsey et al.
2006). Liu et al. (2002) identified an optical counterpart of NGC 3031 ULX1 in \textit{HST}/WFPC2 \textit{BVI} imaging observations, with $V = 23.89 \pm 0.03$. They claim that the optical counterpart is a locally dust-obscured O8 V star with an extinction-corrected absolute \textit{V}-band magnitude of $M_V = -4.9$ and a $B-V$ color of 0.32. H14 did not detect a near-IR counterpart down to a limiting $1\sigma$ magnitude of $K_s>18.5$, and near-IR follow-up with P200/WIRC from this work did not detect a near-IR counterpart down to limiting $3\sigma$ magnitudes of $K_s>18.85$ and $H>20.12$, or limiting absolute magnitudes of $K_s>-8.94$ and $H>-7.67$ (Tab.~\ref{tab:ULXNIRTab}). Based on the brightness of the optical counterpart reported by Liu et al. (2002), the near-IR observations were likely not sensitive enough to detect the photosphere of an O8 V star. \textit{Spitzer}/IRAC observations of NGC 3031 ULX1 show that emission from its mid-IR counterpart is variable, with 4.5 $\mu$m fluxes ranging from $<7$ $\mu$Jy to $18.7\pm1.8$ $\mu$Jy (Tab.~\ref{tab:ULXDetTab} and Fig.~\ref{fig:ULX_LCs}). The median 3.6 and 4.5 $\mu$m absolute magnitudes of NGC 3031 ULX1 in the epochs where it is detected are $-9.32 \pm0.4$ and $-9.73\pm0.31$, respectively. It is unclear if the optical counterpart is also variable, but the brightness and red color of the mid-IR counterpart suggest an excess of mid-IR emission. The SED of NGC 3031 ULX1 shows absolute mid-IR magnitudes consistent with sgB[e]s and RSGs, and its mid-IR color $[3.6]-[4.5] = 0.41\pm0.47$ places it in the color gap between red and blue ULXs. However, the non-supergiant O8 V interpretation of the optical counterpart by Liu et al. (2002) cannot be ruled out, since the mid-IR emission from NGC 3031 ULX1 was only detected in its brightest state while most of the observations were below the \textit{Spitzer}~detection threshold.

\subsubsection{M101 XMM1}

M101 XMM1, also known as J140314+541807 and NGC 5457 ULX2, is a ULX in the face-on spiral galaxy M101 (d = 6.43 Mpc; Shappee \& Stanek 2011) and exhibits an X-ray luminosity of $2.9\times10^{39}$ ergs s$^{-1}$ (Winter et al. 2006). H14 detected a near-IR counterpart, measured an absolute magnitude of $H = -10.69\pm0.1$, and claimed it is consistent with an RSG. \textit{Spitzer}/IRAC observations of M101 XMM1 yield median absolute magnitudes of [3.6] = $-11.16\pm0.07$ and [4.5] = $-11.27\pm0.09$, with small-amplitude variability on the order of $\sim20\%$ or $2\sigma$. This mid-IR variability is consistently measured at both 3.6 and 4.5 $\mu$m. The mid-IR properties of M101 XMM1 are therefore similar to those of M101 XMM3. The \textit{Spitzer}~mid-IR photometry again supports the H14 hypothesis that the IR counterpart of M101 XMM1 is an RSG donor star.

\subsubsection{M101 XMM3}

M101 XMM3, also known as J1402+5440, NGC 5457 X23, and NGC 5457 ULX3, is another ULX in M101 and radiates with an X-ray luminosity of $2.4\times10^{39}$ ergs s$^{-1}$ (Swartz et al. 2011). H14 detected the near-IR counterpart of M101 XMM3 and measured an absolute magnitude of $H=-9.7\pm 0.2$, which they claim is consistent with an RSG. \textit{Spitzer}/IRAC observations of M101 XMM3 provide median absolute magnitudes of [3.6] = $-11.21\pm0.07$ and [4.5] = $-11.33\pm0.09$, with small-amplitude variability on the order of $\sim10\%$\footnote{The \textit{Spitzer}/IRAC observations on MJD 58323.95 show a substantial increase in the 3.6 $\mu$m flux.
A closer inspection of the image reveals a cosmic ray or hot pixel coincident with the source position; we therefore disregard this measurement.}. However, it is unclear if the variability is consistent at both 3.6 and 4.5 $\mu$m, since M101 XMM3 was only observed with one channel each epoch due to its $\sim10'$ displacement from the center of the field of view. The mid-IR color and [3.6] absolute magnitude of M101 XMM3 are consistent with the brightest RSGs in the mid-IR (Fig.~\ref{fig:ULX_CMD}). The IR SED of M101 XMM3 also appears consistent with RSGs, with the \textit{Spitzer}~photometry lying slightly above the upper $1\sigma$ end of the RSG template. The mid-IR photometry therefore supports the hypothesis from H14 that the IR counterpart of M101 XMM3 is an RSG donor star.

\subsubsection{NGC 925 ULX1}

NGC 925 ULX1, also known as CXO J022727+333443, is a ULX in the barred spiral galaxy NGC 925 ($d = 7.24\pm1.34$ Mpc; Tully et al. 2009) and exhibits one of the highest X-ray luminosities known for a ULX, $\sim2 - 4\times 10^{40}$ ergs s$^{-1}$ (Swartz et al. 2011; Pintore et al.).
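The absolute magnitudes and limits quoted throughout this section follow directly from the distance modulus. As a quick illustration (our own sketch, not the photometric pipeline used in this work), the NGC 3031 ULX1 limits above can be reproduced as follows:

\begin{verbatim}
# Our own illustrative sketch (not the photometry pipeline used in this
# work): apparent -> absolute magnitude via the distance modulus,
# M = m - 5*log10(d / 10 pc).
import math

def absolute_mag(m_apparent, d_mpc):
    return m_apparent - 5.0 * math.log10(d_mpc * 1e6 / 10.0)

# NGC 3031 ULX1 limits at d = 3.61 Mpc (distance modulus ~27.79):
print(round(absolute_mag(18.85, 3.61), 2))  # Ks > 18.85  ->  -8.94
print(round(absolute_mag(20.12, 3.61), 2))  # H  > 20.12  ->  -7.67
\end{verbatim}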
Figure \ref{fig:all_models} shows the simulated spectra for Trappist-1~b across the CO$_2$-rich to H$_2$O-rich compositions considered here, while the bottom three rows of Figure \ref{fig:ret_spectra} show the simulated data and retrieved spectra. For all three observational strategies, the data are clearly distinguishable from the bare-rock scenario with no energy redistribution. Furthermore, for the Venus-like composition, the sharp CO$_2$ feature at $\sim$9~$\mu$m can be distinguished by eye, consistent with the confident detections of CO$_2$ in these retrievals.

Assuming 30 MIRI LRS eclipses, we find that CO$_2$ and H$_2$O are detected in both the Venus-like and 50\%~CO$_2$/50\%~H$_2$O cases (Table \ref{tab:sig}). The Venus-like composition is most confidently constrained, with 5.5$\sigma$ and 4.9$\sigma$ detections of CO$_2$ and H$_2$O, respectively, while the 50\%~CO$_2$/50\%~H$_2$O composition leads to $\sim$3$\sigma$ detections of these species. However, in the 100\%~H$_2$O case the simulated data are consistent with a blackbody, and spectral features are not detected with statistical significance. Nevertheless, for all three compositions the retrieval framework fits the true spectrum and photospheric $P$-$T$ profile within the 2$\sigma$ uncertainties (Figure \ref{fig:ret_PT}). This 30-eclipse strategy would therefore be ideal to characterise (or rule out) clear, CO$_2$-dominated atmospheric compositions on Trappist-1~b.

Using a longer observing time of 80$-$84 eclipses, we find that the Venus-like and 50\%~CO$_2$/50\%~H$_2$O compositions are readily characterised, allowing $\sim$4-10$\sigma$ detections of CO$_2$ and H$_2$O (Table \ref{tab:sig}). The 100\%~H$_2$O composition is more challenging to characterise, though a blackbody spectrum is nevertheless rejected by the data at $\sim$4$\sigma$ with both of these observing strategies. For all three compositions, both the spectra and temperature profiles are retrieved within 2$\sigma$, and the abundances of CO$_2$ and H$_2$O are retrieved to within the 2$\sigma$ uncertainties. We note that these results are very similar for the observing strategies with and without the photometry, suggesting that the MIRI LRS data are driving these detections. 80 eclipses using MIRI LRS are therefore sufficient to confidently characterise cloud-free CO$_2$-rich compositions for Trappist-1~b, and to detect the presence of atmospheric absorption in the case of a water-rich composition. Furthermore, across all of the observing strategies discussed here, the abundances of CO$_2$ and H$_2$O are constrained to within 1$\sigma$ uncertainties of $<0.7$~dex, where these species are detected.

\begin{table*}
\centering
\caption{Detection significances for CO$_2$ and H$_2$O given different atmospheric compositions and observing strategies for LHS~3844~b, GJ~1132~b and Trappist-1~b (see Section \ref{sec:case_studies}). The confidence level at which the data eliminate a blackbody spectrum is also shown for each case (columns labelled `non-BB'). The number of eclipses with MIRI LRS and MIRI photometry assumed for each case is shown in italics.
Confidence levels below 2$\sigma$ are not shown.}
\begin{tabular}{lcccccccccc}
\multicolumn{1}{c}{\multirow{2}[3]{*}{\textbf{Planet}}} & \multicolumn{3}{c}{\textbf{Venus}} & & \multicolumn{3}{c}{\boldmath{}\textbf{50\% H$_2$O/50\% CO$_2$}\unboldmath{}} & & \multicolumn{2}{c}{\boldmath{}\textbf{100\% H$_2$O}\unboldmath{}} \bigstrut[b]\\
\cline{2-4}\cline{6-8}\cline{10-11}
 & \boldmath{}\textbf{CO$_2$}\unboldmath{} & \boldmath{}\textbf{H$_2$O}\unboldmath{} & \textbf{non-BB} & & \boldmath{}\textbf{CO$_2$}\unboldmath{} & \boldmath{}\textbf{H$_2$O}\unboldmath{} & \textbf{non-BB} & & \boldmath{}\textbf{H$_2$O}\unboldmath{} & \textbf{non-BB} \bigstrut\\
\hline \hline
\textbf{LHS 3844 b} & & & & & & & & & & \bigstrut[t]\\
\emph{8 LRS} & 5.55 & 4.07 & 8.37 & & $-$ & 2.52 & 2.28 & & $-$ & 2.93 \\
\emph{20 LRS} & 7.06 & 5.32 & 10.59 & & 3.87 & 5.02 & 4.86 & & 2.74 & 4.20 \\
\emph{20 LRS, 4 F1500W} & 6.95 & 5.20 & 11.00 & & 3.61 & 4.72 & 4.45 & & 2.82 & 4.50 \bigstrut[b]\\
\hline
\textbf{GJ 1132 b} & & & & & & & & & & \bigstrut[t]\\
\emph{8 LRS} & 4.70 & 4.42 & 5.69 & & $-$ & 2.68 & 2.31 & & $-$ & 4.38 \\
\emph{20 LRS} & 7.62 & 6.91 & 8.75 & & 4.35 & 4.53 & 5.03 & & 3.18 & 3.20 \\
\emph{20 LRS, 4 F1500W} & 8.05 & 6.90 & 9.20 & & 4.76 & 5.05 & 5.33 & & 2.96 & 3.20 \bigstrut[b]\\
\hline
\textbf{Trappist-1 b} & & & & & & & & & & \bigstrut[t]\\
\emph{30 LRS} & 5.51 & 4.90 & 5.82 & & 2.89 & 3.40 & 3.02 & & 2.31 & 2.47 \\
\emph{80 LRS} & 9.09 & 9.60 & 11.06 & & 4.40 & 4.68 & 4.75 & & 2.50 & 4.08 \\
\emph{80 LRS, 4 F1500W} & 9.79 & 9.57 & 11.73 & & 4.86 & 4.70 & 4.98 & & 2.65 & 3.90 \bigstrut[b]\\
\hline
\end{tabular}%
\label{tab:sig}%
\end{table*}%

\section{Discussion}
\label{sec:discussion}

In Section \ref{sec:case_studies}, we have shown that rocky exoplanet atmospheres across a wide range of temperatures ($\sim$400-800~K) can be characterised in thermal emission with JWST/MIRI, including confident detections of atmospheric absorption by CO$_2$ and H$_2$O. Here, we begin by discussing the calculation of detection significances and key subtleties which can arise. Having focused on cloud- and haze-free atmospheric compositions in previous sections, we also discuss the impact which clouds and hazes may have on the characterisation of rocky exoplanet atmospheres in Section \ref{sec:discussion:clouds}. In Sections \ref{sec:discussion:3D} and \ref{sec:discussion:stitching}, we discuss 3D effects and stitching together multi-mode observations.

\subsection{Detection Significances}

When considering detection significances, it is useful to understand the role of model complexity in determining the Bayesian evidence. As described in Section \ref{sec:ret_detect}, the confidence of a molecular detection can be assessed by comparing the Bayesian evidences of two retrievals including/excluding the molecule(s) in question. Similarly, the confidence with which a blackbody spectrum can be rejected can be assessed by comparing the Bayesian evidences of retrievals with a full atmospheric model vs. only a single temperature (i.e. a blackbody model). However, these Bayesian evidences encapsulate not only the fit to the spectrum but also the complexity of the model. For example, if a 10-parameter model results in the same goodness of fit as a 5-parameter model, the 5-parameter model will have a higher Bayesian evidence, as it has a lower model complexity. When calculating the detection significance of a single molecule, the models compared differ by only one parameter (i.e.
the models are identical apart from the presence of one molecule), and so the detection significance calculated from this comparison is a fairly good measure of `goodness of fit', as the model complexities are comparable. However, when the `full model' is compared to a blackbody model, there is a significant difference in model complexity (i.e. 13 parameters for the full model vs. 1 parameter for the blackbody). This means that the full model is penalised for its complexity relative to the blackbody model, and the blackbody spectrum may be rejected at a lower significance than expected. That is, the poor fit of a blackbody model may be compensated for by the simplicity of the model.

An alternative way to assess how confidently a blackbody model can be rejected is to compare this model to a `core parameters' model. This `core parameters' model is based on the full atmospheric model, but includes only the species for which there is evidence in the data. For example, if H$_2$O and CO$_2$ are detected in the data (using the full model) but no other species are detected, the `core parameters' model would include only H$_2$O and CO$_2$ (as well as the usual temperature profile parameterisation). This effectively strips down the full model to the components which are necessary to fit the data, without including unnecessary parameters. Thus, when compared to the blackbody model, the `core parameters' model is not penalised by unnecessary parameters, and the difference in model complexity is more representative of the complexity required to fit the data. For example, in the Venus-like case for Trappist-1~b with 30 MIRI LRS eclipses, comparing the blackbody model to the `core parameters' model results in a 6.09$\sigma$ rejection of the blackbody, whereas a comparison to the full model results in a 5.82$\sigma$ rejection. Note that all detection significances shown in Table \ref{tab:sig} use a comparison with the full model. Ultimately, it is important to understand how the models used can affect the confidence with which a blackbody spectrum can be rejected.
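For reference, the sigma values quoted here are derived from differences in log-Bayesian evidence. A minimal sketch (our own illustration, not the retrieval code used in this work) of the calibration commonly adopted in the literature for converting a Bayes factor into an equivalent sigma significance (e.g. Sellke et al. 2001; Benneke \& Seager 2013):

\begin{verbatim}
# Our own illustrative sketch (not the retrieval code used in this work):
# converting a log-Bayes factor ln(Z1/Z0) into an equivalent n-sigma
# significance via the p-value calibration B = -1/(e p ln p).
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcinv

def sigma_from_lnB(lnB):
    # Solve ln(-1/(e p ln p)) = lnB for p (valid for lnB > 0, p < 1/e),
    # then convert the p-value to a sigma level via p = erfc(sigma/sqrt(2)).
    f = lambda p: np.log(-1.0 / (np.e * p * np.log(p))) - lnB
    p = brentq(f, 1e-300, 1.0 / np.e - 1e-12)
    return np.sqrt(2.0) * erfcinv(p)

print(sigma_from_lnB(11.5))  # a log-evidence difference of ~11.5 -> ~5 sigma
\end{verbatim}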
References of "E-prints/Working papers" in Complete repository
Showing results 1 to 100 of 1603.

The edge-based strain smoothing method for compressible and nearly incompressible non-linear elasticity for solid mechanics. Lee, Chang-Kye; Mihai, L. Angela; Kerfriden, Pierre et al. E-print/Working paper (in press).

The mod 2 cohomology rings of congruence subgroups in the Bianchi groups. Berkove, Ethan; Lakeland, Grant; Rahm, Alexander. E-print/Working paper (in press). We provide new tools for the calculation of the torsion in the cohomology of congruence subgroups in the Bianchi groups: an algorithm for finding particularly useful fundamental domains, and an analysis of the equivariant spectral sequence combined with torsion subcomplex reduction.

Yolanda von Vianden und das Yolanda-Epos. Sieburg, Heinz. E-print/Working paper (in press).

Forensic DNA Phenotyping: a review on SNP panels, genotyping techniques, and prediction models. Terrado Ortuno, Nuria; May, Patrick. E-print/Working paper (2023). In the past few years, forensic DNA phenotyping (FDP) has attracted strong interest in forensic research. Among the increasing publications, many have focused on testing the available panels to infer biogeographical ancestry (BGA) in less represented populations and on understanding the genetic mechanisms underlying externally visible characteristics (EVC). However, there are currently no publications that gather all the existing panels limited to FDP and discuss the main technical limitations of the technique. In this review, we performed a bibliographic search in the Scopus database of FDP-related literature, which resulted in a total of 48, 43 and 15 panels for BGA, EVC, and both BGA and EVC inference, respectively. Here we provide a list of commercial and non-commercial FDP panels and discuss the FDP limitations regarding the lack of harmonization in terminology (i.e., categorization and measurement of traits) and reporting, the lack of genetic knowledge and of understanding of environmental influence needed to select markers and develop panels, and the debate surrounding the selection of genotyping technologies and prediction models and algorithms. In conclusion, this review aims to be an updated guide and to present an overview of the current FDP-related literature.
Multifractional Hermite processes: definition and first properties. Loosveldt, Laurent. E-print/Working paper (2023). We define multifractional Hermite processes, which generalize and extend both multifractional Brownian motion and Hermite processes. This is done by substituting a Hurst function for the Hurst parameter in the definition of Hermite processes as a multiple Wiener-Itô integral. Then, we study the pointwise regularity of these processes, their local asymptotic self-similarity and some fractal dimensions of their graph. Our results show that the fundamental properties of multifractional Hermite processes are, as desired, governed by the Hurst function. Complements are given in the second-order Wiener chaos, using facts from Malliavin calculus.

The Next Generation EU (NGEU) as a catalyst for the promotion of the international role of the euro. Lupinu, Pier Mario; Machura, Anna. E-print/Working paper (2023). Recent developments in the global arena, such as the COVID-19 pandemic and stresses in energy markets, made it clear that it is critical for the EU to ensure its strategic autonomy in the macroeconomic field. Strengthening the international role of the euro is one of the key elements in this regard. Through a timely analysis of the changes stemming from the establishment of the Next Generation EU (NGEU), we seek to understand to what extent the NGEU can serve as a catalyst for the promotion of the international role of the euro. While it is implausible that the euro will overcome the primacy of the US dollar, we center our analysis around the transformation of the EU's presence in capital markets. Following massive issuances of green bonds under the NGEU, the EU became the largest issuer of green bonds and has the potential to progress from a small supranational issuer to a sovereign-size issuer. This means that the pool of highly rated euro-denominated safe assets will expand significantly. That is where we focus our analysis and where we see the opportunity of the NGEU for unleashing the potential of the euro and boosting its international role.

Wavelet-Type Expansion of Generalized Hermite Processes with rate of convergence. Ayache, Antoine; Hamonier, Julien; Loosveldt, Laurent. E-print/Working paper (2023). Wavelet-type random series representations of the well-known Fractional Brownian Motion (FBM) and many other related stochastic processes and fields have
$\delta^{18}$O of GISP2 during the last deglaciation is noted. We report that Southern Ocean degassing played an important role in raising atmospheric CO$_{2}$ through the Atlantic Meridional Overturning Circulation (AMOC), which has implications for the triggering of abrupt climate events through the coupling of ocean and atmospheric processes.

$\bf{Highlights}$
$\bullet$ A synchrony between temperature variations in Antarctica and the southern sector of the Indian Ocean is observed during the deglaciation.
$\bullet$ Deglacial warming in the southern sector of the Indian Ocean initiated around 18 ka, coinciding with the rise of atmospheric CO$_{2}$ at that time.
$\bullet$ Degassing in the Southern Ocean played an important role in raising atmospheric CO$_{2}$ during the deglaciation.
$\bullet$ Changes in the AMOC contributed to triggering CO$_{2}$ degassing from the deep Southern Ocean.

• Grain-shape controlled strain in quartz grains in high ductile flow regime: Observations from the Main Central Thrust Zone of the Kumaun Himalaya, India

In ductile shear zones, the strain recorded by rocks depends strongly on the composition and shape of the mineral constituents. Under simple shear, quartz grains commonly reorient themselves in the direction of tectonic transport or flow. In ductile shear zones, quartz grains are elliptically stretched in the direction of the mylonitic foliation to accommodate the imposed ductile strain. Our observations on the rocks of a crustal-scale shear zone, the Main Central Thrust (MCT) of the Himalaya, however, reveal that at several places in the shear zone the quartz grains are polygonal and show planar boundaries. The fabric of the rocks at such places is not compatible with the prevailing fabric, and can be described as a strain-insensitive fabric. Following the Panozzo (J. Struct. Geol. 6:215–221, 1984) method, we have estimated strain from quartz grains that show planar boundaries. Our results show that in areas of high ductile strain in the MCT zone, near the trace of the MCT, the strain recorded by such quartz grains is low, while in areas of low strain, away from the MCT, it is relatively higher. As such, the method holds importance in cases where grain shapes (i.e., planar boundaries) constrain strain estimation, because conventional methods of strain estimation require elliptically shaped objects. This is possibly the first application of the Panozzo method to deformed rocks from India.

$\bf{Highlights}$
$\bullet$ The general fabric of rocks of the ductile MCT zone of the Himalaya is dominated by elliptically deformed quartz grains.
$\bullet$ Locally, however, the fabric, not compatible with the prevailing ductile fabric, contains polygonal quartz grains with flat boundaries.
$\bullet$ Strain has been estimated for polygonal grains by digitizing their outlines and analysing the data with computer software.
$\bullet$ Such grains show lower strain near the MCT and higher strain away from it, the reverse of the pattern shown by elliptically deformed grains.
$\bullet$ This suggests that quartz grains with polygonal shapes remained rather insensitive to ductile strain.

• Bleaching of blue light stimulated luminescence of quartz by moonlight

Moonlight is sunlight reflected from the moon's surface. It is additionally modulated by the Earth's atmosphere, dust and pollutants on its way to the surface of the Earth.
This contribution reports the bleaching rates of the blue light stimulated luminescence (BLSL) signal of quartz under full-moonlight exposure at the Earth's surface. Quartz BLSL was reduced to 70% of its initial value by an exposure of 5 hr of moonlight, in contrast to a $\sim$90% reduction in < 3 s of daylight. This was anticipated due to (a) the moonlight flux being reduced by about a factor of half a million (Agrawal in Lat. Am. J. Phys. Educ. 4(2):325–328, 2010; J. Phys. Astron. 5(1):1–15, 2017); (b) the inverse power-law dependence of bleaching efficiency on wavelength (Spooner in The validity of optical dating based on feldspar, Ph.D. Thesis, Oxford University, Oxford, 1993; Chen and McKeever in Theory of Thermoluminescence and related phenomena, World Scientific Publications, London, 1997; Chen and Pagonis in Thermally and optically stimulated luminescence: A simulation approach, Wiley and Sons, Chichester, 2011); and (c) the fact that moonlight and daylight have spectral peaks around 650 and 550 nm, respectively. Deconvolution of the OSL components suggests that moonlight affects the fast component of the OSL signal the most. This has ramifications for applications in polar regions, where the availability of daylight is at a premium during the winter months. Within a given context, it is conjectured that this could be used to infer the seasonality of sediment transport.

$\bf{Highlights}$
$\bullet$ Up to 40% reduction of the quartz luminescence signal is observed over long moonlight exposure.
$\bullet$ Moonlight can bleach up to 70% of the fast component of the blue light stimulated luminescence signal.
$\bullet$ Moonlight bleaching may hamper the accuracy of ages of sediments which are transported only at night.
$\bullet$ The seasonality of sediment deposition can be studied using the bleaching effect of moonlight on quartz.

• Seasonal variability of tropospheric CO$_{2}$ over India based on model simulation, satellite retrieval and in-situ observation

In this study, an investigation of the seasonal cycle of the tropospheric CO$_{2}$ concentration over India was carried out using the GEOS-Chem atmospheric transport model, Greenhouse gas Observation SATellite (GOSAT) retrievals, and in-situ measurements. The model simulation is highly consistent with the satellite and in-situ datasets, and it shows a distinct seasonal cycle of the tropospheric CO$_{2}$ tendency over India, with a negative phase (decreasing concentration) during April–August and a positive phase (increasing concentration) during September–March. The model diagnostics were analyzed to estimate budgets of the surface-layer CO$_{2}$, up to the 650 hPa pressure level, for the two phases of the seasonal cycle. A mean tendency equivalent to −0.70 ppmv month$^{-1}$ is observed during April–August, which results from the loss of CO$_{2}$ content in the surface layer through horizontal advection (−2.25 ppmv month$^{-1}$) and vertical diffusion (−0.20 ppmv month$^{-1}$) dominating the gain from vertical advection (1.53 ppmv month$^{-1}$). The negative contribution of horizontal advection in this period comes from the transport of CO$_{2}$-depleted air parcels over the oceanic region to India by the southwest monsoon winds, and the positive contribution of vertical advection comes from the upwelling of CO$_{2}$-enriched air parcels. The mean tendency equivalent to 1.01 ppmv month$^{-1}$ during September–March results from the gain through vertical advection (0.78 ppmv month$^{-1}$) and horizontal advection (0.37 ppmv month$^{-1}$) and a small contribution of vertical diffusion (−0.15 ppmv month$^{-1}$).
In this period, the positive contribution of horizontal advection is due to the transport of CO$_{2}$-enriched air parcels from the southeast Asian region to India by the northeast monsoon winds. At the annual scale, the CO$_{2}$ content of the surface layer over India has a net gain of 0.75 GtC, in which the 14.31 GtC gained through vertical advection exceeds the losses due to horizontal advection (−11.10 GtC) and vertical diffusion (−2.46 GtC). This net gain is almost 85% higher than the input of 0.4 GtC through surface fluxes, which is composed of 0.61 GtC of anthropogenic emissions and −0.21 GtC of net terrestrial ecosystem exchange. An additional sensitivity experiment was carried out to elucidate the semi-annual features of the seasonal cycle of CO$_{2}$ for north India, in contrast to the annual character of the seasonal cycle for south India, in relation to the GOSAT observations.

$\bf{Highlights}$
$\bullet$ Greenhouse gas Observation SATellite (GOSAT) L3B and L4B retrievals and in-situ flux-tower measurements were analysed to describe the seasonal cycle of tropospheric CO$_{2}$ over India, and GEOS-Chem atmospheric transport model diagnostics were used to examine the causes of the variability.
$\bullet$ The seasonal cycle over north India is composed of a mixed signature of annual and semi-annual frequencies, while south India is dominated by the annual oscillation. However, the surface-layer CO$_{2}$ seasonal tendency has a major negative phase during April–August and a positive phase during September–March.
$\bullet$ The net negative tendency during April–August results from the loss of CO$_{2}$ from the surface layer through horizontal advection and vertical diffusion processes, which dominates the gain from vertical advection; while the net positive tendency
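As a quick arithmetic check of the annual budget quoted above (our own illustration, not part of the study):

\begin{verbatim}
# Our own arithmetic check (not from the study): annual surface-layer CO2
# budget over India, in GtC, as quoted above.
vertical_advection   =  14.31
horizontal_advection = -11.10
vertical_diffusion   =  -2.46
net_gain = vertical_advection + horizontal_advection + vertical_diffusion
print(net_gain)                     # ~0.75 GtC, as reported

surface_flux = 0.61 + (-0.21)       # anthropogenic + net ecosystem exchange
print(net_gain / surface_flux - 1)  # ~0.875: the net gain is roughly
                                    # 85-90% higher than the surface input
\end{verbatim}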
\section{Introduction and Preliminaries}

The concept of a lattice group ($\ell$-group, for short) was initially considered in \cite{B,C}. In addition, topological $\ell$-groups, as an extension of topological Riesz spaces, were investigated in \cite{S1,S2}. Since the best-known classes of function spaces are Banach lattices, one of the most powerful tools in the theory of Banach spaces, and Riesz spaces form the foundation of Banach lattices, these notions have been investigated extensively. Topological $\ell$-groups, however, are rarely utilized, although topological groups in general have many applications in other disciplines, for example in Fourier analysis. Recently, a suitable reference on the basic properties of topological $\ell$-groups has appeared (see \cite{H} for more details on these expositions).

On the other hand, in \cite{KZ}, Kocinac and the author considered three different kinds of bounded homomorphisms on a topological group. They equipped each class with an appropriate topology and showed that each again forms a topological group. If the underlying group has a lattice structure (for example, a topological $\ell$-group), it is natural to ask whether the bounded homomorphisms can carry a lattice structure, too. For bounded order bounded operators on locally solid Riesz spaces, this question has been answered affirmatively in \cite{EGZ}. Perhaps the most fruitful tool for the lattice operations on order bounded operators is the remarkable Riesz--Kantorovich formula (see \cite[Theorem 1.18]{AB} for more information). Thus, before anything else, we need a version of this formula for order bounded homomorphisms on topological $\ell$-groups; this was done recently in \cite{Z2}. We can then consider lattice structures on classes of bounded order bounded homomorphisms. A related and major point to consider is that, although some proofs in this paper might seem, at first glance, similar to their Riesz-space counterparts, they must be checked one by one: some known results in analysis, such as the Hahn--Banach theorem and its consequences, rely heavily on scalar multiplication, so we cannot expect them in topological $\ell$-groups. The order structure, however, enables us to generalize those results on Riesz spaces which rely only on the group and order structures. Recently, among other things, some extensions of this kind have been considered in \cite{Z2}.

We organize the paper as follows. First, we recall some preliminaries and terminology which will be used in the sequel. In Section 2, we investigate a method which enables us to endow classes of bounded homomorphisms between topological $\ell$-groups with lattice structures. In fact, we use the Fatou property together with a version of the Riesz--Kantorovich formula to give a lattice structure to bounded order bounded homomorphisms. We also see that unbounded convergence in a locally solid $\ell$-group is topological, and we state some observations in this direction. In Section 3, we show that each class of bounded group homomorphisms defined on a topological ring is topologically complete if and only if so is the underlying topological ring.

By a {\bf lattice group} ($\ell$-group) we mean a group which is simultaneously a lattice. Observe that a subset $B$ of an abelian topological group $(G,+)$ is said to be {\bf bounded} if for each neighborhood $U$ of the identity, there exists a positive integer $n$ with $B\subseteq nU$, in which $nU=\{x_1+\ldots +x_n: x_i\in U\}$.
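For the reader's convenience, we recall the classical Riesz--Kantorovich formula for order bounded operators $S,T$ between Riesz spaces with Dedekind complete codomain (\cite[Theorem 1.18]{AB}); the version for order bounded homomorphisms on topological $\ell$-groups obtained in \cite{Z2} is its natural analogue, as the expressions below involve only the group and order structures. For every $x\geq 0$,
\[
T^{+}(x)=\sup\{T(y)\,:\,0\leq y\leq x\},
\qquad
(S\vee T)(x)=\sup\{S(y)+T(z)\,:\,y,z\geq 0,\ y+z=x\}.
\]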
An $\ell$-group $G$ is called {\bf Dedekind complete} if every non-empty bounded above subset of $G$ has a supremum. $G$ is {\bf Archimedean} if $nx\leq y$ for each $n\in \Bbb N$ implies that $x\leq 0$. One may easily verify that every Dedekind complete $\ell$-group is Archimedean. In this note, all groups are considered to be abelian. A set $S\subseteq G$ is called {\bf solid} if $x\in G$, $y\in S$ and $|x|\leq |y|$ imply that $x\in S$. Note that by a {\bf topological lattice group} we mean a topological group which is simultaneously a lattice whose lattice operations are continuous with respect to the assumed topology. Suppose $G$ is a topological $\ell$-group. A net $(x_{\alpha})\subseteq G$ is said to be {\bf order} convergent to $x\in G$ if there exists a net $(z_{\beta})$ (possibly over a different index set) such that $z_{\beta}\downarrow 0$ and for every $\beta$, there is an $\alpha_0$ with $|x_{\alpha}-x|\leq z_{\beta}$ for each $\alpha\ge \alpha_0$. A set $A\subseteq G$ is called {\bf order closed} if it contains the limits of all order convergent nets which lie in $A$. Keep in mind that a topology $\tau$ on a topological $\ell$-group $(G,\tau)$ is referred to as {\bf Fatou} if it has a local basis at the identity consisting of solid, order closed neighborhoods. For undefined expressions and related topics, see \cite{H}.

Now, we recall some terminology we need in the sequel (see \cite{KZ} for further details on these facts).

\begin{definition}\rm
Let $G$ and $H$ be topological groups. A homomorphism $T:G \to H$ is said to be
\begin{itemize}
\item[$(1)$] \emph{{\sf nb}-bounded} if there exists a neighborhood $U$ of $e_G$ such that $T(U)$ is bounded in $H$;
\item[$(2)$] \emph{{\sf bb}-bounded} if for every bounded set $B \subseteq G$, $T(B)$ is bounded in $H$.
\end{itemize}
\end{definition}

The set of all {\sf nb}-bounded ({\sf bb}-bounded) homomorphisms from a topological group $G$ to a topological group $H$ is denoted by ${\sf Hom_{nb}}(G,H)$ (${\sf Hom_{bb}}(G,H)$). We write ${\sf Hom}(G)$ instead of ${\sf Hom}(G,G)$. We emphasize that the group operation in ${\sf Hom}(G,H)$ is pointwise, that is, $(T+S)(x):=T(x)+S(x)$.
\smallskip

Now, assume $G$ is a topological group. The class of all ${\sf nb}$-bounded homomorphisms on $G$, equipped with the topology of uniform convergence on some neighborhood of $e_G$, is denoted by ${\sf Hom_{nb}}(G)$. Observe that a net $(S_{\alpha})$ of ${\sf nb}$-bounded homomorphisms converges uniformly on a neighborhood $U$ of $e_G$ to a homomorphism $S$ if for each neighborhood $V$ of $e_G$ there exists an $\alpha_0$ such that for each $\alpha\geq\alpha_0$, $(S_{\alpha}-S)(U)\subseteq V$.
\smallskip

The class of all ${\sf bb}$-bounded homomorphisms on $G$, endowed with the topology of uniform convergence on bounded sets, is denoted by ${\sf Hom_{bb}}(G)$. Note that a net $(S_{\alpha})$ of ${\sf bb}$-bounded homomorphisms converges uniformly to a homomorphism $S$ on a bounded set $B\subseteq G$ if for each neighborhood $V$ of $e_G$ there is an $\alpha_0$ with $(S_{\alpha}-S)(B) \subseteq V$ for each $\alpha\ge \alpha_0$.
\smallskip

The class of all continuous homomorphisms on $G$, equipped with the topology of ${\sf c}$-convergence, is denoted by ${\sf Hom_{c}}(G)$.
A net $(S_{\alpha})$ of continuous homomorphisms ${\sf c}$-converges to a homomorphism $S$ if for each neighborhood $W$ of $e_G$ there is a neighborhood $U$ of $e_G$ such that for every neighborhood $V$ of $e_G$ there exists an $\alpha_0$ with $(S_{\alpha}-S)(U)\subseteq V+W$ for each $\alpha\geq\alpha_0$.
\smallskip

Note that ${\sf Hom_{nb}}(G)$, ${\sf Hom_{c}}(G)$, and ${\sf Hom_{bb}}(G)$ form subgroups of the group of all homomorphisms on $G$.

\section{Topological lattice groups}

\begin{rem}
In topological groups, as opposed to topological vector spaces, not every singleton is bounded; scalar multiplication is a fruitful tool in this direction which we lack in topological groups. For instance, suppose $G$ is an abelian topological group and put $H=G\times {\Bbb Z}_2$. Then $H$ is a topological group which contains unbounded singletons. Nevertheless, in some cases, such as many classical topological groups or connected topological groups, we do have this mild property. In this paper, we always assume that all topological groups have this mild property.
\end{rem}

\begin{exam}
Consider the additive group $\Bbb Z$ of integers. One sees easily that, with the discrete topology, it is a locally solid topological group. Furthermore, one can verify that it possesses the Fatou property. But it is certainly not a Riesz space.
\end{exam}

Recall that a homomorphism $T:G\to H$ is said to be order bounded if it maps order bounded sets into order bounded ones. The set of all order bounded homomorphisms from $G$ into $H$ is denoted by $\sf{Hom^{b}(G,H)}$. One may verify that, under the group operations of homomorphisms defined in \cite{KZ} and invoking \cite[Theorem 4.9]{H}, $\sf{Hom^{b}(G,H)}$ is a group.

\begin{lem}
Suppose $G$ is a Dedekind complete locally solid $\ell$-group with a Fatou topology and ${\sf Hom^{b}_{n}}(G)$ is the group of all order bounded ${\sf nb}$-bounded homomorphisms. Then ${\sf Hom^{b}_{n}}(G)$ is an $\ell$-group.
\end{lem}

\begin{proof}
We need to prove that for a homomorphism $T\in {\sf Hom^{b}_{n}}(G)$, $T^{+} \in
at right angles is nearly at the horizon, where the antenna gain is low and ranges are ambiguous. This means that the times at which specular meteor radars can observe the shower are restricted to radiant elevations under $\approx$70$^\circ$. For high-latitude showers like the Draconids, this results in a substantial blind period each day, even while the radiant is above the horizon. Optical methods are not subject to the same difficulty, but most optical observations missed the peak of the 2012 outburst, which occurred during the day in North America and Europe. High-power, large-aperture radars observing primarily head echoes do not have this issue, but are mostly (with the exception of EISCAT and MAARSY in northern Scandinavia) located at low northern latitudes or further south, where the radiant is below the horizon at least some of the time \citep[e.g.][]{kero2012}. These high-power radars are not normally run in a mode compatible with meteor observations, and tend to see fewer shower meteors, since sporadics increasingly dominate at smaller sizes \citep[see ][]{kero2019,SCHULT_2017_MAARSY_sporadics,SCHULT_2018_MAARSY_shower}.

\section{Conclusions}

Five recent outbursts of the Draconids, observed with radars, have been compared. There is broad agreement in the shape of the outbursts from system to system, even in very different geographical locations. There is more uncertainty in the level of activity, stressing the need for good calibrations and bias corrections. For the Draconid shower in particular, it is evident that an improvement is needed in the initial radius correction factor, which apparently needs to be larger. These observations also show the importance of observations with a spread in geographical longitude for short-duration outbursts like the Draconid showers. In general, no single location will provide continuous coverage with a specular meteor radar.

\section*{Acknowledgements}

Funding for this work was provided through NASA cooperative agreement 80NSSSC18M0046 and the Natural Sciences and Engineering Research Council of Canada (Grant no. RGPIN-2018-05474). The Esrange meteor radar operation, maintenance and data collection is provided by Esrange Space Center of Swedish Space Corporation (SSC). GS is a member of the Oeschger Center for Climate Change Research. The Andenes and Juliusruh meteor radar data was collected under the grant STO 1053/1-1 of the Deutsche Forschungsgemeinschaft (DFG). We thank Jorge L. Chau and R. Latteck for their support of the AHEAD project.

\section*{Data Availability}

Data available on request.

\bibliographystyle{mnras}

\section{Introduction}

The Draconid meteor shower (009 DRA), formerly known as the Giacobinids after their parent comet 21P/Giacobini-Zinner, is a low-activity shower which has irregular, and sometimes spectacular, outbursts. These outbursts can be rich in bright meteors, or confined to very faint meteors mainly visible to radar \citep{egal2019}. The meteors themselves are slow (about 23 km~s$^{-1}$) and fragile \citep[see][for a summary of observations]{borovicka2007}. The first radar observations of the Draconids took place in the UK, during the 1946 outburst, with a military radar at a frequency of 60~MHz \citep{hey1947}. This radar had a power of 150 kW and a narrow, vertical beam which increased the gain. They observed a dramatic increase in meteor rates (from $\sim$10 to 300 per minute) on 1946 October 10 (peaking at solar longitude 197$^\circ$), lasting for about six hours.
The echoes included many overdense echoes (radiatively thick trails caused by larger meteoroids) and head echoes (scattering from the ionization around the meteoroid itself rather than from the ionized train). The shower was also observed with the Jodrell Bank radar \citep{lovell1947}, at a frequency of 72~MHz and a transmitter power of 150 kW, likewise with a narrow beam. It observed a peak of 200 echoes per minute at the same solar longitude. \citet{davies1955} describe a radar survey using Jodrell Bank from 1947 to 1954 which specifically ran on the expected Draconid peak days, and found only one significant return of the shower, in 1952, peaking at solar longitude 197$^\circ$. By this point the radar had been upgraded to have two independently steerable antennas \citep{aspinall1951}, which were pointed just north and south of due west. The time of maximum is somewhat uncertain, since the radiant was close to zenith, meaning that radar echoes were close to the horizon.

A moderate outburst in 1985 was observed by \citet{simek1986} with the Springhill meteor radar in Ottawa, Canada, which had a peak power of 1.8~MW and ran at 32.5~MHz. The high power of the radar meant that it was not normally used for meteor rate studies, because of the manual analysis required for each echo, but a small portion of the data around the shower peak was analysed carefully. The observed rates were corrected for the gain pattern of the antennas, the first step required to obtain fluxes, though formal collecting areas were not calculated. Overdense and underdense echoes were treated separately, and the sporadic background, based on observations in 1967, was subtracted. There was a peak at 9 UT on 1985 October 8 (solar longitude 195.2$^\circ$), with a comparable second peak in the overdense (larger meteoroid) rates one hour later, though the scatter in the number of echoes was large and the peak time therefore uncertain.

\citet{simek1999} observed another Draconid outburst in 1998 using the Ond\v{r}ejov Observatory radar. This radar has a broad beam, which was directed perpendicular to the radiant, and only underdense echoes were used for the analysis. The frequency of the radar is 37.5~MHz, and the transmitter power 20~kW. This was the first radar flux measurement of the Draconids, using the gain pattern of the antennas to determine the collecting area of the system. The authors found a peak flux of 0.162 meteoroids~km$^{-2}$~hr$^{-1}$ at a solar longitude of 195.1$^\circ$.

A 1999 Draconid outburst at radar magnitudes (faint meteors) was post-predicted by \citet{egal2019}, and a search of data from the early CMOR radar, which had been moved to Alert, Canada, for the 1999 Leonids, found observations of the outburst. More details are given in the following sections. The 2005 outburst was observed with CMOR \citep{campbell2006}, and is described in more detail below; the 2011 and 2012 outbursts were likewise observed with CMOR \citep{ye2013,ye2014}. The 2011 and 2012 outbursts were also seen by the Shigaraki middle and upper atmosphere (MU) radar in Japan \citep{kero2012,fujiwara2016}. This radar, at 46.5~MHz and 1~MW power, detects primarily head echoes, and therefore does not have to use statistical methods to calculate shower activity, since the orbits of individual meteors can be determined. Only 13 Draconid meteors were observed in 2011, because the radiant was below the horizon during the peak of the shower at solar longitude 195$^\circ$.
The radiant was also low in 2012, but 57 Draconids were detected, and correcting for the radiant elevation showed peak activity at a solar longitude of 195.6$^\circ$, in reasonable agreement with the CMOR results. The MU study also confirmed that the 2012 outburst was very rich in faint meteors, which explains why it was not a major outburst in optical observations. Simulations by \citet{Kastinen2017} support the enhanced delivery of smaller masses to the Earth in 2012 compared to 2011.

The mass index of the shower, $s$, describes how mass is distributed in the stream by particle size. An $s$ of 2 indicates that there is equal mass in each size bin; $s>2$ indicates more mass in small particles, and $s<2$ means there is more mass in large particles. Typical shower mass indices fall between 1.70 and 2.0, with sporadic meteors normally having an index in the range 2.0 to 2.3. There is a wide scatter in mass index measurements for the Draconids. In 1985, \citet{simek1994} found that the mass index was 2.06 for underdense echoes and 2.11 for larger, overdense meteors. In 1998, visual observations showed a scattered $s$ between 1.75 and 2.36 \citep{arlt1998}, while slightly fainter video observations gave an $s$ of 1.81 \citep{watanabe1999}. In 2005, video observations gave an $s$ of 1.87 at the peak and 1.78 for the full distribution \citep{koten2007}, while the CMOR data gave $s=2.0$ \citep{campbell2006}. In 2011, the
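To make the meaning of the mass index concrete, the following minimal sketch (our own illustration, based only on the standard definition $dN/dm \propto m^{-s}$, not on code from any of the surveys cited above) shows how the fraction of stream mass carried by small particles depends on $s$:

\begin{verbatim}
# Our own illustrative sketch: for dN/dm proportional to m^-s, the mass per
# logarithmic mass bin scales as m^(2-s), so s = 2 gives equal mass per bin,
# s > 2 puts more mass in small particles, s < 2 in large particles.
import numpy as np

def mass_fraction_below(m_split, m_min, m_max, s):
    """Fraction of total mass in particles below m_split."""
    # Integrated mass between a and b: (b^(2-s) - a^(2-s)) / (2-s) for s != 2,
    # and ln(b/a) in the limiting case s == 2.
    def integrated_mass(a, b):
        if np.isclose(s, 2.0):
            return np.log(b / a)
        return (b**(2 - s) - a**(2 - s)) / (2 - s)
    return integrated_mass(m_min, m_split) / integrated_mass(m_min, m_max)

# With s = 2 there is equal mass per decade, so ten decades of mass from
# 1e-12 kg to 1e-2 kg put exactly half the mass below 1e-7 kg:
print(mass_fraction_below(1e-7, 1e-12, 1e-2, s=2.0))   # 0.5
# With s = 2.11 (the 1985 overdense value), small particles carry more:
print(mass_fraction_below(1e-7, 1e-12, 1e-2, s=2.11))  # ~0.78
\end{verbatim}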