\sigma^2_{\rm zf}\dff\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{N_0}{|H(e^{-ju})|^2}\;du\ .$$ Therefore, the decision-point signal-to-noise ratio for any CIR realization $\bh$ and $\snr=\frac{P_0}{N_0}$ is $$\label{eq:zf_snr} {\gamma_{\rm zf}(\snr,\bh)}\dff\snr \bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{|H(e^{-ju})|^2}\;du\bigg]^{-1}.$$ MMSE linear equalizers strike a balance between ISI reduction and noise enhancement by minimizing the combined residual ISI and noise level. Given the combined effect of the ISI channel and its corresponding matched filter, the MMSE linear equalizer in the $D$-domain is [@cioffi Equation (3.148)] $$\begin{aligned} \label{eq:mmse_eq} W_{\rm mmse}(D)= \frac{\|\bh\|}{H(D)H^*(D^{-*})+\snr^{-1}}\ .\end{aligned}$$ The combined variance of the residual ISI and the noise at the output of the equalizer is $$\label{eq:mmse_var} \sigma^2_{\rm mmse}\dff\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{N_0}{|H(e^{-ju})|^2+\snr^{-1}}\;du\ .$$ Hence, the *unbiased*[^4] decision-point signal-to-noise ratio for any CIR realization $\bh$ and $\snr$ is $$\begin{aligned} \label{eq:mmse_snr}{\gamma_{\rm mmse}(\snr,\bh)}\dff\bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr|H(e^{-ju})|^2+1}\; du\bigg]^{-1}-1\ .\end{aligned}$$ Diversity Gain {#sec:diversity} -------------- For a transmitter sending information bits at spectral efficiency $R$ bits/sec/Hz, the system is said
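The two decision-point SNRs are plain functionals of $|H(e^{-ju})|^2$ and can be sanity-checked numerically. Below is a minimal sketch (our illustration, not the paper's code) that evaluates (\[eq:zf\_snr\]) and (\[eq:mmse\_snr\]) for a finite-length CIR by averaging over a dense frequency grid; the channel taps and SNR are placeholder values.

```python
import numpy as np

def equalizer_snrs(h, snr, n_grid=4096):
    """Numerically evaluate gamma_zf and gamma_mmse for a finite CIR h.

    h   : CIR taps (assumed unit-norm, per the matched-filter normalization)
    snr : P0/N0 on a linear scale
    The (1/2pi) integrals over u in [-pi, pi) are approximated by the mean
    over a uniform frequency grid.
    """
    u = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    k = np.arange(len(h))
    H2 = np.abs(np.exp(-1j * np.outer(u, k)) @ h) ** 2        # |H(e^{-ju})|^2
    gamma_zf = snr / np.mean(1.0 / H2)                        # (eq:zf_snr)
    gamma_mmse = 1.0 / np.mean(1.0 / (snr * H2 + 1.0)) - 1.0  # (eq:mmse_snr)
    return gamma_zf, gamma_mmse

# illustrative two-tap channel at 10 dB
print(equalizer_snrs(np.array([0.9, np.sqrt(1 - 0.81)]), snr=10.0))
```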
normalization of the cross section in each bin is allowed to float with four scaling parameters $\bm{x}=(x_0, x_1, x_2, x_3)$, where the index runs from the lowest energy bin to the highest. We further assume that the ratio of the CC to NC cross section is fixed and that there is no additional flavor dependence. Thus, $\bm{x}$ is applied identically across all flavors and interaction channels on the CSMS prediction. In order to model the effect of varying the cross section on the arrival flux, we used [`nuSQuIDS`]{} [@Delgado:2014kpa]. This allows us to account properly for destructive CC interactions as well as for secondaries from NC interactions and tau regeneration. The Earth density is set to the Preliminary Reference Earth Model (PREM) [@Dziewonski:1981xy] and the GR cross section is kept fixed to the Standard Model prediction. We also include the nuisance parameters given in \[tab:nuisances\], for a single-power-law astrophysical flux, pion
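As a rough illustration of the bin-wise scaling (not the analysis code; the bin edges and values below are hypothetical placeholders), $\bm{x}$ acts multiplicatively on the baseline prediction inside each energy bin:

```python
import numpy as np

bin_edges = np.logspace(4, 7, 5)    # hypothetical bin edges in GeV (4 bins)
x = np.array([1.0, 1.2, 0.8, 1.1])  # scaling parameters (x0, ..., x3)

def scaled_xsec(energy, sigma_csms):
    """Apply x_j to the CSMS prediction in the bin containing `energy`.

    Identical for all flavors and for CC and NC, so the CC/NC ratio is fixed.
    """
    j = np.clip(np.digitize(energy, bin_edges) - 1, 0, len(x) - 1)
    return x[j] * sigma_csms
```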
- \deg(E_2)F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-2}.$ Let $r_1 = \operatorname{{rank}}(E_1)$ and $r_2 = \operatorname{{rank}}(E_2)$, and without loss of generality assume that $r_1 \leq r_2$. Then the bases of $N^k(X)$ are given by $$N^k(X) = \begin{cases} \Big( \{ \zeta_1^i \cdot \zeta_2^{k - i}\}_{i = 0}^k, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^{k - 1} \Big) & \text{if } k < r_1\\ \Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = 0}^{r_1 - 1}, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^{r_1 - 1} \Big) & \text{if } r_1 \leq k < r_2 \\ \Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = t+1}^{r_1 - 1}, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = t}^{r_1 - 1} \Big) & \text{if } k = r_2 + t, \text{ where }
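The case analysis above translates directly into an enumeration of monomials. A small sketch (our illustration; monomials encoded as tuples $(f,i,j)$ standing for $F^f\zeta_1^i\zeta_2^j$):

```python
def basis_Nk(k, r1, r2):
    """List the basis monomials of N^k(X) per the three cases (r1 <= r2)."""
    if k < r1:
        pure  = [(0, i, k - i) for i in range(k + 1)]
        mixed = [(1, j, k - j - 1) for j in range(k)]
    elif k < r2:
        pure  = [(0, i, k - i) for i in range(r1)]
        mixed = [(1, j, k - j - 1) for j in range(r1)]
    else:
        t = k - r2
        pure  = [(0, i, k - i) for i in range(t + 1, r1)]
        mixed = [(1, j, k - j - 1) for j in range(t, r1)]
    return pure + mixed

print(basis_Nk(2, 2, 3))  # e.g. the case r1 <= k < r2
```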
current density, $J_{\Psi}(x,t) =(\hbar/m)\textrm{Im}\left[\Psi^{*}\nabla\Psi\right]$, can be easily put in the form $$\frac{m}{\hbar}J_{\Psi}(x,t) =(\nabla\theta)\rho_{\Psi}+\frac12 q \left[\rho_{\Psi}+|\phi|^{2} (A_{2}^{2} -A_{1}^{2})\right], \label{eq:current}$$ with $\rho_{\Psi}(x,t)=|\phi(x,t)|^{2}\left(A_{1}^{2} + A_{2}^{2}+ 2A_{1}A_{2}\cos\left(qx + \varphi\right)\right)$ being the total density. Therefore, a negative flux, $J_{\Psi}(x,t)<0$, corresponds to the following inequality for the density: $$\textrm{sign}[\eta(x,t)]\,\rho_{\Psi}(x,t)< \frac{1}{|\eta(x,t)|}|\phi(x,t)|^{2} (A_{1}^{2} -A_{2}^{2}),$$ where we have defined $\eta(x,t)= 1 +2\nabla\theta(x,t)/q$. Later on we will show that $\eta(x,t)<0$ corresponds to a *classical regime*, whereas for $\eta(x,t)>0$ the backflow is a purely quantum effect, without any classical counterpart. Therefore, in the *quantum regime*, backflow takes place when the density is below the following critical threshold: $$\rho_{\Psi}^{crit}(x,t)= \frac{q}{q +2\nabla\theta(x,t)}|\phi(x,t)|^{2} (A_{1}^{2} -A_{2}^{2}). \label{eq:denscrit}$$ This is a fundamental relation that allows one to detect backflow by a density measurement. It applies to any wavepacket of the form (\[eq:bragg\]), including the superposition of two plane waves discussed in [@Jus; @year]. We remark that, while in the ideal case of plane waves backflow repeats periodically at specific time
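For the special case of two superposed plane waves ($\phi\equiv 1$, $\nabla\theta = k$ constant) the equivalence between $J_{\Psi}<0$ and the threshold (\[eq:denscrit\]) is easy to verify numerically; a sketch with illustrative parameters in the quantum regime $\eta>0$:

```python
import numpy as np

# Psi = e^{i k x} (A1 + A2 e^{i(q x + varphi)}): phi = 1, grad(theta) = k
A1, A2, q, k, varphi = 1.0, 0.6, 2.0, 0.3, 0.0
x = np.linspace(0.0, 2 * np.pi / q, 1001)

rho = A1**2 + A2**2 + 2 * A1 * A2 * np.cos(q * x + varphi)  # total density
J = k * rho + 0.5 * q * (rho + (A2**2 - A1**2))             # (m/hbar) J_Psi

rho_crit = q / (q + 2 * k) * (A1**2 - A2**2)                # Eq. (denscrit)
# in the quantum regime (eta = 1 + 2k/q > 0) both criteria coincide:
print(np.mean(J < 0), np.mean(rho < rho_crit))  # equal backflow fractions
```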
estimate obtained here can be used for global SA involving general conditional moments, but it includes as a special case Sobol sensitivity indices. Note also that an extension of the approach developed in our work is simultaneously proposed in the context of sliced inverse regression [@loubes11].\ The paper is organized as follows. Section \[sa\] first recaps variance-based methods for global SA. In particular, we point out which type of nonlinear functional appears in sensitivity indices. Section \[model\] then describes the theoretical framework and the proposed methodology for building an asymptotically efficient estimator. In Section \[examples\], we focus on Sobol sensitivity indices and study numerical examples showing the good behavior of the proposed estimate. We also illustrate its interest on a reservoir engineering example, where uncertainties on the geology propagate to the potential oil recovery of a reservoir. Finally, all proofs are postponed to the appendix. Global sensitivity analysis {#sa} =========================== In many applied fields,
the schemes discussed in [@PRL; @LM] are therefore probably more thought experiments than realistic possibilities. Here we come closer to experiments by studying schemes involving only individual position measurement of the particles, without any necessity of accurate localization. With Bose condensed gases of metastable helium atoms, micro-channel plates indeed allow one to detect atoms one by one [@Saubamea; @Robert]. The first idea that then comes to mind is to consider the interference pattern created by a DFS, a situation that has been investigated theoretically by several authors [@Java; @WCW; @CGNZ; @PS; @Dragan], and observed experimentally [@WK]. The quantum effects occurring in the detection of the fringes have been studied in [@Polkovnikov-1; @Polkovnikov-2], in particular the quantum fluctuations of the fringe amplitude; see also [@DBRP] for a discussion of fringes observed with three condensates, in correlation with population oscillations. But, for obtaining quantum violations of Bell type inequalities, continuous position measurements are not necessarily
natural result since $\hat p \psi\rangle_q=-i\hbar \frac{\partial\psi}{\partial \hat q}\rangle_q$ if we anticipate the Schrödinger representation. Then if $\psi=\mbox{constant}$, we have $\hat p\;\rangle_q=0$. If we write a general function of the set $\{\hat q,\hat p\}$ in [*normal order*]{}, we obtain $$\begin{aligned} \psi(\hat q,\hat p, t)\rangle_q=\psi(\hat q, t)\rangle_q\rightarrow \psi (q,t).\end{aligned}$$ By introducing a dual symbol, the [*standard*]{} bra, $_q\langle\;$, we can formally write $$\begin{aligned} _q\langle q|\psi(\hat q,\hat p,t)\rangle_q=\psi(q,t).\end{aligned}$$ In this way we arrive at the usual wave function in a configuration space. To choose the $p$-representation, we introduce a standard ket satisfying $\hat q\rangle_p=0$ since in this representation $\hat q=i\hbar \frac{\partial}{\partial p}$. In this case we write the operators in [*anti-normal order*]{} to obtain any element in the $p$-representation. Thus one can derive the two equations (\[eq:ConP\]) and (\[eq:QHJ\]) directly from the Schrödinger equation written in the [*algebraic*]{} form $$\begin{aligned} i\hbar\frac{\partial}{\partial t}\left(Re^{iS/\hbar}\right)\rangle_q=H(\hat q, \hat p)Re^{iS/\hbar}\rangle_q. \label{eq:ASeqn}\end{aligned}$$ Dirac remarked that equation (\[eq:ConP\]) was similar to the
aforementioned issues, underwater manipulation becomes a challenging task when low overshoot and good transient and steady-state performance must be achieved. Motivated by the above, in this work we propose a force-position control scheme which does not require any knowledge of the UVMS dynamic parameters, the environment model, or the disturbances. More specifically, it tackles all the aforementioned issues and guarantees a predefined behavior of the system in terms of desired overshoot and prescribed transient/steady-state performance. Moreover, measurement noise, UVMS model uncertainties (a challenging issue in underwater robotics) and external disturbances are considered during the control design. In addition, the complexity of the proposed control law is significantly low. It is actually a static scheme involving only a few calculations to output the control signal, which enables its implementation on most current UVMSs. The rest of this paper is organized as follows: in Section 2 the mathematical model of UVMS
D =(X_1, \cdots, X_d)$ under different scenarios is represented by different sets of data observed or generated under those scenarios, because specifying accurate models for $D$ is usually very difficult. Here, we suppose that there are $l$ scenarios. Let $n_j$ be the sample size of $D$ in the $j^{th}$ scenario, $j=1, \cdots, l$, and let $n:= n_1 +\cdots+n_l$. More precisely, suppose that the behavior of $D$ is represented by a collection of data $X=(X_1, \cdots, X_d) \in \mathbb{R}^n \times \cdots \times \mathbb{R}^n$, where $X_i=(X^{i, 1}, \cdots, X^{i, l}) \in \mathbb{R}^n$ and $X^{i, j} =(x^{i, j}_1, \cdots, x^{i, j}_{n_j}) \in \mathbb{R}^{n_j}$ is the data subset that corresponds to the $j^{th}$ scenario with respect to $X_i$. For each $j=1, \cdots, l$ and $h=1, \cdots, n_j$, $X^j_h:=\left(x^{1, j}_h, x^{2, j}_h, \cdots, x^{d, j}_h\right)$ is the data subset that corresponds to the $h^{th}$ observation of $D$ in the $j^{th}$ scenario, and can be based on historical
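A minimal sketch of this data layout (sizes hypothetical) may help with the indexing: each $X_i$ is a length-$n$ vector obtained by stacking its $l$ scenario blocks $X^{i,j}$.

```python
import numpy as np

n_j = [100, 150, 120]            # sample sizes n_1, ..., n_l
d, l, n = 3, len(n_j), sum(n_j)

rng = np.random.default_rng(0)
X = rng.standard_normal((d, n))  # row i is X_i = (X^{i,1}, ..., X^{i,l})

offsets = np.concatenate(([0], np.cumsum(n_j)))
def block(i, j):
    """Return X^{i,j}, the scenario-j block of factor X_i (0-indexed)."""
    return X[i, offsets[j]:offsets[j + 1]]
```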
such as the precise location of the contact line and a slight bending of the windows which are under stress [@jltp]. When cooling down a homogeneous mixture with concentration higher than the tri-critical value $X_{t}$, J.P. Romagnan et al. [@romagnan] found that a superfluid film formed between the bulk mixture and a metallic substrate. As they approached $T_{eq}$, where separation into “c-” and “d-” phases occurred, they observed a film thickness diverging as $(T-T_{eq})^{-1/3}$. This behaviour is characteristic of the van der Waals attraction by the substrate, which is stronger on the densest phase [@sornette]. One used to believe that van der Waals forces were the only long range forces in this problem, so that the film thickness should diverge to infinity, and complete wetting by the superfluid d-phase should occur. However, Romagnan et al. only measured this thickness up to about 20 atomic layers (60 Å). If other forces act on the film
approach [@2012NewA...17..254D]. Regarding the sequence of steady solutions with different values of specific angular momentum, the hysteresis-like behavior of the shock front was proposed in the latter work. Different geometrical configurations with a polytropic or isothermal equation of state were studied in the post-Newtonian approach with a pseudo-Kerr potential [@SAHA201610]. In the general relativistic description, the dependence of the flow properties (Mach number, density, temperature and pressure) in the close vicinity of the horizon was studied by [@Das201581], and the asymmetry of prograde versus retrograde accretion was shown. The complete picture of the accreting microquasar, consisting of the Keplerian cold disc and the low angular momentum hot corona, the so-called two-component advective flow (TCAF), was described by [@1995ApJ...455..623C]. This model combined with the propagating oscillatory shock (POS) model was later used to explain the evolution of the low-frequency QPO during the outburst in several microquasars and it was also
IBC, and [@grabowski2013fracIBC], which covers a monodimensional wave equation with a fractional admittance. The objective of this paper is to prove the asymptotic stability of the multidimensional wave equation (\[eq:=00005BMOD=00005D\_Wave-Equation\]) coupled with a wide range of IBCs (\[eq:=00005BMOD=00005D\_IBC\]) chosen for their physical relevance. All the considered IBCs share a common property: the Laplace transform of their kernel is a positive-real function. A common method of proof, inspired by [@matignon2014asymptotic], is employed that consists in formulating an abstract Cauchy problem on an extended state space (\[eq:=00005BSTAB=00005D\_Abstract-Cauchy-Problem\]) using a realization of each impedance operator, be it finite- or infinite-dimensional; asymptotic stability is then obtained with the ABLV theorem, although a less general alternative based on the invariance principle is also discussed. In spite of the apparent unity of the approach, we are not able to provide a single, unified proof: this leads us to formulate a conjecture at the end of this work,
and $\hat{M}$ are just a shorthand notation for the same underlying four-manifold $M^{(4)}$ with different metrics, with ${\bf g}$ Lorentzian, whereas ${\hat{\bf g}}$ is Riemannian. As we are not concerned with global problems, we may restrict ourselves to a tubular neighborhood of $\Theta({\cal S})$ (containing also $\hat{\Theta}({\cal S})$). For the moment ${\bf g}$ and ${\bf \hat{g}}$ are arbitrary. This coexistence of both Riemannian and Lorentzian metrics on the same region of the manifold will, in our opinion, avoid a lot of problems when thinking of the geometry involved.\ We are going to identify, modulo the action of the diffeomorphisms, the Lorentzian and Riemannian geometry along a common embedded space-like hypersurface, determined by the constraints associated with the Einstein equations. Geometry ======== In order to define the variables of interest, we need to characterise the foliations employed and the related lapse and shift in both the
on the gonality of very general abelian varieties: A very general abelian variety of dimension at least $2k-1$ has gonality at least $k+1$.\[Vweakconj\] The central result of this paper is the proof of this conjecture. It is obtained by generalizing Voisin’s method to powers of abelian varieties. This allows us to rule out the existence of positive dimensional $\text{CCS}_k$ in very general abelian varieties of dimension $g$ for $g$ large compared to $k$. For $k\geq 3$, a very general abelian variety of dimension at least $2k-2$ has no positive dimensional orbits of degree $k$, i.e. $\mathscr{G}(k)\leq 2k-2$. This gives the following lower bound on the gonality of very general abelian varieties: For $k\geq 3$, a very general abelian variety of dimension $\geq 2k-2$ has gonality at least $k+1$. In particular Conjecture \[Vweakconj\] holds. Another approach to generalizing Theorem \[P\] is the following: Observe that a nonconstant morphism from a hyperelliptic curve $C$ to an abelian surface
steps to form the pillars and the cantilever (Section \[struct\_scan\]). In the subsequent step, we identify the scanning probes that contain single NV centers (Section \[sec:precharacterization\]). Finally, we mount the selected scanning probes to a tuning fork based AFM head (Section \[sec:transfer\]). Diamond material and initial sample preparation \[sec:initialstep\] ------------------------------------------------------------------- Our nano-fabrication procedure for the all-diamond scanning probe devices is based on commercially available, high purity, synthetic diamond grown by chemical vapor deposition (Element Six, electronic grade, \[N\]${}^s$$<$$5~$ppb, B$<$$1~$ppb).[@e6url] The $500~\mu$m thick diamonds are processed into 30-$100~\mu$m thick diamond plates by laser cutting and subsequent polishing (Delaware Diamond Knives, USA or Almax Easy Lab, Belgium [@Almaxurl]). While our process can be applied to a large range of thicknesses, we found $50~\mu$m thick plates to form the best compromise between mechanical stability, ease of handling and reasonable processing times (see Section \[sec:deepetch\]). The surface roughness of the starting diamond plates is typically $0.7~$nm, as evidenced
principle. Bimetric formulation ==================== The key to the bimetric formulation of the variational principle relies on the fact that one can invert the relations (\[hPhi0\]) and (\[fP0\]) to express, up to gauge transformation terms for $\Phi_{ij}$ and $P_{ij}$ that drop from the action, the prepotentials in terms of the metrics when the latter satisfy the Hamiltonian constraints (\[H1constraint\]) and (\[H2constraint\]). Remarkably enough, the expressions take almost the same form and read $$\Phi_{ij} =- \frac{1}{4} \left[ {\epsilon}_{irs} \triangle^{-1} \left(\partial^r h^{s}_{\; \; j} \right)+ {\epsilon}_{jrs} \triangle^{-1} \left( \partial^r h^{s}_{\; \; i} \right) \right] \label{Phih1}$$ and $$P_{ij} = - \frac{1}{4} \left[ {\epsilon}_{irs} \triangle^{-1} \left(\partial^r f^{s}_{\; \; j} \right)+ {\epsilon}_{jrs} \triangle^{-1} \left( \partial^r f^{s}_{\; \; i} \right) \right]. \label{Pf1}$$ One may easily verify that if one replaces (\[Phih1\]) and (\[Pf1\]) in (\[hPhi0\]) and (\[fP0\]) and uses the constraints (\[H1constraint\]) and (\[H2constraint\]), one recovers indeed $h_{ij}$ and $f_{ij}$, with some definite $u_i$ and $v_i$
${\rm strain} = (a - a_0) / a_0 \times 100 \% $, where $a$ and $a_0$ are the lattice constants with and without strain, respectively. With the relaxed structure, the spin-polarized self-consistent calculation is performed to obtain the charge density. Finally, the magnetic anisotropy energies are determined by calculating the total energies for different Néel vector directions including spin-orbit coupling. Table \[tab1\] shows the lattice constants and the magnetic moments of the Mn site in MnX without strain. All of the values are very close to those from previous results [@wang2013first; @wang2013structural]. The local magnetic moments of the X site are zero for all materials. Figures \[fig:Ir\]–\[fig:Pt\] show the differences in the total energies as a function of the strain for MnIr, MnRh, MnNi, MnPd, and MnPt, respectively, where $E_{abc}$ is the ground state energy with the Néel vector along the $[abc]$ direction. The reference energy levels from each figure,
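As a small numerical illustration of the strain convention (the lattice constant below is a placeholder, not a value from Table \[tab1\]):

```python
a0 = 3.86  # hypothetical relaxed lattice constant in Angstrom
strains = [-2.0, -1.0, 0.0, 1.0, 2.0]    # percent
a_values = [a0 * (1 + s / 100.0) for s in strains]
# the anisotropy at each strain is then a total-energy difference such as
# E_100 - E_001 between Neel vector directions, taken from the DFT runs
```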
\, Q_j = \lambda q_j, \,\, \lambda \in \mathbb{R}\setminus \{0\},$$ the potential scales by a factor of $\lambda$, and the energy by a factor of $\lambda^2$, leaving the variational problems (and their solutions) unchanged. Therefore, the only two distinct cases that need to be considered are $Q = 0$ and $Q = 1$. The second (classical) problem we discuss is the characterization of critical manifolds $C$ (specifically, curves) on which the gradient of the potential of a given charge configuration vanishes. More precisely, the problem in ${\mathbb{R}}^2$ is: [p2]{} Let $\mu$ be a locally finite charge distribution with planar support $\Sigma \subset {\mathbb{R}}^2$. Can the critical manifold $C \subset {\mathbb{R}}^3$ of $\mu$, defined by $$\label{8} C = \{ x \in \mathbb{R}^3 : \nabla U^{\mu}_{\Sigma}(x) = 0 \},$$ contain a curve in ${\mathbb{R}}^2$? This problem has numerous applications, some of which are discussed in this paper. One of its most obvious
that promotes spatially piecewise homogeneous solutions without compromising sharp discontinuities between neighboring pixels. This property is important to handle the type of spatial correlation found in many hyperspectral unmixing applications [@shi2014surveySpatialRegUnmixing; @afonso2011augmented]. Although important to mitigate the ill-posedness of the inverse problem, the use of spatial regularization in spectral-variability-aware HU introduces interdependencies among abundance solutions for different image pixels. This in turn leads to intricate, large-scale and computationally demanding optimization problems. Even though some approaches have been investigated to accelerate the minimization of convex TV-regularized functionals [@chambolle2017acceleratedAlternatingDescentDykstra; @chambolle2011primalDualTV], this is still a computationally demanding operation which, in the context of HU, has been primarily addressed using variable splitting (e.g. ADMM) techniques [@thouvenin2016hyperspectralPLMM; @drumetz2016blindUnmixingELMMvariability; @imbiriba2018GLMM]. Such complexity is usually incompatible with recent demands for timely processing of the vast amounts of remotely sensed data required by many modern real-world applications [@ma2015BigDataRemoteSensing; @chi2016BigDataRemoteSensing]. Thus, it is desirable to search for faster and lower-complexity strategies that
circular harmonics, Gabor filters offer another type of *a priori* knowledge that can be used to reinforce the convolutional kernels of CNNs. Gabor filters achieve optimal joint time-frequency resolution from a signal processing perspective [@Gabor1946], and are thus appropriate for low-level and middle-level feature extraction (which is exactly the function of the bottom layers of CNNs). Furthermore, research has revealed that the shape of Gabor filters is similar to that of the receptive fields of simple cells in the primary visual cortex [@Hubel1965CatCortex; @Jones1987EvaluationCat; @Pollen1983Bio; @Alex2012AlexNet], which means that using Gabor filters to extract low-level and middle-level features can be given a biological interpretation. In fact, as illustrated by Fig. \[fig:1stlayerKer\], many filters in CNNs (especially those in the first several layers) look very similar to Gabor filters. Inspired by these aspects, some attempts have been made to utilize the Gabor *a priori* knowledge to reinforce CNN kernels, i.e.
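For concreteness, a standard real-valued Gabor kernel of the kind that could be used to reinforce first-layer CNN kernels can be generated as follows (the parameterization is the textbook one; the values are illustrative, not taken from any cited method):

```python
import numpy as np

def gabor_kernel(size=11, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a cosine
    carrier of wavelength lam, rotated by orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

# a small bank of orientations, e.g. to modulate convolutional kernels
bank = np.stack([gabor_kernel(theta=t)
                 for t in np.linspace(0, np.pi, 4, endpoint=False)])
```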
display a BHB starting at a semi-major axis of $a_0=0.1$ AU and with two very different initial eccentricities, so as to illustrate the main idea of this article: (i) $e_0=0.05$ (thin colored lines), and (ii) an extreme case, $e_0=0.999$ (thick colored lines). Along the harmonics we mark several particular moments with dots, where the labels show the time before the coalescence of the binary and the corresponding orbital eccentricities. The two black solid curves depict the noise curves ($\sqrt{f\,S_h(f)}$) for LISA and LIGO in its advanced configuration. Although we have chosen a very high eccentricity for the second case in this example, we note that lower eccentricities can also be inaudible to LISA (see discussion). \[fig.harmonics\] [Figure: harmonics.eps] (ii) Increasing the eccentricity shifts the peak of the relative power of the GW harmonics towards higher frequencies (see Fig. 3 of [@Peters63]). Hence, more eccentric orbits emit their maximum power at frequencies farther away
problem defined by $F$ is NP-hard even when $|L|=2$ as it can be used to solve the independent set problem on $G$. It can also be used to solve coloring with $k$ colors when $|L|=k$. The optimization problem can be solved in polynomial time using dynamic programming when $G$ is a tree [@BB72]. More generally dynamic programming leads to polynomial optimization algorithms when the graph $G$ is chordal (triangulated) and has bounded tree-width. Min-sum (max-product) belief propagation [@WJ08; @KF09] is a local message passing algorithm that is equivalent to dynamic programming when $G$ is a tree. Both dynamic programming and belief propagation aggregate local costs by sequential propagation of information along the edges in $E$. For $i \in V$ we define the value function $f_i:L\to{\mathbb{R}}$, $$\begin{aligned} \label{eq:f} f_i(\tau) = \min_{\substack{x \in L^V\\ x_i=\tau}} F(x).\end{aligned}$$ In the context of MRFs the value functions are also known as *max-marginals*. The value
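To make the computation of (\[eq:f\]) concrete, here is a sketch of min-sum message passing on a tree (our illustration, not code from the cited works); for clarity it re-roots the recursion at every node, which costs a factor of $|V|$ over the standard two-pass variant:

```python
from collections import defaultdict

def max_marginals(nodes, edges, unary, pair, labels):
    """f[i][a] = min over labelings x with x_i = a of F(x), on a tree.

    unary[i][a]      : local cost for x_i = a
    pair(i, j, a, b) : edge cost for (x_i, x_j) = (a, b)
    """
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j); adj[j].append(i)

    def message(src, dst):
        # min cost of the subtree hanging off src, as a function of x_dst
        incoming = [message(k, src) for k in adj[src] if k != dst]
        return {b: min(unary[src][a] + pair(src, dst, a, b)
                       + sum(m[a] for m in incoming) for a in labels)
                for b in labels}

    return {i: {a: unary[i][a] + sum(message(k, i)[a] for k in adj[i])
                for a in labels} for i in nodes}

# chain 0-1-2 with Potts edge costs; f holds the max-marginals f_i(tau)
f = max_marginals([0, 1, 2], [(0, 1), (1, 2)],
                  {0: {0: 0.0, 1: 2.0}, 1: {0: 1.0, 1: 0.0}, 2: {0: 0.5, 1: 0.0}},
                  lambda i, j, a, b: 0.0 if a == b else 1.0, [0, 1])
```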
$\eta$ $$k_{R \rightarrow P}=\frac{(\eta^2/4+\omega_a^2)^{1/2}-\eta/2}{\omega_a}\frac{\omega_0}{2\pi} \exp(-\Delta E/k_BT).$$ Here $\omega_a$ stands for the positive-valued angular frequency of the unstable state at the barrier, and $\omega_0$ is the angular frequency of the metastable state at $x=R$. For strong friction, $\eta\gg\omega_a$, the above formula leads to the common Kramers result $$k_{R \rightarrow P}=\frac{\omega_0 \omega_a}{2\pi\eta}\exp(-\Delta E/k_BT),$$ and reproduces the TST result, eq. (\[cusp\]), after letting the barrier frequency tend to infinity, $\omega_a\rightarrow\infty$, with $\eta$ held fixed. Moreover, as pointed out by Northrup and Hynes,[@Hynes1] in the weakly adiabatic case the barrier “point” is a negligible fraction of the high-energy barrier region, so that the rate can be influenced by medium relaxation in the wells. The full rate constant for a symmetric reaction is then postulated in the form $$k_{WA}=\left(1+2k^a_{WA}/k_D\right)^{-1}k_{WA}^a,$$ where $k_{WA}^a\approx k^{cusp}$ and $k_D$ is the well (solvent or medium) relaxation rate constant, which for a harmonic potential and a high barrier ($\Delta E\ge 5k_BT$) within 10%
$\eta$ $$k_{R \rightarrow P}=\frac{(\eta^2/4+\omega_a^2)^{1/2}-\eta/2}{\omega_a}\frac{\omega_0}{2\pi} \exp(-\Delta E/k_BT).$$ Here $\omega_a$ stays for the positive-valued angular frequency of the unstable state at the barrier, and $\omega_0$ is an angular frequency of the metastable state at $x=R$. For strong friction, $\eta>>\omega_a$ the above formula leads to a common Kramers result $$k_{R \rightarrow P}=\frac{\omega_0 \omega_a}{2\pi\eta}\exp(-\Delta E/k_BT),$$ and reproduces the TST result eq. (\[cusp\]) after letting the barrier frequency tend to infinity $\omega_a\rightarrow\infty$ with $\eta$ held fixed. Moreover, as pointed out by Northrup and Hynes,[@Hynes1] in a weakly adiabatic case, the barrier “point” is a negligible fraction of the high energy barrier region, so that the rate can be influenced by medium relaxation in the wells. The full rate constant for a symmetric reaction is then postulated in the form $$k_{WA}=\left(1+2k^a_{WA}/k_D\right)^{-1}k_{WA}^a$$ where $k_{WA}^a\approx k^{cusp}$ and $k_D$ is the well (solvent or medium) relaxation rate constant which for a harmonic potential and a high barrier ($\Delta E\ge 5k_BT$) within 10%[memory_0][memory_1][memory_2][memory_3][memory_4][memory_5][memory_6][memory_7][memory_8][memory_9][memory_10][memory_11][memory_12][memory_13][memory_14][memory_15][memory_16][memory_17][memory_18][memory_19][memory_20][memory_21][memory_22][memory_23][memory_24][memory_25][memory_26][memory_27][memory_28][memory_29][memory_30][memory_31]
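The friction dependence of the prefactor is easy to check numerically; the following sketch (ours, with illustrative parameter values) evaluates the rate above and compares it against its strong-friction limit.

```python
# A minimal sketch (our illustration): the friction-dependent rate quoted above
# and its strong-friction (Kramers) limit, in illustrative units.
import numpy as np

def rate(eta, wa, w0, dE_over_kBT):
    prefactor = (np.sqrt(eta**2 / 4 + wa**2) - eta / 2) / wa
    return prefactor * w0 / (2 * np.pi) * np.exp(-dE_over_kBT)

wa, w0, dE = 1.0, 1.0, 5.0
eta = 100.0                                         # strong friction, eta >> wa
kramers = w0 * wa / (2 * np.pi * eta) * np.exp(-dE)
print(rate(eta, wa, w0, dE), kramers)               # agree up to O((wa/eta)^2)
print(rate(1e-9, wa, w0, dE), w0 / (2 * np.pi) * np.exp(-dE))  # eta -> 0 limit
```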
pressure, uniform density $\rho = \rho(t)$, and at rest at $t=0$. These assumptions lead to the unique solution of the Einstein field equations, and in the comoving co-ordinates the metric inside the black hole is given by $$\begin{aligned} ds^2 = dt^2 -R^2(t)\bbs \frac{dr^2}{1-k\,r^2} + r^2 d\theta^2 + r^2\sin^2\theta\,d\phi^2\ebs\end{aligned}$$ in units in which the speed of light in vacuum $c=1$, and where $k$ is a constant. The requirement of energy conservation implies that $\rho(t)R^3(t)$ remains constant. On normalizing the radial co-ordinate $r$ so that $$\begin{aligned} R(0) = 1\end{aligned}$$ one gets $$\begin{aligned} \rho(t) = \rho(0)R^{-3}(t)\end{aligned}$$ The fluid is assumed to be at rest at $t=0$, so $$\begin{aligned} \dot{R}(0) = 0\end{aligned}$$ Consequently, the field equations give $$\begin{aligned} k = \frac{8\pi\,G}{3} \rho(0)\end{aligned}$$ Finally, the solution of the field equations is given by the parametric equations of a cycloid: $$\begin{aligned} \nonumber t = \bb \frac{\psi + \sin\,\psi}{2\sqrt{k}} \eb \\ R = \frac{1}{2} \bb 1 + \cos\,\psi \eb\end{aligned}$$ From equation $(6)$ it is obvious that when $\psi = \pi$, i.e., when $$\begin{aligned} t = t_{s} =
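The parametric solution is simple to trace; a minimal sketch (ours, in illustrative units with $G=\rho(0)=1$) follows, showing $R$ shrinking from 1 to 0 as $\psi$ runs from $0$ to $\pi$, with the collapse time implied by the first parametric equation.

```python
# A minimal sketch (our illustration): tracing the cycloid solution above.
# R falls from 1 to 0 as psi runs from 0 to pi; the collapse time is
# t_s = pi/(2*sqrt(k)). Units with G = rho(0) = 1 are purely illustrative.
import numpy as np

k = 8 * np.pi / 3                      # k = 8*pi*G*rho(0)/3 with G = rho(0) = 1
psi = np.linspace(0, np.pi, 5)
t = (psi + np.sin(psi)) / (2 * np.sqrt(k))
R = 0.5 * (1 + np.cos(psi))
for ti, Ri in zip(t, R):
    print(f"t = {ti:.3f}, R = {Ri:.3f}")
print("collapse time t_s =", np.pi / (2 * np.sqrt(k)))
```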
field having cubic and quartic terms with strengths $g_2$ and $g_3$ respectively. The Lagrangian then consists of the free baryon and meson parts and the interaction part with minimal coupling, together with the nucleon mass $M$ and $m_\sigma$ ($g_\sigma$), $m_\omega$ ($g_\omega$), and $m_\rho$ ($g_\rho$), the masses (coupling constants) of the respective mesons: $$\begin{array}{rl} {\cal L} &= \bar \psi (i\rlap{/}\partial -M) \psi + \,{1\over2}\partial_\mu\sigma\partial^\mu\sigma-U(\sigma) -{1\over4}\Omega_{\mu\nu}\Omega^{\mu\nu}\\ \ &+ {1\over2}m_\omega^2\omega_\mu\omega^\mu -{1\over4}{\vec R}_{\mu\nu}{\vec R}^{\mu\nu} + {1\over2}m_{\rho}^{2} \vec\rho_\mu\vec\rho^\mu -{1\over4}F_{\mu\nu}F^{\mu\nu} \\ &- g_{\sigma}\bar\psi \sigma \psi~ -~g_{\omega}\bar\psi \rlap{/}\omega \psi~ -~g_{\rho} \bar\psi \rlap{/}\vec\rho \vec\tau \psi -~e \bar\psi \rlap{/}A \psi \end{array}$$ For the proper treatment of the pairing correlations and for a correct description of the scattering of Cooper pairs into the continuum in a self-consistent way, one needs to
that the charge noise is proportional to the dielectric loss tangent $\tan\delta$. We then calculate the dielectric loss tangent due to fluctuating TLS with electric dipole moments [@Phillips; @Classen1994; @Arnold1976]. At low frequencies we recover $1/f$ noise. At high frequencies $\tan\delta$ is proportional to $1/\sqrt{1+\left(J/J_c(\omega,T)\right)}$. In the saturation regime ($J\gg J_c(\omega,T)$), $\tan\delta$, and hence the charge noise, is proportional to $\sqrt{J_c(\omega,T)/J}$. Some TLS experiments [@Bachellerie1977; @Bernard1979; @Graebner1979] indicate that $J_c(\omega,T)\sim\omega^{2}T^{2}$, which implies that at high frequencies the charge noise and the dielectric loss tangent would increase linearly in frequency if $J\gg J_c(\omega,T)$. Unlike previous theoretical efforts, we use the standard TLS density of states, which is independent of energy, and still obtain charge noise that increases linearly with frequency, in agreement with the conclusions of Astafiev [*et al.*]{} [@Astafiev2004]. In applying the standard model of two level systems to Josephson junction devices, we consider a TLS that sits in the
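The linear-in-frequency behavior follows directly from the two displayed scalings; a small sketch of our own (illustrative units only) makes the bookkeeping explicit.

```python
# A minimal sketch (our illustration): saturation of the loss tangent,
# tan(delta) ~ 1/sqrt(1 + J/Jc), and the linear frequency dependence that
# follows in the saturated regime if Jc ~ omega^2 T^2 (so sqrt(Jc/J) ~ omega).
import numpy as np

def tan_delta(J, Jc):
    return 1.0 / np.sqrt(1.0 + J / Jc)   # up to a J-independent prefactor

J = 1e4
for omega in (1.0, 2.0, 4.0):
    Jc = omega**2                        # Jc ~ omega^2 at fixed T (illustrative)
    print(f"omega = {omega}: tan(delta) ~ {tan_delta(J, Jc):.4f}")
# Doubling omega doubles tan(delta) in this regime, i.e. the loss (and the
# charge noise) grows linearly with frequency.
```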
processing networks, as introduced by Harrison [@harrison2000brownian], have attracted a lot of attention starting with [@tassiulas1992maxweight], including some recent examples [@walton2014concave; @shah2014SFA; @maguluri2015heavy]. Switched networks are queueing networks where there are constraints on which queues can be served simultaneously. They effectively model a variety of interesting applications, exemplified by wireless communication networks and input-queued switches for Internet routers. The MaxWeight/BackPressure policy, introduced by Tassiulas and Ephremides for wireless communication [@tassiulas1992maxweight; @mckeown1996achieving], has been shown to achieve maximum throughput stability for switched networks. However, the provable delay bounds of this scheme scale with the number of queues in the network. As the scheme requires maintaining one queue per route-destination pair at each node, the scaling can be potentially very bad. For instance, Gupta and Javidi [@gupta2007routing] recently showed via a specific example that such an algorithm can result in very poor delay performance. Walton [@walton2014concave] proposed a proportional scheduler which achieves throughput
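For concreteness, here is a minimal sketch of our own (not from the cited works) of one MaxWeight step for an $N\times N$ input-queued switch, where the service constraint is a matching between input and output ports and MaxWeight picks the matching of largest total queue length.

```python
# A minimal sketch (our illustration): one MaxWeight scheduling step for an
# N x N input-queued switch. The feasible schedules are matchings between
# input and output ports; MaxWeight maximizes the total matched queue length.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
N = 4
Q = rng.integers(0, 10, size=(N, N))     # Q[i, j]: packets at input i for output j

rows, cols = linear_sum_assignment(Q, maximize=True)   # max-weight matching
print("queues:\n", Q)
print("serve input->output pairs:", list(zip(rows, cols)))
Q[rows, cols] = np.maximum(Q[rows, cols] - 1, 0)       # one packet per matched pair
```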
of neoclassical furniture in their entirety, which are permissively licensed[^1]. The images are either historical pictorial material or photographs from the modern as-built documentation. They are split into different classes.

Combined Corpus
---------------

The nature of the experiment is *proof-of-concept*, so we used a VGG-like architecture of a “simple” convolutional neural network[^2]. The independence and robustness of the train/test split are guaranteed with 5-fold cross-validation on the full corpus, from which are used for training and for testing. The remaining are treated as an evaluation set.

![Class distribution of the image corpus[]{data-label="fig:class_distribution"}](class_distribution){height=".45\textheight"}

The dataset is unbalanced, as can be seen in Fig. \[fig:class\_distribution\]. During training, more prominent classes are weighted down and underrepresented classes are given a higher weight [@johnson_SurveyDeepLearningClassImbalance_2019 p. 27]. Apart from dropout during training for regularization, *Early Stopping* was used to prevent overfitting.

Results
=======

Results for CNN trained on Word Vectors
---------------------------------------

The Top-5 true-positive rate is , meaning that the gold label from the annotations is in
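A minimal sketch (ours, not the authors' code) of the two countermeasures named above, inverse-frequency class weights and early stopping; the labels below are synthetic stand-ins for the real corpus, and the commented `fit` call shows where they would plug into a Keras model.

```python
# A minimal sketch (our illustration, not the authors' code): inverse-frequency
# class weights plus early stopping for an imbalanced image corpus.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
y_train = rng.choice(5, size=1000, p=[0.5, 0.2, 0.15, 0.1, 0.05])  # imbalanced

counts = np.bincount(y_train)
class_weight = {c: len(y_train) / (len(counts) * n) for c, n in enumerate(counts)}
print(class_weight)                      # rare classes get weights > 1

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           class_weight=class_weight, callbacks=[early_stop])
```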
policies, where an incoming task is assigned to a server with the shortest queue among $d \geq 2$ servers selected uniformly at random. Note that this involves the exchange of $2d$ messages per task, irrespective of the number of servers $N$. Results in Mitzenmacher [@Mitzenmacher01] and Vvedenskaya [*et al.*]{} [@VDK96] indicate that even sampling as few as $d = 2$ servers yields significant performance enhancements over purely random assignment ($d = 1$) as $N$ grows large, which is commonly referred to as the “power-of-two” or “power-of-choice” effect. Specifically, when tasks arrive at rate $\lambda N$, the queue length distribution at each individual server exhibits super-exponential decay for any fixed $\lambda < 1$ as $N$ grows large, compared to exponential decay for purely random assignment. As illustrated by the above, the diversity parameter $d$ induces a fundamental trade-off between the amount of communication overhead and the delay performance. Specifically, a random assignment policy does not entail
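The effect is easy to see even in a toy simulation of our own devising (a caricature with one Bernoulli arrival and one uniformly chosen service attempt per slot, rather than the Poisson dynamics of the cited analyses):

```python
# A toy sketch (our caricature, not the model of the cited papers): N queues,
# one Bernoulli(lam) arrival and one random service attempt per time slot.
# Joining the shortest of d sampled queues (d = 2) beats a random queue (d = 1).
import numpy as np

def mean_queue(d, N=50, lam=0.9, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(N, dtype=int)
    for _ in range(steps):
        if rng.random() < lam:                  # arrival joins shortest of d picks
            picks = rng.integers(0, N, size=d)
            q[picks[np.argmin(q[picks])]] += 1
        s = rng.integers(0, N)                  # one service attempt per slot
        if q[s] > 0:
            q[s] -= 1
    return q.mean()

for d in (1, 2):
    print(f"d = {d}: mean queue length ~ {mean_queue(d):.2f}")
```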
cutoff must be lowered for the constituent approximation to work. If the cutoff is too large, atomic states must explicitly include photons. After the cutoff is lowered to a value that can be self-consistently determined [*a-posteriori*]{}, photons are removed from the states and replaced by the Coulomb interaction and relativistic corrections. The cutoff cannot be lowered too far using a perturbative renormalization group, hence the window. Thus, if we remove high energy degrees of freedom, or coupling to high energy degrees of freedom, we should encounter self-energy shifts leading to effective one-body operators, vertex corrections leading to effective vertices, and exchange effects leading to explicit many-body interactions not found in the canonical Hamiltonian. We naively expect these operators to be local when acting on low energy states, because simple uncertainty principle arguments indicate that high energy virtual particles cannot propagate very far. Unfortunately this expectation is indeed naive, and at best
SDSS-III BOSS. The emission-line galaxy sample is the largest set, 18M, covering $0.6<z<1.6$ and providing the majority of the distance scale precision. 2.4M quasars selected from their WISE excess will extend the map. Importantly, these will yield Lyman $\alpha$ forest measurements along 600K lines-of-sight, from which we will measure the acoustic oscillations at $z>2$. In bright time, DESI will conduct a flux-limited survey of 10M galaxies to $r\approx19.5$, with a median redshift around 0.2. This will allow dense sampling of a volume over 10 times that of the SDSS MAIN and 2dF GRS surveys, which we expect will spur development of cosmological probes of the non-linear regime of structure formation. In addition to extragalactic targets, DESI will observe many millions of stars. About 10M stars at $16<G<19$ will fill unused fibers in the bright time program, and we will conduct a backup program of brighter stars when observing conditions (clouds, moon, and/or
manageable complexity around its center to obtain, with a user-defined probability, a region that is included in the chance constrained set. In this paper, we extend the aforementioned approach, showing how it is possible to reduce the sample complexity via probabilistic scaling by exploiting so-called simple approximating sets (SAS). The starting point consists in obtaining a first simple approximation of the “shape” of the probabilistic set. To design a candidate SAS, we propose two possibilities. The first one is based on the definition of an approximating set obtained by drawing a fixed number of samples. The second one envisions the use of $\ell_p$-norm based sets, first proposed in [@dabbene2010complexity]. In particular, we consider as SAS an $\ell_1$-norm *cross-polytope* and an $\ell_\infty$-norm *hyper-cube*. Solving a standard optimization problem, it is possible to obtain the center and the shape of the SAS, which will be later scaled to obtain the
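To fix ideas, the sketch below (our simplification, not the paper's algorithm) scales an $\ell_\infty$-norm hyper-cube candidate SAS around a given center: for each sampled constraint one computes the largest admissible half-width, and the scaling factor is taken as an order statistic of these values; the calibration of the discard level against the probability guarantee follows the paper and is not reproduced here.

```python
# A minimal sketch (our illustration): sample-based scaling of a hyper-cube SAS
# for hypothetical uncertain linear constraints a(w)^T x <= 1.
import numpy as np

rng = np.random.default_rng(2)
x_c = np.zeros(2)                         # candidate SAS center (given)
A = rng.normal(size=(1000, 2))            # sampled constraint directions a(w)

# Largest gamma so that the cube {x : |x - x_c|_inf <= gamma} satisfies
# a^T x <= 1: max over the cube of a^T x is a^T x_c + gamma * |a|_1.
gammas = (1 - A @ x_c) / np.abs(A).sum(axis=1)
r = 25                                    # discard level (illustrative)
gamma = np.sort(gammas)[r]                # probabilistic scaling factor
print(f"scaled hyper-cube half-width: {gamma:.3f}")
```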
for coronal measurements [@linetal2004]. Scattering polarization is another prime candidate for photon-starved observations. Because of its low amplitudes, usually only small fractions of a percent of the continuum intensity, it is notoriously difficult to observe [@stenflokeller1997]. With integration times of several minutes and averaging over most of the length of the spectrograph slit (a few dozen arcsec), one obtains a puzzling result: it seems as if the field strength of the turbulent magnetic field depends on the spectral line that was observed. For example, measurements in the atomic Sr I line consistently give higher magnetic field values, on the order of 100 G [@trujillobuenoetal2006], than molecular lines such as CN ($\sim 80$ G) or C$_2$ ($\sim 10$ G) [@shapiroetal2011; @kleintetal2011b]. An explanation based on modeling was proposed by @trujillobueno2003, suggesting that Sr I may be formed in intergranular lanes, where the magnetic field is stronger, and C$_2$ in granules, where the field
flavor symmetry. Then, the superconformal $u(1)_R$ symmetry can be identified with the compact $SO(2) \subset SO(2)\times SO(3)$.[^1] We consider the following twisted index of the $\CT_N[\Sigma_g]$ theory, defined on a closed hyperbolic 3-manifold $M_3 = \mathbb{H}^3/\Gamma$: $$\begin{aligned} \begin{split} \mathcal{I}_{M_3} (\CT_{N}[\Sigma_g]) :=& \mathcal{Z}_{BPS}(\textrm{4d $\CT_{N}[\Sigma_g]$ on $S^1\times M_3$}) \\ =& \textrm{Tr} (-1)^R\; \end{split}\end{aligned}$$ Here, the trace is taken over the Hilbert space of $\CT_N[\Sigma_g]$ on $M_3$. $\mathcal{Z}_{BPS}(\textrm{$\CT$ on $\mathbb{B}$})$ denotes the partition function of a theory $\CT$ on a supersymmetric background $\mathbb{B}$, while $R$ denotes the charge of the IR superconformal $u(1)_R$ R-symmetry, which is normalized as $$\begin{aligned} R (Q) = \pm 1\;, \quad \textrm{for supercharge $Q$}\;.\end{aligned}$$ Note that the topological twisting is performed along $M_3$ using the $su(2)_R$ symmetry of the 4d SCFT, to preserve some supercharges. The twisted index can be defined for arbitrary 4d $\CN=2$ SCFTs using the $su(2)_R \times u(1)_R$ R-symmetry. For general 4d $\CN=2$ SCFTs, the charge $R$ is not integer valued and thus the index is
a first application, the method is applied to the nucleation and growth of voids with a two-component distribution of cluster compositions, examining the evolution of helium-vacancy clusters [@COGHLAN:1983], while continuing to treat oxygen adsorption by simply reducing the cavity surface energy by a constant (temperature-independent) factor. The method predicts realistic swelling behavior for ferritic steel in reactor environments. As before, the void distribution function is partitioned into overlapping regions [@Surh:2004], treating small clusters with the Master Equation (ME domain) and large ones with Monte Carlo methods (MC domain). This allows self-consistent evolution of the full void population with no truncation of the size domain, no assumptions as to the critical void size or the nature of the nucleation process, and no approximations for the overall nucleation rate or duration of the nucleation stage. Monomer concentrations are included in the ME region, where they may either be treated separately by a
shock formation and, as we discuss later, shock decay. However, once $T$ falls below the electroweak scale, the Higgs field gains a vev $v$ and the neutrino mean free path grows as $\sim v^4/T^5$, exceeding $10^{-4} H^{-1}$ when $T$ falls below $\sim 1$ GeV for $\epsilon=10^{-4}$ or $\sim 100$ MeV for $\epsilon=10^{-1}$. At lower temperatures, acoustic waves are damped away by neutrino scattering before they steepen into shocks. This Letter is devoted to the early, radiation-dominated epoch in which shocks form. We assume standard, adiabatic, growing mode perturbations. Their evolution is shown in Fig. \[modesfig\]: as a mode crosses the Hubble radius, the fluid starts to oscillate as a standing wave, and the associated metric perturbations decay. Thereafter, the fluid evolves as if it is in an unperturbed FLRW background. The tracelessness of the stress-energy tensor means that the evolution of the fluid is identical, up to a Weyl rescaling,
Most of these systems allow for partial knowledge in their representation of ground facts, i.e., some of the features in the training data may be missing for some of the instances of the training data. The inability of feature vectors to represent entities and relations between them has led to work in embeddings, which try to represent entities and relations in a language that is more friendly to learning systems. However, as we note below, these embedding based representations leave out an important feature of classical logic based representations — a feature we argue is very important. We first review embedding based representations and show how they are incapable of capturing partial information.

Embeddings {#embeddings .unnumbered}
----------

Recent work on distributed representations \[[@socher], [@manning], [@bordes], [@bordes2014semantic], [@quoc]\] has explored the use of embeddings as a representation tool. These approaches typically ‘learn an embedding’, which maps terms and statements in a knowledge base (such as Freebase [@freebase])
null as well as $\sigma$-porosity. (All $\sigma$-porous sets are meager. See [@z2] for more on $\sigma$-porosity.) Recently, Elekes and Steprāns investigated some deep foundational properties of Haar null sets and analogous problems concerning meager sets. We refer the interested reader to their paper [@es]. In this note we introduce what we call Haar meager sets. The purpose of this definition is to have a topological notion of smallness which is analogous to Christensen’s measure theoretic notion of smallness. We show that Haar meager sets form a $\sigma$-ideal. We show every Haar meager set is meager. Example \[ex1\] shows that some type of definability is necessary in the definition of Haar null to obtain this result. We next show that in locally compact groups the notions Haar meagerness and meagerness coincide. Using a result of Solecki, we observe that every non locally compact Polish group admits a closed nowhere dense set which
\%$ of known single planet systems show evidence for additional companions. Furthermore, [@marcy2005] showed that the distribution of observed planets rises steeply towards smaller masses. The analyses of [@wright2007] & [@marcy2005] suggest that many systems may have low mass planets[^1]. Therefore, maps of stable regions in known planetary systems can aid observers in their quest to discover more planets in known systems. We consider two definitions of dynamical stability: (1) [*Hill stability*]{}: a system is Hill-stable if the ordering of the planets is conserved, even if the outer-most planet escapes to infinity. (2) [*Lagrange stability*]{}: in this kind of stability, every planet’s motion is bounded, i.e., no planet escapes from the system and exchanges are forbidden. Hill stability for a two-planet, non-resonant system can be described by an analytical expression [@MB1982; @Gladman1993], whereas no analytical criteria are available for Lagrange stability, so we investigate it through numerical simulations. Previous studies by [@BG06;
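As an example of such an analytical expression, a minimal sketch (ours) of the Gladman (1993) criterion for two small planets on initially circular, coplanar orbits: Hill stability requires a semi-major axis separation exceeding roughly $2\sqrt{3}$ mutual Hill radii. The masses and orbits below are illustrative.

```python
# A minimal sketch (our illustration): Gladman's circular-orbit Hill-stability
# criterion, delta_a > 2*sqrt(3) mutual Hill radii, for two small planets.
import numpy as np

def mutual_hill_radius(m1, m2, a1, a2, m_star=1.0):
    return ((m1 + m2) / (3 * m_star))**(1 / 3) * (a1 + a2) / 2

m1 = m2 = 3e-6                   # roughly Earth-mass planets, in solar masses
a1, a2 = 1.0, 1.08               # semi-major axes in AU (illustrative)
delta = (a2 - a1) / mutual_hill_radius(m1, m2, a1, a2)
print(f"separation = {delta:.1f} mutual Hill radii ->",
      "Hill-stable" if delta > 2 * np.sqrt(3) else "possibly unstable")
```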
we discuss the basics of error estimation and the detector noise curves used in this study, and briefly outline the detector orbits in §\[det\_orbit\] (more details are available in the appendix). We report the results of our analysis in §\[result\], and discuss implications of our results and some future directions in §\[sec:impli\].

Non-spinning compact binaries {#sec:non-spin}
=============================

Within general relativity, the post-Newtonian (PN) formalism is used to model the inspiral part of the binary evolution and the gravitational waveform. Physical quantities like the conserved energy, flux, etc., are written as expansions in a small parameter $(v/c)$, where $v$ is the characteristic speed of the binary system and $c$ is the speed of light [@luc]. Corrections of [*O*]{}$((v/c)^n)$ (counting from the leading order) are referred to as $(n/2)$PN-order terms in the standard convention. For non-spinning systems, the GW amplitude is known up to 3PN order, whereas the phase and binary dynamics are
correlation and the resulting consequence for the t-statistic. We will emphasize the differences and challenges that arise when the underlying observations are no longer independent. We will study the numerator and denominator of the t-statistic and derive their underlying distributions. We will in particular prove that it is only in the case of normal noise in the underlying AR(1) process that the numerator and denominator are independent. We will then provide a few approximations for this statistic and conclude.

AR(1) process
=============

The assumption that the underlying process (or observations) $(X_i)_{i=1,\ldots,n}$ follows an AR(1) model is written as: $$\label{AR_assumptions} \left\{ { \begin{array}{l l l l } X_t & = & \mu + \epsilon_t & \quad t \geq 1 ; \\ \epsilon_t & = & \rho \epsilon_{t-1} +
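The practical consequence of the serial correlation is easy to exhibit by simulation; the sketch below (ours, with illustrative sample sizes) applies the usual i.i.d. t-test for the mean to AR(1) data and reports its empirical size, which inflates well beyond the nominal 5% as $\rho$ grows.

```python
# A minimal sketch (our illustration): size distortion of the usual t-test for
# the mean when the data follow the AR(1) model above rather than being i.i.d.
import numpy as np

rng = np.random.default_rng(0)

def rejection_rate(rho, n=100, trials=2_000, crit=1.96):
    hits = 0
    for _ in range(trials):
        eps = np.zeros(n)
        for t in range(1, n):
            eps[t] = rho * eps[t - 1] + rng.normal()
        x = eps                                  # mu = 0, so H0 is true
        tstat = np.sqrt(n) * x.mean() / x.std(ddof=1)
        hits += abs(tstat) > crit
    return hits / trials

for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: empirical size of the nominal 5% test = "
          f"{rejection_rate(rho):.3f}")
```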
\begin{array}{cc} \rho _{k} & 1 \end{array} \right) \,p_{k}\,.$$ The transfer matrix is the trace of the monodromy matrix $t(u)$ \[6\]: $$T(u)=tr\,(t(u)),\,\,t(u)=L_{1}(u)L_{2}(u)...L_{n}(u)\,.$$ It can be verified that $t(u)$ satisfies the Yang-Baxter equation \[5\],\[6\]: $$t_{r_{1}^{\prime }}^{s_{1}}(u)\,t_{r_{2}^{\prime }}^{s_{2}}(v)\,l_{r_{1}r_{2}}^{r_{1}^{\prime }r_{2}^{\prime }}(v-u)=l_{s_{1}^{\prime }s_{2}^{\prime }}^{s_{1}s_{2}}(v-u)\,t_{r_{2}}^{s_{2}^{\prime }}(v)\,t_{r_{1}}^{s_{1}^{\prime }}(u)\,,$$ where $l(w)$ is the $L$-operator for the well-known Heisenberg spin model: $$l_{s_{1}^{\prime }s_{2}^{\prime }}^{s_{1}s_{2}}(w)=w\,\delta _{s_{1}^{\prime }}^{s_{1}}\,\delta _{s_{2}^{\prime }}^{s_{2}}+i\,\delta _{s_{2}^{\prime }}^{s_{1}}\,\delta _{s_{1}^{\prime }}^{s_{2}}\,.$$ The commutativity of $T(u)$ and $T(v)$ $$\lbrack T(u),T(v)]=0$$ is a consequence of the Yang-Baxter equation. If one parametrizes $t(u)$ in the form $$t(u)=\left( \begin{array}{cc} j_{0}(u)+j_{3}(u) & j_{-}(u) \\ j_{+}(u) & j_{0}(u)-j_{3}(u) \end{array} \right) ,$$ this equation reduces to the following Lorentz-covariant relations for the currents $j_{\mu }(u)$: $$\left[ j_{\mu }(u),j_{\nu }(v)\right] =\left[ j_{\mu }(v),j_{\nu }(u)\right] =\frac{i\,\epsilon _{\mu \nu \rho \sigma }}{2(u-v)}\left( j^{\rho }(u)j^{\sigma }(v)-j^{\rho }(v)j^{\sigma }(u)\right) .$$ Here $\epsilon _{\mu \nu \rho \sigma }$ is the antisymmetric tensor ($\epsilon _{1230}=1$) in the four-dimensional Minkowski space and the metric tensor $g^{\mu \nu }$ has the signature ($1,-1,-1,-1$). This form follows from the invariance of the Yang-Baxter equations under Lorentz transformations. The generators
most basic states is a significant challenge. We have had success in this endeavor, which is the theme of the present work. Section \[sec:simulation\] describes the method used to create the lattice ensembles. Section \[sec:operators\] develops a set of creation operators that is able to probe all quantum numbers $I(\Lambda^P)$, where $I$ denotes weak isospin, $P$ is parity, and $\Lambda$ is a lattice representation corresponding to angular momentum. Section \[sec:analysis\] explains how the variational method was used for analysis of the lattice data. Section \[sec:spectrum\] presents the energy spectrum that was obtained from our lattice simulations. Section \[sec:biggerlattice\] examines the effects on the spectrum of increasing the lattice volume. Section \[sec:infiniteHiggs\] reports on a simulation with a much larger Higgs mass, so that changes in the energy spectrum can be observed and understood. Section \[Two-Particle Operators\] describes the construction of two-particle operators and uses them to extend the observed energy spectrum. Concluding remarks are contained in Sec. \[sec:conclusions\].

Simulation Details
fibre of its minimal model is the curve $$\bar{E}/\F_p: y^2=x^3+1.$$ The Galois group $G_{\Q_p}$ acts on $\bar{E}$ by semilinear morphisms, which by Theorem \[main\] are given on $\bar{E}(\bar\F_p)$ by the “lift-act-reduce” procedure. Explicitly, we compute the action of $\sigma \in G_{\Q_p}$ on a point $(x,y)\in\bar{E}(\bar\F_p)$, with lift $(\tilde x,\tilde y)$ to the model of $E$ with good reduction, as $$(x,y) \!\to\! (\tilde x, \tilde y) \!\to\! (\pi^2 \tilde x, p \tilde y) \!\to\! (\sigma (\pi^2\tilde x), p \sigma \tilde y) \!\to\! (\tfrac{\sigma\pi^2}{\pi^2} \sigma \tilde x, \sigma\tilde y) \!\to\! (\zeta^{2\chi(\sigma)}\bar\sigma x,\bar\sigma y),$$ where $\bar\sigma$ is the induced action of $\sigma$ on the residue field and $\frac{\sigma(\pi)}{\pi}\equiv \zeta^{\chi(\sigma)}\mod \pi$. In particular, $\sigma$ in the inertia group of $\mathbb{Q}_p$ acts as the geometric automorphism $(x,y)\mapsto (\zeta^{2\chi(\sigma)}x,y)$ of $\bar{E}$. By Theorem \[main\], $T_l(E)$ with the usual Galois action is isomorphic to $T_l(\bar{E})$ with the action induced by the semilinear automorphisms. In particular, we see that the
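As a toy check (ours, for a hypothetical small prime $p=7$ with $p\equiv 1 \bmod 3$, so that $\F_p$ already contains a primitive cube root of unity, rather than the prime of the example), one can verify that the geometric automorphism $(x,y)\mapsto(\zeta^2 x, y)$ permutes the points of the reduced curve:

```python
# A minimal sketch (our illustration): the automorphism (x, y) -> (zeta^2 x, y)
# of E-bar: y^2 = x^3 + 1 over F_p, for the illustrative prime p = 7.
p = 7
zeta = next(z for z in range(2, p) if pow(z, 3, p) == 1)   # primitive cube root
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x**3 + 1)) % p == 0]

for x, y in points:
    xi, yi = (zeta * zeta * x) % p, y
    assert (yi * yi - (xi**3 + 1)) % p == 0    # the image lies on the curve again
print(f"{len(points)} affine points; the automorphism permutes them.")
```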
force on a sphere turns to the Stokes formula, but at small distances, $h \ll R$, the drag force is inversely proportional to the gap, $F_2/F_{St} \to 9 R/(8 h)$. A consequence of this lubrication effect is that the sphere would never touch the wall in a finite time. The flow in the vicinity of a rough surface should deviate from these predictions. A possible assumption is that the boundary condition at the plane $x_2=r$ should be written as a slip condition [@bonaccurso.e:2003]. To investigate this scenario we suggest presenting the force as the product of Eq. \[firstorder\] and a correction for slip, $$\label{firstorder_slip} \frac{F_3}{F_{St}}\sim \left(1+\frac{9}{8}\frac{R}{h}\right) f^{\ast},$$ where this correction, $f^{\ast}$, is taken to be that predicted for the lubrication force between a no-slip surface and a surface with partial slip [@vinogradova:95]: $$\label{model2} f^{\ast} = \frac{1}{4} \left( 1 + \frac{3 h}{2 b}\left[ \left( 1 + \frac{h}{4 b}
and hydrodynamics automatically eliminates the unphysical region. We conclude in section \[fin-rem\] with some final remarks. In appendix \[app-CP\], our results are extended to establish a precise mapping between possible flows on ultrastatic spacetimes (with constant curvature spatial sections) and the parameter space of the Carter-Plebański solution to Einstein-Maxwell-AdS gravity. The proofs of some propositions are relegated to appendix \[app-proof\]. Note that a related, but slightly different approach was adopted in [@Mukhopadhyay:2013gja], where uncharged fluids in Papapetrou-Randers geometries were considered. In these flows, the fluid velocity coincides with the timelike Killing vector of the spacetime (hence the fluid is at rest in this frame), and the Cotton-York tensor has the form of a perfect fluid (so-called ‘perfect Cotton geometries’). We will see below that there is some overlap between the bulk geometries dual to such flows, constructed explicitly in [@Mukhopadhyay:2013gja], and the solutions obtained here. Throughout this paper we use calligraphic letters ${\cal
these works, due to the well-known large-scale instability existing in the IDE scenario. The cosmological perturbations will blow up on large scales for the $Q\propto \rho_{de}$ model with early-time $w<-1$ or $\beta<0$ [@Clemson:2011an; @He:2008si] and for the $Q\propto \rho_{c}$ model with early-time $w>-1$ [@Valiviita:2008iv]. Thus, to avoid this instability, one has to assume $w>-1$ and $\beta>0$ for the $Q\propto \rho_{de}$ model and $w<-1$ for the $Q\propto \rho_{c}$ model in observational constraint analyses. In practice, the $Q\propto \rho_{c}$ model with $w<-1$ is not favored by researchers, since $w<-1$ will lead to another instability of our Universe in a finite future. Thus, the $Q\propto \rho_{de}$ case with $w>-1$ and $\beta>0$ has become the most widely studied IDE model in the literature. The large-scale instability arises from the way the dark energy pressure perturbation $\delta p_{de}$ is calculated. In the standard linear perturbation theory, dark energy is considered as a nonadiabatic
multiplying atomic states by single-particle positron states with the usual Clebsch-Gordan coupling coefficients: $$\begin{aligned} |\Psi;LS \rangle &=& \sum_{i,j} c_{i,j} \ \langle L_i M_i \ell_j m_j|L M_L \rangle \langle S_i M_{S_i} {\scriptstyle \frac{1}{2}} \mu_j|S M_S \rangle \nonumber \\ &\times& \Phi_i(Atom;L_iS_i) \phi_j({\bf r}_0) \ . \end{aligned}$$ In this expression $\Phi_i(Atom;L_i S_i)$ is an antisymmetric atomic wave function with good $L$ and $S$ quantum numbers. The function $\phi_j({\bf r}_0)$ is a single-positron orbital. The single-particle orbitals are written as a product of a radial function and a spherical harmonic: $$\phi({\bf r}) = P(r) Y_{lm}({\hat {\bf r}}) \ .$$ As the calculations were conducted in a fixed-core model, we used HF calculations of the neutral-atom ground states to construct the core orbitals. These HF orbitals were computed with a program that can represent the radial wave functions as a linear combination
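As a concrete illustration of the angular-momentum coupling entering this expansion (a sketch of ours using SymPy's Clebsch-Gordan routines; the quantum numbers below are arbitrary examples, not values from the calculation):

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# <L_i M_i, l_j m_j | L M_L>: couple an atomic state with L_i = 1, M_i = 0
# to a positron orbital with l_j = 1, m_j = 0, projected onto L = 0, M_L = 0.
orbital = CG(1, 0, 1, 0, 0, 0).doit()

# <S_i M_Si, 1/2 mu_j | S M_S>: couple the atomic spin S_i = 1/2 to the
# positron spin 1/2, projected onto the singlet S = 0, M_S = 0.
spin = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 0, 0).doit()

print(orbital, spin)  # -sqrt(3)/3 and sqrt(2)/2
```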
particle in the momentum representation, where there is no classical solution unless the initial and final momenta are equal, but yet one can (and must) compute the amplitude to go from any initial momentum to any final momentum. To the action (\[1\]) one may add any functional of the quantities held fixed and obtain another action appropriate for the same boundary conditions. In particular one may replace (\[1\]) by $$I= I_{can} + B[^{(D-2)} {\cal G}], \label{2}$$ where $B[^{(D-2)} {\cal G}]$ is any functional of the $(D-2)$-geometry at the origin. If we only look at the wedge $t_1 \leq t \leq t_2$ there is no privileged choice for $B$. However, if we demand that the action we adopt should also be appropriate for the complete spacetime, then $B$ is uniquely fixed. This is because when one deals with the complete spacetime the slices $t=t_1$ and $t=t_2$ are identified, and neither $^{(D-1)} {\cal G}_1$ nor $^{(D-1)}
companion papers describing the spectrum of the infrared diffuse sky; ZL is described in @Tsumura2013a (hereafter Paper I) and EBL is described in @Tsumura2013c (hereafter Paper III), in which the foregrounds described in Paper I and this paper (Paper II) are subtracted.

Data Selection and Reduction {#sec_reduction}
============================

AKARI is the first Japanese infrared astronomical satellite, launched in February 2006 and equipped with a cryogenically cooled telescope of 68.5 cm aperture diameter [@Murakami07]. IRC is one of the two astronomical instruments of AKARI, and it covers the 1.8-5.3 $\mu$m wavelength region with a 512$\times$412 InSb detector array in the NIR channel[^2] [@Onaka07]. It provides low-resolution ($\lambda /\Delta \lambda \sim 20$) slit spectroscopy of the diffuse radiation by a prism[^3] [@Ohyama2007]. One major advantage over the previous IRTS measurement [@Tanaka1996] is the higher angular resolution (1.46 arcseconds) of the AKARI IRC, which allows us to detect and remove fainter point sources, while the IRTS measurement was highly
this test. And finally, the summary is given in Section V.

Observable data and relation between Hubble parameter and growth rate
=====================================================================

The kinematic probe $H(z)$ directly measures the cosmic metric, while dynamical probes measure not only the cosmic metric but also the gravitational law concurrently on cosmological scales. Converging these two kinds of probes indicates that our universe is accelerating in its expansion. In our work, we choose $H(z)$ as the kinematic probe and the growth rate deduced from density fluctuations as the dynamical one to establish the consistency equation.

Kinematic Probes: Observational $H(z)$ Data
-------------------------------------------

The potential of OHD to constrain cosmological parameters and to distinguish between different models has been attracting growing attention in recent years. $H(z)$ can be obtained from model-independent direct observations, and three methods have been developed to measure it so far [@Zhang2010]: the galaxy differential age, radial BAO size, and gravitational-wave standard siren methods. In practice, $H(z)$ is usually utilized as a
$u(t)$ and $v(t)$. This hypothesis requires a small value of $\langle u^2 \rangle^{1/2}/U$. The value in our experiment, 0.05, is small enough. Since $u(t)$ and $v(t)$ are stationary, $u(x)$ and $v(x)$ are homogeneous, although grid turbulence decays along the streamwise direction in the wind tunnel. We are mostly interested in scales up to about the typical scale for energy-containing eddies, which is much less than the tunnel size. Over such scales, fluctuations of $u(x)$ and $v(x)$ correspond to spatial fluctuations that were actually present in the wind tunnel.[@note0] Those over the larger scales do not. They have to be interpreted as fluctuations over long timescales described in terms of large length scales.[@note1] Quantity
of $d \times d$ complex matrices and let $M_d^+$ be the convex set of positive semidefinite elements in $M_d$; that is, $M_d^+$ defines the space of (unnormalized) states of a $d$-level quantum system. For any $\rho \in (M_d {{\,\otimes\,}}M_d)^+$ denote by $\mathrm{SN}(\rho)$ the Schmidt number of $\rho$ [@SN]. This notion enables one to introduce the following family of positive cones: $$\mathrm{V}_r = \{\, \rho \in (M_d {{\,\otimes\,}}M_d)^+\ |\ \mathrm{SN}(\rho) \leq r\, \} \ .$$ One has the following chain of inclusions: $$\label{V-k} \mathrm{V}_1 \subset \ldots \subset \mathrm{V}_d \equiv (M_d {{\,\otimes\,}}M_d)^+\ .$$ Clearly, $\mathrm{V}_1$ is the cone of separable (unnormalized) states and $\mathrm{V}_d \smallsetminus \mathrm{V}_1$ stands for the set of entangled states. Note that a partial transposition $(\oper_d {{\,\otimes\,}}\tau)$ gives rise to another family of cones: $$\mathrm{V}^l = (\oper_d {{\,\otimes\,}}\tau)\mathrm{V}_l \ ,$$ such that $ \mathrm{V}^1 \subset
of accessible, $\theta$-Hölder, codimension 1 bundle, the inclusion “$B_{\Delta}(0,\epsilon) \subset B_{\alpha}(0, K_2\epsilon)$” holds true with $\alpha=1+\theta$ (this inclusion translates as a certain “lower bound” in the way the results are expressed in [@Sim10]). Although this result does not assume any regularity beyond being Hölder (which is the weakest regularity assumption among the works we compare), what is lacking is a criterion for accessibility, and without the lower inclusion one has no qualitative information about the shape or the volume of the sub-Riemannian ball. In [@MonMor12] they prove the full Ball-Box Theorem under certain Lipschitz continuity assumptions for commutators of the vector-fields involved. In particular, working with a collection of vector-fields $\{X_i\}_{i=1}^n$, they say that these vector-fields are completely non-integrable of step $s$ if their Lie derivatives $X_I=[X_{i_1},[\dots,[X_{i_{s-1}},X_{s}]]\cdots]$ up to $s$ iterations are defined and Lipschitz continuous and these $\{X_I\}_{I \in \mathcal{I}}$ span the whole tangent space
to zero, thus explaining why the errors in this scheme will eventually increase as the time step is reduced. Keeping the ratio $\lambda$ constant is desirable because the Crank-Nicolson central difference scheme is second-order consistent in both space and time, and the numerical scheme is more efficient if this property is exploited. The authors extended their analysis to show why the incorporation of a small number of initial fully implicit time steps (i.e., the Rannacher scheme, @LR [@RANNACHER]) eliminates this divergence. In this paper, an alternative method of avoiding the divergence is proposed and analysed. The idea is to introduce a time change into the partial differential equation (PDE) by transforming the original time variable, $t$, to $$\tilde t =\sqrt{t}$$ and solving the equation numerically using the Crank-Nicolson scheme in the new variables. The time change will be applied to the heat equation (\[HE\]) on $\mathbb{R}\times[0,T]$ and can be considered
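A minimal sketch of this idea (our own illustrative discretization with arbitrary grid sizes and initial data, not the paper's scheme): solve $u_t=u_{xx}$ with Crank-Nicolson, stepping uniformly in $\tilde t=\sqrt{t}$. Since $t=\tilde t^2$, the physical steps $\Delta t_n=t_{n+1}-t_n$ are small near $t=0$, where non-smooth initial data cause the trouble, and grow later on.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0, 1], u = 0 at the boundaries,
# stepping uniformly in the transformed time t~ = sqrt(t).
J, N, T = 100, 50, 1.0
x = np.linspace(0.0, 1.0, J + 1)
dx = x[1] - x[0]
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # non-smooth initial data

t_tilde = np.linspace(0.0, np.sqrt(T), N + 1)
t = t_tilde**2                                   # uniform in t~, nonuniform in t

# Second-difference operator on interior nodes (dense for brevity).
A = (np.diag(-2.0 * np.ones(J - 1)) +
     np.diag(np.ones(J - 2), 1) +
     np.diag(np.ones(J - 2), -1)) / dx**2
I = np.eye(J - 1)

for n in range(N):
    dt = t[n + 1] - t[n]                         # physical step grows with n
    lhs = I - 0.5 * dt * A
    rhs = (I + 0.5 * dt * A) @ u[1:-1]
    u[1:-1] = np.linalg.solve(lhs, rhs)
```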
except at the points $t^*+\tau k$, $k\in{\mathbb{Z}}$, with some $t^*\in\left[0,\tau\right]$, in which we have $$0\ne\lim_{t\uparrow t^*}S(t)=S_-\ne S_+=\lim_{t\downarrow t^*}S(t)\ne0.$$ Denote by ${\mathbf{P}}_{\theta}^T$ the distribution (corresponding to the parameter $\theta$) of the observation $X^T$. As $T\to\infty$, the normalized likelihood ratio process of this model, defined by $$Z_T(u)=\frac{d{\mathbf{P}}_{\theta+\frac uT}^T}{d{\mathbf{P}}_{\theta}^T}(X^T)=\exp \biggl\{\int_{0}^{T}\!\!\ln\frac{S_{\theta+\frac uT}(t)}{S_{\theta}(t)}\,dX(t) -\int_{0}^{T}\!\!\left[S_{\theta+\frac uT}(t)-S_\theta(t)\right]\!dt\biggr\},$$ converges weakly in the space $\mathcal{D}_0(-\infty ,+\infty)$ to the process $Z_{\tau,S_-,S_+}$ on ${\mathbb{R}}$ defined by $$\ln Z_{\tau,S_-,S_+}=\begin{cases} \vphantom{\bigg)}\ln\bigl(\frac{S_+}{S_-}\bigr)\,\Pi_{S_-}\bigl(\frac u\tau\bigr)-(S_+-S_-)\,\frac u\tau\,, &\text{if } u{\geqslant}0,\\ \vphantom{\bigg)}-\ln\bigl(\frac{S_+}{S_-}\bigr)\,\Pi_{S_+}\bigl(-\frac u\tau\bigr)-(S_+-S_-)\,\frac u\tau\,, &\text{if } u{\leqslant}0,\\ \end{cases}$$ where $\Pi_{S_-}$ and $\Pi_{S_+}$ are two independent Poisson processes on ${\mathbb{R}}_+$ with intensities $S_-$ and $S_+$ respectively. The limiting distributions of the Bayesian estimators and of the maximum likelihood estimator are given by $$\zeta_{\tau,S_-,S_+}=\frac{\int_{{\mathbb{R}}}u\,Z_{\tau,S_-,S_+}(u)\;du} {\int_{{\mathbb{R}}}\,Z_{\tau,S_-,S_+}(u)\;du}\quad\text{and}\quad \xi_{\tau,S_-,S_+}={\mathop{\rm argsup}\limits}_{u\in{\mathbb{R}}}Z_{\tau,S_-,S_+}(u)$$ respectively. The convergence of moments also holds, and the Bayesian estimators are asymptotically efficient. So, ${\mathbf{E}}\zeta_{\tau,S_-,S_+}^2$ and ${\mathbf{E}}\xi_{\tau,S_-,S_+}^2$ are the limiting variances of these estimators, and ${\mathbf{E}}\zeta_{\tau,S_-,S_+}^2/{\mathbf{E}}\xi_{\tau,S_-,S_+}^2$ is the asymptotic efficiency of the maximum likelihood estimator. Now let us note that, up to a linear time
the evolution of community structure over time. Some work that investigated theoretical properties of the DSBM and more generally graphon models assuming that the node assignment problem can be solved exactly includes [@P2016s], while the theoretical performance of spectral clustering for the DSBM was examined in [@B2017] and [@P2017]. The last two studies estimate the edge probability matrices by either directly averaging adjacency matrices observed at different time points, or by employing a kernel-type smoothing procedure, and extract the group memberships of the nodes by using spectral clustering. The objective of this paper is to examine the [*offline estimation*]{} of a single change point under a DSBM network generating mechanism. Specifically, a sequence of networks is generated independently across time through the SBM mechanism, whose community connection probabilities exhibit a change at some time epoch. Then, the problem becomes one of identifying the change point epoch based on the observed
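As a toy version of this offline estimation problem (our own least-squares sketch, not the estimators analyzed in [@B2017] or [@P2017]): fit each candidate segmentation by the segment-wise average adjacency matrix and pick the split epoch with the smallest residual.

```python
import numpy as np

def estimate_change_point(A):
    """A: array of shape (T, n, n), a sequence of adjacency matrices.
    Returns the candidate epoch minimizing the within-segment residual
    when each segment is fit by its average adjacency matrix."""
    T = A.shape[0]
    best_tau, best_cost = None, np.inf
    for tau in range(1, T):                     # split: [0, tau) vs [tau, T)
        left, right = A[:tau].mean(axis=0), A[tau:].mean(axis=0)
        cost = (((A[:tau] - left) ** 2).sum() +
                ((A[tau:] - right) ** 2).sum())
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Toy usage: two regimes of different edge density with a change at t = 30
# (hypothetical sizes; symmetry of the adjacency matrices ignored for brevity).
rng = np.random.default_rng(0)
P1, P2 = np.full((40, 40), 0.1), np.full((40, 40), 0.3)
A = np.array([rng.binomial(1, P1 if t < 30 else P2) for t in range(60)])
print(estimate_change_point(A))                 # typically prints 30
```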
paper proceeds as follows. Section \[sec:background\] provides an overview of data compression in signal processing with relevance to the approach presented here. Section \[sec:approach\] describes the components of our proposed approach in detail. Experimental results and analysis are presented in Section \[sec:experiment\], with Section \[sec:conclusions\] discussing the findings and future areas of research.

Background {#sec:background}
==========

Data compression, which encodes data into compact representations by exploiting their perceptual and statistical properties to support efficient analysis, has a broad range of applications in signal processing. In image processing, for example, the discrete cosine transform can compress a bitmap (BMP) image into the JPEG format with tolerable information loss but a much smaller data size. In computer vision, images and videos are usually represented as high-dimensional visual feature vectors. The goal of encoding images into compact codes is to simultaneously reduce the storage cost and accelerate the computation. To
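A small sketch of this transform-coding idea (illustrative only; JPEG's actual pipeline uses $8\times 8$ blocks with a quantization table and entropy coding): keep only the largest-magnitude DCT coefficients of a block and reconstruct.

```python
import numpy as np
from scipy.fft import dctn, idctn  # type-II DCT, as used in JPEG

# Transform-code one 8x8 image block by keeping only the k largest-magnitude
# DCT coefficients (a crude stand-in for JPEG quantization).
rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

coeffs = dctn(block, norm="ortho")
k = 10                                   # keep 10 of the 64 coefficients
thresh = np.sort(np.abs(coeffs), axis=None)[-k]
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

recon = idctn(compressed, norm="ortho")
print("kept", k, "coeffs; max abs error:", np.abs(recon - block).max())
```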
uncertainties of different ‘messengers’, or observables, should not cross-correlate, and, under some conditions, statistical noise should also not strongly cross-correlate. This is because different experiments are different machines exploiting different physical effects. However, within a single dataset, for instance the set of arrival directions of UHECRs, the AC of the noise and systematic errors for that set are certainly non-zero, and contribute to hiding any underlying ‘true’ signal. Thus, in this sense the XC is an experimentally cleaner observable. Secondly, in the limit where the UHECR sources are numerous, but UHECR detections themselves are not, we can assume that we observe at most one UHECR per source (as seems to be the case given the lack of obvious UHECR multiplets [@Abreu:2011md]). The much higher number of galaxies leads to a significant improvement in the signal-to-noise ratio of this cross-correlation (see the discussion in section \[sec:results\]). This effectively allows us to probe
section. Under our assumptions, (\[ee1.1\]) has a unique solution $(X_s^{t,x;u},Y_s^{t,x;u},Z_s^{t,x;u})_{s\in[t,T]}$ and the cost functional is defined by $$\label{ee1.2} J(t,x;u)=Y_t^{t,x;u}.$$ We define the value function of our stochastic control problems as follows: $$\label{ee1.3} W(t,x):=\operatorname*{ess\,sup}_{u\in\mathcal{U}_{t,T}}J(t,x;u).$$ The objective of our paper is to investigate this value function. The main results of the paper state that $W$ is deterministic (Proposition 2.1) and a continuous viscosity solution of the associated HJB equation (Theorem 3.1). The associated HJB equation is coupled with an algebraic equation as follows: $$\label{ee1.5} \left\{ \begin{array}{ll} \partial_t W(t,x) + H_V(t, x, W(t,x))=0, & \\ V(t,x,u)=DW(t,x).\sigma(t,x,W(t,x),V(t,x,u),u), & (t,x)\in[0,T)\times{\mathbb{R}}^n,\ u\in U,\\ W(T,x)=\Phi(x), & x\in{\mathbb{R}}^n. \end{array} \right.$$ In this case $$\begin{array}{lll} H_V(t, x, W(t,x))&=& \sup\limits_{u \in U}\{DW.b(t, x, W(t,x), V(t,x,u), u)+\frac{1}{2}tr(\sigma\sigma^{T}(t, x, W(t,x), V(t,x,u),u)D^2W(t,x))\\ && +f(t, x, W(t,x), V(t,x,u), u)\}, \end{array}$$ where $t\in [0, T]$, $x\in{\mathbb{R}}^n$. Our paper is organized as follows: Section 2 introduces the framework of the stochastic control problems. In Section 3, we prove that $W$ is a viscosity solution of the associated HJB equation
in different reflection lines on the detector. By measuring the distance between these lines it is possible to determine the energy difference. The resolution of the spectrometer is of the order of $0.4$ eV at 3 keV.

Extraction of the hadronic shift and width
==========================================

The characteristics of the ground state of pionic hydrogen are evaluated by measuring the X-ray transitions $np \to 1s$ (see fig.\[spectra\]). The line width is the result of the convolution of: the spectrometer resolution, the Doppler broadening effect from the non-zero atom velocity, the natural width of the ground state, and, of course, the hadronic broadening. A very accurate measurement of the response function of the crystal was performed using the $1s2s\,^{3}S_{1}\to1s^{2}\,^{1}S_{0}$ M1 transitions in He-like argon (with a natural line width of less than 1 meV and a Doppler broadening of about 40 meV). For this measurement the cyclotron trap was converted into an Electron Cyclotron Resonance Ion Trap (ECRIT) [@anagnostopoulos2003], with the crucial point that the
generalization of the Monte-Carlo cascade code developed in [@rb10] to arbitrary radiation fields. In particular, we will focus on thermal blackbody radiation fields, representative of the emission from a circum-nuclear dust torus. In Section \[setup\] we will outline the general model setup and assumptions and describe the modified Monte-Carlo code that treats the full three-dimensional cascade development. Numerical results for generic parameters will be presented in Section \[parameterstudy\]. In Section \[CenA\], we will demonstrate that the broad-band SED of the radio galaxy Cen A, including the recent [*Fermi*]{} $\gamma$-ray data [@abdo09c], can be modeled with plausible parameters expected for a mis-aligned blazar, allowing for a contribution from VHE $\gamma$-ray induced cascades in the [*Fermi*]{} energy range. We summarize in Section \[summary\].

![\[geometry\]Geometry of the model setup.](f1.eps){width="15cm"}

\[setup\]Model Setup and Code Description
=========================================

Figure \[geometry\] illustrates the geometrical setup of our model system. We represent the primary VHE $\gamma$-ray emission as a mono-directional beam of $\gamma$-rays
upstream regulatory region of the genetic program it is attached to, while the atoms can be seen as different binding modules within the regulatory region. Each gene is initially active (activation level $\theta=1$) and then each condition atom acts one after another on $\theta$, modifying it in the $[0,1]$ range, or totally suppressing it ($\theta=0$). Table \[cond1\] shows all the possible condition atoms and how they act on the gene expression level $\theta$ passed on to them. Once a gene activation value $\theta$ has been reached, each of the gene’s expression atoms is executed. Expression atoms can carry out simple actions such as producing a specific protein, or they can emulate complex actions such as cell division and axon growth. Table \[expr1\] contains a complete list of expression atoms used in Norgev. A more complete description of the Norgev model and its evolution operators (mutation and crossover) can be found in
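Schematically, the activation loop just described can be rendered as follows (our own sketch; the two condition atoms are invented placeholders, not entries of Table \[cond1\]):

```python
# Sketch of the gene-activation pipeline described above. The concrete
# condition atoms (halve, suppress-if-low) are hypothetical placeholders.
def express_gene(condition_atoms, expression_atoms):
    theta = 1.0                         # each gene starts fully active
    for atom in condition_atoms:        # atoms act one after another on theta
        theta = atom(theta)             # result stays in [0, 1] (or exactly 0)
        if theta == 0.0:
            return                      # gene totally suppressed
    for atom in expression_atoms:       # e.g. produce a protein, divide, ...
        atom(theta)

halve = lambda theta: 0.5 * theta
suppress_if_low = lambda theta: 0.0 if theta < 0.3 else theta
express_gene([halve, suppress_if_low],
             [lambda theta: print(f"produce protein at level {theta:.2f}")])
```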
bottom-superlattice where the SC fingers are below (a) and the top-superlattice where they are on top of the nanowire (b). The nanowire is depicted in green, the SC superlattice in grey, the dielectric substrate in purple and the back gate in black. We choose the $x$-axis along the nanowire and the $z$-axis as the direction perpendicular to the back gate’s surface. Different materials have different dielectric constants and dimensions. $V_{\rm SC}$ is the wire’s conduction band offset to the metal Fermi level at the interface with the SC fingers, $\rho_{\rm surf}$ is the positive surface charge at the rest of the wire’s facets and $V_{\rm gate}$ is the back gate’s voltage. (c) and (d): examples of the self-consistent solution of the Poisson-Schrödinger equations in the Thomas-Fermi approximation. The electrostatic potential energy profile (in red) and the charge density profile (in blue) are shown along the wire ($x$-direction at $z=30$nm)
two key ingredients of the proof, and they are formulated in Lemmas \[l:minLambdaChi\] and \[l:muBound\] below. Given these lemmas, the main result of this paper (Theorem \[t:hker\]) shows that $$\hat H(t, x, y) \approx \frac{C'_\mathcal I(x, y)}{t^{k/2}} \exp \Big( -\mu_0 t - \frac{d'_\mathcal I(x, y)^2 }{t} \Big)\,, \quad\text{as } t \to \infty\,.$$ Here $k$ is the rank of the deck transformation group, and $C'_\mathcal I$, $d'_\mathcal I$ are explicitly defined functions.

The Abelianized Winding of Brownian Motion on Manifolds
-------------------------------------------------------

We now turn our attention to studying the winding of Brownian trajectories on manifolds. The long-time asymptotics of Brownian winding numbers is a classical topic which has been investigated in depth. The first result in this direction is due to Spitzer [@Spitzer58], who considered a Brownian motion in the punctured plane. If $\theta(t)$ denotes the total winding angle up to time $t$, then Spitzer showed $$\frac{2\theta(t)}{\log t} \xrightarrow[t\to \infty]{w} \xi \,,$$ where $\xi$ is a standard
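Spitzer's law is easy to probe numerically (an illustrative simulation of ours, with arbitrary step size and horizon; a discrete walk misses windings acquired very close to the origin, so the match is only approximate at finite $t$):

```python
import numpy as np

# Monte Carlo check of Spitzer's law: 2*theta(t)/log(t) -> standard Cauchy.
rng = np.random.default_rng(2)
n_paths, n_steps, dt = 500, 100_000, 1e-3
t = n_steps * dt

samples = np.empty(n_paths)
for i in range(n_paths):
    z = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_steps, 2)), axis=0)
    z[:, 0] += 1.0                          # start near (1, 0), away from 0
    ang = np.unwrap(np.arctan2(z[:, 1], z[:, 0]))
    samples[i] = 2.0 * (ang[-1] - ang[0]) / np.log(t)   # unwound winding angle

# Compare a tail probability with the standard Cauchy value.
print("P(|X| > 1):", np.mean(np.abs(samples) > 1.0), "(Cauchy: 0.5)")
```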
convergence rate of $O(\frac{1}{k})$. Since the inception of Nesterov’s work, there has been a body of work done on the theoretical development of first-order accelerated methods (for a detailed discussion see [@nesterov:2005], [@nesterov:2013] and [@nesterov:2014]). Furthermore, a unified summary of all of the methods of Nesterov can be found in [@tseng:2010]. Recently, Su *et al.* [@su:2016] carried out a theoretical analysis of the methods of Nesterov and showed that they can be interpreted as a finite difference approximation of a second-order *Ordinary Differential Equation* (ODE).

**Motivation & Contribution** {#subsec:motiv}
-----------------------------

We have seen from the literature that Nesterov’s restarting scheme is very successful in achieving faster convergence for *Gradient Descent* algorithms. However, to the best of our knowledge, the potential of Nesterov’s acceleration has not yet been explored for IPMs for solving LP problems. Motivated by the power of acceleration and to fill this research gap, as a first attempt, in this
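For reference, a generic form of Nesterov's accelerated gradient method (a textbook sketch on a toy quadratic, not the IPM algorithm developed in this paper):

```python
import numpy as np

def nesterov_agd(grad, x0, step, n_iters):
    """Generic Nesterov accelerated gradient descent; achieves the O(1/k^2)
    rate on smooth convex objectives (plain gradient descent is O(1/k))."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        x_next = y - step * grad(y)              # gradient step at lookahead
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum
        x, t = x_next, t_next
    return x

# Toy quadratic: f(x) = 0.5 x^T A x - b^T x, step = 1/L with L = ||A||_2.
A = np.diag([1.0, 10.0, 100.0]); b = np.ones(3)
grad = lambda x: A @ x - b
print(nesterov_agd(grad, np.zeros(3), 1.0 / 100.0, 500))  # ~ A^{-1} b
```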
for each edge of $H$. Two vertices of $f(H)$, representing directed edges $e_1$ from $v_1$ to $v_2$ and $e_2$ from $v_3$ to $v_4$ in $H$, are connected by an edge from $e_1$ to $e_2$ in $f(H)$ when $v_2 = v_3$. That is, each edge in the line digraph of $H$ represents a length-two directed path in $H$. Let $\mathcal{U}$ be a factor language. A path $p$ of length $m$ in $R_n(\mathcal{U})$ corresponds to a word of length $n + m - 1$. The graph $R_m (\mathcal{U})$ can be considered as a subgraph of $f^{m-n}(R_n(\mathcal{U}))$. Moreover, the graph $R_{n+1}(\mathcal{U})$ is obtained from $f(R_{n}(\mathcal{U}))$ by deleting the edges that correspond to obstructions of $\mathcal{U}$ of length $n+1$. We call a vertex $v$ of a directed graph $H$ [*a fork*]{} if $v$ has out-degree more than one. Further, we assume that all forks have out-degree exactly 2 (this is the case for a binary alphabet). For a
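The line-digraph construction is straightforward to express in code (a small sketch of ours, with edges encoded as (tail, head) pairs):

```python
def line_digraph(edges):
    """Line digraph f(H): one vertex per directed edge of H, with an edge
    e1 -> e2 whenever the head of e1 equals the tail of e2, i.e. whenever
    e1 followed by e2 is a directed path of length two in H."""
    return [(e1, e2) for e1 in edges for e2 in edges if e1[1] == e2[0]]

# H: a directed triangle 0 -> 1 -> 2 -> 0.
H = [(0, 1), (1, 2), (2, 0)]
print(line_digraph(H))  # [((0,1),(1,2)), ((1,2),(2,0)), ((2,0),(0,1))]
```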
efficient planning of maintenance. Our goal is to define and implement a system to automatically assess the obstruction of sewers from videos. This system has to provide a status report on the volume of dirt or sedimentation in the pipes, to justify the cleaning needs. The deployment of this system in production will enable a more productive use of human resources and will provide a unified model for guiding cleaning operations.

State of the Art / Related Work
===============================

The use of computer vision techniques in civil engineering applications has grown exponentially, as visual inspections are necessary to maintain the safety and functionality of basic infrastructures. To mitigate the costs derived from the manual interpretation of images or videos, diverse studies explore the use of computer vision techniques. Methods like feature extraction, edge detection, image segmentation, and object recognition have been considered to assess the condition of bridges, asphalt pavement, tunnels, underground concrete pipes, [*etc.*]{} [@abdel2003analysis;
of the specific ionization energy loss ($dE/dx$) in the main drift chamber and the TOF are combined and used for particle identification (PID) by forming confidence levels for the pion and kaon hypotheses ($CL_\pi$, $CL_K$). Kaon (pion) candidates are required to satisfy $CL_{K(\pi)}>CL_{\pi(K)}$. To select $K_S^0$ candidates, pairs of oppositely charged tracks with distances of closest approach to the IP less than 20 cm along the $z$ axis are treated as $\pi^+\pi^-$ without PID requirements. These $\pi^+\pi^-$ combinations are required to have an invariant mass within $\pm12$ MeV of the nominal $K_S^0$ mass [@PDG2017], and the decay length of the reconstructed $K_S^0$ must be larger than $2\sigma$ of the vertex resolution away from the IP. The $\pi^0$ and $\eta$ mesons are reconstructed via $\gamma\gamma$ decays. Each electromagnetic shower is required to start within 700 ns of the event start time and to have an energy greater than 25 (50) MeV in the barrel (endcap) region of the electromagnetic calorimeter (EMC) [@BESCol]. The opening angle between
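Purely as an illustration of the selection logic just listed, the cuts can be sketched as follows; the record fields are hypothetical stand-ins rather than the BESIII offline-software API, and only the numerical values quoted in the text are used.

```python
M_KS = 0.497611  # GeV; nominal K_S^0 mass (PDG value, assumed here)

def is_kaon(track):
    # PID: the kaon hypothesis must be more likely than the pion one.
    return track.cl_K > track.cl_pi

def is_ks_candidate(pair):
    # pi+pi- pair built without PID requirements: the invariant mass
    # must lie within +-12 MeV of the nominal K_S^0 mass, and the decay
    # length must exceed 2 sigma of the vertex resolution from the IP.
    return (abs(pair.m_pipi - M_KS) < 0.012
            and pair.decay_length > 2.0 * pair.decay_length_sigma)
```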
we recall the known results on Rudakov-Shafarevich's theory of derivations, lattices and Enriques surfaces. In Section \[sec3\], we give a construction of a one-dimensional family of classical and supersingular Enriques surfaces with the dual graph of type $\VII$. Moreover, we show the non-existence of singular Enriques surfaces with the dual graph of type ${\rm VII}$ (Theorem \[non-existVII\]). In Section \[sec4\], we discuss the other cases, that is, the existence of singular Enriques surfaces of types $\I, \II, \VI$ and the non-existence of the other cases (Theorems \[Ithm\], \[non-existI\], \[IIthm\], \[non-existII\], \[VIthm\], \[non-existVI\], \[non-existIII\]). In Appendices A and B, we give two remarks. In Appendix A, we show that the covering $K3$ surface of any singular Enriques surface has height $1$. In Appendix B, we show that for each singular Enriques surface with the dual graph of type ${\rm I}$, its canonical cover is isomorphic to the Kummer surface of the product
it was conjectured that the face-centered cubic (FCC) structure with $P=8$ [@FCC] is the preferred one, since its $\gamma$ is negative and has the largest magnitude [@Bowers2002]. For the BCC structure, the GL analysis up to order $O(\Delta^6)$ predicts a strong first-order phase transition at $\delta\mu_*\simeq3.6\Delta_0$ with the gap parameter $\Delta\simeq0.8\Delta_0$ [@Bowers2002]. The prediction of a strong first-order phase transition may invalidate the GL approach itself. On the other hand, by using the quasiclassical equation approach with a Fourier expansion for the order parameter, Combescot and Mora [@Combescot2004; @Combescot2005] predicted that the BCC-normal transition is of rather weak first order: the upper critical field $\delta\mu_*$ is only about $4\%$ higher than $\delta\mu_2$, with $\Delta\simeq0.1\Delta_0$ at $\delta\mu=\delta\mu_*$. If this result is reliable, it indicates that the higher-order expansions in the GL analysis are important for quantitative predictions. To understand this intuitively, let us simply add the eighth-order term $\frac{\eta}{4}\Delta^8$ to the GL potential (\[GL\]). A detailed analysis
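For orientation, with a conventional normalization of the lower-order coefficients (the potential (\[GL\]) itself is not reproduced in this excerpt, so the form below is an assumption rather than a quotation), the extended potential reads
$$\Omega(\Delta)\simeq\alpha\,\Delta^2+\frac{\beta}{2}\,\Delta^4+\frac{\gamma}{3}\,\Delta^6+\frac{\eta}{4}\,\Delta^8\ .$$
When $\gamma<0$, a positive $\eta$ is needed to keep $\Omega$ bounded from below, and the strength of the first-order transition is then set by the competition between the negative sextic term and the positive octic term, so the transition can come out much weaker than the $O(\Delta^6)$ truncation suggests.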
the information about Vega-like star candidates. This work has been supported (in part) by the Polish Astroparticle Physics Network. AP was financed by the research grant of the Polish Ministry of Science PBZ/MNiSW/07/2006/34A. TTT has been supported by the Program for Improvement of Research Environment for Young Researchers from Special Coordination Funds for Promoting Science and Technology, and by the Grant-in-Aid for the Scientific Research Fund (20740105) commissioned by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. TTT has also been partially supported by the Grant-in-Aid for the Global COE Program “Quest for Fundamental Principles in the Universe: from Particles to the Solar System and the Cosmos” from MEXT. This research has made use of the NASA/IPAC Extragalactic Database (NED), operated by the Jet Propulsion Laboratory at Caltech under contract with NASA, and of the SIMBAD database, operated at CDS, Strasbourg, France. Cannon, J. M., et al. 2006, ,
hand, many real networks are embedded in three-dimensional space; these are called spatial networks [@hbp]. In particular, magnetic systems obviously belong to this class. Therefore, the aim of this work is to calculate the phase transition temperature $T_{SG}$ again for spatial networks. As in our previous texts [@amk1; @amk2], the clustering coefficient $C$ is varied so as to investigate the influence of the density of frustrations on $T_{SG}$.\
In the next section we describe the calculation scheme, including the details of the control of the clustering coefficient. The third section is devoted to our numerical results: the thermal dependences of the magnetic susceptibility $\chi(T)$ and of the specific heat $C_v(T)$. Final conclusions are given in the last section.

Calculations
============

The spatial network is constructed as follows. The three coordinates of the position of each node are drawn randomly from the homogeneous distribution between 0 and 1. Two nodes are linked if their
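A minimal NumPy sketch of this construction follows. Since the linking rule is cut off above, the sketch assumes the standard rule for random geometric networks, namely that two nodes are linked when their Euclidean distance is below a threshold $r$; both the rule and the threshold value are our assumptions, not the authors'.

```python
import numpy as np

def spatial_network(n_nodes, r, seed=0):
    # n_nodes points, each with three coordinates drawn uniformly
    # from the homogeneous distribution on [0, 1].
    rng = np.random.default_rng(seed)
    pos = rng.random((n_nodes, 3))
    # ASSUMED linking rule (the excerpt breaks off mid-sentence):
    # connect two nodes when their Euclidean distance is below r.
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = (dist < r) & ~np.eye(n_nodes, dtype=bool)
    return pos, adj

pos, adj = spatial_network(1000, r=0.1)
print(adj.sum() // 2, "undirected links")
```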
is well within the reionization epoch of the Universe. We therefore suggest that the formation of the main body of a large-scale structure in the model of Zeldovich pancakes happens after the period of secondary ionization [@ts], [@tsb] in the universe. We consider a contracting flat layer of plasma surrounded by radiation with an equilibrium Planckian spectrum (CMB). When crossing the contracting layer, the photons experience Compton scattering on electrons whose velocities have a thermal (chaotic) component and a directed (bulk motion) component. For sufficiently low-temperature plasma, the bulk-motion comptonization becomes more important than the thermal one. In Section 2 we consider bulk-motion comptonization by the “cold” contracting layer. We solve analytically the Kompaneets equation in a contracting layer, similar to the one for converging flow in [@BP1], [@BW], which is illuminated by an equilibrium radiation flux on its boundary. In Section 3 we compare the spectra of a thermal,
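For reference, the unmodified Kompaneets equation being adapted here (the converging-flow version solved in the text is not reproduced in this excerpt) has the standard form
$$\frac{\partial n}{\partial y}=\frac{1}{x^{2}}\frac{\partial}{\partial x}\left[x^{4}\left(\frac{\partial n}{\partial x}+n+n^{2}\right)\right],$$
where $n$ is the photon occupation number, $x=h\nu/kT_{e}$, and $y$ is the Compton $y$-parameter accumulated along the photon path.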