text: string (lengths 174 to 26.9k)
context: string (lengths 491 to 27.2k)
to their inability to fully exploit the channel’s frequency diversity or due to degraded performance in combating the residual inter-symbol interference. Therefore, it is of paramount importance to investigate the frequency diversity order achieved by linear equalizers, which is the subject of this paper. Our analysis shows that while single-carrier infinite-length symbol-by-symbol minimum mean-square error (MMSE) linear equalization achieves full frequency diversity, zero-forcing (ZF) linear equalizers cannot exploit the frequency diversity provided by frequency-selective channels. A preliminary version of the results of this paper on MMSE linear equalization has appeared in part in \[5\], and the proofs available in [@ali:ISIT07_1] are skipped and referred to wherever necessary. The current paper provides two key contributions beyond \[5\]. First, the diversity analysis of ZF equalizers is added. Second, the MMSE analysis in \[5\] was not rigorously complete and lacked a critical step; the missing parts, which play a key role in analyzing the
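Since the contrast between MMSE and ZF linear equalization above hinges on how each equalizer handles spectral nulls, a minimal numerical sketch may help fix ideas. It is an illustration only, not the paper's analysis; the two-tap channel, SNR, and FFT size are assumptions, and the post-equalization SNR expressions used are the standard infinite-length linear-equalizer formulas.

```python
# Minimal sketch contrasting infinite-length ZF and MMSE linear equalization on a
# frequency-selective channel. The channel taps, SNR, and FFT size are illustrative.
import numpy as np

taps = np.array([0.72, 0.69])         # assumed near-symmetric 2-tap channel (deep notch near f = 1/2)
snr_db = 20.0
N0 = 10.0 ** (-snr_db / 10.0)         # noise PSD for unit symbol energy

H = np.fft.fft(taps, 4096)            # channel frequency response sampled densely
gain2 = np.abs(H) ** 2

# ZF equalizer W = 1/H removes ISI but enhances noise near spectral nulls:
# post-ZF SNR = 1 / (N0 * mean(1/|H|^2)).
snr_zf = 1.0 / (N0 * np.mean(1.0 / gain2))

# MMSE equalizer W = H* / (|H|^2 + N0); its unbiased post-equalization SNR is
# 1/mmse - 1, with mmse = mean(N0 / (|H|^2 + N0)).
mmse = np.mean(N0 / (gain2 + N0))
snr_mmse = 1.0 / mmse - 1.0

print(f"ZF   output SNR: {10*np.log10(snr_zf):6.2f} dB")
print(f"MMSE output SNR: {10*np.log10(snr_mmse):6.2f} dB")
```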
starting events (HESE) sample with 7.5 years of data [@Schneider:2019icrc_hese]. By using events that start in the detector, the measurement is sensitive to both the northern and southern skies, as well as all three flavors of neutrinos, unlike [@Bustamante:2017xuy] which used only a single class of events in the six-year HESE sample.

Analysis method {#sec:method}
===============

Several improvements have been incorporated into the HESE-7.5 analysis chain, and are used in this measurement. These include better detector modeling, a three-topology classifier that corresponds to the three neutrino flavors [@Usner:2018qry], improved atmospheric neutrino background calculation [@Arguelles:2018awr], and a new likelihood treatment that accounts for statistical uncertainties [@Arguelles:2019izp]. The selection cuts have remained unchanged and require the total charge associated with the event to be above with the charge in the outer layer of the detector (veto region) to be below . This rejects almost all of the atmospheric muon background, as well as a fraction of atmospheric neutrinos from the
Since $X$ is smooth, we can identify $N_k(X)$ with the abstract dual $N^{n - k}(X) := N_{n - k}(X)^\vee$ via the intersection pairing $N_k(X) \times N_{n - k}(X) \longrightarrow \mathbb{R}$. For any $k$-dimensional subvariety $Y$ of $X$, let $[Y]$ be its class in $N_k(X)$. A class $\alpha \in N_k(X)$ is said to be effective if there exist subvarieties $Y_1, Y_2, \ldots, Y_m$ and non-negative real numbers $n_1, n_2, \ldots, n_m$ such that $\alpha$ can be written as $\alpha = \sum n_i [Y_i]$. The *pseudo-effective cone* $\overline{\operatorname{{Eff}}}_k(X) \subset N_k(X)$ is the closure of the cone generated by classes of effective cycles. It is full-dimensional and does not contain any nonzero linear subspaces. The pseudo-effective dual classes form a closed cone in $N^k(X)$ which we denote by $\overline{\operatorname{{Eff}}}^k(X)$. For smooth varieties $Y$ and $Z$, a map $f: N^k(Y) \longrightarrow N^k(Z)$ is called pseudo-effective if $f(\overline{\operatorname{{Eff}}}^k(Y)) \subset \overline{\operatorname{{Eff}}}^k(Z)$. The *nef cone* $\operatorname{{Nef}}^k(X) \subset
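For orientation, a minimal example of these cones, added here as an illustration and not taken from the text, is projective space, where each of these spaces is one-dimensional: $$N_k(\mathbb{P}^n) \cong \mathbb{R}\cdot[L_k], \qquad \overline{\operatorname{{Eff}}}_k(\mathbb{P}^n) = \mathbb{R}_{\geq 0}\cdot[L_k],$$ where $L_k \subset \mathbb{P}^n$ is any linear subspace of dimension $k$, so every effective class is a non-negative multiple of $[L_k]$.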
and Yearsley *et al.* [@year] studied some specific cases, clarifying the maximal backflow limit. However, so far no experiments have been performed, and a clear program to carry one out is also missing. Two important challenges are the measurement of the current density (the existing proposals for local and direct measurements are rather idealized schemes [@Jus]), and the preparation of states with a detectable amount of backflow. In this paper we derive a general relation that connects the current and the particle density, allowing for the detection of backflow by a density measurement, and propose a scheme for its observation with Bose-Einstein condensates in harmonic traps, which could be easily implemented with current experimental technologies. In particular, we show that preparing a condensate with positive-momentum components, and then further transferring a positive momentum kick to part of the atoms, remarkably causes, under certain conditions, a current flow in the negative direction.
the impact of some parameters. Such comparisons can be performed by considering various criteria, each of them providing a different insight into the input-output relationship. For example, some criteria are based on distances between probability density functions (e.g. the $L^1$ and $L^2$ norms [@borgo07] or the Kullback-Leibler distance [@LCS06]), while others rely on functionals of conditional moments. Among those, variance-based methods are the most widely used [@salcha00]. They evaluate how the inputs contribute to the output variance through the so-called Sobol sensitivity indices [@SOB93], which naturally emerge from a functional ANOVA decomposition of the output [@hoeff48; @owen94; @anto84]. Interpretation of the indices in this setting makes it possible to exhibit which input or interaction of inputs most influences the variability of the computer code output. This is typically relevant for model calibration [@kenoha01] or model validation [@bayber07].\ Consequently, in order to conduct a sensitivity study, estimation of such sensitivity indices
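Because the Sobol indices just mentioned are what a sensitivity study ultimately estimates, a short Monte Carlo sketch of the standard pick-freeze estimator may be useful. The toy model, sample size, and input distributions below are assumptions added for illustration; only the estimator $S_i = \mathrm{Cov}(Y, Y^i)/\mathrm{Var}(Y)$ itself is standard.

```python
# Pick-freeze estimation of first-order Sobol indices: Y^i is recomputed with input i
# "frozen" to its value in the first sample while the other inputs are resampled.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy linear model: Var(Y) = 1 + 4 + 9 = 14, so S_i = i^2 / 14 analytically
    return 1.0 * x[:, 0] + 2.0 * x[:, 1] + 3.0 * x[:, 2]

n, d = 100_000, 3
A = rng.standard_normal((n, d))       # first independent input sample
B = rng.standard_normal((n, d))       # second independent input sample
yA = model(A)

for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]               # freeze input i, resample the others
    yI = model(ABi)
    S_i = np.cov(yA, yI)[0, 1] / np.var(yA)
    print(f"S_{i+1} ~ {S_i:.3f}  (exact {(i + 1)**2 / 14:.3f})")
```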
argument [@EPR] considers a system of two microscopic particles that are correlated; assuming that various types of measurements are performed on this system in remote locations, and using local realism, it shows that the system contains more elements of reality than those contained in quantum mechanics. Bohr gave a refutation of the argument [@Bohr] by pointing out that intrinsic physical properties should not be attributed to microscopic systems, independently of their measurement apparatuses; in his view of quantum mechanics (often called orthodox), the notion of reality introduced by EPR is inappropriate. Later, Bell extended the EPR argument and used inequalities to show that local realism and quantum mechanics may sometimes lead to contradictory predictions [@Bell]. Using pairs of correlated photons emitted in a cascade, Clauser et al. [@FC] checked that, even in this case, the results of quantum mechanics are correct; other experiments leading to the same conclusion were performed by Fry et al. [@Fry], Aspect
which $$\begin{aligned} H_D(\hat q_D, \hat p_D)=V^\dag H_S(\hat q_S, \hat p_S)V\hspace{0.3cm}\hspace{0.3cm}\mbox{with}\hspace{0.3cm}V(t)=e^{iS(q,t)/\hbar} \label{eq:SUTrans}\end{aligned}$$ where $S(q,t)$ is the classical action. In this case we can then write $$\begin{aligned} \hat q_D=V^\dag \hat q_S V \hspace{0.5cm}\Rightarrow\hspace{0.5cm} \hat q_D=\hat q_S,\hspace{1.5cm}\\ \hat p_D=V^\dag \hat p_S V\hspace{0.5cm}\Rightarrow \hspace{0.5cm}\hat p_D=\hat p_S+\left(\frac{\partial S}{\partial q}\right).\end{aligned}$$ To determine the wave function evolution, we use the Schrödinger equation written in the form $$\begin{aligned} i\hbar\frac{\partial \psi(q,t)}{\partial t}=H_D(\hat q_D, \hat p_D)\psi(q,t).\end{aligned}$$ Utilising $\hat p_D=\hat p_S+\left(\frac{\partial S}{\partial q}\right)$, the quadratic term in the Hamiltonian becomes $$\begin{aligned} \hat p^2_D=\left[\hat p_S+\left(\frac{\partial S}{\partial q}\right)\right]^2=\hat p_S^2+\hat p_S\left(\frac{\partial S}{\partial q}\right)+\left(\frac{\partial S}{\partial q}\right)\hat p_S+\left(\frac{\partial S}{\partial q}\right)^2.\end{aligned}$$ If we use $\hat p_S=-i\hbar\partial/\partial q$, we see that $\hat p_D^2$ has real and imaginary parts. The real part can be written as $$\begin{aligned} \Re(\hat p_D^2)=\hat p_S^2+\left(\frac{\partial S}{\partial q}\right)^2,\end{aligned}$$ while the imaginary part becomes $$\begin{aligned} \Im(\hat p_D^2)=-i\hbar\frac{\partial^2S}{\partial q^2}+2\left(\frac{\partial S}{\partial q}\right)\hat p_S.\end{aligned}$$ The Schrödinger equation can now be expressed in the form $$\begin{aligned} i\hbar R(q,t)^{-1}\frac{\partial R(q,t)}{\partial t}-\hbar\frac{\partial S(q,t)}{\partial t}=R^{-1}(q,t)H_D(\hat q_D, \hat p_D)R(q,t)
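As a sanity check on the operator algebra above, the following short symbolic computation, added here for illustration and not part of the original derivation, applies the expanded $\hat p_D^2$ to a generic test wavefunction and compares it with the stated decomposition into $\Re(\hat p_D^2)$ and $\Im(\hat p_D^2)$.

```python
# Symbolic check of the expansion of p_D^2 = (p_S + S'(q))^2 with p_S = -i*hbar*d/dq,
# compared against [p_S^2 + (S')^2] + [-i*hbar*S'' + 2*S'*p_S] acting on psi(q).
import sympy as sp

q = sp.Symbol('q', real=True)
hbar = sp.Symbol('hbar', positive=True)
S = sp.Function('S')(q)                   # classical action, arbitrary function of q
psi = sp.Function('psi')(q)               # test wavefunction

def p_S(f):
    return -sp.I * hbar * sp.diff(f, q)

Sq = sp.diff(S, q)

# left-hand side: (p_S + S')^2 psi, applying the operator term by term
lhs = p_S(p_S(psi)) + p_S(Sq * psi) + Sq * p_S(psi) + Sq**2 * psi

# right-hand side: "real" and "imaginary" pieces quoted in the text
rhs = (p_S(p_S(psi)) + Sq**2 * psi) + (-sp.I * hbar * sp.diff(S, q, 2) * psi + 2 * Sq * p_S(psi))

print(sp.simplify(lhs - rhs))             # -> 0
```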
Most underwater manipulation tasks, such as maintenance of ships, underwater pipeline or weld inspection, surveying, oil and gas exploration, cable burial, and mating of underwater connectors, require the manipulator mounted on the vehicle to be in contact with the underwater object or environment. The aforementioned systems are complex and characterized by several strong constraints, namely the complexity of the mathematical model and the difficulty of controlling the vehicle. These constraints should be taken into consideration when designing a force control scheme. In order to increase the adaptability of the UVMS, force control must be included in the control system of the UVMS. Although many force control schemes have been developed for earth-fixed manipulators and space robots, these control schemes cannot be used directly on a UVMS because of the unstructured nature of the underwater environment. From the control perspective, achieving these types of tasks requires specific approaches [@Siciliano_Sciavicco]. However, speaking
first studied the class of set-valued coherent risk measures by proposing some axioms. Hamel (2009) introduced set-valued convex risk measures by an axiomatic approach. For more studies on set-valued risk measures, see Hamel and Heyde (2010), Hamel et al. (2011), Hamel et al. (2013), Labuschagne and Offwood-Le Roux (2014), Ararat et al. (2014), Farkas et al. (2015), Molchanov and Cascos (2016), and the references therein. A natural set-valued risk statistic can be considered as an empirical (or a data-based) version of a set-valued risk measure.\ From the statistical point of view, the behaviour of a random variable can be characterized by its observations, the samples of the random variable. Heyde, Kou and Peng (2007) and Kou, Peng and Heyde (2013) first introduced the class of natural risk statistics, and the corresponding representation results were also derived. An alternative proof of the representation result for the natural risk statistics was also derived by
@garcia; @garcia2] in the $^{4}$He-rich film between the sapphire and the $^{3}$He-rich bulk phase (Fig. \[fig:contactangle\]). For a solid substrate in contact with a phase-separated $^{3}$He-$^{4}$He mixture, complete wetting by the $^{4}$He-rich “d-phase” was generally expected, due to the van der Waals attraction by the substrate [@romagnan; @sornette]. However, we measured the contact angle $\theta$ of the $^{3}$He-$^{4}$He interface on sapphire, and we found that it is finite. Furthermore, it increases between 0.81 and 0.86 K, close to the tri-critical point at $T_{t}$ = 0.87 K [@jltp]. This behavior is opposite to the usual “critical point wetting” where $\theta$ decreases to zero at a wetting temperature $T_{w}$ below the critical point. In this letter, we briefly recall our experimental results before explaining why the “critical Casimir effect” provides a reasonable interpretation of our observations. ![The phase diagram of $^{3}$He-$^{4}$He mixtures (left graph). On the right, a schematic view of the contact angle $\theta$. There is a superfluid film of $^{4}$He
These states are frequently associated with the ejection of relativistic streams of plasma, which form the jets responsible for the radio emission of the sources [@2004MNRAS.355.1105F]. The matter essentially flows into the black hole at the speed of light, while the sound speed can at most reach $c/\sqrt{3}$. Therefore, the accretion flow must be transonic in nature. Viscous accretion with a transonic solution based on the alpha-disk model was first studied by [@1981AcA....31..283P] and [@1982AcA....32....1M]. After that, e.g. [@1989ApJ...336..304A] examined the stability and structure of transonic disks. The possibility of collimation of jets by thick accretion tori was proposed by, e.g., [@1981MNRAS.197..529S]. With respect to the value of the angular momentum, there are two main regimes of accretion: Bondi accretion, which refers to spherical accretion of gas without any angular momentum, and disk-like accretion with a Keplerian distribution of angular momentum. In the case of the former, the sonic point is located farther away
hand, lossless propagation, encountered for instance in an acoustical cavity, can be represented as a *time delay*. Both effects can be combined, so that time-delayed long-memory operators model a propagation with losses. Stabilization of the wave equation by a boundary damping, as opposed to an internal damping, has been investigated in a wealth of works, most of which employ the equivalent admittance formulation (\[eq:=00005BMOD=00005D\_IBC-Admittance\]); see Remark \[rem:=00005BMOD=00005D\_Terminology\] for the terminology. Unless otherwise specified, the works quoted below deal with the multidimensional wave equation. Early studies established exponential stability with a proportional admittance [@chen1981note; @lagnese1983decay; @komornik1990direct]. A delay admittance is considered in [@nicaise2006stability], where exponential stability is proven under a sufficient delay-independent stability condition that can be interpreted as a passivity condition on the admittance operator. The proof of well-posedness relies on the formulation of an evolution problem using an infinite-dimensional realization of the delay through a transport equation (see [@engel2000semigroup § VI.6], [@curtainewart1995infinitedim § 2.4] and
than regarding the problem as just a generalisation of the well-known Israel junction conditions \[13\]. There is more involved than junction conditions alone. In the case of the surface of a star, junction conditions are rather separate from the role of the initial value problem (because the surface is timelike). In the case of a change of signature, this must take place on a spacelike surface and so is essentially tied to the nature of the initial value problem. Junction conditions play here a kinematical role, while the real dynamics of the change of signature are captured by the constraints associated with the field equations. This understanding underlies the approach we adopt.\ The first fundamental form must be continuous. The continuity of the second fundamental form, as seen from both sides, is only assumed up to the action of an infinitesimal diffeomorphism corresponding to a Lie derivative. This allows a
example and take a point of view different from the one mentioned above. Instead of fixing an abelian variety $A$ and considering the minimal dimension of orbits of degree $k$ for rational equivalence, we will be interested in the maximal dimension of such orbits for a very general abelian variety $A$ of dimension $g$ with a given polarization $\theta$.\ This perspective has already been studied by Pirola, Alzati-Pirola, and Voisin (see respectively [@P],[@AP],[@V]) with a view towards the gonality of very general abelian varieties. The story begins with [@P] in which Pirola shows that, given a very general abelian variety $A$, curves of geometric genus less than $\dim A-1$ are rigid in the Kummer variety $K(A)=A/\{\pm 1\}$. This allows him to show: \[P\] A very general abelian variety of dimension $\geq 3$ does not have positive dimensional orbits of degree $2$. In particular it does not admit a non-constant morphism from a
rendering NV centers highly promising not only for applications in materials science and physics but also for applications in the life sciences.[@LeSage2013] As a point defect in the diamond lattice, the NV center can be considered an ’artificial atom’ of sub-nanometer size. As such, it promises not only the highest sensitivity and versatility but in principle also unprecedented nanoscale spatial resolution. Triggered by this multitude of possible applications, various approaches to bring a scannable NV center in close proximity to a sample were recently developed. The first experiments in scanning NV magnetometry employed nanodiamonds (NDs) grafted to atomic force microscope (AFM) tips.[@Balasubramanian2008; @Rondin2012; @Rondin2014; @Tetienne2014] However, NVs in NDs suffer from short coherence times, limiting their sensitivity as magnetic sensors. Secondly, efficient light collection from NDs on scanning probe tips is difficult and limits the resulting sensitivities. Lastly, it has proven challenging to ensure close NV-to-sample separations in this
the manifestly duality-invariant formulation of [@Henneaux:2004jw] was the introduction of prepotentials. Technically, these prepotentials arise through the solutions of the constraints present in the Hamiltonian formalism. The prepotential for the metric (spin-2 Pauli-Fierz field in the linearized theory) appears through the solution of the Hamiltonian constraint, while the prepotential for its conjugate momentum appears through the solution of the momentum constraint. Explicitly, if $h_{ij}$ are the spatial components of the metric deviations from flat space and $\pi^{ij}$ the corresponding conjugate momenta, one has $$h_{ij} = {\epsilon}_{irs} \partial^r \Phi^{s}_{\; \; j} + {\epsilon}_{jrs} \partial^r \Phi^{s}_{\; \; i} + \partial_i u_j + \partial_j u_i \label{hPhi0}$$ and $$\pi^{ij} = {\epsilon}^{ipq} {\epsilon}^{jrs} \partial_p \partial_r P_{qs} \label{piP}$$ where $\Phi_{rs} = \Phi_{sr}$ and $P_{rs} = P_{sr}$ are the two prepotentials (the vector $u_i$ can also be thought of as a prepotential but it drops out from the theory so that we shall not put emphasis on
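The statement that the momentum prepotential solves the momentum constraint can be checked directly: assuming the source-free linearized constraint takes the standard form $\partial_j \pi^{ij} = 0$, the ansatz above is automatically symmetric and divergence-free for any symmetric $P_{qs}$. The following short symbolic check, added here as an illustration with generic placeholder components, confirms this.

```python
# Symbolic check that pi^{ij} = eps^{ipq} eps^{jrs} d_p d_r P_{qs} (P symmetric)
# is symmetric and satisfies d_j pi^{ij} = 0 identically. P[q][s] are placeholders.
import sympy as sp
from sympy import LeviCivita

X = sp.symbols('x1 x2 x3')

# generic symmetric prepotential P_{qs}(x): same function object for (q,s) and (s,q)
P = [[sp.Function(f'P{min(q, s)}{max(q, s)}')(*X) for s in range(3)] for q in range(3)]

def pi(i, j):
    return sum(
        LeviCivita(i, p, q) * LeviCivita(j, r, s) * sp.diff(P[q][s], X[p], X[r])
        for p in range(3) for q in range(3) for r in range(3) for s in range(3)
    )

# symmetry: pi^{ij} = pi^{ji}
print(all(sp.simplify(pi(i, j) - pi(j, i)) == 0 for i in range(3) for j in range(3)))

# momentum constraint: d_j pi^{ij} = 0 for every i
print(all(sp.simplify(sum(sp.diff(pi(i, j), X[j]) for j in range(3))) == 0 for i in range(3)))
```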
for compatibility with on-chip temperatures in current Si integrated circuits. The high Néel temperatures of the Mn-based equiatomic alloys such as MnIr, MnRh, MnNi, MnPd, and MnPt make them suitable candidates for on-chip applications [@2018_Tserkovnyak_RMP]. Extensive research has been conducted on the electronic [@sakuma1998electronic; @umetsu2002electrical; @umetsu2004pseudogap; @umetsu2006electrical; @umetsu2007electronic], magnetic [@pal1968magnetic; @sakuma1998electronic; @umetsu2006electrical; @umetsu2007electronic], and elastic properties [@wang2013first; @wang2013structural] of these materials. The spins on the Mn atoms are antiferromagnetically coupled with each other in the basal plane, and each plane is coupled ferromagnetically as shown in Fig. \[fig:structure\]. ![\[fig:structure\] Antiferromagnetic [*L*]{}1$_0$-type Mn alloy structures. Mn atoms are the purple spheres with the spin vectors, and the gold spheres indicate the Ir, Rh, Ni, Pd, or Pt atoms. (a) In-plane equilibrium spin texture of MnIr, MnRh, MnNi, and MnPd. (b) Out-of-plane equilibrium spin texture of MnPt. ](Structure2.eps){width="1.0\linewidth"} The positive attributes of speed, scaling, and robustness to stray fields are accompanied by the challenges of
respect to the kernel $K_{d}(x,y)$ by $$\left(U^{\mu}_{\Sigma}\right)(x) = \int_{\Sigma} K_{d}(x,y) d\mu(y).$$ If the support of the measure $\mu$ is either clear from context or, otherwise, irrelevant to the problem, we drop the subscript $\Sigma$ and write $U^{\mu}(x)$. We define the *Newtonian (or Coulomb) energy* of a measure $\mu$ with respect to the kernel $K_{d}(x,y)$ by $$W_{\Sigma}[\mu] = \int_{\Sigma} \left (U^{\mu}_{\Sigma}\right)(x) d\mu(x).$$ We also refer to this functional as the *electrostatic energy*, and sometimes use the simpler notation $W_{\Sigma}[\mu] = W[\mu]$; cf. [@Kellogg; @Landkof] for the basics of Potential Theory. We say a charge distribution (measure) $\mu$ is in *constrained equilibrium* when its electrostatic potential $$\label{1} U^{\mu}_{\Sigma}(x) = \int_{\Sigma} K_d(x,y) d\mu(y),$$ is constant (possibly taking different values) on each connected component $\Sigma_j$ of the support of $\mu$, subject to the constraints $$\label{2}
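To make the definitions concrete, here is a small discrete illustration, added for this writeup and not taken from the text: the measure $\mu$ is replaced by equal point charges on the unit sphere, and the kernel is assumed to be the Newtonian one in $d=3$, $K_3(x,y)=|x-y|^{-1}$ (the excerpt does not fix $K_d$ at this point).

```python
# Discrete analogues of the potential U^mu and energy W[mu] for equal point charges
# on the unit sphere, with the assumed Newtonian kernel K_3(x, y) = 1/|x - y|.
import numpy as np

rng = np.random.default_rng(1)

def kernel(x, y):
    return 1.0 / np.linalg.norm(x - y)

n = 200
pts = rng.standard_normal((n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # n points on the unit sphere
w = np.full(n, 1.0 / n)                              # equal weights, total mass 1

def potential(x):
    # U^mu(x) = sum_j w_j K(x, y_j), the discrete version of the integral above
    return sum(wj * kernel(x, yj) for wj, yj in zip(w, pts))

# W[mu] = sum_i w_i U^mu(x_i), omitting the singular self-interaction terms
W = sum(w[i] * w[j] * kernel(pts[i], pts[j])
        for i in range(n) for j in range(n) if i != j)
print(f"discrete electrostatic energy W ~ {W:.4f}")  # close to 1 for the uniform sphere measure
```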
the endmembers present in the scene. Although this assumption naturally leads to fast and reliable unmixing strategies, the LMM is intrinsically unable to cope with relevant nonideal effects that arise in practical applications [@Dobigeon-2014-ID322; @Imbiriba2016_tip; @Imbiriba2017_bs_tip]. One such important nonideal effect is endmember variability [@Zare-2014-ID324; @drumetz2016variabilityReviewRecent]. Endmember variability can be caused by a myriad of factors, including environmental conditions, illumination, and atmospheric and temporal changes [@Zare-2014-ID324]. Its occurrence may result in significant estimation errors being propagated throughout the unmixing process [@thouvenin2016hyperspectralPLMM]. The most common approaches to deal with spectral variability can be divided into three basic classes: 1) grouping endmembers in variational sets, 2) modeling endmembers as statistical distributions, and 3) incorporating the variability in the mixing model, often using physically motivated concepts [@drumetz2016variabilityReviewRecent]. This work follows the third approach. Recently, [@thouvenin2016hyperspectralPLMM], [@drumetz2016blindUnmixingELMMvariability] and [@imbiriba2018GLMM] introduced variations of the LMM to cope with spectral variability. The perturbed LMM (PLMM) [@thouvenin2016hyperspectralPLMM] introduces an additive perturbation
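For readers unfamiliar with these models, a tiny synthetic sketch of the plain LMM and of a per-pixel additive endmember perturbation in the spirit of the PLMM follows; the band count, endmember count, noise level, and perturbation scale are assumptions added for illustration, not values from the cited works.

```python
# Synthetic sketch: LMM pixels y = M a + noise, and pixels whose endmembers receive a
# per-pixel additive perturbation, y_n = (M + dM_n) a_n + noise (PLMM-like).
import numpy as np

rng = np.random.default_rng(42)
L, R, N = 50, 3, 1000                           # spectral bands, endmembers, pixels

M = rng.uniform(0.0, 1.0, (L, R))               # reference endmember signatures
A = rng.dirichlet(np.ones(R), size=N).T          # abundances on the simplex, shape (R, N)

# plain LMM pixels
Y_lmm = M @ A + 0.01 * rng.standard_normal((L, N))

# perturbed-endmember pixels: each pixel sees its own slightly shifted endmembers
Y_pert = np.empty((L, N))
for n in range(N):
    dM = 0.05 * rng.standard_normal((L, R))      # per-pixel endmember perturbation
    Y_pert[:, n] = (M + dM) @ A[:, n]
Y_pert += 0.01 * rng.standard_normal((L, N))

print(Y_lmm.shape, Y_pert.shape)                 # (50, 1000) (50, 1000)
```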
narrow bands, thus making it promising for applications such as classification [@Li201DL-HSI-ClassificationReview; @He2018Review]. In the early stages of HSI classification, most techniques were devoted to analyzing the data exclusively in the spectral domain, disregarding the rich spatial-contextual information contained in the scene [@Fauvel2013Review]. Then, many approaches were developed to extract spectral-spatial features prior to classification to overcome this limitation, such as morphological profiles (MPs) [@Benediktsson2005EMP], spatial-based filtering techniques [@He2017DLRGF; @Jia2018GaborHSI], etc., which generally adopt hand-crafted features followed by a classifier with predefined hyperparameters. Recently, inspired by the great success of deep learning methods [@Liu2019CNNSAR; @Kim2019CNNBlind], CNNs have emerged as a powerful tool for spectral and spatial HSI classification [@Paoletti2019Pyramidal; @Hamouda2019AdaptiveSize]. Different from traditional methods, CNNs jointly learn feature extraction and classification with a hierarchy of convolutions in a data-driven context, which makes it possible to capture features at different levels and to generate more robust and expressive feature representations
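The following is a generic minimal sketch, not any published architecture, of how a CNN can jointly learn spectral-spatial features and a classifier for HSI patches: spectral bands are treated as input channels, and a hierarchy of convolutions feeds a linear classification head. The band count, patch size, and class count are illustrative assumptions; PyTorch is assumed to be available.

```python
# Minimal spectral-spatial CNN sketch for HSI patch classification (illustrative only).
import torch
import torch.nn as nn

class TinyHSICNN(nn.Module):
    def __init__(self, bands=103, classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # collapse the spatial patch
        )
        self.classifier = nn.Linear(128, classes)    # learned jointly with the features

    def forward(self, x):                            # x: (batch, bands, H, W) patches
        return self.classifier(self.features(x).flatten(1))

model = TinyHSICNN()
patches = torch.randn(4, 103, 9, 9)                  # four 9x9 patches with 103 bands
print(model(patches).shape)                          # torch.Size([4, 9]) class scores
```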
where stellar densities are very low (e.g. [@belczynski16]) or (ii) via exchange of energy and angular momentum in dense stellar systems, where the densities are high enough for stellar close encounters to be common (e.g. [@rodriguez16]). LIGO and other ground-based gravitational wave (GW) observatories, such as Virgo, are, however, blind regarding the formation channels of BH binaries (BHBs). Both channels predict populations in the $10-10^3~{\rm Hz}$ detector band with similar features, i.e. masses larger than the nominal $10\,M_{\odot}$, a mass ratio ($q\equiv M_2/M_1$) of about $1$, low spin, and nearly circular orbits [@ligo16astro; @ama16]. It has been suggested that a joint detection with a space-borne observatory such as LISA [@Amaro-SeoaneEtAl2012; @Amaro-SeoaneEtAl2013; @Amaro-SeoaneEtAl2017] could allow us to study different moments in the evolution of BHBs on their way to coalescence: LISA can detect BHBs when the BHs are still $10^2-10^3~R_S$ apart, years to weeks before they enter the LIGO/Virgo band [@miller02; @ama10;
$G=(V,E)$ be an undirected simple graph and $L$ be a set of labels. A labeling of $V$ is a function $x : V \to L$ assigning a label from $L$ to each vertex in $V$. Local information is modeled by a cost $g_i(a)$ for assigning label $a$ to vertex $i$. Information on label compatibility for neighboring vertices is modeled by a cost $h_{ij}(a,b)$ for assigning label $a$ to vertex $i$ and label $b$ to vertex $j$. The cost for a labeling $x$ is defined by an energy function, $$F(x) = \sum_{i \in V} g_i(x_i) + \sum_{\{i,j\} \in E} h_{ij}(x_i,x_j).$$ In the context of MRFs the energy function defines a Gibbs distribution on random variables $X$ associated with the vertices $V$, $$\begin{aligned} p(X=x) & = \frac{1}{Z} \exp(-F(x)).\end{aligned}$$ Minimizing the energy $F(x)$ corresponds to maximizing $p(X=x)$. This approach has been applied to a variety of problems in image processing and computer vision
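A tiny concrete instance may help; the graph, label set, and cost values below are chosen here purely for illustration. It enumerates all labelings of a 3-vertex path, evaluates $F(x)$ and the Gibbs probability exactly, and confirms that the energy minimizer is the most probable labeling.

```python
# Exact enumeration of F(x) = sum_i g_i(x_i) + sum_{ij in E} h_ij(x_i, x_j) and of the
# Gibbs distribution p(x) = exp(-F(x)) / Z on a 3-vertex path graph with two labels.
import itertools
import math

V = [0, 1, 2]
E = [(0, 1), (1, 2)]
L = ['a', 'b']

g = {i: {'a': 0.5, 'b': 1.0} for i in V}                   # unary (local) costs
h = {e: {(p, q): 0.0 if p == q else 1.5                     # pairwise costs favour
         for p in L for q in L} for e in E}                 # equal neighbouring labels

def F(x):
    return (sum(g[i][x[i]] for i in V)
            + sum(h[(i, j)][(x[i], x[j])] for (i, j) in E))

labelings = list(itertools.product(L, repeat=len(V)))
Z = sum(math.exp(-F(x)) for x in labelings)                 # exact partition function

x_map = min(labelings, key=F)                               # energy minimiser = MAP labeling
print(x_map, F(x_map), math.exp(-F(x_map)) / Z)
```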
<context>[NEXA_RESTORE] Gibbs energy with the change of multicomponent “reaction coordinate” that describes a response of the system to the instantaneous transfer of a charge from one site to another. ![\[et\_co\] Schematic energy profiles of reactant ([**R**]{}) and product ([**P**]{}) states of the electron transfer reaction coordinate. The reorganization energy $E_r$ is the sum of the reaction energy $\Delta E$ and the optical excitation energy $E^*$. $\Delta$ stands for energy separation between the energy surfaces due to electronic coupling of [**R**]{} and [**P**]{} states. Thermal ET occurs at nuclear configurations characteristic to the intersection of the parabolas.](rys_et.eps){width="8.5cm" height="8.5cm"} ET reactions involve both classical and quantum degrees of freedom. Quantum effects are mostly related to electronic degrees of freedom. Because of the mass difference between electrons and nuclei, it is frequently assumed that the electrons follow the nuclear motion adiabatically (Born-Oppenheimer approximation). The interaction between two different electronic states results in a splitting energy $\Delta$.
Gibbs energy with the change of multicomponent “reaction coordinate” that describes a response of the system to the instantaneous transfer of a charge from one site to another. ![\[et\_co\] Schematic energy profiles of reactant ([**R**]{}) and product ([**P**]{}) states of the electron transfer reaction coordinate. The reorganization energy $E_r$ is the sum of the reaction energy $\Delta E$ and the optical excitation energy $E^*$. $\Delta$ stands for energy separation between the energy surfaces due to electronic coupling of [**R**]{} and [**P**]{} states. Thermal ET occurs at nuclear configurations characteristic of the intersection of the parabolas.](rys_et.eps){width="8.5cm" height="8.5cm"} ET reactions involve both classical and quantum degrees of freedom. Quantum effects are mostly related to electronic degrees of freedom. Because of the mass difference between electrons and nuclei, it is frequently assumed that the electrons follow the nuclear motion adiabatically (Born-Oppenheimer approximation). The interaction between two different electronic states results in a splitting energy $\Delta$.
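The crossing-point picture can be made explicit with a small numerical sketch. The equal-curvature parabolic free-energy surfaces and the parameter values below are illustrative assumptions (sign conventions for the reaction free energy differ between authors); the sketch only shows where the thermal-ET configuration sits relative to the reactant minimum.

```python
import numpy as np

E_r = 1.0    # reorganization energy (eV), illustrative value
dG = -0.3    # reaction free energy (eV), negative for a downhill reaction (assumed convention)

def G_R(q):
    """Reactant free-energy parabola along a dimensionless reaction coordinate q."""
    return E_r * q**2

def G_P(q):
    """Product parabola, displaced to q = 1 and shifted by the reaction free energy."""
    return E_r * (q - 1.0)**2 + dG

# Locate the intersection (the thermal-ET configuration) numerically.
q = np.linspace(-0.5, 1.5, 200001)
i = np.argmin(np.abs(G_R(q) - G_P(q)))
q_cross = q[i]
barrier = G_R(q_cross)            # activation barrier measured from the R minimum
vertical = G_P(0.0) - G_R(0.0)    # vertical ("optical") excitation energy at the R minimum

print(f"crossing at q = {q_cross:.3f}, barrier = {barrier:.3f} eV")
print(f"vertical excitation from R minimum = {vertical:.3f} eV")
```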
core becomes a stable white dwarf. However, when the mass of the core is greater than the Chandrasekhar limit, the pressure of the relativistically degenerate electron gas is no longer sufficient to arrest the gravitational contraction; the core continues to contract and becomes denser and denser; and when the density reaches the value $\rho \sim 10^{7}\gcmcui$, the process of neutronization sets in; electrons and protons in the core begin to combine into neutrons through the reaction $p + e^{-} \rightarrow n + \nu_{e}$. The electron neutrinos $\nu_{e}$ so produced escape from the core of the star. The gravitational contraction continues and eventually, when the density of the core reaches the value $\rho \sim 10^{14} \gcmcui$, the core consists almost entirely of neutrons. If the mass of the core is less than the Oppenheimer-Volkoff limit ($\sim 3\msol$), then at this stage the contraction stops; the pressure of the degenerate neutron gas is enough
draw definite conclusions on the differences in proton and neutron density distributions, a combined analysis of the interaction cross section and other experiments probing either protons or neutrons alone is necessary. The charge-changing cross section, which is the cross section for all processes that result in a change of the atomic number of the projectile, can provide a good opportunity for this purpose. In Ref.[@Chu.00], the total charge-changing cross sections $\sigma_{\rm cc}$ for light stable and neutron-rich nuclei at relativistic energy on a carbon target were measured. We will study $\sigma_{\rm cc}$ theoretically by using the fully self-consistent and microscopic relativistic continuum Hartree-Bogoliubov (RCHB) theory and the Glauber model in the present letter. The RCHB theory [@ME.98; @MR.96; @MR.98], which is an extension of the relativistic mean field (RMF) [@SW86; @RE89; @RI96] and the Bogoliubov transformation in the coordinate representation, can satisfactorily describe the ground-state properties of nuclei both near and
saturation of the two level systems. Dielectric (ultrasonic) experiments on insulating glasses at low temperatures have found that when the electromagnetic (acoustic) energy flux $J$ used to make the measurements exceeds the critical flux $J_c$, the dielectric (ultrasonic) power absorption by the TLS is saturated, and the attenuation decreases [@Golding1976; @Arnold1976; @Schickfus1977; @Graebner1983; @Martinis2005]. The previous theoretical efforts to explain the linear increase in the charge noise in Josephson junctions assumed that the TLS were not saturated, i.e., that $J\ll J_c$. This seems sensible since the charge noise experiments were done in the limit where the qubit absorbed only one photon. However, stray electric fields could saturate TLS in the dielectric substrate as the following simple estimate shows. We can estimate the voltage $V$ across the capacitor associated with the substrate and ground plane beneath the Cooper pair box by setting $CV^2/2=\hbar\omega$ where $\hbar\omega$ is the energy of the microwave
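The order of magnitude implied by this estimate is easy to check numerically; the capacitance and photon frequency below are assumed values chosen only for illustration, not numbers taken from the text.

```python
import math

HBAR = 1.0546e-34      # reduced Planck constant [J s]

def single_photon_voltage(capacitance_f, frequency_hz):
    """Voltage V obtained from C V^2 / 2 = hbar * omega for one microwave photon."""
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 * HBAR * omega / capacitance_f)

# Illustrative values: a ~1 fF substrate/ground-plane capacitance and a ~6 GHz photon.
C = 1e-15      # farad (assumed)
f = 6e9        # hertz (assumed)
V = single_photon_voltage(C, f)
print(f"V ~ {V * 1e6:.1f} microvolts for C = {C * 1e15:.0f} fF, f = {f / 1e9:.0f} GHz")
```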
is known to suffer from throughput loss and high latency, as each node lacks complete knowledge of the overall network conditions and thus cannot make a globally optimal decision. The datacenter network is fundamentally different from the wide-area Internet in that it is under a single administrative control. Such a single-operator environment makes a centralized architecture a feasible option. Indeed, there have been recent proposals for centralized control design for data center networks [@perry17flowtune; @perry2014fastpass]. In particular, Fastpass [@perry2014fastpass] uses a centralized arbiter to determine the path as well as the time slot of transmission for each packet, so as to achieve zero queueing at switches. Flowtune [@perry17flowtune] also uses a centralized controller, but congestion control decisions are made at the granularity of a flowlet, with the goal of achieving rapid convergence to a desired rate allocation. Preliminary evaluation of these approaches demonstrates promising empirical performance, suggesting the feasibility of a
propose an early implementation of this method and analyze the results. The conducted experiment is based on the collation of two corpora: one text-based and one visual. From the text, a DSM is created and then queried for a list of target words that function as the labels manually given to the images. The result is a list of vectors corresponding to the target words, so that each image is annotated not only with a label but also with a unique vector. The images and vectors are used as training data for a CNN that, afterward, should be able to predict a vector for an unseen image. This prediction vector can be converted back to a word by the DSM using a nearest-neighbor algorithm. We are looking for a richer representation in this process, so we choose the five nearest neighbors. The similarity measure is the cosine similarity for
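A minimal sketch of the decoding step described above, mapping a CNN-predicted vector back to words via cosine similarity over a DSM vocabulary. The vocabulary and vectors here are random stand-ins, not the corpora used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in DSM: a small vocabulary with random unit vectors (illustrative only).
vocab = ["dog", "cat", "car", "tree", "house", "bird", "boat", "road"]
dsm = rng.normal(size=(len(vocab), 50))
dsm /= np.linalg.norm(dsm, axis=1, keepdims=True)

def nearest_words(predicted_vec, k=5):
    """Return the k vocabulary words with highest cosine similarity to the prediction."""
    v = predicted_vec / np.linalg.norm(predicted_vec)
    sims = dsm @ v                       # cosine similarity (rows are unit vectors)
    top = np.argsort(-sims)[:k]
    return [(vocab[i], float(sims[i])) for i in top]

# A trained CNN would supply this vector for an unseen image; here it is random.
prediction = rng.normal(size=50)
for word, sim in nearest_words(prediction, k=5):
    print(f"{word:6s}  cos = {sim:+.3f}")
```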
latter is sufficiently fast, implying that the optimality of the JSQ policy can asymptotically be preserved while dramatically reducing the communication overhead. Stochastic coupling techniques play an instrumental role in establishing the asymptotic optimality and universality properties, and augmentations of the coupling constructions allow these properties to be extended to infinite-server settings and network scenarios. We additionally show how the communication overhead can be reduced yet further by the so-called Join-the-Idle-Queue (JIQ) scheme, leveraging memory at the dispatcher to keep track of idle servers. author: - Mark van der Boor - 'Sem C. Borst' - 'Johan S.H. van Leeuwaarden' - Debankur Mukherjee title: | Scalable Load Balancing in Networked Systems:\ Universality Properties and Stochastic Coupling Methods --- Introduction ============ In the present paper we review scalable load balancing algorithms (LBAs) which achieve excellent delay performance in large-scale systems and yet only involve low implementation overhead. LBAs play a critical role in distributing service
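To make the JIQ idea concrete, here is a small, hypothetical dispatcher sketch: the dispatcher keeps a memory of idle servers and assigns an arriving task to one of them if available, falling back to a uniformly random server otherwise. This is only one of several possible JIQ variants and is not taken from the paper.

```python
import random
from collections import deque

class JIQDispatcher:
    """Join-the-Idle-Queue: servers report when they go idle; the dispatcher
    assigns arrivals to a remembered idle server when one exists."""

    def __init__(self, n_servers):
        self.n = n_servers
        self.idle = deque(range(n_servers))   # initially every server is idle

    def report_idle(self, server_id):
        """Called by a server when its queue empties."""
        self.idle.append(server_id)

    def dispatch(self):
        """Route an arriving task without polling all servers."""
        if self.idle:
            return self.idle.popleft()        # O(1), no queue-length queries
        return random.randrange(self.n)       # fallback when no idle server is known

# Tiny usage example.
d = JIQDispatcher(4)
print([d.dispatch() for _ in range(6)])       # first 4 go to known idle servers, then random
d.report_idle(2)
print(d.dispatch())                           # -> 2
```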
\mid \Psi_\Lambda\rangle = E \mid \Psi_\Lambda \rangle,$$ where, $$\mid \Psi_\Lambda \rangle = \phi^\Lambda_{q\bar{q}} \mid q\bar{q} \rangle + \phi^\Lambda_{q\bar{q}g} \mid q\bar{q}g \rangle + \cdots.$$ where I use shorthand notation for the Fock space components of the state. The full state vector includes an infinite number of components, and in a constituent approximation we truncate this series. We derive the Hamiltonian from QCD, so we must allow for the possibility of constituent gluons. I have indicated that the Hamiltonian and the state both depend on a cutoff, $\Lambda$, which is critical for the approximation. This approach has no chance of working without a renormalization scheme [*tailored to it.*]{} Much of our work has focused on the development of such a renormalization scheme. In order to understand the constraints that have driven this development, seriously consider under what conditions it might be possible to truncate the above series without making an arbitrarily large error in the
Energy and is summarized here as context for the Astro2020 panel. Beyond 2025, DESI will require new funding to continue operations. We expect that DESI will remain one of the world’s best facilities for wide-field spectroscopy throughout the decade. More about the DESI instrument and survey can be found at https://www.desi.lbl.gov. An Overview of DESI: 2020-2025 ============================== DESI is an ambitious multi-fiber optical spectrograph sited on the Kitt Peak National Observatory Mayall 4m telescope, funded to conduct a Stage IV spectroscopic dark energy experiment. DESI features 5000 robotically positioned fibers in an 8 deg$^2$ focal plane, feeding a bank of 10 triple-arm spectrographs that measure the full bandpass from 360 nm to 980 nm at a spectral resolution of 2000 in the UV and over 4000 in the red and IR (Martini et al. 2018). DESI is designed for efficient operations and exceptionally high throughput, anticipated to peak at over 50% from the top
randomized [@tempo2012randomized] and scenario-based methods. For the analytic approximation methods, the probabilistic properties of the uncertainty are exploited to reformulate the chance constraints in a deterministic form. For the second class of methods, the desired control performance and constraint satisfaction are guaranteed by generating a sufficiently large number of uncertainty realizations and by solving a suitable constrained optimization problem, as proposed in [@calafiore2006scenario], [@schildbach2014scenario]. The main advantage of this class of stochastic MPC algorithms is given by the inherent flexibility to be applied to (almost) every class of systems, including any type of uncertainty and both state and input constraints, as long as the optimization problem is convex. On the other hand, they share two main drawbacks: i) slowness, which has limited their application to problems involving slow dynamics and where the sample time is measured in tens of seconds or minutes; and ii) a significant computational burden required
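The scenario idea can be illustrated with a deliberately tiny, hypothetical example: a scalar input is chosen to minimize a cost while a constraint must hold for every sampled realization of the uncertainty. The model, numbers, and brute-force solver are assumptions made only for illustration and are far simpler than a real scenario MPC problem, which would use a convex solver over a horizon.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-step problem: x_next = x0 + u + w, require x_next <= x_max for all sampled w.
x0, x_max = 0.0, 1.0
N_scenarios = 200
w = rng.normal(loc=0.0, scale=0.2, size=N_scenarios)   # sampled uncertainty realizations

def cost(u):
    """Track a (hypothetical) reference input u_ref = 0.8 as closely as possible."""
    return (u - 0.8) ** 2

# Brute force over a grid of candidate inputs; the constraint must hold in every scenario.
candidates = np.linspace(-2.0, 2.0, 4001)
feasible = [u for u in candidates if np.all(x0 + u + w <= x_max)]
u_star = min(feasible, key=cost)

print(f"worst sampled disturbance: {w.max():+.3f}")
print(f"scenario-optimal input:    {u_star:+.3f}  (nominal optimum would be +0.800)")
```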
the McMath-Pierce telescope, the aperture sizes surpassed 1 m. ![image](telsizes.pdf){width=".95\textwidth"} The Need for Large Apertures {#lgap} ---------------------------- A large telescope serves two main purposes: to increase the number of collected photons and to increase the spatial resolution. Both can obviously be traded off against one another, depending on the science question that is being investigated. There are several science questions that currently cannot be answered because of the limited available resolutions and sensitivities - temporal, spatial, and polarimetric. For example, the need for a large number of photons is evident when searching for horizontal magnetic fields, which are observed in linear polarization. These small-scale fields are important for a better understanding of the surface magnetism, e.g. whether they are created by local dynamo processes. The answer to the long-standing question of whether small-scale magnetic fields are more horizontal or vertical seems to currently depend on the method of analysis, and especially how the noise in $Q$ and $U$ is
\Sigma_g$, where $\Sigma_g$ denotes a Riemann surface of genus $g$. As we shrink the size of $\Sigma_g$, the 6d mother theory reduces to a 4d class $\CS$ SCFT [@Gaiotto:2008cd; @Gaiotto:2009we] on $S^1 \times M_3$. On the other hand, one can arrive at a 3d $\mathcal{N}=2$ class $\CR$ SCFT [@Terashima:2011qi; @Dimofte:2011ju] on $S^1 \times \Sigma_g$ by reducing the size of $M_3$ in the 6d theory. Since the 6d twisted index is invariant under continuous supersymmetry-preserving deformations, we expect the equality between the 4d twisted index for the class $\CS$ theory and the 3d twisted index for the class $\CR$ theory. Fortunately, the 3d twisted index was already studied in [@Gang:2018hjd; @Gang:2019uay] using the 3d-3d relation. Thus our main claim in this paper is that one can utilize the twisted index computation to analyze the entropy of magnetically charged black holes in AdS$_5$. In the large-$N$ limit, we checked our speculation using the supergravity solution in [@Nieder:2000kc; @Maldacena:2000mw]. One
swelling; this so-called incubation dose includes most of the dependence on radiation environment [@GarnerWolfer:1984; @Okita:2000; @Okita:2002]. The processes that govern microstructure evolution include thermally-activated motion of small defect clusters, mutual impingement, and annihilation or coalescence reactions along with micro-chemical changes from nuclear transmutation and displacements or diffusion of pre-existing impurities. Radiation simulations should ideally encompass all of these processes. Typically, existing models have included only particular types of defects and reactions or have made other numerical approximations in order to obtain a solution. At the least, simulations of early irradiation must account for void nucleation and growth processes, since annihilation, aggregation, and cluster ripening take place concurrently. Transient and steady-state swelling behavior due to these processes has been studied recently [@Surh:2004; @Surh:2004b; @Surh:2005; @Surh:ERR]. However, only void reactions with vacancy or interstitial monomers are included in these studies. This minimal model of void nucleation gives reasonable swelling behavior as a function
focus on the evolution of acoustic waves inside the Hubble horizon. In linear theory, they merely redshift away as the universe expands. However, higher order calculations [@Gielen] revealed that perturbation theory fails due to secularly growing terms. We explain this here by showing, both analytically and numerically, in one, two and three dimensions, that small-amplitude waves steepen and form shocks, after $\sim \epsilon^{-1}$ oscillation periods [@earlierrefs]: for movies and supplementary materials see [@weblink]. Furthermore, shock collisions would generate gravitational waves. As we shall later explain, the scenario of Ref. [@Bird], for example, would produce a stochastic gravitational wave background large enough to be detected by existing pulsar timing array measurements. More generally, planned and future gravitational wave detectors will be sensitive to gravitational waves generated by shocks as early as $10^{-30}$ seconds after the big bang [@penturoklong]. ![ Simulation showing cosmological initial conditions (left) evolving into shocks (right). The magnitude of the gradient of the
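The steepening mechanism can be illustrated with a minimal one-dimensional toy (not the cosmological calculation itself): an inviscid Burgers simple wave with a small initial amplitude $\epsilon$, evolved with a first-order upwind scheme, develops a near-discontinuous front on a timescale of order $\epsilon^{-1}$. The grid size, amplitude, and scheme are all illustrative choices.

```python
import numpy as np

# Inviscid Burgers equation u_t + u u_x = 0 on a periodic box, first-order upwind scheme.
N, L = 400, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
eps = 0.05                       # small wave amplitude; a shock forms after t ~ 1/eps
u = eps * np.sin(x)              # smooth initial profile

t, t_end = 0.0, 1.2 / eps
while t < t_end:
    dt = 0.4 * dx / (np.abs(u).max() + 1e-12)      # CFL-limited time step
    # Upwind difference, direction chosen by the sign of u.
    dudx_minus = (u - np.roll(u, 1)) / dx
    dudx_plus = (np.roll(u, -1) - u) / dx
    u = u - dt * np.where(u > 0.0, u * dudx_minus, u * dudx_plus)
    t += dt

print("max |du/dx| initially  :", eps)
print("max |du/dx| at t ~ 1/eps:", np.abs((np.roll(u, -1) - u) / dx).max())
```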
frames [@frames], EMycin [@emycin], KRL [@krl] and others. The representation language used by Cyc, CycL [@cyc], is a hybrid of these approaches. Most of these representation systems can be formalized as variants of first order logic. Inference is done via some form of theorem proving. One of the main design issues in these systems is the tradeoff between expressiveness and inferential complexity. More recently, systems such as Google’s ‘Knowledge Graph’ have started finding use, though their extremely limited expressiveness makes them much more like simple databases than knowledge bases. Though a survey of the wide range of KR systems that have been built is beyond the scope of this paper, we note that there are some very basic abilities all of these systems have. In particular, - They are all relational, i.e., are built around the concept of entities that have relations between them - They can be updated incrementally -
continuous real-valued functions on $[0,1]$ and $S_{\infty}$, the group of permutations on $\N$, one does not have a suitable notion of smallness. As these objects are complete topological groups with natural topologies, one can use the notion of meagerness as the notion of smallness. One of the earliest instances of such a result is a theorem of Banach which says that the set of all functions in $C([0,1])$ which are differentiable at some point is meager in $C([0,1])$. The notion of meagerness is a topological one. Often it fails to capture the essence of certain properties studied in analysis, dynamical systems, etc. To remedy this, J. P. R. Christensen in 1973 [@christensen] introduced what he called “Haar null” sets. The beauty of this concept is that in a locally compact group it is equivalent to the notion of a measure-zero set under the Haar measure and at the same time it is
and unstable configurations, and identified the parameter space that can support additional planets. From an astrobiological point of view, dynamically stable habitable zones (HZs) for terrestrial-mass planets ($0.3 \mearth < M_{p} < 10 \mearth$) are the most interesting. Classically, the HZ is defined as the circumstellar region in which a terrestrial-mass planet with favorable atmospheric conditions can sustain liquid water on its surface [@Huang1959; @Hart1978; @Kasting1993; @Selsis2007 but see also Barnes et al.(2009)]. Previous work [@Jones2001; @MT2003; @Jones2006; @Sandor2007] investigated the orbital stability of Earth-mass planets in the HZ of systems with a Jupiter-mass companion. In their pioneering work, [@Jones2001] estimated the stability of four known planetary systems in the HZ of their host stars. [@MT2003] considered the dynamical stability of 100 terrestrial-mass planets (modelled as test particles) in the HZs of the then-known 85 extra-solar planetary systems. From their simulations, they generated a tabular list of stable HZs for
like the Einstein Telescope (ET) of the European mission and the Cosmic Explorer (CE) [@next_genGW]. ET and CE will detect binary coalescences with a higher signal-to-noise ratio (SNR) and possibly at lower frequencies than the second-generation detectors like advanced LIGO, advanced Virgo and KAGRA [@refET]. LISA, on the other hand, would aim to observe supermassive BH binaries in the frequency range 0.1 mHz - 1 Hz. Many studies have been performed to estimate the binary parameters (and additionally test gravity theories) with LISA-like detectors [@berti; @yagi_tanaka]. Recently there has been a lot of interest in the possibility of doing cosmography with LISA [@kyutoku_seto; @pozzo]. There is also a proposal for a Japanese space mission, the Deci-Hertz Interferometer Gravitational Wave Observatory (DECIGO), for observing GW around $f \sim 0.1 - 10$ Hz. In such a scenario, one can ask how these detectors can complement each other. GW signals
known that if the sample comes from a normal distribution, $N(0, \sigma)$, $T_n$ has the Student t-distribution with $n - 1$ degrees of freedom. The proof is quite simple (we provide a few in the appendix section in \[t\_student\]). If the variables $(X_i)_{i=1..n}$ have a mean $\mu$ not equal to zero, the distribution is referred to as a non-central t-distribution with non-centrality parameter given by $$\eta = \sqrt{n}\,\frac{\mu}{\sigma}$$ Extensions to weaker conditions for the t-statistic have been widely studied. Mauldon [@Mauldon_1956] raised the question for which pdfs the t-statistic as defined by \[tstatistic\] is t-distributed with $d - 1$ degrees of freedom. Indeed, this characterization problem can be generalized to the one of finding all the pdfs for which a certain statistic possesses a property that is characteristic of these pdfs. [@Kagan_1973], [@Bondesson_1974] and [@Bondesson_1983], to cite a few, tackled Mauldon’s problem. [@Bondesson_1983] proved the necessary
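A quick Monte Carlo sanity check of the classical statement above: for normal samples, the statistic follows a Student t-distribution with $n-1$ degrees of freedom. The sample size and number of replications below are arbitrary choices, and $T_n$ is computed as $\sqrt{n}\,\bar{X}/S$ with the unbiased sample standard deviation, which is one common convention (the precise definition in the source is not shown here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps = 8, 200_000
sigma = 2.0

# Draw zero-mean normal samples and form T_n = sqrt(n) * mean / sample_std.
X = rng.normal(loc=0.0, scale=sigma, size=(reps, n))
T = np.sqrt(n) * X.mean(axis=1) / X.std(axis=1, ddof=1)

# Compare empirical quantiles with the Student t(n-1) reference distribution.
probs = [0.5, 0.9, 0.975, 0.995]
print("quantile   empirical   t(n-1)")
for p in probs:
    print(f"{p:8.3f}  {np.quantile(T, p):9.3f}  {stats.t.ppf(p, df=n - 1):8.3f}")
```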
basic series of unitary representations of the Möbius group transformations $$\rho _{k}\rightarrow \frac{a\,\rho _{k}+b}{c\,\rho _{k}+d}\,,$$ where $\rho _{k}=x_{k}+iy_{k},\,\rho _{k}^{*}=x_{k}-iy_{k}$ and $a,b,c,d$ are arbitrary complex parameters \[1\]. For this series the conformal weights $$m=1/2+i\nu +n/2,\,\,\widetilde{m}=1/2+i\nu -n/2$$ are expressed in terms of the anomalous dimension $\gamma =1+2i\nu $ and the integer conformal spin $n$ of the composite operators $O_{m,\widetilde{m}}(\overrightarrow{\rho _{0}})$. They are related to the eigenvalues $$M^{2}f_{m,\widetilde{m}}=m(m-1)f_{m,\widetilde{m}}\,,\,\,\,M^{*2}f_{m,\widetilde{m}}=\widetilde{m}(\widetilde{m}-1)f_{m,\widetilde{m}}\,$$ of the Casimir operators $M^{2}$ and $M^{*2}$: $$M^{2}=\left( \sum_{k=1}^{n}M_{k}^{a}\right) ^{2}=\sum_{r<s}2\,M_{r}^{a}\,M_{s}^{a}=-\sum_{r<s}\rho _{rs}^{2}\partial _{r}\partial _{s}\,,\,\, M^{*2}=(M^{2})^{*}.$$ Here $M_{k}^{a}$ are the Möbius group generators $$M_{k}^{3}=\rho _{k}\partial _{k}\,,\,\,\,M_{k}^{-}=\partial _{k}\,,\,\,\,M_{k}^{+}=-\rho _{k}^{2}\partial _{k}$$ and $\partial _{k}=\partial /(\partial \rho _{k})$. The wave function $f_{m,\widetilde{m}}$ satisfies the Schrödinger equation \[4\]: $$E_{m,\widetilde{m}}\,f_{m,\widetilde{m}}=H\,f_{m,\widetilde{m}}.$$ Its eigenvalue $E_{m,\widetilde{m}}$ is proportional to the position $\omega _{m,\widetilde{m}}=j-1$ of a $j$-plane singularity of the $t$-channel partial wave: $$\omega _{m,\widetilde{m}}\,=-\frac{g^{2}N_{c}}{8\,\pi ^{2}}\,E_{m,\widetilde{m}}$$ governing the $n$-Reggeon asymptotic contribution to the total cross-section $\sigma _{tot}\sim s^{\omega _{m,\widetilde{m}}}$. In the particular case of the Odderon, being a compound state of three reggeized gluons with the charge parity $C=-1$ and the signature
massive vector bosons ($W^1$, $W^2$ and $W^3$), but the spectrum contains much more than this. For comparison recall the well-known case of QCD, which has a small number of fields in the Lagrangian (gluons and quarks) and a huge number of particles in the spectrum (glueballs and hadrons). The glueballs and hadrons are created by gauge-invariant operators but the gluons and quarks correspond to gauge-dependent fields in the Lagrangian. The spectrum of the SU(2)-Higgs model is similar, at least in the confinement region: the Lagrangian contains gauge fields and a doublet of scalar fields, but lattice simulations suggest a dense spectrum of “W-balls” and “hadrons.” (For lattice studies of the spectrum in 2+1 dimensions, see Refs. [@Philipsen:1996af; @Philipsen:1997rq].) It is interesting to consider the spectrum in the Higgs region of the phase diagram. At weak coupling (which is directly relevant to the actual experimental situation), one might anticipate one Higgs boson, three vector
\cC_{k_F}) \iso \bigoplus_{\Gamma\in \JJ/G_F} \Ind_{\Stab(\Gamma)}^{G_F} T_l\Pic^0(\Gamma),$$ where $\JJ$ is the set of geometric connected components of $\tilde\cC_{k_F}$. In particular, the above discussion determines the first $l$-adic étale cohomology group of $C$ as a $G_F$-module: $$\label{eq3} \H(C_{\Kbar},\Q_l) \>\>\iso\>\> H^1(\Upsilon,\Z)\!\tensor\!\Sp_2 \>\oplus\> \H(\tilde\cC_{k_F},\Q_l),$$ where $\Sp_2$ is the 2-dimensional ‘special’ representation (see [@Ta 4.1.4]). In this paper we describe the full $G_K$-action on $T_l(A)$ in terms of this filtration, even though $C$ may not be semistable over $K$. \[main\] The filtration of $T_l(A)$ is independent of the choice of $F/K$ and is $G_K$-stable. Moreover, $G_K$ acts semilinearly[^3] on $\mathcal{C}/\mathcal{O}_F$, inducing actions on $\cC_{k_F}$, $\Upsilon$, $\Pic^0 \cC_{k_F}$ and $\Pic^0 \tilde\cC_{k_F}$, with respect to which identifies the graded pieces as $G_K$-modules and extends to a $G_K$-isomorphism $$T_l \Pic^0 (\tilde \cC_{k_F}) \iso \bigoplus_{\Gamma\in \JJ/G_K} \Ind_{\Stab(\Gamma)}^{G_K} T_l\Pic^0(\Gamma).$$ The action of $\sigma \in G_K$ on $\cC_{k_F}$ is uniquely determined by its action on non-singular points, where it is given by $$\qquad
and fluorescence cross-correlation (FCS) [@vinogradova.oi:2009] experiments. Despite some remaining controversies in the data and amount of slip (cf. [@lauga2007]), a concept of hydrophobic slippage is now widely accepted. If a liquid flows past a rough hydrophobic (i.e. superhydrophobic) surface, roughness may favor the formation of trapped gas bubbles, resulting in a large slip length [@bib:joseph_superhydrophobic:2006; @ou2005; @feuillebois.f:2009; @sbragaglia-etal-06; @kusumaatmaja-etal-08b; @jari-jens-08]. For rough hydrophilic surfaces the situation is much less clear, and opposite experimental conclusions have been made: one is that roughness generates extremely large slip [@bonaccurso.e:2003], and the other is that it decreases the degree of slippage [@granick.s:2003; @granick:02]. More recent experimental data suggests that the description of flow near rough surfaces has to be corrected, but for a separation, not slip [@vinogradova.oi:2006]. The theoretical description of such a flow represents a difficult, nearly insurmountable, problem. It has been solved only approximately, and only for the case of periodic roughness and far-field flow with a conclusion
does not violate current observational constraints. Furthermore, such a dark sector interaction can provide an intriguing mechanism to solve the “coincidence problem” [@coincidence; @Comelli:2003cv; @Zhang:2005rg; @Cai:2004dk] and can also introduce new features into structure formation by exerting a nongravitational influence on dark matter [@Amendola:2001rc; @Bertolami:2007zm; @Koyama:2009gd]. In an interacting dark energy (IDE) scenario, the energy conservation equations of dark energy and cold dark matter satisfy $$\begin{aligned} \label{rhodedot} \rho'_{de} &=& -3\mathcal{H}(1+w)\rho_{de}+ aQ_{de}, \\ \label{rhocdot} \rho'_{c} &=& -3\mathcal{H}\rho_{c}+ aQ_{c},~~~~~~Q_{de}=-Q_{c}=Q,\end{aligned}$$ where $Q$ denotes the energy transfer rate, $\rho_{de}$ and $\rho_{c}$ are the energy densities of dark energy and cold dark matter, respectively, $\mathcal{H}=a'/a$ is the conformal Hubble expansion rate, a prime denotes the derivative with respect to the conformal time $\tau$, $a$ is the scale factor of the Universe, and $w$ is the equation of state parameter of dark energy. Several forms for $Q$ have been constructed and constrained by observational data [@He:2008tn; @He:2009mz; @He:2009pd; @He:2010im;
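These background equations are easy to integrate once a form for $Q$ is fixed. The sketch below is a minimal illustration that assumes one commonly used choice, $Q=\beta H\rho_c$ (an assumption made only for this example; the text merely states that several forms of $Q$ have been considered), and evolves the densities in $N=\ln a$, which is equivalent to the conformal-time form at the background level.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical illustration: interaction Q = beta * H * rho_c (one common choice;
# the text only states that several forms of Q have been considered).
w, beta = -0.98, 0.05          # dark-energy EoS and coupling (placeholder values)
Om_c0, Om_de0 = 0.26, 0.69     # present-day density parameters (illustrative)

def rhs(N, y):
    """Background densities in units of the critical density today, evolved in N = ln a."""
    rho_de, rho_c = y
    # d rho / d ln a = (d rho / dt) / H; for Q = beta*H*rho_c the H factors cancel
    drho_de = -3.0 * (1.0 + w) * rho_de + beta * rho_c
    drho_c = -3.0 * rho_c - beta * rho_c
    return [drho_de, drho_c]

# integrate from today (N = 0) back to z = 10 (N = -ln 11)
sol = solve_ivp(rhs, (0.0, -np.log(11.0)), [Om_de0, Om_c0], dense_output=True)
z = 10.0
rho_de_z, rho_c_z = sol.sol(-np.log(1.0 + z))
print(f"at z={z}: rho_de={rho_de_z:.3f}, rho_c={rho_c_z:.3f} (in units of rho_crit,0)")
```

With $\beta>0$ in this convention, energy flows from cold dark matter into dark energy, so the cold dark matter density dilutes slightly faster than $a^{-3}$.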
using only single-particle orbitals centered on the nucleus requires the inclusion of orbitals with much higher angular momenta than a roughly equivalent electron-only calculation [@strasburger95; @schrader98; @mitroy99c; @dzuba99]. For example, the largest CI calculations on the group II positronic atoms and PsH have typically involved single-particle bases with 8 radial functions per angular momentum, $\ell$, and inclusion of angular momenta up to $L_{\rm max} = 10$ [@bromley02a; @bromley02b; @saito03a]. Even with such large orbital basis sets, between 5$\%$ and 60$\%$ of the binding energy and some 30$\%$ to 80$\%$ of the annihilation rate were obtained by extrapolating from $L_{\rm max} = 10$ to the $L_{\rm max} = \infty$ limit. Since our initial CI calculations [@bromley00a; @bromley02a; @bromley02b], advances in computer hardware mean that larger-dimension CI calculations are possible. In addition, program improvements have removed the chief memory bottleneck that previously constrained the size of the calculation. As a result, it is now
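For completeness, the kind of $L_{\rm max}\to\infty$ extrapolation referred to above can be sketched in a few lines. The snippet assumes the increments between successive $L$ values decay as $A(L+1/2)^{-p}$, with $p\approx 4$ for energies and $p\approx 2$ for annihilation rates being the exponents usually quoted in this context; both the one-parameter fit and the sample numbers are illustrative assumptions, not the exact procedure or data of the cited calculations.

```python
import numpy as np

def extrapolate_tail(L, values, p=4.0):
    """Estimate the L_max -> infinity limit of a partial-wave expansion.

    values[i] is the quantity computed including angular momenta up to L[i].
    Assumes increments fall off as A*(L + 1/2)**(-p).
    """
    L = np.asarray(L, dtype=float)
    v = np.asarray(values, dtype=float)
    inc = np.diff(v)                        # increments between successive L
    A = np.mean(inc * (L[1:] + 0.5) ** p)   # crude one-parameter fit
    # numerically truncated tail sum over L > L_max of A*(L + 1/2)**(-p)
    tail_L = np.arange(L[-1] + 1, L[-1] + 2001)
    return v[-1] + A * np.sum((tail_L + 0.5) ** (-p))

# hypothetical energies (hartree) for L = 6..10, purely for illustration
L_vals = [6, 7, 8, 9, 10]
E_vals = [-0.78800, -0.78850, -0.78880, -0.78900, -0.78913]
print(extrapolate_tail(L_vals, E_vals, p=4.0))
```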
end discuss the connection with the Hilbert action. For non-extreme black holes the Euclidean spacetimes admitted in the action principle have the topology I$\!$R$^2 \times S^{D-2}$. It is useful to introduce a polar system of coordinates in the I$\!$R$^2$ factor of I$\!$R$^2 \times S^{D-2}$. The reason is that the black hole will have a Killing vector field –the Killing time– whose orbits are circles centered at the horizon. We will take the polar angle in I$\!$R$^2$ as the time variable in a Hamiltonian analysis. An initial surface of time $t_1$ and a final surface of time $t_2$ will meet at the origin. There is nothing wrong with the two surfaces intersecting. The Hamiltonian can handle that. The canonical action $$I_{can} = \int(\pi^{ij}\dot{g}_{ij} - N{\cal H} - N^i {\cal H}_i), \label{1}$$ [*without any surface terms added*]{} can be taken as the action for the wedge between $t_1$ and $t_2$ provided the following quantities are held fixed:\ (i) the intrinsic
the dust column density estimated by the thermal emission of the dust from far-infrared (100 $\mu$m) observations, or the column density of HI and/or CO from radio observations. At optical wavelengths, the DGL brightness [@Witt2008; @Matsuoka2011; @Ienaka2013] and spectrum [@Brandt2012] are obtained by correlation with the 100 $\mu$m dust thermal emission. However, observations of DGL in the near-infrared (NIR) are limited and controversial. The presence of infrared band features in DGL was first confirmed for the 3.3 $\mu$m band by the AROME balloon experiment [@Giard1988]. Such ubiquitous Unidentified Infrared (UIR) bands are a series of distinct emission bands seen at 3.3, 5.3, 6.2, 7.7, 8.6, 11.2, and 12.7 $\mu$m, and they are thought to be carried by polycyclic aromatic hydrocarbons (PAHs) [@Leger1984; @Allamandola1985]. They are excited by absorbing a single ultraviolet (UV) photon and release the energy as a number of infrared photons in a cascade via several lattice
function is a sum of $N$ sines with random frequencies, amplitudes and offsets. A sum of sines is a straightforward way of creating a smooth and continuous but non-linear transformation function. A continuous function ensures that gray values that are similar in the original image are also similar in the transformed image, so that homogeneous structures remain homogeneous despite the presence of noise. The transformation function is therefore defined as: $$y(x) = \sum_{i=1}^{N} A_i \cdot \sin(f_i \cdot (2\pi \cdot x + \varphi_i)). \label{eq:y}$$ The frequencies $f_i$ are uniformly sampled from $[f_{\min}, f_{\max}]$. This range of permitted frequencies and the number of sines $N$ determine the aggressiveness of the transformation, i.e., how much it deviates from a simple linear mapping. The amplitudes $A_i$ are uniformly sampled from $[\nicefrac{-1}{f_i}, \nicefrac{1}{f_i}]$, which ensures that low-frequency sines dominate so that the transformation function is overall relatively calm. The
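A minimal implementation of this transformation might look as follows; the function name, parameter defaults and sampling ranges are illustrative assumptions rather than values prescribed by the text.

```python
import numpy as np

def random_intensity_transform(n_sines=4, f_min=0.2, f_max=1.0, rng=None):
    """Return a random, smooth, non-linear gray-value mapping y(x).

    Implements y(x) = sum_i A_i * sin(f_i * (2*pi*x + phi_i)) with
    f_i ~ U[f_min, f_max], phi_i ~ U[0, 2*pi], A_i ~ U[-1/f_i, 1/f_i],
    so low-frequency components dominate. Defaults are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    f = rng.uniform(f_min, f_max, size=n_sines)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_sines)
    A = rng.uniform(-1.0 / f, 1.0 / f)

    def y(x):
        x = np.asarray(x, dtype=float)[..., None]       # broadcast over the sines
        return np.sum(A * np.sin(f * (2.0 * np.pi * x + phi)), axis=-1)

    return y

# example: remap a normalized image with values in [0, 1]
transform = random_intensity_transform(rng=np.random.default_rng(0))
image = np.random.default_rng(1).random((64, 64))
remapped = transform(image)
```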
= (\rho - \bar{\rho})/\bar{\rho}$ is the overdensity. In order to obtain an observable quantity, the structure growth rate $f(z) = d\ln\delta/d\ln a$ is combined with the redshift-dependent rms fluctuation of the linear density field, $\sigma_8(z)$, into $R(z) \equiv f\sigma_8(z) \equiv f(z)\sigma_8(z)$. For the cosmic large-scale structure, both of them can be measured from cosmological data. Therefore, the kinematic probes and dynamical probes provide complementary measurements of the nature of the observed cosmic acceleration [@Lue2004; @Lue2006; @Heavens2007; @Zhang2007]. In this paper, we will investigate consistency relations between the kinematic and dynamical probes in the framework of general relativity. Our work is partly motivated by the consistency tests in cosmology proposed by Knox et al. [@Knox2006], Ishak et al. [@Ishak2006], T. Chiba and R. Takahashi [@Chiba2007], and Yun Wang [@YW2008]. The consistency relation of T. Chiba and R. Takahashi, which is constructed theoretically between the luminosity distance and the density perturbation, relies on the matter density
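As a concrete illustration of the observable $f\sigma_8(z)$, the sketch below evaluates it for a flat $\Lambda$CDM background using the common growth-index approximation $f(z)\approx\Omega_m(z)^{0.55}$; both the approximation and the parameter values are assumptions introduced here for the example, not part of the text.

```python
import numpy as np
from scipy.integrate import quad

Om0, sigma8_0, gamma = 0.31, 0.81, 0.55   # illustrative flat-LCDM parameters

def Om(z):
    """Matter density parameter at redshift z for flat LCDM."""
    a3 = (1.0 + z) ** 3
    return Om0 * a3 / (Om0 * a3 + 1.0 - Om0)

def f(z):
    """Growth rate f = dln(delta)/dln(a) in the growth-index approximation."""
    return Om(z) ** gamma

def fsigma8(z):
    """f(z)*sigma8(z), with sigma8(z) = sigma8_0 * D(z)/D(0) and D obtained from f."""
    # ln D(0) - ln D(z) = integral of f dln a  from a(z) to 1
    integral, _ = quad(lambda lna: f(np.exp(-lna) - 1.0), -np.log(1.0 + z), 0.0)
    D_ratio = np.exp(-integral)            # D(z)/D(0)
    return f(z) * sigma8_0 * D_ratio

for z in (0.0, 0.5, 1.0):
    print(z, round(fsigma8(z), 3))
```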
might not be exact. To argue against the exact universality, Landau[@ll59] pointed out that the local rate of energy dissipation $\varepsilon$ fluctuates over large scales. This fluctuation is not universal and is always significant.[@po97; @c03; @mouri06] In fact, the large-scale flow or the configuration for turbulence production appears to affect some small-scale statistics.[@mouri06; @k92; @pgkz93; @ss96; @sd98; @mininni06] Obukhov[@o62] discussed that Kolmogorov’s theory[@k41] still holds in an ensemble of “pure” subregions where $\varepsilon$ is constant at a certain value. Then, the $\varepsilon$ value represents the rate of energy transfer averaged over those subregions. For the whole region, small-scale statistics reflect the large-scale flow through the large-scale fluctuation of the $\varepsilon$ value. The idea that turbulence consists of some elementary subregions is of interest even now.[@ss96] We study statistics among subregions in terms of the effect of large scales on small scales, by using a long record of velocity data obtained in
entanglement witness $W$ detecting $\rho$, i.e. such that $\mbox{Tr}(W\rho)<0$. Clearly, the construction of entanglement witnesses is a hard task. It is easy to construct a $W$ which is not positive, i.e. has at least one negative eigenvalue, but it is very difficult to check that $\mbox{Tr}(W\sigma) \geq 0$ for all separable states $\sigma$. The separability problem may be equivalently formulated in terms of positive maps [@Horodeccy-PM]: a state $\rho$ is separable iff $(\oper {{\,\otimes\,}}\Lambda)\rho$ is positive for any positive map $\Lambda$ which sends positive operators on $\mathcal{H}_B$ into positive operators on $\mathcal{H}_A$. Due to the celebrated Choi-Jamiołkowski [@Jam; @Choi1] isomorphism there is a one-to-one correspondence between entanglement witnesses and positive maps which are not completely positive: if $\Lambda$ is such a map, then $W_\Lambda:=(\oper {{\,\otimes\,}}\Lambda)P^+$ is the corresponding entanglement witness ($P^+$ stands for the projector onto the maximally entangled state in $\mathcal{H}_A {{\,\otimes\,}}\mathcal{H}_B$). Unfortunately, in spite of the considerable effort, the
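The simplest concrete instance of this correspondence uses the transposition map $T$, which is positive but not completely positive. The sketch below (a standard textbook example added here only as an illustration, not taken from the text) builds $W_T=({\rm id}\otimes T)P^+$ for two qubits and verifies that it detects the singlet state while giving a non-negative value on a product state.

```python
import numpy as np

d = 2
# maximally entangled state |phi+> = (|00> + |11>)/sqrt(2) and its projector P+
phi = np.zeros(d * d); phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
P_plus = np.outer(phi, phi)

def partial_transpose_B(M, d=2):
    """Apply transposition on the second tensor factor of a (d*d x d*d) matrix."""
    T = M.reshape(d, d, d, d)            # indices (i, j, k, l) ~ |i j><k l|
    return T.transpose(0, 3, 2, 1).reshape(d * d, d * d)

# witness corresponding to the transposition map: W = (id (x) T) P+
W = partial_transpose_B(P_plus)          # equals the swap operator divided by d

# entangled singlet state (|01> - |10>)/sqrt(2)
psi = np.zeros(d * d); psi[1] = 1.0 / np.sqrt(2.0); psi[2] = -1.0 / np.sqrt(2.0)
rho_ent = np.outer(psi, psi)
# a separable product state |00><00|
rho_sep = np.zeros((d * d, d * d)); rho_sep[0, 0] = 1.0

print("Tr(W rho_ent) =", np.trace(W @ rho_ent))   # negative: entanglement detected
print("Tr(W rho_sep) =", np.trace(W @ rho_sep))   # non-negative, as for all separable states
```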
(the triple $(M,\Delta,g)$ is called a $C^r$ sub-Riemannian manifold). We will always assume that $\Delta$ is of corank $1$ and $\dim(M)=n+1$ with $n \geq 2$. A piecewise $C^1$ path $\gamma$ is said to be ***admissible*** if it is almost everywhere tangent to $\Delta$ (i.e. $\dot{\gamma}(t) \in \Delta_{\gamma(t)}$ for a.e. $t$). We let $\mathcal{C}_{pq}$ denote the set of length-parameterized (i.e. $g(\dot{\gamma}(t),\dot{\gamma}(t))=1$ for a.e. $t$) admissible paths between $p$ and $q$. If $\mathcal{C}_{pq} \neq \emptyset$ for all $p,q \in U\subset M$ then $\Delta$ is said to be ***controllable*** or ***accessible*** on $U$. For smooth bundles, the ***Chow-Rashevskii Theorem*** says that if $\Delta$ is everywhere completely non-integrable (i.e. if the smallest Lie algebra $Lie(\Gamma(\Delta))$ generated by smooth sections of $\Delta$ is the whole tangent space at every point), then it is accessible [@Agr04]. In particular, if $\Gamma(\Delta)(p) + [\Gamma(\Delta),\Gamma(\Delta)](p) =T_pM$ then such a bundle is called step 2, completely non-integrable at $p$
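A standard illustration of a corank-$1$, step-$2$ completely non-integrable bundle (added here only as an example of the definition; it is not taken from the text) is the Heisenberg distribution on $M=\mathbb{R}^3$: $$\Delta=\mathrm{span}\{X,Y\},\qquad X=\partial_x-\tfrac{y}{2}\,\partial_z,\qquad Y=\partial_y+\tfrac{x}{2}\,\partial_z,\qquad [X,Y]=\partial_z,$$ so that $\Gamma(\Delta)(p)+[\Gamma(\Delta),\Gamma(\Delta)](p)=T_p\mathbb{R}^3$ at every point $p$, and the Chow-Rashevskii Theorem guarantees that any two points can be joined by an admissible path.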
dampen so-called stiff components effectively (i.e., is not *strongly A-stable*). Specifically, a straightforward Fourier analysis of the scheme applied to a standard finite difference discretisation of the heat equation shows that high wave number components are reflected in every time step but asymptotically do not diminish over time. This gives rise to problems for applications with non-smooth data, where the behaviour of these components over time plays a crucial role in the smoothing of solutions. Examples where this is relevant include Dirac initial data, as they appear naturally for adjoint equations [see @GS], and piecewise linear and step function payoff data in the computation of option prices [see @WHD]. There, the situation is exacerbated if sensitivities of the solution to its input data (so-called ‘Greeks’) are needed; see @SHAW for an early discussion of issues with time stepping schemes in this context. There is a sizeable body of literature concerned with schemes
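The Fourier argument sketched above is easy to reproduce. The following snippet assumes the scheme in question is of Crank-Nicolson type applied to the standard central second difference for the heat equation (an assumption made here purely for illustration); it computes the amplification factor $g$ and shows that the highest wave number components have $g\approx-1$ for large time steps, i.e. they flip sign every step but are hardly damped.

```python
import numpy as np

def cn_amplification(k_dx, mu):
    """Amplification factor of Crank-Nicolson for u_t = u_xx on a uniform grid.

    k_dx: wave number times grid spacing; mu = dt / dx**2.
    The central second difference has symbol -4*sin(k*dx/2)**2 / dx**2,
    giving g = (1 - 2*mu*s) / (1 + 2*mu*s) with s = sin(k*dx/2)**2.
    """
    s = np.sin(0.5 * k_dx) ** 2
    return (1.0 - 2.0 * mu * s) / (1.0 + 2.0 * mu * s)

mu = 10.0                                   # large time step relative to dx**2
for k_dx in (0.1, 1.0, np.pi):              # low, moderate, highest resolvable mode
    print(f"k*dx = {k_dx:.2f}: g = {cn_amplification(k_dx, mu):+.4f}")
# for k*dx = pi and large mu, g is close to -1: the component is reflected
# (changes sign) at every step but its magnitude barely decreases over time
```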
non-regular, namely change-point type, statistical models as the limiting likelihood ratio process, and the variables $\zeta_\rho$ and $\xi_\rho$ (up to a multiplicative constant) as the limiting distributions of the Bayesian estimators and of the maximum likelihood estimator, respectively. In particular, $B_\rho$ and $M_\rho$ (up to the square of the above multiplicative constant) are the limiting variances of these estimators and, the Bayesian estimators being asymptotically efficient, the ratio $E_\rho=B_\rho/M_\rho$ is the asymptotic efficiency of the maximum likelihood estimator in these models. The main such model is the model, detailed below, of i.i.d. observations whose density has a jump (is discontinuous). Probably the first general result about this model goes back to Chernoff and Rubin [@CR]. Later, it was exhaustively studied by Ibragimov and Khasminskii in [@IKh Chapter 5] $\bigl($see also their previous works [@IKh2] and [@IKh4]$\bigr)$. **Model 1.** Consider the problem of estimation of the location parameter $\theta$ based on the observation $X^n=(X_1,\ldots,X_n)$ of
easily satisfy this condition under a multitude of scenarios, including merging/splitting communities, nodes joining another community, etc. Further, for both methods, under their respective identifiability and certain additional regularity conditions, we establish rates of convergence and derive the asymptotic distributions of the change point estimators. The results are illustrated on synthetic data. In summary, this work provides an in-depth investigation of the novel problem of change point analysis for networks generated by stochastic block models, identifies key conditions for the consistent estimation of the change point, and proposes a computationally fast algorithm that solves the problem in many settings that occur in applications. Finally, it discusses challenges posed by employing clustering algorithms in this problem, which require additional investigation for their full resolution.' author: - Monika Bhattacharjee - Moulinath Banerjee - George Michailidis title: Change Point Estimation in a Dynamic Stochastic Block Model --- **Key words and phrases.** stochastic block model, Erdős-Rényi random graph, change point,
the robustness of representations and their associated computational requirements. Much emphasis has been placed on solving the “data association” problem - making sure that there are no incorrect or aliased map-landmark associations. Navigation neurons found in the brain of many mammals such as rodents, known as “grid cells” [@SCIENCE:TF] (see Fig. \[fig:intuation\]), have highly aliased data associations with locations in the environment - each cell encodes an arbitrary number of physical locations laid out in a triangular tessellating grid [@NATURE:M; @ARN:PGB]. There has been much interest in the theoretical advantages of such a neural representation, including implications for memory storage, error correction [@NATUREN:GC] and scalability, which could revolutionize how artificial systems, including robots, are developed. ![Neurons in the mammalian brain known as grid cells intentionally alias their representation of the environment; each neuron represents an arbitrary number of places in a regularly repeating pattern. []{data-label="fig:intuation"}](fig/fig1.jpg){width="30.00000%"} In this paper, we propose a novel approach
magnetic fields – Galactic (GMF) and extra-Galactic (xGMF) – that deflect UHECRs and distort the original anisotropic patterns. Chemical composition and magnetic fields are degenerate when it comes to UHECR deflections, since the latter depend on $ZB/E$, where $Z$ is the atomic number, $B$ the strength of the magnetic field, and $E$ the UHECR energy: doubling the field strength is equivalent to doubling the charge (or halving the energy). Chemical composition, instead, is the only factor that determines the UHECR propagation length at a given energy: different nuclei come from a different portion of the Universe and carry a different anisotropic imprint, but the relationship between the two is non-monotonic and non-trivial (see, e.g., [@dOrfeuil:2014qgw; @diMatteo:2017dtg]). To a large extent, the statistics of the anisotropies in the distribution of UHECRs can be characterized by the UHECR angular auto-correlation (AC), which, in harmonic space, takes the form of the angular power
to Li and Wei [@LW] and the references therein. Pardoux and Tang [@PaT] studied fully coupled FBSDEs (but without controls) and gave an existence result for viscosity solutions of related quasi-linear parabolic PDEs when the diffusion coefficient $\sigma$ of the forward equation does not depend on the second component of the solution $(Y, Z)$ of the BSDE. Wu and Yu [@WY], [@WY2] studied the case when the diffusion coefficient $\sigma$ of the forward equation depends on $Z$, but for stochastic systems without controls. Li and Wei [@LW] studied optimal control problems for fully coupled FBSDEs. They considered two cases for the diffusion coefficient $\sigma$ of the FSDE: in one case $\sigma$ depends on the control and does not depend on $Z$, and in the other case $\sigma$ depends on $Z$ and does not depend on the control. They use a new method to prove that the value functions are deterministic, satisfy the dynamic programming principle
@ericson2003]. The hadronic scattering lengths can be related to the isospin-even and isospin-odd scattering lengths, $a^+$ and $a^-$: $$a^h_{(\pi^- \ p \to \pi^- \ p)}= a^+ + a^-\qquad a^h_{(\pi^- \ p \to \pi^0 \ n)}=-\sqrt{2}\ a^-$$ The isospin scattering lengths can be related to $\epsilon_{1s}$ and $\Gamma_{1s}$ in the framework of Heavy Baryon Chiral Perturbation Theory ($\chi$PT) [@lyubovitskij2000]. Scattering experiments are restricted to energies above 10 MeV and have to rely on an extrapolation to zero energy to extract the scattering lengths. Pionic hydrogen spectroscopy makes it possible to measure these scattering lengths at almost zero energy (of the same order as the binding energies, i.e., a few keV) and to verify the $\chi$PT calculations with high accuracy. Moreover, the measurement of $\Gamma_{1s}$ allows an evaluation of the pion-nucleon coupling constant $f_{\pi N}$, which is related to $a^-$ by the Goldberger-Miyazawa-Oehme (GMO) sum rule [@GMO]. Pionic atom spectroscopy also makes it possible to measure another important quantity: the charged pion mass. Orbital
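The two relations above are trivially inverted, and a short worked example may help make the isospin decomposition explicit; the numerical inputs below are purely hypothetical placeholders, not measured values from the text.

```python
import math

def isospin_decompose(a_elastic, a_cex):
    """Invert a_el = a+ + a-  and  a_cex = -sqrt(2) * a-.

    a_elastic: hadronic scattering length for pi- p -> pi- p
    a_cex:     hadronic scattering length for pi- p -> pi0 n (charge exchange)
    """
    a_minus = -a_cex / math.sqrt(2.0)
    a_plus = a_elastic - a_minus
    return a_plus, a_minus

# hypothetical input values, in units of the inverse pion mass, for illustration only
a_plus, a_minus = isospin_decompose(a_elastic=0.09, a_cex=-0.12)
print(f"a+ = {a_plus:+.4f}, a- = {a_minus:+.4f}")
```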
a number of non-HBL blazars and even non-blazar radio-loud AGN have been detected by the current generation of ACTs. This suggests that most blazars might be intrinsic emitters of VHE $\gamma$-rays. According to AGN unification schemes [@up95], radio galaxies are the mis-aligned parent population of blazars, with the less powerful FR I radio galaxies corresponding to BL Lac objects and FR II radio galaxies being the parent population of radio-loud quasars. Blazars are those objects which are viewed at a small angle with respect to the jet axis. If this unification scheme holds, then, by inference, radio galaxies may also be expected to be intrinsic emitters of VHE $\gamma$-rays within a narrow cone around the jet axis. While there is little evidence for dense radiation environments in the nuclear regions of BL Lac objects — in particular, HBLs —, strong line emission in Flat Spectrum Radio Quasars (FSRQs) as well as the occasional detection of
in noisy and incompletely known environments. It is for this property that some researchers have embraced evolution as a tool for arriving at robust computational systems. Darwinian evolution not only created systems that can withstand small changes in their external conditions and survive, but has also enforced [*functional modularity*]{} to enhance a species’ evolvability [@Gerhart98] and long-term survival. This modularity is one of the key features that is responsible for the evolved system’s robustness: one part may fail, but the rest will continue to work. Functional modularity is also associated with component re-use and developmental evolution [@Koza03]. The idea of evolving neural networks is not new [@Kitano90; @Koza91], but has often been limited to just adapting the network’s structure and weights with a bias to specific models (e.g., feed-forward) and using [*homogeneous*]{} neuron functions. Less constrained models have been proposed [@Belew93; @Eggenberger97; @Gruau95; @Nolfi95], most of which encompass some sort of implicit
ferromagnetic atomic chains on a superconductor (SC) [@Nadj-Perge:Science14; @Ruby:PRL15; @Feldman:17; @Pawlak:17]. Tuning the system to appropriate conditions, experimentalists are able to find zero-energy modes compatible with the existence of MBSs in the form of zero-bias peaks in tunnelling spectroscopy experiments [@Mourik:Science12; @Deng:Science16; @Zhang:Nat17a; @Chen:Science17; @Deng:PRB18; @Vaitiekenas:PRL18; @Gul:NNano18; @Grivnin:arxiv18; @Vaitiekenas:arxiv18]. However, due to the possibility of alternative explanations for the observed zero-bias peak, the actual nature of these low-energy states has been brought into question [@Setiawan:PRB17; @Liu:PRB17; @Reeg:PRB18b; @Moore:PRB18; @Avila:arxiv18; @Vuik:arxiv18]. A complementary measurement that could dispel the doubts would be to measure the actual zero-mode probability density along the wire or chain, which for Majoranas should show an exponential decay from the edge towards the center with the Majorana localization length [@Klinovaja:PRB12]. Attempts in this direction, including simultaneous tunneling measurements at the end and in the bulk of the wire, were performed in Ref. . The zero mode probability
by the bottom spectrum of the Laplace-Beltrami operator. The problem becomes highly non-trivial for non-compact manifolds. Li [@Li86] determined the exact long time asymptotics on manifolds with nonnegative Ricci curvature, under a polynomial volume growth assumption. Lott [@Lott92] and Kotani-Sunada [@KotaniSunada00] determined the long time asymptotics on abelian covers of closed manifolds. In a very recent paper, Ledrappier-Lim [@LedrappierLim15] established the exact long time asymptotics of the heat kernel of the universal cover of a negatively curved closed manifold, generalizing the situation for hyperbolic space with constant curvature. We also mention that for non-compact Riemannian symmetric spaces, Anker-Ji [@AnkerJi01] established matching upper and lower bounds on the long time behaviour of the heat kernel. Since the work by Lott [@Lott92] and Kotani-Sunada [@KotaniSunada00] is closely related to ours, we describe it briefly here. Let $M$ be a closed Riemannian manifold, and $\hat M$ be an abelian cover (i.e. a covering space whose deck transformation group is abelian). The
comprehensive discussion of these methods. Their simplicity aside, the convergence analysis of AFS methods in the general degenerate setup is considered difficult. Dikin first published a convergence proof with a non-degeneracy assumption in 1974 [@vanderbei:1990]. Both Vanderbei *et al.* [@vanderbei:1986] and Barnes [@barnes:1986] gave simpler proofs in their global convergence analyses, but still assumed primal and dual non-degeneracy. The first attempt to break out of the non-degeneracy assumption was made by Adler *et al.* [@adler:1991], who investigated the convergence of continuous trajectories of primal and dual AFS. Subsequently, assuming only dual non-degeneracy, Tsuchiya [@tsuchiya:19921] showed that under the condition of step size $\alpha < \frac{1}{8}$, the long-step version of AFS converges globally. In another work, Tsuchiya [@tsuchiya:1991] showed that the dual non-degeneracy condition is not a necessary condition for convergence, as assumed previously [@tsuchiya:19921]. Moreover, Tsuchiya [@tsuchiya:1991] introduced the idea of a potential function, a slightly different function than the
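To make the object under discussion concrete, the following sketch implements a basic long-step primal affine-scaling iteration for a standard-form LP (min $c^Tx$ subject to $Ax=b$, $x\ge0$); it is an illustration only, with the step fraction set to $1/8$ to echo the condition quoted above, and it does not reproduce the degeneracy analysis of the cited works.

```python
import numpy as np

def affine_scaling(A, b, c, x0, alpha=0.125, tol=1e-9, max_iter=500):
    """Primal affine-scaling sketch for min c^T x  s.t.  A x = b, x >= 0.

    x0 must be strictly feasible (A x0 = b, x0 > 0); alpha is the fraction
    of the step to the boundary taken at each iteration.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        X2 = np.diag(x ** 2)
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
        r = c - A.T @ w                                  # reduced costs
        d = -X2 @ r                                      # affine-scaling direction
        if np.linalg.norm(x * r) < tol:                  # (approximate) optimality
            break
        if np.all(d >= 0):
            raise RuntimeError("problem appears unbounded")
        theta = np.min(-x[d < 0] / d[d < 0])             # step to the boundary
        x = x + alpha * theta * d
    return x

# tiny example: min -x1 - 2*x2  s.t.  x1 + x2 + s = 4,  x1 + 3*x2 + t = 6,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0], [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x_star = affine_scaling(A, b, c, x0=np.array([1.0, 1.0, 2.0, 2.0]))
print(np.round(x_star, 4))   # approaches the optimal vertex (3, 1, 0, 0)
```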
W)$ contains $u$. A finite word $u$ is called an [*obstruction*]{} for $W$ if it is not a subword of $W$ but every proper subword of $u$ is a subword of $W$. The [*cogrowth function*]{} $O_W(n)$ is the number of obstructions of length $\leqslant n$. Further we assume that the alphabet $A$ is binary, $A =\{\alpha, \beta\}$. The main result of this article is the following. \[thm:ur\] Let $W$ be a uniformly recurrent non-periodic sequence over a binary alphabet. Then $$\overline{\lim_{n \to \infty}} \frac{O_W(n)}{\log_3n} \geq 1.$$ Note that if $F = \alpha \beta \alpha\alpha \beta\alpha \beta\alpha\alpha \beta \alpha \dots$ is the [*Fibonacci sequence*]{}, then $O_F(n) \sim \log_{\varphi}n$, where $\varphi = {\frac{\sqrt5 + 1}2}$ [@Beal]. Factor languages and Rauzy graphs ================================= A [*factor language*]{} $\mathcal{U}$ is a set of finite words such that for any $u\in \mathcal{U}$ all subwords of $u$ also belong to $\mathcal{U}$. A finite word $u$ is called an [*obstruction*]{} for $\mathcal{U}$ if $u\not \in \mathcal{U}$, but
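For intuition about these definitions, the sketch below enumerates the obstructions of a long finite prefix of the Fibonacci sequence (written with letters a, b in place of $\alpha,\beta$); since only a finite prefix is examined, the counts are meaningful only for lengths much smaller than the prefix length, and the snippet is offered purely as an illustration.

```python
def fibonacci_word(n_iter=20):
    """Prefix of the Fibonacci sequence via the substitution a -> ab, b -> a."""
    w = "a"
    for _ in range(n_iter):
        w = "".join("ab" if ch == "a" else "a" for ch in w)
    return w

def obstructions_up_to(word, n):
    """Obstructions (minimal forbidden words) of length <= n for the finite word `word`.

    u is an obstruction iff u is not a subword while both u[:-1] and u[1:] are
    (these two cover all proper subwords of u).
    """
    subwords = {word[i:i + m] for m in range(1, n + 1)
                for i in range(len(word) - m + 1)}
    subwords_or_empty = subwords | {""}
    alphabet = sorted(set(word))
    obs = set()
    for v in subwords_or_empty:                 # candidate left part u[:-1]
        if len(v) >= n:
            continue
        for a in alphabet:
            u = v + a
            if u not in subwords and u[1:] in subwords_or_empty:
                obs.add(u)
    return sorted(obs, key=lambda u: (len(u), u))

F = fibonacci_word()                            # roughly 3*10^4 letters, plenty for small n
obs = obstructions_up_to(F, 12)
print(len(obs), obs)                            # cogrowth of this prefix up to length 12
```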
to have operators executing sampling inspections, trying to find obstructions before they can cause severe failures that would require urgent and expensive actions. The current approach is hardly scalable, as it is expensive and requires many hours of manual work. Companies in charge of large wastewater networks face massive operational costs related to inspection and maintenance. The current environmental context brings added pressure to the topic, since episodes of heavy rainfall are becoming more common as a consequence of climate change [@donat2017addendum]. During these episodes, obstructed wastewater networks may become the origin of sewer overflows and floods with an impact on urban environments and population. ![Sample frames from the videos database.[]{data-label="fig:sewer_samples"}](Images/sewers_grid_samples.png){width="49.00000%"} In an effort to increase the quality and efficiency of sewer maintenance, the industry is now looking into recent technological advancements in fields such as image recognition and unmanned aerial vehicles. In this paper, we tackle one of the challenges necessary for
<context>[NEXA_RESTORE] as those from quenched and unquenched LQCD, QCD sum rules, etc., and to test the unitarity of the quark mixing matrix with better precision. In the SM, the ratio of the branching fraction (BF) of $D^+_s\to \tau^+\nu_\tau$ to that of $D^+_s\to \mu^+\nu_\mu$ is predicted to be 9.74 with negligible uncertainty, and the BFs of the $D_s^+\to\mu^+\nu_\mu$ and $D_s^-\to\mu^-\bar{\nu}_\mu$ decays are expected to be the same. However, hints of lepton flavor universality (LFU) violation in semileptonic $B$ decays were recently reported by BaBar, LHCb and Belle [@babar_1; @babar_2; @lhcb_1; @lhcb_kee_3; @belle_kee]. It has been argued that new physics mechanisms, such as a two-Higgs-doublet model mediated by charged Higgs bosons [@fajfer; @2hdm] or a seesaw mechanism due to lepton mixing with Majorana neutrinos [@seesaw], may cause LFU violation or CP violation. Tests of LFU and searches for CP violation in $D^+_s\to\ell^+\nu_\ell$ decays are therefore important tests of the SM. In this Letter, we present an experimental study of
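The value of 9.74 quoted above follows, up to small corrections, from the standard helicity-suppression formula for purely leptonic decays, $\Gamma(D_s^+\to\ell^+\nu_\ell)\propto m_\ell^2\,(1-m_\ell^2/m_{D_s}^2)^2$. The short check below is our own back-of-the-envelope sketch with approximate PDG masses, not the collaboration's calculation.

```python
# Back-of-the-envelope check of the SM value quoted above, using the standard
# helicity-suppression formula Gamma(Ds -> l nu) ~ m_l^2 (1 - m_l^2/m_Ds^2)^2.
# Masses are approximate PDG values in GeV; this is our own sketch, not the
# collaboration's calculation (which also includes small corrections).
m_Ds, m_tau, m_mu = 1.9683, 1.7769, 0.10566

def width_factor(m_l, m_M=m_Ds):
    """Lepton-mass-dependent factor of the purely leptonic decay width."""
    return m_l**2 * (1.0 - m_l**2 / m_M**2)**2

ratio = width_factor(m_tau) / width_factor(m_mu)
print(f"BF(Ds->tau nu) / BF(Ds->mu nu) ~ {ratio:.2f}")   # ~ 9.74
```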
<context>[NEXA_RESTORE] dual graphs of all nonsingular rational curves on the Enriques surface of type $K$ the dual graph of type $K$ ($K = {\I, \II,..., \VII}$). In positive characteristic, the classification problem of Enriques surfaces with a finite group of automorphisms is still open. The case of characteristic 2 is especially interesting. In the paper [@BM2], Bombieri and Mumford classified Enriques surfaces in characteristic 2 into three classes, namely, singular, classical and supersingular Enriques surfaces. As in the case of characteristic $0$, an Enriques surface $X$ in characteristic 2 has a canonical double cover $\pi : Y \to X$, which is a separable ${\bf Z}/2{\bf Z}$-cover, or a purely inseparable $\mu_2$- or $\alpha_2$-cover, according to whether $X$ is singular, classical or supersingular. The surface $Y$ might have singularities, but it is $K3$-like in the sense that its dualizing sheaf is trivial. In this paper we consider the following problem: [*does there exist an Enriques
<context>[NEXA_RESTORE] in some superconducting materials have been reported [@Heavyfermion; @HighTc; @Organic; @FeSe]. On the other hand, exotic pairing phases have prompted new interest in studies of dense quark matter under compact-star conditions [@Alford2001; @Bowers2002; @Shovkovy2003; @Alford2004; @EFG2004; @Huang2004; @Casalbuoni2005; @Fukushima2005; @Ren2005; @Gorbar2006; @Anglani2014] and of ultracold atomic Fermi gases with population imbalance [@Atomexp; @Sheehy2006]. Color superconductivity in dense quark matter appears due to the attractive interactions in certain diquark channels [@CSC01; @CSC02; @CSC03; @CSC04; @CSCreview]. Because of the constraints from beta equilibrium and electric charge neutrality, different quark flavors ($u$, $d$, and $s$) acquire mismatched Fermi surfaces. Quark color superconductors under compact-star constraints, as well as atomic Fermi gases with population imbalance, therefore provide rather clean systems in which to realize the long-sought exotic LOFF phase. Around the tricritical point in the temperature-mismatch phase diagram, the LOFF phase can be studied rigorously by using the Ginzburg-Landau (GL) analysis, since both the gap parameter and
<context>[NEXA_RESTORE] small but systematic bias of $\sim 4 ''$ in declination of the ADFS positions with respect to their counterparts. We found that most of the detected bright sources are galaxies, and very few stars, quasars, or AGNs were found. As shown in Figure \[fig:zdist\], most of the identified objects are nearby galaxies at $z<0.1$, a large part of them belonging to the cluster DC 0428-53 at $z \sim 0.04$. The statistics of the identified ADFS sources are presented in Table 1.

[llc]{}
Galaxies & & 78\
 & Galaxy & 37\
 & Galaxy in cluster of galaxies & 33\
 & Pair or interacting galaxies & 4\
 & Low surface brightness galaxy & 2\
 & Seyfert 1 & 1\
 & Starburst & 1\
Star & & 3\
Quasar & & 1\
X-ray source & & 3\
IR sources & & 24\

Spectral energy distributions
=============================

Spectral energy distributions (SEDs) give a first important clue to the physics of the radiation of the sources. The deep image in the [*AKARI*]{} filter bands has
<context>[NEXA_RESTORE] of links. The satisfiability problem is also known to be NP-complete [@gar]. Here we are particularly interested in the influence of the network topology on the collective magnetic state of the Ising network. The topology is characterized by the clustering coefficient $C$, which is a measure of the density of triads of linked nodes in the network. In antiferromagnetic systems, these triads contribute to the ground state energy, because the three neighboring spins of a triad cannot all be simultaneously antiparallel to each other. This effect is known as spin frustration. When the frustration is combined with the topological disorder of a random network, the ground state of the system is expected to be the spin-glass state, a most mysterious magnetic phase that has remained under dispute for more than thirty years [@by; @fh; @ns]. These facts suggest that a search on a random network with Ising spins and antiferromagnetic interactions
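As a minimal sketch of the frustration argument above (our own example, not taken from the excerpt), one can enumerate all $2^3$ configurations of a single antiferromagnetic triad and see that the "all bonds satisfied" energy is never reached:

```python
# Minimal illustration (our own example) of spin frustration in a single
# antiferromagnetic triad: with E = J * (s1 s2 + s2 s3 + s1 s3) and J > 0,
# the "all bonds antiparallel" energy -3J can never be reached.
from itertools import product

J = 1.0
energies = [J * (s1 * s2 + s2 * s3 + s1 * s3)
            for s1, s2, s3 in product((-1, +1), repeat=3)]
print(min(energies))   # -1.0, not -3.0: one bond is always left unsatisfied
```

The minimum comes out as $-J$ rather than $-3J$, i.e. exactly one of the three bonds is always left unsatisfied.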
<context>[NEXA_RESTORE] E_{\gamma 0}. \label{i1}$$ Under such conditions, characteristic of Thomson-Compton scattering in a non-relativistic plasma, the scattering term reduces to the differential form [@Komp], [@wey1]. The relative value of the spectral distortions induced by such scattering is preserved during the expansion of the universe, because the perturbed and background photons have the same dependence on time and on redshift when moving without interactions. In addition to the scattering on hot electrons, another type of distortion of the photon spectrum arises from the interaction of photons with the bulk motion of matter. This type of distortion has been investigated in application to the accretion of matter onto neutron stars and onto stellar-mass and AGN black holes, which are observed as X-ray, optical, and ultraviolet sources [@BP1], [@BP2], [@TZ], [@PL], [@PL2], [@T1], [@BW], [@CGF], [@BW2]. Among the objects in which bulk Comptonization may be important, we should include Zeldovich pancakes, which may
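For reference, the differential (Kompaneets) form of the scattering term referred to above can be written, in standard notation (our transcription under the usual non-relativistic, small energy-transfer assumptions, not taken from the excerpt), as $$\frac{\partial n}{\partial y} = \frac{1}{x^2}\,\frac{\partial}{\partial x}\left[x^4\left(\frac{\partial n}{\partial x} + n + n^2\right)\right], \qquad x = \frac{h\nu}{k_B T_e}, \qquad y = \int \frac{k_B T_e}{m_e c^2}\, n_e \sigma_T c \, dt,$$ where $n(x)$ is the photon occupation number.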