The probability of this bad history is , as we have in Lemma 3. Theorem 1 is proved by setting . ∎
Theorem 1 shows that, when is large enough, the NPB procedure used in previous work (Eckles and Kaptein, 2014; Tang et al., 2015; McNellis et al., 2017) incurs an expected cumulative regret arbitrarily close to a linear regret in the order of . It is straightforward to prove a variant of this lower bound with any constant (in terms of ) number of pseudo-examples. Next, we show that NPB with appropriate forced exploration can result in sublinear regret.
3.3 Forced Exploration
In this subsection, we show that NPB, when coupled with an appropriate amount of forced exploration, can result in sublinear regret in the Bernoulli bandit setting. In order to force exploration, we pull each arm times before starting Algorithm 1. The following theorem shows that for an appropriate value of , this strategy can result in an upper bound on the regret.
Theorem 2.
In any armed bandit setting, if each arm is initially pulled times before starting Algorithm 1, then
E[R(T)] = O(T^{2/3}).
Proof.
The claim is proved in Appendix B based on the following observation: If the gap of the suboptimal arm is large, the prescribed steps are sufficient to guarantee that the bootstrap sample of the optimal arm is higher than that of the suboptimal arm with a high probability at any round . On the other hand, if the gap of the suboptimal arm is small, no algorithm can have high regret. ∎
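The forced-exploration strategy can be sketched in code. The following is a minimal simulation for Bernoulli arms, not the paper's exact Algorithm 1; the arm means, horizon, and forced-pull budget `m` are illustrative placeholders rather than the constants appearing in Theorem 2:

```python
import random

def npb_forced_exploration(means, T, m, rng):
    """NPB bandit with forced exploration: pull each arm m times up
    front; afterwards, each round, score every arm by the mean of a
    bootstrap resample of its reward history and pull the argmax."""
    K = len(means)
    history = [[] for _ in range(K)]
    pulls = [0] * K

    def pull(k):
        r = 1.0 if rng.random() < means[k] else 0.0
        history[k].append(r)
        pulls[k] += 1

    for k in range(K):            # forced-exploration phase
        for _ in range(m):
            pull(k)
    for _ in range(T - K * m):    # bootstrap phase
        scores = []
        for k in range(K):
            h = history[k]
            boot = [h[rng.randrange(len(h))] for _ in h]  # resample with replacement
            scores.append(sum(boot) / len(boot))
        pull(max(range(K), key=lambda k: scores[k]))
    return pulls

pulls = npb_forced_exploration([0.9, 0.2], T=2000, m=25, rng=random.Random(0))
```

With a large gap between the arm means, the forced pulls give each arm enough history that the optimal arm's bootstrap mean dominates in almost every subsequent round.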
Although we can remedy the NPB procedure using this strategy, it results in a suboptimal regret bound. In the next section, we consider a weighted bootstrapping approach as an alternative to NPB.
4 Weighted Bootstrapping
In this section, we propose weighted bootstrapping (WB) as an alternative to the nonparametric bootstrap. We first describe the weighted bootstrapping procedure in Section 4.1. For the bandit setting with Bernoulli rewards, we show the mathematical equivalence between WB and TS, hence proving that WB attains near-optimal regret (Section 4.2).
4.1 Procedure
In order to formulate the bootstrapped log-likelihood, we use a random transformation of the labels in the corresponding log-likelihood function. First, consider the case of Bernoulli observations where the labels . In this case, the log-likelihood function is given by:
L(θ) = ∑_{i∈D_j} [ y_i log(g(⟨x_i, θ⟩)) + (1 − y_i) log(1 − g(⟨x_i, θ⟩)) ]
where the function is the inverse link function. For each observation , we sample a random weight from an exponential distribution, specifically, for all , . We use the following transformation of the labels: and . Since we transform the labels by multiplying them with exponential weights, we refer to this case as WB with multiplicative exponential weights. Observe that this transformation procedure extends the domain for the labels from values in to those in and does not result in a valid probability mass function. However, we describe several advantages of using this transformation below.
Given this transformation, the bootstrapped loglikelihood function is defined as:
L̃(θ) = ∑_{i∈D_j} w_i [ y_i log(g(⟨x_i, θ⟩)) + (1 − y_i) log(1 − g(⟨x_i, θ⟩)) ] (4)
Here is the log-likelihood of observing point . As before, the bootstrap sample is computed as: . Note that in WB, the randomness for bootstrapping is induced by the weights and that . As a special case, in the absence of features, when for all , assuming positive and negative pseudo-counts and denoting , we obtain the following closed-form expression for computing the bootstrap sample:
~θ = ( ∑_{i=1}^{n} w_i⋅y_i + ∑_{i=1}^{α₀} w_i ) / ( ∑_{i=1}^{n+α₀+β₀} w_i ) (5)
Using the above transformation has the following advantages: (i) Using equation 4, we can interpret as a random reweighting (by the weights ) of the observations. This formulation is equivalent to the weighted likelihood bootstrapping procedure proposed, and proven to be asymptotically consistent in the offline case, by Newton and Raftery (1994). (ii) From an implementation perspective, computing involves solving a weighted maximum likelihood estimation problem. It thus has the same computational complexity as NPB and can be solved by using black-box optimization routines. (iii) In the next section, we show that using WB with multiplicative exponential weights has good theoretical properties in the bandit setting. Furthermore, such a procedure of randomly transforming the labels lends itself naturally to the Gaussian case; in Appendix C.2.1, we show that WB with an additive transformation using Gaussian weights is equivalent to TS.
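In the featureless Bernoulli case, the closed form of equation 5 is straightforward to implement. The following is a minimal sketch (the function name and defaults are ours, not the paper's):

```python
import random

def wb_sample(rewards, alpha0=1, beta0=1, rng=random):
    """One weighted-bootstrap sample for Bernoulli rewards (equation 5).

    Each of the n observations and each of the alpha0 positive and
    beta0 negative pseudo-counts receives an independent Exp(1) weight;
    the sample is the weighted fraction of positive outcomes."""
    n = len(rewards)
    w = [rng.expovariate(1.0) for _ in range(n + alpha0 + beta0)]
    num = sum(wi * yi for wi, yi in zip(w[:n], rewards))  # weighted observed positives
    num += sum(w[n:n + alpha0])                           # positive pseudo-counts
    return num / sum(w)                                   # normalize by all weights
```

Each call draws fresh weights, so repeated calls on the same history produce the randomized indices used for exploration.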
4.2 Equivalence to Thompson sampling
We now analyze the theoretical performance of WB in the Bernoulli bandit setting. In the following proposition, proved in Appendix C.1.1, we show that WB with multiplicative exponential weights is equivalent to TS.
Proposition 1.
If the rewards , then weighted bootstrapping using the estimator in equation 5 results in , where and are the numbers of positive and negative observations respectively, and and are the positive and negative pseudo-counts. In this case, WB is equivalent to Thompson sampling under the prior.
Since WB is mathematically equivalent to TS, the bounds in Agrawal and Goyal (2013a) imply near-optimal regret for WB in the Bernoulli bandit setting.
In Appendix C.1.2, we show that this equivalence extends to the more general categorical (with categories) reward distribution, i.e., for . In Appendix C.2.1, we prove that for Gaussian rewards, WB with additive Gaussian weights, i.e., and using the additive transformation , is equivalent to TS under an uninformative prior. Furthermore, this equivalence holds even in the presence of features, i.e., in the linear bandit case. Using the results in Agrawal and Goyal (2013b), this implies that for Gaussian rewards, WB with additive Gaussian weights achieves near-optimal regret.
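Proposition 1 can also be checked empirically. The following Monte Carlo sketch (illustrative numbers: 7 positive and 3 negative observations, unit pseudo-counts) draws weighted-bootstrap samples via equation 5 and compares their average to the mean (s + α₀)/(n + α₀ + β₀) of the corresponding Beta posterior:

```python
import random

rng = random.Random(1)
rewards = [1] * 7 + [0] * 3        # s = 7 positives, f = 3 negatives
alpha0 = beta0 = 1                 # pseudo-counts
n = len(rewards)

def wb_draw():
    # one weighted-bootstrap sample (equation 5)
    w = [rng.expovariate(1.0) for _ in range(n + alpha0 + beta0)]
    num = sum(wi * yi for wi, yi in zip(w[:n], rewards)) + sum(w[n:n + alpha0])
    return num / sum(w)

draws = [wb_draw() for _ in range(20000)]
mc_mean = sum(draws) / len(draws)
beta_mean = (7 + alpha0) / (n + alpha0 + beta0)   # mean of Beta(s+α0, f+β0)
```

Since sums of independent Exp(1) weights are Gamma distributed, the ratio in each draw is exactly Beta(s+α₀, f+β₀), so the empirical and posterior means should agree up to Monte Carlo error.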
5 Experiments
In Section 5.1, we first compare the empirical performance of bootstrapping and Thompson sampling in the bandit setting. In Section 5.2, we describe the experimental setup for the contextual bandit setting and compare the performance of different algorithms under different feature-reward mappings.
5.1 Bandit setting
We consider arms (refer to Appendix D for results with other values of ), a horizon of rounds, and average our results across runs. We perform experiments for four different reward distributions: Bernoulli, Truncated Normal, Beta, and Triangular, all bounded on the interval. In each run and for each arm , we choose the expected reward (mean of the corresponding distribution) to be a uniformly distributed random number in . For the Truncated Normal distribution, we choose the standard deviation to be equal to , whereas for the Beta distribution, the shape parameters of arm are chosen to be and . We use the prior for TS. In order to use TS on distributions other than Bernoulli, we follow the procedure proposed in Agrawal and Goyal (2013a): for a reward in , we flip a coin with the probability of obtaining 1 equal to the reward, resulting in a binary “pseudo-reward”. This pseudo-reward is then used to update the Beta posterior as in the Bernoulli case. For NPB and WB, we use the estimators in equations 3 and 5, respectively. For both of these, we use the pseudo-counts .
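The binarization trick described above amounts to a one-line posterior update. A minimal sketch (function name and signature are ours):

```python
import random

def ts_binarized_update(alpha, beta, reward, rng=random):
    """Thompson-sampling Beta-posterior update for a reward in [0, 1]
    via the binarization trick of Agrawal and Goyal (2013a): flip a
    coin with success probability equal to the reward and update the
    posterior with the resulting binary pseudo-reward."""
    pseudo = 1 if rng.random() < reward else 0
    return alpha + pseudo, beta + (1 - pseudo)
```

The returned pair is the updated Beta posterior, used exactly as in the Bernoulli case.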
In the Bernoulli case, NPB obtains a higher regret than both TS and WB, which are equivalent. For the other distributions, we observe that both WB and NPB (with WB performing consistently better) obtain lower cumulative regret than the modified TS procedure. This shows that for distributions that do not admit a conjugate prior, WB (and NPB) can be used directly and results in good empirical performance compared to modifying the TS procedure.
5.2 Contextual bandit setting
We adopt the one-versus-all multiclass classification setting for evaluating contextual bandits (Agarwal et al., 2014; McNellis et al., 2017). Each arm corresponds to a class. In each round, the algorithm receives a reward of one if the context vector belongs to the class corresponding to the selected arm and zero otherwise. Each arm maintains an independent set of sufficient statistics that map the context vector to the observed binary reward. We use two multiclass datasets: CoverType ( and ) and MNIST ( and ). The number of rounds in experiments is and we average results over independent runs. We experiment with LinUCB (Abbasi-Yadkori et al., 2011), which we call UCB, linear Thompson sampling (TS) (Agrawal and Goyal, 2013b), ε-greedy (EG) (Langford and Zhang, 2008), nonparametric bootstrapping
the local problem
\begin{align}
\label{LSlocal} & \left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_k^{\varphi \varphi}} & {\mathbb{A}_k^{\varphi \mathbf{E}}} \\ [4pt]
{\mathbb{A}_k^{\mathbf{E} \varphi}} & {\mathbb{A}_k^{\mathbf{E} \mathbf{E}}} \\ [2pt]
\end{array}} \right]
\left[ {\begin{array}{*{20}{c}}
{\underline{\varphi}_k} \\ [4pt]
{\underline{\mathbf{E}}_k} \\ [2pt]
\end{array}} \right]
+
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_{k}^{\varphi \hat{\varphi}}} & \mathbb{O} \\ [4pt]
{\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}}} & {\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}^c}} \\ [2pt]
\end{array}} \right]
\left[ {\begin{array}{*{20}{c}}
{\underline{\hat \varphi}_k} \\ [4pt]
{{\hat \varphi}^c} \\ [2pt]
\end{array}} \right]
=
\left[ {\begin{array}{*{20}{c}}
{F_k^\varphi} \\ [4pt]
{F_k^{\mathbf{E}}} \\ [2pt]
\end{array}} \right]
\end{align}
where the local unknown vectors are defined as
\begin{equation}
\label{unklocal}
\underline{\varphi}_k =
\left[ {\begin{array}{*{20}{c}}
\begin{gathered}
{\varphi_k^1} \hfill \\
\vdots \hfill \\
{\varphi_k^{N_p}} \hfill \\
\end{gathered}
\end{array}} \right], \quad
\underline{\mathbf{E}}_k =
\left[ {\begin{array}{*{20}{c}}
\begin{gathered}
{\mathbf{E}_k^1} \hfill \\
\vdots \hfill \\
{\mathbf{E}_k^{N_p}} \hfill \\
\end{gathered}
\end{array}} \right]
\end{equation}
and $\underline{\hat{\varphi}}_k$ is a vector of dimension $N_{fe}N_{fp}\times1$, which stores the global unknowns (see \eqref{unkglobal} below) and is defined as
\begin{equation}
\label{unkmap}
\underline{\hat{\varphi}}_k =
\left[ {\begin{array}{*{20}{c}}
\begin{gathered}
{\underline{\hat{\varphi}}_{k,1}} \hfill \\
\vdots \hfill \\
{\underline{\hat{\varphi}}_{k,{N_{fe}}}} \hfill \\
\end{gathered}
\end{array}} \right], \quad
{\underline{\hat{\varphi}}_{k,{f_l}}} =
\left[ {\begin{array}{*{20}{c}}
\begin{gathered}
{\hat{\varphi}_{k,f_l}^1} \hfill \\
\vdots \hfill \\
{\hat{\varphi}_{k,f_l}^{N_{fp}}} \hfill \\
\end{gathered}
\end{array}} \right].
\end{equation}
In \eqref{LSlocal}, the right-hand-side vectors ${F_k^{\alpha}}$, $\alpha \in \{ \varphi, \mathbf{E} \}$, correspond to the right-hand sides of \eqref{weakfp0} and \eqref{weakfp1}, respectively. The matrices $\mathbb{A}_k^{\alpha \beta}$ ($\alpha \in \{ {\varphi, \mathbf{E}} \}$, $\beta \in \{ {\varphi, \mathbf{E}, \hat{\varphi}} \}$) correspond to the inner products in \eqref{weakfp0} and \eqref{weakfp1} and are the standard DG matrices, e.g., mass, stiffness, and lift matrices. For details, readers are referred to the authors' previous work~\cite{Chen2020float}. Note that $\mathbb{A}_k^{\alpha \beta}$ has dimensions $N_{\alpha} \times N_{\beta}$, where $N_{\beta}$ is the dimension of the input vector $\beta_k$ and $N_{\alpha}$ is the dimension of the output vector $\alpha_k$.
Similarly, Galerkin testing of \eqref{weakfp2} and \eqref{weakfp3} yields the following matrix system for the global problem
\begin{align}
\label{LSglobal} & \sum_{k=1}^K
\left\{
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_k^{\hat{\varphi} \varphi}} & {\mathbb{A}_k^{\hat{\varphi} \mathbf{E}}} \\ [4pt]
\mathbb{O} & {\mathbb{A}_{k}^{\hat{\varphi}^c \mathbf{E}}} \\ [2pt]
\end{array}} \right]
\left[ {\begin{array}{*{20}{c}}
{\underline{\varphi}_k} \\ [4pt]
{\underline{\mathbf{E}}_k} \\ [2pt]
\end{array}} \right]
+
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_k^{\hat{\varphi} \hat{\varphi}}} & {\mathbb{O}} \\ [4pt]
{\mathbb{O}} & {\mathbb{O}}\\ [2pt]
\end{array}} \right]
\left[ {\begin{array}{*{20}{c}}
{\underline{\hat \varphi}_k} \\ [4pt]
{{\hat \varphi}^c} \\ [2pt]
\end{array}} \right]
=
\left[ {\begin{array}{*{20}{c}}
{F_k^{\hat \varphi}} \\ [4pt]
Q^c \\ [2pt]
\end{array}} \right] \right\}
\end{align}
where the right-hand-side vector ${F_k^{\hat{\varphi}}}$ corresponds to the right-hand side of \eqref{weakfp2}, and the matrices $\mathbb{A}_k^{\hat{\varphi} \alpha}$ ($\alpha \in \{ {\varphi, \mathbf{E}, \hat{\varphi}} \}$) correspond to the inner products in \eqref{weakfp2}.
Solving \eqref{LSlocal} for $[\underline{\varphi}_k, \underline{\mathbf{E}}_k]^T$ in terms of $\underline{\hat{\varphi}}$ and $\hat{\varphi}^c$ yields
\begin{align}
\label{LSlocal_}
\left[ {\begin{array}{*{20}{c}}
{\underline{\varphi}_k} \\ [4pt]
{\underline{\mathbf{E}}_k} \\ [2pt]
\end{array}} \right]
=
\mathbb{A}_k^{-1}
\left[ {\begin{array}{*{20}{c}}
{F_k^\varphi} \\ [4pt]
{F_k^{\mathbf{E}}} \\ [2pt]
\end{array}} \right]
-
\mathbb{A}_k^{-1}
\bar{\mathbb{A}}_k
\left[ {\begin{array}{*{20}{c}}
{\underline{\hat \varphi}_k} \\ [4pt]
{{\hat \varphi}^c} \\ [2pt]
\end{array}} \right]
\end{align}
where
\begin{align}
\nonumber &
\mathbb{A}_k =
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_k^{\varphi \varphi}} & {\mathbb{A}_k^{\varphi \mathbf{E}}} \\ [4pt]
{\mathbb{A}_k^{\mathbf{E} \varphi}} & {\mathbb{A}_k^{\mathbf{E} \mathbf{E}}} \\ [2pt]
\end{array}} \right], \quad
\bar{\mathbb{A}}_k =
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_{k}^{\varphi \hat{\varphi}}} & \mathbb{O} \\ [4pt]
{\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}}} & {\mathbb{A}_{k}^{\mathbf{E} \hat{\varphi}^c}} \\ [2pt]
\end{array}} \right].
\end{align}
Inserting \eqref{LSlocal_} into \eqref{LSglobal} yields a global system involving only the global unknowns
\begin{align}
\label{LSglobal_}
& \mathbb{A}_{global}
\left[ {\begin{array}{*{20}{c}}
{\underline{\hat \varphi}} \\ [4pt]
{{\hat \varphi}^c} \\ [2pt]
\end{array}} \right]
=
\left[ {\begin{array}{*{20}{c}}
{F^{\hat \varphi}} \\ [4pt]
Q^c \\ [2pt]
\end{array}} \right]
-
\sum_{k=1}^K {
\tilde{\mathbb{A}}_k \mathbb{A}_k^{-1}
\left[ {\begin{array}{*{20}{c}}
{F_k^\varphi} \\ [4pt]
{F_k^{\mathbf{E}}} \\ [2pt]
\end{array}} \right]
}
\end{align}
where the global unknown vector is defined as
\begin{equation}
\label{unkglobal}
\underline{\hat{\varphi}} =
\left[ {\begin{array}{*{20}{c}}
\begin{gathered}
{\underline{\hat{\varphi}}_1} \hfill \\
\vdots \hfill \\
{\underline{\hat{\varphi}}_{N_f}} \hfill \\
\end{gathered}
\end{array}} \right], \quad
{\underline{\hat{\varphi}}_f} =
\left[ {\begin{array}{*{20}{c}}
\begin{gathered}
{\hat{\varphi}_f^1} \hfill \\
\vdots \hfill \\
{\hat{\varphi}_f^{N_{fp}}} \hfill \\
\end{gathered}
\end{array}} \right].
\end{equation}
and
\begin{align}
\label{Aglobal_}
\mathbb{A}_{global} = \sum_{k=1}^K \left\{
\hat{\mathbb{A}}_k -
{ \tilde{\mathbb{A}}_k \mathbb{A}_k^{-1} \bar{\mathbb{A}}_k }
\right\}, \quad
\tilde{\mathbb{A}}_k =
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_k^{\hat{\varphi} \varphi}} & {\mathbb{A}_k^{\hat{\varphi} \mathbf{E}}} \\ [4pt]
\mathbb{O} & {\mathbb{A}_{k}^{\hat{\varphi}^c \mathbf{E}}} \\ [2pt]
\end{array}} \right] = \bar{\mathbb{A}}_k^T, \quad
\hat{\mathbb{A}}_k =
\left[ {\begin{array}{*{20}{c}}
{\mathbb{A}_k^{\hat{\varphi} \hat{\varphi}}} & {\mathbb{O}} \\ [4pt]
{\mathbb{O}} & {\mathbb{O}}\\ [2pt]
\end{array}} \right].
\end{align}
In~\eqref{unkmap} and~\eqref{unkglobal}, $f_l \in \{1,...,N_{fe}\}$, $f \in \{1,...,N_f\}$, $\underline{\hat{\varphi}}_{k,{f_l}}$ contains the unknowns on local face $f_l$ of element $k$, and $\underline{\hat{\varphi}}_f$ contains the unknowns on face $f$ of $\Gamma$. Evidently, each local face $f_l$ of element $k$ can be mapped to a global face $f$ of $\Gamma$. Fig.~\ref{mesh} illustrates the mapping between the nodes of the local elements (blue dots) and the nodes of the skeleton (red circles). This mapping is included in~\eqref{Aglobal_} in the summation over $k$, i.e., each local face $f_l$ of the $N_{fe}$ faces of element $k$ is mapped to one face $f$ of $\Gamma$, and the matrix entries of any two local faces corresponding to the same $f$ are combined. The assembled matrix system from~\eqref{Aglobal_} approximately has dimensions $(N_f N_{fp}+1) \times (N_f N_{fp}+1)$. The actual size is smaller than $(N_f N_{fp}+1)$ since the nodes on $\partial \Omega$ are not included in the global problem [see \eqref{Globalfp0}]. The same mapping is done in the summation on the right-hand side of~\eqref{LSglobal_}. Note that the elemental matrix ${\mathbb{A}_k^{\hat{\varphi} \varphi}}$ has dimension $N_{fe} N_{fp} \times N_p$ and the resulting vector for each $k$ has the same dimension as $\underline{\hat{\varphi}}_{k}$.
The size of the global system \eqref{LSglobal_} [$\sim (N_f N_{fp}+1)$] is much smaller than that of the DG method ($\sim K N_p$, see~\cite{Chen2020float}). Once $\underline{\hat{\varphi}}$ and $\hat{\varphi}^c$ are solved from \eqref{LSglobal_}, they can be used to solve for $[\underline{\varphi}_k, \underline{\mathbf{E}}_k]^T$ in the local system \eqref{LSlocal_}. Since the local problems of different elements are independent of each other, they can be solved in parallel. As the dimension of \eqref{LSlocal_} is only $\sim N_p$, the computational cost of this step is relatively low and can be neglected, especially in large-scale problems~\cite{Cockburn2016static}.
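As an illustration of this two-level solve, the following toy NumPy script mimics the static-condensation pattern of \eqref{LSglobal_} and \eqref{Aglobal_} with random stand-in matrices. It is a sketch only: the face mapping, the $\hat{\varphi}^c$ block, and the physical DG matrices are omitted, and all elements share one global unknown vector; the matrix names follow \eqref{Aglobal_}.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_loc, n_glob = 4, 6, 5   # elements, local unknowns per element, global unknowns

# Random stand-ins: A_k (local, kept well conditioned), Ā_k (local-global
# coupling), Â_k (global block), and the right-hand sides F_k, F.
A_loc = [np.eye(n_loc) + 0.1 * rng.random((n_loc, n_loc)) for _ in range(K)]
A_bar = [rng.random((n_loc, n_glob)) for _ in range(K)]
A_hat = [rng.random((n_glob, n_glob)) for _ in range(K)]
F_loc = [rng.random(n_loc) for _ in range(K)]
F_glob = rng.random(n_glob)

# Condensed global system: A_global = sum_k (Â_k - Ã_k A_k^{-1} Ā_k), Ã_k = Ā_k^T
A_global = sum(A_hat[k] - A_bar[k].T @ np.linalg.solve(A_loc[k], A_bar[k])
               for k in range(K))
rhs = F_glob - sum(A_bar[k].T @ np.linalg.solve(A_loc[k], F_loc[k])
                   for k in range(K))
u_glob = np.linalg.solve(A_global, rhs)

# Local back-substitution: independent per element, hence trivially parallel
u_loc = [np.linalg.solve(A_loc[k], F_loc[k] - A_bar[k] @ u_glob) for k in range(K)]
```

The condensed solve has the size of the global unknowns only, while each back-substitution touches a single element's small local matrix.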
\section{Numerical Examples}
\subsection{Coaxial Capacitor with FPC}
The proposed method is first validated using a canonical problem with an analytical solution. The simulation domain is illustrated in Figure~\ref{Capacitor}~(a). A thin metal tube is inserted into a coaxial capacitor. The voltages applied on the inner and outer boundaries of the capacitor are $\varphi (\mathbf{r} = {r_0}) = {V_0}$ and $\varphi (\mathbf{r} = {r_1}) = {V_1}$, respectively. The metal tube is modeled as an FPC and the FPBC is applied on $\mathbf{r}={r_2}$ and $\mathbf{r}={r_3}$. The total charge on the FPC is $Q$. The analytical solution of the electric potential is given by
\begin{equation*}
\varphi_{Ana} (r) = \left\{ \begin{gathered}
{a_0} + {b_0}\ln (r),\;r \in [{r_0},{r_2}] \hfill \\
{a_1} + {b_1}\ln (r),\;r \in [{r_3},{r_1}] \hfill \\
\end{gathered} \right.
\end{equation*}
where ${a_0} = {V_0} - {b_0}\ln ({r_0})$, ${a_1} = {V_1} - {b_1}\ln ({r_1})$, ${b_0} = {b_1} + Q/(2\pi \varepsilon )$, ${b_1} = [{V_1} - {V_0} - {C_{20}}Q/(2\pi \varepsilon )]/({C_{20}} - {C_{31}})$, and ${C_{ij}} = \ln ({r_i}/{r_j})$. In the following, ${V_0} = 0$, ${V_1} = 10$ V, ${r_0} = 0.1$ cm, ${r_1} = 2$ cm, ${r_2} = 0.8$ cm, and ${r_3} = 1.2$ cm.
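The closed form above is easy to evaluate directly. The following sketch uses the stated constants with the radii converted to meters and, as an assumption (the text does not specify $\varepsilon$), takes $\varepsilon$ to be the vacuum permittivity; it reproduces the boundary values and the equipotential condition on the FPC.

```python
import math

eps = 8.8541878128e-12           # assumed: vacuum permittivity [F/m]
V0, V1 = 0.0, 10.0               # applied potentials [V]
r0, r1, r2, r3 = 0.001, 0.02, 0.008, 0.012   # radii [m] (0.1, 2, 0.8, 1.2 cm)
Q = 0.0                          # total charge on the FPC [C]

C20 = math.log(r2 / r0)
C31 = math.log(r3 / r1)
b1 = (V1 - V0 - C20 * Q / (2 * math.pi * eps)) / (C20 - C31)
b0 = b1 + Q / (2 * math.pi * eps)
a0 = V0 - b0 * math.log(r0)
a1 = V1 - b1 * math.log(r1)

def phi_ana(r):
    """Analytical potential: log profiles inside and outside the FPC."""
    if r0 <= r <= r2:
        return a0 + b0 * math.log(r)
    if r3 <= r <= r1:
        return a1 + b1 * math.log(r)
    raise ValueError("r lies inside the FPC or outside the capacitor")
```

By construction $\varphi(r_0)=V_0$, $\varphi(r_1)=V_1$, and $\varphi(r_2)=\varphi(r_3)$, i.e., the FPC is an equipotential.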
\begin{figure}[!ht]
\centering
\subfloat[\label{Capacitora}]{\includegraphics[height=0.32\columnwidth]{Capacitor.png}} \hspace{0.5cm}
\subfloat[\label{Capacitorb}]{\includegraphics[height=0.32\columnwidth]{CapacitorPhiLine3.png}} \\
\subfloat[\label{Capacitorc}]{\includegraphics[height=0.36\columnwidth]{CapacitorPhi_DoF1.png}}
\caption{(a) Schematic description of the coaxial capacitor model. (b) $\varphi$ computed by HDG and $\varphi_{Ana}$ on line $(x,y = 0)$ for different values of $Q$. (c) Illustration of the nodes where $\varphi$ and $\hat{\varphi}$ are defined.}
\label{Capacitor}
\end{figure}
Figure~\ref{Capacitor}~(b) compares the electric potential computed by HDG with $p=2$ to the analytical solution along the line $(x,y = 0)$ for $Q\in\{0, 5\times10^{10}e, 10^{10}e\}$, where $e$ is the electron charge. One can see that the numerical solution agrees very well with the analytical one. The absolute value of the difference between the FPC potentials computed using HDG and the analytical solution is $1.58\times10^{-7}$~V, $2.30\times10^{-8}$~V, and $1.45\times10^{-8}$~V for $Q=0$, $Q=5\times10^{9} e$, and $Q=10^{10} e$, respectively.
\begin{table}[!ht]
\scriptsize
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.5}
\centering
\caption{Dimension and condition number of the DG and HDG matrices, (wall) time and (peak) memory required by DG and HDG, and absolute error in FPC potential computed using DG and HDG for the coaxial capacitor example with zero total charge on the FPC\tnote{*}.}
\label{nunk}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{ p{52pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt}  p{36pt} }
\hline
& \multicolumn{2}{c}{$p=1$} & \multicolumn{2}{c}{$p=2$} & \multicolumn{2}{c}{$p=3$} & \multicolumn{2}{c}{$p=4$} & \multicolumn{2}{c}{$p=5$} \\ \hline
& DG & HDG & DG & HDG & DG & HDG & DG & HDG & DG & HDG \\ \hline
Dimension & 254,838 & 252,319 & 509,676 & 378,478 & 849,460 & 504,637 & 1,274,190 & 630,796 & 1,783,866 & 756,955 \\ \hline
Condition \# & 1.33$\times 10^8$ & 1.17$\times 10^8$ & 5.44$\times 10^8$ & 1.67$\times 10^8$ & 16.17$\times 10^8$ & 2.62$\times 10^8$ & 39.4$\times 10^8$ & 3.47$\times 10^8$ & 84.2$\times 10^8$ & 4.57$\times 10^8$ \\ \hline
Time (s) & 1.97 & 1.79 & 5.35 & 3.83 & 10.5 & 7.09 & 18.8 & 9.29 & 32.3 & 12.6 \\ \hline
Memory (GB)& $0.41$ & $0.38$ & $1.07$ & $0.93$ & $2.11$ & $1.66$ & $3.89$ & $2.42$ & $6.05$ & $3.21$ \\ \hline
$\mathrm{Error}$ (V) & $2.86\times10^{-4}$ & $2.82\times10^{-4}$ & $2.30\times10^{-7}$ & $2.26\times10^{-7}$ & $2.01\times10^{-7}$ & $1.99\times10^{-7}$ & $1.90\times10^{-7}$ & $1.85\times10^{-7}$ & $1.90\times10^{-7}$ & $1.83\times10^{-7}$ \\ \hline
\end{tabular}
\smallskip
\scriptsize
\begin{tablenotes}
\item[*] {The matrix systems are solved using UMFPACK (multifrontal sparse LU factorization) implemented by  
p^{\otimes n/2})$ for
an even $n>1$.
Let $\sigma:[0,1)\to\cP(\Omega)$, $x\mapsto p\vec{1}\{x<1/2\}+q\vec{1}\{x\geq1/2\}$
and $\tau:[0,1)\to\cP(\Omega)$, $x\mapsto \sigma(1-x)$.
Let $\nu=\frac12(\atom_\sigma+\atom_\tau)$.
Then
\begin{equation}\label{eqex2}
\cutm(\mu,\nu)=\Cutm(\mu,\nu)=O(n^{-1/2}).
\end{equation}
Indeed, to construct a coupling $\gamma$ of $\mu,\nu$ let $X,Y,Y'$ be three independent random variables such that $X={\rm Be}(1/2)$,
$Y\in\{0,1\}^n$ has distribution $p^{\otimes n/2}\otimes q^{\otimes n/2}$ and $Y'\in\{0,1\}^n$
has distribution $q^{\otimes n/2}\otimes p^{\otimes n/2}$.
Further, let
$G=(Y,\atom_\sigma)$ if $X=0$ and $G=(Y',\atom_\tau)$ otherwise
and let $\gamma$ be the law of $G$.
A similar application of Azuma's inequality as in the previous example yields (\ref{eqex2}).
\end{example}
\subsection{Alternative descriptions}
We recall that the (bipartite, decorated version of the) cut metric on the space $W_\Omega$ of measurable maps $[0,1)^2\to\cP(\Omega)$ can be defined as
\begin{align*}
\delta_{\Box}(f,g)&=\inf_{s,t\in S_{[0,1)}}\sup_{U,V\subset[0,1)}\TV{\int_{U\times V}f(x,y)-g(s(x),t(y))\,dx\,dy}\qquad
\mbox{(cf.~\cite{Janson,Lovasz,LovaszSzegedy}).}
\end{align*}
Let $\cW_\Omega$ be the space obtained from $W_\Omega$ by identifying $f,g\in W_\Omega$ such that $\delta_{\Box}(f,g)=0$.
Applying \cite[\Thm~7.1]{Janson} to our setting, we obtain
\begin{proposition}\label{Prop_homeomorphic}
There is a homeomorphism $\cM_\Omega\to\cW_\Omega$.
\end{proposition}
\begin{proof}
We recall that for any $\mu\in\cP(\step_\Omega)$ there exists a measurable $\varphi:[0,1)\to\step_\Omega$
such that $\mu=\varphi(\lambda)$, i.e., $\mu(A)=\lambda(\varphi^{-1}(A))$ for all measurable $A\subset\step_\Omega$.
Hence, recalling that $\varphi(x)\in L_1([0,1),\cP(\Omega))$,
$\mu$ yields a graphon $w_\mu:[0,1)^2\to\cP(\Omega)$, $(x,y)\mapsto(\varphi(x))(y)$.
Due to~\cite[\Thm~7.1]{Janson} the map $\bar\mu\in\cM_\Omega\mapsto w_\mu\in\cW_\Omega$ is a homeomorphism.
\end{proof}
\begin{corollary}
$\cM_\Omega$ is a compact Polish space.
\end{corollary}
\begin{proof}
This follows from \Prop~\ref{Prop_homeomorphic} and the fact that $\cW_\Omega$ has these properties~\cite[\Thm~9.23]{Lovasz}.
\end{proof}
Diaconis and Janson~\cite{DJ} pointed out the connection between $\cW_\Omega$ and the Aldous--Hoover representation of ``exchangeable arrays''
(see also Panchenko~\cite[Appendix A]{PanchenkoBook}).
To apply this observation to $\cM_\Omega$,
recall that $\Omega^{\NN\times\NN}$ is compact (by Tychonoff's theorem) and that
a sequence $(A(n))_n$ of $\Omega^{\NN\times\NN}$valued random variables converges to $A$ in distribution iff
$$\lim_{n\to\infty}\pr\brk{\forall i,j\leq k:A_{ij}(n)=a_{ij}}=\pr\brk{\forall i,j\leq k:A_{ij}=a_{ij}}
\qquad\mbox{for all $k$, $a_{ij}\in\Omega$}.$$
Now, for $\bar\mu\in\cM_\Omega$ define a random array $\vec A(\bar\mu)=(\vec A_{ij}(\bar\mu))\in\Omega^{\NN\times\NN}$ as follows.
Let $(\SIGMA_i)_{i\in\NN}$ be a sequence of independent samples from the distribution $\mu$,
independent of the sequence $(\vec x_i)_{i\in\NN}$ of independent uniform samples from $[0,1)$.
Finally, independently for all $i,j$ choose $\vec A_{ij}(\bar\mu)\in\Omega$ from the distribution $\SIGMA_i(\vec x_j)\in\cP(\Omega)$.
Then in our context the correspondence from~\cite[\Thm~8.4]{DJ} reads
\begin{corollary}\label{Cor_AldousHoover}
The sequence $(\bar\mu_n)_n$ converges to $\bar\mu\in\cM_\Omega$ iff $\vec A(\bar\mu_n)$ converges to $\vec A(\bar\mu)$ in distribution.
\end{corollary}
While \Cor~\ref{Cor_AldousHoover} characterizes convergence in $\cutm(\nix,\nix)$, the following statement applies to the strong metric $\Cutm(\nix,\nix)$.
For $\sigma\in\step_\Omega$ and $x_1,\ldots,x_k\in[0,1)$ define $\sigma_{\marg x_1,\ldots,x_k}=\sigma(x_1)\otimes\cdots\otimes\sigma(x_k)\in\cP(\Omega^k)$.
Moreover, for $\mu\in M_\Omega$ let
$$\mu_{\marg x_1,\ldots,x_k}=\int_{\step_\Omega}\sigma_{\marg x_1,\ldots,x_k}\,d\mu(\sigma).$$
If $\mu\in\cP(\Omega^n)$ is a discrete measure, then
$\hat\mu_{\marg x_1,\ldots,x_k}=\widehat{\mu_{\marg i_1,\ldots,i_k}}$ with $i_j=\lceil nx_j\rceil$.
As before, we let $(\vec x_i)_{i\geq1}$ be a sequence of independent uniform samples from $[0,1)$.
\begin{corollary}\label{Cor_sampling}
If $(\mu_n)_n\stacksign{$\Cutm$}\to\mu\in M_\Omega$, then
for any integer $k\geq1$ we have
$\lim_{n\to\infty}\Erw\TV{\mu_{n\marg \vec x_1,\ldots,\vec x_k}-\mu_{\marg \vec x_1,\ldots,\vec x_k}}=0.$
\end{corollary}
\begin{proof}
By \cite[\Thm~8.6]{Janson} we can turn $\mu,\mu_n$ into graphons $w,w_n:[0,1)^2\to\cP(\Omega)$ such that for all $n$
\begin{align*}
\mu&=\int_0^1 \atom_{w(\nix,y)}\,dy,\quad\mu_n=\int_0^1 \atom_{w_n(\nix,y)}\,dy\quad\mbox{and}\quad
\Cutm(\mu,\mu_n)=\sup_{U,V\subset[0,1)}\TV{\int_{U\times V}w(x,y)-w_n(x,y) \,dx\,dy}.
\end{align*}
Let $(\vec y_j)_{j\geq 1}$ be independent and uniform on $[0,1)$ and independent of $(\vec x_i)_{i\geq1}$.
By \cite[\Thm~10.7]{Lovasz}, we have $\lim_{n\to\infty}\Cutm(\mu_n,\mu)=0$ iff
\begin{equation}\label{eq_samp}
\lim_{r\to\infty}\limsup_{n\to\infty}\
\Erw\brk{\max_{I,J\subset[r]}\TV{\sum_{(i,j)\in I\times J}
w(\vec x_i,\vec y_j)-w_n(\vec x_i,\vec y_j)
}}=0.
\end{equation}
Hence, we are left to show that (\ref{eq_samp}) implies
\begin{equation}\label{eq_marg}
\forall k\geq1: \lim_{n\to\infty}\Erw\TV{\mu_{n\marg \vec x_1,\ldots,\vec x_k}-\mu_{\marg \vec x_1,\ldots,\vec x_k}} = 0.
\end{equation}
To this end, we note that by the strong law of large numbers uniformly for all $x_1,\ldots, x_k\in[0,1]$
and $n$,
\begin{align}\label{eq_LLN1}
\frac 1r \sum_{j=1}^r (w(x_1,\vec y_j),\ldots,w(x_k,\vec y_j))&\ \stacksign{$r\to\infty$}\to\ \mu_{\marg x_1,\ldots, x_k}&\mbox{in probability},\\
\frac 1r \sum_{j=1}^r (w_n(x_1,\vec y_j),\ldots,w_n(x_k,\vec y_j))&\ \stacksign{$r\to\infty$}\to\ \mu_{n\marg x_1,\ldots, x_k}&\mbox{in probability}.
\label{eq_LLN2}
\end{align}
Hence, if \eqref{eq_samp} holds, then (\ref{eq_marg}) follows from (\ref{eq_LLN1}) and (\ref{eq_LLN2}).
\end{proof}
\noindent
As an application of \Cor~\ref{Cor_sampling} we obtain
\begin{corollary}\label{Cor_factorise}
Assume that $(\mu_n)_n$ is a sequence such that $\mu_n\stacksign{$\Cutm$}\to\mu\in M_\Omega$.
The following statements are equivalent.
\begin{enumerate}[(i)]
\item There is $\sigma\in\Sigma_\Omega$ such that $\mu=\atom_\sigma$.
\item For any integer $k\geq2$ we have
\begin{equation}\label{eqFactorise}
\lim_{n\to\infty}\Erw\TV{\mu_{n\marg\vec x_1,\ldots,\vec x_k}-\mu_{n\marg\vec x_1}\otimes\cdots\otimes\mu_{n\marg\vec x_k}}=0.
\end{equation}
\item The condition (\ref{eqFactorise}) holds for $k=2$.
\end{enumerate}
\end{corollary}
\begin{proof}
The implication (i)$\Rightarrow$(ii) follows from \Cor~\ref{Cor_sampling} and the step from (ii) to (iii) is immediate.
Hence, assume that (iii) holds.
Then by \Cor~\ref{Cor_sampling} and the continuity of the $\otimes$operator,
\begin{align}\label{eqFactorise0}
\Erw\TV{\mu_{\marg\vec x_1,\vec x_2}-\mu_{\marg\vec x_1}\otimes\mu_{\marg\vec x_2}}&=
\lim_{n\to\infty}\Erw\TV{\mu_{n\marg\vec x_1,\vec x_2}-\mu_{n\marg\vec x_1}\otimes\mu_{n\marg\vec x_2}}=0.
\end{align}
Define $\tilde\sigma:[0,1)\to\cP(\Omega)$ by $x\mapsto\mu_{\marg x}$ and assume that $\mu\neq\atom_{\tilde\sigma}$.
Then $\Cutm(\mu,\atom_{\tilde\sigma})>0$ (by Fact~\ref{Fact_attained}), whence there exist $B\subset\step_\Omega$, $U\subset[0,1)$, $\omega\in\Omega$ such that
\begin{align}\label{eqFactorise1}
\int_B\brk{\int_U\sigma_x(\omega)-\tilde\sigma_x(\omega)\,dx}^2\,d\mu(\sigma)>0.
\end{align}
However, (\ref{eqFactorise0}) entails
\begin{align*}
\int_{\step_\Omega}\brk{\int_U\sigma_x(\omega)-\tilde\sigma_x(\omega)\,dx}^2\,d\mu(\sigma)
&=\int_{\step_\Omega}\int_U\int_U\sigma_x(\omega)\sigma_y(\omega)-\tilde\sigma_x(\omega)\tilde\sigma_y(\omega)\,dx\,dy\,d\mu(\sigma)\\
&=\Erw[\mu_{\marg\vec x_1,\vec x_2}-\mu_{\marg\vec x_1}\otimes\mu_{\marg\vec x_2}\mid\vec x_1,\vec x_2\in U]=0,
\end{align*}
in contradiction to (\ref{eqFactorise1}).
\end{proof}
\begin{remark}
Strictly speaking, the results from~\cite{DJ,Lovasz} are stated for graphons with values in $[0,1]$, i.e., $\cP(\Omega)$ for $|\Omega|=2$.
However, they extend to $|\Omega|>2$ directly.
For instance, the compactness proof~\cite[\Chap~9]{Lovasz} is by way of the regularity lemma, which we extend in \Sec~\ref{Sec_reg} explicitly.
Moreover, the sampling result for \Cor~\ref{Cor_sampling} follows from~\cite[\Chap~10]{Lovasz} by viewing $w:[0,1)^2\to\cP(\Omega)$
as a family $(w_\omega)_{\omega\in\Omega}$, $w_\omega:(x,y)\mapsto w_{x,y}(\omega)\in[0,1]$.
Finally, the proof of \Cor~\ref{Cor_AldousHoover} in~\cite{DJ}, which proceeds by counting homomorphisms,
extends to $\cP(\Omega)$-valued graphons~\cite[\Sec~17.1]{Lovasz}.
\end{remark}
\subsection{Algebraic properties}
The cut metric is compatible with basic algebraic operations on measures.
The following is immediate.
\begin{fact}
If $\mu_n\stacksign{$\Cutm$}\to\mu$, $\nu_n\stacksign{$\Cutm$}\to\nu$,
then $\alpha\mu_n+(1-\alpha)\nu_n\stacksign{$\Cutm$}\to\alpha\mu+(1-\alpha)\nu$ for any $\alpha\in(0,1)$.
\end{fact}
The construction of a ``product measure'' is slightly more interesting.
Let $\Omega,\Omega'$ be finite sets.
For $\sigma\in\step_\Omega,\tau\in\step_{\Omega'}$ we define $\sigma\times\tau\in\step_{\Omega\times\Omega'}$
by letting $\sigma\times\tau(x)=\sigma(x)\otimes\tau(x)$,
where $\sigma(x)\otimes\tau(x)\in\cP(\Omega\times\Omega')$ is the usual product measure of $\sigma(x),\tau(x)$.
Further, for $\mu\in M_\Omega,\nu\in M_{\Omega'}$ we define $\mu\times\nu\in M_{\Omega\times\Omega'}$ by
\begin{align*}
\mu\times\nu&=\int_{\step_{\Omega}\times\step_{\Omega'}}\atom_{\sigma\times\tau}\,d\mu\otimes\nu(\sigma,\tau).
\end{align*}
Clearly, $\mu\times\nu$ is quite different from the usual product measure $\mu\otimes\nu$.
However, for {\em discrete} measures we observe the following.
\begin{fact}
For $\mu\in\cP(\Omega^n)$ and $\nu\in\cP({\Omega'}^n)$ we have
$\hat\mu\times\hat\nu=\widehat{\mu\otimes\nu}$.
\end{fact}
\begin{proposition}
If $\mu_n\stacksign{$\Cutm$}\to\mu\in M_\Omega$, $\nu_n\stacksign{$\Cutm$}\to\nu\in M_{\Omega'}$,
then $\mu_n\times\nu_n\stacksign{$\Cutm$}\to\mu\times\nu$.
\end{proposition}
\begin{proof}
Let $\eps>0$ and choose $n_0$ large enough so that
$\Cutm(\mu_n,\mu)<\eps$ and $\Cutm(\nu_n,\nu)<\eps$ for all $n>n_0$.
By Fact~\ref{Fact_attained} there exist couplings
$\gamma_n,\gamma_n'$ of $\mu_n,\mu$ and $\nu_n,\nu$ such that (\ref{eqmetric1}) is attained.
Because $\TV{p\otimes p'-q\otimes q'}\leq\TV{p-q}+\TV{p'-q'}$ for any $p,q\in\cP(\Omega)$, $p',q'\in\cP(\Omega')$,
we obtain for any $U\subset[0,1)$, $B\subset M_\Omega$, $B'\subset M_{\Omega'}$
\begin{align*}
\TV{\int_{B\times B'}\int_U\sigma\times\sigma'(x)-\tau\times\tau'(x)\,dx\,d\gamma_n\otimes\gamma_n'(\sigma,\tau,\sigma',\tau')}<2\eps,
\end{align*}
as desired.
\end{proof}
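The total variation estimate used in this proof is obtained by inserting the mixed product $q\otimes p'$ and applying the triangle inequality:
\begin{align*}
\TV{p\otimes p'-q\otimes q'}&\leq\TV{(p-q)\otimes p'}+\TV{q\otimes(p'-q')}=\TV{p-q}+\TV{p'-q'},
\end{align*}
where the final equality holds because $p'$ and $q$ are probability measures.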
\subsection{Regularity}\label{Sec_reg}
For $\sigma\in\step_\Omega$ and $U\subset[0,1)$ measurable we write
$$\sigma[\omega\mid U]=\int_U\sigma_x(\omega)\,dx.$$
Moreover, for $\mu\in M_\Omega$ and a measurable $S\subset\step_\Omega$ with $\mu(S)>0$ we let $\mu[\nixS]\in M_\Omega$
be the conditional distribution.
Further, let $\vV=(V_1,\ldots,V_K)$ be a partition of $[0,1)$ into a finite number of pairwise disjoint measurable sets.
Similarly, let $\vS=(S_1,\ldots,S_L)$ be a partition of $\step_\Omega$ into pairwise disjoint measurable sets.
We write $\#\vV,\#\vS$ for the number $K,L$ of classes, respectively.
A measure $\mu\in M_\Omega$ is {\em $\eps$regular} with respect to $(\vV,\vS)$
if there exists $R\subset[\#\vV]\times[\#\vS]$ such that the following conditions hold.
\begin{description}
\item[REG1] $\lambda(V_i)>0$ and $\mu(S_j)>0$ for all $(i,j)\in R$.
\item[REG2] $\sum_{(i,j)\in R}\lambda(V_i)\mu(S_j)>1-\eps$.
\item[REG3] for all $(i,j)\in R$ and all $\sigma,\sigma'\in S_j$ we have
$\TV{\sigma[\nixV_i]-\sigma'[\nixV_i]}<\eps$.
\item[REG4] if $(i,j)\in R$, then for every $U\subset V_i$ with $\lambda(U)\geq\eps\lambda(V_i)$
and every $T\subset S_j$ with $\mu(T)\geq\eps\mu(S_j)$ we have
$$\TV{\bck{\SIGMA[\nixU]}_{\mu[\nixT]}-\bck{\SIGMA[\nixV_i]}_{\mu[\nixS_j]}}<\eps.$$
\end{description}
Thus, $R$ is a set of index pairs $(i,j)$ of ``good squares'' $V_i\times S_j$.
{\bf REG1} provides that every good square has positive measure and {\bf REG2} that the total probability mass of good squares is at least $1-\eps$.
Further, by {\bf REG3} the averages $\sigma[\nixV_i],\sigma'[\nixV_i]\in\cP(\Omega)$ over $V_i$ of any two $\sigma,\sigma'\in S_j$ are close.
Finally, and most importantly, {\bf REG4} requires that the average $\bck{\SIGMA[\nixU]}_{\mu[\nixT]}$ over a ``biggish'' subsquare $U\times T$
is close to the mean over the entire square $V_i\times S_j$.
A {\em refinement} of a partition $(\vV,\vS)$ is a partition $(\vV',\vS')$ such that
for every pair $(i',j')\in[\#\vV']\times[\#\vS']$ there is a pair $(i,j)\in[\#\vV]\times[\#\vS]$ such that $V'_{i'}\subset V_i$ and $S'_{j'}\subset S_j$.
\begin{theorem}\label{Thm_reg}
For any $\eps>0$ there exists $N=N(\eps,\Omega)$ such that for every $\mu\in M_\Omega$ the following is true.
Every partition $(\vV_0,\vS_0)$ with $\#\vV_0+\#\vS_0\leq1/\eps$ has a refinement
$(\vV,\vS)$ such that $\#\vV+\#\vS\leq N$
with respect to which $\mu$ is $\eps$regular.
\end{theorem}
In light of \Prop~\ref{Prop_homeomorphic}, \Thm~\ref{Thm_reg} would follow from the regularity lemma for
graphons~\cite[\Lem~9.16]{Lovasz} if we were to drop condition {\bf REG3}.
In fact, adapting the standard proof from~\cite{Szemeredi} to accommodate {\bf REG3} is not difficult.
For the sake of completeness we carry this out in detail in \Sec~\ref{Sec_Kathrin}.
A regularity lemma for measures on $\Omega^n$ was proved in~\cite{Victor}.
But even in the discrete case \Thm~\ref{Thm_reg} gives a stronger result.
The improvement is that {\bf REG4} above holds for all ``small subsquares'' $U\times T$ simultaneously.
How does the concept of regularity connect with the cut metric?
For a partition $\vV$ of $[0,1)$ and $\sigma\in\step_\Omega$ define
$\sigma[\nix\vV]\in\cW_\Omega$ by
$$\sigma_x[\omega\mid\vV]=\sum_{i\in[\#\vV]}\vec{1}\{x\in V_i\}\sigma[\omega\mid V_i].$$
Thus, $\sigma[\nix\vV]:[0,1)\to\cP(\Omega)$ is constant on the classes of $\vV$.
Further, for a pair $(\vV,\vS)$ of partitions and $\mu\in M_\Omega$ let
$$\mu[\nix\vV,\vS]=\sum_{i\in[\#\vS]}\atom_{\int_{S_i}\sigma[\nix\vV]d\mu(\sigma)}.$$
Hence, $\mu[\nix\vV,\vS]\in M_\Omega$ is supported on a discrete set of functions $[0,1)\to\cP(\Omega)$
that are constant on the classes of $\vV$.
We might think of $\mu[\nix\vV,\vS]$ as the ``conditional expectation'' of $\mu$ with respect to $(\vV,\vS)$.
\begin{proposition}\label{Prop_reg2metric}
Let $\eps>0$ and assume that $\mu$ is $\eps$regular w.r.t.\ $(\vV,\vS)$.
Then
$\Cutm(\mu,\mu[\nix\vV,\vS])<2\eps$.
\end{proposition}
\begin{proof}
Let $\sigma^{(i)}=\int_{S_i}\sigma[\nix\vV]d\mu(\sigma)$.
We define a coupling $\gamma$ of $\mu,\mu[\nix\vV,\vS]$ in the obvious way: for a measurable $X\subset S_i$ let
$\gamma(X\times\{\sigma^{(i)}\})=\mu(X)$.
Now, let $U\subset[0,1]$ and $B\subset\step_\Omega^2$ be measurable.
Due to the construction of our coupling we may assume that $B=\bigcup_i B_i\times\{\sigma^{(i)}\}$ for certain sets $B_i\subset S_i$.
Moreover, let $U_j=U\cap V_j$.
Then
\begin{align*}
\TV{\int_B\int_U\sigma(x)-\tau(x)\,dx\,d\gamma(\sigma,\tau)}&\leq
\sum_{(i,j):\mu(S_i)\lambda(V_j)>0}\mu(S_i)\lambda(V_j)\TV{\int_{B_i}\int_{U_j}\sigma(x)-\sigma^{(i)}(x)\,\frac{dx}{\lambda(V_j)}\frac{d\mu(\sigma)}{\mu(S_i)}}.
\end{align*}
By {\bf REG1} and {\bf REG4} the last expression is less than $2\eps$.
\end{proof}
\begin{corollary}
For any $\eps>0$ there exists $N=N(\eps)>0$ such that for any $\mu\in M_\Omega$ there exist $\sigma_1,\ldots,\sigma_N\in\step_\Omega$
and $w=(w_1,\ldots,w_N)\in\cP([N])$ such that
$\Cutm\bc{\mu,\sum_{i=1}^Nw_i\atom_{\sigma_i}}<\eps.$
\end{corollary}
\begin{proof}
This is immediate from \Thm~\ref{Thm_reg} and \Prop~\ref{Prop_reg2metric}.
\end{proof}
\subsection{Proof of \Thm~\ref{Thm_reg}}\label{Sec_Kathrin}
Following the path beaten in~\cite{Victor,Szemeredi,Tao}, we define the index of $(\vV,\vS)$ as
\begin{align*}
\ind_\mu(\vV,\vS)&=\Erw\bck{\Var[\SIGMA_{\vec x}[\omega]\mid\vV,\vS]}_\mu
=\frac1{|\Omega|}\sum_{\omega\in\Omega}\sum_{i=1}^{\#\vV}\sum_{j=1}^{\#\vS}
\int_{S_j}\int_{V_i}\bc{\sigma_x(\omega)-\int_{S_j}\int_{V_i}\tau_y(\omega)\frac{dy}{\lambda(V_i)}\frac{d\mu(\tau)}{\mu(S_j)}}^2dx\,d\mu(\sigma).
\end{align*}
\noindent
There is only one simple step that we add to the proof from~\cite{Szemeredi}.
Namely, following~\cite{Victor}, we begin by refining the partition  
# SonicWALL Aventail E-Class SRA EX-Series Installation and
Aventail E-Class SRA 10.7
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your system.
CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are
not followed.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Trademarks: Dell™, the DELL logo, SonicWALL™, Aventail™, Reassembly-Free Deep Packet
Inspection™, Dynamic Security for the Global Network™, SonicWALL Aventail Advanced End
Point Control™ (EPC™), SonicWALL Aventail Advanced Reporting™, SonicWALL Aventail
Connect Mobile™, SonicWALL Aventail Connect™, SonicWALL Aventail Native Access
Modules™, SonicWALL Aventail Policy Zones™, SonicWALL Aventail Smart Access™,
SonicWALL Aventail Unified Policy™, SonicWALL Aventail™ Advanced EPC™, SonicWALL
Clean VPN™, SonicWALL Clean Wireless™, SonicWALL Global Response Intelligent Defense
(GRID) Network™, SonicWALL Mobile Connect™, and all other SonicWALL product and
service names and slogans are trademarks of Dell Inc.
2014 – 02
P/N 23200183000
Aventail E-Class SRA 10.7 Administrator Guide
Rev. B
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Features of Your E-Class SRA Appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
E-Class SRA Appliance Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Administrator Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
User Access Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
ADA 508 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
What's New in This Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Client Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Server Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
About the Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Document Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Chapter 2. Installation and Initial Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Preparing for the Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Gathering Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Verifying Your Firewall Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Helpful Management Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Installation and Deployment Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Specifications and Rack Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Front Panel Controls and Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Connecting the Appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Powering Up and Configuring Basic Network Settings . . . . . . . . . . . . . . . . . . . . 46
Web-Based Configuration Using Setup Wizard. . . . . . . . . . . . . . . . . . . . . . . . . . 48
Configuring the Appliance Using the Management Console . . . . . . . . . . . . . . . . 49
Moving the Appliance into Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Powering Down and Restarting the Appliance . . . . . . . . . . . . . . . . . . . . . . .  
a worst-case deadline that is unknown to the scheduler .…
## Preprints as accelerator of scholarly communication: An empirical analysis in Mathematics
In this study we analyse the key driving factors of preprints in enhancing scholarly communication . Articles with preprint versions are more likely to be mentioned in social media and have shorter Altmetric attention delay . We could observe the “early-view” and “open-access” effects of preprints .…
## Dendritic trafficking: synaptic scaling and structural plasticity
Neuronal circuits internally regulate electrical signaling via a host of homeostatic mechanisms . Two prominent mechanisms, synaptic scaling and structural plasticity, are believed to maintain average activity within an operating range . However, both mechanisms operate on relatively slow timescales and thus face fundamental limits due to delays .…
## The Reads-From Equivalence for the TSO and PSO Memory Models
The verification of concurrent programs remains an open challenge due to the nondeterminism in inter-process communication . The reads-from (RF) equivalence was recently shown to be coarser than the Mazurkiewicz equivalence, leading to impressive scalability improvements for SMC under SC . For TSO and PSO, the standard equivalence has been Shasha-Snir traces .…
## Inverse problems for semiconductors: models and methods
We consider the problem of identifying discontinuous doping profiles in semiconductor devices from data obtained by different models connected to the voltage-current map . Stationary as well as transient settings are discussed . Numerical implementations for the so-called stationary unipolar and stationary bipolar cases show the effectiveness of a level set approach to tackle the inverse problem .…
## Stochastic Sparse Adversarial Attacks
Stochastic sparse adversarial attacks (SSAA) are simple, fast and purely noise-based targeted and untargeted attacks of neural network classifiers (NNC) . SSAA offer new examples of sparse (or $L_0$) attacks for which only few methods have been proposed previously . These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes .…
## Energy-Based Models for Continual Learning
Energy-Based Models (EBMs) have a natural way to support a dynamically growing number of tasks or classes that causes less interference with previously learned information . EBMs outperform the baseline methods by a large margin on several continual learning benchmarks . We also show that EBMs are adaptable to a more general continual learning setting where the data distribution changes without the notion of explicitly delineated tasks .…
When using heterogeneous hardware, barriers of technical skills such as OpenMP, CUDA and OpenCL are high . I have proposed environment-adaptive software that enables automatic conversion and configuration . However, there has been no research to properly and automatically offload to mixed offloading destination environments such as GPU, FPGA and many-core CPU .…
## Hyper-parameter estimation method with particle swarm optimization
The particle swarm optimization (PSO) method cannot be directly used in the problem of hyper-parameter estimation . The Bayesian optimization (BO) framework is capable of converting the optimization of the objective function into the optimization of an acquisition function . The proposed method in this paper uses the particle swarm method to optimize the acquisition function in the BO framework .…
## SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration
A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with SO(2) equivariant representation . A Neural Feature Extractor leverages the powerful point-based and 3D cylindrical convolutional neural layers to derive a compact and representative descriptor for matching .…
## A DC Autotransformer-based Multilevel Inverter for Automotive Applications
This paper proposes a novel multilevel inverter for automotive applications . The topology consists of a modular DC-DC converter and a tap selector . The DC-DC converter is capable of self-balancing its modules and thus does not require large capacitors, which yields a high power density .…
## General Purpose Atomic Crosschain Transactions
The General Purpose Atomic Crosschain Transaction protocol allows composable programming across multiple Ethereum blockchains . It allows for inter-contract and inter-blockchain function calls that are both synchronous and atomic . If one part fails, the whole call graph of function calls is rolled back .…
## Rotational Error Metrics for Quadrotor Control
We analyze and experimentally compare various rotational error metrics for use in quadrotor controllers . We provide a catalog of proposed rotational metrics, place them into the same framework . We show experimental results to highlight the salient differences between the rotational errors .…
## New method of verifying cryptographic protocols based on the process model
A cryptographic protocol (CP) is a distributed algorithm designed to provide a secure communication in an insecure environment . Errors in the CPs can lead to great financial and social damage, therefore it is necessary to use mathematical methods to justify the correctness and safety of the CP .…
## Solving Two-Dimensional H(curl)-elliptic Interface Systems with Optimal Convergence on Unfitted Meshes
In this article, we develop and analyze a finite element method with the first family N\'ed\'elec elements of the lowest degree for solving a Maxwell interface problem . We establish a few important properties for the IFE functions including the unisolvence according to the edge degrees of freedom, the exact sequence relating to the $H^1$ IFE functions and the optimal approximation capabilities .…
## Fuzzy Stochastic Timed Petri Nets for Causal properties representation
Imagery is frequently used to model, represent and communicate knowledge . Causality is defined in terms of precedence (the cause precedes the effect), concurrency (often, an effect is provoked simultaneously by two or more causes) and circularity (a cause provokes the effect and the effect reinforces the cause) . We will introduce Fuzzy Stochastic Timed Petri Nets as a graphical tool able to represent time, co-occurrence, looping and imprecision in causal flow .…
## Tight Integrated End-to-End Training for Cascaded Speech Translation
A cascaded speech translation model relies on discrete and non-differentiable transcription . Such modeling suffers from error propagation between ASR and MT models . Our experiments on four tasks with different data scenarios show that the model outperforms cascade models by up to 1.8% in BLEU and 2.0% in TER .…
## GLGE: A New General Language Generation Evaluation Benchmark
Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress of pre-training and transfer learning in Natural Language Processing (NLP) . These benchmarks mostly focus on a range of Natural Language Understanding (NLU) tasks, without considering the Natural Language Generation (NLG) models .…
## Generate Asset Condition Data for Power System Reliability Studies
The paper explores an unconventional method generating numerical and non-numerical asset condition data based on condition degradation, condition correlation and categorical distribution models . Empirical knowledge from human experts can also be incorporated in the modeling process . The method can be used to conveniently generate hypothetical data for research purposes .…
## PeleNet: A Reservoir Computing Framework for Loihi
The PeleNet framework aims to simplify reservoir computing for neuromorphic hardware . It is built on top of the NxSDK from Intel and is written in Python . The framework manages weight matrices, parameters and probes . With this, the user is not confronted with technical details and can concentrate on experiments .…
## Effective Parallelism for Equation and Jacobian Evaluation in Power Flow Calculation
This letter investigates parallelism approaches for equations and Jacobian evaluation in power flow calculations . Case studies on the 70,000-bus synthetic grid show that equation evaluations can be accelerated by ten times, and the overall Newton power flow outperforms MATPOWER by 20% .…
## InstaHide's Sample Complexity When Mixing Two Private Images
Inspired by the InstaHide challenge [Huang, Song, Li and Arora'20], [Chen, Song and Zhuo'20] recently provides one mathematical formulation of the InstaHide attack problem under the Gaussian images distribution . They show that it suffices to use $O(n_{\mathsf{priv}}^{k_{\mathsf{priv}}-2/(k_{\mathsf{priv}}+1)})+\mathrm{poly}(n_{\mathsf{priv}})$ samples to recover one private image in $\mathrm{poly}(n_{\mathsf{priv}})$ time for any integer $k_{\mathsf{priv}}$ .…
## Provably Robust Runtime Monitoring of Neuron Activation Patterns
For safety-critical autonomous driving tasks, it is desirable to monitor in operation time whether the input for the DNN is similar to the data used in DNN training . The algorithm performs a sound worst-case estimate of neuron values with inputs (or features) subject to perturbation, before the abstraction function is applied to build the monitor .…
## On the Serverless Nature of Blockchains and Smart Contracts
Serverless architecture is more  
# Shirshendu Ganguly
I am an Associate Professor in the Department of Statistics at UC Berkeley.
401 Evans Hall
UC Berkeley
Berkeley, CA 94720
sganguly@berkeley.edu.
## Research
I am broadly interested in probability theory and its applications. Recently I have been working on problems in Disordered metric geometries with focus on geometry of geodesics in percolation models, Scaling limits and Phase transitions in statistical mechanics, Large deviations and counting problems in sparse nonlinear settings, Mixing time of Markov Chains, Random walk on graphs and Random Matrix theory.
## Education and past employment.
• UC Berkeley. Assistant Professor. 2018-2021.
• UC Berkeley. Miller Postdoctoral Fellow. Statistics and Mathematics. 2016-2018.
• University of Washington. PhD in Mathematics. 2011-2016.
## Teaching
• Stat 155. Game theory. Fall 2018.
• Stat C205A/ Math C218A. Probability theory. Fall 2019.
• Stat C205A/ Math C218A. Probability theory. Fall 2020.
• Stat C205A/ Math C218A. Probability theory (With Prof. Steve Evans). Fall 2021.
## Recent Works
### Stability, Noise sensitivity and Chaos in dynamical last passage percolation models.
• (with Alan Hammond). Arxiv
Many complex statistical mechanical models have intricate energy landscapes. The ground state, or lowest energy state, lies at the base of the deepest valley. In examples such as spin glasses and Gaussian polymers, there are many valleys; the abundance of near-ground states (at the base of valleys) indicates the phenomenon of chaos, under which the ground state alters profoundly when the model's disorder is slightly perturbed. In this article, we compute the critical exponent that governs the onset of chaos in a dynamic manifestation of a canonical model in the Kardar-Parisi-Zhang [KPZ] universality class, Brownian last passage percolation [LPP]. In this model in its static form, semidiscrete polymers advance through Brownian noise, their energy given by the integral of the white noise encountered along their journey. A ground state is a geodesic, of extremal energy given its endpoints. We perturb Brownian LPP by evolving the disorder under an Ornstein-Uhlenbeck flow. We prove that, for polymers of length n, a sharp phase transition marking the onset of chaos is witnessed at the critical time $$n^{-1/3}$$. Indeed, the overlap between the geodesics at times zero and $$t>0$$ that travel a given distance of order n will be shown to be of order n when $$t\ll n^{-1/3}$$; and to be of smaller order when $$t\gg n^{-1/3}$$. We expect this exponent to be shared among many interface models. The present work thus sheds light on the dynamical aspect of the KPZ class; it builds on several recent advances. These include Chatterjee's harmonic analytic theory [Cha14] of equivalence of superconcentration and chaos in Gaussian spaces; a refined understanding of the static landscape geometry of Brownian LPP developed in the companion paper [GH20]; and, underlying the latter, strong comparison estimates of the geodesic energy profile to Brownian motion in [CHH19].
• (with Alan Hammond). Arxiv
The energy and geometry of maximizing paths in integrable last passage percolation models are governed by the characteristic KPZ scaling exponents of one-third and two-thirds. When represented in scaled coordinates that respect these exponents, this random field of paths may be viewed as a complex energy landscape. We investigate the structure of valleys and connecting pathways in this landscape. The routed weight profile $$\mathbb{R}\to \mathbb{R}$$ associates to $$x\in \mathbb{R}$$ the maximum scaled energy obtainable by a path whose scaled journey from $$(0,0)$$ to $$(0,1)$$ passes through the point $$(x,1/2)$$. Developing tools of Brownian Gibbs analysis from [Ham16] and [CHH19], we prove an assertion of strong similarity of this profile for Brownian last passage percolation to Brownian motion of rate two on the unit-order scale. A sharp estimate on the rarity that two macroscopically different routes in the energy landscape offer energies close to the global maximum results. We prove robust assertions concerning modulus of continuity for the energy and geometry of scaled maximizing paths, that develop the results and approach of [HS20], delivering estimates valid on all scales above the microscopic. The geometry of excursions of near ground states about the maximizing path is investigated: indeed, we estimate the energetic shortfall of scaled paths forced to closely mimic the geometry of the maximizing route while remaining disjoint from it. We also provide bounds on the approximate gradient of the maximizing path, viewed as a function, ruling out sharp steep movement down to the microscopic scale. Our results find application in a companion study [GH20a] of the stability, and fragility, of last passage percolation under a dynamical perturbation.
### Fractal Geometry of Airy Sheet
• (with Milind Hegde). Arxiv
There has recently been much activity within the Kardar-Parisi-Zhang universality class spurred by the construction of the canonical limiting object, the parabolic Airy sheet $$\mathcal{S}:\mathbb{R}^2\to\mathbb{R}$$ \cite{dauvergne2018directed}. The parabolic Airy sheet provides a coupling of parabolic Airy$$_2$$ processes (a universal limiting geodesic weight profile in planar last passage percolation models), and a natural goal is to understand this coupling. Geodesic geometry suggests that the difference of two parabolic Airy$$_2$$ processes, i.e., a difference profile, encodes important structural information. This difference profile $$\mathcal{D}$$, given by $$\mathbb{R}\to\mathbb{R}:x\mapsto \mathcal{S}(1,x)-\mathcal{S}(-1,x)$$, was first studied by Basu, Ganguly, and Hammond \cite{basu2019fractal}, who showed that it is monotone and almost everywhere constant, with its points of non-constancy forming a set of Hausdorff dimension $$1/2$$. Noticing that this is also the Hausdorff dimension of the zero set of Brownian motion leads to the question: is there a connection between $$\mathcal{D}$$ and Brownian local time? Establishing that there is indeed a connection, we prove two results. On a global scale, we show that $$\mathcal{D}$$ can be written as a \emph{Brownian local time patchwork quilt}, i.e., as a concatenation of random restrictions of functions which are each absolutely continuous to Brownian local time (of rate four) away from the origin. On a local scale, we explicitly obtain Brownian local time of rate four as a local limit of $$\mathcal{D}$$ at a point of increase, picked by a number of methods, including at a typical point sampled according to the distribution function $$\mathcal{D}$$. Our arguments rely on the representation of $$\mathcal{S}$$ in terms of a last passage problem through the parabolic Airy line ensemble and an understanding of geodesic geometry at deterministic and random times.
• (with Riddhipratim Basu, Alan Hammond) To appear in Annals of Probability. Arxiv
In last passage percolation models lying in the Kardar-Parisi-Zhang universality class, maximizing paths that travel over distances of order n accrue energy that fluctuates on scale $$n^{1/3}$$; and these paths deviate from the linear interpolation of their endpoints on scale $$n^{2/3}$$. These maximizing paths and their energies may be viewed via a coordinate system that respects these scalings. What emerges by doing so is a system indexed by $$x,y \in \mathbb{R}$$ and $$s,t \in \mathbb{R}$$ with $$s < t$$ of unit order quantities $$W_n(x,s;y,t)$$ specifying the scaled energy of the maximizing path that moves in scaled coordinates between $$(x,s)$$ and $$(y,t)$$. The space-time Airy sheet is, after a parabolic adjustment, the putative distributional limit $$W_{\infty}$$ of this system as $$n\to \infty$$. The Airy sheet has recently been constructed in [15] as such a limit of Brownian last passage percolation. In this article, we initiate the study of fractal geometry in the Airy sheet. We prove that the scaled energy difference profile given by $$\mathbb{R} \to \mathbb{R} :z \to W_{\infty}(1,0;z,1)-W_{\infty}(-1,0;z,1)$$ is a non-decreasing process that is constant in a random neighbourhood of almost every $$z \in \mathbb{R}$$; and that the exceptional set of $$z \in \mathbb{R}$$ that violate this condition almost surely has Hausdorff dimension one-half. Points of violation correspond to special behaviour for scaled maximizing paths, and we prove the result by investigating this behaviour, making use of two inputs from recent studies of scaled Brownian LPP; namely, Brownian regularity of profiles, and estimates on the rarity of pairs of disjoint scaled maximizing paths that begin and end close to each other.
• (with Erik Bates, Alan Hammond) Arxiv
Within the  
"""
A library for cycle triplet extraction and cycle error computation.
Checks the cumulative rotation errors between triplets to throw away cameras.
Note: the same property does not hold for cumulative translation errors when scale is unknown (i.e. in SfM).
Author: John Lambert
"""
import os
from collections import defaultdict
from typing import DefaultDict, Dict, List, Optional, Set, Tuple
import matplotlib.pyplot as plt
import numpy as np
from gtsam import Rot3, Unit3
import gtsfm.utils.geometry_comparisons as comp_utils
import gtsfm.utils.logger as logger_utils
import gtsfm.utils.metrics as metrics_utils
from gtsfm.evaluation.metrics import GtsfmMetric, GtsfmMetricsGroup
from gtsfm.two_view_estimator import TwoViewEstimationReport
logger = logger_utils.get_logger()
CYCLE_ERROR_THRESHOLD = 5.0
MAX_INLIER_MEASUREMENT_ERROR_DEG = 5.0
def extract_triplets(i2Ri1_dict: Dict[Tuple[int, int], Rot3]) -> List[Tuple[int, int, int]]:
    """Discover triplets from a graph, without O(n^3) complexity, by using intersection within adjacency lists.

    Based off of Theia's implementation:
    https://github.com/sweeneychris/TheiaSfM/blob/master/src/theia/math/graph/triplet_extractor.h

    If we have an edge a<->b, if we can find any node c such that a<->c and b<->c, then we have
    discovered a triplet. In other words, we need only look at the intersection between the nodes
    connected to `a` and the nodes connected to `b`.

    Args:
        i2Ri1_dict: mapping from image pair indices to relative rotation.

    Returns:
        triplets: 3-tuples of nodes that form a cycle. Nodes of each triplet are provided in sorted order.
    """
    adj_list = create_adjacency_list(i2Ri1_dict)

    # only want to keep the unique ones
    triplets = set()

    # find intersections
    for (i1, i2), i2Ri1 in i2Ri1_dict.items():
        if i2Ri1 is None:
            continue

        if i1 >= i2:
            raise RuntimeError("Graph edges (i1,i2) must be ordered with i1 < i2 in the image loader.")

        nodes_from_i1 = adj_list[i1]
        nodes_from_i2 = adj_list[i2]
        node_intersection = nodes_from_i1.intersection(nodes_from_i2)
        for node in node_intersection:
            cycle_nodes = tuple(sorted([i1, i2, node]))
            if cycle_nodes not in triplets:
                triplets.add(cycle_nodes)

    return list(triplets)
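As a quick illustration of the adjacency-intersection idea used above, here is a minimal self-contained sketch on a hypothetical toy graph (plain integer nodes and placeholder edge values standing in for `Rot3` objects):

```python
from collections import defaultdict

# Hypothetical toy rotation graph: only the edge keys matter here;
# the string values stand in for relative rotations.
toy_edges = {(0, 1): "R01", (1, 2): "R12", (0, 2): "R02", (2, 3): "R23"}

adj = defaultdict(set)
for i1, i2 in toy_edges:
    adj[i1].add(i2)
    adj[i2].add(i1)

toy_triplets = set()
for i1, i2 in toy_edges:
    # any node adjacent to both endpoints of the edge closes a 3-cycle
    for k in adj[i1] & adj[i2]:
        toy_triplets.add(tuple(sorted((i1, i2, k))))

print(sorted(toy_triplets))  # [(0, 1, 2)]
```

Only the triangle {0, 1, 2} is reported; the dangling edge (2, 3) closes no cycle, mirroring how `extract_triplets` visits each edge once and intersects its endpoints' neighbor sets.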
def create_adjacency_list(i2Ri1_dict: Dict[Tuple[int, int], Rot3]) -> DefaultDict[int, Set[int]]:
    """Create an adjacency-list representation of a **rotation** graph G=(V,E) when provided its edges E.

    Note: this is specific to the rotation averaging use case, where some edges may be unestimated
    (i.e. their relative rotation is None), in which case they are not incorporated into the graph.

    In an adjacency list, the neighbors of each vertex may be listed efficiently, in time proportional to the
    degree of the vertex. In an adjacency matrix, this operation takes time proportional to the number of
    vertices in the graph, which may be significantly higher than the degree.

    Args:
        i2Ri1_dict: mapping from image pair indices to relative rotation.

    Returns:
        adj_list: adjacency list representation of the graph, mapping an image index to its neighbors.
    """
    adj_list = defaultdict(set)

    for (i1, i2), i2Ri1 in i2Ri1_dict.items():
        if i2Ri1 is None:
            continue

        adj_list[i1].add(i2)
        adj_list[i2].add(i1)

    return adj_list
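The cycle check below composes the three relative rotations around a triplet and measures the angular deviation from identity. That core computation can be sketched with numpy alone (hypothetical z-axis rotations; no gtsam dependency, and `rot_z`/`cycle_error_deg` are illustrative names, not part of gtsfm):

```python
import numpy as np

def rot_z(deg: float) -> np.ndarray:
    """Rotation matrix about the z-axis by `deg` degrees."""
    t = np.deg2rad(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def cycle_error_deg(i1Ri0: np.ndarray, i2Ri1: np.ndarray, i0Ri2: np.ndarray) -> float:
    """Magnitude (degrees) of the axis-angle rotation of the composed loop i0 -> i1 -> i2 -> i0."""
    i0Ri0 = i0Ri2 @ i2Ri1 @ i1Ri0
    # angle = arccos((trace(R) - 1) / 2), clipped for numerical safety
    cos_angle = np.clip((np.trace(i0Ri0) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.rad2deg(np.arccos(cos_angle)))

# Consistent measurements: 10 deg + 20 deg followed by the inverse of 30 deg -> near-zero error.
err = cycle_error_deg(rot_z(10.0), rot_z(20.0), rot_z(30.0).T)
print(err)
```

With ideal measurements the loop composes to the identity, so the reported error is numerically zero; an inconsistent edge inflates it, which is what `CYCLE_ERROR_THRESHOLD` filters on.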
def compute_cycle_error(
    i2Ri1_dict: Dict[Tuple[int, int], Rot3],
    cycle_nodes: Tuple[int, int, int],
    two_view_reports_dict: Dict[Tuple[int, int], TwoViewEstimationReport],
    verbose: bool = True,
) -> Tuple[float, Optional[float], Optional[float]]:
    """Compute the cycle error by the magnitude of the axis-angle rotation after composing 3 rotations.

    Note: a < b for every valid edge (a,b), by construction inside the image loader class.

    Args:
        i2Ri1_dict: mapping from image pair indices to relative rotation.
        cycle_nodes: 3-tuple of nodes that form a cycle. Nodes are provided in sorted order.
        two_view_reports_dict: mapping from image pair indices (i1,i2) to a report containing information
            about the verifier's output (and optionally measurement error w.r.t. GT). Note: i1 < i2 always.
        verbose: whether to dump to logger information about error in each Euler angle.

    Returns:
        cycle_error: deviation from 3x3 identity matrix, in degrees. In other words,
            it is defined as the magnitude of the axis-angle rotation of the composed transformations.
        max_rot_error: maximum rotation error w.r.t. GT across triplet edges, in degrees.
            If ground truth is not known for a scene, None will be returned instead.
        max_trans_error: maximum translation error w.r.t. GT across triplet edges, in degrees.
            If ground truth is not known for a scene, None will be returned instead.
    """
    cycle_nodes = list(cycle_nodes)
    cycle_nodes.sort()
    i0, i1, i2 = cycle_nodes

    i1Ri0 = i2Ri1_dict[(i0, i1)]
    i2Ri1 = i2Ri1_dict[(i1, i2)]
    i0Ri2 = i2Ri1_dict[(i0, i2)].inverse()

    # should compose to identity, with ideal measurements
    i0Ri0 = i0Ri2.compose(i2Ri1).compose(i1Ri0)

    I_3x3 = Rot3()
    cycle_error = comp_utils.compute_relative_rotation_angle(I_3x3, i0Ri0)

    # form 3 edges e_i, e_j, e_k between fully connected subgraph (nodes i0,i1,i2)
    edges = [(i0, i1), (i1, i2), (i0, i2)]

    rot_errors = [two_view_reports_dict[e].R_error_deg for e in edges]
    trans_errors = [two_view_reports_dict[e].U_error_deg for e in edges]

    gt_known = all([err is not None for err in rot_errors])
    if gt_known:
        max_rot_error = float(np.max(rot_errors))
        max_trans_error = float(np.max(trans_errors))
    else:
        # ground truth unknown, so cannot estimate error w.r.t. GT
        max_rot_error = None
        max_trans_error = None

    if verbose:
        # for each rotation R: find a vector [x,y,z] s.t. R = Rot3.RzRyRx(x,y,z)
        # this is equivalent to scipy.spatial.transform's `.as_euler("xyz")`
        i1Ri0_euler = np.rad2deg(i1Ri0.xyz())
        i2Ri1_euler = np.rad2deg(i2Ri1.xyz())
        i0Ri2_euler = np.rad2deg(i0Ri2.xyz())

        logger.info("\n")
        logger.info(f"{i0},{i1},{i2} -> Cycle error is: {cycle_error:.1f}")
        if gt_known:
            logger.info(f"Triplet: w/ max. R err {max_rot_error:.1f}, and w/ max. t err {max_trans_error:.1f}")
        logger.info(
            "X: (0->1) %.1f deg., (1->2) %.1f deg., (2->0) %.1f deg.", i1Ri0_euler[0], i2Ri1_euler[0], i0Ri2_euler[0]
        )
        logger.info(
            "Y: (0->1) %.1f deg., (1->2) %.1f deg., (2->0) %.1f deg.", i1Ri0_euler[1], i2Ri1_euler[1], i0Ri2_euler[1]
        )
        logger.info(
            "Z: (0->1) %.1f deg., (1->2) %.1f deg., (2->0) %.1f deg.", i1Ri0_euler[2], i2Ri1_euler[2], i0Ri2_euler[2]
)
return cycle_error, max_rot_error, max_trans_error
def filter_to_cycle_consistent_edges(
i2Ri1_dict: Dict[Tuple[int, int], Rot3],
i2Ui1_dict: Dict[Tuple[int, int], Unit3],
v_corr_idxs_dict: Dict[Tuple[int, int], np.ndarray],
two_view_reports_dict: Dict[Tuple[int, int], TwoViewEstimationReport],
visualize: bool = True,
) -> Tuple[Dict[Tuple[int, int], Rot3], Dict[Tuple[int, int], Unit3], Dict[Tuple[int, int], np.ndarray], GtsfmMetricsGroup]:
"""Remove edges in a graph where the concatenated transformations along a 3-cycle do not compose to the identity.
Note: will return only a subset of these two dictionaries
Concatenating the transformations along a loop in the graph should return the identity function in an
ideal, noisefree setting.
Based off of:
https://github.com/sweeneychris/TheiaSfM/blob/master/src/theia/sfm/filter_view_graph_cycles_by_rotation.cc
See also:
C. Zach, M. Klopschitz, and M. Pollefeys. Disambiguating visual relations using loop constraints. In CVPR, 2010
http://people.inf.ethz.ch/pomarc/pubs/ZachCVPR10.pdf
Enqvist, Olof; Kahl, Fredrik; Olsson, Carl. NonSequential Structure from Motion. ICCVW, 2011.
https://portal.research.lu.se/ws/files/6239297/2255278.pdf
Args:
i2Ri1_dict: mapping from image pair indices (i1,i2) to relative rotation i2Ri1.
i2Ui1_dict: mapping from image pair indices (i1,i2) to relative translation direction i2Ui1.
Should have same keys as i2Ri1_dict.
v_corr_idxs_dict: dictionary, with key as image pair (i1,i2) and value as matching keypoint indices.
two_view_reports_dict: mapping from image pair indices (i1,i2) to a report containing information
about the verifier's output (and optionally measurement error w.r.t GT). Note: i1 < i2 always.
visualize: boolean indicating whether to plot cycle error vs. pose error w.r.t. GT
Returns:
i2Ri1_dict_consistent: subset of i2Ri1_dict, i.e. only including edges that belonged to some triplet
and had cycle error below the predefined threshold.
i2Ui1_dict_consistent: subset of i2Ui1_dict, as above.
v_corr_idxs_dict_consistent: subset of v_corr_idxs_dict above.
metrics_group: Rotation cycle consistency metrics as a metrics group.
"""
cycle_errors = []
max_rot_errors = []
max_trans_errors = []
n_valid_edges = len([i2Ri1 for (i1, i2), i2Ri1 in i2Ri1_dict.items() if i2Ri1 is not None])
# (i1,i2) pairs
cycle_consistent_keys = set()
triplets = extract_triplets(i2Ri1_dict)
for (i0, i1, i2) in triplets:
cycle_error, max_rot_error, max_trans_error = compute_cycle_error(
i2Ri1_dict, (i0, i1, i2), two_view_reports_dict
)
if cycle_error < CYCLE_ERROR_THRESHOLD:
# since i0 < i1 < i2 by construction, we preserve the property `a < b` for each edge (a,b)
cycle_consistent_keys.add((i0, i1))
cycle_consistent_keys.add((i1, i2))
cycle_consistent_keys.add((i0, i2))
cycle_errors.append(cycle_error)
max_rot_errors.append(max_rot_error)
max_trans_errors.append(max_trans_error)
if visualize:
plt.scatter(cycle_errors, max_rot_errors)
plt.xlabel("Cycle error")
plt.ylabel("Max Rot3 error over cycle triplet")
plt.savefig(os.path.join("plots", "cycle_error_vs_GT_rot_error.jpg"), dpi=200)
plt.close("all")
plt.scatter(cycle_errors, max_trans_errors)
plt.xlabel("Cycle error")
plt.ylabel("Max Unit3 error over cycle triplet")
plt.savefig(os.path.join("plots", "cycle_error_vs_GT_trans_error.jpg"), dpi=200)
logger.info("cycle_consistent_keys: " + str(cycle_consistent_keys))
i2Ri1_dict_consistent, i2Ui1_dict_consistent, v_corr_idxs_dict_consistent = {}, {}, {}
for (i1, i2) in cycle_consistent_keys:
i2Ri1_dict_consistent[(i1, i2)] = i2Ri1_dict[(i1, i2)]
i2Ui1_dict_consistent[(i1, i2)] = i2Ui1_dict[(i1, i2)]
v_corr_idxs_dict_consistent[(i1, i2)] = v_corr_idxs_dict[(i1, i2)]
logger.info("Found %d consistent rel. rotations from %d original edges.", len(i2Ri1_dict_consistent), n_valid_edges)
metrics_group = _compute_metrics(
inlier_i1_i2_pairs=cycle_consistent_keys, two_view_reports_dict=two_view_reports_dict
)
return i2Ri1_dict_consistent, i2Ui1_dict_consistent, v_corr_idxs_dict_consistent, metrics_group
def _compute_metrics(
inlier_i1_i2_pairs: List[Tuple[int, int]], two_view_reports_dict: Dict[Tuple[int, int], TwoViewEstimationReport]
) > GtsfmMetricsGroup:
"""Computes the rotation cycle consistency metrics as a metrics group.
Args:
inlier_i1_i2_pairs: List of inlier camera pair indices.
two_view_reports_dict: mapping from image pair indices (i1,i2) to a report containing information
about the verifier's output (and optionally measurement error w.r.t GT). Note: i1 < i2 always.
Returns:
Rotation cycle consistency metrics as a metrics group. Includes the following metrics:
 Number of inlier,  
hold. This completes the proof. \hfill\hspace{10pt}\fbox{}
\vskip.2cm
\noindent{\bf Proof of Proposition \ref{3.1}}
We shall prove item $(1)$. The proof of item $(2)$ follows similarly, using Lemma \ref{lem2ps} instead of Lemma \ref{lem1ps}. Applying Ekeland's variational principle, there exists a sequence $(u_n)\subset{\cal{N}}^+$ such that
\begin{description}
\item[(i)] $J(u_n) \,= \,\alpha^+ + o_n(1)$,
\item [(ii)] $J(u_n)<J(w)+\frac{1}{n}\|w-u_n\|, \,\,\forall \,\, w\,\in{\cal{N}}^+.$
\end{description}
In what follows we shall prove that $\displaystyle \lim_{n \rightarrow \infty} \|J'(u_n)\| = 0$. From Proposition \ref{xi_nbound}, there exists $C>0$ independent of $n\in\mathbb{N}$ such that $\|\xi_n(0)\|\leq C$. This estimate together with Proposition \ref{J'limpont} gives
$$
\left<J'(u_n), \displaystyle\frac{u}{\|u\|}\right>\leq \displaystyle\frac{C}{n},~u\in W_0^{1,\Phi}(\Omega)\setminus\{0\}.
$$
This implies that $\|J'(u_n)\|\rightarrow 0$ as $n\rightarrow\infty$. This finishes the proof. \hfill\hspace{10pt}\fbox{}
\section{The proof of our main theorems}
\subsection{The proof of Theorem \ref{teorem1}}
We are going to apply the following result, whose proof uses the concentration-compactness principle due to Lions in the Orlicz-Sobolev framework; see \cite{Willem} or \cite{CSGG,Fuk_1}.
\begin{lem}\label{conv_grad_qtp}
$(i)$ $\phi(|\nabla u_n|)\nabla u_n\rightharpoonup \phi(|\nabla u|)\nabla u$ in $\prod L_{\widetilde{\Phi}}(\Omega)$;\\
$(ii)$ $|u_n|^{\ell^*-2}u_n \rightharpoonup |u|^{\ell^*-2}u$ in $L^{\frac{\ell^*}{\ell^*-1}}(\Omega)$.
\end{lem}
Let $\|f\|_{(\ell^*)'} < \Lambda_1 = \min\left\{\lambda_1,\displaystyle\frac{\ell^*-m}{m-1}\right\}$ where $\lambda_1 > 0$ is given by $(f_1)$.
From Lemma \ref{nehari+} we infer that $$\alpha^+:=\displaystyle\inf_{u\in{\cal{N}}^+}J(u)=\displaystyle\inf_{u\in{\cal{N}}}J(u) < 0.$$
We will find a function $u\in {\cal{N}}^+$ such that $$J(u)= \displaystyle\min_{u\in {\cal{N}}^+}J (u)=:\alpha^+ \,\, \mbox{and} \,\, J^{\prime}(u) \equiv 0.$$
First of all, using Proposition \ref{lem1ps}, there exists a minimizing sequence denoted by $(u_n)\subset W^{1,\Phi}(\Omega)$ such that
\begin{equation}\label{cerami1}
J(u_n)=\alpha^++o_n(1) \mbox{ and }
J'(u_n)=o_n(1).
\end{equation}
Since the functional $J$ is coercive in ${\cal{N}}^+$, this implies that $(u_n)$ is bounded in ${\cal{N}}^+$. Therefore, there exists a function $u\in{W^{1,\Phi}_0(\Omega)}$ such that
\begin{equation} \label{convergencia}
u_n \rightharpoonup u \,\, \mbox{ in } \,\, W_0^{1,\Phi}(\Omega),~~
u_n \to u \,\,\mbox{a.e.}\,\, \mbox{ in } \Omega,~~
u_n \to u \,\, \mbox{ in } \,\, L^{\Phi}(\Omega).
\end{equation}
\noindent We shall prove that $u$ is a weak solution to the elliptic problem \eqref{eq1}.
Notice that, by \eqref{cerami1},
$$o_n(1)=\left<J'(u_n),v\right>=\displaystyle\int_{\Omega} \phi(|\nabla u_n|)\nabla u_n\nabla v - fv - |u_n|^{\ell^*-2}u_nv$$
holds for any $v \in W_0^{1,\Phi}(\Omega)$. In view of \eqref{convergencia} and Lemma \ref{conv_grad_qtp} we get
$$\displaystyle\int_{\Omega} \phi(|\nabla u|)\nabla u\nabla v - fv - |u|^{\ell^*-2}uv = 0$$
for any $v\in W_0^{1,\Phi}(\Omega)$, proving that $u$ is a weak solution to the elliptic problem \eqref{eq1}. In addition, the weak solution $u$ is not zero. In fact, using the fact that $u_n\in \mathcal{N}^{+},$ we obtain
$$\begin{array}{rcl}
\displaystyle\int_{\Omega} fu_n&=& \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|) - \displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2\right) \displaystyle\frac{\ell^*}{\ell^*-1} - J(u_n) \displaystyle\frac{\ell^*}{\ell^*-1}\\[3ex]
&\geq& \displaystyle\frac{\ell^*}{\ell^*-1} \left(1 - \dfrac{m}{\ell^*}\right)\displaystyle\int_{\Omega} \Phi(|\nabla u_n|) - J(u_n)\displaystyle\frac{\ell^*}{\ell^*-1} \\[3ex]
&\geq& - J(u_n)\displaystyle\frac{\ell^*}{\ell^*-1}.
\end{array}
$$
From \eqref{cerami1} and \eqref{convergencia} we obtain
\begin{eqnarray}\label{fupos}
\displaystyle\int_{\Omega} fu\geq -\alpha^{+}\displaystyle\frac{\ell^*}{\ell^*-1} > 0.
\end{eqnarray}
Hence $u\not\equiv 0$.
We shall prove that $J(u)=\alpha^+$ and $u_n\to u$ in $W_0^{1,\Phi}(\Omega)$.
Since $u\in {\cal{N}}$ we also see that
$$\alpha^+ \leq J(u)=\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2-\left(1-\displaystyle\frac{1}{\ell^*}\right)fu.$$
Notice that
$$t\mapsto\Phi(t)-\displaystyle\frac{1}{\ell^*}\phi(t)t^2$$
is a convex function. In fact, by hypothesis $(\phi_3)$ and $m<\ell^*$, we infer that
\begin{eqnarray}
\left(\Phi(t)-\displaystyle\frac{1}{\ell^*}\phi(t)t^2\right)''&=&\left[ \left(1-\frac{1}{\ell^*}\right)t\phi(t)-\frac{1}{\ell^*}t(t\phi(t))'\right]'\nonumber\\
&=& (t\phi(t))' \left[\left(1-\frac{2}{\ell^*}\right)-\frac{1}{\ell^*}\frac{t(t\phi(t))''}{(t\phi(t))'}\right]\nonumber\\
&\geq& (t\phi(t))'\left(1-\frac{m}{\ell^*}\right)>0, \quad t > 0.\nonumber
\end{eqnarray}
In addition, the last assertion says that
$$u\longmapsto \displaystyle\int_{\Omega} \Phi(|\nabla u |)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u |)|\nabla u |^2\,dx$$
is a weakly lower semicontinuous function. Therefore we obtain
\begin{eqnarray}
\alpha^+ \leq J(u) &\leq & \liminf \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2
-\left(1-\displaystyle\frac{1}{\ell^*}\right)fu_n\right)\nonumber\\
&=&\liminf J(u_n)= \alpha^+.\nonumber
\end{eqnarray}
This implies that $J(u)=\alpha^+.$ Additionally, using \eqref{convergencia}, we also have
$$\begin{array}{rcl}
J(u)&=&\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2-\left(1-\displaystyle\frac{1}{\ell^*}\right)fu\\[3ex]
&=&
\lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2\right)-\left(1-\displaystyle\frac{1}{\ell^*}\right)\displaystyle\int_{\Omega} fu.
\end{array}$$
From the last identity
$$\lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2\right)=\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2.$$
In view of the Brezis-Lieb Lemma, choosing $v_n=u_n-u,$ we infer that
\begin{eqnarray}
\lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla u_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u_n|)|\nabla u_n|^2 - \Phi(|\nabla v_n|)+\displaystyle\frac{1}{\ell^*}\phi(|\nabla v_n|)|\nabla v_n|^2\right)\nonumber\\
=\displaystyle\int_{\Omega} \Phi(|\nabla u|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla u|)|\nabla u|^2.
\end{eqnarray}
The previous assertion implies that
$$0=\lim \left(\displaystyle\int_{\Omega} \Phi(|\nabla v_n|)-\displaystyle\frac{1}{\ell^*}\phi(|\nabla v_n|)|\nabla v_n|^2\right)\geq \lim\left(1-\displaystyle\frac{m}{\ell^*}\right)\displaystyle\int_{\Omega} \Phi(|\nabla v_n|)\geq 0.$$ Therefore we obtain that $\lim \int_{\Omega} \Phi(|\nabla v_n|)=0$, and hence $u_{n} \rightarrow u$ in $W_0^{1,\Phi}(\Omega)$.
We shall prove that $u\in {\cal{N}}^+$. Arguing by contradiction, assume that $u\notin{\cal{N}}^+$. Using Lemma \ref{fib} there are unique $t_0^+, t_0^->0$ such that $t_0^+u\in{\cal{N}}^+$ and $t_0^-u\in{\cal{N}}^-$. In particular, we know that $t_0^+< t_0^-=1.$ Since
$$\displaystyle\frac{d}{dt}J(t_0^+u)=0$$
and using \eqref{fupos} together the Lemma \ref{fib} we have that
$$\displaystyle\frac{d}{dt}J(tu)>0,~t\in (t_0^+, t_0^-).$$
So, there exists $t^- \in ( t_0^+, t_0^- )$ such that $J(t_0^+u)<J(t^-u)$.
\noindent In addition $J(t_0^+u)<J(t^-u)\leq J(t_0^-u)=J(u)$, which contradicts the fact that $u$ is a minimizer in ${\cal{N}}^+$. So $u$ is in ${\cal{N}}^+$.
\noindent To conclude the proof of the theorem it remains to show that $ u\geq 0 $ when $ f \geq 0.$ For this we argue as in \cite{tarantello}. Since $u \in {\cal{N}}^+$, by Lemma \ref{fib} there exists $ t_0 \geq 1 $ such that $ t_0 |u| \in {\cal{N}}^+ $ and $t_0|u|\geq |u|.$ Therefore, if $f\geq 0$, we get
$$J(u)=\displaystyle\inf_{w\in{\cal{N}}^+}J(w)\leq J(t_0|u|)\leq J(|u|)\leq J(u).$$
\noindent So we can assume without loss of generality that $u\geq 0.$
\subsection{The proof of Theorem \ref{teorema2}}
Let $\|f\|_{(\ell^*)'} < \Lambda_2 = \min\left\{\lambda_2,\displaystyle\frac{\ell^*-m}{m-1}\right\}$ where $\lambda_2 > 0$ is given by Lemma \ref{nehari}.
First of all, from Lemma \ref{nehari}, there exists $\delta_1>0$ such that $J(v)\geq \delta_1$ for any $v\in {\cal{N}}^{-}.$
So that,
$$\alpha^{-}:= \displaystyle \inf_{v \in {\cal{N}}^{-}}J(v)\geq \delta_1>0.$$
Now we shall consider a minimizing sequence $(v_n)\subset {\cal{N}}^{-}$ given in Proposition \ref{lem1ps}, i.e., $(v_n)\subset {\cal{N}}^{-}$ is a sequence satisfying
\begin{equation} \label{e1}
\displaystyle\lim_{n\to\infty}J(v_n)=\alpha^{-} \,\,\mbox{and} \,\, \displaystyle\lim_{n\to\infty} J^{\prime}(v_{n}) = 0.
\end{equation}
Since $J$ is coercive in ${\cal{N}}$, and so on ${\cal{N}}^{-}$, using Lemma \ref{c1}, we have that $(v_n)$ is a bounded sequence
in $W^{1,\Phi}_{0}(\Omega).$ Up to a subsequence we assume that $v_n\rightharpoonup v$ in $W^{1,\Phi}_{0}(\Omega)$ holds for some $v \in W_0^{1,\Phi}(\Omega)$. Additionally, using the fact that $\ell^{*}>1$, we get $t\ll\Phi_{*}(t)$ for large $t$, and $W_{0}^{1,\Phi}(\Omega)\hookrightarrow L^1(\Omega)$ is a compact embedding. This fact implies that $v_n\to v$ in $L^{1}(\Omega).$ In this way, we can obtain
\begin{equation*}\label{lim1}
\displaystyle\lim_{n\to\infty} \displaystyle\int_{\Omega} fv_n=\displaystyle\int_{\Omega} fv.
\end{equation*}
Now we claim that $v \in W_0^{1,\Phi}(\Omega)$ given just above is a weak solution to the elliptic problem \eqref{eq1}. In fact, using \eqref{e1}, we infer that
$$\left<J'(v_n), w\right>=\displaystyle\int_{\Omega} \phi(|\nabla v_n|)\nabla v_n\nabla w-fw-|v_n|^{\ell^*-2}v_n w = o_{n}(1)$$
holds for any $w \in W_0^{1,\Phi}(\Omega)$. Now using Lemma \ref{conv_grad_qtp} we get
$$\displaystyle\int_{\Omega} \phi(|\nabla v|)\nabla v\nabla w-fw- |v|^{\ell^*-2}v w = 0, \quad w \in W_0^{1,\Phi}(\Omega).$$
So that $v$ is a critical point for the functional $J$.
Without any loss of generality, replacing the sequence $(v_{n})$ by $(|v_{n}|)$, we can assume that $v \geq 0$ in $\Omega$.
Next we claim that $v \neq 0$. The proof for this claim follows arguing by contradiction assuming that $v \equiv 0$. Recall that
$J(t v_{n}) \leq J(v_{n})$ for any $t \geq 0$ and $n \in \mathbb{N}$. These facts together with Lemma \ref{lema_naru} imply that
\begin{eqnarray}
\left(1 - \dfrac{m}{\ell^{*}}\right)\int_{\Omega} \Phi(|\nabla t v_{n}|) &\leq& \left(t - 1\right)\left(1-\displaystyle\frac{1}{\ell^*}\right) \int_{\Omega} f v_n\nonumber \\
&+& \left(1 - \dfrac{\ell}{\ell^{*}}\right)\int_{\Omega} \Phi(|\nabla v_{n}|). \nonumber
\end{eqnarray}
Using the above estimate, Lemma \ref{lema_naru} and the fact that $(v_{n})$ is bounded, we obtain
\begin{equation*}
\min(t^{\ell}, t^{m}) \left(1 - \dfrac{m}{\ell^{*}}\right) \int_{\Omega} \Phi(|\nabla v_{n}|) \leq \left(t - 1\right)\left(1-\displaystyle\frac{1}{\ell^*}\right) \int_{\Omega} fv_n + C
\end{equation*}
holds for some $C > 0$. These inequalities give us
\begin{equation*}
\begin{array}{rcl}\min(t^{\ell}, t^{m}) \left(1 - \dfrac{m}{\ell^{*}}\right) \displaystyle\int_{\Omega} \Phi(|\nabla v_{n}|)
&\leq& \left(t - 1\right)\left(1-\displaystyle\frac{1}{\ell^*}\right)S^{\frac{1}{\ell}} \|f\|_{(\ell^*)'}\|v_n\| + C.\end{array}
\end{equation*}
It is not hard to verify that $\|v_{n}\| \geq c > 0$ for any $n \in \mathbb{N}$. Using Proposition \ref{lema_naru} we get
\begin{equation*}
\min(t^{\ell}, t^{m}) \leq o_{n}(1) t + C
\end{equation*}
holds for any $t \geq 0$, where $C = C(\ell,m,\ell^{*}, \Omega, a,b) > 0$ and $o_{n}(1)$ denotes a quantity that goes to zero as $n \rightarrow \infty$. Here we used the fact that $v_{n} \rightarrow 0$ in $L^1(\Omega)$. This estimate fails for $t > 0$ big enough. Hence $v \neq 0$ as claimed, and $v$ is in $\mathcal{N} = \mathcal{N}^{+} \cup \mathcal{N}^{-}$.
Next, we shall prove that $v_n\to v$ in $W_0^{1,\Phi}(\Omega)$. The proof follows arguing by contradiction.
Assume that $\displaystyle \liminf_{n \rightarrow \infty} \int_{\Omega} \Phi(\nabla v_{n}  \nabla v) \geq \delta$ holds for some $\delta > 0$.
Recall that $\Psi: \mathbb{R} \rightarrow \mathbb{R}$ given by
$$t\mapsto \Psi(t) := \Phi(t)\displaystyle\frac{1}{\ell^*}\phi(t)t^2$$
is a convex function. The Brezis-Lieb Lemma for convex functions says that
\begin{equation*}
\lim_{n \rightarrow \infty} \int_{\Omega} \Psi(\nabla v_{n}) - \Psi(\nabla v_{n} - \nabla v) = \int_{\Omega} \Psi(\nabla v).
\end{equation*}
In particular, the last estimate gives us
\begin{equation*}
\int_{\Omega} \Psi(\nabla v) < \liminf_{n \rightarrow \infty} \int_{\Omega} \Psi(\nabla v_{n}).
\end{equation*}
Since $v\in {\cal{N}}$, there exists a unique $t_{0}$ in $(0, \infty)$ such that $t_{0} v \in \mathcal{N}^{-}$. It is easy to verify that
\begin{equation*}
\int_{\Omega} \Psi(\nabla t_{0}v) < \liminf_{n \rightarrow \infty} \int_{\Omega} \Psi(\nabla t_{0} v_{n}).
\end{equation*}
This implies that
\begin{eqnarray}
\alpha^{-}&\leq& J(t_{0}v ) = \displaystyle\int_{\Omega} \Psi(\nabla t_{0} v)-\left(1-\displaystyle\frac{1}{\ell^*}\right)t_{0}\int_{\Omega} fv \nonumber \\
&<& \liminf_{n \rightarrow \infty} \displaystyle\int_{\Omega} \Psi(\nabla t_{0} v_{n})-\left(1-\displaystyle\frac{1}{\ell^*}\right)t_{0} \int_{\Omega} fv_n \nonumber \\
&=& \liminf_{n \rightarrow \infty} J(t_{0}v_{n}) \leq \liminf_{n \rightarrow \infty} J(v_{n}) = \alpha^{-}. \nonumber
\end{eqnarray}
This is a contradiction proving that $v_n\to v$ in $W_0^{1,\Phi}(\Omega)$. Therefore $v$  
of mutations over different human tissues. To achieve this, the GTEx gene expression matrices corresponding to 30 different tissues (see Additional Table 1), each containing a variable number of individuals, are used as controls; equivalent case matrices are then generated by simulating the mutations, as previously described, on all the individuals. A case/control contrast with a Wilcoxon test is then carried out for each tissue, which reveals whether some of the mutations in the list have a significant impact on one or several tissues, and the functional nature of such impact.
### The web interface
The input of the program consists of normalized gene expression matrices in CSV format for the first two options, Differential signaling activity and Perturbation effect (Fig. 1A,B), and also optionally for the Variant interpreter option that explores the effect of mutations across tissues (Fig. 1C), as a user-defined tissue. Expression may have been measured with any sequencing or microarray technology. The gene expression matrix must include samples as columns and genes as rows. Gene names must be Entrez or HUGO IDs.
For the Variant Interpreter option, a list of Entrez or HUGO gene names can be provided.
### Graphical representation of the results
Different analysis types are carried out on the calculated circuit activities, including two-class comparisons and PCA, with the corresponding visualizations as heatmaps and PCA plots. Graphical representations of the circuits significantly up- or down-activated, including the individual node expression changes, are also provided (see Fig. 1 right). An interactive graphical output is also produced, in which the pathways analyzed are displayed with the possible ways in which the signal can be transmitted from receptor proteins to the corresponding effector proteins, highlighting those in which significant changes in signaling are found. In this visual representation, disruptions or activations in the signal transduction caused by gene perturbations (mutations or expression changes) can be easily visualized and understood in terms of their consequences on cell signaling and their ultimate effect on the corresponding functions triggered by the effectors.
The client of the web application has been implemented in JavaScript using the HTML5 and SVG standards and uses CellMaps72 libraries for interactive visual representation of pathways.
### Mechanistic model of cell functionality triggered by signaling
Hipathia (acronym for High-throughput pathway interpretation and analysis) is a mechanistic model of signaling circuit activities previously described66. In brief, circuits that connect receptor proteins to specific effector proteins, which ultimately trigger cell activities, are defined using KEGG pathways60. Such circuits represent the sequence of activation (and inhibition) steps that mediate the transduction of the signal from the receptor to the effector protein. The method's assumptions are that, in order to transduce the signal, all the proteins that connect the receptor with the effector should be present, and that the higher the amount of these proteins, the stronger the signal will be. Measurements of mRNA levels are taken as proxies of the amount of the corresponding proteins (a quite common assumption73,74,75,76,77,78). Then, in order to quantify the intensity of signal transduction, the following steps are taken: normalized gene expression values, rescaled to the range [0,1] as explained above, are used as proxies of the protein activities (activations or inhibitions in the transmission chain)73,75,79. The intensity of the signal transduced along a circuit that reaches the effector is then estimated by starting with an initial signal intensity of the maximum value of 1 in the receptor, which is propagated along the nodes of the signaling circuits according to the recursive formula:
$${S}_{n}={\upsilon }_{n}\cdot (1\prod _{{s}_{a}\in A}(1{s}_{a}))\cdot \prod _{{s}_{i}\in I}(1{s}_{i})$$
(1)
where Sn is the signal intensity for the current node n, vn is its normalized gene expression value, A is the set of activation signals (sa) arriving at node n through activation edges, and I is the set of inhibitory signals (si) arriving at the node through inhibition edges66. Like normalized gene expression values, circuit activity values are measurements with no absolute meaning by themselves but rather in a comparison.
The application of this formula to all the circuits defined in all the pathways allows transforming a gene expression profile into the corresponding signaling circuit activity profile for any sample studied. If two conditions are compared, a Wilcoxon test can be used to assess differences in signaling circuit activity between both types of samples.
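The propagation rule in Eq. (1) can be sketched in a few lines of Python. This is a minimal illustration; the function name and data layout are ours, not part of the Hipathia package:

```python
def node_signal(v_n, activations, inhibitions):
    """Eq. (1): signal at node n given its normalized expression v_n,
    the activating signals s_a arriving through activation edges, and
    the inhibitory signals s_i arriving through inhibition edges."""
    prod_act = 1.0
    for s_a in activations:
        prod_act *= (1.0 - s_a)  # node fires if at least one activation arrives
    prod_inh = 1.0
    for s_i in inhibitions:
        prod_inh *= (1.0 - s_i)  # each inhibition attenuates the signal
    return v_n * (1.0 - prod_act) * prod_inh

# A receptor passes full signal (1.0) to a node with expression 0.8:
print(node_signal(0.8, [1.0], []))     # 0.8
# The same node under a 0.5-strength inhibition:
print(node_signal(0.8, [1.0], [0.5]))  # 0.4
```

Note that with no incoming activations the signal is zero, matching the requirement that the whole receptor-to-effector chain must be active for the signal to be transduced.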
### Estimation of the impact of a mutation over cell functionality
The effect of a mutation depends on the context, which includes the activity (gene expression status) and the integrity (mutational status) of the rest of the proteins involved in the pathways that trigger functionalities relevant to the disease analyzed (disease hallmarks). The effect of one or several simultaneous mutations in a specific tissue can easily be predicted using the mechanistic model68,69. The reference or control dataset is taken from the tissue of interest in GTEx80. Then, an affected dataset is simulated from the control dataset by drastically reducing the expression of the gene(s) with a pLoF mutation, multiplying their expression values by 0.01 in all the control samples. This simulates either an inactive gene or a non-functional gene product. The circuit activities are then recalculated in the affected dataset and compared to those of the reference dataset. Although not completely realistic, given that the model has no information on how the diseased tissue will transcriptionally react to the perturbation induced by the mutated genes, the results will certainly point with precision to those cell functions affected in the first instance.
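The knock-down step described above is straightforward to express in code. The sketch below uses a plain dict in place of an expression matrix and our own function name, purely for illustration:

```python
def simulate_plof(expr, mutated_genes, knockdown=0.01):
    """Return a copy of `expr` ({gene: [per-sample expression]}) in which each
    mutated gene's values are multiplied by `knockdown` (0.01 by default),
    simulating an inactive gene or a non-functional gene product."""
    affected = {gene: list(values) for gene, values in expr.items()}
    for gene in mutated_genes:
        if gene in affected:
            affected[gene] = [v * knockdown for v in affected[gene]]
    return affected

# hypothetical control matrix: two genes, three samples each
controls = {"GENE_A": [10.0, 12.0, 8.0], "GENE_B": [5.0, 6.0, 7.0]}
affected = simulate_plof(controls, ["GENE_A"])
# circuit activities would then be recomputed on `affected` and
# contrasted against `controls`
```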
### Data Sources
In the current version of HiPathia more than 8000 circuits have been identified and modeled within a total of more than 150 pathways downloaded from KEGG60 corresponding to three species (human 145, mouse 141 and rat 141).
Gene expression data from 30 nondiseased tissue sites (See Additional Table 1) used in the third option were taken from the GTEx Portal80 (GTEx Analysis V7; dbGaP Accession phs000424.v7.p2).
### Data and methods for the examples
Gene expression for bone marrow, which is not present in GTEx, was downloaded from the Gene Expression Omnibus (GEO) database (GSE16334)81.
Gene expression microarray study that compares human islets gene expression from 54 nondiabetic and 9 type 2 diabetic donors82 was downloaded from GEO (GSE38642).
Data on natural variability of different populations, which comprises over 88 million variants of 2,504 individuals from 26 populations, was obtained from the 1000 Genomes project portal3,83.
In order to assess the impact of the natural variation found in genes of the healthy population, variants located within gene regions were annotated using CADD29. As proposed by the CADD developers, a gene was considered to carry a pLoF mutation when the CADD score is over the threshold of 2084. A gene is considered to be affected by pLoF in a recessive scenario when the two alternative alleles are present.
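Under the rules just described, flagging a gene as pLoF-affected in the recessive scenario could look like the following sketch (the data layout and names are hypothetical, not taken from any published pipeline):

```python
CADD_PLOF_THRESHOLD = 20.0  # threshold proposed by the CADD developers

def is_recessive_plof(gene_variants):
    """gene_variants: list of (cadd_score, (allele1, allele2)) for one gene,
    where an allele value of 1 denotes the alternative allele.
    A gene is pLoF-affected in a recessive scenario when some variant both
    exceeds the CADD threshold and carries two alternative alleles."""
    for cadd, (a1, a2) in gene_variants:
        if cadd > CADD_PLOF_THRESHOLD and a1 == 1 and a2 == 1:
            return True
    return False
```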
### Transcriptomics data processing
Gene expression data from microarrays were summarized and normalized by quantiles with the Robust Multi-array Average (RMA) method using the affy R package85. Probes were mapped to the corresponding genes using biomaRt86. Gene expression values are estimated as the 90th percentile of probe expression values. Probes that mapped to more than one gene were discarded (unless they were the only probes mapping to the gene, in which case the median intensity value was taken).
RNAseq gene expression data were normalized with the Trimmed mean of M values (TMM) normalization method using the edgeR package87.
The Hipathia66 algorithm then requires some extra steps for the calculation of the signal intensities. Thus, a logarithm transformation (apply log(matrix + 1)) followed by a truncation to the [0.01, 0.99] quantile range (all values greater than the 0.99 quantile are truncated to this upper value, and all values lower than the 0.01 quantile are truncated to this lower value) was applied to the normalized gene expression values. Finally, in both cases, quantile normalization using the preprocessCore R package88 was carried out.
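The log-transform and quantile-truncation steps can be sketched with numpy (the paper's own pipeline uses R; this is an illustrative re-implementation under that caveat):

```python
import numpy as np

def log_and_truncate(matrix):
    """log(x + 1), then truncation to the [0.01, 0.99] quantile range,
    mirroring the pre-Hipathia transformation described in the text."""
    logged = np.log(matrix + 1.0)
    lo, hi = np.quantile(logged, [0.01, 0.99])
    return np.clip(logged, lo, hi)

# hypothetical expression matrix: genes as rows, samples as columns
expr = np.array([[0.0, 1.0, 3.0], [7.0, 15.0, 1000.0]])
truncated = log_and_truncate(expr)
```

Quantile normalization across samples (the final step) would then be applied, e.g. with the preprocessCore R package as in the text.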
## Results
We demonstrate the possibilities that mechanistic models offer  
the examples in Section \ref{sect:examp}.
From \cite{MejariCDC2019} it follows
that this assumption is sufficient, as the identification algorithm in \cite{MejariCDC2019} returns a minimal asLPVSSA in innovation form.
We conjecture that the same will be true for
most of the existing subspace identification algorithms \cite{Wingerden09,fewive,Verdult02,veve,CoxTothSubspace,RamosSubspace}.
To deal only with minimal
asLPVSSA representations in innovation form, simple conditions to check minimality and being in innovation form are needed.
The latter is necessary in order to check if the elements of a
parametrization of asLPVSSAs are minimal and in innovation form, or to construct such parametrizations.
\paragraph*{\textbf{Related work}}
As it was mentioned above, there is a rich literature on
subspace identification methods for stochastic LPVSSA representations \cite{RamosSubspace,FavoreelTAC,CoxTothSubspace,Wingerden09}. However, the cited papers do not deal with the problem
of characterizing minimal stochastic
LPV statespace representations in innovation form.
In \cite{CoxLPVSS,CoxTothSubspace} the existence of an LPV statespace representation in innovation form was studied, but
due to the specific assumptions (deterministic scheduling) and the
definition of the innovation process, the resulting LPV statespace
representation in innovation form had dynamic dependence on the
scheduling parameters. Moreover, \cite{CoxLPVSS,CoxTothSubspace}
do not address the issue of minimality of the stochastic part of LPV statespace representations.
This paper uses realization theory of stochastic generalized bilinear systems (\emph{\textbf{GBS}\ } for short) of \cite{PetreczkyBilinear}. In particular,
asLPVSSAs correspond to \textbf{GBS}{s}. The existence and uniqueness of minimal asLPVSSA{s} in innovation form
follows from the results of \cite{PetreczkyBilinear}.
The main novelty of the present paper with respect to \cite{PetreczkyBilinear} is
the new algebraic characterization of minimal asLPVSSA{s} in innovation form, and
that the results on existence and uniqueness of minimal \textbf{GBS}{s} are spelled out explicitly for LPVSSAs.
The paper \cite{MejariCDC2019} used the correspondence
between \textbf{GBS}{s} and asLPVSSA{s} to state existence and uniqueness
of minimal asLPVSSA{s} in innovation form. However, \cite{MejariCDC2019} did not provide an algebraic characterization of minimality or innovation form.
Moreover, it considered only scheduling signals which were zero mean white noises. In contrast, in this paper more general scheduling signals are considered.
The present paper is complementary to \cite{MejariCDC2019}. This paper
explains when the assumption that the data generating system is minimal asLPVSSA in innovation form could be true, while \cite{MejariCDC2019} presents an identification algorithm which is statistically consistent under the latter assumption.
\textbf{Outline of the paper}
In Section \ref{sect:prelim} we introduce the notation used and recall from \cite{PetreczkyBilinear} some technical assumptions which are necessary to define the stationary LPVSSA representation.
In Section \ref{sect:min} some principal results on minimal asLPVSSA{s} in innovation form are reviewed.
In Section \ref{sect:main} we present the main results of the paper, namely,
algebraic conditions for an asLPVSSA to be minimal in innovation form.
Finally, in Section \ref{sect:examp} numerical examples are developed
to illustrate the contributions.
\section{Main results: algebraic conditions for an asLPVSSA to be minimal in innovation form}
\label{sect:main}
Motivated by the challenges explained in Remark \ref{rem:motiv1},
in this section we present sufficient conditions for an asLPVSSA to be minimal and in innovation form.
These conditions depend only on the matrices of the asLPVSSA in question and do not require any information on the
noise processes.
The first result concerns an algebraic characterization of asLPVSSAs
in innovation form. This characterization does not require any knowledge
of the noise process, only knowledge of the system matrices.
In order to streamline the discussion, we introduce the following definition.
\begin{Definition}[Stably invertible w.r.t. $\p$]
Assume that $\mathcal{S}$ is an asLPVSSA of the form \eqref{eq:aslpv} and $F=I_{n_y}$.
We will call $\mathcal{S}$ \emph{stably invertible with respect to $\p$}, or \emph{stably invertible} if $\p$ is clear from the context, if
the matrix
\begin{equation}
\label{inv:gbs:lemma:eq1}
\sum_{i=1}^{\pdim} p_i (A_i-K_iC) \otimes (A_i-K_iC)
\end{equation}
is stable (all its eigenvalues are inside the complex unit disk).
\end{Definition}
Note that a system can be stably invertible w.r.t. one scheduling process, but not
stably invertible w.r.t. another one.
We can now state the result relating stable invertibility to asLPVSSAs in innovation form.
\begin{Theorem}[Innovation form condition]
\label{inv:gbs:lemma}
Assume that $\textbf{y}$ is SII and $(\textbf{y},\p)$ is full rank.
If an asLPVSSA realization of $(\textbf{y},\p)$ is stably invertible, then it is in innovation form.
\end{Theorem}
The proof of Theorem \ref{inv:gbs:lemma} can be found in Appendix \ref{App:proof}.
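As an illustration, the spectral-radius condition in the definition of stable invertibility can be checked numerically. The sketch below is our own helper, not part of the paper; it assumes the matrices $A_i$, $K_i$, $C$ and the weights $p_i$ of \eqref{inv:gbs:lemma:eq1} are available as NumPy arrays.

```python
import numpy as np

def is_stably_invertible(A, K, C, p):
    """Check that sum_i p_i (A_i - K_i C) kron (A_i - K_i C) has all
    eigenvalues strictly inside the complex unit disk (a sketch; A, K are
    lists of matrices, p the assumed weights of the scheduling process)."""
    M = sum(p[i] * np.kron(A[i] - K[i] @ C, A[i] - K[i] @ C)
            for i in range(len(A)))
    return bool(np.max(np.abs(np.linalg.eigvals(M))) < 1.0)
```

For scalar data, $A_1 - K_1 C = 0.3$ gives a Kronecker product $0.09$, which is stable, while $A_1 = 2$, $K_1 = 0$ gives $4$, which is not.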
Stably invertible asLPVSSAs can be viewed as optimal predictors.
Indeed, let $\mathcal{S}$ be the asLPVSSA of the form
\eqref{eq:aslpv} which is in innovation form, and let $\textbf{x}$ be the
state process of $\mathcal{S}$.
It then follows that
\begin{equation}
\label{gen:filt:bil:def:pred}
\begin{split}
& \textbf{x}(t+1) = \sum_{i=1}^{\pdim} \left((A_i-K_iC)\textbf{x}(t)+K_i\textbf{y}(t)\right)\p_i(t), \\
& \hat{\textbf{y}}(t) = C\textbf{x}(t)
\end{split}
\end{equation}
where $\hat{\textbf{y}}(t)=E_l[\textbf{y}(t) \mid \{\mathbf{z}_w^{\textbf{y}}(t)\}_{w \in \Sigma^{+}}]$, i.e.,
$\hat{\textbf{y}}$ is the best linear prediction of $\textbf{y}(t)$ based on
the predictors $\{\mathbf{z}_w^{\textbf{y}}(t)\}_{w \in \Sigma^{+}}$.
Intuitively, \eqref{gen:filt:bil:def:pred} could be viewed as a filter, i.e.,
a dynamical system
driven by past values of $\textbf{y}$ and generating the best possible
linear prediction $\hat{\textbf{y}}(t)$ of
$\textbf{y}(t)$
based on $\{\mathbf{z}_w^{\textbf{y}}(t)\}_{w \in \Sigma^{+}}$.
However, the solution of \eqref{gen:filt:bil:def:pred} is defined on the whole time
axis $\mathbb{Z}$ and hence cannot be computed exactly. For stably
invertible asLPVSSAs we can approximate $\hat{\textbf{y}}(t)$ as follows.
\begin{Lemma}
\label{gbs:finite_filt:lemma}
With the assumptions of Theorem \ref{inv:gbs:lemma}, if $\mathcal{S}$ of the form \eqref{eq:aslpv} is
a stably invertible realization of $(\textbf{y},\p)$, and we consider the following dynamical system:
\begin{equation}
\label{gbs:finite_filt:eq}
\begin{split}
& \bar{\textbf{x}}(t+1)= \sum_{i=1}^{\pdim} \left((A_i-K_iC)\bar{\textbf{x}}(t)+K_i\textbf{y}(t)\right)\bm{\mu}_i(t), \\
& \bar{\textbf{y}}(t) = C\bar{\textbf{x}}(t), ~ \bar{\textbf{x}}(0)=0
\end{split}
\end{equation}
then
\( \underset{t \rightarrow \infty}{\lim} \left(\bar{\textbf{x}}(t) - \textbf{x}(t)\right)\!\!=\!\!0 \), and
\( \underset{t \rightarrow \infty}{\lim} \left(\bar{\textbf{y}}(t) - \hat{\textbf{y}}(t)\right)=0 \),
where the limits are understood in the mean square sense.
\end{Lemma}
The proof of Lemma \ref{gbs:finite_filt:lemma} is found in Appendix \ref{App:proof}.
That is, the output
$\bar{\textbf{y}}(t)$ of the recursive filter
\eqref{gbs:finite_filt:eq} is an approximation
of the optimal prediction $\hat{\textbf{y}}(t)$ of $\textbf{y}(t)$ for large enough $t$.
Hence, stably invertible asLPVSSAs are not only in innovation
form, but they represent a class of
asLPVSSAs for which recursive filters of
the form \eqref{gbs:finite_filt:eq} exist.
%
%
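The recursive filter \eqref{gbs:finite_filt:eq} is straightforward to simulate. The following sketch is ours (array shapes and names are assumptions): it propagates the state from $\bar{\textbf{x}}(0)=0$ and returns the predictions $\bar{\textbf{y}}(t)$.

```python
import numpy as np

def recursive_filter(A, K, C, mu, y):
    """Sketch of the recursive predictor above (our notation):
    xbar(t+1) = sum_i ((A_i - K_i C) xbar(t) + K_i y(t)) mu_i(t),
    ybar(t)   = C xbar(t), started from xbar(0) = 0.

    A, K : lists of (n, n) and (n, ny) arrays, one per scheduling component;
    mu   : (T, p) array of scheduling values; y : (T, ny) array of outputs."""
    n = A[0].shape[0]
    x = np.zeros(n)
    ybar = np.zeros_like(y)
    for t in range(y.shape[0]):
        ybar[t] = C @ x
        x = sum(mu[t, i] * ((A[i] - K[i] @ C) @ x + K[i] @ y[t])
                for i in range(len(A)))
    return ybar
```

By the lemma, for a stably invertible realization the returned $\bar{\textbf{y}}(t)$ converges (in mean square) to the optimal prediction as $t \rightarrow \infty$.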
Next, we present algebraic conditions for minimality of an asLPVSSA
in innovation form.
\begin{Theorem}[Minimality condition in innovation form]
\label{min:forw:gbs:lemma}
Assume that $\mathcal{S}$ is an asLPVSSA of the form \eqref{eq:aslpv} and that $\mathcal{S}$ is a realization of $(\textbf{y},\p)$ in innovation form.
Assume that $(\textbf{y},\p)$ is full rank and $\textbf{y}$ is SII.
Then $\mathcal{S}$ is a minimal realization of $(\textbf{y},\p)$, if and only if
the dLPVSSA $\mathcal{D}_{\mathcal{S}}=(\{A_i,K_i\}_{i=0}^{\pdim},C,I_{n_y})$ is minimal.
\end{Theorem}
The proof of Theorem \ref{min:forw:gbs:lemma} can be found in Appendix \ref{App:proof}.
Theorem \ref{min:forw:gbs:lemma}, in combination with Theorem \ref{inv:gbs:lemma}, leads to the following corollary.
\begin{Corollary}[Minimality and innovation form]
\label{min:forw:gbs:lemma:col}
With the assumptions of Theorem \ref{min:forw:gbs:lemma},
if $\mathcal{D}_{\mathcal{S}}$ is minimal and $\mathcal{S}$ is stably invertible, then
$\mathcal{S}$ is a minimal asLPVSSA realization of $(\textbf{y},\p)$
in innovation form.
\end{Corollary}
\begin{Remark}[Checking minimality and innovation form]
\label{rem:check1}
We recall that $\mathcal{D}_{\mathcal{S}}$ is minimal, if and only
if it satisfies the rank conditions for the extended $n$-step reachability and observability matrices \cite[Theorem 2]{PetreczkyLPVSS},
which can easily be computed from the matrices of $\mathcal{S}$.
Checking that $\mathcal{S}$ is stably invertible boils down to checking the eigenvalues of the matrix \eqref{inv:gbs:lemma:eq1}.
That is, Corollary \ref{min:forw:gbs:lemma:col} provides an effective procedure for verifying that an asLPVSSA is minimal and
in innovation form.
Note that in contrast to the rank condition of Theorem \ref{theo:rank_cond},
which required computing the limit of \eqref{gi:comp:eq1}, the procedure
above uses only the matrices of the system.
\end{Remark}
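The two checks described in Remark \ref{rem:check1} are easy to automate. The sketch below implements a generic rank test for extended $n$-step reachability and observability matrices of a dLPVSSA by enumerating words of length $<n$; the exact indexing conventions of \cite[Theorem 2]{PetreczkyLPVSS} may differ, so this is an assumption-laden illustration rather than the paper's procedure.

```python
import numpy as np
from itertools import product

def extended_matrices(A, B, C, n):
    """Build candidate extended n-step reachability/observability matrices
    of a dLPVSSA ({A_i, B_i}, C) by stacking A_w B and C A_w over all
    words w of length < n (illustrative construction)."""
    blocks_r, blocks_o = [], []
    for length in range(n):
        for word in product(range(len(A)), repeat=length):
            P = np.eye(n)
            for i in word:
                P = A[i] @ P          # state-transition product along the word
            blocks_r.append(np.hstack([P @ Bi for Bi in B]))
            blocks_o.append(C @ P)
    return np.hstack(blocks_r), np.vstack(blocks_o)

def is_minimal_dlpvssa(A, B, C, n):
    """Rank test (sketch): both extended matrices must have full rank n."""
    R, O = extended_matrices(A, B, C, n)
    return np.linalg.matrix_rank(R) == n and np.linalg.matrix_rank(O) == n
```

Together with the eigenvalue test for stable invertibility, this gives a purely matrix-based verification in the spirit of Corollary \ref{min:forw:gbs:lemma:col}.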
\begin{Remark}[Parametrizations of asLPVSSAs]
Below we will sketch some ideas for applying the above results to
parametrizations of asLPVSSAs. A detailed study of these issues remains a topic for future research.
For all the elements of a parametrization of asLPVSSAs to be minimal and in innovation form, by Corollary \ref{min:forw:gbs:lemma:col} it is necessary that
\textbf{(A)} all elements of the parametrization, when viewed as dLPVSSA, are minimal, and that \textbf{(B)} they
are stably invertable and satisfy condition \textbf{(3)} of Definition \ref{defn:LPV_SSA_wo_u}.
In order to  
of a symmetric matrix and a skew-symmetric matrix, and this shows that M^{n×n}(F) is the direct sum of these two subspaces. Next, the matrices E_ij − E_ji (see the proof of Theorem 9.1.2), where 1 ≤ i < j ≤ n, are skew-symmetric, and as the diagonal elements of a skew-symmetric matrix are zero, it is clear that these matrices span the subspace of skew-symmetric matrices. As they are also linearly independent, we see that the subspace of skew-symmetric matrices has dimension (n^2 − n)/2. By Theorems 7.4.1 and 9.1.2, the subspace of symmetric matrices has dimension n^2 − (n^2 − n)/2. We give two more examples.

Example 9.1.5 Consider the space Z^{m×n} of real m × n matrices X with the property that the sum of the elements over each row, and over each column, is zero. It should be clear that Z^{m×n} is a real vector space, and we claim that dim(Z^{m×n}) = (m − 1)(n − 1). We start the proof in the case when m = n = 3, but only to give the general idea.
Matrices
We start with an 'empty' 3 × 3 matrix and then make an arbitrary choice of the entries, say a, b, c and d, that are not in the last row or column; this gives us a matrix

a  b  ∗
c  d  ∗
∗  ∗  ∗

where ∗ represents an as yet undetermined entry in the matrix. We now impose the condition that the first two columns must sum to zero, and after this we impose the condition that all rows sum to zero; thus the 'matrix' becomes

a       b       −a − b
c       d       −c − d
−a − c  −b − d  a + b + c + d

Notice that the last column automatically sums to zero (because the sum over all elements is zero, as is seen by summing over rows, and the first two columns sum to zero). Exactly the same argument can be used for any 'empty' m × n matrix. The choice of elements not in the last row or last column is actually a choice of an arbitrary matrix in M^{(m−1)×(n−1)}(F), so this construction actually creates a surjective map from M^{(m−1)×(n−1)}(F) onto Z^{m×n}. It should be clear that this map is linear, and that the only element in its kernel is the zero matrix. Thus dim(Z^{m×n}) = dim M^{(m−1)×(n−1)}(F) = (m − 1)(n − 1) as required.
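The completion argument above can be mirrored numerically. The helper below is our own illustration (not from the book): it embeds an arbitrary (m − 1) × (n − 1) matrix into Z^{m×n} by filling in the last column and then the last row.

```python
import numpy as np

def complete_to_zero_sums(Y):
    """Embed an (m-1) x (n-1) matrix Y into an m x n matrix all of whose
    rows and columns sum to zero, as in the construction above."""
    m1, n1 = Y.shape
    Z = np.zeros((m1 + 1, n1 + 1))
    Z[:m1, :n1] = Y
    Z[:m1, n1] = -Y.sum(axis=1)        # last column makes every row sum to zero
    Z[m1, :] = -Z[:m1, :].sum(axis=0)  # last row makes every column sum to zero
    return Z

Z = complete_to_zero_sums(np.array([[1.0, 2.0], [3.0, 4.0]]))
assert np.allclose(Z.sum(axis=0), 0) and np.allclose(Z.sum(axis=1), 0)
```

Since the map Y → Z is linear and injective, and every element of Z^{m×n} arises this way, dim(Z^{m×n}) = (m − 1)(n − 1).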
Example 9.1.6 This example contains a discussion of magic squares (this is a 'popular' item, but it is not important). For any n × n matrix X, the trace tr(X) of X is the sum x11 + · · · + xnn of the diagonal elements, and the antitrace tr∗(X) of X is the sum over the 'other' diagonal, namely x1n + · · · + xn1. A real n × n matrix A is a magic square if the sum over each row, the sum over each column, and the sum over each of the two diagonals (that is, tr(A) and tr∗(A)), all give the same value, say µ(A). We note that µ(A) = n^−1 Σ_{i,j} a_ij. It is easy to see that the space S^{n×n} of n × n magic squares is a real vector space so, naturally, we ask what is its dimension? It is easy to see that dim(S^{n×n}) = 1 when n is 1 or 2, and we shall now show that for n ≥ 3, dim(S^{n×n}) = n(n − 2). Let S0^{n×n} be the subspace of matrices A for which µ(A) = 0. This subspace is the kernel of the linear map A → µ(A) from S^{n×n} to R, and as this map is surjective (consider the matrix A with all entries x/n) we see that dim(S^{n×n}) = dim(S0^{n×n}) + 1.
9.1 The vector space of matrices
Next, the space Z^{n×n} of n × n matrices all of whose rows and columns sum to zero has dimension (n − 1)^2 (see Example 9.1.5). Now define Φ : Z^{n×n} → R^2 by Φ(X) = (tr(X), tr∗(X)). Then Φ is a linear map, and ker(Φ) = S0^{n×n}. It is not difficult to show that Φ is surjective (we shall prove this shortly), and with this we see that (n − 1)^2 = dim(Z^{n×n}) = dim(S0^{n×n}) + 2, so that dim(S^{n×n}) = (n − 1)^2 − 1 = n(n − 2). It remains to show that Φ is surjective, and it is sufficient to construct matrices P and Q in Z^{n×n} such that Φ(P) = (a, 0) and Φ(Q) = (0, b) for all (or just some non-zero) a and b. If n = 3, we let

P = (a/3) ×    1  1 −2     Q = (b/3) ×   −2  1  1
              −2  1  1                    1  1 −2
               1 −2  1                    1 −2  1

and then Φ(P) = (a, 0) and Φ(Q) = (0, b). If n ≥ 4 we can take p11 = p22 = a/2, p12 = p21 = −a/2 and all other pij = 0; then tr(P) = a and tr∗(P) = 0, so that Φ(P) = (a, 0). Similarly, we choose q1,n−1 = q2n = −b/2 and q1n = q2,n−1 = b/2, so that Φ(Q) = (0, b).
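The matrices P and Q above can be checked directly. The helper below is ours; it evaluates the row and column sums and the two traces (entries are our reading of the garbled display, so treat the specific matrices as an assumption).

```python
import numpy as np

def traces(X):
    """Return (tr(X), tr*(X)): the diagonal and antidiagonal sums."""
    return float(np.trace(X)), float(np.trace(np.fliplr(X)))

a, b = 6.0, 9.0
P = (a / 3) * np.array([[1, 1, -2], [-2, 1, 1], [1, -2, 1]])
Q = (b / 3) * np.array([[-2, 1, 1], [1, 1, -2], [1, -2, 1]])
# Both matrices lie in Z^{3x3}: all rows and columns sum to zero.
for X in (P, Q):
    assert np.allclose(X.sum(axis=0), 0) and np.allclose(X.sum(axis=1), 0)
print(traces(P), traces(Q))  # expect (a, 0) and (0, b)
```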
Exercise 9.1
1. A matrix (aij) is a diagonal matrix if aij = 0 whenever i ≠ j. Show that the space D of real n × n diagonal matrices is a vector space of dimension n.
2. A matrix (aij) is an upper-triangular matrix if aij = 0 whenever i > j. Show that the space U of real n × n upper-triangular matrices is a vector space. What is its dimension?
3. Define what it means to say that a matrix (aij) is a lower-triangular matrix (see Exercise 2). Let L be the vector space of real lower-triangular matrices, and let D and U be as in Exercises 1 and 2. Show, without calculating any of the dimensions, that dim(U) + dim(L) = dim(D) + n^2. Now verify this by calculating each of the dimensions.
4. Show that the space of n × n matrices with trace zero is a vector space of dimension n^2 − 1.
5. Show (in Example 9.1.6) that dim(S^{1×1}) = dim(S^{2×2}) = 1.
6. Show that if X is a 3 × 3 magic square, then x22 = µ(X)/3. Deduce that if µ(X) = 0 then X is of the form

X =  a       −a − b   b
     b − a    0       a − b
     −b       a + b   −a
Let A, B, C be the matrices

A =  1 −1  0     B =  0 −1  1     C = 1 1 1
    −1  0  1          1  0 −1         1 1 1
     0  1 −1         −1  1  0         1 1 1

respectively. Show that (a) {A, B, C} is a basis of S^{3×3}; (b) {A, B} is a basis of S0^{3×3}; (c) {A, C} is a basis of the space of symmetric 3 × 3 magic squares; (d) {B} is a basis of the space of skew-symmetric 3 × 3 magic squares.
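A quick numerical check of (a) is possible: stack the flattened matrices and compute the rank. This is a sketch; the entries below are our reading of the display above, so treat them as an assumption.

```python
import numpy as np

A = np.array([[1, -1, 0], [-1, 0, 1], [0, 1, -1]])
B = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
C = np.ones((3, 3))

# Each is a magic square: all rows, columns and both diagonals share one value.
for X in (A, B, C):
    s = X.sum(axis=0).tolist() + X.sum(axis=1).tolist()
    s += [np.trace(X), np.trace(np.fliplr(X))]
    assert len(set(np.round(s, 12))) == 1

# Linear independence: the 3 x 9 matrix of flattened entries has rank 3.
M = np.vstack([X.ravel() for X in (A, B, C)])
assert np.linalg.matrix_rank(M) == 3
```

Since dim(S^{3×3}) = 3(3 − 2) + 1 = 3, three independent magic squares form a basis.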
9.2 A matrix as a linear transformation A  
import os
import numpy as np
import units
import damping
import induce
import config
import analyze
import erepel3
import empole3
def epolar3(ATOMID):
if any(key in config.param_dict for key in ('epolar3a', 'onecenterinducemutual', 'onecenterinducedirect', 'twocenterinducemutual', 'twocenterinducedirect')):
ep, einter, nep = epolar3a(ATOMID)
elif 'inducetrick' in config.param_dict.keys():
ep, einter, nep = epolar3a(ATOMID)
else:
ep, einter, nep = epolar3m(ATOMID)
return ep, einter, nep
def epolar3minduce(uind, ATOMID):
n = len(ATOMID)
uind = uind.reshape(3,n)
uTu_ep, uTu_einter, uTu_nep = uTu(ATOMID, uind)
Eu_ep, Eu_nep = Eu(ATOMID, uind)
if 'exchange21' in config.param_dict.keys():
exchind, exchind_inter = erepel3.exchange21(ATOMID, uind)
elif 'exchange22' in config.param_dict.keys():
exchind = erepel3.exchange22(ATOMID, uind)
else:
exchind = 0
exchind_inter = 0
ep = 1/2*uTu_ep - Eu_ep + exchind
return ep
def epolar3m(ATOMID, uind=[]):
'''epolar3m calculates energy using the equation U_pol = 1/2*uTu - Eu'''
n = len(ATOMID)
uTu_ep, uTu_einter, uTu_nep = uTu(ATOMID, uind)
Eu_ep, Eu_nep = Eu(ATOMID, uind)
ep = 1/2*uTu_ep - Eu_ep
einter = 1/2*uTu_einter - Eu_ep
nep = uTu_nep + Eu_nep
##############
# print('Eu', Eu_ep)
# e1 = empole3.empole3c(ATOMID)
# e2, junk1, junk2 = empole3.empole3a(ATOMID)
# print('Eu_check', e1 - e2 - 1/2*uTu_check(ATOMID))
##############
return ep, einter, nep
def epolar3oinduce(uind, ATOMID):
'''Minimize 1/2u*alpha^-1*u - 1/2u*gamma*u - Eu using (M+u)T(M+u) trick.'''
n = len(ATOMID)
uind = uind.reshape(3,n)
# zero out the total polarization energy and partitioning
ep = 0.
# set conversion factor, cutoff and switching coefficients
f = units.electric / units.dielec
# calculate u*alpha^-1*u
for i in range(n):
uix = uind[0,i]
uiy = uind[1,i]
uiz = uind[2,i]
poli = ATOMID[i].pol
uiu = uix**2 + uiy**2 + uiz**2
e = f * uiu / poli
if e != 0.:
ep = ep + e
ep = 1/2 * ep
e1 = empole3.empole3b(ATOMID, uind)
e2, junk1, junk2 = empole3.empole3a(ATOMID)
ep = ep + (e1 - e2)
return ep
def epolar3ninduce(uind, ATOMID):
# minimize Tu - E = 0 where T = alpha^-1 + gamma via least squares
n = len(ATOMID)
uind = uind.reshape(3,n)
resi = np.zeros_like(uind)
# calculate alpha^-1 * u
for i in range(n):
uix = uind[0,i]
uiy = uind[1,i]
uiz = uind[2,i]
poli = ATOMID[i].pol
resi[0,i] = uix / poli
resi[1,i] = uiy / poli
resi[2,i] = uiz / poli
# calculate gamma * u
for i in range(n-1):
uix = uind[0,i]
uiy = uind[1,i]
uiz = uind[2,i]
xi = ATOMID[i].coordinates[0]
yi = ATOMID[i].coordinates[1]
zi = ATOMID[i].coordinates[2]
alphai = ATOMID[i].palpha
# set exclusion coefficients for connected atoms
wscale = ATOMID[i].returnscale('w', ATOMID)
# evaluate all sites within the cutoff distance
for k in range(i+1, n):
xr = ATOMID[k].coordinates[0] - xi
yr = ATOMID[k].coordinates[1] - yi
zr = ATOMID[k].coordinates[2] - zi
r2 = xr**2 + yr**2 + zr**2
r = np.sqrt(r2)
ukx = uind[0,k]
uky = uind[1,k]
ukz = uind[2,k]
uir = uix*xr + uiy*yr + uiz*zr
ukr = ukx*xr + uky*yr + ukz*zr
rr3 = 1 / (r*r2)
rr5 = 3. * rr3 / r2
alphak = ATOMID[k].palpha
if 'onecenterinducemutual' in config.param_dict.keys():
dmpi, dmpk = damping.dampdir(r, alphai, alphak)
rr3i = dmpi[2]*rr3
rr5i = dmpi[4]*rr5
rr3k = dmpk[2]*rr3
rr5k = dmpk[4]*rr5
fid = np.empty(3)
fkd = np.empty(3)
fid[0] = xr*(rr5k*ukr) - rr3k*ukx
fid[1] = yr*(rr5k*ukr) - rr3k*uky
fid[2] = zr*(rr5k*ukr) - rr3k*ukz
fkd[0] = xr*(rr5i*uir) - rr3i*uix
fkd[1] = yr*(rr5i*uir) - rr3i*uiy
fkd[2] = zr*(rr5i*uir) - rr3i*uiz
elif 'twocenterinducemutual' in config.param_dict.keys():
dmpik = damping.dampmut(r, alphai, alphak)
rr3ik = dmpik[2]*rr3
rr5ik = dmpik[4]*rr5
fid = np.empty(3)
fkd = np.empty(3)
fid[0] = xr*(rr5ik*ukr) - rr3ik*ukx
fid[1] = yr*(rr5ik*ukr) - rr3ik*uky
fid[2] = zr*(rr5ik*ukr) - rr3ik*ukz
fkd[0] = xr*(rr5ik*uir) - rr3ik*uix
fkd[1] = yr*(rr5ik*uir) - rr3ik*uiy
fkd[2] = zr*(rr5ik*uir) - rr3ik*uiz
for j in range(3):
resi[j,i] = resi[j,i] - fid[j]*wscale[k]
resi[j,k] = resi[j,k] - fkd[j]*wscale[k]
# Tu - E
# get permanent electric field
if 'onecenterinducemutual' in config.param_dict.keys():
field = config.fieldp
elif 'twocenterinducemutual' in config.param_dict.keys():
field = config.mfieldp
for i in range(n):
resi[0,i] = resi[0,i] - field[0,i]
resi[1,i] = resi[1,i] - field[1,i]
resi[2,i] = resi[2,i] - field[2,i]
resi = resi.reshape(3*n)
return np.dot(resi, resi)
def uTu(ATOMID, uind=[]):
'''1/2*uTu is the energy it takes to construct the induced dipoles.
Since uTu = u(alpha^-1 - gamma)u = u*alpha^-1*u - u*gamma*u, we separate out the two loops.
The first term u*alpha^-1*u is a single loop since alpha^-1 is diagonal.
The second term u*gamma*u is a double loop i < k since gamma has no values in its diagonal.'''
n = len(ATOMID)
# zero out the total polarization energy and partitioning
nep = 0
ep = 0.
einter = 0
aep = np.zeros(n)
# set conversion factor, cutoff and switching coefficients
f = units.electric / units.dielec
# calculate u*alpha^-1*u
for i in range(n):
if len(uind) != 0:
uix = uind[0,i]
uiy = uind[1,i]
uiz = uind[2,i]
else:
uix = ATOMID[i].uind[0]
uiy = ATOMID[i].uind[1]
uiz = ATOMID[i].uind[2]
poli = ATOMID[i].pol
uiu = uix**2 + uiy**2 + uiz**2
e = f * uiu / poli
if e != 0.:
ep = ep + e
nep = nep + 1
aep[i] = e
einter = einter + e
# calculate u*gamma*u
for i in range(n-1):
if len(uind) != 0:
uix = uind[0,i]
uiy = uind[1,i]
uiz = uind[2,i]
else:
uix = ATOMID[i].uind[0]
uiy = ATOMID[i].uind[1]
uiz = ATOMID[i].uind[2]
xi = ATOMID[i].coordinates[0]
yi = ATOMID[i].coordinates[1]
zi = ATOMID[i].coordinates[2]
alphai = ATOMID[i].palpha
# set exclusion coefficients for connected atoms
wscale = ATOMID[i].returnscale('w', ATOMID)
# evaluate all sites within the cutoff distance
for k in range(i+1, n):
xr = ATOMID[k].coordinates[0] - xi
yr = ATOMID[k].coordinates[1] - yi
zr = ATOMID[k].coordinates[2] - zi
r2 = xr**2 + yr**2 + zr**2
r = np.sqrt(r2)
if len(uind) != 0:
ukx = uind[0,k]
uky = uind[1,k]
ukz = uind[2,k]
else:
ukx = ATOMID[k].uind[0]
uky = ATOMID[k].uind[1]
ukz = ATOMID[k].uind[2]
uik = uix*ukx + uiy*uky + uiz*ukz
uir = uix*xr + uiy*yr + uiz*zr
ukr = ukx*xr + uky*yr + ukz*zr
rr1 = f * wscale[k] / r
rr3 = rr1 / r2
rr5 = 3. * rr3 / r2
alphak = ATOMID[k].palpha
term2ik = uik
term3ik = uir*ukr
dmpik = damping.dampmut(r, alphai, alphak)
rr3ik = dmpik[2]*rr3
rr5ik = dmpik[4]*rr5
e = term2ik*rr3ik + term3ik*rr5ik
if e != 0.:
ep = ep + 2*e
nep = nep + 1
aep[i] = aep[i] + 2*0.5*e
aep[k] = aep[k] + 2*0.5*e
if ATOMID[k].index not in ATOMID[i].connectivity:
einter = einter + 2*e
return ep, einter, nep
def Eu(ATOMID, uind=[]):
n = len(ATOMID)
# zero out the total polarization energy and partitioning
nep = 0
ep = 0.
einter = 0
aep = np.zeros(n)
# set conversion factor, cutoff and switching coefficients
f = units.electric / units.dielec
if 'twocenter' in config.param_dict.keys():
fieldp = config.mfieldp
else:
fieldp = config.fieldp
# calculate Eu
for i in range(n):
if len(uind) != 0:
uix = uind[0,i]
uiy = uind[1,i]
uiz = uind[2,i]
else:
uix = ATOMID[i].uind[0]
uiy = ATOMID[i].uind[1]
uiz = ATOMID[i].uind[2]
e = f*uix*fieldp[0,i] + f*uiy*fieldp[1,i] + f*uiz*fieldp[2,i]
ep = ep + e
nep = nep + 1
aep[i] = aep[i] + e
return ep, nep
def epolar3a(ATOMID):
'''epolar3a calculates pairwise electrostatic energies between atoms i and k.'''
n = len(ATOMID)
# zero out the total polarization energy and partitioning
nep = 0
ep = 0.
einter = 0.
aep = np.zeros(n)
# set conversion factor, cutoff and switching coefficients
f = 0.5 * units.electric / units.dielec
for i in range(n-1):
xi = ATOMID[i].coordinates[0]
yi = ATOMID[i].coordinates[1]
zi = ATOMID[i].coordinates[2]
ci = ATOMID[i].rpole[0]
dix = ATOMID[i].rpole[1]
diy = ATOMID[i].rpole[2]
diz = ATOMID[i].rpole[3]
qixx = ATOMID[i].rpole[4]
qixy = ATOMID[i].rpole[5]
qixz = ATOMID[i].rpole[6]
qiyy = ATOMID[i].rpole[8]
qiyz = ATOMID[i].rpole[9]
qizz = ATOMID[i].rpole[12]
uix = ATOMID[i].uind[0]
uiy = ATOMID[i].uind[1]
uiz = ATOMID[i].uind[2]
corei = ATOMID[i].pcore
vali = ATOMID[i].pval
alphai = ATOMID[i].palpha
# set exclusion coefficients for connected atoms
pscale = ATOMID[i].returnpscale(ATOMID)
# evaluate all sites within the cutoff distance
for k in range(i+1, n):
xr = ATOMID[k].coordinates[0] - xi
yr = ATOMID[k].coordinates[1] - yi
zr = ATOMID[k].coordinates[2] - zi
r2 = xr**2 + yr**2 + zr**2
r = np.sqrt(r2)
 
The directional derivative of f(x, y) at (x0, y0) along a unit vector u is the pointwise rate of change of f with respect to distance along the line through (x0, y0) parallel to u. If f is differentiable at x, then the directional derivative exists along any vector v; in practice the limit definition can be difficult to compute directly, so one uses the gradient instead: the directional derivative is the dot product of the gradient and the vector u, and for the unit vector u = (1, 0) it reduces to the partial derivative with respect to x. The gradient also points in the direction of steepest ascent.

Worked example. For f(x, y) = x^2 y the partial derivatives are ∂f/∂x(x, y) = 2xy and ∂f/∂y(x, y) = x^2, so ∂f/∂x(3, 2) = 12 and ∂f/∂y(3, 2) = 9, and the gradient is ∇f(3, 2) = 12i + 9j = (12, 9). The directional derivative at (3, 2) in the direction of u is then

D_u f(3, 2) = ∇f(3, 2) · u = (12i + 9j) · (u1 i + u2 j) = 12u1 + 9u2.

Example 5.4.2.2. Find the directional derivative of f(x, y, z) = √(xyz) in the direction of v = ⟨1, 2, 2⟩ at the point (3, 2, 6); the gradient formula extends verbatim to three variables. As a physical application, Darcy's law states that the local velocity q in a direction s is given by the directional derivative q = −(k/μ) ∂p/∂s, where p is the transient or steady pressure, with k and μ representing permeability and viscosity.
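The gradient formula is easy to sanity-check numerically. The sketch below (our own helper, not from the text) approximates a directional derivative by a central difference and reproduces D_u f(3, 2) = 12u1 + 9u2 for u = (1, 0).

```python
import numpy as np

def directional_derivative(f, point, u, h=1e-6):
    """Numerically estimate D_u f at `point` via a central difference
    along the (normalized) direction u, per the limit definition."""
    p = np.asarray(point, dtype=float)
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)  # the definition requires a unit vector
    return (f(p + h * u) - f(p - h * u)) / (2 * h)

f = lambda v: v[0] ** 2 * v[1]  # f(x, y) = x^2 y from the worked example
# Gradient at (3, 2) is (12, 9), so along u = (1, 0) the result should be 12.
print(directional_derivative(f, (3, 2), (1, 0)))
```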
more information on this topic. For $X \Subset \mathbb{R}^d$ we write $\mathscr{E}(\overline{X})$ for the space consisting of all $f \in \mathscr{E}(X)$ such that $\partial^\alpha f$ has a continuous extension to $\overline{X}$ for all $\alpha \in \mathbb{N}^d$. We endow it with the family of norms $\{\| \cdot \|_{\overline{X},J} \, ; \, J \in \mathbb{N}_0\}$, hence it becomes a Fr\'echet space. We set
$$
\mathscr{E}_P(\overline{X}) = \{f \in \mathscr{E}(\overline{X}) \, ; \, P(D) f = 0\}
$$
and endow it with the subspace topology from $\mathscr{E}(\overline{X})$. Then, $\mathscr{E}_P(\overline{X})$ is also a Fr\'echet space.
\begin{proposition}\label{cor: explicit Omega}
Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ and let $X \subseteq \mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X \times \mathbb{R}) \to \mathscr{D}'(X \times \mathbb{R})$ is surjective. Let $X_1, X_2 \Subset X$ with $X_1 \subseteq X_2$ be such that $(\overline{X_1}, \overline{X_2})$ is augmentedly $P$locating for $X$. Then, for all $X'_1 \Subset X''_1 \Subset X_1$, $X' \Subset X$, and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C>0$ such that
\begin{gather*}
\forall f \in\mathscr{E}_P(X),\,\varepsilon\in (0,1) \,\exists\,h_\varepsilon \in\mathscr{E}_P(X) \,:\, \\
\|f-h_\varepsilon\|_{\overline{X'_1},r_1}\leq\varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\} \quad
\mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|f\|_{\overline{X_2}}.
\end{gather*}
In particular, it holds that $\|f-h_\varepsilon\|_{\overline{X'_1},r_1} \leq \varepsilon\|f\|_{\overline{X_2}, r_1+1}$.
\end{proposition}
\begin{proof} Fix $X'_1 \Subset X''_1 \Subset X_1$.
\textsc{Auxiliary Claim 1:} \emph{For all $X_1 \Subset X'_3 \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C_1,C_2>0$ and $\varepsilon_0 \in (0,1)$ such that
\begin{gather*}
\forall f\in\mathscr{E}_P(X), \varepsilon\in (0,\varepsilon_0) \,\exists\,g_\varepsilon \in\mathscr{E}_P(\overline{X'_3})\,: \\
\|f-g_\varepsilon\|_{\overline{X'_1},r_1}\leq C_1\varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\} \quad
\mbox{ and } \quad \|g_\varepsilon\|_{\overline{X'_3},r_2}\leq\frac{C_2}{\varepsilon^s}\|f\|_{\overline{X_2}}.
\end{gather*}
}
\emph{Proof of auxiliary claim 1.} Let $X_1 \Subset X'_3 \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ be arbitrary. Choose $X_3 \Subset X$ such that $X_2 \subseteq X_3$ and $X'_3 \Subset X_3$. Set $\varepsilon_0 = \min \{1,d(\overline{X'_1}, \mathbb{R}^d \backslash X''_1), d(\overline{X'_3},\mathbb{R}^d \backslash X_3) \}$. By Proposition \ref{lem: explicit POmega}, there are $K \in \mathbb{N}_0$ and $s',C >0$ such that
$$
B_{\overline{X}_2,0} \subseteq\frac{C}{\delta^{s'}} B_{\overline{X}_3,K}+\delta B_{\overline{X}_1,0}, \qquad \forall \delta \in (0,1).
$$
Let $f\in\mathscr{E}_P(X)$ be arbitrary. As $f \in \|f\|_{\overline{X_2}}B_{\overline{X}_2,0}$, we have that for all $\delta \in (0,1)$ there is $f_\delta \in C\delta^{s'} \|f\|_{\overline{X_2}}B_{\overline{X}_3,K}$ such that $f - f_\delta \in \delta \|f\|_{\overline{X_2}}B_{\overline{X}_1,0}$. Choose $\chi \in \mathscr{D}(\mathbb{R}^d)$ with $\chi \geq 0$, $\operatorname{supp} \chi \subseteq B(0,1)$, and $\int_{\mathbb{R}^d} \chi(x) {\rm d}x = 1$, and set $\chi_\varepsilon = \varepsilon^{-d}\chi(x/\varepsilon)$ for $\varepsilon \in (0,1)$. For $\delta \in (0,1)$ and $\varepsilon \in (0,\varepsilon_0)$ we define $g_{\delta, \varepsilon} = f_\delta \ast \chi_\varepsilon \in \mathscr{E}_P(\overline{X'_3})$. The mean value theorem implies that
$$
\| f - f \ast \chi_\varepsilon\|_{\overline{X'_1}, r_1} \leq \sqrt{d}\, \varepsilon \|f\|_{\overline{X''_1}, r_1+1}, \qquad \forall \varepsilon \in (0,\varepsilon_0).
$$
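To spell out the mean value theorem step (a routine verification): since $\partial^\alpha(f \ast \chi_\varepsilon) = (\partial^\alpha f) \ast \chi_\varepsilon$, it suffices to bound $g - g \ast \chi_\varepsilon$ uniformly on $\overline{X'_1}$ for $g = \partial^\alpha f$ with $|\alpha| \leq r_1$. For $x \in \overline{X'_1}$ and $\varepsilon \in (0,\varepsilon_0)$,
$$
|g(x) - g\ast\chi_\varepsilon(x)| \leq \int_{B(0,\varepsilon)} |g(x) - g(x-y)| \chi_\varepsilon(y) \, {\rm d}y \leq \sqrt{d}\, \varepsilon \max_{|\beta| = 1} \|\partial^{\beta} g\|_{\overline{X''_1}},
$$
where we used that $x - ty \in \overline{X''_1}$ for all $t \in [0,1]$ by the choice of $\varepsilon_0$.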
Since $f - f_\delta \in \delta \|f\|_{\overline{X_2}}B_{\overline{X}_1,0}$, it holds that
$$
\| f \ast \chi_\varepsilon - g_{\delta, \varepsilon}\|_{\overline{X'_1}, r_1} = \| (f - f_\delta) \ast \chi_\varepsilon\|_{\overline{X'_1}, r_1} \leq \frac{\delta}{\varepsilon^{r_1 + d}} \| \chi \|_{r_1}\|f\|_{\overline{X_2}}, \qquad \forall \varepsilon \in (0,\varepsilon_0).
$$
Similarly, as $f_\delta \in \frac{C}{\delta^{s'}} \|f\|_{\overline{X_2}}B_{\overline{X}_3,K}$, we have that
$$
\| g_{\delta, \varepsilon}\|_{\overline{X'_3}, r_2} \leq \frac{C}{\delta^{s'}\varepsilon^{K +r_2 + d}} \| \chi \|_{K + r_2}\|f\|_{\overline{X_2}}, \qquad \forall \varepsilon \in (0,\varepsilon_0).
$$
Define $g_\varepsilon = g_{\varepsilon^{r_1+d + 1}, \varepsilon} \in \mathscr{E}_P(\overline{X'_3})$ for $\varepsilon \in (0,\varepsilon_0)$. Set $s = s'(r_1 + d + 1) + K + r_2 + d$. Then, for all $\varepsilon \in (0,\varepsilon_0)$
\begin{eqnarray*}
\| f - g_\varepsilon\|_{\overline{X'_1}, r_1} &\leq& \| f - f\ast \chi_\varepsilon\|_{\overline{X'_1}, r_1}
+ \| f \ast \chi_\varepsilon - g_\varepsilon\|_{\overline{X'_1}, r_1} \\
&\leq&
(\sqrt{d} + \| \chi\|_{r_1}) \varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\},
\end{eqnarray*}
and
$$
\| g_\varepsilon\|_{\overline{X'_3}, r_2} \leq \frac{C \| \chi\|_{K +r_2} }{\varepsilon^s}\|f\|_{\overline{X_2}}.
$$
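Indeed, substituting $\delta = \varepsilon^{r_1+d+1}$ into the two preceding estimates gives
$$
\frac{\delta}{\varepsilon^{r_1+d}} = \varepsilon \qquad \text{and} \qquad \frac{C}{\delta^{s'}\varepsilon^{K+r_2+d}} = \frac{C}{\varepsilon^{s'(r_1+d+1)+K+r_2+d}} = \frac{C}{\varepsilon^{s}},
$$
which is exactly where the two displayed bounds come from.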
\textsc{Auxiliary Claim 2:} \emph{For all $X' \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C_1,C_2>0$ and $\varepsilon_0 \in (0,1)$ such that
\begin{gather*}
\forall f\in\mathscr{E}_P(X), \varepsilon\in (0,\varepsilon_0) \,\exists\,g_\varepsilon \in\mathscr{E}_P(X)\,: \\
\|f-g_\varepsilon\|_{\overline{X'_1},r_1}\leq C_1\varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\} \quad
\mbox{ and } \quad \|g_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C_2}{\varepsilon^s}\|f\|_{\overline{X_2}}.
\end{gather*}
}
\noindent Note that, by a simple rescaling argument, the auxiliary claim 2 implies the result.
\emph{Proof of auxiliary claim 2.}
Let $(\Omega_j)_{j \in \mathbb{N}_0}$ be an exhaustion by relatively compact open subsets of $X$, i.e., $\Omega_j \Subset \Omega_{j+1} \Subset X$ for all $j \in \mathbb{N}_0$ and $X = \bigcup_{j \in \mathbb{N}_0} \Omega_j$. Since $P^+(D): \mathscr{D}'(X \times \mathbb{R}) \to \mathscr{D}'(X \times \mathbb{R})$ is surjective, $P(D): \mathscr{E}(X) \to \mathscr{E}(X)$ is as well. Hence, by \cite[Lemma 3.1]{DeKa22}, we have that $\operatorname{Proj}^1( \mathscr{E}_P(\overline{\Omega}_j))_{j \in \mathbb{N}_0} = 0$.
Let $X' \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ be arbitrary. We may assume that $X'_1 \Subset X'$. Since $\operatorname{Proj}^1( \mathscr{E}_P(\overline{\Omega}_j))_{j \in \mathbb{N}_0} = 0$, the Mittag--Leffler lemma \cite[Theorem 3.2.8]{Wengenroth} implies that there is $X'_3 \Subset X$ such that
\[
\mathscr{E}_P(\overline{X'_3}) \subseteq \mathscr{E}_P(X) + \{ f \in \mathscr{E}_P(\overline{X'}) \, ; \, \| f \|_{\overline{X'},r_2} \leq 1 \}.
\]
We may assume that $X' \subseteq X'_3$.
By multiplying both sides of the above inclusion with $\delta$, we find that
\begin{equation}
\label{ML}
\mathscr{E}_P(\overline{X'_3}) \subseteq \mathscr{E}_P(X) + \{ f \in \mathscr{E}_P(\overline{X'}) \, ; \, \| f \|_{\overline{X'},r_2} \leq \delta \}, \qquad \forall \delta > 0.
\end{equation}
Let $s,C_1,C_2, \varepsilon_0$ be as in the auxiliary claim 1 (with $X'_3 \Subset X$ and $r_1,r_2\in\mathbb{N}_0$ as above). Let $f\in\mathscr{E}_P(X)$ be arbitrary. Choose $g_\varepsilon \in\mathscr{E}_P(\overline{X'_3})$, $\varepsilon\in (0,\varepsilon_0)$, as in auxiliary claim 1. By \eqref{ML}, there is $h_\varepsilon \in \mathscr{E}_P(X)$ such that $\|g_\varepsilon-h_\varepsilon\|_{\overline{X'},r_2}\leq \varepsilon \|f\|_{\overline{X_2}}$ for all $\varepsilon\in (0,\varepsilon_0)$. Hence, for all $\varepsilon \in (0,\varepsilon_0)$
$$
\| f - h_\varepsilon\|_{\overline{X'_1}, r_1} \leq \| f - g_\varepsilon\|_{\overline{X'_1}, r_1}
+ \| g_\varepsilon - h_\varepsilon\|_{\overline{X'}, r_2} \leq
(C_1 +1) \varepsilon\max\{\|f\|_{\overline{X''_1}, r_1+1}, \|f\|_{\overline{X_2}}\},
$$
and
$$
\| h_\varepsilon\|_{\overline{X'}, r_2} \leq \| g_\varepsilon - h_\varepsilon\|_{\overline{X'}, r_2} + \| g_\varepsilon\|_{\overline{X'_3}, r_2} \leq \frac{C_2 +1}{\varepsilon^s} \|f\|_{\overline{X_2}}.
$$
\end{proof}
\begin{remark}
We believe that Proposition \ref{cor: explicit Omega} holds with $\|f-h_\varepsilon\|_{\overline{X'_1},r_1}$ replaced by $\|f-h_\varepsilon\|_{\overline{X_1},r_1}$, but are unable to show this.
\end{remark}
For hypoelliptic operators it is more natural to work with sup-seminorms. In this regard, we have the following result.
\begin{corollary}\label{cor: explicit Omegahypo}
Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ be hypoelliptic and let $X \subseteq \mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X \times \mathbb{R}) \to \mathscr{D}'(X \times \mathbb{R})$ is surjective. Let $X_1, X_2 \Subset X$ with $X_1 \subseteq X_2$ be such that $(\overline{X_1}, \overline{X_2})$ is augmentedly $P$-locating for $X$. Then, for all $X'_1 \Subset X_1$ and $X' \Subset X$ there exist $s, C>0$ such that
\begin{gather*}
\forall f \in\mathscr{E}_P(X),\,\varepsilon\in (0,1) \,\exists\,h_\varepsilon \in\mathscr{E}_P(X) \,:\, \\
\|f-h_\varepsilon\|_{\overline{X'_1}}\leq\varepsilon\|f\|_{\overline{X_2}} \quad
\mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'}}\leq\frac{C}{\varepsilon^s}\|f\|_{\overline{X_2}}.
\end{gather*}
\end{corollary}
\begin{proof}
Fix $X''_1 \Subset X_1$ such that $X'_1 \Subset X''_1$. Since $P(D)$ is hypoelliptic, there is $C' >0$ such that
$$
\|f\|_{\overline{X''_1},1} \leq C' \|f\|_{\overline{X_2}}, \qquad f \in \mathscr{E}_P(X).
$$
The result now follows from Proposition \ref{cor: explicit Omega} with $r_1 = r_2 = 0$.
\end{proof}
\section{Quantitative Runge type approximation theorems}\label{sec: quantitative Runge}
We now combine the results from Sections \ref{sec: qualitative Runge} and \ref{sec: technical} to obtain quantitative approximation results. In particular, we shall show Theorems \ref{theo: quantitative convex}--\ref{theo: quantitative Runge for wave operator} from the introduction. We start with the following general result.
\begin{proposition}\label{prop: general quantitative}
Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ and let $X\subseteq\mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X\times\mathbb{R}) \to \mathscr{D}'(X\times\mathbb{R})$ is surjective. Let $Y \subseteq X$ be open such that the restriction map $r_{\mathscr{E}}^P:\mathscr{E}_P(X)\rightarrow\mathscr{E}_P(Y)$ has dense range. Let $Y_1, Y_2 \Subset Y$ with $Y_1 \subseteq Y_2$ be such that $(\overline{Y_1}, \overline{Y_2})$ is augmentedly $P$-locating for $X$. Then, for all $Y'_1 \Subset Y_1$, $X' \Subset X$, and $r_1,r_2\in\mathbb{N}_0$ there exist $s, C>0$ such that
\begin{gather*}
\forall f\in\mathscr{E}_P(Y), \varepsilon\in (0,1) \,\exists h_\varepsilon \in\mathscr{E}_P(X) \, :\\
\|f-h_\varepsilon\|_{\overline{Y'_1},r_1}\leq \varepsilon\|f\|_{\overline{Y_2},r_1+1} \quad \mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|f\|_{\overline{Y_2}}.
\end{gather*}
\end{proposition}
\begin{proof}
Let $Y'_1 \Subset Y_1$, $X' \Subset X$, and $r_1,r_2\in\mathbb{N}_0$ be arbitrary.
By Proposition \ref{cor: explicit Omega} we find that there are $s,C >0$ such that
\begin{gather}\label{eq: decomposition 1}
\forall g\in\mathscr{E}_P(X),\varepsilon\in (0,1)\,\,\exists\,h_\varepsilon \in\mathscr{E}_P(X) \,: \\ \nonumber
\|g-h_\varepsilon\|_{\overline{Y'_1},r_1}\leq \varepsilon\|g\|_{\overline{Y_2},r_1+1} \quad \mbox{ and } \quad \|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|g\|_{\overline{Y_2}}.
\end{gather}
Let $f\in\mathscr{E}_P(Y)$ and $\varepsilon\in (0,1)$ be arbitrary. Since $r_{\mathscr{E}}^P:\mathscr{E}_P(X)\rightarrow\mathscr{E}_P(Y)$ has dense range, there is $g_\varepsilon\in\mathscr{E}_P(X)$ with $\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}\leq\varepsilon\|f\|_{\overline{Y_2}}$. Choose $h_\varepsilon$ according to \eqref{eq: decomposition 1} for $g = g_\varepsilon$. Then,
\begin{eqnarray*}
\|f-h_\varepsilon\|_{\overline{Y'_1}, r_1}&\leq&\|f-g_\varepsilon\|_{\overline{Y'_1}, r_1}+\|g_\varepsilon-h_\varepsilon\|_{\overline{Y'_1}, r_1}\\
&\leq&\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}+\varepsilon \|g_\varepsilon\|_{\overline{Y_2},r_1+1} \\
&\leq&\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}+\varepsilon\left(\|f-g_\varepsilon\|_{\overline{Y_2}, r_1+1}+\|f\|_{\overline{Y_2}, r_1+1}\right)\\
&\leq& 3\varepsilon\|f\|_{\overline{Y_2},r_1+1},
\end{eqnarray*}
and
$$\|h_\varepsilon\|_{\overline{X'},r_2}\leq\frac{C}{\varepsilon^s}\|g_\varepsilon\|_{\overline{Y_2}}\leq \frac{C}{\varepsilon^s}\left(\|f-g_\varepsilon\|_{\overline{Y_2}}+\|f\|_{\overline{Y_2}}\right)\leq\frac{2C}{\varepsilon^s}\|f\|_{\overline{Y_2}}.$$
This implies the result.
\end{proof}
For hypoelliptic operators, we obtain the following result.
\begin{proposition}\label{cor: general quantitativehypo}
Let $P\in\mathbb{C}[X_1,\ldots,X_d]$ be hypoelliptic and let $X\subseteq\mathbb{R}^d$ be open such that $P^+(D): \mathscr{D}'(X\times\mathbb{R}) \to \mathscr{D}'(X\times\mathbb{R})$ is surjective. Let $Y \subseteq X$ be open such that the restriction map $r_{\mathscr{E}}^P:\mathscr{E}_P(X)\rightarrow\mathscr{E}_P(Y)$ has dense range. Let $Y_1, Y_2 \Subset Y$ with $Y_1 \subseteq Y_2$ be such that $(\overline{Y_1}, \overline{Y_2})$ is augmentedly $P$-locating for $X$. Then, for all $Y'_1 \Subset Y_1$ and $X' \Subset X$ there exist $s,
based on discrete logarithms. Abstract: A new signature scheme is proposed, together with an implementation of the Diffie-Hellman key distribution scheme that achieves a public-key cryptosystem. It can be defined over any cyclic group G; its security depends upon the difficulty of a certain problem in G related to computing discrete logarithms. To decrypt the ciphertext, the receiver needs to compute the shared secret. Last Updated: 16-11-2018. ElGamal encryption is a public-key cryptosystem; ElGamal is another popular public-key encryption algorithm. The sender chooses a random exponent, computes the ciphertext, and sends this to the receiver. If Bob now wants to send a message m to Alice, he randomly picks a number k. The resultant multi-receiver encryption scheme has 1 + 1/n ciphertext expansion, roughly a reduction by half. The ElGamal signature algorithm is rarely used in practice. The receiver now has the message digest. Alice can use her private key to reconstruct the message m. ElGamal T (1985) A public key cryptosystem and a signature scheme based on discrete logarithms. So in total there are two key pairs, one of the receiver and one of the sender. Thus, "mod p" is omitted when computing exponentiations and discrete logarithms, and "mod q" is omitted when performing computation on exponents. I want to implement a chat feature whereby every text chat to be sent will be encrypted with the public key of the receiver. For multiple receivers, we can sample one random value r and encrypt the message in Kurosawa's manner, as if each point of the public key were for an independent receiver.
q can be an ElGamal private key, and then K = (p, q, g, y) with y = g^q mod p is the corresponding public key. Unlike textbook RSA, the ElGamal public-key cryptosystem is not deterministic. Alice and Bob share a prime number p and a generator g; Alice chooses a random number as her private key.
• Then a second encryption is performed using the receiver's public key, which delivers confidentiality.
• The disadvantage with this scheme is that the public-key algorithm, which is complex, must be used four times.
(Note that Mr. Elgamal's last name does not have a capital letter 'G'.) If you are thinking "Maybe I could securely distribute the public key only to the intended receiver", then you are not disclosing any key at all, and the definitions of public and private no longer hold: you would be using RSA as a sort of secret-key cipher like AES. A disadvantage of the ElGamal system is that the encrypted message becomes very big, about twice the size of the original message m; for this reason it is only used for small messages such as secret keys. In asymmetric cryptography, or public-key cryptography, the sender and the receiver use a pair of public-private keys, as opposed to the same symmetric key, and their cryptographic operations are therefore asymmetric. Security of an asymmetric-key (public-key) cryptosystem such as RSA or ElGamal is measured with respect to a chosen-plaintext attack (CPA) and a chosen-ciphertext attack (CCA). Here the plaintext is encrypted using the receiver's public key. As before, the group is the largest multiplicative subgroup of the integers modulo p, with p prime.
The security of the ElGamal signature scheme is based (like DSA) on the discrete logarithm problem: given a cyclic group, a generator g, and an element h, it is hard to find an integer x such that g^x = h. The public key of the receiver is retrieved and the ciphertext is calculated as c_1 = α^v and c_2 = m·β^v mod p, giving the encryption c = (c_1, c_2). At this point it is said that the user, by chance, found a value "d" and an outline of the above-mentioned encryption. This paper proposes a new three-party extension of the ElGamal encryption scheme and a multi-receiver extension of the ElGamal encryption scheme. These public-key systems are generally called ElGamal public-key encryption schemes. Taher ElGamal was actually Marty Hellman's student. y = g^x mod p. (1). A. Algorithm: Key generation for ElGamal public-key encryption. Each entity creates a public key and a corresponding private key. IEEE Trans Inf Theory 31:469-472. Symmetric cryptography was well suited for organizations such as governments, the military, and big financial corporations involved in classified communication. (This assures authenticity, as only the sender has his private key, so only the sender can encrypt with it; the result can then be decrypted with the sender's public key.) Elgamal digital signature scheme: this scheme uses the same keys but a different algorithm. The receiver decrypts the digital signature using the public key of the sender.
For no apparent reason everyone calls this the "ElGamal" system. The ElGamal cryptosystem can be defined as a cryptographic algorithm that uses the public- and private-key concept to secure communication between two systems. As with Diffie-Hellman, Alice and Bob have a (publicly known) prime p and a generator g. Overview: the ElGamal signature scheme is a digital signature scheme based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem. There are three main methods of creating public-key encryption: RSA (based on prime-number factorization); elliptic curves; and discrete logarithms (ElGamal). Figure 6.4 shows steps through the algorithm from encryption to decryption. More likely, you want public-key authenticated encryption, for which you should use NaCl/libsodium crypto_box_curve25519xsalsa20poly1305, if you have a definite notion of a sender and receiver who know one another's public keys and want to exchange unforgeable secret messages. To sign a message M, choose a random number k such that k has no factor in common with p - 1 and compute a = g^k mod p. Then find a value s that satisfies the signature equation. This cryptosystem is based on the difficulty of finding discrete logarithms in a cyclic group: even if we know g^a and g^k, it is extremely difficult to compute g^{ak}. The ElGamal public-key encryption scheme is based on the intractability of the discrete logarithm problem (DLP), which will be described in this section. In a chosen-plaintext attack (sometimes called a semantic attack), Alice and Bob's adversary Eve is passive. Each entity A should do the following: 1. choose a random number k which is smaller than p.
He then computes c_1 and c_2 and sends them to Alice. (ElGamal Public-Key Encryption Scheme.) In 1984 Taher ElGamal introduced a cryptosystem which depends on the discrete logarithm problem. The ElGamal encryption system is an asymmetric-key encryption algorithm for public-key cryptography which is based on the Diffie-Hellman key exchange. ElGamal depends on a one-way function, meaning that the
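The key generation, encryption, and decryption steps described above can be sketched in a few lines of Python. This is textbook ElGamal over the multiplicative group mod p; the prime, generator, and function names are illustrative only — real deployments need large, carefully chosen parameters and proper message encoding.

```python
import random

def keygen(p, g):
    """Toy ElGamal key generation: private x, public y = g^x mod p."""
    x = random.randrange(1, p - 1)      # private key
    y = pow(g, x, p)                    # public key
    return (p, g, y), x

def encrypt(pub, m):
    """Encrypt m < p with a fresh random exponent k per message."""
    p, g, y = pub
    k = random.randrange(1, p - 1)      # ephemeral exponent
    c1 = pow(g, k, p)
    c2 = (m * pow(y, k, p)) % p
    return c1, c2

def decrypt(priv, pub, c1, c2):
    """Recover m = c2 / c1^x mod p (inverse via Fermat, p prime)."""
    p, _, _ = pub
    s = pow(c1, priv, p)                # shared secret c1^x
    return (c2 * pow(s, p - 2, p)) % p

# Toy parameters, assumed for the demo only.
pub, priv = keygen(467, 2)
c1, c2 = encrypt(pub, 123)
assert decrypt(priv, pub, c1, c2) == 123
```

Note the ciphertext (c1, c2) is twice the size of m, matching the expansion factor mentioned above.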
at \(s \ngeq r\),
and that it is \(\pi_n(X(s),\phi^X_{r,s}(x)) \in \mathbf{Grp}\) at \(s \geq r\).
\end{defn}
Note that \(\pi_n\) is functorial for every \(n \in \mathbb{N}\).
\begin{defn}
\label{interleavinginhomotopygroups}
Let \(\epsilon,\delta \geq 0 \in \mathbf{R}^{m}\).
Assume given a homotopy class of morphisms \([f] \colon X' \to Y'^\epsilon \in \mathsf{Ho}\left(\SS^{\mathbf{R}^m}\right)\).
Let \(X' \simeq X\) be a cofibrant replacement, let \(Y' \simeq Y\) be a fibrant replacement,
and let \(f \colon X \to_\epsilon Y\) be a representative of \([f]\).
We say that \([f]\) \define{induces an \((\epsilon,\delta)\)-interleaving in homotopy groups} if the induced map \(\pi_0(f) \colon \pi_0(X) \to_\epsilon \pi_0(Y)\) is part of an \((\epsilon,\delta)\)-interleaving
of persistent sets, and if for every \(r \in \mathbf{R}^m\), every \(x \in X(r)\), and every \(n \geq 1 \in \mathbb{N}\),
the induced map
\(\pi_n(f) : \pi_n(X,x) \to_\epsilon \pi_n(Y,f(x))\) is part of an \((\epsilon,\delta)\)-interleaving
of persistent groups.
\end{defn}
It is clear that the definition above is independent of the choices of representatives.
A standard result in classical homotopy theory is that a fibration of Kan complexes
inducing an isomorphism in all homotopy groups has the right lifting property with respect
to cofibrations (\cite[Theorem~I.7.10]{GJ}).
An analogous, persistent, result
(\cref{liftingpropertyncofibrations}), says that, for a fibration of fibrant objects
inducing a \(\delta\)-interleaving in homotopy groups, the lift exists up to a shift, which depends on both
\(\delta\) and on a certain ``length'' \(n \in \mathbb{N}\) associated to the cofibration.
To make this precise, we introduce the notion of \(n\)dimensional extension.
\begin{defn}
\label{def n cofibration}
Let \(A,B \in \SS^{\RR^m}\) and let \(n \in \mathbb{N}\).
A map \(j\colon A \to B\) is an \define{\(n\)-dimensional extension} (of \(A\)) if there exists a set \(I\), a family of tuples of real numbers
\(\left\{r_i \in \mathbf{R}^m\right\}_{i \in I}\), and commutative squares of the form depicted on the left below, that together give rise to the pushout square on the right below.
Here, \(\partial D^n \hookrightarrow D^n\) stands for \(S^{n-1}\hookrightarrow D^n\) if \(\ \SS = \mathbf{Top}\), and for \(\partial \Delta^n \hookrightarrow \Delta^n\) if \(\ \SS = \mathbf{sSet}\).
\[\begin{tikzcd}
\partial D^n \ar[r,"f_i"] \ar[d,hook] & A(r_i) \ar[d,"j_{r_i}"] & & & & \coprod_{i \in I} r_i \odot(\partial D^n) \ar[r,"f"] \ar[d] & A \ar[d,"j"]\\
D^n \ar[r,"g_i"] & B(r_i) & & & & \coprod_i r_i \odot( D^n) \ar[r,"g"] & B
\end{tikzcd}\]
\end{defn}
A \define{single dimensional extension} is an \(n\)-dimensional extension for some \(n\in \mathbb{N}\).
\begin{defn}
Let \(\iota \colon A \to B\) be a projective cofibration of \(\SS^{\RR^m}\) and let \(n \in \mathbb{N}\).
We say that \(\iota\) is an \define{\(n\)-cofibration} if it factors as the composite
of \(n+1\) maps \(f_0, \dots, f_n\), with \(f_i\) an \(n_i\)-dimensional extension for some \(n_i \in \mathbb{N}\).
We say that \(A\in \SS^{\RR^m}\) is \(n\)-cofibrant if the map \(\emptyset \to A\) is an \(n\)-cofibration.
\end{defn}
The next lemma, which follows directly from \cref{t: cofibrant = filtered}, gives a rich family of examples of \(n\)-cofibrant persistent simplicial sets.
Recall that a simplicial set is \(n\)-skeletal if all its simplices in dimensions above \(n\) are degenerate.
\begin{lem}
\label{ssetncofibrant}
Let \(A \in \mathbf{sSet}^{\mathbf{R}^m}\) and let \(n \in \mathbb{N}\).
If \(A\) is projective cofibrant and pointwise \(n\)-skeletal, then it is \(n\)-cofibrant.\qed
\end{lem}
\begin{eg}
The Vietoris--Rips complex \(\mathsf{VR}(X)\) of a metric space \(X\), as defined in \cref{VRcomplex example}, is \(n\)-cofibrant if the underlying set of \(X\) has finite cardinality \(\vert X \vert = n + 1\).
If one is interested in persistent (co)homology of some bounded degree \(n\), then one can restrict computations to the \((n+1)\)-skeleton of a Vietoris--Rips complex,
which is \((n+1)\)-cofibrant.
\end{eg}
A result analogous to \cref{ssetncofibrant}, but for persistent topological spaces, does not hold, as cells are not necessarily attached in order of dimension.
This motivates the following definition.
\begin{defn}
\label{persistent CW}
Let \(n\in \mathbb{N}\).
A persistent topological space \(A \in \mathbf{Top}^{\mathbf{R}^m}\) is an
\define{\(n\)-dimensional persistent CW-complex} if
the map \(\emptyset \to A\) can be factored as a composite
of maps \(f_0, \dots, f_n\), with \(f_i\) an \(i\)-dimensional extension.
\end{defn}
\begin{eg}
The geometric realization of any \(n\)-cofibrant
persistent simplicial set is an \(n\)-dimensional persistent CW-complex.
\end{eg}
\begin{lem}
\label{cwncofibrant}
Every \(n\)-dimensional persistent CW-complex is \(n\)-cofibrant.\qed
\end{lem}
We now make precise the notion of lifting property up to a shift.
\begin{defn}
Let \(i \colon A \to B\) and \(p \colon Y \to X\) be morphisms in \(\SS^{\RR^m}\) and let \(\delta \geq 0\).
We say that \(p\) has the \define{right \(\delta\)-lifting property} with respect to \(i\)
if for all morphisms \(A \to Y\) and \(B \to X\) making the square on the left below commute, there exists
a diagonal \(\delta\)-morphism \(f \colon B \to_\delta Y\) rendering the diagram commutative.
Below, the diagram on the left is shorthand for the one on the right.
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=3em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]
{ A & Y & & & A & Y & Y^\delta\\
B & X & & & B & X & X^\delta.\\};
\path[line width=0.75pt, -{>[width=8pt]}]
(m11) edge node [above] {} (m12)
edge node [left] {$i$} (m21)
(m21) edge [dotted] node [above] {} node [at end, below] {$\leng{\delta}\;\;\,$} (m12)
edge node [above] {} (m22)
(m12) edge node [right] {$p$} (m22)
(m15) edge node [above] {} (m16)
edge node [left] {$i$} (m25)
(m16) edge node [right] {$p$} (m26)
edge node [above] {$\mathsf{S}_{0,\delta}(\mathsf{id}_Y)$} (m17)
(m17) edge node [right] {$p^\delta$} (m27)
(m25) edge [bend left=10, -, line width=6pt, draw=white] (m17)
edge [bend left=10, dotted] node [left] {$\,\,\,\,\,\,$} (m17)
edge node [above] {} (m26)
(m26) edge node [below] {$\mathsf{S}_{0,\delta}(\mathsf{id}_X)$} (m27)
;
\end{tikzpicture}\]
\end{defn}
We now prove \cref{jardineslemma}, an adaptation of a result of Jardine, which says that fibrations inducing interleavings in homotopy groups have a shifted right lifting property, as defined above.
The main difference is that we work in the multipersistent setting.
We use simplicial notation and observe that the corresponding statement for persistent topological spaces follows from the simplicial one by using the singular complexrealization adjunction.
We recall a standard, technical lemma whose proof is given within that of, e.g., \cite[Theorem~I.7.10]{GJ}.
\begin{lem}
\label{homotopylifting}
Suppose given a commutative square of simplicial sets
\begin{equation}
\label{lp2}
\begin{tikzcd}
\partial \Delta^n \ar[r,"\alpha"] \ar[d,hook] & X \ar[d,"p"]\\
\Delta^n \ar[r,"\beta"] &Y,
\end{tikzcd}
\end{equation}
where \(p\) is a Kan fibration between Kan complexes. If there is a commutative diagram like the one on the left below, for which the lifting problem on the right admits a solution, then the initial square \eqref{lp2} admits a solution.
\[\begin{tikzcd}
\partial \Delta^n \ar[ddd,hook] \ar[dr,"(\mathsf{id}_{\partial \Delta^n} \times \{1\}) "] \ar[drrr,"\alpha", bend left]&&\\
& \partial \Delta^n \times \Delta^1 \ar[d,hook] \ar[rr," h"] & & X \ar[d,"p"] & & & \partial \Delta^n \ar[rr,"h \circ (\mathsf{id}_{\partial \Delta^n} \times \{0\}) "] \ar[d,hook] & & X \ar[d,"p"]\\
&\Delta^n \times \Delta^1 \ar[rr,"g"] & & Y & & & \Delta^n \ar[rr,"g \circ( \mathsf{id}_{\Delta^n} \times \{0\})"] &&Y\\
\Delta^n \ar[ur,"(\mathsf{id}_{\Delta^n} \times \{1\}) "{swap}] \ar[urrr,"\beta"{swap},bend right] &&
\end{tikzcd}\]
\end{lem}
\begin{lem}[cf.~{\cite[Lemma~14]{Jardine2020}}]
\label{jardineslemma}
Let \(\delta \geq 0\), and let \(f \colon X \to Y \in \SS^{\mathbf{R}^m}\) induce a \((0,\delta)\)-interleaving in homotopy groups.
If \(X\) and \(Y\) are projective fibrant and \(f\) is a projective fibration, then \(f\) has the right
\(2\delta\)-lifting property with respect to boundary inclusions \(r\odot \partial D^n \to r \odot D^n\),
for every \(r \in \mathbf{R}^m\) and every \(n \in \mathbb{N}\).
\end{lem}
\begin{proof}
Suppose given a commutative diagram as on the left below, which corresponds to the one on the right:
\begin{equation}
\label{lifting problem}
\begin{tikzcd}
r\odot \partial \Delta^n \ar[r,"a"] \ar[d] & X \ar[d,"p"] & & & \partial \Delta^n \ar[r,"\alpha"] \ar[d] & X(r) \ar[d,"p_r"]\\
r\odot \Delta^n \ar[r,"b"] & Y & & & \Delta^n \ar[r,"\beta"] &Y(r).
\end{tikzcd}
\end{equation}
We must find a \(2\delta\)-lift for the diagram on the right.
The proof strategy is to appeal to \cref{homotopylifting} to simplify \(\alpha\), then prove that at the cost of a \(\delta\)-shift we can further reduce \(\alpha\) to a constant map, and then show that the simplified lifting problem can be solved at the cost of another \(\delta\)-shift. So we end up with a \(2\delta\)-lift, as in the statement.
We proceed by proving the claims in opposite order.
We start by showing that
\eqref{lifting problem} can be solved up to a \(\delta\)-shift whenever \(\alpha\) is constant.
Let us assume that \(\alpha\) is of the form \(\alpha = \ast\) for some \(\ast \in X(r)_0\). Since, then, \(\beta\) represents an element \([ \beta] \in \pi_n(Y(r),\ast)\), there exists a map \(\alpha' \colon \Delta^n \to X(r+\delta)\) whose restriction to \(\partial \Delta^n\) is constant on \(\ast \in X(r)_0\), and such that there is a homotopy \(h\colon \beta \simeq p\alpha'\)  
# Behavioural Modelling & Timing in Verilog
Behavioral models in Verilog contain procedural statements, which control the simulation and manipulate variables of the data types. All these statements are contained within procedures. Each procedure has an activity flow associated with it.
During simulation of a behavioral model, all the flows defined by the 'always' and 'initial' statements start together at simulation time 'zero'. The initial statements are executed once, and the always statements are executed repetitively. In this model, the register variables a and b are initialized to binary 1 and 0 respectively at simulation time 'zero'. The initial statement is then completed and is not executed again during that simulation run. This initial statement contains a begin-end block (also called a sequential block) of statements. In this begin-end block, a is initialized first, followed by b.
### Example of Behavioral Modeling
module behave;
reg [1:0] a, b;

initial
begin
   a = 'b1;
   b = 'b0;
end

always
begin
   #50 a = ~a;
end

always
begin
   #100 b = ~b;
end

endmodule
## Procedural Assignments
Procedural assignments are for updating reg, integer, time, and memory variables. There is a significant difference between procedural assignment and continuous assignment as described below −
Continuous assignments drive net variables and are evaluated and updated whenever an input operand changes value.
Procedural assignments update the value of register variables under the control of the procedural flow constructs that surround them.
The right-hand side of a procedural assignment can be any expression that evaluates to a value. However, part-selects on the right-hand side must have constant indices. The left-hand side indicates the variable that receives the assignment from the right-hand side. The left-hand side of a procedural assignment can take one of the following forms −
• register, integer, real, or time variable − An assignment to the name reference of one of these data types.
• bit-select of a register, integer, real, or time variable − An assignment to a single bit that leaves the other bits untouched.
• part-select of a register, integer, real, or time variable − A part-select of two or more contiguous bits that leaves the rest of the bits untouched. For the part-select form, only constant expressions are legal.
• memory element − A single word of a memory. Note that bit-selects and part-selects are illegal on memory element references.
• concatenation of any of the above − A concatenation of any of the previous four forms can be specified, which effectively partitions the result of the right-hand side expression and assigns the partition parts, in order, to the various parts of the concatenation.
## Delay in Assignment (not for synthesis)
In a delayed assignment, Δt time units pass before the statement is executed and the left-hand assignment is made. With an intra-assignment delay, the right side is evaluated immediately, but there is a delay of Δt before the result is placed in the left-hand assignment. If another procedure changes a right-hand side signal during Δt, it does not affect the output. Delays are not supported by synthesis tools.
### Syntax
• Procedural assignment − variable = expression;
• Delayed assignment − #Δt variable = expression;
• Intra-assignment delay − variable = #Δt expression;
### Example
reg [7:0] sum; reg h, ziltch;
sum[7] = b[7] ^ c[7]; // execute now.
ziltch = #15 ckz&h; /* ckz&h evaluated now; ziltch changed
after 15 time units. */
#10 h = b&c; /* 10 units after ziltch changes, b&c is
evaluated and h changes. */
## Blocking Assignments
A blocking procedural assignment statement must be executed before the execution of the statements that follow it in a sequential block. A blocking procedural assignment statement does not prevent the execution of statements that follow it in a parallel block.
### Syntax
The syntax for a blocking procedural assignment is as follows −
<lvalue> = <timing_control> <expression>
Where lvalue is a data type that is valid for a procedural assignment statement, = is the assignment operator, and timing control is the optional intra-assignment delay. The timing control delay can be either a delay control (for example, #6) or an event control (for example, @(posedge clk)). The expression is the right-hand side value the simulator assigns to the left-hand side. The = assignment operator used by blocking procedural assignments is also used by procedural continuous assignments and continuous assignments.
### Example
rega = 0;
rega[3] = 1; // a bitselect
rega[3:5] = 7; // a partselect
mema[address] = 8'hff; // assignment to a memory element
{carry, acc} = rega + regb; // a concatenation
## Nonblocking (RTL) Assignments
The nonblocking procedural assignment allows you to schedule assignments without blocking the procedural flow. You can use the nonblocking procedural statement whenever you want to make several register assignments within the same time step without regard to order or dependence upon each other.
### Syntax
The syntax for a nonblocking procedural assignment is as follows −
<lvalue> <= <timing_control> <expression>
Where lvalue is a data type that is valid for a procedural assignment statement, <= is the nonblocking assignment operator, and timing control is the optional intra-assignment timing control. The timing control delay can be either a delay control or an event control (for example, @(posedge clk)). The expression is the right-hand side value the simulator assigns to the left-hand side. The nonblocking assignment operator is the same operator the simulator uses for the less-than-or-equal relational operator. The simulator interprets the <= operator as a relational operator when you use it in an expression, and as an assignment operator when you use it in a nonblocking procedural assignment construct.
When the simulator encounters a nonblocking procedural assignment, it evaluates and executes the assignment in two steps as follows −
• The simulator evaluates the right-hand side and schedules the assignment of the new value to take place at the time specified by the procedural timing control.
• At the end of the time step, in which the given delay has expired or the appropriate event has taken place, the simulator executes the assignment by assigning the value to the left-hand side.
### Example
module evaluates2(out);
output out;
reg a, b, c;
initial
begin
a = 0;
b = 1;
c = 0;
end
always c = #5 ~c;
always @(posedge c)
begin
a <= b;
b <= a;
end
endmodule
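The module above toggles c every 5 time units and, on each positive edge of c, swaps a and b. The swap only works because of the two-step evaluate-then-commit semantics: both right-hand sides are read before either register is written. As a language-neutral illustration (a hypothetical Python sketch, not a Verilog simulator), the difference between blocking and nonblocking updates can be modeled like this:

```python
# Hypothetical sketch of blocking vs. nonblocking update semantics.
# Registers are a dict; assignments are (lhs, rhs) pairs of register names.

def blocking_step(assignments, regs):
    # Blocking (=): each assignment completes before the next one starts.
    for lhs, rhs in assignments:
        regs[lhs] = regs[rhs]

def nonblocking_step(assignments, regs):
    # Nonblocking (<=): all right-hand sides are evaluated against the OLD
    # values first, then every scheduled value is committed at the end of
    # the time step.
    scheduled = [(lhs, regs[rhs]) for lhs, rhs in assignments]
    for lhs, value in scheduled:
        regs[lhs] = value

swap = [("a", "b"), ("b", "a")]

r1 = {"a": 0, "b": 1}
blocking_step(swap, r1)      # a = b; b = a  ->  both registers end up 1

r2 = {"a": 0, "b": 1}
nonblocking_step(swap, r2)   # a <= b; b <= a  ->  the values are swapped
```

With blocking assignments, b reads the value a was just given, so both registers end with b's old value; with nonblocking assignments the registers exchange values, which is why the always block above swaps a and b on every positive edge of c.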
## Conditions
The conditional statement (or if-else statement) is used to make a decision as to whether a statement is executed or not.
Formally, the syntax is as follows −
<statement>
::= if ( <expression> ) <statement_or_null>
= if ( <expression> ) <statement_or_null>
else <statement_or_null>
<statement_or_null>
::= <statement>
= ;
The <expression> is evaluated; if it is true (that is, has a nonzero known value), the first statement executes. If it is false (has a zero value or the value is x or z), the first statement does not execute. If there is an else statement and <expression> is false, the else statement executes. Since the numeric value of the if expression is tested for being zero, certain shortcuts are possible.
For example, the following two statements express the same logic −
if (expression)
if (expression != 0)
Since the else part of an if-else is optional, there can be confusion when an else is omitted from a nested if sequence. This is resolved by always associating the else with the closest previous if that lacks an else.
### Example
if (index > 0)
if (rega > regb)
result = rega;
else // else applies to preceding if
result = regb;
If that association is not what you want, use a begin-end block statement to force the proper association −
if (index > 0)
begin
if (rega > regb)
result = rega;
end
else
result = regb;
### Construction of: if else if
The following construction occurs so often that it is worth a brief separate discussion.
Example
if (<expression>)
<statement>
else if (<expression>)
<statement>
else if (<expression>)
<statement>
else
<statement>
This sequence of ifs (known as an if-else-if construct) is the most general way of writing a multiway decision. The expressions are evaluated in order; if any expression is
\section{Introduction}
Resources such as datasets, pretrained models, and benchmarks are crucial for the advancement of natural language processing (NLP) research. Nevertheless, most pretrained models and datasets are developed for high-resource languages such as English, French, and Chinese~\cite{Devlin2019bert,martinetal2020camembert,chenetal2020sibert}.
Although the number of datasets, models, and benchmarks has been increasing for low-resource languages such as Indonesian~\cite{wilie2020indonlu, kotoetal2020indolem}, Bangla~\cite{bhattacharjee2021banglabert}, and Filipino~\cite{cruz2020establishing}, these datasets primarily focus on natural language understanding (NLU) tasks, which only cover a subset of practical NLP systems today. In contrast, far fewer natural language generation (NLG) benchmarks have been developed for low-resource languages; most multilingual NLG resources thus far have primarily focused on machine translation, highlighting the need to generalize these low-resource NLG benchmarks to other commonly used NLG tasks such as summarization and question answering. While recent work has developed more comprehensive multilingual NLG benchmarks, such as XGLUE~\cite{Liang2020xglue} and GEM~\cite{gehrmann2021gem}, these efforts still primarily evaluate the NLG models on fairly high-resource languages.
\begin{table*}[!t]
\centering
\resizebox{0.96\textwidth}{!}{
\begin{tabular}{lrrrcl}
\toprule
\textbf{Dataset} & \textbf{\# Words} & \textbf{\# Sentences} & \textbf{Size} & \textbf{Style}& \textbf{Source} \\
\midrule
\texttt{Indo4B} \cite{wilie2020indonlu} & 3,581,301,476 & 275,301,176 & 23.43 GB & mixed & IndoBenchmark \\
Wiki Sundanese$^1$ & 4,644,282 & 182,581 & 40.1 MB & formal & Wikipedia \\
Wiki Javanese$^1$ & 6,015,961 & 231,571 & 53.2 MB & formal & Wikipedia \\
CC100 Sundanese & 13,761,754 & 433,086 & 107.6 MB & mixed & Common Crawl \\
CC100 Javanese & 20,560,458 & 690,517 & 161.9 MB & mixed & Common Crawl \\
\midrule
\textbf{TOTAL} & 3,626,283,931 & 276,838,931 & 23.79 GB & & \\
\bottomrule
\end{tabular}
}
\caption{\texttt{Indo4B-Plus} dataset statistics. $^1$ \url{https://dumps.wikimedia.org/backup-index.html}.}
\label{tab:ID4B_corpus_stats}
\end{table*}
In this paper, we take a step towards building NLG models for some low-resource languages by introducing \texttt{IndoNLG}, a benchmark of multilingual resources and standardized evaluation data for three widely spoken languages of Indonesia: Indonesian, Javanese, and Sundanese. Cumulatively, these languages are spoken by more than 100 million native speakers, and thus comprise an important use case of NLG systems today.
Despite the prevalence of these languages, there has been relatively little prior work on developing accurate NLG systems for them, a limitation we attribute to a lack of publicly available resources and evaluation benchmarks. To help address this problem, \texttt{IndoNLG} encompasses clean pretraining data, pretrained models, and downstream NLG tasks for these three languages. For the downstream tasks, we collect preexisting datasets for English-Indonesian machine translation, monolingual summarization, question answering, and dialogue. Beyond these existing datasets, we prepare two new machine translation datasets (Sundanese-Indonesian and Javanese-Indonesian) to evaluate models on the regional languages, Javanese and Sundanese, which have substantially fewer resources, in terms of \emph{both} unlabelled and labelled datasets, than the Indonesian language.
How, then, can we build models that perform well for such low-resource languages? Building monolingual pretrained models solely using low-resource languages, such as Sundanese and Javanese, is ineffective since only a small amount of unlabelled data is available for pretraining. In this paper, we explore two approaches. The first approach is to leverage existing pretrained multilingual models, such as mBART~\citep{liu2020mbart}. While this approach is quite effective, we explore a second approach that leverages positive transfer from related languages~\cite{Hu2020xtreme,Khanuja2021muril}, such as pretraining with a corpus of mostly Indonesian text. We justify this approach through the fact that Sundanese, Javanese, and Indonesian all belong to the same Austronesian language family~\cite{blust2013austronesian, novitasari2020cross}, and share various morphological and semantic features as well as common lexical items through the presence of Sundanese and Javanese loanwords in the Indonesian language~\citep{devianty2016loan}.
We show that pretraining on mostly Indonesian text achieves performance competitive with the larger multilingual models, despite using 5$\times$ fewer parameters and smaller pretraining data, and achieves particularly strong performance on tasks involving the very low-resource Javanese and Sundanese languages.
Our contributions are as follows:
1) we curate a multilingual pretraining dataset for Indonesian, Sundanese, and Javanese;
2) we introduce two models that support generation in these three major languages in Indonesia, IndoBART and IndoGPT;
3) to the best of our knowledge, we develop the first diverse benchmark to evaluate the capability of Indonesian, Sundanese, and Javanese generation models; and
4) we show that pretraining solely on related languages (i.e., mostly Indonesian text) can achieve strong performance on two very low-resource languages, Javanese and Sundanese, compared to existing multilingual models, despite using fewer parameters and less pretraining data. This finding showcases the benefits of pretraining on closely related, \emph{local} languages to enable more efficient learning of low-resource languages.
\begin{table*}[!t]
\centering
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{lcccccc}
\toprule
\textbf{Dataset} & \textbf{Train} & \textbf{Valid} & \textbf{Test} & \textbf{Task Description} &\textbf{Domain} & \textbf{Style}\\ \midrule
\multicolumn{7}{c}{Language Pair Tasks} \\ \midrule
Bible En$\leftrightarrow$Id & 23,308 & 3,109 & 4,661 & machine translation & religion & formal \\
TED En$\leftrightarrow$Id & 87,406 & 2,677 & 3,179 & machine translation & mixed & formal \\
News En$\leftrightarrow$Id & 38,469 & 1,953 & 1,954 & machine translation & news & formal \\
Bible Su$\leftrightarrow$Id & 5,968 & 797 & 1,193 & machine translation & religion & formal \\
Bible Jv$\leftrightarrow$Id & 5,967 & 797 & 1,193 & machine translation & religion & formal \\ \midrule
\multicolumn{7}{c}{Indonesian Tasks} \\
\midrule
Liputan6 (Canonical) & \multirow{2}{*}{193,883} & 10,972 & 10,972 & \multirow{2}{*}{summarization} & \multirow{2}{*}{news} & \multirow{2}{*}{formal} \\
Liputan6 (Xtreme) & & 4,948 & 3,862 \\
Indosum & 2,495 & 311 & 311 & summarization & news & formal \\
TyDiQA (Id)$^{\dagger}$ & 4,847 & 565 & 855 & question answering & mixed & formal \\
XPersona (Id) & 16,878 & 484 & 484 & chitchat & casual & colloquial \\
\bottomrule
\end{tabular}
}
\caption{Task statistics and descriptions. $^{\dagger}$We create new splits for the train and test.}
\label{tab:dataset}
\end{table*}
\section{Related Work}
\paragraph{NLP Benchmarks.}
Numerous benchmarks have recently emerged, which have catalyzed advances in monolingual and crosslingual transfer learning. These include NLU benchmarks for low-resource languages including IndoNLU~\cite{wilie2020indonlu}, IndoLEM~\cite{kotoetal2020indolem}, and those focusing on Filipino~\cite{cruz2020establishing}, Bangla~\cite{bhattacharjee2021banglabert}, and Thai~\cite{lowphansirikul2021wangchanberta}; neural machine translation (MT) datasets for low-resource scenarios including for Indonesian \cite{guntara2020benchmarking}, African languages \cite{duhetal2020benchmarking,lakew2020low}, and Nepali and Sinhala \cite{guzman2019flores}; and large-scale multilingual benchmarks such as XTREME \cite{Hu2020xtreme}, MTOP \cite{li2020mtop}, and XGLUE \cite{Liang2020xglue}.
\citet{winata2021multilingual,aguilar2020lince,khanuja2020gluecos} further developed multilingual benchmarks to evaluate the effectiveness of pretrained multilingual language models. More recently, GEM \cite{gehrmann2021gem} covers NLG tasks in various languages, together with automated and human evaluation metrics. Our benchmark compiles languages and tasks that are \emph{not} covered in that prior work, such as local multilingual (Indonesian, Javanese, Sundanese, and English) MT tasks, Indonesian summarization, and Indonesian chitchat dialogue.
\paragraph{Pretrained NLG Models.}
Recently, the paradigm of pretraining-then-finetuning has achieved remarkable success in NLG, as evidenced by the success of monolingual pretrained NLG models. GPT-2 \cite{radford2019language}, and later GPT-3 \cite{NEURIPS2020_1457c0d6}, demonstrated that language models can perform zero-shot transfer to downstream tasks via generation. Other recent state-of-the-art models are BART \cite{lewis2020bart}, which maps corrupted documents to their original form, and the encoder-decoder T5 \cite{raffel2020exploring}, which resulted from a thorough investigation of architectures, objectives, datasets, and pretraining strategies. These monolingual models have been generalised to the \emph{multilingual} case by pretraining the architectures on multiple languages; examples include mBART~\cite{liu2020mbart} and mT5 \cite{xue2020mt5}. In this paper, we focus on local, near-monolingual models for the languages of Indonesia, and systematically compare them on our benchmark with such larger multilingual models.
\section{\texttt{IndoNLG} Benchmark}
\subsection{\texttt{Indo4B-Plus} Pretraining Dataset}
\label{sec:indo4b}
Our \texttt{Indo4B-Plus} dataset consists of three languages: Indonesian, Sundanese, and Javanese. For the Indonesian data, we use the \texttt{Indo4B} dataset~\cite{wilie2020indonlu}. For the Sundanese and Javanese data, we collect and preprocess text from Wikipedia and CC100~\cite{wenzek2020ccnet}.
As shown in Table \ref{tab:ID4B_corpus_stats}, the total number of words in the local languages is minuscule ($\approx$~1\% combined) compared to the total number of words in the Indonesian language. In order to alleviate this problem, we rebalance the \texttt{Indo4B-Plus} corpus. Following~\citet{liu2020mbart}, we upsample or downsample data in each language according to the following formula:
\begin{align}
\lambda_i = \frac{p_i^\alpha}{p_i \sum_j^L{p_j^\alpha}},
\end{align}
where $\lambda_i$ denotes the up/downsampling ratio for language $i$ and $p_i$ is the percentage of language $i$ in \texttt{Indo4B-Plus}. Following~\newcite{liu2020mbart}, we set the smoothing parameter $\alpha$ to 0.7.
After rebalancing, the percentage of data in the local languages increases to $\sim$3\%.
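As a concrete sketch of the rebalancing formula (a hypothetical helper, not part of the released codebase, and with illustrative rather than exact corpus proportions), the ratios can be computed directly; with $\alpha = 1$ every ratio is 1 (no resampling), while $\alpha < 1$ upsamples the rarer languages:

```python
import numpy as np

def sampling_ratios(p, alpha=0.7):
    # lambda_i = p_i^alpha / (p_i * sum_j p_j^alpha)
    # p: language proportions (summing to 1); alpha: smoothing parameter
    p = np.asarray(p, dtype=float)
    return p**alpha / (p * np.sum(p**alpha))

# Illustrative proportions only (not the exact Indo4B-Plus percentages):
p = np.array([0.97, 0.02, 0.01])   # e.g. Indonesian, Javanese, Sundanese
lam = sampling_ratios(p)           # rarer languages get ratios > 1
```

Multiplying each language's corpus size by its ratio yields the rebalanced corpus; the resampled share of language $i$ reduces to $p_i^\alpha / \sum_j p_j^\alpha$, which is where the increase of the local languages' share comes from.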
\begin{table*}[!t]
\centering
\resizebox{0.88\textwidth}{!}{
\begin{tabular}{lrccccccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{\#Params}} & \textbf{\#Enc} & \textbf{\#Dec} & \multirow{2}{*}{\textbf{\#Heads}} & \textbf{Emb.} & \textbf{Head} & \textbf{FFN} & \textbf{Language} \\
& & \textbf{Layers} & \textbf{Layers} &  
for word in entries[1:]: sy.append(word)
if line[0:11] == ' abundance:':
entries = line.split()
for word in entries[1:]: ab.append(word)
assert (len(sy) == len(ab)), 'different elements in arrays sy (elemental symbols) and ab (abundances)'
abu = np.ones(99)*1e-99
i = 0
for item in sy:
try:
index = symbol.index(item)
abu[index] = 10.**(float(ab[i])-12.)
except ValueError:
print("the symbol ",item," is not recognized as a valid element")
i = i + 1
print('abu=',abu)
while line[0:72] != " l tstd temperature pgas pe density mu":
line = f.readline()
line = f.readline()
entries = line.split()
t = [ float(entries[2].replace('D','E')) ]
p = [ float(entries[3].replace('D','E')) ]
ne = [ float(entries[4].replace('D','E')) / bolk / float(entries[2].replace('D','E')) ]
dm = [ float(entries[3].replace('D','E')) / 10.**logg ] #assuming hydrostatic equil. and negligible radiation and turb. pressure
for i in range(nd-1):
line = f.readline()
entries = line.split()
t.append( float(entries[2].replace('D','E')))
p.append( float(entries[3].replace('D','E')))
ne.append( float(entries[4].replace('D','E')) / bolk / float(entries[2].replace('D','E')))
dm.append ( float(entries[3].replace('D','E')) / 10.**logg )
vmicro = 0.0
while (line[0:6] != " greli"):
line = f.readline()
if line == '':
print('Cannot find a value for vmicro (vturb) in the model atmosphere file ',modelfile)
break
if line != '':
entries = line.split()
vmicro = float(entries[5])
atmos = np.zeros(nd, dtype={'names':('dm', 't', 'p','ne'),
'formats':('f', 'f', 'f','f')})
atmos['dm'] = dm
atmos['t'] = t
atmos['p'] = p
atmos['ne'] = ne
return (teff,logg,vmicro,abu,nd,atmos)
def interp_spl(xout, x, y):
"""Interpolates in 1D using cubic splines
Parameters
----------
x: numpy array or list
input abscissae
y: numpy array or list
input ordinates
xout: numpy array or list
array of abscissae to interpolate to
Returns
-------
yout: numpy array or list
array of interpolated values
"""
tck = interpolate.splrep(x, y, s=0)
yout = interpolate.splev(xout, tck, der=0)
return(yout)
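A quick sanity check of this spline helper (assuming SciPy is available, and using standalone names so the snippet is self-contained): an interpolating cubic spline through coarse samples of a smooth function should reproduce it to high accuracy.

```python
import numpy as np
from scipy import interpolate

# Reconstruct a smooth function from coarse samples with a cubic spline,
# mirroring what interp_spl does internally.
x = np.linspace(0.0, np.pi, 20)      # coarse input abscissae
y = np.sin(x)                        # coarse input ordinates
xout = np.linspace(0.1, 3.0, 50)     # finer grid to interpolate to

tck = interpolate.splrep(x, y, s=0)  # s=0: interpolating (not smoothing) spline
yout = interpolate.splev(xout, tck, der=0)

err = np.max(np.abs(yout - np.sin(xout)))  # small for a smooth function
```

The `s=0` choice forces the spline through every input point; the commented-out `np.interp` alternative in the functions below would give a cheaper, but less accurate, linear interpolation.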
def elements(husser=False):
"""Reads the solar elemental abundances
Parameters
----------
husser: bool, optional
when set the abundances adopted for the Phoenix models by Husser et al. (2013)
are adopted. Otherwise Asplund et al. (2005) are used, consistent with
the MARCS (Gustafsson et al. 2008) and Kurucz (Meszaros et al. 2012)
model atmospheres.
Returns
-------
symbol: numpy array of str
element symbols
mass: numpy array of floats
atomic masses (elements Z=1-99)
sol: numpy array of floats
solar abundances N/N(H)
"""
symbol = [
'H' ,'He','Li','Be','B' ,'C' ,'N' ,'O' ,'F' ,'Ne',
'Na','Mg','Al','Si','P' ,'S' ,'Cl','Ar','K' ,'Ca',
'Sc','Ti','V' ,'Cr','Mn','Fe','Co','Ni','Cu','Zn',
'Ga','Ge','As','Se','Br','Kr','Rb','Sr','Y' ,'Zr',
'Nb','Mo','Tc','Ru','Rh','Pd','Ag','Cd','In','Sn',
'Sb','Te','I' ,'Xe','Cs','Ba','La','Ce','Pr','Nd',
'Pm','Sm','Eu','Gd','Tb','Dy','Ho','Er','Tm','Yb',
'Lu','Hf','Ta','W' ,'Re','Os','Ir','Pt','Au','Hg',
'Tl','Pb','Bi','Po','At','Rn','Fr','Ra','Ac','Th',
'Pa','U' ,'Np','Pu','Am','Cm','Bk','Cf','Es' ]
mass = [ 1.00794, 4.00260, 6.941, 9.01218, 10.811, 12.0107, 14.00674, 15.9994,
18.99840, 20.1797, 22.98977, 24.3050, 26.98154, 28.0855, 30.97376,
32.066, 35.4527, 39.948, 39.0983, 40.078, 44.95591, 47.867, 50.9415,
51.9961, 54.93805, 55.845, 58.93320, 58.6934, 63.546, 65.39, 69.723,
72.61, 74.92160, 78.96, 79.904, 83.80, 85.4678, 87.62, 88.90585,
91.224, 92.90638, 95.94, 98., 101.07, 102.90550, 106.42, 107.8682,
112.411, 114.818, 118.710, 121.760, 127.60, 126.90447, 131.29,
132.90545, 137.327, 138.9055, 140.116, 140.90765, 144.24, 145, 150.36,
151.964, 157.25, 158.92534, 162.50, 164.93032, 167.26, 168.93421,
173.04, 174.967, 178.49, 180.9479, 183.84, 186.207, 190.23, 192.217,
195.078, 196.96655, 200.59, 204.3833, 207.2, 208.98038, 209., 210.,
222., 223., 226., 227., 232.0381, 231.03588, 238.0289, 237., 244.,
243., 247., 247., 251., 252. ]
if not husser:
#Asplund, Grevesse and Sauval (2005), basically the same as
#Grevesse N., Asplund M., Sauval A.J. 2007, Space Science Review 130, 205
sol = [ 0.911, 10.93, 1.05, 1.38, 2.70, 8.39, 7.78, 8.66, 4.56, 7.84,
6.17, 7.53, 6.37, 7.51, 5.36, 7.14, 5.50, 6.18, 5.08, 6.31,
3.05, 4.90, 4.00, 5.64, 5.39, 7.45, 4.92, 6.23, 4.21, 4.60,
2.88, 3.58, 2.29, 3.33, 2.56, 3.28, 2.60, 2.92, 2.21, 2.59,
1.42, 1.92, -9.99, 1.84, 1.12, 1.69, 0.94, 1.77, 1.60, 2.00,
1.00, 2.19, 1.51, 2.27, 1.07, 2.17, 1.13, 1.58, 0.71, 1.45,
-9.99, 1.01, 0.52, 1.12, 0.28, 1.14, 0.51, 0.93, 0.00, 1.08,
0.06, 0.88, -0.17, 1.11, 0.23, 1.45, 1.38, 1.64, 1.01, 1.13,
0.90, 2.00, 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.52, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99 ]
sol[0] = 1.
else:
#a combination of meteoritic/photospheric abundances from Asplund et al. 2009
#chosen for the Husser et al. (2013) Phoenix model atmospheres
sol = [ 12.00, 10.93, 3.26, 1.38, 2.79, 8.43, 7.83, 8.69, 4.56, 7.93,
6.24, 7.60, 6.45, 7.51, 5.41, 7.12, 5.50, 6.40, 5.08, 6.34,
3.15, 4.95, 3.93, 5.64, 5.43, 7.50, 4.99, 6.22, 4.19, 4.56,
3.04, 3.65, 2.30, 3.34, 2.54, 3.25, 2.36, 2.87, 2.21, 2.58,
1.46, 1.88, -9.99, 1.75, 1.06, 1.65, 1.20, 1.71, 0.76, 2.04,
1.01, 2.18, 1.55, 2.24, 1.08, 2.18, 1.10, 1.58, 0.72, 1.42,
-9.99, 0.96, 0.52, 1.07, 0.30, 1.10, 0.48, 0.92, 0.10, 0.92,
0.10, 0.85, -0.12, 0.65, 0.26, 1.40, 1.38, 1.62, 0.80, 1.17,
0.77, 2.04, 0.65, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, 0.06,
-9.99, -0.54, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99, -9.99 ]
sol[0] = 1.
for i in range(len(sol)-1): sol[i+1] = 10.**(sol[i+1]-12.0)
return (symbol,mass,sol)
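The conversion loop above moves from the spectroscopic log scale, log epsilon = log10(N_X/N_H) + 12 (so hydrogen is 12.00 by definition), to linear ratios N_X/N_H. A tiny standalone helper (hypothetical name, same arithmetic) makes the scale concrete, using the Asplund et al. (2005) iron value from the list above:

```python
# log-epsilon abundance scale: log10(N_X/N_H) + 12, so H is 12.00 by definition.
def logeps_to_ratio(logeps):
    return 10.0**(logeps - 12.0)

fe_logeps = 7.45                       # Fe entry in the Asplund et al. (2005) list
fe_ratio = logeps_to_ratio(fe_logeps)  # roughly 3e-5 iron atoms per hydrogen atom
```

Placeholder entries of -9.99 on the log scale become ~1e-22 after conversion, i.e. effectively zero abundance, which is why they are safe to carry through the arrays.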
def lgconv(xinput, yinput, fwhm, ppr=None):
"""convolution with a Gaussian in linear lambda scale
for a constant resolution
Parameters
----------
xinput: numpy float array
wavelengths
yinput: numpy array of floats
fluxes
fwhm: float
FWHM of the Gaussian (same units as for xinput)
ppr: float, optional
Points per resolution element to downsample the convolved spectrum
(default None, to keep the original sampling)
Returns
-------
x: numpy float array
wavelengths after convolution, will be a subset of xinput when that is linear,
otherwise a subset of the linearly resampled version
y: numpy array of floats
fluxes after convolution
"""
#resampling to a linear lambda wavelength scale if need be
xx = np.diff(xinput)
if max(xx) - min(xx) > 1.e-7: #input not linearly sampled
nel = len(xinput)
minx = np.min(xinput)
maxx = np.max(xinput)
x = np.linspace(minx,maxx,nel)
#y = np.interp( x, xinput, yinput)
y = interp_spl( x, xinput, yinput)
else: #input linearly sampled
x = xinput
y = yinput
step = x[1]  x[0]
sigma=fwhm/2.0/np.sqrt(-2.0*np.log(0.5))
npoints = 2*int(3*fwhm/2./step)+1
half = npoints * step /2.
xx = np.linspace(-half,half,npoints)
kernel = np.exp(-(xx-np.mean(xx))**2/2./sigma**2)
kernel = kernel/np.sum(kernel)
y = np.convolve(y,kernel,'valid')
#y = ss.fftconvolve(y,kernel,'valid')
print(npoints)
edge = int(npoints/2)
x = x[edge:-edge]
print(xinput.size,x.size,y.size)
if ppr != None:
fac = int(fwhm / step / ppr)
subset = np.arange(x.size / fac, dtype=int) * fac
x = x[subset]
y = y[subset]
return(x,y)
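The kernel construction can be checked in isolation. The following is a standalone sketch (hypothetical function name, same formulas) of a unit-area Gaussian kernel built from a FWHM and a sampling step:

```python
import numpy as np

def gaussian_kernel(fwhm, step):
    # FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian
    sigma = fwhm / 2.0 / np.sqrt(-2.0 * np.log(0.5))  # == fwhm / (2 sqrt(2 ln 2))
    npoints = 2 * int(3 * fwhm / 2.0 / step) + 1      # odd length, ~3 FWHM wide
    half = npoints * step / 2.0
    xx = np.linspace(-half, half, npoints)
    kernel = np.exp(-(xx - np.mean(xx))**2 / 2.0 / sigma**2)
    return kernel / np.sum(kernel)                    # normalize to unit area

k = gaussian_kernel(1.0, 0.1)
```

The kernel is symmetric, peaks at its centre, and sums to one, so convolving with it conserves flux; the `'valid'` mode of `np.convolve` then shrinks the output by `npoints - 1` samples, which is why the wavelength array is trimmed by `edge` points on each side.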
def vgconv(xinput,yinput,fwhm, ppr=None):
"""convolution with a Gaussian in log lambda scale
for a constant resolving power
Parameters
----------
xinput: numpy float array
wavelengths
yinput: numpy array of floats
fluxes
fwhm: float
FWHM of the Gaussian (km/s)
ppr: float, optional
Points per resolution element to downsample the convolved spectrum
(default None, to keep the original sampling)
Returns
-------
x: numpy float array
wavelengths after convolution, will be a subset of xinput when that is equidistant
in log lambda, otherwise a subset of the resampled version
y: numpy array of floats
fluxes after convolution
"""
#resampling to ln(lambda) if need be
xx = np.diff(np.log(xinput))
if max(xx) - min(xx) > 1.e-7: #input not equidist in log-lambda
nel = len(xinput)
minx = np.log(xinput[0])
maxx = np.log(xinput[-1])
x = np.linspace(minx,maxx,nel)
step = x[1]  x[0]
x = np.exp(x)
#y = np.interp( x, xinput, yinput)
y = interp_spl( x, xinput, yinput)
else:
x = xinput
y = yinput
step = np.log(xinput[1])-np.log(xinput[0])
fwhm = fwhm/clight # inverse of the resolving power
sigma=fwhm/2.0/np.sqrt(-2.0*np.log(0.5))
npoints = 2*int(3*fwhm/2./step)+1
half = npoints * step /2.
xx = np.linspace(-half,half,npoints)
kernel = np.exp(-(xx-np.mean(xx))**2/2./sigma**2)
kernel = kernel/np.sum(kernel)
y = np.convolve(y,kernel,'valid')
edge = int(npoints/2)
x = x[edge:-edge]
#print(xinput.size,x.size,y.size)
if ppr != None:
fac = int(fwhm / step / ppr)
print(fwhm,step,ppr,fac)
subset = np.arange(x.size / fac, dtype=int) * fac
x = x[subset]
y = y[subset]
return(x,y)
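In vgconv the FWHM is given in km/s, so dividing by the speed of light turns it into a width in ln(lambda), i.e. the inverse of the resolving power R = lambda/delta-lambda. A small helper pair (hypothetical names, same constant as the `clight` used above) makes the relation explicit:

```python
clight = 299792.458  # speed of light in km/s, as used by vgconv above

def resolving_power(fwhm_kms):
    # R = c / FWHM, with the FWHM expressed as a velocity
    return clight / fwhm_kms

def velocity_fwhm(R):
    # inverse relation: FWHM in km/s for resolving power R
    return clight / R
```

For example, an R = 100,000 spectrograph corresponds to a velocity FWHM of about 3 km/s, so convolving in log-lambda with that FWHM models a constant-resolving-power instrument across the whole spectrum.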
def rotconv(xinput,yinput,vsini, ppr=None):
"""convolution with a Rotation profile
Parameters
----------
xinput: numpy float array
wavelengths
yinput: numpy array of floats
fluxes
vsini: float
projected rotational velocity (km/s)
ppr: float, optional
Points per resolution element to downsample the convolved spectrum
(default None, to keep the original sampling)
Returns
-------
