\section{Comparison of ``optimal'' with fully sequential and Alice immediately measuring ($n=8$)}\label{app:8segmentsimmediate}
The fully sequential scheme, in which repeater segments are filled with entangled pairs one after another, for example from left to right, is the overall slowest scheme and leads to the smallest raw rates. A potential benefit, however, is that parallel qubit storage can be almost entirely avoided. More specifically, when the first segment on the left is filled and waits for the second segment to be filled too, the first segment waits for a random number $N_2$ of steps, whereas the second segment always waits for only one constant dephasing unit
(for each distribution attempt in the second segment). Thus, omitting the constant dephasing in each segment, the accumulated time-dependent random dephasing of the fully sequential scheme has contributions from only a single memory pair subject to memory dephasing at any elementary time step.
On average, this gives a total dephasing of $(n-1)/p$ which is the sum of the average waiting time in one segment for segments 2 through $n$, as discussed
in detail in the main text.
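The $(n-1)/p$ average can be checked with a small Monte Carlo simulation (an illustrative sketch, not part of the paper's analysis; $p=0.5$ and $n=8$ are example values):

```python
import random

def sequential_dephasing(n, p, trials=200_000, seed=1):
    """Mean accumulated dephasing (in attempt units) of the fully
    sequential scheme: segments 2..n each wait a geometric(p)
    number of distribution attempts."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _ in range(n - 1):
            attempts = 1
            while rng.random() >= p:  # repeat until a success occurs
                attempts += 1
            total += attempts
    return total / trials

est = sequential_dephasing(n=8, p=0.5)  # expected value: (n-1)/p = 14
```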
In a QKD application, Alice's qubit can be measured immediately
(and so can Bob's qubit at the very end, when the entangled pair of the rightmost segment is being distributed). This way, another factor-of-$1/2$ improvement of the effective dephasing is possible, since at any elementary time step only a single memory qubit is dephasing instead of a qubit pair. In Fig.~\ref{fig:Comparison_8_segments_imm_vs_non_imm}, for eight repeater segments, we compare this fully sequential scheme with immediate measurements by Alice and Bob against the ``optimal'' scheme (parallel distribution and swapping as soon as possible), in which Alice and Bob store their qubits during the whole long-distance distribution procedure and perform the BB84 measurements only at the very end. We see that a QKD protocol in which Alice and Bob measure their qubits immediately can be useful in order to reach slightly larger distances. Note, however, that in the ``optimal'' scheme Alice and Bob may also measure their qubits immediately, resulting in higher rates but also requiring a more complicated rate analysis.
\begin{figure*}
\caption{Comparison of eight-segment repeaters for a total distance \(L\) and different experimental parameters. The ``optimal'' scheme (red) performing BB84 measurements at the end is compared with the fully sequential scheme (orange without memory cut-off, green with cut-off) performing immediate measurements on Alice's / Bob's sides.}
\label{fig:Comparison_8_segments_imm_vs_non_imm}
\end{figure*}
\section{Mixed strategies for distribution and swapping}\label{app:mixedstr}
In this appendix we shall illustrate that our formalism based on the calculation
of PGFs for the two basic random variables is so versatile that we can also obtain the rates for all kinds of mixed strategies. This applies to both the initial entanglement distributions and the entanglement swappings. In fact, for the case of three repeater segments ($n=3$), we have already explicitly calculated the secret key rates for all possible schemes with swapping as soon as possible, but with variations in the initial distribution strategies, see
App.~\ref{app:Optimality 3 segments}. This enabled us to consider schemes that are overall slower (exhibiting smaller raw rates) but can have a smaller accumulated dephasing. While swapping as soon as possible is optimal with regard to a minimal dephasing time, it may sometimes also be useful to consider a different swapping strategy. The most commonly considered swapping strategy is doubling, which implies that it can sometimes happen that neighboring, ready segments will not be connected, as this would be inconsistent with a doubling of the covered repeater distances at each step. A conceptual argument for doubling could be that for a scalable (nested) repeater system one can incorporate entanglement distillation steps in a systematic way. A theoretical motivation to focus on doubling has been that the rates are easier to calculate -- a motivation that is rendered obsolete by the present work, at least for repeaters of size up to $n=8$. Nonetheless, we shall give a few examples of mixed strategies for $n=4$ and $n=8$ segments.
For $n=4$ segments, in addition to those schemes discussed in the main text, let us consider another possibility where we distribute entanglement over the first three segments in the optimal way and then extend it
over the last segment. Note that this scheme is a variation of the swapping strategy, while the initial distributions still occur in parallel. As a consequence, it can happen that either segment 4 waits for the first three segments to accomplish their distributions and connections or the first three segments have to wait for segment 4. This part of the dephasing corresponds to the last term in the next equation below. The scheme serves as an illustration of the rich choice of possibilities for the swapping
strategies even when only $n=4$. We have
\begin{equation}\label{eq:D4-3-1}
\begin{split}
D^{31}_4(N_1, N_2, N_3, N_4) &= D^\star_3(N_1, N_2, N_3) \\
&+ |\max(N_1, N_2, N_3) - N_4|.
\end{split}
\end{equation}
The PGF of this random variable reads as
\begin{equation}
\tilde{G}^{31}_4(t) = \frac{p^4}{1-q^4} \frac{P^{31}_4(q, t)}{Q^{31}_4(q, t)},
\end{equation}
where the numerator and denominator are given by
\begin{displaymath}
\begin{split}
P^{31}_4(q, &t) = 1 + (q^2 + 3 q^3) t + (q + 3 q^2 - q^4 - q^5 ) t^2 \\
&+ (-2 q^2 - 4 q^3 - 4 q^4 + q^5 + q^6) t^3 \\
&+ (-q^2 - 3 q^3 - q^4 -3 q^6 - 3 q^7 ) t^4 \\
&+ (-2 q^2 - q^3 + 2 q^4 - 2 q^6 + q^7 + 2 q^8 ) t^5 \\
&+ (3 q^3 + 3 q^4 + q^6 + 3 q^7 + q^8 ) t^6 \\
&+ (-q^4 - q^5 + 4 q^6 + 4 q^7 + 2 q^8 ) t^7 \\
&+ (q^5 + q^6 - 3 q^8 - q^9 ) t^8 - (3 q^7 + q^8 ) t^9 - q^{10} t^{10}, \\
Q^{31}_4(q, &t) = (1-qt)(1-q^2t)(1-q^3t)(1-qt^2)\\
&\times (1-q^2t^2)(1-qt^3).
\end{split}
\end{displaymath}
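As a numerical sanity check (our own sketch, not part of the original text), the PGF above can be evaluated directly from the printed polynomials; being a probability generating function, it must satisfy $\tilde{G}^{31}_4(1) = 1$ for any $p \in (0,1]$:

```python
def G31_4(t, p):
    """Evaluate the PGF of D^{31}_4 from the printed polynomials P and Q."""
    q = 1 - p
    P = (1
         + (q**2 + 3*q**3) * t
         + (q + 3*q**2 - q**4 - q**5) * t**2
         + (-2*q**2 - 4*q**3 - 4*q**4 + q**5 + q**6) * t**3
         + (-q**2 - 3*q**3 - q**4 - 3*q**6 - 3*q**7) * t**4
         + (-2*q**2 - q**3 + 2*q**4 - 2*q**6 + q**7 + 2*q**8) * t**5
         + (3*q**3 + 3*q**4 + q**6 + 3*q**7 + q**8) * t**6
         + (-q**4 - q**5 + 4*q**6 + 4*q**7 + 2*q**8) * t**7
         + (q**5 + q**6 - 3*q**8 - q**9) * t**8
         - (3*q**7 + q**8) * t**9
         - q**10 * t**10)
    Q = ((1 - q*t) * (1 - q**2*t) * (1 - q**3*t)
         * (1 - q*t**2) * (1 - q**2*t**2) * (1 - q*t**3))
    return p**4 / (1 - q**4) * P / Q

norm = G31_4(1.0, 0.5)  # a PGF evaluates to 1 at t = 1
```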
Taking the derivatives (see Eq.~\eqref{eq:PGF}),
we obtain the following relation:
\begin{equation}
\mathbf{E}[D^{\mathrm{dbl}}_4] = \mathbf{E}[D^{31}_4].
\end{equation}
This means that the two random variables have the same expectation values, even though their distributions are different. For
the secret key fraction we need the averages of the exponential of these variables, which essentially leads to the values of the
corresponding PGFs (see Eq.~\eqref{eq:PGF_2}). These do differ, as Fig.~\ref{fig:DD} illustrates. It shows the ratio
\begin{equation}\label{eq:r}
\frac{\mathbf{E}[e^{-\alpha D^{31}_4}]}{\mathbf{E}[e^{-\alpha D^{\mathrm{dbl}}_4}]} =
\frac{\tilde{G}^{31}_4(e^{-\alpha})}{\tilde{G}^{\mathrm{dbl}}_4(e^{-\alpha})},
\end{equation}
as a function of $\alpha$. The two random variables have the same average, but the average $\mathbf{E}[e^{-\alpha
D^{31}_4}]$ is larger than the other, so in the scheme corresponding to the random variable given by Eq.~\eqref{eq:D4-3-1}, the
distributed state has a higher fidelity than the final state in the doubling scheme.
\begin{figure}
\caption{The ratio given by Eq.~\eqref{eq:r} as a function of $\alpha$.}
\label{fig:DD}
\end{figure}
\begin{figure}
\caption{The ratio in Eq.~\eqref{eq:r2} as a function of $\alpha$ for $\mathrm{sch} = \mathrm{dbl}, 2222, 242, 44$.}
\label{fig:Ee}
\end{figure}
\begin{figure}
\caption{The ratio in Eq.~\eqref{eq:r3}.}
\label{fig:Ea}
\end{figure}
For the case $n=8$, among the large number of possibilities to swap the segments, we consider the following three (in addition to the doubling and optimal schemes discussed in the main text). The
first scheme is to swap the two halves of the repeater in the optimal way (for four segments) and then swap the two larger
segments. We loosely denote the dephasing variable of this scheme as $D^{44}_8$, whose definition reads as
\begin{equation}
\begin{split}
D^{44}_8&(N_1, \ldots, N_8) = D^\star_4(N_1, \ldots, N_4) \\
&+ D^\star_4(N_5, \ldots, N_8) \\
&+ |\max(N_1, \ldots, N_4) - \max(N_5, \ldots, N_8)|.
\end{split}
\end{equation}
Another possibility is to divide the repeater into four pairs of segments, swap each pair, and then swap the four larger segments optimally.
The expression for this dephasing variable $D^{2222}_8$ is a straightforward translation of this description:
\begin{equation}
\begin{split}
D^{2222}_8&(N_1, \ldots, N_8) = |N_1 - N_2| + \ldots + |N_7 - N_8| \\
&+ D^\star_4(\max(N_1, N_2), \ldots, \max(N_7, N_8)).
\end{split}
\end{equation}
Finally, we can divide the segments into three groups, consisting of two, four, and two segments. The middle group we
swap optimally (for four segments), and then we swap the three larger segments in the optimal way (for three segments). The
definition of the corresponding random variable $D^{242}_8$ reads as
\begin{equation}
\begin{split}
D^{242}_8&(N_1, \ldots, N_8) = |N_1 - N_2| + |N_7 - N_8| \\
&+ D^\star_4(N_3, \ldots, N_6) + D^\star_3(\max(N_1, N_2), \\
&\max(N_3, \ldots, N_6), \max(N_7, N_8)).
\end{split}
\end{equation}
The PGFs of all these variables have the same form,
\begin{equation}
\frac{p^8}{1 - q^8} \frac{P(q, t)}{Q(q, t)},
\end{equation}
with appropriate polynomials $P(q, t)$ and $Q(q, t)$. The numerator polynomials $P(q, t)$ are quite large and contain
around one thousand terms, so we do not present them here.
We can compare the performances of different schemes by plotting the ratios
\begin{equation}\label{eq:r2}
\frac{\mathbf{E}[e^{-\alpha D^{\mathrm{sch}}_8}]}{\mathbf{E}[e^{-\alpha D^{\mathrm{opt}}_8}]} =
\frac{\tilde{G}^{\mathrm{sch}}_8(e^{-\alpha})}{\tilde{G}^{\mathrm{opt}}_8(e^{-\alpha})},
\end{equation}
similar to Eq.~\eqref{eq:r}, for $\mathrm{sch} = \mathrm{dbl}, 2222, 242, 44$. We see that among the five
schemes the doubling scheme is the worst with regard to dephasing, and the scheme 44 is the closest to the optimal scheme, see Fig.~\ref{fig:Ee}.
This means that the commonly used parallel-distribution doubling scheme, though fast in terms of $K_8$, is rather inefficient in terms of the dephasing $D_8$, since it disallows swapping whenever neighboring segments are ready
on all ``nesting'' levels \cite{Shchukin2021}.
\section{Two-Segment ``Node-Receives-Photon'' Repeaters}
\label{app:nrp}
Figure~\ref{fig:NRP_Contour_2_segments}
shows the BB84 rates in a two-segment quantum repeater
based on the NRP concept with one middle station
receiving optical quantum signals sent from
two outer stations at Alice and Bob.
By circumventing the need for extra classical communication
and thus significantly reducing the effective memory dephasing,
the minimal state and gate fidelity values can even be kept constant
over large distance regimes.
For the experimental clock rate we have chosen $\tau_{\mathrm{clock}}=\unit[10]{MHz}$,
limited by the local interaction and processing times
of the light-matter interface at the middle station.
\begin{figure*}
\caption{Contour plots illustrating the minimal fidelity requirements to overcome the PLOB bound by a two-segment NRP repeater for different parameter sets. In all contour plots, \(\mu = \mu_0\) and \(\tau_{\mathrm{clock}}=\unit[10]{MHz}\).}
\label{fig:NRP_Contour_2_segments}
\end{figure*}
\section{Calculation for Cabrillo's scheme}\label{app:cabrillo}
First, we consider two copies of an entangled state between a quantum memory and a single-rail optical qubit ($\gamma\in\mathbb{R}$), with joint state
\begin{align}
\frac{1}{1+\gamma^2} &\left[ \ket{\uparrow,\uparrow,0,0} +\gamma \ket{\uparrow,\downarrow,0,1} \right. \nonumber \\
&\hspace{20pt} + \left. \gamma\ket{\downarrow,\uparrow,1,0}+\gamma^2\ket{\downarrow,\downarrow,1,1} \right].
\end{align}
After applying a lossy channel with transmission parameter $\eta=p_{\mathrm{link}}\exp(-\frac{L_0}{2L_{\mathrm{att}}})$ to both optical modes, we obtain the following state after introducing two additional environmental modes
\begin{widetext}
\begin{align}
\frac{1}{1+\gamma^2} &\Bigg[ \gamma^2 \ket{\downarrow,\downarrow} \otimes \left(\eta\ket{1,1,0,0} + \sqrt{\eta(1-\eta)} \left(\ket{1,0,0,1} + \ket{0,1,1,0}\right) + (1-\eta) \ket{0,0,1,1} \right) \nonumber\\
&\hspace{20pt} + \left. \gamma \ket{\uparrow,\downarrow} \otimes \left(\sqrt{\eta} \ket{0,1,0,0} + \sqrt{1-\eta} \ket{0,0,0,1} \right) \right. \nonumber\\
&\hspace{20pt} + \left. \gamma \ket{\downarrow,\uparrow} \otimes \left(\sqrt{\eta} \ket{1,0,0,0} + \sqrt{1-\eta} \ket{0,0,1,0} \right) \right.\\
&\hspace{20pt} + \ket{\uparrow,\uparrow,0,0,0,0} \Bigg]\nonumber
\end{align}
\end{widetext}
We apply a 50:50 beam splitter to the two (non-environmental) optical modes and obtain the state
\begin{widetext}
\begin{align}
\frac{1}{1+\gamma^2} \Bigg[&\gamma^2 \ket{\downarrow,\downarrow} \otimes \sqrt{\frac{\eta(1-\eta)}{2}} \left(\ket{1,0,0,1}+\ket{0,1,0,1}+\ket{1,0,1,0}-\ket{0,1,1,0}\right) \nonumber\\
&\hspace{20pt} + \gamma^2 \ket{\downarrow,\downarrow} \otimes \frac{\eta}{2}\left(\ket{2,0,0,0}-\ket{0,2,0,0}\right) \nonumber \\
&\hspace{20pt} +\gamma^2 \ket{\downarrow,\downarrow} \otimes (1-\eta)\ket{0,0,1,1} \nonumber \\
&\hspace{20pt} \left. + \gamma \ket{\uparrow,\downarrow} \otimes \left(\sqrt{\frac{\eta}{2}} \left(\ket{1,0,0,0}-\ket{0,1,0,0}\right) + \sqrt{1-\eta} \ket{0,0,0,1} \right)\right.\\
&\hspace{20pt} \left. + \gamma \ket{\downarrow,\uparrow} \otimes \left(\sqrt{\frac{\eta}{2}} \left(\ket{1,0,0,0}+\ket{0,1,0,0}\right) + \sqrt{1-\eta}\ket{0,0,1,0}\right) \right.\nonumber\\
&\hspace{20pt} +\ket{\uparrow,\uparrow,0,0,0,0} \Bigg]\nonumber\,.
\end{align}
\end{widetext}
We can obtain entangled memory states by post-selecting single-photon events at the detectors. If we detect a single photon at the first detector and no photon at the other, we obtain the following (unnormalized) two-memory reduced density operator (see \cite[App.~E]{tf_repeater})
\begin{align}
\frac{\gamma^2\eta}{(1+\gamma^2)^2}\left[\ket{\Psi^+}\bra{\Psi^+}+\gamma^2(1-\eta)\ket{\downarrow,\downarrow}\bra{\downarrow,\downarrow}\right].
\end{align}
When using simple on/off detectors instead of photon-number-resolving detectors (PNRDs), two-photon events will also lead to a detection event. The two-memory state after a two-photon event is given by
\begin{align}
\frac{\gamma^4\eta^2}{4(1+\gamma^2)^2}\ket{\downarrow,\downarrow}\bra{\downarrow,\downarrow}\,.
\end{align}
Thus, the probability of a successful entanglement generation is given by $p_{\mathrm{PNRD}}=\frac{2\gamma^2\eta}{(1+\gamma^2)^2}(1+\gamma^2(1-\eta))$ when using PNRDs, and by $p_{\mathrm{on/off}}=\frac{2\gamma^2\eta}{(1+\gamma^2)^2}(1+\gamma^2(1-\frac{3}{4}\eta))$ when using on/off detectors. The factor of 2 comes from the possibility of detecting the photon at the other detector instead, although in this case the memory state differs by a single-qubit $Z$-operation. After a suitable twirling, we can find a one-qubit Pauli channel which maps the state $\ket{\Psi^+}\bra{\Psi^+}$ to the actual memory state, i.e., we can claim that the loss channel acting on the optical modes induces a Pauli channel on the memories. We can parametrize this Pauli channel by the tuple of error probabilities $(p_I,p_X,p_Y,p_Z)$; for the case with PNRDs this tuple is given by
\begin{align}
\frac{1}{1+\gamma^2(1-\eta)}\left(1,\frac{\gamma^2}{2}(1-\eta),\frac{\gamma^2}{2}(1-\eta),0\right)
\end{align}
and for on/off detectors it is given by
\begin{align}
\frac{1}{1+\gamma^2(1-\frac{3}{4}\eta)}\left(1,\frac{\gamma^2}{2}(1-\frac{3}{4}\eta),\frac{\gamma^2}{2}(1-\frac{3}{4}\eta),0\right)\,.
\end{align}
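The quantities above can be illustrated numerically (a sketch under assumed values; $L_{\mathrm{att}}=\unit[22]{km}$ and the other numbers are example choices, not taken from the text). The tuple of Pauli error probabilities must sum to one:

```python
import math

def eta(L0, p_link=1.0, L_att=22.0):
    """Transmission eta = p_link * exp(-L0 / (2 * L_att));
    the attenuation length of 22 km is an assumed example value."""
    return p_link * math.exp(-L0 / (2.0 * L_att))

def pauli_tuple(gamma, eta_val, pnrd=True):
    """Induced Pauli error probabilities (p_I, p_X, p_Y, p_Z);
    on/off detectors replace (1 - eta) by (1 - 3*eta/4)."""
    loss = (1.0 - eta_val) if pnrd else (1.0 - 0.75 * eta_val)
    norm = 1.0 + gamma**2 * loss
    p_flip = 0.5 * gamma**2 * loss / norm
    return (1.0 / norm, p_flip, p_flip, 0.0)

probs = pauli_tuple(gamma=0.3, eta_val=eta(10.0))  # sums to 1
```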
When we consider an $n$-segment repeater, we have to consider a concatenation of $n$ such Pauli channels and we finally obtain the error rates
\begin{align}
e_x&=\frac{1}{2}\left(1-\mu^{n-1}\mu_0^{n}\frac{(2F_0-1)^n\mathbf{E}[e^{-\alpha D_n}]}{(1+\gamma^2(1-\eta))^n}\right),\\
e_z&=\frac{1}{2}\left(1-\mu^{n-1}\mu_0^{n}\left(\frac{1-\gamma^2(1-\eta)}{1+\gamma^2(1-\eta)}\right)^n\right)
\end{align}
in the case of PNRDs. When we consider on/off detectors, we can simply replace $\eta$ by $\frac{3}{4}\eta$ in the error rates.
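For illustration only, these error rates can be evaluated under assumed parameter values (all numbers below, including the value used for $\mathbf{E}[e^{-\alpha D_n}]$, are example choices, not taken from the text):

```python
def pnrd_error_rates(n, mu, mu0, F0, gamma, eta_val, mean_exp_D):
    """BB84 error rates (e_x, e_z) of an n-segment repeater with
    PNRDs; mean_exp_D stands for E[exp(-alpha * D_n)]."""
    loss = gamma**2 * (1.0 - eta_val)
    e_x = 0.5 * (1.0 - mu**(n - 1) * mu0**n * (2.0 * F0 - 1.0)**n
                 * mean_exp_D / (1.0 + loss)**n)
    e_z = 0.5 * (1.0 - mu**(n - 1) * mu0**n
                 * ((1.0 - loss) / (1.0 + loss))**n)
    return e_x, e_z

# assumed example parameters, not taken from the text
ex, ez = pnrd_error_rates(n=8, mu=0.99, mu0=0.99, F0=0.99,
                          gamma=0.3, eta_val=0.5, mean_exp_D=0.9)
```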
\end{document}
\begin{document}
\frontmatter
\title{Interpolation synthesis for quadratic polynomial
inequalities and combination with \textit{EUF}}
\titlerunning{Interpolation synthesis for quadratic polynomial inequalities}
\author{\small Ting Gan\inst{1} \and Liyun Dai\inst{1} \and Bican Xia\inst{1} \and Naijun Zhan\inst{2} \and Deepak Kapur\inst{3}
\and Mingshuai Chen \inst{2}
}
\authorrunning{Ting Gan et al.}
\tocauthor{}
\institute{LMAM \& School of Mathematical Sciences, Peking University\\
\email{\{gant,dailiyun,xbc\}@pku.edu.cn},
\and
State Key Lab. of Computer Science, Institute of Software, CAS \\
\email{znj@ios.ac.cn}
\and
Department of Computer Science, University of New Mexico\\
\email{kapur@cs.unm.edu}
}
\maketitle
\begin{abstract}
An algorithm for generating interpolants for formulas which are
conjunctions of quadratic polynomial
inequalities (both strict and nonstrict) is proposed. The
algorithm is based on a key observation that
quadratic polynomial inequalities can be linearized if they are
concave. A generalization of Motzkin's
transposition theorem is proved, which is used to generate
an interpolant between two mutually contradictory conjunctions
of polynomial inequalities, using semi-definite programming in time complexity
$\mathcal{O}(n^3+nm))$ with a given threshold, where $n$ is the number of variables
and $m$ is the number of inequalities.
Using the framework proposed by \cite{SSLMCS2008}
for combining
interpolants for a combination of quantifier-free theories which
have their own interpolation algorithms, a combination algorithm is
given for the combined theory of concave quadratic polynomial
inequalities and the equality theory over uninterpreted functions
symbols (\textit{EUF}). The proposed approach is applicable to all existing abstract domains like
\emph{octagon}, \emph{polyhedra}, \emph{ellipsoid} and so on, therefore it can be used to improve
the scalability of existing verification techniques for programs and hybrid systems.
In addition, we also discuss how to
extend our approach to formulas beyond concave quadratic polynomials
using Gr\"{o}bner basis.
\end{abstract}
\keywords{Program verification, Interpolant, Concave quadratic polynomials, Motzkin's theorem, Semi-definite programming}
\section{Introduction}
Interpolants have been popularized by McMillan \cite{mcmillan05} for automatically generating
invariants of programs. Since then, developing efficient algorithms for
generating interpolants for various theories has become an active area
of research; in
particular, methods have been developed for generating
interpolants for Presburger arithmetic (both for integers as well
as for rationals/reals), theory of equality over uninterpreted
symbols as well as their combination. Most of these methods
assume the availability of a refutation proof of $\alpha \land
\beta$ to generate a ``reverse'' interpolant of $(\alpha, \beta)$;
calculi have been proposed to label an inference node in a
refutational proof depending upon whether symbols of formulas on
which the inference is applied are purely from $\alpha$ or
$\beta$. For propositional calculus, there already existed
methods for generating interpolants from resolution proofs
\cite{krajicek97,pudlak97} prior
to McMillan's work, which generate different interpolants from
those generated by McMillan's method. This led D'Silva et al.\ \cite{SPWK10} to study
the strengths of various interpolants.
In Kapur, Majumdar and Zarba \cite{KMZ06}, an intimate connection between
interpolants and quantifier elimination was established. Using
this connection, the existence of quantifier-free interpolants as well as
interpolants with quantifiers was shown for a variety of
theories over container data structures. A CEGAR-based
approach was generalized for verification of programs over
container data structures using interpolants. Using this connection between
interpolant
generation and quantifier elimination, Kapur \cite{Kapur13}
has shown that interpolants form a lattice ordered using
implication, with the interpolant generated from $\alpha$ being
the bottom of such a lattice and the interpolant generated from
$\beta$ being the top of the lattice.
Nonlinear polynomial inequalities have been found useful to express
invariants for software involving sophisticated number theoretic
functions as well as hybrid systems; an interested reader may see
\cite{ZZKL12,ZZK13} where different controllers involving nonlinear polynomial
inequalities are discussed for some industrial applications.
We propose an algorithm to generate interpolants for
quadratic polynomial inequalities (including strict inequalities).
It is based on the insight that, for analyzing the solution space
of concave quadratic polynomial (strict) inequalities, it
suffices to linearize them. We prove
a generalization of Motzkin's transposition theorem
to be applicable for quadratic polynomial inequalities (including
strict as well as nonstrict). Based on this result, we
prove the existence of interpolants for two mutually
contradictory conjunctions $\alpha, \beta$ of concave quadratic polynomial
inequalities and give an algorithm for computing an interpolant
using semi-definite programming.
The algorithm is recursive with the basis step of the algorithm
relying on an additional condition on concave quadratic
polynomials appearing in nonstrict inequalities
that any nonpositive constant combination of these polynomials is
never a nonzero sum of square polynomial (called $\mathbf{NSOSC}$).
In this case, an interpolant output by the algorithm is either a
strict inequality or a nonstrict inequality much like in the
linear case.
In case this condition is not satisfied by the nonstrict
inequalities, i.e., there is a nonpositive constant combination of
the polynomials appearing as nonstrict inequalities that is the
negative of a sum of squares, new mutually contradictory
conjunctions of concave quadratic polynomials in fewer variables
are derived from the input augmented with the deduced equality
relation, and the algorithm is recursively invoked on the smaller
problem. The output of this algorithm is, in general, an
interpolant that is a disjunction of conjunctions of strict or
nonstrict polynomial inequalities.
The $\mathbf{NSOSC}$ condition can be checked in polynomial time using
semi-definite programming.
We also show how separating terms $t^-, t^+$ can be constructed using common
symbols in $\alpha, \beta$ such that $\alpha \Rightarrow t^- \le x \le t^+$ and
$\beta \Rightarrow t^+ \le y \le t^-$, whenever $(\alpha \land
\beta) \Rightarrow x = y$. Similar to the construction for
interpolants, this construction has the same recursive structure with
concave quadratic polynomials satisfying NSOSC as the basis
step.
This result enables the use of the framework proposed in
\cite{RS10}, based on hierarchical theories, and of the
combination method by Yorsh and Musuvathi for combining equality
interpolating quantifier-free theories, to generate interpolants
for the combined theory of quadratic polynomial inequalities and
the theory of uninterpreted symbols.
Obviously, our results are significant in program verification as
all well-known abstract domains, e.g. \emph{octagon}, \emph{polyhedra}, \emph{ellipsoid} and so on,
which are widely used in the verification of programs and hybrid systems, are
\emph{quadratic} and \emph{concave}. In addition, we also discuss the possibility to
extend our results to general polynomial formulas by allowing polynomial equalities whose
polynomials may be neither \emph{concave} nor \emph{quadratic} using Gr\"{o}bner basis.
We develop a combination algorithm for generating
interpolants for the combination of concave quadratic polynomial
inequalities and uninterpreted function symbols.
In \cite{DXZ13}, Dai et al. gave an
algorithm for generating interpolants for conjunctions of
mutually contradictory nonlinear polynomial inequalities
based on the existence of a witness guaranteed by Stengle's
\textbf{Positivstellensatz} \cite{Stengle} that can be computed using
semi-definite programming.
Their algorithm is incomplete in general, but if every variable ranges
over a bounded interval (the Archimedean condition), then
their algorithm is complete. A major limitation of their work is
that formulas $\alpha, \beta$ cannot have uncommon
variables\footnote{See however an expanded version of their paper
under preparation where they propose heuristics using program
analysis for eliminating uncommon variables.}.
Furthermore, they do not give any
combination algorithm for generating interpolants in the presence
of uninterpreted function symbols appearing in $\alpha, \beta$.
The paper is organized as follows. After discussing some
preliminaries in the next section,
Section 3 defines concave quadratic polynomials, their matrix
representation and their linearization. Section 4
presents the main contribution of the paper.
A generalization of Motzkin's transposition theorem for quadratic
polynomial inequalities is presented. Using this result, we
prove the existence of interpolants for two mutually
contradictory conjunctions $\alpha, \beta$ of concave quadratic polynomial
inequalities and give an algorithm (Algorithm 2) for computing an interpolant
using semi-definite programming.
Section 5 extends this algorithm to the combined theory of
concave quadratic inequalities and \textit{EUF} using the framework used
in \cite{SSLMCS2008,RS10}.
Implementation and experimental results using the proposed
algorithms are briefly reviewed in Section 6, and
we conclude and discuss future work in Section 7.
\section{Preliminaries}
Let $\mathbb{N}$, $\mathbb{Q}$ and $\mathbb{R}$
be the set of natural, rational and real numbers, respectively.
Let $\mathbb{R}[x]$ be the polynomial ring over $\mathbb{R}$ with
variables $x=(x_1,\cdots,x_n)$. An atomic polynomial formula
$\varphi$ is of the form $ p(x) \diamond 0$, where $p(x) \in
\mathbb{R}[x]$, and
$\diamond$ can be any of $=, >, \ge, \neq$; without any
loss of generality, we can assume $\diamond$ to be any of $>, \ge$.
An arbitrary polynomial formula is
constructed from atomic ones with Boolean connectives and quantifications over real numbers.
Let $\mathbf{PT}(\mathbb{R})$ be the first-order theory of polynomials with
real coefficients. In this paper, we focus on the
quantifier-free fragment of $\mathbf{PT}(\mathbb{R})$.
Later we discuss the quantifier-free theory of equality of terms over
uninterpreted function symbols and its combination with
the quantifier-free fragment of $\mathbf{PT}(\mathbb{R})$. Let $\Sigma$ be a set
of (new) function symbols.
Let $\PT ( \RR )^{\Sigma}$ be the extension of
the quantifier-free theory with uninterpreted function symbols in $\Sigma$.
For convenience, we use $\bot$ to stand for \emph{false} and
$\top$ for \emph{true} in what follows.
\begin{definition}
A model $\mathcal{M}=(M,{f_{\mathcal{M}}})$ of $\PT ( \RR )^{\Sigma}$ consists of a model $M$ of
$\PT ( \RR )$ and a function $f_{\mathcal{M}}: \mathbb{R}^n \rightarrow \mathbb{R}$ for each $f\in \Sigma$ with arity $n$.
\end{definition}
\begin{definition}
Let $\phi$ and $\psi$ be formulas of a considered theory $\mathcal{T}$, then
\begin{itemize}
\item $\phi$ is \emph{valid} w.r.t. $\mathcal{T}$, written as $\models_{\mathcal{T}} \phi$, iff $\phi$ is
true in all models of $\mathcal{T}$;
\item $\phi$ \emph{entails} $\psi$ w.r.t. $\mathcal{T}$, written as $\phi \models_{\mathcal{T}} \psi$,
iff for any model of $\mathcal{T}$, if $\phi$ is true in the model, so is $\psi$;
\item $\phi$ is \emph{satisfiable} w.r.t. $\mathcal{T}$, iff there exists a model of $\mathcal{T}$
in which $\phi$ is true; otherwise it is \emph{unsatisfiable}.
\end{itemize}
\end{definition}
Note that $\phi$ is unsatisfiable iff $\phi \models_{\mathcal{T}} \bot$.
Craig showed that given two formulas $\phi$ and $\psi$ in a
first-order theory $\mathcal{T}$ such that
$\phi \models \psi$, there always exists an \emph{interpolant} $I$ over
the common symbols of $\phi$ and $\psi$ such that $\phi \models
I, I \models \psi$. In the verification literature, this
terminology has been abused following \cite{mcmillan05}, where a
\emph{reverse} interpolant $I$ over the common symbols of $\phi$ and
$\psi$ is defined for $\phi \wedge\psi \models
\bot$ as: $\phi \models I$ and $I \wedge \psi \models
\bot$.
\begin{definition}
Let $\phi$ and $\psi$ be two formulas in a theory $\mathcal{T}$ such that
$\phi \wedge \psi \models_{\mathcal{T}} \bot$. A formula $I$ is said to be
a \emph{(reverse) interpolant} of $\phi$ and
$\psi$ if the following conditions hold:
\begin{enumerate}
\item[i] $\phi \models_{\mathcal{T}} I$;
\item[ii] $I \wedge \psi \models_{\mathcal{T}} \bot$; and
\item[iii] $I$ only contains common symbols and free variables shared by $\phi$ and
$\psi$.
\end{enumerate}
\end{definition}
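As a simple illustration (our own example, not from the original text): let $\phi$ be $(x \ge 1) \wedge (y = x)$ and $\psi$ be $(y \le 0)$, so that $\phi \wedge \psi \models_{\mathcal{T}} \bot$. Then $I = (y \ge 1)$ is a reverse interpolant of $\phi$ and $\psi$: indeed, $\phi \models_{\mathcal{T}} I$, $I \wedge \psi \models_{\mathcal{T}} \bot$, and $I$ contains only the variable $y$ shared by $\phi$ and $\psi$.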
If $\psi$ is closed, then $\phi \models_{\mathcal{T}} \psi$ iff
$\phi \wedge \neg \psi \models_{\mathcal{T}} \bot$. Thus, $I$ is an interpolant of
$\phi$ and $\psi$ iff $I$ is a reverse interpolant of $\phi$ and
$\neg \psi$.
In this paper, we deal only with reverse interpolants and, from now
on, simply call a reverse interpolant an interpolant.
\subsection{Motzkin's transposition theorem}
Motzkin's transposition theorem \cite{schrijver98} is one of the
fundamental results about linear inequalities; it also served as
a basis of the interpolant generation algorithm for the
quantifier-free
theory of linear inequalities in \cite{RS10}.
The theorem has several variants; below we give one of them,
together with a corollary adapted to our needs.
\begin{theorem}[Motzkin's transposition theorem \cite{schrijver98}] \label{motzkin-theorem} Let $A$ and $B$ be matrices and let $\va$ and $\vb$ be
column vectors. Then there exists a vector $x$ with $Ax \ge \va$ and $Bx > \vb$, iff
\begin{align*}
&{\rm for ~all~ row~ vectors~} \yy,\zz \ge 0: \\
&~(i) {\rm ~if~} \yy A + \zz B =0 {\rm ~then~} \yy \va + \zz \vb \le 0;\\
&(ii) {\rm ~if~} \yy A + \zz B =0 {\rm~and~} \zz\neq 0 {\rm ~then~} \yy \va + \zz \vb < 0.
\end{align*}
\end{theorem}
\begin{corollary} \label{cor:linear}
Let $A \in \mathbb{R}^{r \times n}$ and $B \in \mathbb{R}^{s \times n}$ be matrices and $\va \in \mathbb{R}^r$ and
$\vb \in \mathbb{R}^s$ be column vectors. Denote by $A_i, i=1,\ldots,r$ the $i$th row of $A$ and by
$B_j, j=1,\ldots,s$ the $j$th row of $B$. Then there does not exist a vector $x$ with
$Ax \ge \va$ and $Bx > \vb$, iff there exist real numbers $\lambda_1,\ldots,\lambda_r \ge 0$
and $\eta_0,\eta_1,\ldots,\eta_s \ge0$ such that
\begin{align}
&\sum_{i=1}^{r} \lambda_i (A_i x - \alpha_i) + \sum_{j=1}^{s} \eta_j (B_j x -\beta_j) + \eta_0 \equiv 0, \label{eq:corMoz1}\\
&\sum_{j=0}^{s} \eta_j > 0.\label{eq:corMoz2}
\end{align}
\end{corollary}
\begin{proof}
The ``if'' part is obvious. Below we prove the ``only if'' part.
By Theorem \ref{motzkin-theorem}, if
$Ax \ge \va$ and $Bx > \vb$ have no common solution, then
there exist two row vectors $\yy \in \mathbb{R}^r$ and $\zz \in \mathbb{R}^s$ with $\yy \ge 0$ and $\zz\ge 0$
such that
\[ (\yy A+\zz B=0 \wedge \yy \va+ \zz \vb > 0) \vee (\yy A+\zz B=0 \wedge \zz \neq 0 \wedge \yy \va+ \zz \vb \ge 0).\]
Let $\lambda_i=y_i, i=1,\ldots, r$, $\eta_j = z_j, j=1,\ldots, s$ and $\eta_0 = \yy \va+ \zz \vb$.
Then it is easy to check that Eqs. (\ref{eq:corMoz1}) and (\ref{eq:corMoz2}) hold. \qed
\end{proof} | 2,265 | 68,445 | en |
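As an illustration of Corollary \ref{cor:linear}, the certificate equations (\ref{eq:corMoz1}) and (\ref{eq:corMoz2}) can be checked numerically on a small infeasible system. The following Python sketch is not part of the development; the instance and the multipliers are chosen by hand.

```python
import numpy as np

# Hand-picked instance (ours, for illustration):
# A x >= a encodes x >= 1, B x > b encodes -x > 0; the system is infeasible.
A, a = np.array([[1.0]]), np.array([1.0])
B, b = np.array([[-1.0]]), np.array([0.0])

# Certificate multipliers: lambda_1, eta_1, eta_0 >= 0.
lam = np.array([1.0])
eta = np.array([1.0])
eta0 = 1.0

# Eq. (corMoz1) as a polynomial identity in x:
# sum_i lam_i (A_i x - alpha_i) + sum_j eta_j (B_j x - beta_j) + eta0 == 0,
# i.e. both the coefficient of x and the constant term vanish.
x_coef = lam @ A + eta @ B
const = eta0 - lam @ a - eta @ b

assert np.allclose(x_coef, 0.0)
assert np.isclose(const, 0.0)
# Eq. (corMoz2): eta_0 + eta_1 + ... + eta_s > 0.
assert eta0 + eta.sum() > 0
```

Here the identity reads $\lambda_1 (x-1) + \eta_1(-x) + \eta_0 \equiv 0$ with $\eta_0 = 1$.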
train | 0.4991.2 | \section{Concave quadratic polynomials and their linearization}
\oomit{As we know, the existing algorithms for interpolant generation fell mainly into two classes. One is
proof-based, they first require explicit construction of proofs, then an interpolant can be computed,
\cite{krajicek97,mcmillan05,pudlak97,KB11}. Another is constraint solving based, they first construct an
constrained system, then solve it, from which an interpolant can be computed, \cite{RS10,DXZ13}. The works are all deal
with propositional logic or linear inequalities over reals except \cite{DXZ13,KB11}, which can deal with
nonlinear case. Unfortunately, in \cite{DXZ13} the common variables, i.e. $(iii)$ in Definition \ref{crain:int}, can not be handled well; and \cite{KB11}, which is a variant of SMT solver based on interval arithmetic, is too much
rely on the interval arithmetic. Consider
the following example,
\begin{example} \label{exam:pre}
Let $f_1 = x_1, f_2 = x_2,f_3= -x_1^2-x_2^2 -2x_2-\zz^2, g_1= -x_1^2+2 x_1 - x_2^2 + 2 x_2 - \yy^2$. Two formulas $\phi:=(f_1 \ge 0) \wedge (f_2 \ge0) \wedge (g_1 >0)$,
$\psi := (f_3 \ge 0)$. $\phi \wedge \psi \models \bot$.
\end{example}
We want to generate an interpolant for
$\phi \wedge \psi \models \bot$. The existing algorithms can not
be used directly to obtain an interpolant, since this example is
nonlinear and not all the variables are common variables in
$\phi$ and $\psi$.
An algorithm may be exploited based on CAD, but the efficiency of
CAD is fatal weakness.
In this paper, we provide a complete and efficient algorithm to generate interpolant for a special nonlinear case
(CQ case, i.e. $\phi$ and $\psi$ are defined by the conjunction of a set of concave quadratic polynomials "$>0$" or "$\ge 0$"), which
contains Example \ref{exam:pre}. }
\begin{definition} [Concave Quadratic] \label{quad:concave}
A polynomial $f \in \mathbb{R}[x]$ is called {\em concave quadratic (CQ)}, if the following two conditions hold:
\begin{itemize}
\item[(i)] $f$ has total degree at most $2$, i.e., it has the form
$f = x^T A x + 2 \va^T x + a$, where $A$ is a real symmetric matrix, $\va$ is a column vector and $a \in \mathbb{R}$ is a constant;
\item[(ii)] the matrix $A$ is negative semi-definite, written as $A \preceq
0$.\footnote{$A$ being negative semi-definite has several equivalent
characterizations: for every vector $x$, $x ^T A x \le 0$;
every principal minor of $A$ of odd order is $\le 0$ and every
principal minor of even order is $\ge 0$; $A$ is a Hermitian
matrix whose eigenvalues are all nonpositive.}
\end{itemize}
\end{definition}
\begin{example}
Let $g_1= -x_1^2+2 x_1 - x_2^2 + 2 x_2 - y^2$, then it can be expressed as
\begin{align*}
g_1={\left( \begin{matrix}
&x_1\\
&x_2\\
&y
\end{matrix} \right)}^T
{\left( \begin{matrix}
&-1&0&0\\
&0&-1&0\\
&0&0&-1
\end{matrix} \right)}
{\left( \begin{matrix}
&x_1\\
&x_2\\
&y
\end{matrix} \right)} +2 {\left( \begin{matrix}
&1\\
&1\\
&0
\end{matrix} \right)}^T
{\left( \begin{matrix}
&x_1\\
&x_2\\
&y
\end{matrix} \right)}.
\end{align*}
The degree of $g_1$ is 2, and the corresponding $A={\left( \begin{matrix}
&-1&0&0\\
&0&-1&0\\
&0&0&-1
\end{matrix} \right)} \preceq 0$. Thus, $g_1$ is CQ.
\end{example}
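The membership test in Definition \ref{quad:concave} amounts to an eigenvalue check on the symmetric matrix $A$. The following Python sketch (ours, using NumPy; not part of the development) checks that the $A$ of the example is negative semi-definite and that the matrix form reproduces $g_1$ at a sample point.

```python
import numpy as np

# Data of the example: g1 = -x1^2 + 2 x1 - x2^2 + 2 x2 - y^2
# in the form x^T A x + 2 a^T x + a0.
A = np.diag([-1.0, -1.0, -1.0])
avec = np.array([1.0, 1.0, 0.0])
a0 = 0.0

# Condition (ii) of the definition: A negative semi-definite,
# i.e. all eigenvalues of the symmetric matrix A are <= 0.
assert (np.linalg.eigvalsh(A) <= 1e-12).all()

# Cross-check the matrix form against direct evaluation at a sample point.
x = np.array([0.5, -1.0, 2.0])
g1_direct = -x[0]**2 + 2*x[0] - x[1]**2 + 2*x[1] - x[2]**2
g1_matrix = x @ A @ x + 2 * avec @ x + a0
assert np.isclose(g1_direct, g1_matrix)
```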
It is easy to see that if $f \in \mathbb{R}[x]$ is linear, then $f$ is
CQ, because its total degree is at most $1$ and the
corresponding $A$ is the zero matrix, which is trivially negative
semi-definite.
A quadratic polynomial can also be represented as an inner
product of matrices (cf. \cite{laurent}), i.e.,
$ f(x)=\left<P,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right)\right >.$ | 1,354 | 68,445 | en |
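As a quick sanity check of this inner-product representation, the following Python sketch (our assembly of $P$ from $(A,\va,a)$, not part of the development) compares $\left<P,\cdot\right>$ with direct evaluation.

```python
import numpy as np

# Assemble P = [[a0, a^T], [a, A]] for the quadratic
# f = x^T A x + 2 a^T x + a0 (same data as g1 above).
A = np.diag([-1.0, -1.0, -1.0])
avec = np.array([1.0, 1.0, 0.0])
a0 = 0.0
P = np.block([[np.array([[a0]]), avec[None, :]],
              [avec[:, None], A]])

# <P, M> with M = [[1, x^T], [x, x x^T]] is the trace inner product sum(P * M).
x = np.array([0.3, -0.7, 1.1])
M = np.block([[np.array([[1.0]]), x[None, :]],
              [x[:, None], np.outer(x, x)]])

f_inner = (P * M).sum()
f_direct = x @ A @ x + 2 * avec @ x + a0
assert np.isclose(f_inner, f_direct)
```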
train | 0.4991.3 | \subsection{Linearization} \label{linearization}
Consider quadratic polynomials
$f_i$ and $g_j$ ($i=1,\ldots,r$,
$j=1,\ldots,s$),
\begin{align*}
f_i=x^T A_i x+2\va_i^Tx+a_i,\\
g_j=x^T B_j x+2\vb_j^Tx+b_j,
\end{align*}
where $A_i$, $B_j$ are symmetric $n\times n$ matrices,
$\va_i,\vb_j\in \mathbb{R}^n$, and $a_i,b_j\in \mathbb{R}$;
let
$P_i:=\left( \begin{matrix}
a_i & \va_i^T \\
\va_i & A_i
\end{matrix}
\right),~
Q_j:=\left( \begin{matrix}
b_j & \vb_j^T \\
\vb_j & B_j
\end{matrix}
\right)$
be
$(n+1)\times(n+1)$ matrices, then
\begin{align*}
f_i(x)=\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right)\right >,~~
g_j(x)=\left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right)\right >.
\end{align*}
\begin{comment}
The moment matrix $M_1(\yy)$ is of the form
$\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right)$
for some $x\in \mathbb{R}^n$ and some symmetric $n\times n $ matrix
$\XX$, and the localizing constraint $M_0(f_i\yy)$ and
$M_0(g_j\yy)$ read $\left <P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right)\right >$
and
$\left <Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right)\right >$.
Therefore, the moment relaxations of order $1$ can be reformulated as
\end{comment}
For CQ polynomials $f_i$ and $g_j$, i.e., with each $A_i \preceq
0$ and $B_j \preceq 0$, define
\begin{equation}
K=\{x \in \mathbb{R}^n \mid f_1(x) \ge0,\ldots,f_r(x)\ge0, g_1(x)>0,\ldots, g_s(x)>0 \}.
\label{eq:opt}
\end{equation}
Given a quadratic polynomial
$ f(x)=\left<P,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right)\right >$,
its \emph{linearization} is defined as
$f(x)=\left<P,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right)\right >$,
where
$\left( \begin{matrix}
1 & x^T\\
x & \XX
\end{matrix}
\right)\succeq 0$.
\
Let
\begin{align*}
\overline{\XX}=(&\XX_{(1,1)},\XX_{(2,1)},\XX_{(2,2)},\ldots,
\XX_{(k,1)}, \ldots,\XX_{(k,k)}, \ldots, \XX_{(n,1)}, \ldots,\XX_{(n,n)})
\end{align*}
be the vector variable
with $\frac{n(n+1)}{2}$ dimensions corresponding to the matrix $\XX$.
Since $\XX$ is a symmetric
matrix,
$\left<P,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >$
is a linear expression in $x,\overline{\XX}$.
Now, let
\begin{align}
&K_1 = \{x \mid \left( \begin{matrix}
1 & x^T\\
x & \XX
\end{matrix}
\right)\succeq 0 ,
\
\wedge_{i=1}^r \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \ge 0 , \nonumber \\
& \quad \quad \quad \quad
\wedge_{j=1}^s \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > > 0, \mbox{ for some } \XX \},\label{eq:mom1}
\end{align}
which is the set of all $x\in \mathbb{R}^n$ for which the linearizations of the above $f_i$s and $g_j$s are satisfied for some $\XX$.
\oomit{Thus,
\begin{aligned}
\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \ge 0 ,
\
&\left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > > 0,
\end{aligned}
are respectively linearizations of quadratic nonstrict inequalities $f_i
\ge 0$'s and strict inequalities $g_j > 0$'s in which all
quadratic terms are abstracted by new variables.
A reader should note that the condition
\begin{aligned}
\left( \begin{matrix}
1 & x^T\\
x & \XX
\end{matrix}
\right)\succeq 0.
\end{aligned}
is critical.
}
\begin{comment}
Define
\begin{equation*}
K_2:=\left\{ x\in \mathbb{R}^n\mid \sum_{i=1}^r t_if_i(x)\ge 0 \mbox{ for all } t_i\ge 0 \mbox{ for which } \sum_{i=1}^r t_iA_i\preceq 0 \right\}.
\end{equation*}
Note that the definition of $K_2$ is independent on $g_j$.
\end{comment}
In \cite{fujie,laurent}, when $K$ and $K_1$ are defined only with $f_i$ without $g_j$, i.e., only with
non-strict inequalities, it is proved that $K=K_1$.
\oomit{The set $K$ is defined by a set of quadratic inequalities. The
set $K_1$ is defined by a positive semi-definite constraint and a
set of linear inequalities.}
By the following Theorem \ref{the:1},
we show that $K=K_1$ also holds even in
the presence of strict inequalities when $f_i$ and $g_j$ are CQ. So, when
$f_i$ and $g_j$ are CQ, the CQ
polynomial inequalities can be transformed equivalently to a set of
linear inequality constraints and a positive semi-definite
constraint.
\begin{theorem} \label{the:1}
Let $f_1,\ldots,f_r$ and $g_1,\ldots,g_s$ be CQ polynomials, and let $K$ and
$K_1$ be as above. Then $K=K_1$.
\end{theorem}
\begin{proof}
For any $x \in K$, let $\XX=x x^T$. Then it is easy to see that
$x,\XX$ satisfy (\ref{eq:mom1}). So $x \in K_1$, that is $K \subseteq K_1$.
Next, we prove $K_1 \subseteq K$.
Let $x \in K_1$, then there exists a symmetric $n\times n $ matrix $\XX$ satisfying
(\ref{eq:mom1}).
Because
$\left( \begin{matrix}
1 & x^T\\
x & \XX
\end{matrix}
\right)\succeq 0$,
we have $\XX - x x^T \succeq 0$.
Then by the last two conditions in (\ref{eq:mom1}), we have
\begin{align*}
f_i(x) &= \left< P_i,\left( \begin{matrix}
1 & x^T \\
x & x x^T
\end{matrix}
\right ) \right> =
\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right> +
\left<P_i,\left( \begin{matrix}
0 & 0 \\
0 & x x^T - \XX
\end{matrix}
\right ) \right> \\
&=\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right> +
\left<A_i , x x^T - \XX \right> \ge \left<A_i , x x^T - \XX \right>, \\[1mm]
g_j(x) &= \left< Q_j,\left( \begin{matrix}
1 & x^T \\
x & x x^T
\end{matrix}
\right ) \right> =
\left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right> +
\left<Q_j,\left( \begin{matrix}
0 & 0 \\
0 & x x^T - \XX
\end{matrix}
\right ) \right> \\
&=\left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right> +
\left<B_j , x x^T - \XX \right> > \left<B_j , x x^T - \XX \right>.
\end{align*}
Since $f_i$ and $g_j$ are all CQ, $A_i \preceq 0$ and $B_j \preceq 0$.
Moreover, $\XX -x x^T \succeq 0$, i.e.,
$x x^T -\XX \preceq 0$. Thus,
$\left<A_i , x x^T - \XX \right> \ge 0$ and
$\left<B_j , x x^T - \XX \right> \ge 0$.
Hence, we have $f_i(x) \ge 0$ and $g_j(x) > 0$, so $x \in K$,
that is
$K_1 \subseteq K$.
\qed
\end{proof}
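The key step in the proof of Theorem \ref{the:1} is the identity $f_i(x) = \left<P_i, M\right> + \left<A_i, xx^T - \XX\right>$ together with the sign of the second term. The following Python sketch (random CQ data of our choosing, not part of the development) checks both numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Random CQ polynomial: A = -C C^T is negative semi-definite by construction.
C = rng.standard_normal((n, n))
A = -C @ C.T
avec = rng.standard_normal(n)
a0 = rng.standard_normal()
P = np.block([[np.array([[a0]]), avec[None, :]],
              [avec[:, None], A]])

# A point x and a linearization X with X - x x^T positive semi-definite,
# as in the second half of the proof.
x = rng.standard_normal(n)
D = rng.standard_normal((n, n))
X = np.outer(x, x) + D @ D.T

M = np.block([[np.array([[1.0]]), x[None, :]],
              [x[:, None], X]])

f_x = x @ A @ x + 2 * avec @ x + a0      # f(x)
lin = (P * M).sum()                       # <P, [[1, x^T], [x, X]]>
slack = (A * (np.outer(x, x) - X)).sum()  # <A, x x^T - X>

# Identity used in the proof: f(x) = <P, M> + <A, x x^T - X>;
# the slack is >= 0 since A <= 0 and x x^T - X <= 0, hence f(x) >= <P, M>.
assert np.isclose(f_x, lin + slack)
assert slack >= -1e-9
assert f_x >= lin - 1e-9
```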
\subsection{Motzkin's transposition theorem in matrix form}
If
$\left<P,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >$
is seen as a linear expression in $x,\overline{\XX}$,
then Corollary \ref{cor:linear}
can be reformulated as:
\begin{corollary} \label{cor:matrix}
Let $x$ be a column vector variable of dimension $n$ and $\XX$ an $n \times n$
symmetric matrix variable. Suppose $P_0,P_1,\ldots, P_r$ and $Q_1,\ldots, Q_s$ are
$(n+1) \times (n+1)$ symmetric matrices. Let
\begin{align*}
W \hat{=} \{ (x,\XX) \mid
\wedge_{i=0}^r \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \ge 0,~
\wedge_{j=1}^s \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > > 0
\},
\end{align*}
then $W=\emptyset$ iff there exist $\lambda_0, \lambda_1,\ldots,\lambda_r \ge 0$
and $\eta_0,\eta_1,\ldots,\eta_s \ge 0$ such that
\begin{align*}
&\sum_{i=0}^{r}\lambda_i \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > +
\sum_{j=1}^{s}\eta_j \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > + \eta_0 \equiv 0, \mbox{ and }\\
&\eta_0 + \eta_1 + \ldots + \eta_s > 0.
\end{align*}
\end{corollary} | 3,569 | 68,445 | en |
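Since the certificate of Corollary \ref{cor:matrix} is a polynomial identity in the entries of the block matrix, it can be validated by sampling arbitrary $x$ and $\XX$. The following Python sketch (a hand-built one-variable instance of ours, not part of the development) does so.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hand-built instance (ours): f(x) = x - 1 >= 0 lifts to <P1, M> and
# g(x) = -x > 0 lifts to <Q1, M>, with M = [[1, x], [x, X]];
# the system is infeasible.
P1 = np.array([[-1.0, 0.5], [0.5, 0.0]])
Q1 = np.array([[0.0, -0.5], [-0.5, 0.0]])
lam1, eta1, eta0 = 1.0, 1.0, 1.0

# The certificate lam1*<P1,M> + eta1*<Q1,M> + eta0 == 0 is an identity in
# the entries of M, so it must vanish for arbitrary x and X.
for _ in range(100):
    x, X = rng.standard_normal(), rng.standard_normal()
    M = np.array([[1.0, x], [x, X]])
    val = lam1 * (P1 * M).sum() + eta1 * (Q1 * M).sum() + eta0
    assert abs(val) < 1e-9

assert eta0 + eta1 > 0
```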
train | 0.4991.4 | \section{Algorithm for generating interpolants for Concave
Quadratic Polynomial inequalities}
\begin{problem} \label{CQ-problem}
Given two formulas $\phi$ and $\psi$
on $n$ variables with $\phi \wedge \psi
\models \bot$,
where
\begin{eqnarray*}
\phi & = & f_1 \ge 0 \wedge \ldots \wedge f_{r_1} \ge 0 \wedge g_1 >0 \wedge \ldots \wedge g_{s_1} > 0, \\
\psi & = & f_{r_1+1} \ge 0 \wedge \ldots \wedge f_{r} \ge 0 \wedge g_{s_1+1} >0 \wedge \ldots \wedge g_{s} > 0,
\end{eqnarray*}
in which
$f_1, \ldots, f_{r}, g_1, \ldots, g_s$ are all CQ,
develop an algorithm
to generate a (reverse) Craig interpolant
$I$ for $\phi$ and $\psi$, on the common variables of
$\phi$ and $\psi$,
such that $\phi \models I$ and $I \wedge
\psi \models \bot$. For convenience, we partition the variables appearing in the
polynomials above into three disjoint subsets $x=(x_1,\ldots,x_d)$ to stand for
the common variables appearing in both $\phi$ and $\psi$, $\yy=(y_1,\ldots,y_u)$ to stand for the
variables appearing only in $\phi$ and $\zz=(z_1,\ldots,z_v)$
to stand for the variables appearing only in $\psi$, where $d+u+v=n$.
\end{problem}
Since linear polynomials are trivially concave quadratic,
our algorithm (Algorithm $\mathbf{IGFQC}$ in Section \ref{sec:alg}) can deal
with the linear case too; in fact, it is a generalization of the
algorithm for linear inequalities.
\oomit{Actually, since our main result (Theorem \ref{the:main}) is a generalization of Motzkin's theorem, our algorithm is essentially the same as other interpolant generation algorithms (e.g., \cite{RS10}) based on Motzkin's theorem when all the $f_i$ and $g_j$ are linear.
\begin{example}
For the formulas in Example \ref{exam:pre}, the input and output of our algorithm are
\begin{description}
\item[{\sf In:}] $\phi :f_1 = x_1, f_2 = x_2,g_1= -x_1^2+2 x_1 - x_2^2 + 2 x_2 - y^2$;\\
$\psi : f_3= -x_1^2-x_2^2 -2x_2-z^2$
\item[{\sf Out:}] $\frac{1}{2}x_1^2+\frac{1}{2}x_2^2+2x_2 > 0$
\end{description}
\end{example}
In the following sections, we come up with an condition $\mathbf{NSOSC}$ (Definition \ref{def:sosc}) and generalize the
Motzkin's transposition theorem (Theorem \ref{motzkin-theorem}) to Theorem \ref{the:main} for concave quadratic case when this condition hold in Section \ref{theory}.
When the condition $\mathbf{NSOSC}$ holds, we give a method to solve Problem 1 based on Theorem \ref{the:main} in Section \ref{sec:hold}. For the general case of concave quadratic ,
Problem 1 is solved in a recursive way in Section \ref{g:int}. }
The proposed algorithm is recursive: the base case is when no sum-of-squares
(SOS) polynomial can be generated as a nonpositive constant combination of the
nonstrict inequalities in $\phi \land \psi$.
When this condition is not satisfied, i.e., an SOS polynomial
can be generated by such a combination, it is possible
to identify variables that can be eliminated by replacing them
with linear expressions in the remaining variables, and thus to
generate an equisatisfiable problem with fewer variables on which
the algorithm can be recursively invoked.
\begin{lemma} \label{lemma:1}
Let $U \in \mathbb{R}^{(n+1) \times (n+1)}$ be a matrix.
If
$\left<U,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \le 0$
for any $x \in \mathbb{R}^n$ and symmetric matrix $\XX \in \mathbb{R}^{n\times n}$ with
$\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix} \right ) \succeq 0$
, then $U \preceq 0$.
\end{lemma}
\begin{proof}
Assume that $U \not \preceq 0$.
Then there exists a column vector $\yy=(y_0,y_1,\ldots,y_n)^T \in \mathbb{R}^{n+1}$ such that $c:=\yy^T U\yy=\left< U, \yy \yy^T \right>>0$. Denote $M= \yy \yy^T$, then $M \succeq 0$.
If $y_0 \neq 0$, then let $x = (\frac{y_1}{y_0},\ldots,\frac{y_n}{y_0})^T$, and $\XX= x x^T$.
Thus,
$\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) =
\left( \begin{matrix}
1 & x^T \\
x & x x^T
\end{matrix}
\right ) = \frac{1}{y_0^2} M \succeq 0$,
and
$\left<U,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right> = \left<U, \frac{1}{y_0^2} M \right> = \frac{c}{y_0^2} >0$,
which contradicts
$\left<U,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \le 0$.
If $y_0= 0$, then $M_{(1,1)} = 0$.
Let $M{'}=\frac{|U_{(1,1)}|+1 }{c} M$, then $M{'} \succeq 0$. Further,
let $M{''} = M{'}+\left( \begin{matrix}
1 & 0& \cdots & 0 \\
0 & 0& \cdots & 0 \\
\vdots & \vdots &\ddots &\vdots \\
0 & 0& \cdots & 0
\end{matrix}
\right )$.
Then $M{''}\succeq 0$ and $M{''}_{(1,1)}=1$.
Let
$\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right )= M{''} $, then
\begin{align*}
\left<U,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >&=
\left<U, M{''} \right > =
\left<U,
M{'}+\left( \begin{matrix}
1 & 0& \cdots & 0 \\
0 & 0& \cdots & 0 \\
\vdots & \vdots &\ddots &\vdots \\
0 & 0& \cdots & 0
\end{matrix}
\right )
\right >\\
&=
\left<U, \frac{|U_{(1,1)}|+1 }{c} M
+\left( \begin{matrix}
1 & 0& \cdots & 0 \\
0 & 0& \cdots & 0 \\
\vdots & \vdots &\ddots &\vdots \\
0 & 0& \cdots & 0
\end{matrix}
\right )
\right > \\
&=
\frac{|U_{(1,1)}|+1 }{c}\left<U, M
\right > + U_{(1,1)}\\
&=|U_{(1,1)}|+1+ U_{(1,1)} >0,
\end{align*} which also contradicts
$\left<U,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \le 0$.
Thus,
the assumption does not hold, that is $U\preceq 0$. \qed
\end{proof}
\begin{lemma} \label{lemma:2}
Let $\mathcal{A} = \{ \yy \in \mathbb{R}^m \mid A_i \yy-\alpha_i \ge 0, B_j \yy-\beta_j > 0,
~for~ i=1,\ldots,r,~ j=1,\ldots,s
\}$ be a nonempty set, where each $A_i$ and $B_j$ is a row vector and $\alpha_i, \beta_j \in \mathbb{R}$,
and let $\mathcal{B} \subseteq \mathbb{R}^m$ be a nonempty closed convex set. If
$\mathcal{A} \cap \mathcal{B} = \emptyset$ and there does not exist a linear form $L(\yy)$
such that
\begin{align} \label{separ:1}
\forall \yy \in \mathcal{A}, L(\yy) >0, ~and~~
\forall \yy \in \mathcal{B}, L(\yy) \le 0,
\end{align}
then there is a linear form $L_0(\yy) \not\equiv 0$
and $\delta_1, \ldots, \delta_r \ge 0$
such that
\begin{align}
L_0(\yy) = \sum_{i=1}^{r} \delta_i (A_i \yy - \alpha_i)~and~~
\forall \yy \in \mathcal{B}, L_0(\yy) \le0.
\end{align}
\end{lemma}
\begin{proof}
Since $\mathcal{A}$ is defined by a set of linear inequalities, $\mathcal{A}$ is a
convex set.
Using the separation theorem on disjoint convex sets,
cf. e.g. \cite{barvinok02},
there exists a linear
form $L_0(\yy) \not\equiv 0$ such that
\begin{align}
\forall \yy \in \mathcal{A}, L_0(\yy) \ge 0, ~and~~
\forall \yy \in \mathcal{B}, L_0(\yy) \le0.
\end{align}
Since by assumption no linear form satisfies (\ref{separ:1}), $L_0$ cannot be
strictly positive everywhere on $\mathcal{A}$; hence
\begin{align} \label{y0}
\exists\yy_0 \in \mathcal{A}, ~~ L_0(\yy_0)=0.
\end{align}
Since
\begin{align}
\forall \yy \in \mathcal{A}, L_0(\yy) \ge 0,
\end{align}
then
\begin{align*}
&A_1 \yy -\alpha_1 \ge0\wedge \ldots \wedge A_r \yy -\alpha_r \ge0 \wedge \\
&B_1 \yy -\beta_1 > 0\wedge \ldots \wedge B_s \yy -\beta_s > 0 \wedge -L_0(\yy) >0
\end{align*}
has no solution w.r.t. $\yy$.
Using Corollary \ref{cor:linear}, there exist $\lambda_1,\ldots,\lambda_r \ge0$, $\eta_0, \ldots, \eta_s \ge0$ and $\eta \ge0$ such that
\begin{align}
&\sum_{i=1}^{r} \lambda_i (A_i\yy - \alpha_i) + \sum_{j=1}^{s} \eta_j (B_j \yy -\beta_j)+
\eta (-L_0(\yy)) + \eta_0 \equiv 0, \label{eq:lem2} \\
&\sum_{j=0}^{s} \eta_j + \eta > 0. \label{ineq:lem2}
\end{align}
Instantiating (\ref{eq:lem2}) at the point $\yy_0$ from (\ref{y0}) and using (\ref{ineq:lem2}), it follows that
\begin{align*}
\eta_0=\eta_1=\ldots=\eta_s=0,~~ \eta >0.
\end{align*}
For $i=1,\ldots,r$, let $\delta_i=\frac{\lambda_i}{\eta} \ge 0$, then
\begin{align*}
L_0(\yy) = \sum_{i=1}^{r} \delta_i (A_i \yy -\alpha_i)~and~~
\forall \yy \in \mathcal{B}, L_0(\yy) \le0. ~~~ \qed
\end{align*}
\end{proof} | 3,392 | 68,445 | en |
train | 0.4991.5 | \begin{lemma} \label{lemma:2}
Let $\mathcal{A} = \{ \yy \in \mathbb{R}^m \mid A_i \yy-\va_i \ge 0, B_j \yy-\vb_j > 0,
~for~ i=1,\ldots,r, j=1,\ldots,
\}$ be a nonempty set
and $\mathcal{B} \subseteq \mathbb{R}^m$ be an nonempty convex closed set. If
$\mathcal{A} \cap \mathcal{B} = \emptyset$ and there does not exist a linear form $L(\yy)$
such that
\begin{align} \label{separ:1}
\forall \yy \in \mathcal{A}, L(\yy) >0, ~and~~
\forall \yy \in \mathcal{B}, L(\yy) \le 0,
\end{align}
then there is a linear form $L_0(\yy) \not\mathbf{Eq}uiv 0$
and $\delta_1, \ldots, \delta_r \ge 0$
such that
\begin{align}
L_0(\yy) = \sum_{i=1}^{r} \delta_i (A_i \yy - \alpha_i)~and~~
\forall \yy \in \mathcal{B}, L_0(\yy) \le0.
\end{align}
\end{lemma}
\begin{proof}
Since $\mathcal{A}$ is defined by a set of linear inequalities, $\mathcal{A}$ is a
convex set.
Using the separation theorem on disjoint convex sets,
cf. e.g. \cite{barvinok02},
there exists a linear
form $L_0(\yy) \not\mathbf{Eq}uiv 0$ such that
\begin{align}
\forall \yy \in \mathcal{A}, L_0(\yy) \ge 0, ~and~~
\forall \yy \in \mathcal{B}, L_0(\yy) \le0.
\end{align}
From (\ref{separ:1}) we have that
\begin{align} \label{y0}
\exists\yy_0 \in \mathcal{A}, ~~ L_0(\yy_0)=0.
\end{align}
Since
\begin{align}
\forall \yy \in \mathcal{A}, L_0(\yy) \ge 0,
\end{align}
then
\begin{align*}
&A_1 \yy -\alpha_1 \ge0\wedge \ldots \wedge A_r \yy -\alpha_r \ge0 \wedge \\
&B_1 \yy -\beta_1 > 0\wedge \ldots \wedge B_s \yy -\beta_s > 0 \wedge -L_0(\yy) >0
\end{align*}
has no solution w.r.t. $\yy$.
Using Corollary \ref{cor:linear}, there exist $\lambda_1,\ldots,\lambda_r \ge0$, $\eta_0, \ldots, \eta_s \ge0$ and $\eta \ge0$ such that
\begin{align}
&\sum_{i=1}^{r} \lambda_i (A_i\yy - \alpha_i) + \sum_{j=1}^{s} \eta_j (B_j \yy -\beta_j)+
\eta (-L_0(\yy)) + \eta_0 \mathbf{Eq}uiv 0, \label{eq:lem2} \\
&\sum_{j=0}^{s} \eta_j + \eta > 0. \label{ineq:lem2}
\end{align}
Applying $\yy_0$ in (\ref{y0}) to (\ref{eq:lem2}) and (\ref{ineq:lem2}), it follows
\begin{align*}
\eta_0=\eta_1=\ldots=\eta_s=0,~~ \eta >0.
\end{align*}
For $i=1,\ldots,r$, let $\delta_i=\frac{\lambda_i}{\eta} \ge 0$, then
\begin{align*}
L_0(\yy) = \sum_{i=1}^{r} \delta_i (A_i \yy -\alpha_i)~and~~
\forall \yy \in \mathcal{B}, L_0(\yy) \le0. ~~~ \qed
\end{align*}
\end{proof}
The lemma below asserts the existence of a strict linear inequality
separating the sets $\mathcal{A}$ and $\mathcal{B}$ defined below,
for the case when no nonnegative constant combination of the
linearizations $P_i$ of the $f_i$s is a nonzero negative semi-definite matrix.
\begin{lemma} \label{linear}
Let
\begin{align*}
\mathcal{A} = \{ (x,\overline{\XX}) \mid
\wedge_{i=1}^r \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \ge 0,~
\wedge_{j=1}^s \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > > 0
\}
\end{align*}
be a nonempty set and $\mathcal{B}$ a nonempty closed convex set with
$\mathcal{A} \cap \mathcal{B} = \emptyset$.
If there do not exist $\lambda_1,\ldots,\lambda_r \ge 0$ such that
$0 \neq \sum_{i=1}^{r} \lambda_i P_i \preceq 0$,
then there exists a linear form $L(x,\overline{\XX})$ such that
\begin{align*}
\forall (x,\overline{\XX}) \in \mathcal{A},~ L(x,\overline{\XX}) >0, ~and~~
\forall (x,\overline{\XX}) \in \mathcal{B},~ L(x,\overline{\XX}) \le0.
\end{align*}
\end{lemma}
\begin{proof}
The proof is by contradiction. \oomit{ i.e., there does not exist a linear form $L(x,\overline{\XX})$ such that
\begin{align*}
\forall (x,\overline{\XX}) \in \mathcal{A}, L(x,\overline{\XX}) >0, ~and~~
\forall (x,\overline{\XX}) \in \mathcal{B}, L(x,\overline{\XX}) \le0.
\end{align*}}
Given that $\mathcal{A}$ is defined by a set of linear inequalities
and $\mathcal{B}$ is a closed convex nonempty set,
by Lemma \ref{lemma:2},
there exist a linear form $L_0(x,\overline{\XX}) \not\equiv 0$
and $\delta_1, \ldots, \delta_r \ge 0$
such that
\begin{align*}
L_0(x,\overline{\XX}) = \sum_{i=1}^{r} \delta_i
\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >~and~~
\forall (x,\overline{\XX}) \in \mathcal{B}, L_0(x,\overline{\XX}) \le0.
\end{align*}
That is, there exists a symmetric matrix $\mathbf{L} \neq 0$ such that
\begin{align}
&\left<\mathbf{L},\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >
\equiv \sum_{i=1}^{r} \delta_i
\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >, \label{eq:mat}\\
& \forall (x,\overline{\XX}) \in \mathcal{B},
\left<\mathbf{L},\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \le0. \label{ineq:mat}
\end{align}
Applying Lemma \ref{lemma:1} to (\ref{ineq:mat}), it follows $\mathbf{L}
\preceq 0$. Since $\mathbf{L} \neq 0$, this implies that $0 \neq \sum_{i=1}^{r} \delta_i P_i =\mathbf{L} \preceq 0$,
which contradicts
the assumption that there do not exist $\lambda_i \ge 0$ with
$0 \neq \sum_{i=1}^{r} \lambda_i P_i \preceq 0$.
\qed
\end{proof} | 2,143 | 68,445 | en |
train | 0.4991.6 | The lemma below asserts the existence of a strict linear inequality
separating $\mathcal{A}$ and $\mathcal{B}$ defined above,
for the case when any nonnegative constant combination of the linearization of $f_i$s is
positive.
\begin{lemma} \label{linear}
Let $\mathcal{A} = \{ \yy \in \mathbb{R}^m \mid A_i \yy-\va_i \ge 0, B_j \yy-\vb_j > 0,
~for~ i=1,\ldots,r, j=1,\ldots,
\}$ be a nonempty set
and $\mathcal{B} \subseteq \mathbb{R}^m$ be an nonempty convex closed set, $\mathcal{A} \cap \mathcal{B} = \emptyset$.
There exists a linear form $L(x,\overline{\XX})$ such that
\begin{align*}
\forall (x,\overline{\XX}) \in \mathcal{A}, L(x,\overline{\XX}) >0, ~and~~
\forall (x,\overline{\XX}) \in \mathcal{B}, L(x,\overline{\XX}) \le0,
\end{align*}
whenever there does not exist $\lambda_i \ge 0$, s.t.,
$ \sum_{i=1}^{r} \lambda_i P_i \preceq 0$.
\end{lemma}
\begin{proof}
Proof is by contradiction. \oomit{ i.e., there does not exists a linear form $L(x,\overline{\XX})$ such that
\begin{align*}
\forall (x,\overline{\XX}) \in \mathcal{A}, L(x,\overline{\XX}) >0, ~and~~
\forall (x,\overline{\XX}) \in \mathcal{B}, L(x,\overline{\XX}) \le0.
\end{align*}}
Given that $\mathcal{A}$ is defined by a set of linear inequalities
and $\mathcal{B}$ is a closed convex nonempty set,
by Lemma \ref{lemma:2},
there exist a linear form $L_0(x,\overline{\XX}) \not\mathbf{Eq}uiv 0$
and $\delta_1, \ldots, \delta_r \ge 0$
such that
\begin{align*}
L_0(x,\overline{\XX}) = \sum_{i=1}^{r} \delta_i
\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >~and~~
\forall (x,\overline{\XX}) \in \mathcal{B}, L_0(x,\overline{\XX}) \le0.
\end{align*}
I.e. there exists an symmetrical matrix $\mathbf{L} \not\mathbf{Eq}uiv 0$ such that
\begin{align}
&\left<\mathbf{L},\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >
\mathbf{Eq}uiv \sum_{i=1}^{r} \delta_i
\left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >, \label{eq:mat}\\
& \forall (x,\overline{\XX}) \in \mathcal{B},
\left<\mathbf{L},\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \le0. \label{ineq:mat}
\end{align}
Applying Lemma \ref{lemma:1} to (\ref{ineq:mat}), it follows $\mathbf{L}
\preceq 0$. This implies that $\sum_{i=1}^{r} \delta_i P_i =\mathbf{L} \preceq 0$,
which is in contradiction to
the assumption that there does not exist $\lambda_i \ge 0$, s.t.,
$ \sum_{i=1}^{r} \lambda_i P_i \preceq 0$
\qed
\end{proof}
\begin{definition} \label{def:sosc}
Given formulas $\phi$ and $\psi$ as in Problem \ref{CQ-problem},
they satisfy the \emph{non-existence of an SOS condition} ($\mathbf{NSOSC}$) iff
there do not exist $\delta_1\ge0, \ldots, \delta_r\ge 0$ such that
$-(\delta_1 f_1 + \ldots + \delta_r f_r)$ is a nonzero SOS.
\end{definition}
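For a fixed $\delta$, whether $-(\delta_1 f_1 + \ldots + \delta_r f_r)$ is a nonzero SOS can be decided by a positive semi-definiteness check, since a quadratic polynomial $x^T C x + 2 c^T x + c_0$ is SOS iff $\left( \begin{matrix} c_0 & c^T \\ c & C \end{matrix} \right) \succeq 0$; the full $\mathbf{NSOSC}$ test is a semi-definite feasibility problem over all $\delta$. The following Python sketch (the helper name is ours; not part of the development) exhibits, for the polynomials $f_1 = x_1$, $f_2 = x_2$, $f_3 = -x_1^2 - x_2^2 - 2x_2 - z^2$ used earlier, a $\delta$ for which the combination is a nonzero SOS, so that instance falls into the recursive case.

```python
import numpy as np

# Quadratics over (x1, x2, z), each as (C, c, c0) with p = x^T C x + 2 c^T x + c0:
#   f1 = x1,  f2 = x2,  f3 = -x1^2 - x2^2 - 2 x2 - z^2.
f1 = (np.zeros((3, 3)), np.array([0.5, 0.0, 0.0]), 0.0)
f2 = (np.zeros((3, 3)), np.array([0.0, 0.5, 0.0]), 0.0)
f3 = (np.diag([-1.0, -1.0, -1.0]), np.array([0.0, -1.0, 0.0]), 0.0)

def is_nonzero_sos(C, c, c0, tol=1e-9):
    """A quadratic is a nonzero SOS iff its Gram matrix [[c0, c^T], [c, C]]
    is positive semi-definite and nonzero (helper name is ours)."""
    P = np.block([[np.array([[c0]]), c[None, :]], [c[:, None], C]])
    return np.linalg.eigvalsh(P).min() >= -tol and np.abs(P).max() > tol

# The combination delta = (0, 2, 1) gives
# -(0*f1 + 2*f2 + 1*f3) = x1^2 + x2^2 + z^2, a nonzero SOS,
# so NSOSC fails for this instance.
delta = [0.0, 2.0, 1.0]
C = -sum(d * p[0] for d, p in zip(delta, (f1, f2, f3)))
c = -sum(d * p[1] for d, p in zip(delta, (f1, f2, f3)))
c0 = -sum(d * p[2] for d, p in zip(delta, (f1, f2, f3)))
assert is_nonzero_sos(C, c, c0)

# By contrast, f1 = x1 alone is not SOS (it takes negative values).
assert not is_nonzero_sos(*f1)
```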
\oomit{The above condition implies that there is no \textcolor{blue}{nonnegative} constant combination
of nonstrict inequalities which is always {nonpositive}.
\textcolor{green}{If quadratic polynomials appearing in $\phi$ and $\psi$ are
linearized, then the above condition is equivalent to requiring
that every nonnegative linear combination of the linearization of
nonstrict inequalities in $\phi$ and $\psi$ is {negative.}}
The following theorem gives a method for generating an
interpolant for this case by considering linearization of the
problem and using Corollary \ref{cor:matrix}. In that sense, this
theorem is a generalization of Motzkin's theorem to CQ polynomial inequalities. }
The following theorem gives a method for generating an
interpolant when the condition $\mathbf{NSOSC}$\ holds by considering linearization of the
problem and using Corollary \ref{cor:matrix}. In that sense, this
theorem is a generalization of Motzkin's theorem to CQ polynomial inequalities.
\begin{theorem}
\label{the:main}
Let $f_1,\ldots,f_r,g_1,\ldots,g_s$ be CQ polynomials
and let $K$ be defined as in (\ref{eq:opt}) with $K=\emptyset$. If the condition $\mathbf{NSOSC}$ holds,
then there exist $\lambda_i\ge 0$ ($i=1,\cdots,r$), $\eta_j \ge 0$ ($j=0,1,\cdots,s$) and a quadratic SOS polynomial $h \in \mathbb{R}[x]$ such that
\begin{align}
&\sum_{i=1}^{r} \lambda_i f_i +\sum_{j=1}^{s} \eta_j g_j + \eta_0 + h \equiv 0,\\
&\eta_0+\eta_1 + \ldots + \eta_s = 1.
\end{align}
\end{theorem}
The proof uses the fact that if $f_i$s satisfy the $\mathbf{NSOSC}$
condition, then the linearization of $f_i$s and $g_j$s can be
exploited to generate an interpolant expressed in terms of $x$.
The main issue is to decompose the result from the
linearized problem into two components giving an interpolant.
\begin{proof}
Recall from Section \ref{linearization} that
\begin{align*}
f_i = \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right ) \right > ,~~
g_j = \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right ) \right >.
\end{align*}
Let
\begin{equation}\label{eq:mom}
\begin{aligned}
&\mathcal{A}:=\{ (x,\overline{\XX}) \mid
\wedge_{i=1}^r \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > \ge 0,
\wedge_{j=1}^s \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > > 0
\oomit{~for~
\begin{matrix}
i=1,\ldots,r \\
j=1,\ldots,s
\end{matrix} }
\}, \\
&\mathcal{B}:=\{(x,\overline{\XX})\mid \left( \begin{matrix}
1 & x^T\\
x & \XX
\end{matrix}
\right)\succeq 0 \}, \\
\end{aligned}
\end{equation}
be linearizations of the CQ polynomials $f_i$s
and $g_j$s,
where \begin{align*}
\overline{\XX}=(&\XX_{(1,1)},\XX_{(2,1)},\XX_{(2,2)},\ldots,
\XX_{(k,1)}, \ldots,\XX_{(k,k)}, \ldots, \XX_{(n,1)}, \ldots,\XX_{(n,n)}).
\end{align*}
By Theorem \ref{the:1}, the projection of $\mathcal{A} \cap \mathcal{B}$ onto the $x$-coordinates is $K_1=K=\emptyset$, and hence $\mathcal{A} \cap \mathcal{B}=\emptyset$.
Since the $f_i$s satisfy the $\mathbf{NSOSC}$ condition, their linearizations
satisfy the condition of Lemma \ref{linear}; thus
there exists a linear form $\mathcal{L}(x,\XX)=\left<L,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right>$ such that
\begin{align}
& \mathcal{L}(x,\XX) > 0, ~for~ (x,\XX) \in \mathcal{A}, \label{lin-sep1}\\
& \mathcal{L}(x,\XX) \le 0, ~for~ (x,\XX) \in \mathcal{B} \label{lin-sep2}.
\end{align}
Applying Lemma \ref{lemma:1}, it follows that $L \preceq 0$.
Additionally, applying Lemma \ref{cor:matrix} to
(\ref{lin-sep1}) and denoting $-L$ by $P_0$,
there exist $\overline{\lambda_0}, \overline{\lambda_1},\ldots,\overline{\lambda_r} \ge 0$
and $\overline{\eta_0},\overline{\eta_1},\ldots,\overline{\eta_s} \ge 0$ such that
\begin{align*}
& \sum_{i=0}^{r}\overline{\lambda_i} \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > +
\sum_{j=1}^{s}\overline{\eta_j} \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > + \overline{\eta_0} \equiv 0, \\
& \overline{\eta_0} + \overline{\eta_1} + \ldots + \overline{\eta_s} > 0.
\end{align*}
Let $\lambda_i=\frac{\overline{\lambda_i}}{\sum_{k=0}^s \overline{\eta_k}}$ for $i=0,\ldots,r$ and
$\eta_j=\frac{\overline{\eta_j}}{\sum_{k=0}^s \overline{\eta_k}}$ for $j=0,\ldots,s$; then
\begin{align}
& \lambda_0
\left<-L,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right >+
\sum_{i=1}^{r}\lambda_i \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > +
\sum_{j=1}^{s}\eta_j \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & \XX
\end{matrix}
\right ) \right > + \eta_0 \equiv 0, \label{lin-sep5}\\
& \eta_0 + \eta_1 + \ldots + \eta_s =1. \label{lin-sep6}
\end{align}
Since (\ref{lin-sep5}) holds for any $x$ and any symmetric matrix $\XX$, setting $\XX=xx^T$ gives
\begin{align*}
\lambda_0
\left<-L,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right ) \right >+
\sum_{i=1}^{r}\lambda_i \left<P_i,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right ) \right > +
\sum_{j=1}^{s}\eta_j \left<Q_j,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right ) \right > + \eta_0 \equiv 0,
\end{align*}
which means that
\begin{align*}
h+\sum_{i=1}^{r} \lambda_i f_i +\sum_{j=1}^{s} \eta_j g_j + \eta_0 \equiv 0,
\end{align*}
where $h=\lambda_0 \left<-L,\left( \begin{matrix}
1 & x^T \\
x & xx^T
\end{matrix}
\right ) \right >$. Since $L\preceq 0$, we have $-L \succeq 0$, and hence $h$ is a quadratic SOS polynomial.
\qed
\end{proof} | 3,594 | 68,445 | en |
train | 0.4991.7 | \subsection{Base Case: Generating Interpolant when NSOSC is satisfied} \label{sec:hold}
Using the above theorem, an interpolant for $\phi$ and $\psi$ can be
generated from the SOS polynomial $h$ obtained there, since $h$ can be
split into two SOS polynomials, one over the common variables of $\phi$ and
$\psi$ together with the local variables of $\phi$, and one over the common
variables together with the local variables of $\psi$.
This is proved in the following theorem, with the help of two lemmas.
\begin{lemma} \label{h:sep}
Given a quadratic SOS polynomial
$h(x,\yy,\zz) \in \mathbb{R}[x,\yy,\zz]$ on variables
$x=(x_1,\cdots,x_d) \in
\mathbb{R}^{d}$,$\yy=(y_1,\cdots,y_u)
\in \mathbb{R}^{u}$
and $\zz=(z_1,\cdots,z_v) \in \mathbb{R}^{v}$, such that the coefficients of
$y_i z_j$ ($i=1,\cdots,u$, $j=1,\cdots,v$) all vanish when
expanding $h(x,\yy,\zz)$, there exist two quadratic
polynomials $h_1(x,\yy) \in \mathbb{R}[x,\yy]$ and
$h_2(x,\zz) \in \mathbb{R}[x,\zz]$ such that $h=h_1+h_2$;
moreover, both $h_1$ and $h_2$ are SOS.
\end{lemma}
\begin{proof}
Since $h(x,\yy,\zz)$ is a quadratic polynomial and the coefficients of $y_i z_j$ ($i=1,\cdots,u$, $j=1,\cdots,v$) all vanish when expanding $h(x,\yy,\zz)$, we have
\begin{align*}
h(x,y_1,\cdots,y_u,\zz)=a_1 y_1^2 + b_1(x,y_2,\cdots,y_u) y_1 + c_1(x,y_2,\cdots,y_u,\zz),
\end{align*}
where $a_1 \in \mathbb{R}$, $b_1(x,y_2,\cdots,y_u) \in \mathbb{R}[x,y_2,\cdots,y_u]$ is a linear function and $c_1(x,y_2,\cdots,y_u,\zz)\in \mathbb{R}[x,y_2,\cdots,y_u,\zz]$ is a
quadratic polynomial.
Since $h(x,\yy,\zz)$ is an SOS polynomial, we have
\begin{align*}
\forall (x,y_1,\cdots,y_u,\zz) \in \mathbb{R}^{d+u+v} ~~~ h(x,y_1,\cdots, y_u, \zz) \geq 0.
\end{align*}
Thus either $a_1=0 \wedge b_1 \equiv 0$ or $a_1>0$.
If $a_1=0 \wedge b_1 \equiv 0$, then we denote
\begin{align*}
p_1(x,y_2,\cdots,y_u,\zz)=c_1(x,y_2,\cdots,y_u,\zz), ~~ q_1(x,y_1,\cdots,y_u)=0;
\end{align*}
otherwise, $a_1 >0$, and we denote
{\small \begin{align*}
p_1(x,y_2,\cdots,y_u,\zz)=h(x,-\frac{b_1}{2 a_1},y_2,\cdots,y_u,\zz),~~
q_1(x,y_1,\cdots,y_u)=a_1 (y_1 + \frac{b_1}{2 a_1})^2.
\end{align*} }
Then, it is easy to see $p_1(x,y_2,\cdots,y_u,\zz)$ is a quadratic polynomial satisfying
\begin{align*}
h(x,y_1,\cdots,y_u,\zz)= p_1(x,y_2,\cdots,y_u,\zz)+q_1(x,y_1,\cdots,y_u),
\end{align*}
and
\begin{align*}
\forall (x,y_2,\cdots,y_u,\zz) \in \mathbb{R}^{d+u-1+v} ~~~ p_1(x,y_2,\cdots, y_u, \zz) \geq 0,
\end{align*}
moreover, the coefficients of $y_i z_j$ ($i=2,\cdots,u$, $j=1,\cdots,v$) all vanish when expanding $p_1(x,y_2,\cdots,y_u,\zz)$, and $q_1(x,y_1,\cdots,y_u)\in \mathbb{R}[x,\yy]$ is an SOS.
By the same argument, we can obtain $p_2(x,y_3,\cdots,y_u,\zz)$, $\cdots$, $p_u(x,\zz)$ and
$q_2(x,y_2, \cdots,y_u)$, $\cdots$, $q_u(x,y_u)$ such that
\begin{align*}
p_{i-1}(x,y_{i},\cdots,y_u,\zz) = p_{i}(x,y_{i+1},\cdots,y_u,\zz)+
q_i(x,y_{i},\cdots,y_u), \\[2mm]
\forall (x,y_{i+1},\cdots,y_u,\zz) \in \mathbb{R}^{d+u-i+v} ~ p_i(x,y_{i+1},\cdots, y_u, \zz) \geq 0,\\
q_i(x,y_{i},\cdots,y_u) {\rm~ is ~ an ~ SOS ~polynomial},
\end{align*}
for $i=2,\cdots,u$.
Therefore, letting
\begin{align*}
h_1(x,\yy)=q_1(x,y_1,\cdots,y_u) + \cdots + q_u(x,y_u), ~~
h_2(x,\zz)=p_u(x,\zz),
\end{align*}
we have that $h_1(x,\yy) \in \mathbb{R}[x,\yy]$ is an SOS and
$\forall (x,\zz) \in \mathbb{R}^{d+v} ~ h_2(x,\zz)= p_u(x,\zz) \geq 0$.
Hence, $h_2(x,\zz)$ is also an SOS, because a polynomial of degree $2$ is
positive semi-definite iff it is an SOS polynomial.
Thus $h_1(x,\yy) \in \mathbb{R}[x,\yy]$ and $h_2(x,\zz) \in \mathbb{R}[x,\zz]$ are both SOS,
moreover,
{\small \begin{align*}
h_1+h_2=q_1+\cdots +q_{u-1}+q_u +p_u
=q_1+\cdots+ q_{u-1} +p_{u-1}
=&\cdots
= q_1+p_1
=h. ~~ \qed
\end{align*} }
\end{proof} | 1,848 | 68,445 | en |
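The completion-of-squares argument above is constructive. As an illustration only (not part of the paper's development), the following plain-Python check exercises the split on the concrete SOS polynomial $h=(y-x)^2+(z+x)^2$, which has no $y z$ cross term; the proof's construction gives $q_1=(y-x)^2$ and $p_1 = h(x, y{:=}x, z)=(z+x)^2$, i.e., $h_1=(y-x)^2$ and $h_2=(z+x)^2$.

```python
import random

# Toy instance of the splitting lemma: h(x, y, z) has no y*z cross term.
# h = (y - x)^2 + (z + x)^2; completing the square in y gives
# q1 = a1*(y1 + b1/(2*a1))^2 with a1 = 1, b1 = -2x, i.e. q1 = (y - x)^2,
# and p1 = h(x, y := x, z) = (z + x)^2, so h1 = q1 and h2 = p1.

def h(x, y, z):  return (y - x)**2 + (z + x)**2
def h1(x, y):    return (y - x)**2          # SOS in R[x, y]
def h2(x, z):    return (z + x)**2          # SOS in R[x, z]

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    assert abs(h(x, y, z) - (h1(x, y) + h2(x, z))) < 1e-9  # h = h1 + h2
    assert h1(x, y) >= 0 and h2(x, z) >= 0                 # both nonnegative
print("split verified")
```

The sampling check is of course no proof; it merely confirms the identity $h=h_1+h_2$ and the nonnegativity of each summand for this hand-built example.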
train | 0.4991.8 | The above proof of Lemma \ref{h:sep} gives a method to express
$h, h_1, h_2$ as sums of squares of linear expressions and a
nonnegative real number.
\begin{lemma}\label{lem:split}
Let $h, h_1, h_2$ be as in the statement of Lemma \ref{h:sep}. Then,
{\small \begin{align*}
\mathrm{(H)}:~ h(x,\yy,\zz)=&a_1 (y_1 -l_1(x,y_2,\ldots,y_u))^2 + \ldots + a_u (y_u -l_u(x))^2+\\
&a_{u+1} (z_1 -l_{u+1}(x,z_2,\ldots,z_v))^2 + \ldots + a_{u+v} (z_v -l_{u+v}(x))^2+\\
&a_{u+v+1}(x_1 - l_{u+v+1}(x_2,\ldots,x_d))^2 + \ldots + a_{u+v+d} (x_d - l_{u+v+d})^2 \\
&+a_{u+v+d+1},
\end{align*} }
where $a_i \ge 0$ and $l_j$ is a linear expression
in the corresponding variables, for $ i=1,\ldots,u+v+d+1$, $
j=1,\ldots,u+v+d$. Further,
{\small \begin{align*}
\mathrm{(H1)}:~ &h_1(x,\yy)=a_1 (y_1 -l_1(x,y_2,\ldots,y_u))^2 + \ldots + a_u (y_u -l_u(x))^2+\\
&\frac{a_{u+v+1}}{2}(x_1 - l_{u+v+1}(x_2,\ldots,x_d))^2 + \ldots + \frac{a_{u+v+d}}{2} (x_d - l_{u+v+d})^2 +\frac{a_{u+v+d+1}}{2}, \\[2mm]
\mathrm{(H2)}:~ &h_2(x,\zz)=a_{u+1} (z_1 -l_{u+1}(x,z_2,\ldots,z_v))^2 + \ldots + a_{u+v} (z_v -l_{u+v}(x))^2+\\
&\frac{a_{u+v+1}}{2}(x_1 - l_{u+v+1}(x_2,\ldots,x_d))^2 + \ldots + \frac{a_{u+v+d}}{2} (x_d - l_{u+v+d})^2+\frac{a_{u+v+d+1}}{2}.
\end{align*} }
\end{lemma}
\begin{theorem} \label{the:int}
Let $\phi$ and $\psi$ be as defined in Problem 1 with $\phi\wedge\psi\models\bot$, and suppose they satisfy
$\mathbf{NSOSC}$.
Then there exist $\lambda_i\ge 0$ ($i=1,\cdots,r$), $\eta_j \ge 0$ ($j=0,1,\cdots,s$) and two quadratic SOS polynomials $h_1 \in \mathbb{R}[x,\yy]$ and $h_2 \in \mathbb{R}[x,\zz]$ such that
\begin{align}
& \sum_{i=1}^{r} \lambda_i f_i +\sum_{j=1}^{s} \eta_j g_j + \eta_0 + h_1+h_2 \equiv 0, \label{con1:inte}\\
& \eta_0+\eta_1 + \ldots + \eta_s = 1.\label{con2:inte}
\end{align}
Moreover, if $\sum_{j=0}^{s_1} \eta_j > 0$, then $I >0$ is an interpolant,
otherwise $I \ge 0$ is an interpolant, where $I = \sum_{i=1}^{r_1}
\lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j + \eta_0 + h_1 \in \mathbb{R}[x]$.
\end{theorem}
\begin{proof}
From Theorem \ref{the:main}, there exist $\lambda_i\ge 0$ ($i=1,\cdots,r$), $\eta_j \ge 0$ ($j=0,1,\cdots,s$) and a quadratic SOS polynomial $h \in \mathbb{R}[x,\yy,\zz]$ such that
\begin{align}
& \sum_{i=1}^{r} \lambda_i f_i +\sum_{j=1}^{s} \eta_j g_j + \eta_0 + h \equiv 0, \label{r1}\\
& \eta_0+\eta_1 + \ldots + \eta_s = 1. \label{r2}
\end{align}
Obviously, (\ref{r1}) is equivalent to the following formula
\begin{align*}
\sum_{i=1}^{r_1} \lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j +\eta_0+
\sum_{i=r_1+1}^{r} \lambda_i f_i +\sum_{j=s_1+1}^{s} \eta_j g_j+ h \equiv 0.
\end{align*}
It is easy to see that
\begin{align*}
\sum_{i=1}^{r_1} \lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j +\eta_0 \in \mathbb{R}[x,\yy], ~~
\sum_{i=r_1+1}^{r} \lambda_i f_i +\sum_{j=s_1+1}^{s} \eta_j g_j \in \mathbb{R}[x,\zz].
\end{align*}
Thus, for any $1\le i \le u$, $1\le j \le v$, the term $y_i z_j$ does not appear in
\begin{align*}
\sum_{i=1}^{r_1} \lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j +\eta_0+
\sum_{i=r_1+1}^{r} \lambda_i f_i +\sum_{j=s_1+1}^{s} \eta_j g_j .
\end{align*}
Since all the conditions of Lemma \ref{h:sep} are satisfied, there exist two quadratic
SOS polynomials $h_1 \in \mathbb{R}[x,\yy]$ and $h_2 \in \mathbb{R}[x,\zz]$ such that
$h=h_1+h_2$.
Thus, we have
\begin{align*}
&\sum_{i=1}^{r_1} \lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j +\eta_0+h_1 \in \mathbb{R}[x,\yy],\\
&\sum_{i=r_1+1}^{r} \lambda_i f_i +\sum_{j=s_1+1}^{s} \eta_j g_j+h_2 \in \mathbb{R}[x,\zz],\\
&\sum_{i=1}^{r_1} \lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j +\eta_0+h_1+
\sum_{i=r_1+1}^{r} \lambda_i f_i +\sum_{j=s_1+1}^{s} \eta_j g_j+h_2 \equiv 0.
\end{align*}
Besides, as
\begin{align*}
I=\sum_{i=1}^{r_1} \lambda_i f_i +\sum_{j=1}^{s_1} \eta_j g_j +\eta_0+h_1
=-(\sum_{i=r_1+1}^{r} \lambda_i f_i +\sum_{j=s_1+1}^{s} \eta_j g_j+h_2),
\end{align*}
we have $I \in \mathbb{R}[x]$.
It is easy to see that
\begin{itemize}
\item if $\sum_{j=0}^{s_1} \eta_j > 0$ then
$\phi \models I >0$ and $\psi \wedge I>0 \models \bot$,
so $I > 0$ is an interpolant; and
\item if $\sum_{j=s_1+1}^{s} \eta_j > 0$ then
$\phi \models I \ge 0$ and $\psi \wedge I\ge 0 \models \bot$,
hence $I \ge 0$ is an interpolant. \qed
\end{itemize}
\oomit{ Since $\sum_{j=0}^s \eta_j =1$ implies either $ \sum_{j=0}^{s_1} \eta_j > 0$
or $\sum_{j=s_1+1}^{s} \eta_j > 0$.
\qed}
\end{proof} | 2,322 | 68,445 | en |
train | 0.4991.9 | \subsection{Computing Interpolant using Semi-Definite Programming} \label{sec:sdp}
\oomit{When the condition $\mathbf{NSOSC}$ hold, from Theorem \ref{the:int} we can see that, the problem of
interpolant generation can be reduced to the following constraint solving problem.
{\em Find:
\begin{center}
real numbers $\lambda_i\ge 0~(i=1,\cdots,r)$, $\eta_j \ge 0~(j=0,1,\cdots,s)$, and\\
two quadratic SOS polynomials $h_1 \in \mathbb{R}[x,\yy]$ and $h_2 \in \mathbb{R}[x,\zz]$,
\end{center}
such that
\begin{align*}
& \sum_{i=1}^{r} \lambda_i f_i +\sum_{j=1}^{s} \eta_j g_j + \eta_0 + h_1+h_2 \mathbf{Eq}uiv 0, \\
& \eta_0+\eta_1 + \ldots + \eta_s = 1.
\end{align*} } }
Below, we formulate the computation of the $\lambda_i$s, $\eta_j$s, $h_1$
and $h_2$ as a semi-definite programming problem.
Let
\[W=\left( \begin{matrix}
1 & x^T & \yy^T & \zz^T\\
x & xx^T & x\yy^T & x\zz^T\\
\yy & \yy x^T & \yy\yy^T & \yy\zz^T\\
\zz & \zz x^T & \zz\yy^T & \zz\zz^T
\end{matrix}
\right )\]
\begin{comment}
\begin{align} \label{pq:def}
f_i = \left<P_i,\left( \begin{matrix}
1 & x^T & \yy^T & \zz^T\\
x & xx^T & x\yy^T & x\zz^T\\
\yy & \yyx^T & \yy\yy^T & \yy\zz^T\\
\zz & \zzx^T & \zz\yy^T & \zz\zz^T
\end{matrix}
\right ) \right > ,~~
g_j = \left<Q_j,\left( \begin{matrix}
1 & x^T & \yy^T & \zz^T\\
x & xx^T & x\yy^T & x\zz^T\\
\yy & \yyx^T & \yy\yy^T & \yy\zz^T\\
\zz & \zzx^T & \zz\yy^T & \zz\zz^T
\end{matrix}
\right ) \right >,
\end{align}
\end{comment}
\begin{align} \label{pq:def}
f_i = \langle P_i, W\rangle ,~~ g_j = \langle Q_j, W\rangle,
\end{align}
where $P_i$ and $Q_j$ are $(1+d+u+v) \times (1+d+u+v)$ matrices,
and
\begin{comment}
\begin{align*}
&h_1 = \left<
M,
\left( \begin{matrix}
1 & x^T & \yy^T & \zz^T\\
x & xx^T & x\yy^T & x\zz^T\\
\yy & \yyx^T & \yy\yy^T & \yy\zz^T\\
\zz & \zzx^T & \zz\yy^T & \zz\zz^T
\end{matrix}
\right )
\right > ,
h_2 = \left<
\hat{M},
\left( \begin{matrix}
1 & x^T & \yy^T & \zz^T\\
x & xx^T & x\yy^T & x\zz^T\\
\yy & \yyx^T & \yy\yy^T & \yy\zz^T\\
\zz & \zzx^T & \zz\yy^T & \zz\zz^T
\end{matrix}
\right )
\right >,
\end{align*}
\end{comment}
\begin{align*}
h_1 = \langle M, W\rangle , ~~ h_2 = \langle \hat{M}, W\rangle,
\end{align*}
where $M=(M_{ij})_{4\times4}$ and $\hat{M}=(\hat{M}_{ij})_{4\times4}$ are symmetric block matrices
\begin{comment}
\begin{align*}
&M=\left( \begin{matrix}
M_{11} & M_{12} & M_{13} & M_{14}\\
M_{21} & M_{22} & M_{23} & M_{24}\\
M_{31} & M_{32} & M_{33} & M_{34}\\
M_{41} & M_{42} & M_{43} & M_{44}
\end{matrix} \right),~~~~~~~
\hat{M} = \left( \begin{matrix}
\hat{M}_{11} & \hat{M}_{12} & \hat{M}_{13} & \hat{M}_{14}\\
\hat{M}_{21} & \hat{M}_{22} & \hat{M}_{23} & \hat{M}_{24}\\
\hat{M}_{31} & \hat{M}_{32} & \hat{M}_{33} & \hat{M}_{34}\\
\hat{M}_{41} & \hat{M}_{42} & \hat{M}_{43} & \hat{M}_{44}
\end{matrix} \right)
\end{align*}
\end{comment}
with appropriate dimensions, for example $M_{12} \in \mathbb{R}^{1 \times d}$ and
$\hat{M}_{34} \in \mathbb{R}^{u \times v}$.
Then, with $\mathbf{NSOSC}$, by Theorem~\ref{the:int}, Problem 1 is reduced to
the following $\mathbf{SDP}$ feasibility problem.
\textbf{Find:}
\begin{center}
$\lambda_1,\ldots, \lambda_r,\eta_0,\ldots,\eta_s \in \mathbb{R}$ and real symmetric matrices $M, \hat{M} \in \mathbb{R}^{(1+d+u+v)\times (1+d+u+v)}$
\end{center}
subject to
\begin{eqnarray*}
\left\{ ~ \begin{array}{l}
\sum_{i=1}^{r} \lambda_i P_i +\sum_{j=1}^{s} \eta_j Q_j + \eta_0 E_{1,1} + M+\hat{M} = 0, ~~ \sum_{j=0}^{s} \eta_j=1,\\[1mm]
M_{41}=(M_{14})^T=0,M_{42}=(M_{24})^T=0,M_{43}=(M_{34})^T=0,M_{44}=0,\\[1mm]
\hat{M}_{31}=(\hat{M}_{13})^T=0,\hat{M}_{32}=(\hat{M}_{23})^T=0,\hat{M}_{33}=0,\hat{M}_{34}=(\hat{M}_{43})^T=0,\\[1mm]
M\succeq 0, \hat{M}\succeq 0,\lambda_i \ge0, \eta_j \ge 0, \mbox{ for }
i=1,\ldots,r,j=0,\ldots,s,
\end{array}
\right.
\end{eqnarray*}
where $E_{1,1}$ is a $(1+d+u+v) \times (1+d+u+v)$ matrix, whose $(1,1)$ entry is $1$ and the others are $0$.
This is a standard $\mathbf{SDP}$ feasibility problem, which can be
solved efficiently by well known $\mathbf{SDP}$ solvers, e.g., CSDP
\cite{CSDP},
SDPT3
\cite{SDPT3}, SeDuMi \cite{SeDuMi}, etc., with
time complexity polynomial in $n=d + u + v$.
\begin{remark} \label{remark:1}
Problem 1 is a typical quantifier elimination (QE) problem, which can be solved symbolically. However, general QE algorithms have high complexity and can hardly handle large instances. Reducing Problem 1 to an $\mathbf{SDP}$ problem therefore makes it possible to solve many large problems in practice. Nevertheless, one may doubt whether numerical results can be used in verification. We believe that verification must be rigorous, so numerical results should be verified first. For example, after solving the above $\mathbf{SDP}$ problem numerically, we verify whether $-(\sum_{i=1}^{r} \lambda_i f_i +\sum_{j=1}^{s} \eta_j g_j + \eta_0)$ is an SOS by the method of Lemma \ref{lem:split}, which is easy to do. If it is, the result is guaranteed and output; if not, the result is reported as unknown (in fact, some other techniques can be employed in this case, which we do not discuss in this paper). Thus, our algorithm is sound but not complete.
\end{remark} | 2,385 | 68,445 | en |
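To make the reduction tangible, the following plain-Python snippet checks a hand-made certificate for a one-variable toy instance: $\phi$ is $f_1 = x-1 \ge 0$, $\psi$ is $f_2 = -x \ge 0$, and there are no strict constraints, so $\eta_0 = 1$. The instance, the multipliers and the snippet are ours for illustration only; for real problems the multipliers would come from an $\mathbf{SDP}$ solver such as CSDP, SDPT3 or SeDuMi, as discussed above.

```python
# Toy check of a certificate in the style of Theorem the:int (illustration only).
# phi: f1 = x - 1 >= 0,  psi: f2 = -x >= 0,  no strict constraints => eta0 = 1.
f1 = lambda x: x - 1.0
f2 = lambda x: -x
lam1, lam2, eta0 = 1.0, 1.0, 1.0             # hand-made certificate multipliers
I  = lambda x: lam1 * f1(x) + eta0           # interpolant candidate, I(x) = x

for x in [i / 10.0 for i in range(-50, 51)]:
    # identity  lam1*f1 + lam2*f2 + eta0 + h == 0  holds with h = h1 = h2 = 0
    assert abs(lam1 * f1(x) + lam2 * f2(x) + eta0) < 1e-9
    if f1(x) >= 0:  assert I(x) > 0          # phi |= I > 0
    if f2(x) >= 0:  assert not (I(x) > 0)    # psi /\ (I > 0) |= bottom
print("certificate verified: I(x) = x > 0 is an interpolant")
```

Here $h = h_1 = h_2 = 0$, and the recovered interpolant is $I = \lambda_1 f_1 + \eta_0 = x$; indeed $x > 0$ separates $\phi$ from $\psi$.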
train | 0.4991.10 | \subsection{General Case}
The case
$\textit{Var}(\phi) \subseteq \textit{Var}(\psi)$ is
not an issue, since $\phi$ itself serves as an interpolant of $\phi$
and $\psi$.
We thus assume
that $\textit{Var}(\phi) \nsubseteq \textit{Var}(\psi)$.
We show below how an interpolant can be generated in the general case.
If $\phi$ and $\psi$ do not satisfy the $\mathbf{NSOSC}$
condition, i.e., a nonzero SOS polynomial $h(x, \yy, \zz)$ can be obtained from the nonstrict
inequalities $f_i$s using nonpositive constant multipliers, then,
by the lemma below, we can construct ``simpler'' interpolation
subproblems $\phi', \psi'$
from $\phi$ and $\psi$ by constructing from $h$ a polynomial
$f(x)$ such that $\phi \models f(x) \ge 0$ as well as $\psi
\models - f(x) \ge 0$.
Each $\phi'$, $\psi'$ pair has the following characteristics,
because of which the algorithm
can be recursively applied to $\phi'$ and $\psi'$.
\begin{enumerate}
\item[(i)] $\phi' \wedge \psi' \models \bot$,
\item[(ii)] $\phi',\psi'$ have the same form as $\phi,\psi$, i.e., $\phi'$ and $\psi'$
are defined by some $f_i' \ge 0$ and $g_j'>0$, where $f_i'$ and $g_j'$ are CQ,
\item[(iii)] $\#(\textit{Var}(\phi') \cup \textit{Var}(\psi')) <
\#(\textit{Var}(\phi) \cup \textit{Var}(\psi))$ to ensure
termination of the recursive algorithm, and
\item[(iv)] an interpolant
$I$ for $\phi$ and $\psi$ can be computed
from an interpolant $I'$ for $\phi'$ and $\psi'$ using $f$.
\end{enumerate}
\oomit{Now, suppose $\textit{Var}(\phi) \nsubseteq \textit{Var}(\psi)$ and the condition $\mathbf{NSOSC}$ does not hold,
i.e., there exist
$\delta_1,\ldots,\delta_r \ge 0$ such that
$(-\sum_{i=1}^r \delta_i f_i)$ is a nonzero SOS polynomial, we construct such
$\phi'$ and $\psi'$ satisfy $(i)-(iv)$ below. }
\begin{lemma} \label{lemma:dec}
If Problem 1 does not satisfy the $\mathbf{NSOSC}$ condition, then there exists $f \in \mathbb{R}[x]$
such that
$\phi \Leftrightarrow \phi_1 \vee \phi_2$ and
$\psi \Leftrightarrow \psi_1 \vee \psi_2$,
where,
\begin{align}
&\phi_1= (f > 0 \wedge \phi) ,~~
\phi_2 = (f = 0 \wedge \phi),\label{phi2}\\
&\psi_1 = (-f > 0 \wedge \psi),~~
\psi_2 = (f = 0 \wedge \psi).\label{psi2}
\end{align}
\end{lemma}
\begin{proof}
Since $\mathbf{NSOSC}$ does not hold, there exist $\delta_1,\ldots, \delta_r \ge 0$ such that $-\sum_{i=1}^r \delta_i f_i$ is a nonzero SOS. Let $h(x,\yy,\zz)$ denote this quadratic SOS polynomial.
Since $(-\sum_{i=1}^{r_1} \delta_i f_i) \in \mathbb{R}[x,\yy]$
and $(-\sum_{i=r_1+1}^r \delta_i f_i) \in \mathbb{R}[x,\zz]$, the coefficient of any term
$\yy_i \zz_j, 1\le i \le u,1 \le j \le v,$ is 0 after expanding
$h$. By Lemma \ref{h:sep}
there exist two quadratic SOS polynomials $h_1 \in \mathbb{R}[x,\yy]$ and
$h_2 \in \mathbb{R}[x,\zz]$ such that $h=h_1+h_2$
with the following form:
{\small \begin{align*}
\mathrm{(H1)}: & ~ h_1(x,\yy)=a_1 (y_1 -l_1(x,y_2,\ldots,y_u))^2 + \ldots + a_u (y_u -l_u(x))^2+\\
&\frac{a_{u+v+1}}{2}(x_1 - l_{u+v+1}(x_2,\ldots,x_d))^2 + \ldots + \frac{a_{u+v+d}}{2} (x_d - l_{u+v+d})^2 +\frac{a_{u+v+d+1}}{2}, \\[3mm]
\mathrm{(H2)}:& ~ h_2(x,\zz)=a_{u+1} (z_1 -l_{u+1}(x,z_2,\ldots,z_v))^2 + \ldots + a_{u+v} (z_v -l_{u+v}(x))^2+\\
&\frac{a_{u+v+1}}{2}(x_1 - l_{u+v+1}(x_2,\ldots,x_d))^2 + \ldots + \frac{a_{u+v+d}}{2} (x_d - l_{u+v+d})^2+\frac{a_{u+v+d+1}}{2}.
\end{align*} }
Let
\begin{align} \label{f:form}
f= \sum_{i=1}^{r_1} \delta_i f_i + h_1=-\sum_{i=r_1+1}^r \delta_i f_i-h_2.
\end{align}
Obviously, $f \in \mathbb{R}[x,\yy]$ and $f \in \mathbb{R}[x,\zz]$,
which implies $f \in \mathbb{R}[x]$.
Since $h_1, h_2$ are SOS, it is easy to see that
$\phi \models f(x) \ge 0, ~~ \psi \models -f(x) \ge 0$.
Thus,
$\phi \Leftrightarrow \phi_1 \vee \phi_2$,
$\psi \Leftrightarrow \psi_1 \vee \psi_2$.
\qed
\end{proof}
Using the above lemma, an interpolant $I$ for $\phi$ and $\psi$
can be constructed from an interpolant $I_{2,2}$ for $\phi_2$ and
$\psi_2$.
\begin{theorem} \label{lemma:p22}
Let $\phi$, $\psi$, $\phi_1, \phi_2, \psi_1, \psi_2$ be as defined in
Lemma \ref{lemma:dec}. Then, given an interpolant $I_{2,2}$ for $\phi_2$ and $\psi_2$,
$I:= (f >0 ) \vee (f \ge0 \wedge I_{2,2})$
is an interpolant for $\phi$ and $\psi$.
\end{theorem}
\begin{proof}
It is easy to see that $f > 0$ is an interpolant for both
$(\phi_1, \psi_1)$ and $(\phi_1, \psi_2)$, and
$f \ge 0$ is an interpolant for $(\phi_2, \psi_1)$.
Thus, if $I_{2,2}$ is an interpolant for $(\phi_2,\psi_2)$, then $I$
is an interpolant for $\phi$ and $\psi$. \qed
\end{proof}
An interpolant for $\phi_2$ and $\psi_2$ is constructed
recursively, since the new constraint included in $\phi_2$
(and similarly in $\psi_2$) is $\sum_{i=1}^{r_1} \delta_i f_i + h_1=0$ with
$h_1$ being an SOS.
Let $\phi'$ and $\psi'$ stand for the formulas constructed
after analyzing $\phi_2$ and $\psi_2$, respectively.
Given that $\delta_i \ge 0$ as well as $f_i \ge
0$ for each $i$, a case analysis is performed on $h_1$, depending upon
whether its constant term
$a_{u+v+d+1}$ is positive or not.
\begin{theorem} \label{the:gcase:1}
Let $\phi'\hat{=} (0>0)$ and $\psi'\hat{=} (0>0)$. In the proof of Lemma \ref{lemma:dec}, if $a_{u+v+d+1} > 0$, then
$\phi'$ and $\psi'$ satisfy $(i)-(iv)$.
\end{theorem}
\begin{proof}
$(i),(ii)$ and $(iii)$ are trivially satisfied.
Since $a_{u+v+d+1} > 0$, it is easy to see that
$h_1 >0$ and $h_2 >0$.
From (\ref{phi2}), (\ref{psi2}) and (\ref{f:form}), we have
$\phi_2 \models h_1 = 0$, and $\psi_2 \models h_2=0$.
Thus $\phi_2 \Leftrightarrow \phi' \Leftrightarrow\bot$ and
$\psi_2 \Leftrightarrow \psi' \Leftrightarrow\bot$.
\qed
\end{proof} | 2,415 | 68,445 | en |
train | 0.4991.11 | Using the above lemma, an interpolant $I$ for $\phi$ and $\psi$
can be constructed from an interpolant $I_{2,2}$ for $\phi_2$ and
$\psi_2$.
\begin{theorem} \label{lemma:p22}
Let $\phi$, $\psi$, $\phi_1, \phi_2, \psi_1, \psi_2$ as defined in
Lemma \ref{lemma:dec}, then given an interpolant $I_{2,2}$ for $\phi_2$ and $\psi_2$,
$I:= (f >0 ) \vee (f \ge0 \wedge I_{2,2})$
is an interpolant for $\phi$ and $\psi$.
\end{theorem}
\begin{proof}
It is easy to see that $f > 0$ is an interpolant for both
$(\phi_1, \psi_1)$ and $(\phi_1, \psi_2)$, and
$f \ge 0$ is an interpolant for $(\phi_2, \psi_1)$.
Thus, if $I_{2,2}$ is an interpolant for $(\phi_2,\psi_2)$, then $I$
is an interpolant for $\phi$ and $\psi$. \qed
\end{proof}
An interpolant for $\phi_2$ and $\psi_2$ is constructed
recursively since the new constraint included in $\phi_2$
(similarly, as well
as in $\psi_2$) is: $\sum_{i=1}^{r_1} \delta_i f_i + h_1=0$ with
$h_1$ being an SOS.
Let $\phi'$ and $\psi'$ stand for the formulas constructed
after analyzing $\phi_2$ and $\psi_2$ respectively.
Given that $\delta_i$ as well as $f_i \ge
0$ for each $i$, case analysis is performed on $h_1$ depending upon
whether it has a positive constant
$a_{u+v+d+1} > 0$ or not.
\begin{theorem} \label{the:gcase:1}
Let $\phi'\hat{=} (0>0)$ and $\psi'\hat{=} (0>0)$. In the proof of Lemma \ref{lemma:dec}, if $a_{u+v+d+1} > 0$, then
$\phi'$ and $\psi'$ satisfy $(i)-(iv)$.
\end{theorem}
\begin{proof}
$(i),(ii)$ and $(iii)$ are trivially satisfied.
Since $a_{u+v+d+1} > 0$, it is easy to see that
$h_1 >0$ and $h_2 >0$.
From (\ref{phi2}), (\ref{psi2}) and (\ref{f:form}), we have
$\phi_2 \models h_1 = 0$, and $\psi_2 \models h_2=0$.
Thus $\phi_2 \Leftrightarrow \phi' \Leftrightarrow\bot$ and
$\psi_2 \Leftrightarrow \psi' \Leftrightarrow\bot$.
\qed
\end{proof}
\oomit{For the case $a_{u+v+d+1} > 0$, we construct $\phi'$ and $\psi'$ in Theorem \ref{the:gcase:1},
then we construct $\phi'$ and $\psi'$ on the case $a_{u+v+d+1} = 0$ below.}
In case
$a_{u+v+d+1} = 0$, since $h_1$ is an SOS of
the form $\mathrm{(H1)}$ and $\phi_2 \models h_1 = 0$, each square term of
$h_1$ with a nonzero coefficient must vanish on $\phi_2$. This implies that some of the variables in $x, \yy$ can be
linearly expressed in terms of the other variables; the same argument applies
to $h_2$ as well. In particular, at least
one variable is eliminated in both $\phi_2$ and $\psi_2$,
reducing the number of variables appearing in $\phi$ and
$\psi$, which ensures the termination of the algorithm. A
detailed analysis is given in the following lemmas, which show how
this elimination of variables is performed,
generating $\phi'$ and $\psi'$ on which the algorithm can be
recursively invoked; a theorem is also proved to ensure this.
\begin{lemma} \label{lemma:elim}
In the proof of Lemma \ref{lemma:dec}, if $a_{u+v+d+1} = 0$, then $x$ can be
represented as $(x^1, x^2)$, $\yy$ as $(\yy^1, \yy^2)$ and $\zz$ as $(\zz^1, \zz^2)$,
such that
\begin{align*}
&\phi_2 \models ( (\yy^1 = \Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) ), \\
&\psi_2 \models ((\zz^1 = \Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) ),
\end{align*}
and $\#\textit{Var}(x^1)+\#\textit{Var}(\yy^1)+\#\textit{Var}(\zz^1) > 0$,
for some matrices $\Lambda_1,\Lambda_2,\Lambda_3$ and vectors $\gamma_1,\gamma_2,\gamma_3$.
\end{lemma}
\begin{proof}
From (\ref{phi2}), (\ref{psi2}) and (\ref{f:form}) we have
\begin{align} \label{phi-psi:2}
\phi_2 \models h_1 = 0, ~~~~ \psi_2 \models h_2=0.
\end{align}
Since $h_1+h_2 =h$ is a nonzero polynomial and $a_{u+v+d+1} = 0$,
there exists some $a_i \neq 0$, i.e. $a_i > 0$, with $1\le i \le u+v+d$.
Let
\begin{align*}
&N_1:=\{i \mid a_i > 0 \wedge 1 \le i \le u \}, \\
&N_2:=\{i \mid a_{u+i} > 0 \wedge 1 \le i \le v \}, \\
&N_3:=\{i \mid a_{u+v+i} > 0 \wedge 1 \le i \le d \}.
\end{align*}
Thus, $N_1$, $N_2$ and $N_3$ cannot all be empty. In addition, $h_1=0 $ implies that
\begin{align*}
&y_i=l_i(x,y_{i+1},\ldots,y_u), ~~~~for ~ i \in N_1,\\
&x_i=l_{u+v+i}(x_{i+1},\ldots,x_d), ~for ~ i \in N_3.
\end{align*}
Also, $h_2=0 $ implies that
\begin{align*}
&z_i=l_{u+i}(x,z_{i+1},\ldots,z_v), ~for ~ i \in N_2,\\
&x_i=l_{u+v+i}(x_{i+1},\ldots,x_d), ~for ~ i \in N_3.
\end{align*}
Now, let \oomit{Let divide each of $\yy,\zz,x$ into two parts by $N_1,N_2,N_3$}
\begin{align*}
& \yy^1 = (y_{i_1},\ldots, y_{i_{|N_1|}}),
\yy^2= (y_{j_1}, \ldots, y_{j_{u-|N_1|}}), \\
& \quad \quad \mbox{ where }
\{i_1,\ldots, i_{|N_1|}\} =N_1, \{j_1,\ldots,j_{u-|N_1|}\} = \{1,\ldots, u\} -N_1,\\
& \zz^1 = (z_{i_1},\ldots, z_{i_{|N_2|}}),
\zz^2= (z_{j_1}, \ldots, z_{j_{v-|N_2|}}), \\
& \quad \quad \mbox{ where }
\{i_1,\ldots, i_{|N_2|}\} =N_2, \{j_1,\ldots,j_{v-|N_2|}\} = \{1,\ldots, v\} -N_2,\\
& x^1 = (x_{i_1},\ldots, x_{i_{|N_3|}}),
x^2= (x_{j_1}, \ldots, x_{j_{d-|N_3|}}), \\
& \quad \quad \mbox{ where }
\{i_1,\ldots, i_{|N_3|}\} =N_3, \{j_1,\ldots,j_{d-|N_3|}\} = \{1,\ldots, d\} -N_3.
\end{align*}
Clearly, $\#\textit{Var}(x^1)+\#\textit{Var}(\yy^1)+\#\textit{Var}(\zz^1) > 0$.
By linear algebra, there exist three matrices $\Lambda_1,\Lambda_2,\Lambda_3$ and three vectors $\gamma_1,\gamma_2,\gamma_3$ s.t.
\begin{align*}
&\yy^1 = \Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1,\\
&\zz^1 = \Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2,\\
&x^1 = \Lambda_3 x^2 + \gamma_3.
\end{align*}
Since
$\phi_2 \models h_1 = 0$ and $\psi_2 \models h_2=0$,
we obtain
\begin{align*}
&\phi_2 \models ( (\yy^1 = \Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) ), \\
&\psi_2 \models ((\zz^1 = \Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) ).
\end{align*}
\qed
\end{proof}
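As a concrete illustration of the elimination, consider a toy instance of our own with $d=u=v=1$ (the constraint names $fA$--$fD$ and the substitution helpers are ours, chosen only for this sketch):

```python
# Toy run of the variable elimination of Lemma lemma:elim (illustration only).
# phi: fA = -(x-y)^2 >= 0 /\ fB = y - 1 >= 0
# psi: fC = -(x-z)^2 >= 0 /\ fD = -z    >= 0
# NSOSC fails: with delta_A = delta_C = 1, -(fA + fC) = (x-y)^2 + (x-z)^2 is a
# nonzero SOS, split as h1 = (y-x)^2 and h2 = (z-x)^2 with a_{u+v+d+1} = 0.
# h1 = 0 forces y = x (Lambda_1 = (1), gamma_1 = 0); h2 = 0 forces z = x.
fA = lambda x, y: -(x - y)**2
fB = lambda x, y: y - 1.0
fC = lambda x, z: -(x - z)**2
fD = lambda x, z: -z

subst_y = lambda x: x        # eliminated variable y^1
subst_z = lambda x: x        # eliminated variable z^1

# substituted constraints over the common variable x alone:
fB_hat = lambda x: fB(x, subst_y(x))   # == x - 1, so phi' = (x - 1 >= 0)
fD_hat = lambda x: fD(x, subst_z(x))   # == -x,    so psi' = (-x >= 0)

for x in [-2.0, 0.0, 0.5, 1.0, 3.0]:
    assert fA(x, subst_y(x)) == 0.0 and fC(x, subst_z(x)) == 0.0
    assert fB_hat(x) == x - 1.0 and fD_hat(x) == -x
print("reduced to phi' = (x-1 >= 0), psi' = (-x >= 0) over the common variable x")
```

One variable is eliminated on each side, so the recursion proceeds on a strictly smaller subproblem, exactly as required by property (iii).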
So, replacing $(x^1,\yy^1)$ in $f_i(x,\yy)$ and $g_j(x,\yy)$ by $\Lambda_3 x^2 + \gamma_3$ and
$\Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1$, respectively,
results in new polynomials $\hat{f_i}(x^2,\yy^2)$ and $\hat{g_j}(x^2,\yy^2)$, for $i=1,\ldots,r_1$, $j=1,\ldots,s_1$.
Similarly, replacing $(x^1,\zz^1)$ in $f_i(x,\zz)$ and $g_j(x,\zz)$ by $\Lambda_3 x^2 + \gamma_3$ and
$\Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2$, respectively, yields new polynomials $\hat{f_i}(x^2,\zz^2)$ and $\hat{g_j}(x^2,\zz^2)$, for $i=r_1+1,\ldots,r$, $j=s_1+1,\ldots,s$. Regarding the resulting polynomials, we have the following property.
\begin{lemma} \label{concave-hold}
Let $\xi \in \mathbb{R}^m$ and $\zeta \in \mathbb{R}^n$ be two vector variables,
$g(\xi,\zeta)= \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right) + \va^T \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right) + \alpha$ be a CQ polynomial on $(\xi,\zeta)$,
i.e. $G \preceq 0$. Replacing $\zeta$ in $g$ by $\Lambda \xi + \gamma$ yields
$\hat{g}(\xi) = g(\xi, \Lambda \xi + \gamma)$; then $\hat{g}(\xi)$ is a CQ
polynomial in $\xi$.
\end{lemma}
\begin{proof}
$G \preceq 0$ iff $- \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)$ is an SOS. Thus, there exist $l_{i,1} \in \mathbb{R}^m$,
$l_{i,2} \in \mathbb{R}^n$, for
$i=1,\ldots, s$, $s \in \mathbb{N}^{+}$ s.t.
$\left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right) = - \sum_{i=1}^s (l_{i,1}^T \xi + l_{i,2}^T \zeta)^2$.
Hence,
\begin{align*}
\left( \begin{matrix} &\xi \\
&\Lambda \xi + \gamma
\end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\Lambda \xi + \gamma
\end{matrix} \right) = - \sum_{i=1}^s (l_{i,1}^T \xi + l_{i,2}^T (\Lambda \xi + \gamma))^2\\
=- \sum_{i=1}^s ((l_{i,1}^T + l_{i,2}^T \Lambda) \xi + l_{i,2}^T \gamma)^2 \\
=- \sum_{i=1}^s ((l_{i,1}^T + l_{i,2}^T \Lambda) \xi)^2 + l(\xi),
\end{align*}
where $l(\xi)$ is a linear function in $\xi$.
Then we have
\begin{align*}
\hat{g}(\xi) = - \sum_{i=1}^s ((l_{i,1}^T + l_{i,2}^T \Lambda) \xi)^2 + l(\xi)+
\va^T \left( \begin{matrix} &\xi \\
&\Lambda \xi + \gamma
\end{matrix} \right) + \alpha.
\end{align*}
Obviously, there exist $\hat{G} \preceq 0$, $\hat{\va}$ and $\hat{\alpha}$ such that
\begin{align*}
\hat{g} = \xi^T \hat{G} \xi + \hat{\va}^T \xi + \hat{\alpha}.
\end{align*}
Therefore, $\hat{g}$ is a CQ polynomial in $\xi$. \qed
\end{proof} | 3,860 | 68,445 | en |
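A quick numerical sanity check of the lemma for $m=n=1$ (toy data of our own choosing, not from the paper): the quadratic coefficient of $\hat g$ is $\hat G = G_{11} + 2\lambda G_{12} + \lambda^2 G_{22}$, the $1\times 1$ instance of $S^T G S$ with $S = (1, \lambda)^T$, and it stays nonpositive.

```python
# Numerical illustration of the concavity-preservation lemma for m = n = 1.
# g(xi, zeta) = [xi zeta] G [xi zeta]^T + a^T [xi zeta]^T + alpha with G <= 0;
# substituting zeta = lam*xi + gamma must give a concave quadratic ghat(xi).
G = [[-1.0, 1.0], [1.0, -2.0]]    # symmetric, negative semidefinite (tr, det of -G >= 0)
a, alpha = [3.0, -1.0], 0.5
lam, gamma = 2.0, 0.25

def g(xi, zeta):
    quad = G[0][0]*xi*xi + 2*G[0][1]*xi*zeta + G[1][1]*zeta*zeta
    return quad + a[0]*xi + a[1]*zeta + alpha

# quadratic coefficient of ghat(xi) = g(xi, lam*xi + gamma):
Ghat = G[0][0] + 2*lam*G[0][1] + lam*lam*G[1][1]
assert Ghat <= 0                   # ghat is concave quadratic (CQ)

# cross-check against a central second difference of ghat (exact for quadratics)
xi, step = 0.7, 1e-3
d2 = (g(xi+step, lam*(xi+step)+gamma) - 2*g(xi, lam*xi+gamma)
      + g(xi-step, lam*(xi-step)+gamma)) / (step*step)
assert abs(d2 - 2*Ghat) < 1e-4
print("Ghat =", Ghat)
```

The check confirms on this instance what the proof shows in general: an affine substitution maps the concave quadratic part through $S^T G S$ and can only keep it negative semidefinite.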
train | 0.4991.12 | So, replacing $(x^1,\yy^1)$ in $f_i(x,\yy)$ and $g_j(x,\yy)$ by $\Lambda_3 x^2 + \gamma_3$
$\Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1$ respectively,
results in new polynomials $\hat{f_i}(x^2,\yy^2)$ and $\hat{g_j}(x^2,\yy^2)$, for $i=1,\ldots,r_1$, $j=1,\ldots,s_1$.
Similarly, replacing $(x^1,\zz^1)$ in $f_i(x,\zz)$ and $g_j(x,\zz)$ by $ \Lambda_3 x^2 + \gamma_3$ and
$\Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2$ respectively, derives new polynomials $\hat{f_i}(x^2,\zz^2)$ and $\hat{g_j}(x^2,\zz^2)$, for $i=r_1+1,\ldots,r$, $j=s_1+1,\ldots,s$. Regarding the resulted polynomials above, we have the following property.
\begin{lemma} \label{concave-hold}
Let $\xi \in \mathbb{R}^m$ and $\zeta \in \mathbb{R}^n$ be two vector variables,
$g(\xi,\zeta)= \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right) + a^T \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right) + \alpha$ be a CQ polynomial on $(\xi,\zeta)$,
i.e. $G \preceq 0$. Replacing $\zeta$ in $g$ by $\Lambda \xi + \gamma$ derives
$\hat{g}(\xi) = g(\xi, \Lambda \xi + \gamma)$, then $\hat{g}(\xi)$ is a CQ
polynomial in $\xi$.
\end{lemma}
\begin{proof}
$G \preceq 0$ iff $- \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)$ is an SOS. Thus, there exist $l_{i,1} \in \mathbb{R}^m$,
$l_{i,2} \in \mathbb{R}^n$, for
$i=1,\ldots, s$, $s \in \mathbb{N}^{+}$ s.t.
$\left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\zeta \end{matrix} \right) = - \sum_{i=1}^s (l_{i,1}^T \xi + l_{i,2}^T \zeta)^2$.
Hence,
\begin{align*}
\left( \begin{matrix} &\xi \\
&\Lambda \xi + \gamma
\end{matrix} \right)^T G \left( \begin{matrix} &\xi \\
&\Lambda \xi + \gamma
\end{matrix} \right) = - \sum_{i=1}^s (l_{i,1}^T \xi + l_{i,2}^T (\Lambda \xi + \gamma))^2\\
=- \sum_{i=1}^s ((l_{i,1}^T + l_{i,2}^T \Lambda) \xi + l_{i,2}^T \gamma)^2 \\
=- \sum_{i=1}^s ((l_{i,1}^T + l_{i,2}^T \Lambda) \xi)^2 + l(\xi),
\end{align*}
where $l(\xi)$ is a linear function in $\xi$.
Then we have
\begin{align*}
\hat{g}(\xi) = - \sum_{i=1}^s ((l_{i,1}^T + l_{i,2}^T \Lambda) \xi)^2 + l(\xi)+
a^T \left( \begin{matrix} &\xi \\
&\Lambda \xi + \gamma
\end{matrix} \right) + \alpha.
\end{align*}
Obviously, there exist $\hat{G} \preceq 0$, $\hat{a}$ and $\hat{\alpha}$ such that
\begin{align*}
\hat{g}(\xi) = \xi^T \hat{G} \xi + \hat{a}^T \xi + \hat{\alpha}.
\end{align*}
Therefore, $\hat{g}$ is a concave quadratic polynomial in $\xi$. \qed
\end{proof}
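As an illustrative numerical sanity check of this lemma (not part of the formal development; all matrices below are randomly generated data), one can verify that the quadratic part of $\hat{g}$, namely $[I;\Lambda]^T G\, [I;\Lambda]$, is again negative semidefinite:

```python
import numpy as np

# g(xi, zeta) has quadratic part [xi; zeta]^T G [xi; zeta] with G NSD.
# Substituting zeta = Lambda xi + gamma gives the quadratic part
#   Ghat = [I; Lambda]^T G [I; Lambda],
# which must again be NSD.
rng = np.random.default_rng(0)
m, n = 3, 2
Bmat = rng.standard_normal((m + n, m + n))
G = -Bmat @ Bmat.T                 # a random NSD matrix (G = -B B^T <= 0)
Lam = rng.standard_normal((n, m))  # affine map zeta = Lam xi + gamma

T = np.vstack([np.eye(m), Lam])    # stacked substitution matrix [I; Lam]
Ghat = T.T @ G @ T                 # quadratic part of ghat(xi)

assert np.all(np.linalg.eigvalsh(Ghat) <= 1e-9)  # Ghat is NSD
```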
\begin{theorem} \label{the:gcase:2}
In the proof of Lemma \ref{lemma:dec}, if $a_{u+v+d+1} = 0$, then
Lemma \ref{lemma:elim} holds. So, let $\hat{f_i}$ and $\hat{g_j}$ be as above,
and
\begin{align*}
&\phi' = \bigwedge_{i=1}^{r_1} \hat{f_i} \ge 0 \wedge
\bigwedge_{j=1}^{s_1} \hat{g_j} >0, \\
&\psi' = \bigwedge_{i=r_1+1}^{r} \hat{f_i} \ge 0 \wedge
\bigwedge_{j=s_1+1}^{s} \hat{g_j} >0.
\end{align*}
Then $\phi'$ and $\psi'$ satisfy $(i)-(iv)$.
\end{theorem}
\begin{proof}
From Lemma \ref{lemma:elim}, we have
\begin{align*}
&\phi_2 \models ( (\yy^1 = \Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) ), \\
&\psi_2 \models ((\zz^1 = \Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) ).\\
\end{align*}
Let
\begin{align*}
&\phi_2' := ( (\yy^1 = \Lambda_1 \left( \begin{matrix} x^2 \\ \yy^2 \end{matrix} \right) + \gamma_1)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) \wedge \phi ), \\
&\psi_2' := ((\zz^1 = \Lambda_2 \left( \begin{matrix} x^2 \\ \zz^2 \end{matrix} \right) + \gamma_2)\wedge (x^1 = \Lambda_3 x^2 + \gamma_3) \wedge \psi ).\\
\end{align*}
Then $\phi_2 \models \phi_2'$, $\psi_2 \models \psi_2'$ and $\phi_2'\wedge\psi_2'\models\bot$. Thus any interpolant for $\phi_2'$ and $\psi_2'$ is also an interpolant for $\phi_2$ and $\psi_2$.
By the definition of $\phi'$ and $\psi'$, it follows that $\phi' \wedge \psi' \models \bot$ iff $\phi_2^{'}\wedge\psi_2^{'}\models\bot$,
so $\phi' \wedge \psi' \models \bot$, i.e., $(i)$ holds.
Moreover, $\phi_2{'}\models \phi'$, $\psi_2{'}\models \psi'$, $\textit{Var}(\phi') \subseteq \textit{Var}(\phi_2{'})$ and $\textit{Var}(\psi') \subseteq \textit{Var}(\psi_2{'})$, so any interpolant for
$\phi'$ and $\psi'$ is also an interpolant for $\phi_2{'}$ and $\psi_2{'}$, and hence an
interpolant for $\phi_2$ and $\psi_2$. By Theorem \ref{lemma:p22}, $(iii)$ holds.
Since $\#(\textit{Var}(\phi)+\textit{Var}(\psi)) - \#(\textit{Var}(\phi')+\textit{Var}(\psi'))=\#(x^1,\yy^1,\zz^1) >0$,
$(iv)$ holds.
For $(ii)$, $\phi'$ and $\psi'$ have the same form as $\phi$ and $\psi$, i.e., $\hat{f_i}$, $i=1,\ldots,r$, and $\hat{g_j}$, $j=1,\ldots,s$,
are CQ. This follows directly from
Lemma \ref{concave-hold}.
\qed
\end{proof}
\begin{comment}
\begin{theorem} \label{the:gcase:2}
In the proof of Lemma \ref{lemma:dec}, if $a_{u+v+d+1} = 0$,
by eliminating (at least one) variables in $\phi$ and
$\psi$ in terms of other variables (Lemma \ref{lemma:elim}), resulting in
$\hat{f_i}$s and $\hat{g_j}$s defined as in Lemma
\ref{concave-hold}, mutually contradictory formulas with fewer variables
\begin{align*}
&\phi' = \bigwedge_{i=1}^{r_1} \hat{f_i} \ge 0 \wedge
\bigwedge_{j=1}^{s_1} \hat{g_j} >0, \\
&\psi' = \bigwedge_{i=r_1+1}^{r} \hat{f_i} \ge 0 \wedge
\bigwedge_{j=s_1+1}^{s} \hat{g_j} >0,
\end{align*}
are generated that satisfy $(i)-(iv)$.
\end{theorem}
\end{comment}
The following simple example illustrates how the above
construction works.
\begin{example} \label{exam1}
Let $f_1 = x_1, f_2 = x_2,f_3= -x_1^2-x_2^2 -2x_2-z^2, g_1= -x_1^2+2 x_1 - x_2^2 + 2 x_2 - y^2$. Consider the two formulas $\phi:=(f_1 \ge 0) \wedge (f_2 \ge0) \wedge (g_1 >0)$
and $\psi := (f_3 \ge 0)$; clearly $\phi \wedge \psi \models \bot$.
The condition $\mathbf{NSOSC}$ does not hold, since
\begin{align*}
-(0 f_1 + 2 f_2 + f_3) = x_1^2 +x_2^2 + z^2 {\rm ~ is ~ a ~ sum ~ of ~ squares}.
\end{align*}
Then we have $h=x_1^2 +x_2^2 + z^2$, and
\begin{align}
h_1 = \frac{1}{2}x_1^2+\frac{1}{2}x_2^2,~~ h_2 =\frac{1}{2}x_1^2+ \frac{1}{2}x_2^2 + z^2. \label{h:choose}
\end{align}
Let $f = 0 f_1 + 2 f_2 + h_1 =
\frac{1}{2}x_1^2+\frac{1}{2}x_2^2+2x_2$.
For the recursive call, we use $f = 0$ together with $x_1 = 0, x_2
= 0$ (derived from $h_1=0$) to construct
$\phi'$ from $\phi$; similarly, $\psi'$ is constructed
by setting $x_1=x_2=0,z=0$ in $\psi$, as derived from $h_2 = 0$.
\begin{align*}
\phi' = (0 \ge 0 \wedge 0 \ge 0 \wedge -y^2 > 0) \equiv \bot, ~~\psi^{'} = (0 \ge 0) \equiv \top.
\end{align*}
Thus, $I(\phi',\psi'):=(0 > 0)$ is an interpolant for $(\phi',\psi')$.
An interpolant for $\phi$ and $\psi$ is thus
$(f(x) >0 ) \vee (f(x)=0 \wedge I(\phi',\psi'))$, which is
$\frac{1}{2}x_1^2+\frac{1}{2}x_2^2+2x_2 > 0$.
\end{example}
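The claims of this example can be checked numerically with a short script (an illustrative sketch; the random sampling below corroborates, but of course does not prove, the entailments):

```python
import numpy as np

# Numerical sanity check of the example: the SOS certificate, and the two
# interpolant conditions phi |= (f > 0) and (f > 0) /\ psi |= bot.
def f1(x1, x2, y, z): return x1
def f2(x1, x2, y, z): return x2
def f3(x1, x2, y, z): return -x1**2 - x2**2 - 2*x2 - z**2
def g1(x1, x2, y, z): return -x1**2 + 2*x1 - x2**2 + 2*x2 - y**2
def I(x1, x2, y, z):  return 0.5*x1**2 + 0.5*x2**2 + 2*x2   # candidate f

ok = True
for p in np.random.default_rng(1).standard_normal((1000, 4)):
    x1, x2, y, z = p
    # SOS certificate: -(0*f1 + 2*f2 + f3) == x1^2 + x2^2 + z^2
    ok &= abs(-(2*f2(*p) + f3(*p)) - (x1**2 + x2**2 + z**2)) < 1e-9
    # phi |= (f > 0): whenever f1>=0, f2>=0, g1>0 hold, f must be positive
    if f1(*p) >= 0 and f2(*p) >= 0 and g1(*p) > 0:
        ok &= I(*p) > 0
    # (f > 0) /\ psi |= bot: no sample satisfies both f>0 and f3>=0
    ok &= not (I(*p) > 0 and f3(*p) >= 0)
assert ok
```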
\oomit{ By Theorem \ref{the:gcase:1} and Theorem \ref{the:gcase:2}, when
$\textit{Var}(\phi) \nsubseteq \textit{Var}(\psi)$ and the
condition $\mathbf{NSOSC}$ does not hold, we can solve Problem 1 in a
recursive way. From $(vi)$ we know that this recursion must
terminate at most $d+u+v$ times. If it terminates at
$\phi',\psi'$ with $\mathbf{NSOSC}$, then Problem 1 is solved by Theorem
\ref{the:int}; otherwise, it terminates at $\phi',\psi'$ with
$\textit{Var}(\phi') \subseteq \textit{Var}(\psi')$, then
$\phi'$ itself is an interpolant for $\phi'$ and $\psi'$.
} | 3,328 | 68,445 | en |
train | 0.4991.13 | \subsection{Algorithms} \label{sec:alg}
Algorithm $\mathbf{IGFCH}$ deals with the case when $\phi$ and $\psi$
satisfy the $\mathbf{NSOSC}$ condition.
\begin{algorithm}[!htb]
\label{alg:int}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\mathbf{IGFCH}$ }}
\Input{Two formulas $\phi$, $\psi$ with $\mathbf{NSOSC}$ and $\phi \wedge \psi \models \bot$, where
$\phi= f_1 \ge 0 \wedge \ldots \wedge f_{r_1} \ge 0 \wedge g_1 >0 \wedge \ldots \wedge g_{s_1} > 0 $,
$\psi= f_{r_1+1} \ge 0 \wedge \ldots \wedge f_{r} \ge 0 \wedge g_{s_1+1} >0 \wedge \ldots \wedge g_{s} > 0 $,
$f_1, \ldots, f_{r}, g_1, \ldots, g_s$ are all concave quadratic polynomials,
$f_1, \ldots, f_{r_1}, g_1, \ldots, g_{s_1} \in \mathbb{R}[x,\yy]$,
$f_{r_1+1}, \ldots, f_{r}, g_{s_1+1}, \ldots, g_{s} \in \mathbb{R}[x,\zz]$
}
\Output{A formula $I$ to be a Craig interpolant for $\phi$ and $\psi$}
\SetAlgoLined
\BlankLine
\textbf{Find} $\lambda_1,\ldots,\lambda_r \ge 0,\eta_0,\eta_1,\ldots,\eta_s \ge 0, h_1 \in \mathbb{R}[x,\yy], h_2 \in \mathbb{R}[x,\zz]$ by SDP s.t.
\begin{align*}
& \sum_{i=1}^{r} \lambda_i f_i+\sum_{j=1}^{s} \eta_j g_j + \eta_0 + h_1+h_2 \equiv 0,\\
& \eta_0+\eta_1 + \ldots + \eta_s = 1,\\
& h_1, h_2 {\rm ~ are ~ SOS~polynomials};
\end{align*}\\
\tcc{This is essentially a $\mathbf{SDP}$ problem, see Section \ref{sec:hold}}
$f:=\sum_{i=1}^{r_1} \lambda_i f_i+\sum_{j=1}^{s_1} \eta_j g_j + \eta_0 + h_1$\;
\textbf{if } $\sum_{j=0}^{s_1} \eta_j > 0$ \textbf{ then } $I:=(f>0)$;
\textbf{else} $I:=(f\ge 0)$\;
\KwRet $I$ \label{subalg:return}
\end{algorithm}
\begin{theorem}[Soundness and Completeness of $\mathbf{IGFCH}$] \label{thm:correctness-1}
$\mathbf{IGFCH}$ computes an interpolant $I$ of mutually contradictory $\phi, \psi$ with CQ
polynomial inequalities satisfying the
$\mathbf{NSOSC}$ condition.
\end{theorem}
\begin{proof}
It is guaranteed by Theorem \ref{the:int}. \qed
\end{proof}
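As an illustration of $\mathbf{IGFCH}$, consider the toy instance $\phi: 1-x^2-y^2>0$ and $\psi: x-1\ge 0$, for which the step-1 certificate can be found by hand instead of by SDP (the instance and the multipliers below are ours, chosen for illustration):

```python
import numpy as np

# IGFCH on a toy instance satisfying NSOSC:
#   phi:  g1 = 1 - x^2 - y^2 > 0   (variables x, y),
#   psi:  f1 = x - 1        >= 0   (variable  x).
# Step-1 certificate: 2*f1 + 1*g1 + 0 + h1 + h2 == 0 with the SOS
# polynomials h1 = y^2 (over phi's variables) and h2 = (x-1)^2 (over psi's),
# i.e. lam1 = 2, eta1 = 1, eta0 = 0.
ok = True
for x, y in np.random.default_rng(3).standard_normal((200, 2)):
    f1, g1 = x - 1, 1 - x*x - y*y
    h1, h2 = y*y, (x - 1)**2
    ok &= abs(2*f1 + g1 + h1 + h2) < 1e-9
    # Since eta1 > 0, the returned interpolant is I := (f > 0) with
    # f = eta1*g1 + eta0 + h1 = 1 - x^2, over the common variable x only.
    f = g1 + h1
    ok &= abs(f - (1 - x*x)) < 1e-9
    if g1 > 0:                         # phi |= I
        ok &= f > 0
    ok &= not (f > 0 and f1 >= 0)      # I /\ psi |= bot
assert ok
```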
The recursive algorithm $\mathbf{IGFQC}$ is given below. For the base
case when $\phi, \psi$ satisfy the $\mathbf{NSOSC}$ condition, it invokes $\mathbf{IGFCH}$.
\begin{algorithm}[!htb]
\label{alg:int}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\mathbf{IGFQC}$ }\label{prob:in-out}}
\Input{Two formulas $\phi$, $\psi$ with $\phi \wedge \psi \models \bot$, where
$\phi= f_1 \ge 0 \wedge \ldots \wedge f_{r_1} \ge 0 \wedge g_1 >0 \wedge \ldots \wedge g_{s_1} > 0 $,
$\psi= f_{r_1+1} \ge 0 \wedge \ldots \wedge f_{r} \ge 0 \wedge g_{s_1+1} >0 \wedge \ldots \wedge g_{s} > 0 $,
$f_1, \ldots, f_{r}, g_1, \ldots, g_s$ are all CQ polynomials,
$f_1, \ldots, f_{r_1}, g_1, \ldots, g_{s_1} \in \mathbb{R}[x,\yy]$, and
$f_{r_1+1}, \ldots, f_{r}, g_{s_1+1}, \ldots, g_{s} \in \mathbb{R}[x,\zz]$
}
\Output{A formula $I$ to be a Craig interpolant for $\phi$ and $\psi$}
\SetAlgoLined
\BlankLine
\textbf{if} $\textit{Var}(\phi)\subseteq \textit{Var}(\psi)$ \textbf{then} $I:=\phi$; \KwRet $I$\; \label{alg2:1}
\textbf{Find} $\delta_1,\ldots,\delta_r \ge 0, h \in \mathbb{R}[x,\yy,\zz]$ by SDP s.t. $\sum_{i=1}^r \delta_i f_i +h \equiv 0$ and $h$ is
SOS; \label{alg2:2}\\
\tcc{Check the condition $\mathbf{NSOSC}$}
\textbf{if} \emph{no solution} \textbf{then} $I := \mathbf{IGFCH}(\phi, \psi)$; \label{cond:hold}
\KwRet $I$\; \label{alg2:3}
\tcc{ $\mathbf{NSOSC}$ holds}
Construct $h_1 \in \mathbb{R}[x,\yy]$ and $h_2 \in \mathbb{R}[x,\zz]$ with the forms $\mathrm{(H1)}$ and $\mathrm{(H2)}$\;
\label{alg2:4}
$f:=\sum_{i=1}^{r_1} \delta_i f_i +h_1 =-\sum_{i=r_1+1}^{r} \delta_i f_i -h_2 $\; \label{alg2:5}
Construct
$\phi'$ and $\psi'$ using Theorem \ref{the:gcase:1} and Theorem
\ref{the:gcase:2} by eliminating variables due to
$h_1 = h_2 = 0$\; \label{alg2:6}
$I' = \mathbf{IGFQC}(\phi', \psi')$\; \label{alg2:7}
$I:=(f>0) \vee (f \ge 0 \wedge I')$\; \label{alg2:8}
\KwRet $I$ \label{alg2:9}
\end{algorithm}
\begin{theorem}[Soundness and Completeness of $\mathbf{IGFQC}$] \label{thm:correctness-2}
$\mathbf{IGFQC}$ computes an interpolant $I$ of mutually contradictory $\phi, \psi$ with CQ
polynomial inequalities.
\end{theorem}
\begin{proof}
If $\textit{Var}(\phi) \subseteq \textit{Var}(\psi)$, $\mathbf{IGFQC}$ terminates at step \ref{alg2:1}, and
returns $\phi$ as an interpolant. Otherwise, there are two cases:
(i) If $\mathbf{NSOSC}$ holds, then $\mathbf{IGFQC}$ terminates at step \ref{alg2:3} and
returns an interpolant for $\phi$ and $\psi$ by calling
$\mathbf{IGFCH}$. Its soundness and completeness follows from the
previous theorem.
(ii) $\textit{Var}(\phi) \nsubseteq \textit{Var}(\psi)$ and
$\mathbf{NSOSC}$ does not hold: The proof is by induction on the number
of recursive calls to $\mathbf{IGFQC}$, with the case of 0 recursive
calls being (i) above.
In the induction step, assume that for a $k^{th}$-recursive call to
$\mathbf{IGFQC}$ gives a correct interpolant $I'$ for $\phi'$ and
$\psi'$, where $\phi'$ and $\psi'$ are constructed by Theorem \ref{the:gcase:1} or
Theorem \ref{the:gcase:2}.
By Theorem \ref{the:gcase:2},
the interpolant $I$ constructed from $I'$ is the correct answer
for $\phi$ and $\psi$.
The recursive algorithm terminates in all three cases: (i)
$\textit{Var}(\phi) \subseteq \textit{Var}(\psi)$, (ii) $\mathbf{NSOSC}$
holds, which is achieved at most $u+v+d$ times by Theorem
\ref{the:gcase:2}, and (iii) the number of variables in $\phi',
\psi'$ in the recursive call is smaller than the number of
variables in $\phi, \psi$.
\oomit{Meanwhile, for these two basic cases,
we have already know that this algorithm return the right
answer, by inductive method, the algorithm return the right
answer with input $\phi$ and $\psi$.} \qed
\end{proof}
\subsection{Complexity analysis of $\mathbf{IGFCH}$ and $\mathbf{IGFQC}$}
It is well known that an $\mathbf{SDP}$ problem
can be solved in polynomial time. We analyze the
complexity of the above algorithms assuming that
an $\mathbf{SDP}$ problem of input size $k$ can be solved in time $g(k)$.
\begin{theorem} \label{thm:complexity-1}
The complexity of $\mathbf{IGFCH}$ is $\mathcal{O}(g(r+s+n^2))$, where $r$
is the number of nonstrict inequalities $f_i$s and $s$ is the
number of strict inequalities $g_j$s, and $n$
is the number of variables in $f_i$s and $g_j$s.
\end{theorem}
\begin{proof}
In this algorithm, we first need to solve the constraint problem in step $1$ (see Section
\ref{sec:hold}), which is an $\mathbf{SDP}$ problem of size $\mathcal{O}(r+s+n^2)$, so the complexity
of step $1$ is $\mathcal{O}(g(r+s+n^2))$. The complexity of steps
$2-4$ is clearly linear in $r+s+n^2$, so the complexity of $\mathbf{IGFCH}$ is $\mathcal{O}(g(r+s+n^2))$. \qed
\end{proof}
\begin{theorem} \label{thm:complexity-2}
The complexity of $\mathbf{IGFQC}$ is $\mathcal{O}(n* g(r+s+n^2) )$,
where $r, s, n$ are as defined in the previous theorem.
\end{theorem}
\begin{proof}
The algorithm $\mathbf{IGFQC}$ is a recursive algorithm, which is called
at most $n$ times, since in every recursive call, at least one
variable gets eliminated. Finally, it terminates at step $1$ or step
$3$ with complexity $\mathcal{O}(g(r+s+n^2))$.
The complexity of each recursive call, i.e., the complexity
for step $2$ and steps $4-9$, can be analyzed as follows:
For step $2$, checking if $\mathbf{NSOSC}$ holds is done by solving the following problem: \\
\textbf{ find: }
$\delta_1,\ldots,\delta_r \ge 0$, and an
SOS polynomial $ h \in \mathbb{R}[x,\yy,\zz]$
s.t.
$\sum_{i=1}^r \delta_i f_i +h \equiv 0$,
\noindent which is equivalent to the following linear matrix inequality ($\mathbf{LMI}$), \\
{\bf find: }
$\delta_1,\ldots,\delta_r \ge 0$, $M \in \mathbb{R}^{(n+1) \times (n+1)}$,
s.t.
$M=-\sum_{i=1}^r \delta_i P_i$, $M\succeq 0$,
where $P_i \in \mathbb{R}^{(n+1) \times (n+1)}$ is defined as (\ref{pq:def}).
Clearly, this is an $\mathbf{SDP}$ problem with size $\mathcal{O}(r+n^2)$, so the complexity of this step is $\mathcal{O}( g(r+n^2) )$.
For steps $4-9$, by the proof of Lemma \ref{h:sep}, representing $h$ in the
form $\mathrm{(H)}$ of Lemma \ref{lem:split}
can be done with complexity
$\mathcal{O}(n^2)$, and
$h_1$ and $h_2$ can be computed with complexity $\mathcal{O}(n^2)$. Thus,
the complexity of step $4$ is $\mathcal{O}(n^2)$. Step $5$ is straightforward. For step $6$,
using linear algebra operations, it is
easy to see that the complexity is $\mathcal{O}(n^2+r+s)$.
So, the complexity of steps $4$--$9$ is $\mathcal{O}(n^2+r+s)$.
In summary, the overall complexity of $\mathbf{IGFQC}$ is
\begin{eqnarray*}
\mathcal{O}(g(r+s+n^2))+n \,\big(\mathcal{O}(g(r+n^2)) + \mathcal{O}(n^2+r+s)\big)
& = & \mathcal{O}(n * g(r+s+n^2) ).
\end{eqnarray*}
\qed
\end{proof} | 3,648 | 68,445 | en |
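To make the $\mathbf{LMI}$ formulation of the $\mathbf{NSOSC}$ check in step $2$ concrete, the following script evaluates it on the data of Example \ref{exam1}: the multipliers $\delta=(0,2,1)$ yield a PSD matrix $M=-\sum_i \delta_i P_i$, witnessing that $\mathbf{NSOSC}$ fails there. In practice $\delta$ is found by an SDP solver; here it is hard-coded.

```python
import numpy as np

# Each f_i is represented by the symmetric matrix P_i with
#   f_i(w) = [1; w]^T P_i [1; w],   w = (x1, x2, z).
P1 = np.zeros((4, 4)); P1[0, 1] = P1[1, 0] = 0.5              # f1 = x1
P2 = np.zeros((4, 4)); P2[0, 2] = P2[2, 0] = 0.5              # f2 = x2
P3 = np.diag([0., -1., -1., -1.]); P3[0, 2] = P3[2, 0] = -1.  # f3

delta = np.array([0., 2., 1.])                # candidate multipliers
M = -(delta[0]*P1 + delta[1]*P2 + delta[2]*P3)

# M is PSD, so -(0*f1 + 2*f2 + f3) is an SOS and NSOSC fails here.
assert np.all(np.linalg.eigvalsh(M) >= -1e-9)

# The SOS it certifies is x1^2 + x2^2 + z^2:
for x1, x2, z in np.random.default_rng(2).standard_normal((100, 3)):
    w = np.array([1., x1, x2, z])
    assert abs(w @ M @ w - (x1**2 + x2**2 + z**2)) < 1e-9
```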
train | 0.4991.14 | \section{Combination: quadratic concave polynomial inequalities
with uninterpreted function symbols (\textit{EUF})}
This section combines the quantifier-free theory of quadratic
concave polynomial inequalities with the theory of equality over
uninterpreted function symbols (\textit{EUF}).
\oomit{Using hierarchical
reasoning framework proposed in \cite{SSLMCS2008} which was
applied in \cite{RS10} to generate interpolants for mutually
contradictory formulas in the combined quantfier-free theory of linear
inequalities over the reals and equality over uninterpreted
symbols, we show below how the algorithm $\mathbf{IGFQC}$ for quadratic concave
polynomial inequalities over the reals can be extended to
generate interpolants for mutually contradictory formulas
consisting of quadratic concave polynomials expressed using terms
built from unintepreted symbols.}
The proposed algorithm for generating interpolants for the combined
theories is presented in Algorithm~\ref{alg:euf}. As the reader would observe,
it is patterned after the algorithm $\text{INTER}_{LI(Q)^\Sigma}$ in Figure 4 in
\cite{RS10} following the hierarchical reasoning and
interpolation generation framework in \cite{SSLMCS2008} with the following key differences\footnote{The
proposed algorithm and its way of handling combined theories
do not crucially depend upon using algorithms in \cite{RS10};
however, adopting their approach makes proofs and presentation
easier by focusing totally on the quantifier-free theory of CQ polynomial
inequalities.}:
\begin{enumerate}
\item To generate interpolants for mutually contradictory
conjunctions of CQ polynomial
inequalities, we call $\mathbf{IGFQC}$.
\item We prove below that (i) a nonlinear equality over
polynomials cannot be generated from CQ
polynomials, and furthermore (ii) in the base case when the $\mathbf{NSOSC}$
condition is satisfied by CQ polynomial
inequalities, linear equalities are deduced only from the linear
inequalities in a problem (i.e., nonlinear inequalities do not play any
role); separating terms for mixed equalities are computed the
same way as in the algorithm SEP in \cite{RS10}, and (iii) as shown in Lemmas \ref{h:sep},
\ref{lem:split} and Theorem \ref{the:gcase:2}, during recursive calls to $\mathbf{IGFQC}$, additional
linear unmixed equalities are deduced which are local to either $\phi$
or $\psi$, we can use these equalities as well as those in
(ii) for the base case to reduce the number of variables
appearing in $\phi$ and $\psi$ thus reducing the complexity of
the algorithm;
{ equalities relating variables of $\phi$ are also
included in the interpolant}.
\end{enumerate}
Other than that, the proposed algorithm reduces to
$\text{INTER}_{LI(Q)^\Sigma}$ if $\phi, \psi$ are purely from $LI(Q)$ and/or
$\textit{EUF}$.
In order to get directly to the key concepts used, we assume the reader's
familiarity with the basic construction of flattening and
purification by introducing fresh variables for the arguments
containing uninterpreted functions.
\oomit{\begin{definition}
Given two formulas $\phi$ and $\psi$ with $\phi \wedge \psi \models \bot$. A formula $I$ is said to be
an interpolant for $\phi$ and $\psi$, if the following three conditions hold, $(i)$ $\phi \models I$, $(ii)$
$I \wedge \psi \models \bot$, and $(iii)$ the variables and function symbols in $I$ is in both $\phi$ and $\psi$,
i.e., $\textit{Var}(I) \subset \textit{Var}(\phi) \cap \textit{Var}(\psi) \wedge FS(I) \subset FS(\phi) \cap FS(\psi)$, where $FS(w)$ means that
the function symbols in $w$.
\end{definition} }
\subsection{Problem Formulation}
Let $\Omega = \Omega_1 \cup \Omega_2 \cup \Omega_3$ be a finite
set of uninterpreted function symbols in $\textit{EUF};$ further, denote
$\Omega_1 \cup \Omega_2$ by $\Omega_{12}$
and $\Omega_1 \cup \Omega_3$ by $\Omega_{13}$.
Let $\mathbb{R}[x,\yy,\zz]^{\Omega}$ be the extension of $\mathbb{R}[x,\yy,\zz]$
in which polynomials can have terms built using function symbols
in $\Omega$ and variables in $x, \yy, \zz$.
\begin{problem} \label{EUF-problem}
Given two formulas $\phi$ and $\psi$ with
$\phi \wedge \psi \models \bot$, where
$\phi= f_1 \ge 0 \wedge \ldots \wedge f_{r_1} \ge 0 \wedge g_1 >0
\wedge \ldots \wedge g_{s_1} > 0 $,
$\psi= f_{r_1+1} \ge 0 \wedge \ldots \wedge f_{r} \ge 0 \wedge
g_{s_1+1} >0 \wedge \ldots \wedge g_{s} > 0 $,
where $f_1, \ldots, f_{r}, g_1, \ldots, g_s$ are all CQ polynomials,
$f_1, \ldots, f_{r_1}, g_1, \ldots, g_{s_1} \in
\mathbb{R}[x,\yy]^{\Omega_{12}}$,
$f_{r_1+1}, \ldots, f_{r}, g_{s_1+1}, \ldots, g_{s} \in
\mathbb{R}[x,\zz]^{\Omega_{13}}$,
the goal is to generate an
interpolant $I$ for $\phi$ and $\psi$, expressed using the common
symbols $x, \Omega_1$, i.e., $I$ includes only polynomials in $\mathbb{R}[x]^{\Omega_1}$.
\end{problem}
{\bf Flatten and Purify:} Purify and flatten the formulas $\phi$
and $\psi$ by introducing fresh variables for the arguments of
terms with uninterpreted symbols as well as for such terms
themselves. Keep track of new variables introduced exclusively for
$\phi$ and $\psi$ as well as new common variables.
Let $\overline{\phi} \wedge \overline{\psi} \wedge \bigwedge D$
be obtained from $\phi \wedge \psi$ by flattening and purification
where $D$ consists of unit clauses of the form
$\omega(c_1,\ldots,c_n)=c$, where $c_1,\ldots,c_n$ are variables
and $\omega \in \Omega$.
Following \cite{SSLMCS2008,RS10}, using the axiom of
an uninterpreted function symbol, a set $N$ of Horn clauses are generated as follows,
$$
N=\{ \bigwedge_{k=1}^n c_k=b_k \rightarrow c=b \mid \omega(c_1,\ldots,c_n)=c \in D, \omega(b_1,\ldots,b_n)=b \in D \}.
$$
The set $N$ is partitioned into $N_{\phi}, N_{\psi}, N_{\text{mix}}$
with all symbols in $N_{\phi}, N_{\psi}$ appearing in $\overline{\phi}$, $\overline{\psi}$,
respectively, and $N_{\text{mix}}$ consisting of symbols from both $\overline{\phi}, \overline{\psi}$.
It is easy to see that for every Horn clause in $N_{\text{mix}}$, each of
the equalities in the hypothesis as well as the conclusion is mixed.
\begin{eqnarray} \label{eq:reducedP}
\phi \wedge \psi \models \bot \mbox{ iff } \overline{\phi} \wedge \overline{\psi} \wedge D \models \bot
\mbox{ iff } (\overline{\phi}\wedge N_{\phi}) \wedge (\overline{\psi} \wedge N_{\psi}) \wedge N_{\text{mix}} \models \bot.
\end{eqnarray}
Notice that $ \overline{\phi} \wedge \overline{\psi} \wedge N
\models \bot$ has no uninterpreted function symbols. An interpolant generated
for this problem\footnote{after properly handling
$N_{\text{mix}}$ since Horn clauses have symbols both from
$\overline{\phi}$ and $\overline{\psi}$.} can be used to
generate an interpolant for $\phi, \psi$ after uniformly
replacing all new symbols by their corresponding expressions from $D$. | 2,129 | 68,445 | en |
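The generation of $N$ from $D$ is straightforward to mechanize. The following sketch (with hypothetical data; each entry of $D$ encodes $\omega(c_1,\ldots,c_n)=c$ as a triple) pairs up definitions sharing the same function symbol:

```python
from itertools import combinations

# Definitions D produced by flattening/purification: (symbol, args, result),
# encoding omega(c1, ..., cn) = c.  The data below are illustrative.
D = [("w", ("c1", "c2"), "c"),
     ("w", ("b1", "b2"), "b"),
     ("v", ("c1",), "d")]

# N: one functionality-axiom instance per pair of definitions with the same
# function symbol:  /\_k c_k = b_k  ->  c = b.
N = []
for (s1, a1, r1), (s2, a2, r2) in combinations(D, 2):
    if s1 == s2 and len(a1) == len(a2):
        hyp = list(zip(a1, a2))
        N.append((hyp, (r1, r2)))

# Only the two w-entries share a symbol, so N has a single Horn clause:
assert N == [([("c1", "b1"), ("c2", "b2")], ("c", "b"))]
```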
train | 0.4991.15 | \subsection{Combination algorithm}
If $N_{\text{mix}}$ is empty, implying there are
no mixed Horn clauses, then the algorithm invokes $\mathbf{IGFQC}$ on a
finite set of subproblems generated from a disjunction of
conjunction of polynomial inequalities obtained after expanding Horn
clauses in $N_{\phi}$ and $N_\psi$ and applying De Morgan's
rules. The resulting interpolant is a disjunction of the
interpolants generated for each subproblem.
The case when $N_{\text{mix}}$ is nonempty is more interesting, but it has the same structure as
the algorithm $\text{INTER}_{LI(Q)^\Sigma}$ in \cite{RS10} except that
instead of $\text{INTER}_{LI(Q)}$, it calls $\mathbf{IGFQC}$.
The following lemma proves that if a conjunction of polynomial
inequalities satisfies the $\mathbf{NSOSC}$ condition and an
equality on variables can be deduced from it, then it suffices to
consider only linear inequalities in the conjunction. This
property enables us to use algorithms used in \cite{RS10} to
generate such equalities as well as separating terms for the
constants appearing in mixed equalities (algorithm SEP in
\cite{RS10}).
\begin{lemma} \label{lemma:qc2lin}
Let $f_i$, $i=1,\ldots,r$, be CQ polynomials and $\lambda_i \ge 0$. If
$\sum_{i=1}^{r} \lambda_i f_i \equiv 0$,
then for any $1\le i \le r$,
$\lambda_i=0$ or $f_i$ is linear.
\end{lemma}
\begin{proof}
Let $f_i = x^T A_i x + l_i^T x + \gamma_i$; then $A_i \preceq 0$, for $i=1, \ldots, r$.
Since $\sum_{i=1}^{r} \lambda_i f_i \equiv 0$, we have $\sum_{i=1}^{r} \lambda_i A_i = 0$. As each $\lambda_i A_i \preceq 0$, every $\lambda_i A_i = 0$.
Thus for any $1\le i \le r$, $\lambda_i=0$ or $A_i=0$, i.e., $f_i$ is linear. \qed
\end{proof}
\begin{lemma} \label{lem:linear-part}
Let $\overline{\phi}$ and $\overline{\psi}$ be obtained as above, satisfying $\mathbf{NSOSC}$. If $\overline{\phi} \wedge \overline{\psi}$ is satisfiable and $\overline{\phi} \wedge \overline{\psi} \models c_k=b_k$, then
$LP(\overline{\phi}) \wedge LP(\overline{\psi}) \models c_k=b_k$,
where $LP(\overline{\phi})$ ($LP(\overline{\psi})$) is a formula defined by all the linear constraints in
$\overline{\phi}$ ($\overline{\psi}$).
\end{lemma}
\begin{proof}
Since $\overline{\phi} \wedge \overline{\psi} \models c_k=b_k$, we have $\overline{\phi} \wedge \overline{\psi} \wedge c_k>b_k \models \bot$. By Theorem \ref{the:int},
there exist $\lambda_i\ge 0$ ($i=1,\cdots,r$), $\eta_j \ge 0$ ($j=0,1,\cdots,s$), $\eta \ge 0$ and two quadratic SOS polynomials $\overline{h}_1 $ and $\overline{h}_2 $ such that
\begin{align}
& \sum_{i=1}^{r} \lambda_i \overline{f}_i +\sum_{j=1}^{s} \eta_j \overline{g}_j+\eta (c_k-b_k) + \eta_0 + \overline{h}_1+\overline{h}_2 \equiv 0, \label{cond:sep:1}\\
& \eta_0+\eta_1 + \ldots + \eta_s +\eta= 1.\label{cond:sep:2}
\end{align}
As $\overline{\phi} \wedge \overline{\psi}$ is satisfiable and $\overline{\phi} \wedge \overline{\psi} \models c_k=b_k$, there exist $x_0,\yy_0,\zz_0,\aa_0,\bb_0,\cc_0$ s.t.
$\overline{\phi}[x/x_0, \yy/\yy_0,\aa/\aa_0,\cc/\cc_0]$, $\overline{\psi}[x/x_0, \zz/\zz_0,\bb/\bb_0,\cc/\cc_0]$,
and $c_k=b_k[\aa/\aa_0,\bb/\bb_0,\cc/\cc_0]$. Thus, it follows that
$\eta_0=\eta_1=\ldots=\eta_s=0$ from (\ref{cond:sep:1}) and $\eta=1$ from (\ref{cond:sep:2}).
Hence, (\ref{cond:sep:1}) is equivalent to
\begin{align} \label{amb}
\sum_{i=1}^{r} \lambda_i \overline{f}_i + (c_k-b_k) + \overline{h}_1+\overline{h}_2 \equiv 0.
\end{align}
Similarly, we can prove that there exist $\lambda_i'\ge 0$ ($i=1,\cdots,r$) and two quadratic SOS polynomials $\overline{h}_1'$ and $\overline{h}_2'$ such that
\begin{align} \label{bma}
\sum_{i=1}^{r} \lambda_i' \overline{f}_i + (b_k-c_k) + \overline{h}_1'+\overline{h}_2' \equiv 0.
\end{align}
From (\ref{amb}) and (\ref{bma}), it follows
\begin{align} \label{bmap}
\sum_{i=1}^{r} (\lambda_i+\lambda_i') \overline{f}_i + \overline{h}_1+\overline{h}_1'+\overline{h}_2+\overline{h}_2' \equiv 0.
\end{align}
In addition, $\mathbf{NSOSC}$ implies $\overline{h}_1\equiv \overline{h}_1' \equiv \overline{h}_2 \equiv \overline{h}_2' \equiv 0$. So
\begin{align} \label{amb1}
\sum_{i=1}^{r} \lambda_i \overline{f}_i + (c_k-b_k) \equiv 0,
\end{align}
and
\begin{align} \label{bma1}
\sum_{i=1}^{r} \lambda_i' \overline{f}_i + (b_k-c_k) \equiv 0.
\end{align}
Applying Lemma \ref{lemma:qc2lin} to (\ref{amb1}), we have that, for each $i$,
$\lambda_i=0$ or $\overline{f}_i$ is linear.
So
\begin{align*}
LP(\overline{\phi}) \wedge LP(\overline{\psi}) \models c_k \le b_k.
\end{align*}
Likewise, by applying Lemma \ref{lemma:qc2lin} to (\ref{bma1}), we have
\begin{align*}
LP(\overline{\phi}) \wedge LP(\overline{\psi}) \models c_k \ge b_k. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\qed
\end{align*}
\end{proof} | 1,910 | 68,445 | en |
train | 0.4991.16 | If $\mathbf{NSOSC}$ is not satisfied, then the recursive
call to $\mathbf{IGFQC}$ can generate linear equalities as stated
in Theorems \ref{the:gcase:1} and \ref{the:gcase:2} which can
make hypotheses in a Horn clause in $N_{\text{mix}}$ true, thus
deducing a mixed equality on symbols.
\begin{algorithm}[!htb]
\label{alg:euf}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\mathbf{IGFQC}eu$} \label{prob:in-out}}
\Input{two formulas $\overline{\phi}$, $\overline{\psi}$, which are
constructed respectively from $\phi$ and $\psi$ by flattening and purification, \\
$N_{\phi}$ : instances of functionality axioms for functions in $D_{\phi}$,\\
$N_{\psi}$ : instances of functionality axioms for functions in $D_{\psi}$,\\
where $\overline{\phi} \wedge \overline{\psi} \wedge N_{\phi} \wedge N_{\psi} \models \bot$,
}
\Output{A formula $I$ to be a Craig interpolant for $\phi$ and $\psi$.}
\SetAlgoLined
\BlankLine
Transform $\overline{\phi}\wedge N_{\phi} $ to a DNF $\vee_i \phi_i$\; \label{alg4:1}
Transform $\overline{\psi}\wedge N_{\psi} $ to a DNF $\vee_j \psi_j$\; \label{alg4:2}
\KwRet $I:= \vee_i \wedge_j \mathbf{IGFQC}(\phi_i, \psi_j)$ \label{alg4:3}
\end{algorithm}
\begin{algorithm}[!htb]
\label{alg:euf}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\mathbf{IGFQC}e$ }\label{prob:in-out}}
\Input{ $\overline{\phi}$ and $\overline{\psi}$: two formulas, which are constructed
respectively from $\phi$ and $\psi$ by flattening and purification, \\
$D$ : definitions for fresh variables introduced during flattening and purifying $\phi$ and $\psi$,\\
$N$ : instances of functionality axioms for functions in $D$,\\
where $\phi \wedge \psi \models \bot$, \\
$\overline{\phi}= f_1 \ge 0 \wedge \ldots \wedge f_{r_1} \ge 0 \wedge g_1 >0 \wedge \ldots \wedge g_{s_1} > 0 $, \\
$\overline{\psi}= f_{r_1+1} \ge 0 \wedge \ldots \wedge f_{r} \ge 0 \wedge g_{s_1+1} >0 \wedge \ldots \wedge g_{s} > 0 $,
where \\
$f_1, \ldots, f_{r}, g_1, \ldots, g_s$ are all CQ polynomials,\\
$f_1, \ldots, f_{r_1}, g_1, \ldots, g_{s_1} \in \mathbb{R}[x,\yy]$, and\\
$f_{r_1+1}, \ldots, f_{r}, g_{s_1+1}, \ldots, g_{s} \in \mathbb{R}[x,\zz]$
}
\Output{A formula $I$ to be a Craig interpolant for $\phi$ and $\psi$}
\SetAlgoLined
\BlankLine
\eIf { $\mathbf{NSOSC}$ holds }
{ $L_1:=LP(\overline{{\phi}})$; $L_2:=LP(\overline{{\psi}})$\; \label{alg3:7}
separate $N$ into $N_{\phi}$, $N_{\psi}$ and $N_{mix}$\;
$N_{\phi}, N_{\psi} := \textbf{SEPmix}(L_1, L_2, \emptyset, N_{\phi}, N_{\psi}, N_{mix})$\;
$\overline{I} := \mathbf{IGFQC}eu(\overline{\phi}, \overline{\psi}, N_{\phi}, N_{\psi})$\;
}
{ Find $\delta_1,\ldots,\delta_r \ge 0$ and an SOS
polynomial $h$ using SDP
s.t. $\sum_{i=1}^r \delta_i f_i +h \equiv 0$,\; \label{alg3:19}
Construct $h_1 \in \mathbb{R}[x,\yy]$ and $h_2 \in \mathbb{R}[x,\zz]$ with form $(H1)$ and $(H2)$\;
\label{alg3:20}
$f:=\sum_{i=1}^{r_1} \delta_i f_i +h_1 =-\sum_{i=r_1+1}^{r} \delta_i f_i -h_2 $\; \label{alg3:21}
Construct $\overline{\phi'}$ and $\overline{\psi'}$ by Theorem \ref{the:gcase:1} and Theorem
\ref{the:gcase:2} by eliminating variables due to condition
$h_1 = h_2 = 0$\; \label{alg3:22}
$I' := \mathbf{IGFQC}e(\overline{\phi'}, \overline{\psi'},D,N)$\; \label{alg3:24}
$\overline{I}:=(f>0) \vee (f \ge 0 \wedge I')$\; }
Obtain $I$ from $\overline{I}$\;
\KwRet $I$
\end{algorithm}
\begin{algorithm}[!htb]
\label{alg:euf}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\textbf{SEPmix}$ }\label{prob:in-out}}
\Input{ $L_1,L_2$: two sets of linear inequalities,\\
$W$: a set of equalities,\\
$N_{\phi}, N_{\psi}, N_{mix}$: three sets of instances of functionality axioms.
}
\Output{$N_{\phi}, N_{\psi}$: s.t. $N_{mix}$ is separated into $N_{\phi}$ or $N_{\psi}$.}
\SetAlgoLined
\BlankLine
\eIf {there exists $C = (\bigwedge_{k=1}^K c_k=b_k \rightarrow c=b) \in N_{mix}$ s.t.
$L_1 \wedge L_2 \wedge W \models \bigwedge_{k=1}^K c_k=b_k$}
{
\eIf { $c$ is $\phi$-local and $b$ is $\psi$-local}
{
for each $k \in \{ 1, \ldots, K \}$,
$t_k^{-}, t_k^{+} := \textbf{SEP}(L_1,L_2,c_k,b_k)$\;
$\alpha:=$ function symbol corresponding to $\bigwedge_{k=1}^K c_k=b_k \rightarrow c=b$\;
$t:=$ fresh variable;
$D := D \cup \{ t=\alpha(t_1^{+}, \ldots, t_K^{+}) \}$\;
$C_{\phi}:=\bigwedge_{k=1}^K c_k=t_k^{+} \rightarrow c=t$;
$C_{\psi}:=\bigwedge_{k=1}^K t_k^{+}=b_k \rightarrow t=b$\;
$N_{mix}:=N_{mix}-\{ C \}$; $N_{\phi} := N_{\phi} \cup \{ C_{\phi} \}$\;
$N_{\psi} := N_{\psi} \cup \{ C_{\psi} \}$;
$W:= W \cup \{ c=t,t=b \}$\;
}
{
\eIf {$c$ and $b$ are $\phi$-local}
{
$N_{mix} := N_{mix} -\{ C \}$; $N_{\phi} := N_{\phi} \cup \{ C \}$; $W:=W \cup \{ c = b \}$\;
}
{
$N_{mix} := N_{mix} -\{ C \}$; $N_{\psi} := N_{\psi} \cup \{ C \}$; $W:=W \cup \{ c = b \}$\;
}
}
call $\textbf{SEPmix} (L_1, L_2, W, N_{\phi}, N_{\psi}, N_{mix})$\;
}
{
\KwRet $N_{\phi}$ and $N_{\psi}$\;
}
\end{algorithm}
\begin{algorithm}[!htb]
\label{alg:euf}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\textbf{SEP}$ }\label{prob:in-out}}
\Input{ $L_1,L_2$: two sets of linear inequalities,\\
$c_k, b_k$: local variables from $L_1$ and $L_2$ respectively.
}
\Output{$t^{-}, t^{+}$: expressions over common variables of $L_1$ and $L_2$
s.t. $L_1 \models t^{-} \le c_k \le t^{+}$ and $L_2 \models t^{+} \le b_k \le t^{-}$}
\SetAlgoLined
\BlankLine
rewrite $L_1$ and $L_2$ as constraints in matrix form $a - A x \ge 0$ and $b - B x \ge 0$\;
let $x_i$ and $x_j$ in $x$ be the variables $c_k$ and $b_k$, respectively\;
$e^{+} := \nu^{+} A + \mu^{+} B$; $e^{-} := \nu^{-} A + \mu^{-} B$\;
$\nu^{+},\mu^{+} :=$ solution for $\nu^{+} \ge 0 \wedge \mu^{+} \ge 0 \wedge \nu^{+} a+ \mu^{+} b \le 0 \wedge
e_i^{+}=1 \wedge e_j^{+}=-1 \wedge \bigwedge_{l \neq i,j} e_l^{+}=0$\;
$\nu^{-},\mu^{-} :=$ solution for $\nu^{-} \ge 0 \wedge \mu^{-} \ge 0 \wedge \nu^{-} a+ \mu^{-} b \le 0 \wedge
e_i^{-}=-1 \wedge e_j^{-}=1 \wedge \bigwedge_{l \neq i,j} e_l^{-}=0$\;
$t^{+} := \mu^{+}Bx + x_j - \mu^{+} b$\;
$t^{-} := \nu^{-} Ax + x_i - \nu^{-} a$\;
\KwRet $t^{+}$ and $t^{-}$\;
\end{algorithm}
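As an aside, the construction in $\textbf{SEP}$ is easy to check mechanically once the multipliers $\nu, \mu$ are known. The following stdlib-only Python sketch (a toy instance of our own with hand-chosen multipliers; an actual implementation would obtain $\nu, \mu$ from an LP solver) verifies the defining properties of $e^{+}$ and reads off $t^{+}$ for the pair $L_1:\{c \le w\}$, $L_2:\{w \le b\}$ with common variable $w$:

```python
# Toy check of the SEP construction.  Variable order: x = (c, b, w),
# where c is L1-local, b is L2-local, w is common.
# Constraints in the form  a - A x >= 0:
#   L1: w - c >= 0   ->  a = [0], A = [[ 1,  0, -1]]
#   L2: b - w >= 0   ->  b = [0], B = [[ 0, -1,  1]]
A, a_vec = [[1, 0, -1]], [0]
B, b_vec = [[0, -1, 1]], [0]

# Hand-chosen multipliers (an implementation would get them from an LP solver).
nu, mu = [1], [1]

# e+ = nu*A + mu*B must have coefficient +1 on c, -1 on b, 0 elsewhere,
# and nu*a + mu*b <= 0.
e_plus = [nu[0] * A[0][j] + mu[0] * B[0][j] for j in range(3)]
assert e_plus == [1, -1, 0]
assert nu[0] * a_vec[0] + mu[0] * b_vec[0] <= 0

# t+ := mu*B*x + x_j - mu*b, where x_j is the L2-local variable b.
t_plus = [mu[0] * B[0][j] for j in range(3)]
t_plus[1] += 1                      # add x_j (the b-slot)
const = -mu[0] * b_vec[0]
print(t_plus, const)                # -> [0, 0, 1] 0, i.e. t+ = w
# Indeed L1 |= c <= w and L2 |= w <= b, as required of a separating term.
```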
\begin{theorem} (Soundness and Completeness of $\mathbf{IGFQC}e$)
$\mathbf{IGFQC}e$ computes an interpolant $I$ of mutually contradictory $\phi, \psi$ with CQ
polynomial inequalities and \textit{EUF}.
\end{theorem}
\begin{proof}
Let $\phi$ and $\psi$ be two formulas satisfying the conditions on the input of
Algorithm $\mathbf{IGFQC}e$, let $D$ be the set of definitions of fresh variables introduced during
flattening and purifying $\phi$ and $\psi$, and let $N$ be the set of instances of the functionality
axioms for the functions in $D$.
If the condition $\mathbf{NSOSC}$ is satisfied, then by Lemma \ref{lem:linear-part}
we can handle $N$ using only the linear constraints in $\phi$ and $\psi$,
exactly as in \cite{RS10}. The set $N$ naturally splits into three parts,
$N_{\phi} \wedge N_{\psi} \wedge N_{\text{mix}}$. By the algorithm in \cite{RS10}, $N_{\text{mix}}$
can be divided into two parts $N_{\phi}^{\text{mix}}$ and $N_{\psi}^{\text{mix}}$, which are added to
$N_{\phi}$ and $N_{\psi}$, respectively. Thus, we have
\begin{eqnarray*}
\phi \wedge \psi \models \bot& ~\Leftrightarrow ~ & \overline{\phi} \wedge \overline{\psi} \wedge D \models \bot ~ \Leftrightarrow ~
\overline{\phi} \wedge \overline{\psi} \wedge N_{\phi} \wedge N_{\psi} \wedge N_{\text{mix}} \models \bot \\
& \Leftrightarrow &\overline{\phi} \wedge N_{\phi} \wedge N_{\phi}^{\text{mix}}
\wedge \overline{\psi} \wedge N_{\psi} \wedge N_{\psi}^{\text{mix}} \models \bot.
\end{eqnarray*}
The correctness of step $4$ is guaranteed by Lemma~\ref{lem:linear-part} and Theorem 8 in \cite{RS10}.
After step $4$, $N_{\phi}$ is replaced by $N_{\phi}\wedge N_{\phi}^{\text{mix}}$, and $N_{\psi}$ is
replaced by $N_{\psi} \wedge N_{\psi}^{\text{mix}}$.
An interpolant for
$\overline{\phi} \wedge N_{\phi} \wedge N_{\phi}^{\text{mix}}$
and $\overline{\psi} \wedge N_{\psi} \wedge N_{\psi}^{\text{mix}}$ is generated in step $5$; the correctness of
this step is guaranteed
by Theorem~\ref{thm:correctness-2}.
Otherwise, if the condition $\mathbf{NSOSC}$ is not satisfied, we can obtain two polynomials $h_1$ and
$h_2$ and derive two formulas
$\overline{\phi'}$ and $\overline{\psi'}$. By Theorem \ref{lemma:p22}, if there is an
interpolant $I'$ for $\overline{\phi'}$ and $\overline{\psi'}$, then we can obtain an interpolant $I$
for $\overline{\phi}$ and $\overline{\psi}$ at step $11$.
As in the proof of Theorem~\ref{thm:correctness-2},
it is easy to argue that this reduction terminates in finitely many steps, reaching the case in which $\mathbf{NSOSC}$ holds.
This completes the proof. \qed
\end{proof} | 3,853 | 68,445 | en |
train | 0.4991.17 | \begin{example}
Let two formulae $\phi$ and $\psi$ be defined as follows,
\begin{align*}
\phi:=&(f_1=-(y_1-x_1+1)^2-x_1+x_2 \ge 0) \wedge (y_2=\alpha(y_1)+1) \\
&\wedge ( g_1= -x_1^2-x_2^2-y_2^2+1 > 0),
\end{align*}
\begin{align*}
\psi:=&(f_2=-(z_1-x_2+1)^2+x_1-x_2 \ge 0) \wedge (z_2=\alpha(z_1)-1) \\
&\wedge (g_2= -x_1^2-x_2^2-z_2^2+1 > 0),
\end{align*}
where $\alpha$ is an uninterpreted function. Then
\begin{align*}
\overline{\phi}:=&(f_1=-(y_1-x_1+1)^2-x_1+x_2 \ge 0) \wedge (y_2=y+1) \\
&\wedge ( g_1= -x_1^2-x_2^2-y_2^2+1 > 0),\\
\overline{\psi}:=&(f_2=-(z_1-x_2+1)^2+x_1-x_2 \ge 0) \wedge (z_2=z-1) \\
&\wedge (g_2= -x_1^2-x_2^2-z_2^2+1 > 0),\\
D=(&y_1=z_1 \rightarrow y=z).
\end{align*}
The condition $\mathbf{NSOSC}$ is not satisfied, since
$-f_1-f_2=(y_1-x_1+1)^2+(z_1-x_2+1)^2$ is an SOS. It is easy to see that
$$h_1=(y_1-x_1+1)^2~, ~~h_2=(z_1-x_2+1)^2.$$
Let $f:=f_1+h_1=-f_2-h_2=-x_1+x_2$; then it is easy to see that
$${\phi} \models f \ge0 ~,~~{\psi} \models f \le0.$$
Next we turn to finding an interpolant for the following formulas
$$((\phi \wedge f>0) \vee (\phi \wedge f=0)) \quad \text{and} \quad ((\psi \wedge -f>0) \vee (\psi \wedge f=0)).$$
Then
\begin{align}
\label{int:eq:e}
(f>0) \vee (f\ge0 \wedge I_2)
\end{align}
is an interpolant for $\phi$ and $\psi$,
where $I_2$ is an interpolant for $ \phi \wedge f=0$ and $\psi \wedge f=0$.
It is easy to see that
\begin{align*}
\phi \wedge f=0 \models y_1=x_1-1 ~,~~ \psi \wedge f=0 \models z_1=x_2-1.
\end{align*}
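These two entailments follow because $f=0$ forces the squares in $f_1$ and $f_2$ to vanish: under $x_1=x_2$, the inequation $f_1\ge 0$ gives $(y_1-x_1+1)^2\le 0$, so $y_1=x_1-1$, and symmetrically for $z_1$. A stdlib-only Python spot-check (our own illustration, not part of the algorithm):

```python
from itertools import product

# f1 and f2 from phi and psi (only the variables they mention).
f1 = lambda x1, x2, y1: -(y1 - x1 + 1)**2 - x1 + x2
f2 = lambda x1, x2, z1: -(z1 - x2 + 1)**2 + x1 - x2

# On an integer grid: whenever f1 >= 0 and f = -x1 + x2 is 0, the square
# (y1 - x1 + 1)^2 must vanish, forcing y1 = x1 - 1; symmetrically for z1.
for x1, x2, v in product(range(-3, 4), repeat=3):
    if f1(x1, x2, v) >= 0 and -x1 + x2 == 0:
        assert v == x1 - 1          # phi and f=0 entail y1 = x1 - 1
    if f2(x1, x2, v) >= 0 and -x1 + x2 == 0:
        assert v == x2 - 1          # psi and f=0 entail z1 = x2 - 1
print("entailments spot-checked")
```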
Substituting these into $f_1$ and $f_2$ in $\overline{\phi}$ and $\overline{\psi}$, respectively, we have
\begin{align*}
\overline{\phi'}=&-x_1+x_2 \ge 0 \wedge y_2=y+1 \wedge g_1>0 \wedge y_1=x_1-1,\\
\overline{\psi'}=&~~~~x_1-x_2 \ge 0 \wedge z_2=z-1 \wedge g_2>0 \wedge z_1=x_2-1.
\end{align*}
Using only the linear constraints in $\overline{\phi'}$ and $\overline{\psi'}$, we deduce $y_1=z_1$, since
\begin{align*}
\overline{\phi'} \models t^{-}=x_1-1 \le y_1 \le t^{+}=x_2-1~~,~~\overline{\psi'} \models x_2-1 \le z_1 \le x_1-1.
\end{align*}
Let $t$ be a fresh variable with $t=\alpha(t^{+})$; then separate $y_1=z_1 \rightarrow y=z$ into two parts,
\begin{align*}
y_1=t^{+} \rightarrow y=t, ~~ t^{+}=z_1 \rightarrow t=z.
\end{align*}
Adding them to $\overline{\phi'}$ and $\overline{\psi'}$, respectively, we have
\begin{align*}
\overline{\phi'}_1=&-x_1+x_2 \ge 0 \wedge y_2=y+1 \wedge g_1>0 \wedge y_1=x_1-1 \wedge (y_1=x_2-1 \rightarrow y=t),\\
\overline{\psi'}_1=&~~~~x_1-x_2 \ge 0 \wedge z_2=z-1 \wedge g_2>0 \wedge z_1=x_2-1 \wedge (x_2-1=z_1 \rightarrow t=z).
\end{align*}
Then
\begin{align*}
\overline{\phi'}_1=&-x_1+x_2 \ge 0 \wedge y_2=y+1 \wedge g_1>0 \wedge y_1=x_1-1 \wedge \\
& (x_2-1>y_1 \vee y_1>x_2-1 \vee y=t),\\
\overline{\psi'}_1=&~~~~x_1-x_2 \ge 0 \wedge z_2=z-1 \wedge g_2>0 \wedge z_1=x_2-1 \wedge t=z.
\end{align*}
Thus,
\begin{align*}
\overline{\phi'}_1=&\overline{\phi'}_2\vee \overline{\phi'}_3 \vee \overline{\phi'}_4,\\
\overline{\phi'}_2=&-x_1+x_2 \ge 0 \wedge y_2=y+1 \wedge g_1>0 \wedge y_1=x_1-1 \wedge x_2-1>y_1,\\
\overline{\phi'}_3=&-x_1+x_2 \ge 0 \wedge y_2=y+1 \wedge g_1>0 \wedge y_1=x_1-1 \wedge y_1>x_2-1,\\
\overline{\phi'}_4=&-x_1+x_2 \ge 0 \wedge y_2=y+1 \wedge g_1>0 \wedge y_1=x_1-1 \wedge y=t.
\end{align*}
Since $\overline{\phi'}_3=\mathit{false}$ (it requires $y_1=x_1-1>x_2-1$, i.e., $x_1>x_2$, contradicting $-x_1+x_2 \ge 0$), we have
$\overline{\phi'}_1=\overline{\phi'}_2\vee \overline{\phi'}_4$.
We then compute the interpolants
$$I(\overline{\phi'}_2,\overline{\psi'}_1)~~\text{and}~~I(\overline{\phi'}_4,\overline{\psi'}_1),$$
where each equality is first replaced by two inequalities; e.g.,
$y_1=x_1-1$ is replaced by $y_1\ge x_1-1$ and $x_1-1 \ge y_1$.
Finally, letting $I_2=I(\overline{\phi'}_2,\overline{\psi'}_1) \vee I(\overline{\phi'}_4,\overline{\psi'}_1)$,
an interpolant for $\phi$ and $\psi$ is obtained from (\ref{int:eq:e}).
\end{example}
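The algebraic facts driving this example are mechanically checkable. The following stdlib-only Python script (our own illustration) spot-checks on an integer grid that $-f_1-f_2=h_1+h_2$ and that $f_1+h_1=-f_2-h_2=-x_1+x_2$:

```python
from itertools import product

# Polynomials from the example (only the variables they mention).
f1 = lambda x1, x2, y1, z1: -(y1 - x1 + 1)**2 - x1 + x2
f2 = lambda x1, x2, y1, z1: -(z1 - x2 + 1)**2 + x1 - x2
h1 = lambda x1, x2, y1, z1: (y1 - x1 + 1)**2
h2 = lambda x1, x2, y1, z1: (z1 - x2 + 1)**2

# Spot-check the identities on an integer grid; since both sides are fixed
# low-degree polynomials, a CAS could confirm the same facts symbolically.
for p in product(range(-2, 3), repeat=4):
    assert -f1(*p) - f2(*p) == h1(*p) + h2(*p)      # -f1 - f2 is the SOS h1 + h2
    assert f1(*p) + h1(*p) == -f2(*p) - h2(*p) == -p[0] + p[1]   # f = -x1 + x2
print("identities hold on the sample grid")
```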
train | 0.4991.18 | \section{Proven interpolant}
Since our results are obtained by numerical computation, there is no guarantee that a returned solution satisfies the constraints exactly.
Thus, the solution obtained from an $\mathbf{SDP}$ solver should be verified in order to obtain a proven interpolant.
At the end of Section \ref{sec:sdp}, Remark \ref{remark:1} noted that Lemma \ref{lem:split} can be used to verify the
result obtained from an $\mathbf{SDP}$ solver. In this section, we illustrate by an example how to verify such a result
and thereby obtain a proven interpolant.
\begin{example}
{ \begin{align*}
\phi :&=f_1=4-(x-1)^2-4y^2 \ge 0 \wedge f_2=y- \frac{1}{2} \ge 0, \\
\psi :&=f_3=4-(x+1)^2-4y^2 \ge 0 \wedge f_4=-x-2y \ge 0.
\end{align*} }
\end{example}
We construct the SOS constraints as follows:
\begin{align*}
&\lambda_1\ge 0, \lambda_2\ge 0, \lambda_3\ge 0, \lambda_4 \ge 0, \\
&-(\lambda_1 f_1+ \lambda_2f_2+\lambda_3 f_3+ \lambda_4f_4+1) \mbox{ is an SOS polynomial.}
\end{align*}
Using the $\mathbf{SDP}$ solver \textit{Yalmip} to solve the above constraints for $\lambda_1, \lambda_2, \lambda_3, \lambda_4$,
and rounding to two decimal places, we obtain
\begin{align*}
\lambda_1=3.63, \lambda_2=38.39, \lambda_3=0.33, \lambda_4=12.70.
\end{align*}
Then we have,
\begin{align*}
-(\lambda_1 f_1+ \lambda_2f_2+\lambda_3 f_3+ \lambda_4f_4+1)
=3.96x^2+6.10x+15.84y^2-12.99y+6.315.
\end{align*}
Using Lemma \ref{lem:split}, we have
{\small
\begin{align*}
3.96x^2+6.10x+15.84y^2-12.99y+6.315
=3.96(x+\frac{305}{396})^2+15.84(y-\frac{1299}{3168})^2+\frac{825383}{633600},
\end{align*} }
which is obviously an SOS polynomial.
Thus, $I:=\lambda_1f_1+\lambda_2f_2+1>0$, i.e.,
$-3.63x^2-14.52y^2+7.26x+38.39y-7.305>0$, is a proven interpolant for $\phi$ and $\psi$.
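The square completion above can be certified with exact rational arithmetic. The following stdlib-only Python check (our own verification script) confirms coefficient-wise that $3.96x^2+6.10x+15.84y^2-12.99y+6.315$ equals $3.96(x+\frac{305}{396})^2+15.84(y-\frac{1299}{3168})^2$ plus an exactly computed positive constant:

```python
from fractions import Fraction as F

# q(x, y) = 3.96 x^2 + 6.10 x + 15.84 y^2 - 12.99 y + 6.315, exactly:
a  = F(396, 100)     # x^2 coefficient
b_ = F(610, 100)     # x coefficient
c  = F(1584, 100)    # y^2 coefficient
d  = F(-1299, 100)   # y coefficient
e  = F(6315, 1000)   # constant

# Completed squares: a (x + 305/396)^2 + c (y - 1299/3168)^2 + r
u, v = F(305, 396), F(-1299, 3168)
assert 2 * a * u == b_              # linear x-term matches
assert 2 * c * v == d               # linear y-term matches
r = e - a * u**2 - c * v**2
assert r == F(825383, 633600)       # exact remainder constant
assert r > 0                        # so q is an SOS plus a positive constant
print("r =", r)
```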
train | 0.4991.19 | \section{Beyond concave quadratic polynomials}
Theoretically speaking, \emph{concave quadratic} is quite restrictive. In practice, however, the results obtained above are powerful enough
to scale up existing verification techniques for programs and hybrid systems, since all well-known abstract domains, e.g., \emph{octagon}, \emph{polyhedra},
and \emph{ellipsoid}, are concave quadratic, which is further demonstrated in the case study below. Nonetheless, we now discuss how to generalize our approach
to more general formulas, allowing polynomial equalities whose polynomials may be neither concave nor quadratic,
using Gr\"{o}bner bases.
Let us start the discussion with the following running example.
\begin{example} \label{nonCQ-exam}
Let $G=A \wedge B$, where
\begin{align*}
A:~&x^2+2x+(\alpha(\beta(a))+1)^2 \leq 0 \wedge \beta(a)=2c+z \wedge\\
&2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0,\\
B:~&x^2-2x+(\alpha(\gamma(b))-1)^2 \leq 0 \wedge \gamma(b)=d-z \wedge\\
&d^2+d+y^2+y+2z=0 \wedge -d^2+y+z=0,
\end{align*}
the goal is to find an interpolant for $A$ and $B$.
\end{example}
Clearly, some of these constraints are not concave quadratic, as some of the equations are not linear.
Thus, the interpolant generation algorithm above is not directly applicable.
For ease of discussion, in what follows we use $\mathbf{IEq}(S)$, $\mathbf{Eq}(S)$ and $\mathbf{LEq}(S)$ to stand for the sets of polynomials from the inequations,
equations and linear equations of $S$, respectively, for any polynomial formula $S$. E.g.,
in Example \ref{nonCQ-exam}, we have
\begin{align*}
\mathbf{IEq}(A)&= \{ x^2+2x+(\alpha(\beta(a))+1)^2 \},\\
\mathbf{Eq}(A)&= \{\beta(a)-2c-z, 2c^2+2c+y^2+z, -c^2+y+2z\},\\
\mathbf{LEq}(A)&=\{\beta(a)-2c-z \}.
\end{align*}
In the following, we use Example \ref{nonCQ-exam} as a running example to explain the basic idea of how to
apply the Gr\"{o}bner basis method to extend our approach to more general polynomial formulas.
Step $1$: Flatten and purify.
Similar to the concave quadratic case, we purify and
flatten $A$ and
$B$ by introducing fresh variables $a_1,a_2,b_1,b_2$, and obtain
\begin{align*}
A_0:~&x^2+2x+(a_2+1)^2 \leq 0 \wedge a_1=2c+z \wedge \\
& 2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0,\\
D_A:~&a_1=\beta(a) \wedge a_2=\alpha(a_1),\\
B_0:~&x^2-2x+(b_2-1)^2 \leq 0 \wedge b_1=d-z \wedge \\
& d^2+d+y^2+y+2z=0 \wedge -d^2+y+z=0,\\
D_B:~&b_1=\gamma(b) \wedge b_2=\alpha(b_1).
\end{align*}
Step $2$: { Hierarchical reasoning}.
Obviously, $A \wedge B$ is unsatisfiable in
$\PT ( \QQ )^{ \{ \alpha,\beta,\gamma\} }$
if and only if $A_0 \wedge B_0 \wedge N_0$ is unsatisfiable in $\PT ( \QQ )$,
where $N_0$
corresponds to the conjunction of Horn clauses constructed from $D_A \wedge D_B$ using
the axioms of uninterpreted functions (see the following table).
{\small \begin{center}
\begin{tabular}{c|c|c}\hline
D & $G_0$ & $N_0$ \\\hline
$D_A:~a_1=\beta(a) \wedge$ & $A_0:~x^2+2x+(a_2+1)^2 \leq 0 \wedge
a_1=2c+z \wedge$ & \\
\quad ~~~~ $a_2=\alpha(a_1)$ & $2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0$ & $N_0: b_1=a_1 \rightarrow
b_2=a_2$\\
& & \\
$D_B:~b_1=\gamma(b) \wedge$ & $B_0:~x^2-2x+(b_2-1)^2 \leq 0
\wedge b_1=d-z \wedge$ & \\
\quad ~~~~ $b_2=\alpha(b_1)$ & $d^2+d+y^2+y+2z=0 \wedge -d^2+y+z=0$ & \\
\end{tabular}
\end{center} }
To prove $A_0 \wedge B_0 \wedge N_0 \models \bot$,
we compute the Gr\"{o}bner basis $\mathbb{G}$ of $\mathbf{Eq}(A_0) \cup \mathbf{Eq}(B_0)$ under the order
$c\succ d \succ y\succ z \succ a_1 \succ b_1$, and find $a_1-b_1 \in \mathbb{G}$. That is,
$A_0 \wedge B_0 \models a_1=b_1$.
Thus, $A_0 \wedge B_0 \wedge N_0$ entails
\begin{align*}
a_2=b_2 \wedge x^2+2x+(a_2+1)^2 \leq 0 \wedge x^2-2x+(b_2-1)^2 \leq 0.
\end{align*}
This implies
$$2x^2+a_2^2+b_2^2+2\leq0,$$
which is obviously unsatisfiable in $\mathbb{Q}$.
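The final contradiction rests on a simple algebraic identity: with $a_2=b_2$, adding the two inequations makes the cross terms $+2a_2$ and $-2b_2$ cancel. A stdlib-only Python spot-check of this identity (our own illustration, not part of the method):

```python
from itertools import product

# Sum of the two inequations' left-hand sides, with a2 = b2 substituted:
lhs = lambda x, a2: (x**2 + 2*x + (a2 + 1)**2) + (x**2 - 2*x + (a2 - 1)**2)

# The cross terms +2*a2 and -2*a2 cancel, leaving 2x^2 + 2*a2^2 + 2 >= 2,
# so both inequations cannot be <= 0 at the same time.
for x, a2 in product(range(-4, 5), repeat=2):
    assert lhs(x, a2) == 2*x**2 + 2*a2**2 + 2
    assert lhs(x, a2) >= 2
print("contradiction certificate checked")
```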
Step $2$ gives a proof of $A \wedge B \models \bot$.
In order to find an interpolant for $A$ and $B$, we need to divide $N_0$ into two parts,
$A$-part and $B$-part, i.e., to find a term $t$ containing only common symbols, such that
\begin{align*}
A_0 \models a_1=t \quad \text{and} \quad B_0 \models b_1=t.
\end{align*}
Then we can choose a new variable $\alpha_t=\alpha(t)$ as a common variable, since the term $t$ and
the function $\alpha$ are both common. Thus $N_0$ can be divided into two parts as follows,
\begin{align*}
a_2=\alpha_t \wedge b_2=\alpha_t.
\end{align*}
Finally, if we can find an interpolant $I(x,y,z,\alpha_t)$ for
\begin{align*}
(\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0) \wedge a_2=\alpha_t) \wedge (\mathbf{IEq}(B_0) \wedge \mathbf{LEq}(B_0) \wedge b_2=\alpha_t),
\end{align*} using Algorithm $\mathbf{IGFQC}$,
then $I(x,y,z,\alpha(t))$ will be an interpolant for $A \wedge B$.
Step $3$: Dividing $N_0$ into two parts. According to the above analysis, we need to find a witness $t$
such that $A_0 \models a_1=t$, $B_0 \models b_1=t$, where $t$ is an expression over the common symbols of $A$ and $B$.
Fortunately, such $t$ can be computed by Gr\"{o}bner basis method as follows:
First, with the variable order $c \succ a_1 \succ y \succ z$, the Gr\"{o}bner basis $\mathbb{G}_1$ of $\mathbf{Eq}(A_0)$
is computed to be
\begin{align*}
\mathbb{G}_1=&\{ y^4+4y^3+10y^2z+4y^2+20yz+25z^2-4y-8z, \\
&y^2+a_1+2y+4z, y^2+2c+2y+5z \}.
\end{align*}
Thus, we have
\begin{align} \label{eq-a1}
A_0 \models a_1=-y^2-2y-4z.
\end{align}
Similarly, with the variable order $d \succ b_1 \succ y \succ z$, the Gr\"{o}bner basis $\mathbb{G}_2$ of $\mathbf{Eq}(B_0)$
is computed to be
\begin{align*}
\mathbb{G}_2=&\{ y^4+4y^3+6y^2z+4y^2+12yz+9z^2-y-z, \\
&y^2+b_1+2y+4z, y^2+d+2y+3z \}.
\end{align*}
Thus, we have
\begin{align} \label{eq-b1}
B_0 \models b_1=-y^2-2y-4z.
\end{align}
Hence, $t=-y^2-2y-4z$ is the desired witness.
Let $\alpha_t=\alpha(-y^2-2y-4z)$, which is an expression constructed from the common symbols of $A$ and $B$.
Next, we find an interpolant for the following formula
\begin{align*}
(\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0) \wedge a_2=\alpha_t) \wedge (\mathbf{IEq}(B_0) \wedge \mathbf{LEq}(B_0) \wedge b_2=\alpha_t).
\end{align*}
Using $\mathbf{IGFQC}$, we obtain an interpolant for the above formula as
\begin{align*}
I(x,y,z,\alpha_t)=x^2+2x+(\alpha_t+1)^2\le 0.
\end{align*}
Thus, $x^2+2x+(\alpha(-y^2-2y-4z)+1)^2\le 0$ is an interpolant for $A \wedge B$.
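The entailments (\ref{eq-a1}) and (\ref{eq-b1}) can also be certified directly, as explicit linear combinations of the equations: for $A_0$, (first equation) $+~2\times$(second equation) gives $2c+y^2+2y+5z=0$, hence $a_1=2c+z=-y^2-2y-4z$; for $B_0$, adding its two equations gives $d+y^2+2y+3z=0$, hence $b_1=d-z=-y^2-2y-4z$. A stdlib-only Python spot-check of the two polynomial identities (our own illustration):

```python
from itertools import product

# Equations of A0 (variables c, y, z) and B0 (variables d, y, z):
pA1 = lambda c, y, z: 2*c**2 + 2*c + y**2 + z
pA2 = lambda c, y, z: -c**2 + y + 2*z
pB1 = lambda d, y, z: d**2 + d + y**2 + y + 2*z
pB2 = lambda d, y, z: -d**2 + y + z

for w, y, z in product(range(-3, 4), repeat=3):
    # pA1 + 2*pA2 = 2c + y^2 + 2y + 5z as polynomials, so on the variety of
    # A0's equations: a1 = 2c + z = -y^2 - 2y - 4z.
    assert pA1(w, y, z) + 2 * pA2(w, y, z) == 2*w + y**2 + 2*y + 5*z
    # pB1 + pB2 = d + y^2 + 2y + 3z, so b1 = d - z = -y^2 - 2y - 4z likewise.
    assert pB1(w, y, z) + pB2(w, y, z) == w + y**2 + 2*y + 3*z
print("witness t = -y^2 - 2y - 4z confirmed for A0 and B0")
```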
\begin{problem} \label{nonCQ-problem}
Generally, let $A(\xx,\zz)$ and $B(\yy,\zz)$ be
\begin{align}
A\, :\, &f_1(\xx,\zz)\ge 0 \wedge \ldots \wedge f_{r_1}(\xx,\zz) \ge 0
\wedge g_1(\xx,\zz)> 0 \wedge \ldots \wedge g_{s_1}(\xx,\zz) > 0 \nonumber\\
\, &\wedge h_1(\xx,\zz)= 0 \wedge \ldots \wedge h_{p_1}(\xx,\zz) = 0, \\
B \, :\, &f_{r_1+1}(\yy,\zz)\ge 0 \wedge \ldots \wedge f_{r}(\yy,\zz) \ge 0
\wedge g_{s_1+1}(\yy,\zz)> 0 \wedge \ldots \wedge g_{s}(\yy,\zz) > 0 \nonumber\\
\, &\wedge h_{p_1+1}(\yy,\zz)= 0 \wedge \ldots \wedge h_{p}(\yy,\zz) = 0,
\end{align}
where $f_1, \ldots, f_r$ and $g_1, \ldots, g_s$ are concave quadratic polynomials,
$h_1, \ldots, h_p$ are general polynomials, not necessarily concave quadratic, and
\begin{align}
A(\xx,\zz) \wedge B(\yy,\zz) \models \bot;
\end{align}
find an interpolant for $A(\xx,\zz)$ and $B(\yy,\zz)$.
\end{problem}
train | 0.4991.20 | Step $2$ gives a proof of $A \wedge B \models \bot$.
In order to find an interpolant for $A$ and $B$, we need to divide $N_0$ into two parts,
$A$-part and $B$-part, i.e., to find a term $t$ only with common symbols, such that
\begin{align*}
A_0 \models a_1=t ~~~B_0 \models b_1=t.
\end{align*}
Then we can choose a new variable $\alpha_t=\alpha(t)$ to be a common variable, since the term $t$ and
the function $\alpha$ both are common. Thus $N_0$ can be divided into two parts as follows,
\begin{align*}
a_2=\alpha_t \wedge b_2=\alpha_t.
\end{align*}
Finally, if we can find an interpolant $I(x,y,z,\alpha_t)$ for
\begin{align*}
(\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0) \wedge a_2=\alpha_t) \wedge (\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0) \wedge b_2=\alpha_t),
\end{align*} using Algorithm $\mathbf{IGFQC}$,
then $I(x,y,z,\alpha(t))$ will be an interpolant for $A \wedge B$.
Step $3$: Dividing $N_0$ into two parts. According to the above analysis, we need to find a witness $t$
such that $A_0 \models a_1=t$, $B_0 \models b_1=t$, where $t$ is an expression over the common symbols of $A$ and $B$.
Fortunately, such $t$ can be computed by Gr\"{o}bner basis method as follows:
First, with the variable order $c \succ a_1 \succ y \succ z$, the Gr\"{o}bner basis $\mathbb{G}_1$ of $\mathbf{Eq}(A_0)$
is computed to be
\begin{align*}
\mathbb{G}_1=&\{ y^4+4y^3+10y^2z+4y^2+20yz+25z^2-4y-8z, \\
&y^2+a_1+2y+4z, y^2+2c+2y+5z \}.
\end{align*}
Thus, we have
\begin{align} \label{eq-a1}
A_0 \models a_1=-y^2-2y-4z.
\end{align}
Simiarly, with the variable order $d \succ b_1 \succ y \succ z$, the Gr\"{o}bner basis $\mathbb{G}_2$ of $\mathbf{Eq}(B_0)$
is computed to be
\begin{align*}
\mathbb{G}_2=&\{ y^4+4y^3+6y^2z+4y^2+12yz+9z^2-y-z, \\
&y^2+b_1+2y+4z, y^2+d+2y+3z \}.
\end{align*}
Thus, we have
\begin{align} \label{eq-b1}
B_0 \models b_1=-y^2-2y-4z.
\end{align}
Whence, $t=-y^2-2y-4z$ is the witness.
Let $\alpha_t=\alpha(-y^2-2y-4z)$, which is an expression constructed from the common symbols of $A$ and $B$.
Next, find an interpolant for following formula
\begin{align*}
(\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0) \wedge a_2=\alpha_t) \wedge (\mathbf{IEq}(B_0) \wedge \mathbf{LEq}(B_0) \wedge b_2=\alpha_t).
\end{align*}
Using $\mathbf{IGFQC}$, we obtain an interpolant for the above formula as
\begin{align*}
I(x,y,z,\alpha_t)=x^2+2x+(\alpha_t+1)\le 0.
\end{align*}
Thus, $x^2+2x+(\alpha(-y^2-2y-4z)+1)\le 0$ is an interpolant for $A \wedge B$.
\begin{problem} \label{nonCQ-problem}
Generally, let $A(x,\zz)$ and $B(\yy,\zz)$ be
\begin{align}
A\, :\, &f_1(xx,\zz)\ge 0 \wedge \ldots \wedge f_{r_1}(xx,\zz) \ge 0
\wedge g_1(xx,\zz)> 0 \wedge \ldots \wedge g_{s_1}(xx,\zz) > 0 \nonumber\\
\, &\wedge h_1(xx,\zz)= 0 \wedge \ldots \wedge h_{p_1}(xx,\zz) = 0, \\
B \, :\, &f_{r_1+1}(\yy,\zz)\ge 0 \wedge \ldots \wedge f_{r}(\yy,\zz) \ge 0
\wedge g_{s_1+1}(\yy,\zz)> 0 \wedge \ldots \wedge g_{s}(\yy,\zz) > 0 \nonumber\\
\, &\wedge h_{p_1+1}(\yy,\zz)= 0 \wedge \ldots \wedge h_{p}(\yy,\zz) = 0,
\end{align}
where $f_1, \ldots, f_r$ and $g_1, \ldots, g_s$ are concave quadratic polynomials,
$h_1, \ldots, h_t$ are general polynomials, unnecessary to be concave quadratic, and
\begin{align}
A(xx,\zz) \wedge B(\yy,\zz) \models \bot,
\end{align}
try to find an interpolant for $A(xx,\zz)$ and $B(\yy,\zz)$.
\end{problem}
According to the above discussion, Problem~\ref{nonCQ-problem} can be solved by Algorithm~\ref{ag:nonCQ} below.
\begin{algorithm}[!htb] \label{ag:nonCQ}
\label{alg:int}
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\caption{ {\tt $\mathbf{IGFQC}$ }}
\Input{Two formulae $A$, $B$ as in Problem \ref{nonCQ-problem} with $A \wedge B \models \bot$}
\Output{A formula $I$ that is a Craig interpolant for $A$ and $B$}
\SetAlgoLined
\BlankLine
\textbf{Flattening, purification and hierarchical reasoning:}
obtain $A_0$, $B_0$, $N_A$, $N_B$, $N_{mix}$;\\
$A_0:=A_0 \wedge N_A, B_0:=B_0\wedge N_B$;\\
\While{$(\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0)) \wedge (\mathbf{IEq}(B_0)\wedge \mathbf{LEq}(B_0)) \not\models \bot$}
{
\If{$N_{mix}=\emptyset$}{\textbf{break}}
Choose a clause $a_1=b_1 \rightarrow a_2=b_2 \in N_{mix}$ corresponding to a function $\alpha$;\\
$N_{mix}:=N_{mix}\setminus \{ a_1=b_1 \rightarrow a_2=b_2\}$;\\
Compute the Gr\"{o}bner basis $\mathbb{G}_1$ of $\mathbf{Eq}(A_0)$ under the purely lexicographic ordering induced by
a variable ordering with the other local variables $\succ a_1 \succ$ the common variables; \\
Compute the Gr\"{o}bner basis $\mathbb{G}_2$ of $\mathbf{Eq}(B_0)$ under the purely lexicographic ordering induced by
a variable ordering with the other local variables $\succ b_1 \succ$ the common variables; \\
\If{there exists an expression $t$ over the common variables s.t.\ $a_1-t \in \mathbb{G}_1$ and $b_1-t \in \mathbb{G}_2$}
{introduce a new variable $\alpha_t=\alpha(t)$ as a common variable;
$A_0:=A_0 \wedge a_2=\alpha_t$, $B_0:=B_0 \wedge b_2=\alpha_t$}
}
\If{$(\mathbf{IEq}(A_0) \wedge \mathbf{LEq}(A_0)) \wedge (\mathbf{IEq}(B_0)\wedge \mathbf{LEq}(B_0)) \models \bot$}
{
Use $\mathbf{IGFQC}$ to obtain an interpolant $I_0$ for the above formula;\\
Obtain an interpolant $I$ for $A \wedge B$ from $I_0$;\\
\KwRet $I$ }
\Else{\KwRet Fail}
\end{algorithm}
\oomit{ | 4,000 | 68,445 | en |
train | 0.4991.21 | \section{example}
\begin{example} \label{exam}
Let $G=A \wedge B$, where
\begin{align*}
A:~&x^2+2x+(f(g(a))+1)^2 \leq 0 \wedge g(a)=2c+z \wedge\\
&2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0,\\
B:~&x^2-2x+(f(h(b))-1)^2 \leq 0 \wedge h(b)=d-z \wedge\\
&d^2+d+y^2+y+z=0 \wedge -d^2+y+2z=0.
\end{align*}
\end{example}
We show that $A \wedge B$ is unsatisfiable in $\PT ( \QQ )^{ \{ f,g,h\} }$ as
follows:
Step $1$: Flattening and purification.
We purify and
flatten the formulae $A$ and
$B$ by replacing the terms starting with $f$ with new variables. We
obtain the
following purified form:
\begin{align*}
A_0:~&x^2+2x+(a_2+1)^2 \leq 0 \wedge a_1=2c+z \wedge\\
&2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0,\\
D_A:~&a_1=g(a) \wedge a_2=f(a_1),\\
B_0:~&x^2-2x+(b_2-1)^2 \leq 0 \wedge b_1=d-z \wedge\\
&d^2+d+y^2+y+2z=0 \wedge -d^2+y+z=0,\\
D_B:~&b_1=h(b) \wedge b_2=f(b_1).
\end{align*}
Step $2$: { Hierarchical reasoning}.
By Theorem \ref{flat-puri} we have that $A \wedge B$ is unsatisfiable in
$\PT ( \QQ )^{ \{ f,g,h\} }$
if and only if $A_0 \wedge B_0 \wedge N_0$ is unsatisfiable in $\PT ( \QQ )$,
where $N_0$
corresponds to the consequences of the congruence axioms for those
ground terms which
occur in the definitions $D_A \wedge D_B$ for the newly introduced
variables.
\begin{center}
\begin{tabular}{c|c|c}\hline
Def & $G_0$ & $N_0$ \\\hline
$D_A:~a_1=g(a) \wedge a_2=f(a_1)$ & $A_0:~x^2+2x+(a_2+1)^2 \leq 0 \wedge
a_1=2c+z \wedge$ & \\
& $2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0$ & $N_0: b_1=a_1 \rightarrow
b_2=a_2$\\
& & \\
$D_B:~b_1=g(b) \wedge b_2=f(b_1)$ & $B_0:~x^2-2x+(b_2-1)^2 \leq 0
\wedge b_1=d-z \wedge$ & \\
& $d^2+d+y^2+y+2z=0 \wedge -d^2+y+z=0$ & \\
\end{tabular}
\end{center}
To prove that $A_0 \wedge B_0 \wedge N_0$ is unsatisfiable, note that
$A_0 \wedge
B_0 \models a_1=b_1$. Thus, $A_0 \wedge B_0 \wedge N_0$ entails
\begin{align*}
a_2=b_2 \wedge x^2+2x+(a_2+1)^2 \leq 0 \wedge x^2-2x+(b_2-1)^2 \leq 0.
\end{align*}
Plus the second inequation and the third inequation and using the first
equation we have,
$$2x^2+a_2^2+b_2^2+2\leq0,$$
which is inconsistent over $\mathbb{Q}$.
\begin{example}
Consider the clause $a_1=b_1 \rightarrow a_2=b_2$ of $N_0$ in Example
\ref{n-mix}. Since
this clause contains both $A$-local and $B$-local, it should be
divided into $N_{mix}$.
Then we need to separate it into two parts, one is $A$-pure and other
is $B$-pure.
We try to find them as follow:
Firstly, we note that $A_0 \wedge B_0 \models_{\PT ( \QQ )} a_1=b_1$. The
proof is following,
\begin{align*}
A_0:~&x^2+2x+(a_2+1)^2 \leq 0 \wedge a_1=2c+z \wedge\\
&2c^2+2c+y^2+z=0 \wedge -c^2+y+2z=0,\\
B_0:~&x^2-2x+(b_2-1)^2 \leq 0 \wedge b_1=d-z \wedge\\
&d^2+d+y^2+y+2z=0 \wedge -d^2+y+z=0.
\end{align*}
Let $A_{0,i}$ and $B_{0,j}$ be the $i^{th}$ conjunct in $A_0$ and the
$j^{th}$ conjunct in
$B_0$ respectively, where $1 \leq i,j \leq 4$.
Then we have:
\begin{align*}
A_{0,2}+A_{0,3}+2A_{0,4}~~ \rightarrow ~~a_1+y^2+2y+4z=0;\\
B_{0,2}+B_{0,3}+~~B_{0,4}~~ \rightarrow ~~b_1+y^2+2y+4z=0.
\end{align*}
Thus,
$$a_1=-y^2-2y-4z=b_1.$$
This completes the proof of $A_0 \wedge B_0 \models_{\PT ( \QQ )} a_1=b_1$.
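The two linear combinations can be re-checked numerically; the sketch below (plain Python, outside the formal proof) confirms that each combination is, as a polynomial identity, equal to $a_1+y^2+2y+4z$ (respectively $b_1+y^2+2y+4z$):

```python
import random

def combA(a1, c, y, z):
    # A_{0,2} + A_{0,3} + 2*A_{0,4}, each conjunct written as "expression = 0"
    return (a1 - 2*c - z) + (2*c**2 + 2*c + y**2 + z) + 2*(-c**2 + y + 2*z)

def combB(b1, d, y, z):
    # B_{0,2} + B_{0,3} + B_{0,4}
    return (b1 - d + z) + (d**2 + d + y**2 + y + 2*z) + (-d**2 + y + z)

random.seed(1)
for _ in range(1000):
    a1, c, d, y, z = (random.uniform(-5, 5) for _ in range(5))
    assert abs(combA(a1, c, y, z) - (a1 + y**2 + 2*y + 4*z)) < 1e-6
    assert abs(combB(a1, d, y, z) - (a1 + y**2 + 2*y + 4*z)) < 1e-6
```

In any model of $A_0 \wedge B_0$ both combinations vanish, which forces $a_1=-y^2-2y-4z=b_1$.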
From the proof, we can see that there exists a term $t=-y^2-2y-4z$
containing only
variables common to $A_0$ and $B_0$ such that $A_0 \models_{\PT ( \QQ )}
a_1=-y^2-2y-4z$ and
$B_0 \models_{\PT ( \QQ )} b_1=-y^2-2y-4z$. From this we can separate the clause
$a_1=b_1 \rightarrow a_2=b_2$ into an $A$-part and a $B$-part as follows:
\begin{align*}
a_1=-y^2-2y-4z \rightarrow a_2=e_{f(-y^2-2y-4z)};\\
b_1=-y^2-2y-4z \rightarrow b_2=e_{f(-y^2-2y-4z)}.
\end{align*}
\end{example}
Thus we have
\begin{align*}
N_{sep}^A= \{ a_1=-y^2-2y-4z \rightarrow a_2=e_{f(-y^2-2y-4z)} \};\\
N_{sep}^B= \{ b_1=-y^2-2y-4z \rightarrow b_2=e_{f(-y^2-2y-4z)} \}.
\end{align*}
Now, we try to obtain an interpolant for
$$(A_0 \wedge N_{sep}^A) \wedge (B_0 \wedge N_{sep}^B).$$
Note that $(A_0 \wedge N_{sep}^A)$ is logically equivalent to $(A_0
\wedge a_2 = e_{f(-y^2-2y-4z)})$, and $(B_0 \wedge N_{sep}^B)$ is
logically equivalent to
$(B_0 \wedge b_2 = e_{f(-y^2-2y-4z)})$. The conjunction
$(A_0 \wedge a_2 = e_{f(-y^2-2y-4z)}) \wedge (B_0 \wedge b_2 =
e_{f(-y^2-2y-4z)})$ is
unsatisfiable because the two disks
$(x+1)^2+(e_{f(-y^2-2y-4z)}+1)^2 \leq 1$ and
$(x-1)^2+(e_{f(-y^2-2y-4z)}-1)^2 \leq 1$ have empty intersection. Any
curve separating
them yields an interpolant for $(A_0 \wedge a_2 = e_{f(-y^2-2y-4z)})
\wedge (B_0 \wedge b_2 = e_{f(-y^2-2y-4z)})$;
we choose $I_0: x+e_{f(-y^2-2y-4z)}<0$. It is easy to see that
$(A_0 \wedge a_2 = e_{f(-y^2-2y-4z)}) \models_{\PT ( \QQ )} I_0$, that
$(B_0 \wedge b_2 = e_{f(-y^2-2y-4z)}) \wedge I_0 \models_{\PT ( \QQ )} \bot$, and that
$I_0$ contains only the variables common to $A_0$ and $B_0$.
So, $I_0$ is an interpolant for
$(A_0 \wedge a_2 = e_{f(-y^2-2y-4z)}) \wedge (B_0 \wedge b_2 =
e_{f(-y^2-2y-4z)})$.
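The geometric claim can be spot-checked by sampling: the disk from the $A$-side lies strictly in the half-plane $x+w<0$ and the disk from the $B$-side strictly in $x+w>0$, where $w$ stands for the value of $e_{f(-y^2-2y-4z)}$. A plain-Python sanity check (not part of the proof):

```python
import random

def in_disk(x, w, cx, cw):
    # membership in the closed unit disk centered at (cx, cw)
    return (x - cx)**2 + (w - cw)**2 <= 1.0

random.seed(2)
ptsA = ptsB = 0
for _ in range(20000):
    x, w = random.uniform(-3, 3), random.uniform(-3, 3)
    if in_disk(x, w, -1.0, -1.0):   # (x+1)^2 + (w+1)^2 <= 1, the A-side disk
        assert x + w < 0            # I_0 holds
        ptsA += 1
    if in_disk(x, w, 1.0, 1.0):     # (x-1)^2 + (w-1)^2 <= 1, the B-side disk
        assert x + w > 0            # I_0 fails
        ptsB += 1
assert ptsA > 0 and ptsB > 0        # both disks were actually sampled
```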
Replacing the newly introduced constant $e_{f(-y^2-2y-4z)}$ with the
term it denotes,
we let $I : x+f(-y^2-2y-4z)<0$.
It is easy to see that:
\begin{align*}
(A_0 \wedge D_A) \models_{\PT ( \QQ )^{\Sigma}} I,\\
(B_0 \wedge D_B) \wedge I \models_{\PT ( \QQ )^{\Sigma}} \bot.
\end{align*}
Therefore, $I$ is an interpolant for $(A_0 \wedge D_A) \wedge (B_0
\wedge D_B)$, and hence
also an interpolant for $A \wedge B$.
} | 2,713 | 68,445 | en |
train | 0.4991.22 | \section{Implementation and experimental results}
We have implemented the presented algorithms in \textit{Mathematica} to synthesize interpolants for concave quadratic polynomial inequalities as well as for their combination with \textit{EUF}. To deal with SOS solving and semi-definite programming, the Matlab-based optimization tool \textit{Yalmip} \cite{Yalmip} and the SDP solver \textit{SDPT3} \cite{SDPT3} are invoked. In what follows we demonstrate our approach on some examples, which have been evaluated on a 64-bit Linux computer with a 2.93GHz Intel Core-i7 processor and 4GB of RAM.
\begin{example} \label{exp:4}
Consider the example:
\begin{eqnarray*}
\phi := (f_1 \ge 0) \wedge (f_2 \ge0) \wedge (g_1 >0),\quad
\psi := (f_3 \ge 0). \quad
\phi \wedge \psi \models \bot.
\end{eqnarray*}
where $f_1 = x_1, f_2 = x_2,f_3= -x_1^2-x_2^2 -2x_2-z^2, g_1= -x_1^2+2 x_1 - x_2^2 + 2 x_2 - y^2$.
The interpolant returned after $0.394$ s is
\begin{eqnarray*}
I:= \frac{1}{2}x_1^2+\frac{1}{2}x_2^2+2x_2 > 0
\end{eqnarray*}
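The returned interpolant can be sanity-checked by sampling: on points satisfying $\phi$ the inequality $I$ holds, while every point of the ball described by $f_3\ge 0$ falsifies it. A plain-Python sketch (a numerical spot check, not a proof):

```python
import math, random

def itp(x1, x2):
    # the interpolant I := (1/2)x1^2 + (1/2)x2^2 + 2*x2 > 0, as a real value
    return 0.5*x1**2 + 0.5*x2**2 + 2*x2

random.seed(3)
checked_phi = checked_psi = 0
for _ in range(50000):
    # phi-side: x1 >= 0, x2 >= 0 and g1 = -x1^2 + 2x1 - x2^2 + 2x2 - y^2 > 0
    x1, x2, y = random.uniform(0, 3), random.uniform(0, 3), random.uniform(-2, 2)
    if -x1**2 + 2*x1 - x2**2 + 2*x2 - y**2 > 0:
        assert itp(x1, x2) > 0
        checked_phi += 1
for _ in range(50000):
    # psi-side: f3 >= 0 describes the ball x1^2 + (x2+1)^2 <= 1 - z^2
    z = random.uniform(-1, 1)
    rho = math.sqrt(random.random() * max(0.0, 1 - z**2))
    th = random.uniform(0, 2*math.pi)
    x1, x2 = rho*math.cos(th), -1 + rho*math.sin(th)
    assert -x1**2 - x2**2 - 2*x2 - z**2 >= -1e-9   # the point satisfies f3 >= 0
    assert itp(x1, x2) <= 1e-9                     # and falsifies I
    checked_psi += 1
assert checked_phi > 0
```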
\end{example}
\begin{example} \label{exp:5}
Consider the unsatisfiable conjunction $\phi \wedge \psi$:
\begin{eqnarray*}
\phi := f_1 \ge 0 \wedge f_2 \ge 0 \wedge f_3 \ge 0 \wedge g_1>0, \quad
\psi := f_4 \ge 0 \wedge f_5 \ge 0 \wedge f_6 \ge 0 \wedge g_2 >0 .
\end{eqnarray*}
where
$f_1 = -y_1+x_1-2$,
$f_2 = -y_1^2-x_1^2+2x_1 y_1 -2y_1+2x_1$,
$f_3 = -y_2^2-y_1^2-x_2^2 -4y_1+2x_2-4$,
$f_4 = -z_1+2x_2+1$,
$f_5 = -z_1^2-4x_2^2+4x_2 z_1 +3z_1-6x_2-2$,
$f_6 = -z_2^2-x_1^2-x_2^2+2x_1+z_1-2x_2-1$,
$g_1 = 2x_2-x_1-1$,
$g_2 = 2x_1-x_2-1$.
The condition NSOSC does not hold, since
$$-(2f_1 + f_2) = (y_1 -x_1 +2)^2 \textrm{ is a sum of squares}.$$
Then we have $h=(y_1 -x_1 +2)^2$, and
\begin{align*}
h_1=h=(y_1 -x_1 +2)^2, \quad h_2=0.
\end{align*}
Let $f=2f_1+f_2+h_1=0$. Then construct $\phi'$ by substituting $y_1=x_1-2$ in $\phi$, and let $\psi'$ be $\psi$. That is,
\begin{align*}
\phi':=0\ge0 \wedge 0 \ge 0 \wedge -y_2^2-x_1^2-x_2^2+2x_2 \ge 0 \wedge g_1>0, \quad \psi':=\psi.
\end{align*}
Then the interpolation problem for $\phi$ and $\psi$ reduces as
\begin{align*}
I(\phi,\psi)=(f> 0) \vee (f=0 \wedge I(\phi', \psi')) = I(\phi',\psi').
\end{align*}
For $\phi'$ and $\psi'$, the condition NSOSC still does not hold, since $-f_4-f_5 = (z_1-2x_2-1)^2$ is an SOS.
Then we have $h=h_2=(z_1-2x_2-1)^2$, $h_1=0$, and thus $f=0$.
\begin{align*}
\phi''=\phi', \quad \psi''=0\ge0 \wedge 0 \ge 0 \wedge -z_2^2-x_1^2 - x_2^2+2x_1\ge0 \wedge
g_2 >0.
\end{align*}
The interpolation problem for $\phi'$ and $\psi'$ is further reduced via $ I(\phi',\psi')=I(\phi'',\psi'')$, where
\begin{align*}
\phi'':=(f_1'=-y_2^2-x_1^2-x_2^2+2x_2\ge0) \wedge 2x_2-x_1-1>0,\\
\psi'':=(f_2'=-z_2^2-x_1^2 - x_2^2+2x_1\ge0) \wedge
2x_1-x_2-1 >0.
\end{align*}
Here the condition NSOSC holds for $\phi''$ and $\psi''$; then by SDP we find $\lambda_1=\lambda_2=0.25$, $\eta_0=0$, $\eta_1=\eta_2=0.5$ and SOS polynomials $h_1=0.25\,((x_1-1)^2+(x_2-1)^2+y_2^2)$ and $h_2=0.25\,((x_1-1)^2+(x_2-1)^2+z_2^2)$ such that
$\lambda_1 f_1'+\lambda_2 f_2' + \eta_0 + \eta_1 g_1 + \eta_2 g_2 +h_1 +h_2 \equiv 0$ and
$\eta_0+\eta_1+\eta_2=1$.
Since $\eta_0+\eta_1=0.5>0$, the interpolant returned after $2.089$ s is $f>0$, i.e. $I:= -x_1 + x_2 > 0$.
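Both SOS witnesses and the final SDP certificate in this example are polynomial identities and can be re-checked numerically. A plain-Python sanity check (assuming nothing beyond the definitions above):

```python
import random

f1 = lambda x1, y1: -y1 + x1 - 2
f2 = lambda x1, y1: -y1**2 - x1**2 + 2*x1*y1 - 2*y1 + 2*x1
f4 = lambda x2, z1: -z1 + 2*x2 + 1
f5 = lambda x2, z1: -z1**2 - 4*x2**2 + 4*x2*z1 + 3*z1 - 6*x2 - 2

random.seed(4)
for _ in range(1000):
    x1, x2, y1, y2, z1, z2 = (random.uniform(-4, 4) for _ in range(6))
    # the two SOS witnesses used to eliminate y1 and z1
    assert abs(-(2*f1(x1, y1) + f2(x1, y1)) - (y1 - x1 + 2)**2) < 1e-6
    assert abs(-(f4(x2, z1) + f5(x2, z1)) - (z1 - 2*x2 - 1)**2) < 1e-6
    # the certificate for phi'' and psi'' found by SDP
    f1p = -y2**2 - x1**2 - x2**2 + 2*x2
    f2p = -z2**2 - x1**2 - x2**2 + 2*x1
    g1, g2 = 2*x2 - x1 - 1, 2*x1 - x2 - 1
    h1 = 0.25*((x1 - 1)**2 + (x2 - 1)**2 + y2**2)
    h2 = 0.25*((x1 - 1)**2 + (x2 - 1)**2 + z2**2)
    assert abs(0.25*f1p + 0.25*f2p + 0.5*g1 + 0.5*g2 + h1 + h2) < 1e-6
```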
\end{example}
\begin{example} \label{exp:6}
Consider the example:
\begin{align*}
\phi:=&(f_1=-(y_1-x_1+1)^2-x_1+x_2 \ge 0) \wedge (y_2=\alpha(y_1)+1) \\
&\wedge ( g_1= -x_1^2-x_2^2-y_2^2+1 > 0), \\
\psi:=&(f_2=-(z_1-x_2+1)^2+x_1-x_2 \ge 0) \wedge (z_2=\alpha(z_1)-1) \\
&\wedge (g_2= -x_1^2-x_2^2-z_2^2+1 > 0).
\end{align*}
where $\alpha$ is an uninterpreted function. It takes $0.369$ s for our approach to reduce the problem to finding an interpolant as $I(\overline{\phi'}_2,\overline{\psi'}_1) \vee I(\overline{\phi'}_4,\overline{\psi'}_1)$, and another $2.029$ s to give the final interpolant as
\begin{eqnarray*}
I:= (-x_1 + x_2 > 0) \vee (\frac{1}{4}(-4\alpha(x_2-1)-x_1^2-x_2^2)>0)
\end{eqnarray*}
\end{example}
\begin{example} \label{exp:7}
Let two formulae $\phi$ and $\psi$ be defined as
\begin{align*}
\phi:=&(f_1=4-x^2-y^2 \ge 0) \wedge (f_2=y \ge 0) \wedge ( g= x+y-1 > 0), \\
\psi:=&(f_4=x \ge 0) \wedge (f_5= 1-x^2- (y+1)^2 \ge 0).
\end{align*}
The interpolant returned after $0.532$ s is $I:= \frac{1}{2}(x^2+y^2+4y)>0$ \footnote{In order to give a more objective comparison of performance with the approach proposed in \cite{DXZ13}, we skip over line 1 in the previous algorithm $\mathbf{IGFQC}$.}.
\oomit{While AiSat gives
\begin{small}
\begin{align*}
I':= &-0.0732 x^4+0.1091 x^3 y^2+199.272 x^3 y+274818 x^3-0.0001 x^2 y^3-0.079 x^2 y^2 \\
&-0.1512 x^2 y-0.9803 x^2+0.1091 x y^4+199.491 x y^3+275217 x y^2+549634 x y \\
&+2.0074 x-0.0001 y^5-0.0056 y^4-0.0038 y^3-0.9805 y^2+2.0326 y-1.5 > 0
\end{align*}
\end{small}}
\end{example}
\begin{example} \label{exp:8}
This is a linear interpolation problem adapted from \cite{RS10}. Consider the unsatisfiable conjunction $\phi \wedge \psi$:
\begin{eqnarray*}
\phi := z-x \ge 0 \wedge x-y \ge 0 \wedge -z > 0, \quad
\psi := x+y \ge 0 \wedge -y \ge 0 .
\end{eqnarray*}
It takes 0.250 s for our approach to give an interpolant as $I:= - 0.8x - 0.2y > 0$.
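For this linear example the two entailments $\phi \models I$ and $\psi \models \neg I$ can be spot-checked by sampling the two polyhedra directly (a plain-Python sanity check, not a proof):

```python
import random

def itp(x, y):
    # the interpolant I := -0.8*x - 0.2*y > 0, returned as a real value
    return -0.8*x - 0.2*y

random.seed(5)
for _ in range(20000):
    # phi: z - x >= 0, x - y >= 0, -z > 0; sample the cone directly
    z = random.uniform(-5.0, -1e-6)
    x = z - random.uniform(0.0, 5.0)     # x <= z < 0
    y = x - random.uniform(0.0, 5.0)     # y <= x
    assert itp(x, y) > 0                 # phi entails I
    # psi: x + y >= 0, -y >= 0; likewise sampled directly
    yb = random.uniform(-5.0, 0.0)
    xb = -yb + random.uniform(0.0, 5.0)  # x >= -y >= 0
    assert itp(xb, yb) <= 1e-9           # psi entails the negation of I
```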
\end{example}
\begin{example} \label{exp:9}
Consider another linear interpolation problem combined with \textit{EUF}:
\begin{eqnarray*}
\phi := f(x) \ge 0 \wedge x-y \ge 0 \wedge y-x \ge 0, \quad
\psi := -f(y) > 0 .
\end{eqnarray*}
The interpolant returned after 0.236 s is $I:= f(y) \ge 0$.
\end{example}
\begin{example} \label{exp:10}
Consider two formulas $A$ and $B$ with $A \wedge B \models \bot$, where\\
\begin{minipage}{0.6\textwidth}
\begin{align*}
A :=& -{x_1}^2 + 4 x_1 +x_2 - 4 \ge 0 \wedge \\
& -x_1 -x_2 +3 -y^2 > 0,\\
B :=& \mathbf{-3 {x_1}^2 - {x_2}^2 + 1 \ge 0} \wedge x_2 - z^2 \ge 0.
\end{align*}
Note that a concave quadratic polynomial (the bold one) from the \textit{ellipsoid} domain is involved in $B$. It takes 0.388 s for our approach to give an interpolant $ I:=-3 + 2 x_1 + {x_1}^2 + \frac{1}{2} {x_2}^2 > 0.$ The interpolant is depicted as the purple curve in the figure on the right, which separates $A$ and $B$ in the plane of the common variables $x_1$ and $x_2$.
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=\linewidth]{ellipsoid.png}
\end{minipage}
\end{example}
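The separation claimed in the example above can be spot-checked by rejection sampling over the common variables $x_1,x_2$ together with the local variables $y$ and $z$ (a plain-Python numerical check, not a proof):

```python
import random

def itp(x1, x2):
    # the interpolant I := -3 + 2*x1 + x1^2 + 0.5*x2^2 > 0, as a real value
    return -3 + 2*x1 + x1**2 + 0.5*x2**2

random.seed(6)
nA = nB = 0
for _ in range(200000):
    x1, x2 = random.uniform(-2, 4), random.uniform(-1, 6)
    y, z = random.uniform(-2, 2), random.uniform(-2, 2)
    if -x1**2 + 4*x1 + x2 - 4 >= 0 and -x1 - x2 + 3 - y**2 > 0:
        assert itp(x1, x2) > 0        # A entails I
        nA += 1
    if -3*x1**2 - x2**2 + 1 >= 0 and x2 - z**2 >= 0:
        assert itp(x1, x2) <= 0       # B entails the negation of I
        nB += 1
assert nA > 0 and nB > 0              # both regions were actually hit
```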
\begin{example} \label{exp:11}
Consider two formulas $\phi$ and $\psi$, each defined by an ellipse intersected with a half-plane:
{\small \begin{align*}
\phi :=4-(x-1)^2-4y^2 \ge 0 \wedge y- \frac{1}{2} \ge 0,~~
\psi :=4-(x+1)^2-4y^2 \ge 0 \wedge x+2y \ge 0.
\end{align*} }
The interpolant returned after 0.248 s is $I:=-3.63x^2-14.52y^2+7.26x+38.39y-7.305 > 0$.
\end{example}
\begin{example} \label{exp:12}
Consider two formulas $\phi$ and $\psi$, each defined by an octagon intersected with a half-plane:
{\small \begin{align*}
\phi &:= -3 \le x \le 1 \wedge -2 \le y \le 2 \wedge -4 \le x-y \le 2
\wedge -4 \le x+y \le 2 \wedge x+2y+1\le 0, \\
\psi &:= -1 \le x \le 3 \wedge -2 \le y \le 2 \wedge -2 \le x-y \le 4
\wedge -2 \le x+y \le 4 \wedge 2x-5y+6\le 0.
\end{align*} }
The interpolant returned after 0.225 s is $I:=-13.42x-29.23y-1.7 > 0$.
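Since all constraints here are linear, disjointness of $\phi$ and $\psi$ and the two entailments defining the interpolant can be spot-checked by sampling (plain Python, a sanity check rather than a proof):

```python
import random

def in_phi(x, y):
    return (-3 <= x <= 1 and -2 <= y <= 2 and -4 <= x - y <= 2
            and -4 <= x + y <= 2 and x + 2*y + 1 <= 0)

def in_psi(x, y):
    return (-1 <= x <= 3 and -2 <= y <= 2 and -2 <= x - y <= 4
            and -2 <= x + y <= 4 and 2*x - 5*y + 6 <= 0)

def itp(x, y):
    # the interpolant I := -13.42*x - 29.23*y - 1.7 > 0, as a real value
    return -13.42*x - 29.23*y - 1.7

random.seed(7)
nphi = npsi = 0
for _ in range(100000):
    x, y = random.uniform(-3.5, 3.5), random.uniform(-2.5, 2.5)
    assert not (in_phi(x, y) and in_psi(x, y))   # phi and psi are disjoint
    if in_phi(x, y):
        assert itp(x, y) > 0
        nphi += 1
    if in_psi(x, y):
        assert itp(x, y) <= 0
        npsi += 1
assert nphi > 0 and npsi > 0
```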
\end{example}
\begin{example} \label{exp:13}
Consider two formulas $\phi$ and $\psi$, each defined by an octagon intersected with a half-plane:
{\small \begin{align*}
\phi &:= 2 \le x \le 7 \wedge 0 \le y \le 3 \wedge 0 \le x-y \le 6
\wedge 3 \le x+y \le 9 \wedge 23-3x-8y\le 0, \\
\psi &:= 0 \le x \le 5 \wedge 2 \le y \le 5 \wedge -4 \le x-y \le 2
\wedge 3 \le x+y \le 9 \wedge y-3x-2\le 0.
\end{align*} }
The interpolant returned after 0.225 s is $I:=12.3x-7.77y+4.12 > 0$.
\end{example}
\vspace*{-2mm}
\begin{table}
\begin{center}
\begin{tabular}{llp{1.9cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.9cm}<{\centering}} | 3,878 | 68,445 | en |
train | 0.4991.23 | It takes 0.250 s for our approach to give an interpolant as $I:= - 0.8x - 0.2y > 0$.
\end{example}
\begin{example} \label{exp:9}
Consider another linear interpolation problem combined with \textit{EUF}:
\begin{eqnarray*}
\phi := f(x) \ge 0 \wedge x-y \ge 0 \wedge y-x \ge 0, \quad
\psi := -f(y) > 0 .
\end{eqnarray*}
The interpolant returned after 0.236 s is $I:= f(y) \ge 0$.
\end{example}
\begin{example} \label{exp:10}
Consider two formulas $A$ and $B$ with $A \wedge B \models \bot$, where\\
\begin{minipage}{0.6\textwidth}
\begin{align*}
A :=& -{x_1}^2 + 4 x_1 +x_2 - 4 \ge 0 \wedge \\
& -x_1 -x_2 +3 -y^2 > 0,\\
B :=& \mathbf{-3 {x_1}^2 - {x_2}^2 + 1 \ge 0} \wedge x_2 - z^2 \ge 0.
\end{align*}
Note that a concave quadratic polynomial (the bold one) from the \textit{ellipsoid} domain is involved in $B$. It takes 0.388 s using our approach to give an interpolant as $ I:=-3 + 2 x_1 + {x_1}^2 + \frac{1}{2} {x_2}^2 > 0.$ An intuitive description of the interpolant is as the purple curve in the right figure, which separates $A$ and $B$ in the panel of common variables $x_1$ and $x_2$.
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=\linewidth]{ellipsoid.png}
\end{minipage}
\end{example}
\begin{example} \label{exp:11}
Consider two formulas $\phi$ and $\psi$ both are defined by an ellipse joint a half-plane:
{\small \begin{align*}
\phi :=4-(x-1)^2-4y^2 \ge 0 \wedge y- \frac{1}{2} \ge 0,~~
\psi :=4-(x+1)^2-4y^2 \ge 0 \wedge x+2y \ge 0.
\end{align*} }
The interpolant returned after 0.248 s is $I:=-3.63x^2-14.52y^2+7.26x+38.39y-7.305 > 0$.
\end{example}
\begin{example} \label{exp:12}
Consider two formulas $\phi$ and $\psi$ both are defined by an octagon joint a half-plane:
{\small \begin{align*}
\phi &:= -3 \le x \le 1 \wedge -2 \le y \le 2 \wedge -4 \le x-y \le 2
\wedge -4 \le x+y \le 2 \wedge x+2y+1\le 0, \\
\psi &:= -1 \le x \le 3 \wedge -2 \le y \le 2 \wedge -2 \le x-y \le 4
\wedge -2 \le x+y \le 4 \wedge 2x-5y+6\le 0.
\end{align*} }
The interpolant returned after 0.225 s is $I:=-13.42x-29.23y-1.7 > 0$.
\end{example}
\begin{example} \label{exp:13}
Consider two formulas $\phi$ and $\psi$ both are defined by an octagon joint a half-plane:
{\small \begin{align*}
\phi &:= 2 \le x \le 7 \wedge 0 \le y \le 3 \wedge 0 \le x-y \le 6
\wedge 3 \le x+y \le 9 \wedge 23-3x-8y\le 0, \\
\psi &:= 0 \le x \le 5 \wedge 2 \le y \le 5 \wedge -4 \le x-y \le 2
\wedge 3 \le x+y \le 9 \wedge y-3x-2\le 0.
\end{align*} }
The interpolant returned after 0.225 s is $I:=12.3x-7.77y+4.12 > 0$.
\end{example}
\vspace*{-2mm}
\begin{table}
\begin{center}
\begin{tabular}{llp{1.9cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.4cm}<{\centering}p{1.9cm}<{\centering}}
\toprule
\multirow{2}{*}{\begin{minipage}{1.7cm}
Example
\end{minipage}}
& \multirow{2}{*}{\begin{minipage}{1.5cm}
Type
\end{minipage}}
& \multicolumn{5}{c}{Time (sec)}\\
\cmidrule(lr){3-7}
& & \textsc{CLP-Prover} & \textsc{Foci} & \textsc{CSIsat} & AiSat
& Our Approach\\ \hline
Example \ref{exp:4} & NLA & -- & -- & -- & -- & 0.394 \\
Example \ref{exp:5} & NLA & -- & -- & -- & -- & 2.089 \\
Example \ref{exp:6} & NLA+\textit{EUF} & -- & -- & -- & -- & 2.398 \\
Example \ref{exp:7} & NLA & -- & -- & -- & 0.023 & 0.532 \\
Example \ref{exp:8} & LA & 0.023 & $\times$ & 0.003 & -- & 0.250 \\
Example \ref{exp:9} & LA+\textit{EUF} & 0.025 & 0.006 & 0.007 & -- & 0.236 \\
Example \ref{exp:10} & Ellipsoid & -- & -- & -- & -- & 0.388 \\
Example \ref{exp:11} & Ellipsoid2 & -- & -- & -- & 0.013 & 0.248 \\
Example \ref{exp:12} & Octagon1 & 0.059 & $\times$ & 0.004 & 0.021 & 0.225 \\
Example \ref{exp:13} & Octagon2 & 0.065 & $\times$ & 0.004 & 0.122 & 0.216 \\
\bottomrule
\end{tabular}\\
-- means that interpolant generation fails, and $\times$ indicates that an incorrect answer is returned.
\vspace*{0.1in}
\caption{Evaluation results of the presented examples}\label{tab1}
\vspace*{-10mm}
\end{center}
\end{table}
The experimental evaluation on the above examples is summarized in Table~\ref{tab1}, where we also compare, on the same platform, with AiSat, a tool for nonlinear interpolant generation proposed in \cite{DXZ13}, as well as with three publicly available interpolation procedures for linear-arithmetic cases, i.e. Rybalchenko's tool \textsc{CLP-Prover} \cite{RS10}, McMillan's procedure \textsc{Foci} \cite{mcmillan05}, and Beyer's tool \textsc{CSIsat} \cite{CSIsat}. Table~\ref{tab1} shows that our approach successfully solves all the examples; it is especially its completeness that makes it a competitive candidate for synthesizing interpolants. Besides, \textsc{CLP-Prover}, \textsc{Foci}, and \textsc{CSIsat} can handle only linear-arithmetic expressions, with efficient optimizations (and thus their performance in the linear cases is better than that of our raw implementation). As for AiSat, only a rather limited set of applications is acceptable because of its weakness in handling local variables, and whether an interpolant can be found depends on a pre-specified total degree. Moreover, in \cite{DXZ13} not only all the constraints in formula $\phi$ have to be considered but also some of their products; for instance, if
$f_1,f_2,f_3 \ge0$ are three constraints in $\phi$, then the four constraints $f_1f_2,f_1f_3,f_2f_3,f_1f_2f_3 \ge0$ are added to $\phi$.
Table~\ref{tab1} also indicates that the efficiency of our tool is lower than that of the other tools whenever an example is
solvable by both. This is mainly because our tool is implemented in \textit{Mathematica} and therefore has to invoke SDP solvers with low efficiency. As future work, we plan to re-implement the tool in C, so that we can call the much more efficient SDP solver CSDP.
Moreover, once a considered problem is linear, an existing interpolation procedure will be invoked directly, so that no SDP solver is needed.
train | 0.4991.24 | \section{Conclusion}
The paper proposes a polynomial-time algorithm for generating
interpolants from mutually contradictory conjunctions of concave
quadratic polynomial inequalities over the reals. Under a
technical condition, namely that no nonpositive constant combination
of the nonstrict inequalities is a sum of squares, such an
interpolant can be generated essentially using
the linearization of the quadratic polynomials. If this
condition is not satisfied, the algorithm is recursively
called on smaller problems after deducing linear equalities
relating the variables. The resulting interpolant is a disjunction of
conjunctions of polynomial inequalities.
Using the hierarchical calculus framework proposed in
\cite{SSLMCS2008}, we give an interpolation algorithm for the
combined quantifier-free theory of concave quadratic polynomial
inequalities and equality over uninterpreted function
symbols. The combination algorithm is patterned after a
combination algorithm for the combined theory of linear
inequalities and equality over uninterpreted function symbols.
In addition, we also discuss how to
extend our approach, using Gr\"{o}bner bases, to formulas with polynomial
equalities whose polynomials may be neither concave nor quadratic.
The proposed approach is applicable to all existing abstract domains, such as
\emph{octagon}, \emph{polyhedra} and \emph{ellipsoid}, and can therefore be used to improve
the scalability of existing verification techniques for programs and hybrid systems.
An interesting issue raised by the proposed framework is the
extent to which the linearization of nonlinear polynomial inequalities,
with some additional conditions
on the coefficients (such as concavity for quadratic polynomials),
can be exploited. We are also investigating how the results
for nonlinear polynomial inequalities reported in \cite{DXZ13}, which are based on
Stengle's Positivstellensatz \cite{Stengle} and
the Archimedean condition on variables, implying that every
variable ranges over a bounded interval, can be exploited in the
proposed framework for dealing with polynomial inequalities.
\end{document} | 494 | 68,445 | en |
train | 0.4992.0 | \begin{document}
\title{Fundamental Groups of Commuting Elements in Lie Groups}
\begin{abstract} We compute the fundamental group of the spaces of ordered commuting $n$-tuples
of elements in the Lie groups $SU(2)$, $U(2)$ and $SO(3)$. For $SO(3)$ the mod-2 cohomology of
the components of these spaces is also obtained.
\end{abstract}
\section {\bf Introduction}\label{intro}
In this paper we calculate the fundamental groups of the connected components of the spaces
$$M_n(G):=Hom({\bbb Z}^n,G),\ \mbox{ where $G$ is one of $SO(3),\ SU(2)$ or $U(2)$.}$$
The space $M_n(G)$ is just the space of ordered commuting $n$-tuples of elements from $G,$ topologized as a subset of $G^n.$
The spaces $M_n(SU(2))$ and $M_n(U(2))$ are connected (see~\cite{AC}), but $M_n(SO(3))$ has
many components if $n>1.$ One of the components is the one containing the element $(id,id,\ldots,id);$ see Section~\ref{thespaces}. The other components are all homeomorphic to $V_2(\mathbb R^3)/\mathbb Z_2\oplus\mathbb Z_2,$ where $V_2(\mathbb R^3)$ is the Stiefel manifold of orthonormal $2$-frames in $\mathbb R^3$ and the action of
$\mathbb Z_2\oplus\mathbb Z_2$ on $V_2(\mathbb R^3)$ is given by
$$(\epsilon_1,\epsilon_2)(v_1,v_2)=(\epsilon_1 v_1,\epsilon_2 v_2),\
\mbox{where $\epsilon_j=\pm 1$ and $(v_1,v_2)\in V_2(\mathbb R^3). $}
$$
Let $e_1,e_2,e_3$ be the standard basis of $\mathbb R^3.$ Under the homeomorphism $SO(3)\to V_2(\mathbb R^3)$ given by $A\mapsto (Ae_1,Ae_2)$ the action of
$\mathbb Z_2\oplus\mathbb Z_2$ on $V_2(\mathbb R^3)$ corresponds to the action defined by right multiplication by the elements
of the group generated by the transformations
$$(x_1,x_2,x_3)\mapsto(x_1,-x_2,-x_3),\ (x_1,x_2,x_3)\mapsto(-x_1,x_2,-x_3).$$
The orbit space of this action is homeomorphic to ${\bbb S}^3/Q_8$, where $Q_8$ is
the quaternion group of order eight.
Then $M_n(SO(3))$ will be a disjoint union of many copies of ${\bbb S}^3/Q_8$ and the component containing $(id,\ldots,id).$ For brevity let $\vec{1}$ denote the $n$-tuple $(id,\ldots,id).$
\begin{definition}\label{comps}
{\em Let $M_n^+(SO(3))$ be the component of $\vec{1}$ in $M_n(SO(3))$, and let $M_n^-(SO(3))$ be the complement
$M_n(SO(3))-M_n^+(SO(3))$.}
\end{definition}
Our main result is the following
\begin{theorem}\label{mainth}
For all $n\ge 1$
$$\begin{array}{rcc}
\pi_1(M_n^+(SO(3))) & = & {\bbb Z}_2^n \\
\pi_1(M_n(SU(2))) & =& 0 \\
\pi_1(M_n(U(2))) & = & {\bbb Z}^n \end{array}$$
\end{theorem}
The other components of $M_n(SO(3)),$ $n>1,$ all have fundamental group $Q_8.$
\begin{remark}\label{cover}{\em To prove this theorem we first prove that
$\pi_1(M_n^+(SO(3))) = {\bbb Z}_2^n,$ and then use the following property of spaces of homomorphisms (see~\cite{G}).
Let $\Gamma$ be a discrete group, $p:\tilde{G}\to G$ a covering
of Lie groups, and $C$ a component of the image of the induced map
$p_*:Hom(\Gamma,\tilde{G})\to Hom(\Gamma,G)$. Then $p_*:p_*^{-1}(C)\to C$ is a regular covering with
covering group $Hom(\Gamma,Ker\ p)$.
Applying this to the universal coverings $SU(2)\to SO(3)$ and $SU(2)\times{\bbb R} \to U(2)$ induces coverings
$${\bbb Z}_2^n\to M_n(SU(2))\to M_n^+(SO(3))$$ $${\bbb Z}^n\to M_n(SU(2))\times{\bbb R}^n\to M_n(U(2))$$}
\end{remark}
\begin{remark}{\em The spaces of homomorphisms arise in different contexts (see~\cite{J}).
In physics for instance, the orbit space $Hom({\bbb Z}^n,G)/G$, with $G$ acting by conjugation, is the moduli space of isomorphism
classes of flat connections on principal $G$-bundles over the $n$-dimensional torus. Note that, if $G$ is connected,
the map $\pi_0(Hom({\bbb Z}^n,G))\to\pi_0(Hom({\bbb Z}^n,G)/G)$ is a bijection of sets. The study of these spaces arises
from questions concerning the understanding of the structure of the components of this moduli space and their number.
These questions are part of the study of the quantum field theory of gauge theories over the $n$-dimensional
torus (see~\cite{BFM},\cite{KS}).
}\end{remark}
The organization of this paper is as follows. In Section 2 we study the topology of $M_n(SO(3))$ and
compute its number of components. In Section 3 we prove Theorem~\ref{mainth} and apply this result to
mapping spaces. Section 4 treats the cohomology of $M_n^+(SO(3))$. Part of the content of this paper appears in
the Doctoral Dissertation of the first author (\cite{E}).
train | 0.4992.1 | \section{\bf The Spaces $M_n(SO(3))$}\label{thespaces}
In this section we describe the topology of the spaces $M_n(SO(3)),\ n\ge 2.$
If $A_1,A_2$ are commuting elements from $SO(3)$ then there are $2$ possibilities:
either $A_1,A_2$ are rotations about a common axis; or
$A_1,A_2$ are involutions about axes meeting at right angles.
The first possibility covers the case where one of $A_1,A_2$ is the identity since the identity can be considered as a rotation about any axis.
It follows that there are 2 possibilities for an $n$-tuple
$(A_1,\ldots,A_n)\in M_n(SO(3)):$
\begin{enumerate}
\item Either $A_1,\ldots,A_n$ are all rotations about a common axis $L$; or
\item There exists at least one pair $i,j$ such that $A_i,A_j$ are involutions about perpendicular axes. If $v_i,v_j$ are unit vectors representing these axes then all the other $A_k$ must be one of $id,A_i,A_j$ or $A_iA_j=A_jA_i$ (the involution about the cross product $v_i\times v_j).$
\end{enumerate}
It is clear that if $\omega(t)=(A_1(t),\ldots,A_n(t))$ is a path in
$M_n(SO(3))$ then exactly one of the following 2 possibilities occurs:
either the rotations $A_1(t),\ldots,A_n(t)$ have a common axis $L(t)$ for all $t$; or there exists a pair $i,j$ such that $A_i(t),A_j(t)$ are involutions about perpendicular axes for all $t$. In the second case the pair $i,j$ does not depend on $t.$
\begin{proposition}\label{compvec{1}}
$M_n^+(SO(3))$ is the space of $n$-tuples $(A_1,\ldots,A_n)\in SO(3)^n$ such that all the $A_j$ have a common axis of rotation.
\end{proposition}
\begin{proof}
Let $A_1,\ldots,A_n$ have a common axis of rotation $L$. Thus
$A_1,\ldots,A_n$ are rotations about $L$ by some angles $\theta_1,\ldots,\theta_n.$ We can change all angles to $0$ by a path (while maintaining the common axis). Conversely, if $\omega(t)=(A_1(t),\ldots,A_n(t))$ is a path containing $\vec{1}$ then the $A_j(t)$ will have a common axis of rotation for all $t$ (which might change with $t$).
\end{proof}
Thus any component of $M_n^-(SO(3))$ can be represented by an $n$-tuple $(A_1,\ldots,A_n)$ satisfying possibility 2 above.
\begin{corollary} The connected components of $M_2(SO(3))$ are $M_2^{\pm}(SO(3)).$ \end{corollary}
\begin{proof} Let $(A_1,A_2)$ be a pair in $M_2^-(SO(3)).$ Then there are unit vectors $v_1,v_2$ in ${\bbb R}^3$ such that $v_1,v_2$ are
perpendicular and $A_1,A_2$ are involutions about $v_1,v_2$ respectively. The pair $(v_1,v_2)$ is not unique since any one of the
four pairs $(\pm v_1,\pm v_2)$ will determine the same involutions. In fact there is a 1-1 correspondence between pairs $(A_1,A_2)$
in $M_2^-(SO(3))$ and sets $\{(\pm v_1,\pm v_2) \}.$ Thus $M_2^-(SO(3))$ is homeomorphic to the orbit space
$V_2({\bbb R}^3)/{\bbb Z}_2\oplus{\bbb Z}_2.$ Since $V_2({\bbb R}^3)$ is connected so is $M_2^-(SO(3)).$
\end{proof}
Next we determine the number of components of $M_n^-(SO(3))$ for $n>2$. The following example will give an indication of the complexity.
\begin{example}\label{example1}
Let $(A_1,A_2,A_3)$ be an element of $M_3^-(SO(3)).$ Then there exists at least one pair $A_i,A_j$ without a
common axis of rotation. For example suppose $A_1,A_2$ don't have a common axis. Then
$A_1,A_2$ are involutions about perpendicular axes $v_1,v_2$, and there are
$4$ possibilities for $A_3:$
$A_3=id,A_1,A_2\ \mbox{or}\ A_3=A_1A_2.$
We will see that the triples
$$(A_1,A_2,id),(A_1,A_2,A_1),(A_1,A_2,A_2),(A_1,A_2,A_1A_2)$$ belong to different components. Similarly if $A_1,A_3$ or $A_2,A_3$ don't have a common axis of rotation. This leads to 12 components, but some of them are the same component.
An analysis leads to a total of 7 distinct components corresponding to the following 7 triples:
$(A_1,A_2,id)$, $(A_1,A_2,A_1)$, $(A_1,A_2,A_2)$, $(A_1,A_2,A_1A_2)$, $(A_1,id,A_3)$, $(A_1,A_1,A_3)$, $(id,A_2,A_3)$;
where $A_1,A_2$ are distinct involutions in the first 4 cases; $A_1,A_3$ are distinct involutions in the next 2 cases; and $A_2,A_3$ are
distinct involutions in the last case. These components are all homeomorphic to ${\bbb S}^3/ Q_8$.
Thus $M_3(SO(3))$ is homeomorphic to the disjoint union
$$M_3^+(SO(3))\sqcup {\bbb S}^3/Q_8\sqcup\ldots\sqcup {\bbb S}^3/Q_8,$$ where there are 7 copies of ${\bbb S}^3/Q_8$.
\end{example}
The pattern of this example holds for all $n\ge 3.$ A simple analysis shows that
$M_n^-(SO(3))$ consists of $n$-tuples
$\vec{A}=(A_1,\ldots,A_n)\in SO(3)^n$ satisfying the following conditions:
\begin{enumerate}
\item Each $A_i$ is either an involution about some axis $v_i,$ or the
identity.
\item If $A_i,A_j$ are distinct involutions then their axes are at right
angles.
\item There exists at least one pair $A_i,A_j$ of distinct involutions.
\item If $A_i,A_j$ are distinct involutions then every other $A_k$ is one of $id,A_i,A_j$ or $A_iA_j.$
\end{enumerate}
This leads to 5 possibilities for any element
$(B_1,\ldots,B_n)\in M_n^-(SO(3)):$
$$
(B_1,B_2,*,\ldots,*),(B_1,id,*,\ldots,*),(id,B_2,*,\ldots,*),
(B_1,B_1,*,\ldots,*),(id,id,*,\ldots,*),
$$
where $B_1,B_2$ are distinct involutions about perpendicular axes and the asterisks are choices from amongst $id,B_1,B_2,B_3=B_1B_2.$ The choices must satisfy the conditions above.
These 5 cases account for all components of $M_n^-(SO(3)),$ but not all choices lead to distinct components. If
$\omega(t)= (B_1(t),B_2(t),\ldots,B_n(t))$ is a path in
$M_n^-(SO(3))$ then it is easy to verify the following statements:
\begin{enumerate}
\item If some $B_i(0)=id$ then $B_i(t)=id$ for all $t$.
\item If $B_i(0)=B_j(0)$ then $B_i(t)=B_j(t)$ for all $t$.
\item If $B_i(0),B_j(0)$ are distinct involutions then so are $B_i(t),B_j(t)$ for all $t$.
\item If $B_k(0)=B_i(0)B_j(0)$ then $B_k(t)=B_i(t)B_j(t)$ for all $t$.
\end{enumerate}
These 4 statements are used repeatedly in the proof of the next theorem.
\begin{theorem}
The number of components of $M_n^-(SO(3))$ is
$$\left\{\begin{array}{ll}
\displaystyle\frac{1}{6}(4^{n}-3\times 2^{n}+2) & \mbox{if $n$ is even}\\
& \\
\displaystyle \frac{2}{3}(4^{n-1}-1)-2^{n-1}+1 &\mbox{if $n$ is odd}
\end{array}\right.
$$
Moreover, each component is homeomorphic to ${\bbb S}^3/Q_8.$
\end{theorem}
\begin{proof}
Let $x_n$ denote the number of components. The first 3 values of $x_n$ are $x_1=0,$ $ x_2=1$ and $x_3=7,$ in agreement with the statement in the theorem. We consider the above 5 possibilities one by one. First assume $\vec{B}=(B_1,B_2,*,\ldots,*).$ Then different choices of the asterisks lead to different components. Thus the contribution in this case is $4^{n-2}.$ Next assume
$\vec{B}=(B_1,id,*,\ldots,*).$ Then all choices for the asterisks are admissible, except for those choices involving only $id$ and $B_1.$ This leads to
$4^{n-2}-2^{n-2}$ possibilities. However, changing every occurrence of $B_2$ to $B_3,$ and $B_3$ to $B_2,$ keeps us in the same component. Thus the total contribution in this case is $(4^{n-2}-2^{n-2})/2.$ Cases 3 and 4 give the same contribution.
Finally, there are $x_{n-2}$ components associated to $\vec{B}=(id,id,*,\ldots,*).$
This leads to the recurrence relation
$$\displaystyle x_n= 4^{n-2}+\frac{3}{2}(4^{n-2}-2^{n-2})+x_{n-2}$$
Solving this recurrence relation with the initial values $x_1=0$ and $x_2=1$ yields the formulas for $x_n$ stated in the theorem.
Given any element $(B_1,\ldots,B_n)\in M_n^-(SO(3))$ we select a pair of
involutions, say $B_i,B_j,$ with perpendicular axes $v_i,v_j.$ All the other $B_k$ are determined uniquely by $B_i,B_j.$ Thus the element $(v_i,v_j)\in V_2(\mathbb R^3)$ determines $(B_1,\ldots,B_n).$ But all the elements $(\pm v_i,\pm v_j)$ also determine $(B_1,\ldots,B_n).$ Thus the component to which $(B_1,\ldots,B_n)$ belongs is homeomorphic to
$V_2(\mathbb R^3)/\mathbb Z_2\oplus\mathbb Z_2\cong {\bbb S}^3/Q_{8}.$
\end{proof} | 3,091 | 11,228 | en |
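The recurrence and the closed forms in the theorem can be cross-checked mechanically. The following plain-Python sketch (an independent numerical check, not part of the proof) iterates the recurrence from the initial values $x_1=0$, $x_2=1$ and compares with the stated formulas:

```python
def closed_form(n):
    # the component count stated in the theorem
    if n % 2 == 0:
        return (4**n - 3 * 2**n + 2) // 6
    return 2 * (4**(n - 1) - 1) // 3 - 2**(n - 1) + 1

# x_1 = 0, x_2 = 1 and x_n = 4^(n-2) + (3/2)(4^(n-2) - 2^(n-2)) + x_(n-2)
x = {1: 0, 2: 1}
for n in range(3, 21):
    x[n] = 4**(n - 2) + 3 * (4**(n - 2) - 2**(n - 2)) // 2 + x[n - 2]
    assert x[n] == closed_form(n)

assert x[3] == 7   # the seven components found for n = 3 in the example above
```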
train | 0.4992.2 | \section{\bf Fundamental Group of $M_n(G)$}
In this section we prove Theorem \ref{mainth}, and we start by finding an appropriate description of $M_n^+(SO(3))$.
Let $T^n=({\bbb S}^1)^n$ denote the $n$-torus. Then
\begin{theorem}\label{quotient}
$M_n^+(SO(3))$ is homeomorphic to the quotient space ${\bbb S}^2\times T^n/\sim,$ where $\sim$ is the equivalence relation generated by
$$\displaystyle
(v,z_1,\dots,z_n)\sim (-v,\bar{z}_1,\ldots,\bar{z}_n)\ \mbox{ and }\
(v,\vec{1})\sim(v^{\prime},\vec{1})\
\mbox{for all $v,v^{\prime}\in {\bbb S}^2,z_i\in {\bbb S}^1.$}
$$
\end{theorem}
\begin{proof}
If $(A_1,\ldots,A_n)\in M_n^+(SO(3))$ then there exists $v\in {\bbb S}^2$ such that $A_1,\ldots,A_n$ are rotations about $v$. Let $z_j\in {\bbb S}^1$ be the elements corresponding to these rotations. The $(n+1)$-tuple $(v,z_1,\ldots,z_n)$ is not unique. For example, if one of the $A_i$'s is not the identity then $(-v,\bar{z}_1,\ldots,\bar{z}_n)$ determines the same $n$-tuple of rotations. On the other hand, if all the $A_i$'s are the identity then any element $v\in {\bbb S}^2$ is an axis of rotation. Thus the assignment $(v,z_1,\ldots,z_n)\mapsto(A_1,\ldots,A_n)$ induces a continuous bijection from ${\bbb S}^2\times T^n/\sim$ onto $M_n^+(SO(3)),$ which is a homeomorphism since the source is compact and the target is Hausdorff.
\end{proof} | 448 | 11,228 | en |
train | 0.4992.3 | We will use the notation $[v,z_1,\ldots,z_n]$ to denote the equivalence class of $(v,z_1,\dots,z_n).$
Thus $x_0=[v,\vec{1}]\in {\bbb S}^2\times T^n/\sim$ is a single point, which we choose to be the base point. It corresponds to the $n$-tuple $(id,\dots, id)\in M_n^+(SO(3))$. Then
$M_n^+(SO(3))$ is locally homeomorphic to $\mathbb R^{n+2}$ everywhere except at the point $x_0$ where it is singular.
{\it Proof of Theorem \ref{mainth}}: Notice that the result holds for $n=1$ since $Hom({\bbb Z},G)$ is homeomorphic to $G$.
The first step is to compute $\pi_1(M_n^+(SO(3))).$ Let $T^n_0=T^n-\{\vec{1}\}$ and
$M_n^+=M_n^+(SO(3)).$
Removing the singular point $x_0=[v,\vec{1}]$ from $M_n^+$ we have
$M_n^+-\{x_0\}\cong {\bbb S}^2\times T_0^n/\mathbb Z_2,$ see Theorem \ref{quotient}.
If $t$ denotes the generator of $\mathbb Z_2$ then the $\mathbb Z_2$ action
on ${\bbb S}^2\times T_0^n$ is given by
$$t(v,z_1,\ldots,z_n)=(-v,\bar{z}_1,\ldots,\bar{z}_n),\ v\in {\bbb S}^2,\ z_j\in {\bbb S}^1.$$ This action is fixed-point free and so there is
a two-fold covering
${\bbb S}^2\times T^n_0\stackrel{p}{\to}M_n^+-\{x_0\}$ and a short exact sequence
$$1\to \pi_1({\bbb S}^2\times T^n_0)\to\pi_1(M_n^+-\{x_0\})\to{\bbb Z}_2\to 1$$
Let ${\bf n}$ denote the north pole of ${\bbb S}^2.$ Then for base points in ${\bbb S}^2\times T^n_0$ and $M_n^+-\{x_0\}$ we take
$({\bf n},-1,\ldots,-1)=({\bf n},-\vec{1})$ and $[{\bf n},-1,\ldots,-1]=[{\bf n},-\vec{1}]$ respectively.
This sequence splits. To see this note that the composite
${\bbb S}^2\to {\bbb S}^2\times T^n_0 \to {\bbb S}^2$ is the identity, where the first map is
$ v\mapsto(v,-\vec{1})$ and the second is just the projection. Both maps are equivariant with respect to the ${\bbb Z}_2$-actions, and therefore $ M_n^+-\{x_0\}$ retracts onto $\mathbb R P^2.$
First we consider the case $n=2.$ Choose $-1$ to be the base point in ${\bbb S}^1.$ The above formula for the action of $\mathbb Z_2$ also defines a ${\bbb Z}_2$ action on ${\bbb S}^2\times({\bbb S}^1\vee {\bbb S}^1).$
This action is fixed point free. The inclusion ${\bbb S}^2\times({\bbb S}^1\vee {\bbb S}^1)\subset {\bbb S}^2\times {\bbb S}^1\times {\bbb S}^1$ is equivariant and there exists a ${\bbb Z}_2$-equivariant strong deformation retract from
${\bbb S}^2\times T^2_0$ onto ${\bbb S}^2\times({\bbb S}^1\vee {\bbb S}^1).$ Let $a_1,a_2$ be the generators
$({\bf n},{\bbb S}^1,-1)$ and $({\bf n},-1,{\bbb S}^1)$ of $\pi_1({\bbb S}^2\times T^2_0)={\bbb Z}*{\bbb Z}.$ See the Figure below.
The involution $t:{\bbb S}^2\times T^2_0\to {\bbb S}^2\times T^2_0$ induces isomorphisms
$$\begin{array}{ccc}
\pi_1({\bbb S}^2\times({\bbb S}^1\vee {\bbb S}^1),\{{\bf n},-1,-1\})&\stackrel{c}{\to}&
\pi_1({\bbb S}^2\times({\bbb S}^1\vee {\bbb S}^1),\{{\bf s},-1,-1\})\\
\pi_1({\bbb S}^2\vee({\bbb S}^1\vee {\bbb S}^1),\{{\bf n},-1,-1\})&\stackrel{c}{\to}&
\pi_1({\bbb S}^2\vee({\bbb S}^1\vee {\bbb S}^1),\{{\bf n},-1,-1\})
\end{array}$$
where ${\bf s}=-{\bf n}$ is the south pole in ${\bbb S}^2.$
We have the following commutative diagram
$$\xymatrix{
{\bbb S}^2\vee_{{\bf n}}({\bbb S}^1\vee {\bbb S}^1)\ar[d]^{i_{{\bf n}}}\ar[rr]^t & & {\bbb S}^2\vee_{{\bf s}}({\bbb S}^1\vee {\bbb S}^1)\ar[d]^{i_{{\bf s}}} \\
{\bbb S}^2\times_{{\bf n}}({\bbb S}^1\vee {\bbb S}^1)\ar[rr]^t\ar[rd]_p & & {\bbb S}^2\times_{{\bf s}}({\bbb S}^1\vee {\bbb S}^1)\ar[ld]^p \\
& M_2^+-\{x_0\} & }$$
where $i_{{\bf n}}$ and $i_{{\bf s}}$ are inclusions. Here the subscripts ${\bf n}$ and ${\bf s}$ refer to the north and south poles respectively, which we take to be base points of ${\bbb S}^2$ in the one point unions. The inclusions $i_{\bf n},i_{\bf s}$ induce isomorphisms on $\pi_1$ and therefore
$p_*\pi_1({\bbb S}^2\vee_{\bf n}({\bbb S}^1\vee {\bbb S}^1))=p_*\pi_1({\bbb S}^2\vee_{\bf s}({\bbb S}^1\vee {\bbb S}^1)).$ Thus $t$ sends $a_1$ to the loop based at ${\bf s}$ but with
the opposite orientation (similarly for $a_2$). See the Figure below.
\begin{center}
\includegraphics [scale=0.5] {figure1.eps}
\end{center}
We now have $\pi_1(M_2^+-\{x_0\})=
<a_1,a_2,t\ |\ t^2=1, a_1^t=a_1^{-1}, a_2^t=a_2^{-1}>$.
For the computation of $\pi_1(M_n^+-\{ x_0\}),\ n\ge 3,$ note that the inclusion $T^n_0 \subset T^n$ induces an isomorphism on $\pi_1.$ Therefore
$\pi_1(T^n_0)=<a_1,\ldots,a_n\ |\ [a_i,a_j]=1\ \forall\ i,j>.$
The various inclusions of $T_0^2$ into $T_0^n$ (corresponding to pairs of generators) show that the action of $t$ on the
generators is still
given by $a_i^t=a_i^{-1}.$
Thus
$$\pi_1(M_n^+-\{x_0\})=<a_1,\ldots,a_n,t\ |\ t^2=1,[a_i,a_j]=1, a_i^t=a_i^{-1}>,\
\mbox{ for $n\geq 3$.}$$
The final step in the calculation of $\pi_1(M_n^+)$ is to use van Kampen's theorem. To do this let
$U\subset {\bbb S}^1$ be a small open connected neighbourhood of $1\in {\bbb S}^1$ which is invariant under conjugation. Here small means
$-1\not\in U$. Then $N_n={\bbb S}^2\times U^n/\sim$ is a contractible neighborhood of $x_0$ in $M_n^+.$
We apply van Kampen's theorem to the situation
$M_n^+=\displaystyle (M_n^+-\{x_0\})\cup N_n.$
The intersection $N_n\cap(M_n^+-\{x_0\})$ is homotopy equivalent to
$({\bbb S}^2\times{\bbb S}^{n-1})/{\bbb Z}_2$ where ${\bbb Z}_2$ acts by multiplication by
$-1$ on both factors. Therefore $\pi_1(N_n\cap(M_n^+-\{x_0\})) \cong{\bbb Z}$ when $n =2$, and ${\bbb Z}_2$ when $n\geq 3$.
Thus we need to understand the homomorphism induced by the inclusion
$N_n\cap(M_n^+-\{ x_0\} )\to M_n^+-\{ x_0\}$.
When $n=2$ the inclusion of $N_2\cap(M_2^+-\{x_0\})$ into $M_2^+-\{x_0\}$ induces the following commutative diagram
$$\xymatrix{
{\bbb Z}\ar[r]\ar[d]^2 & {\bbb Z}*{\bbb Z}\ar[d] \\
\pi_1(N_2\cap(M_2^+-\{x_0\}))\ar[r]\ar[d] & \pi_1(M_2^+-\{x_0\})\ar[d] \\
{\bbb Z}_2\ar[r]^= & {\bbb Z}_2 }$$
where the map on top is the commutator map. So if the generator of $\pi_1(N_2\cap(M_2^+-\{x_0\}))$ is sent to
$w\in\pi_1(M_2^+-\{x_0\})$, then $w^2=[a_1,a_2]$, and the image of $w$ in ${\bbb Z}_2$ is $t$.
Thus we can write $w=a_1^{n_1}a_2^{m_1}\cdots a_1^{n_r}a_2^{m_r}t$ with $n_i,m_i\in{\bbb Z}$.
Then
$$w^2=a_1^{n_1}a_2^{m_1}\cdots a_1^{n_r}a_2^{m_r}a_1^{-n_1}a_2^{-m_1}\cdots a_1^{-n_r}a_2^{-m_r}=
a_1 a_2 a_1^{-1}a_2^{-1}$$
which occurs only if $r=1$ and $n_1=m_1=1$. It follows that $w=a_1a_2 t$.
Thus $$\pi_1(M_2^+)=<a_1,a_2,t\ |\ t^2=1, a_1^t=a_1^{-1}, a_2^t=a_2^{-1}, a_1a_2 t=1>$$ and routine computations
show that this is the Klein four group.
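To spell out the routine computation: using $a_1a_2t=1$ to eliminate $t=(a_1a_2)^{-1},$ the relation $t^2=1$ becomes $(a_1a_2)^2=1,$ i.e. $a_2a_1a_2=a_1^{-1},$ while $a_1^t=a_1^{-1}$ becomes $a_2a_1a_2^{-1}=a_1^{-1}.$ Comparing the two equations gives $a_2^2=1,$ and symmetrically (using $a_2^t=a_2^{-1}$) one gets $a_1^2=1.$ The relation $a_2a_1a_2^{-1}=a_1^{-1}=a_1$ then shows that $a_1$ and $a_2$ commute, so $\pi_1(M_2^+)\cong{\bbb Z}_2\oplus{\bbb Z}_2.$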
For $n\geq 3$ the inclusion map $N_n\cap(M_n^+-\{x_0\})\to M_n^+-\{x_0\}$ can be understood by looking at the following diagram
$$\xymatrix{
{\bbb S}^2\times {\bbb S}^1\ar[rr]\ar[dd]\ar[rd] & & {\bbb S}^2\times T^2_0\ar[dd]\ar[rd] & \\
& {\bbb S}^2\times S^{n-1}\ar[rr]\ar[dd] & & {\bbb S}^2\times T^n_0\ar[dd] \\
N_2\cap(M_2^+-\{x_0\})\ar[rr]\ar[rd] & & M_2^+-\{x_0\}\ar[rd] & \\
& N_n\cap(M_n^+-\{x_0\})\ar[rr] & & M_n^+-\{x_0\}
}$$
Note that the map $N_2\cap(M_2^+-\{x_0\})\to N_n\cap(M_n^+-\{x_0\})$ induces the canonical projection ${\bbb Z}\to{\bbb Z}_2$. A diagram chase
shows that the inclusion $N_n\cap(M_n^+-\{x_0\})\to M_n^+-\{x_0\}$ imposes the relation $a_1a_2t=1$ as well, and
therefore
$$\pi_1(M_n^+)=<a_1,\ldots,a_n,t\ |\ t^2=1,[a_i,a_j]=1, a_i^t=a_i^{-1}, a_1a_2 t=1>.$$
Routine computations show that this group is isomorphic to ${\bbb Z}_2^n$.
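The computation parallels the case $n=2$: eliminating $t=(a_1a_2)^{-1}$ and using that the $a_i$ commute, each relation $a_i^t=a_i^{-1}$ reduces to $a_i=a_i^{-1},$ i.e. $a_i^2=1$ for every $i$ (and $t^2=(a_1a_2)^{-2}=1$ holds automatically). The group is therefore generated by the commuting involutions $a_1,\ldots,a_n,$ whence it is isomorphic to ${\bbb Z}_2^n.$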
This completes the proof of Theorem~\ref{mainth} for $SO(3)$. The cases of $SU(2)$ and $U(2)$ follow from Remark~\ref{cover}.
$\Box$
Since the map $\pi_1(\vee_n G)\to\pi_1(G^n)$ is an epimorphism, it follows that the inclusion maps
$$M_n^+(G)\to G^n\hspace{.5cm}\mbox{if}\ \ G=SO(3)$$
$$M_n(G)\to G^n\hspace{.5cm}\mbox{if}\ \ G=SU(2),U(2)$$
are isomorphisms in $\pi_1$ for all $n\geq 1$.
Recall that there is a map $Hom(\Gamma,G)\to Map_*(B\Gamma,BG)$, where $Map_*(B\Gamma,BG)$ is the space of pointed
maps from the classifying space of $\Gamma$ into the classifying space of $G$. Let $Map_*^+(T^n,BG)$ be the component of
the map induced by the trivial representation.
\begin{corollary}\label{5.2} The maps
$$M_n^+(G)\to Map_*^+(T^n,BG)\hspace{.5cm}\mbox{if}\ \ G=SO(3)$$
$$M_n(G)\to Map_*^+(T^n,BG)\hspace{.5cm}\mbox{if}\ \ G=U(2)$$
are injective in $\pi_1$ for all $n\geq 1$.
\end{corollary}
\begin{proof} By induction on $n$, with the case $n=1$ being trivial. Assume $n>1$, and
note that there is a commutative diagram
$$\xymatrix{
M_n^+(SO(3))\ar[r]\ar[d] & Map_*^+(B\pi_1(T^n),BSO(3))\ar[d] \\
Hom(\pi_1(T^{n-1}\vee {\bbb S}^1),SO(3))\ar[r]\ar[d] & Map_*^+(B\pi_1(T^{n-1}\vee {\bbb S}^1),BSO(3))\ar[d] \\
Hom(\pi_1(T^{n-1}),SO(3))\times SO(3)\ar[r] & Map_*^+(B\pi_1(T^{n-1}),BSO(3))\times SO(3) }$$
in which the bottom map is injective in $\pi_1$ by inductive hypothesis, the lower vertical maps are homeomorphisms,
and the upper left vertical map is injective in $\pi_1$. Thus the map on top is also injective as wanted.
The proof for $U(2)$ is the same.
\end{proof} | 4,059 | 11,228 | en |
train | 0.4992.4 | \begin{remark}{\em We have the following observations.
\begin{enumerate}
\item The two-fold cover ${\bbb Z}_2\to {\bbb S}^3\times {\bbb S}^3\to SO(4)$ allows us to study $Hom({\bbb Z}^n,SO(4))$.
Let $M_n^+(SO(4))$ be the component covered by $Hom({\bbb Z}^n,{\bbb S}^3\times {\bbb S}^3)$. Since $Hom({\bbb Z}^n,{\bbb S}^3\times {\bbb S}^3)$ is
homeomorphic to $Hom({\bbb Z}^n,{\bbb S}^3)\times Hom({\bbb Z}^n,{\bbb S}^3)$, it follows that
$$\pi_1(M_n^+(SO(4)))={\bbb Z}_2^n$$
\item The space $Hom({\bbb Z}^2,SO(4))$ has two components. One is $M_2^+(SO(4)),$ which is covered by $\partial^{-1}_{SU(2)^2}(1,1)$,
and the other is covered by $\partial^{-1}_{SU(2)^2}(-1,-1)$, where $\partial_{SU(2)^2}$ is the commutator map of $SU(2)\times SU(2)$.
Recall $\partial^{-1}_{SU(2)}(-1)$ is homeomorphic to $SO(3)$ (see~\cite{AM}), so $\partial^{-1}_{SU(2)^2}(-1,-1)$ is
homeomorphic to $(SO(3)\times SO(3))/({\bbb Z}_2\times{\bbb Z}_2)$, where the group ${\bbb Z}_2\times{\bbb Z}_2$ acts by left diagonal multiplication when it is
thought of as the subgroup of $SO(3)$ generated by the transformations $(x_1,x_2,x_3)\mapsto(x_1,-x_2,-x_3)$ and
$(x_1,x_2,x_3)\mapsto(-x_1,x_2,-x_3)$.
\item Corollary~\ref{5.2} holds similarly for $SO(4)$, and trivially for $SU(2)$.
\end{enumerate}
}\end{remark} | 535 | 11,228 | en |
train | 0.4992.5 | \section{\bf Homological Computations}
In this section we compute the ${\bbb Z}_2$-cohomology of $M_n^+(SO(3))$. The ${\bbb Z}_2$-cohomology of the other components
of $M_n(SO(3))$ is well-known since these are all homeomorphic to ${\bbb S}^3/Q_8$.
To perform the computation we will use the description of $M_n^+(SO(3))$ that we
saw in the proof of Theorem~\ref{mainth}. The ingredients we have to consider are the spectral sequence of the fibration
${\bbb S}^2\times T^n_0\to(M_n^+-\{ x_0\})\to {\bbb R} P^\infty$ whose $E_2$-term is
$${\bbb Z}_2[u]\otimes E(v)\otimes E(x_1,\ldots,x_n)/(x_1\cdots x_n)$$
with $deg(u)=(1,0)$, $deg(v)=(0,2)$ and $deg(x_i)=(0,1)$;
and the spectral sequence of the fibration ${\bbb S}^2\times{\bbb S}^{n-1}\to N_n\cap (M_n^+-\{ x_0\})\to{\bbb R} P^\infty$ whose
$E_2$-term is $${\bbb Z}_2[u]\otimes E(v)\otimes E(w)$$
with $deg(u)=(1,0)$, $deg(v)=(0,2)$ and $deg(w)=(0,n-1)$. Note that in both cases $d_2(v)=u^2$, whereas $d_2(x_i)=0$ since
$H^1(M_n^+-\{ x_0\},{\bbb Z}_2)={\bbb Z}_2^{n+1}$. Therefore the first spectral sequence collapses at the third term.
As $d_n(w)=u^n$ and $d_j(w)=0$ for $j\neq n$, the second spectral sequence collapses at the third term when $n=2$
and at the fourth term when $n\geq 3$.
The last step is to use the Mayer-Vietoris long exact sequence of the pair $(M_n^+-\{ x_0\},N_n)$ which yields the following: for $n=2,3$,
$$H^q(M_2^+(SO(3)),{\bbb Z}_2)=\left\{\begin{array}{ccl}
{\bbb Z}_2 & & q=0\\
{\bbb Z}_2\oplus{\bbb Z}_2 & & q=1\\
{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2 & & q=2\\
{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2 & & q=3\\
{\bbb Z}_2 & & q=4\\
0 & & q\geq 5 \end{array}\right.$$
$$H^q(M_3^+(SO(3)),{\bbb Z}_2)=\left\{\begin{array}{ccl}
{\bbb Z}_2 & & q=0\\
{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2 & & q=1\\
{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2 & & q=2\\
{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2 & & q=3\\
{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2\oplus{\bbb Z}_2 & & q=4\\
{\bbb Z}_2 & & q=5\\
0 & & q\geq 6 \end{array}\right.$$
whereas for $n\geq 4$,
$$H^q(M_n^+(SO(3)),{\bbb Z}_2)=\left\{\begin{array}{ccl}
{\bbb Z}_2 & & q=0\\
& & \\
{\bbb Z}_2^n & & q=1\\
& & \\
{\bbb Z}_2^{{n\choose 1}+{n\choose 2}} & & q=2\\
& & \\
\displaystyle {\bbb Z}_2^{{n\choose q-2}+{n\choose q-1}+{n\choose q}} & & 3\leq q\leq n\\
& & \\
{\bbb Z}_2^{{n\choose n-1}+1} & & q=n+1\\
& & \\
{\bbb Z}_2 & & q=n+2 \\
& & \\
0 & & q\geq n+3 \end{array}\right.$$
Taking the alternating sum of the ${\bbb Z}_2$-Betti numbers listed above, the Euler characteristic of $M_n^+(SO(3))$ vanishes for every $n$:
$$\chi(M_n^+(SO(3)))=0;$$
for instance, $1-2+3-3+1=0$ when $n=2$ and $1-3+6-7+4-1=0$ when $n=3$.
\
{
\parbox{6cm}{Denis Sjerve\\
{\it Department of Mathematics},\\
University of British Columbia\\
Vancouver, B.C.\\
Canada\\
{\sf sjer@math.ubc.ca}}\
{
}\
\parbox{6cm}{Enrique Torres-Giese\\
{\it Department of Mathematics},\\
University of British Columbia\\
Vancouver, B.C.\\
Canada\\
{\sf enrique@math.ubc.ca}}}\
\end{document} | 1,476 | 11,228 | en |
train | 0.4993.0 | \begin{document}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{theorem}[thm]{Theorem}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem{corollary and definition}[thm]{Corollary and Definition}
\newtheorem{proposition}[thm]{Proposition}
\newtheorem{example}[thm]{Example}
\theoremstyle{definition}
\newtheorem{notation}[thm]{Notation}
\newtheorem{claim}[thm]{Claim}
\newtheorem{remark}[thm]{Remark}
\newtheorem{remarks}[thm]{Remarks}
\newtheorem{conjecture}[thm]{Conjecture}
\newtheorem{definition}[thm]{Definition}
\newtheorem{problem}[thm]{Problem}
\newcommand{{\rm Diff}}{{\rm Diff}}
\newcommand{\frak{z}}{\frak{z}}
\newcommand{{\rm zar}}{{\rm zar}}
\newcommand{{\rm an}}{{\rm an}}
\newcommand{{\rm red}}{{\rm red}}
\newcommand{{\rm codim}}{{\rm codim}}
\newcommand{{\rm rank}}{{\rm rank}}
\newcommand{{\rm Pic}}{{\rm Pic}}
\newcommand{{\rm Div}}{{\rm Div}}
\newcommand{{\rm Hom}}{{\rm Hom}}
\newcommand{{\rm im}}{{\rm im}}
\newcommand{{\rm Spec}}{{\rm Spec}}
\newcommand{{\rm sing}}{{\rm sing}}
\newcommand{{\rm reg}}{{\rm reg}}
\newcommand{{\rm char}}{{\rm char}}
\newcommand{{\rm Tr}}{{\rm Tr}}
\newcommand{{\rm Gal}}{{\rm Gal}}
\newcommand{{\rm Min \ }}{{\rm Min \ }}
\newcommand{{\rm Max \ }}{{\rm Max \ }}
\newcommand{\soplus}[1]{\stackrel{#1}{\oplus}}
\newcommand{{\rm dlog}\,}{{\rm dlog}\,}
\newcommand{\limdir}[1]{{\displaystyle{\mathop{\rm
lim}_{\buildrel\longrightarrow\over{#1}}}}\,}
\newcommand{\liminv}[1]{{\displaystyle{\mathop{\rm
lim}_{\buildrel\longleftarrow\over{#1}}}}\,}
\newcommand{{{\mathbb B}ox\kern-9.03pt\raise1.42pt\hbox{$\times$}}}{{{\mathbb B}ox\kern-9.03pt\raise1.42pt\hbox{$\times$}}}
\newcommand{\mbox{${\mathcal E}xt\,$}}{\mbox{${\mathcal E}xt\,$}}
\newcommand{\mbox{${\mathcal H}om\,$}}{\mbox{${\mathcal H}om\,$}}
\newcommand{{\rm coker}\,}{{\rm coker}\,}
\renewcommand{\mbox{ $\Longleftrightarrow$ }}{\mbox{ $\Longleftrightarrow$ }}
\newcommand{\mbox{$\,\>>>\hspace{-.5cm}\to\hspace{.15cm}$}}{\mbox{$\,\>>>\hspace{-.5cm}\to\hspace{.15cm}$}}
\catcode`\@=11
\def\opn#1#2{\def#1{\mathop{\kern0pt\fam0#2}\nolimits}}
\def\bold#1{{\bf #1}}
\def\mathpalette\underrightarrow@{\mathpalette\mathpalette\underrightarrow@@}
\def\mathpalette\underrightarrow@@#1#2{\vtop{\ialign{$##$\cr
\hfil#1#2\hfil\cr\noalign{\nointerlineskip}
#1{-}\mkern-6mu\cleaders\hbox{$#1\mkern-2mu{-}\mkern-2mu$}
\mkern-6mu{\to}\cr}}}
\let\underarrow\mathpalette\underrightarrow@
\def\mathpalette\underleftarrow@{\mathpalette\mathpalette\underleftarrow@@}
\def\mathpalette\underleftarrow@@#1#2{\vtop{\ialign{$##$\cr
\hfil#1#2\hfil\cr\noalign{\nointerlineskip}#1{\leftarrow}\mkern-6mu
\cleaders\hbox{$#1\mkern-2mu{-}\mkern-2mu$}
\mkern-6mu{-}\cr}}}
\let\amp@rs@nd@\relax
\newdimen\ex@
\ex@.2326ex
\newdimen\bigaw@
\newdimen\minaw@
\minaw@16.08739\ex@
\newdimen\minCDaw@
\minCDaw@2.5pc
\newif\ifCD@
\def\minCDarrowwidth#1{\minCDaw@#1}
\newenvironment{CD}{\@CD}{\cr\egroup\egroup}
\def\@CD{\def{\mathbb A}##1A##2A{\llap{$\vcenter{\hbox
{$\scriptstyle##1$}}$}{\mathbb B}ig\uparrow\rlap{$\vcenter{\hbox{
$\scriptstyle##2$}}$}&&}
\def{\mathbb V}##1V##2V{\llap{$\vcenter{\hbox
{$\scriptstyle##1$}}$}{\mathbb B}ig\downarrow\rlap{$\vcenter{\hbox{
$\scriptstyle##2$}}$}&&}
\def\={&\hskip.5em\mathrel
{\vbox{\hrule width\minCDaw@\vskip3\ex@\hrule width
\minCDaw@}}\hskip.5em&}
\def\Big\Vert&{{\mathbb B}ig{\mathbb V}ert&&}
\def&&{&&}
\def\vspace##1{\noalign{\vskip##1\relax}}\relax\mbox{ $\Longleftrightarrow$ }alse{
\fi\let\amp@rs@nd@&\mbox{ $\Longleftrightarrow$ }alse}\fi
{\mathbb C}D@true\vcenter\bgroup\relax\mbox{ $\Longleftrightarrow$ }alse{
\fi\let\\=\cr\mbox{ $\Longleftrightarrow$ }alse}\fi\tabskip\z@skip\baselineskip20\ex@
\lineskip3\ex@\lineskiplimit3\ex@\halign\bgroup
&
$\m@th##$
\cr}
\def\cr\egroup\egroup{\cr\egroup\egroup}
\def\>#1>#2>{\amp@rs@nd@\setbox\z@\hbox{$\scriptstyle
\;{#1}\;\;$}\setbox\@ne\hbox{$\scriptstyle\;{#2}\;\;$}\setbox\tw@
\hbox{$#2$}\ifCD@
\global\bigaw@\minCDaw@\else\global\bigaw@\minaw@\fi
\ifdim\wd\z@>\bigaw@\global\bigaw@\wd\z@\fi
\ifdim\wd\@ne>\bigaw@\global\bigaw@\wd\@ne\fi
\ifCD@\hskip.5em\fi
\ifdim\wd\tw@>\z@
\mathrel{\mathop{\hbox to\bigaw@{\rightarrowfill}}\limits^{#1}_{#2}}\else
\mathrel{\mathop{\hbox to\bigaw@{\rightarrowfill}}\limits^{#1}}\fi
\ifCD@\hskip.5em\fi\amp@rs@nd@}
\def\<#1<#2<{\amp@rs@nd@\setbox\z@\hbox{$\scriptstyle
\;\;{#1}\;$}\setbox\@ne\hbox{$\scriptstyle\;\;{#2}\;$}\setbox\tw@
\hbox{$#2$}\ifCD@
\global\bigaw@\minCDaw@\else\global\bigaw@\minaw@\fi
\ifdim\wd\z@>\bigaw@\global\bigaw@\wd\z@\fi
\ifdim\wd\@ne>\bigaw@\global\bigaw@\wd\@ne\fi
\ifCD@\hskip.5em\fi
\ifdim\wd\tw@>\z@
\mathrel{\mathop{\hbox to\bigaw@{\leftarrowfill}}\limits^{#1}_{#2}}\else
\mathrel{\mathop{\hbox to\bigaw@{\leftarrowfill}}\limits^{#1}}\fi
\ifCD@\hskip.5em\fi\amp@rs@nd@}
\newenvironment{CDS}{\@CDS}{\cr\egroup\egroupS}
\def\@CDS{\def{\mathbb A}##1A##2A{\llap{$\vcenter{\hbox
{$\scriptstyle##1$}}$}{\mathbb B}ig\uparrow\rlap{$\vcenter{\hbox{
$\scriptstyle##2$}}$}&}
\def{\mathbb V}##1V##2V{\llap{$\vcenter{\hbox
{$\scriptstyle##1$}}$}{\mathbb B}ig\downarrow\rlap{$\vcenter{\hbox{
$\scriptstyle##2$}}$}&}
\def\={&\hskip.5em\mathrel
{\vbox{\hrule width\minCDaw@\vskip3\ex@\hrule width
\minCDaw@}}\hskip.5em&}
\def\Big\Vert&{{\mathbb B}ig{\mathbb V}ert&}
\def&{&}
\def&&{&&}
\def\SE##1E##2E{\slantedarrow(0,18)(4,-3){##1}{##2}&}
\def\SW##1W##2W{\slantedarrow(24,18)(-4,-3){##1}{##2}&}
\def{\mathbb N}E##1E##2E{\slantedarrow(0,0)(4,3){##1}{##2}&}
\def{\mathbb N}W##1W##2W{\slantedarrow(24,0)(-4,3){##1}{##2}&}
\def\slantedarrow(##1)(##2)##3##4{
\thinlines\unitlength1pt\lower 6.5pt\hbox{\begin{picture}(24,18)
\put(##1){\vector(##2){24}}
\put(0,8){$\scriptstyle##3$}
\put(20,8){$\scriptstyle##4$}
\end{picture}}}
\def\vspace##1{\noalign{\vskip##1\relax}}\relax\mbox{ $\Longleftrightarrow$ }alse{
\fi\let\amp@rs@nd@&\mbox{ $\Longleftrightarrow$ }alse}\fi
{\mathbb C}D@true\vcenter\bgroup\relax\mbox{ $\Longleftrightarrow$ }alse{
\fi\let\\=\cr\mbox{ $\Longleftrightarrow$ }alse}\fi\tabskip\z@skip\baselineskip20\ex@
\lineskip3\ex@\lineskiplimit3\ex@\halign\bgroup
&
$\m@th##$
\cr}
\def\cr\egroup\egroupS{\cr\egroup\egroup}
\newdimen{\rm Tr}iCDarrw@
\newif\ifTriV@
\newenvironment{TriCDV}{\TriV@true\def\TriCDpos@{6}\@TriCD}{\egroup}
\newenvironment{TriCDA}{\TriV@false\def\TriCDpos@{10}\@TriCD}{\egroup}
\def\TriV@true\def\TriCDpos@{6}\@TriCD{{\rm Tr}iV@true\def{\rm Tr}iCDpos@{6}\@TriCD}
\def\TriV@false\def\TriCDpos@{10}\@TriCD{{\rm Tr}iV@false\def{\rm Tr}iCDpos@{10}\@TriCD}
\def\@TriCD#1#2#3#4#5#6{
\setbox0\hbox{$\ifTriV@#6\else#1\fi$}
{\rm Tr}iCDarrw@=\wd0 \advance{\rm Tr}iCDarrw@ 24pt
\advance{\rm Tr}iCDarrw@ -1em
\def\SE##1E##2E{\slantedarrow(0,18)(2,-3){##1}{##2}&}
\def\SW##1W##2W{\slantedarrow(12,18)(-2,-3){##1}{##2}&}
\def{\mathbb N}E##1E##2E{\slantedarrow(0,0)(2,3){##1}{##2}&}
\def{\mathbb N}W##1W##2W{\slantedarrow(12,0)(-2,3){##1}{##2}&}
\def\slantedarrow(##1)(##2)##3##4{\thinlines\unitlength1pt
\lower 6.5pt\hbox{\begin{picture}(12,18)
\put(##1){\vector(##2){12}}
\put(-4,{\rm Tr}iCDpos@){$\scriptstyle##3$}
\put(12,{\rm Tr}iCDpos@){$\scriptstyle##4$}
\end{picture}}}
\def\={\mathrel {\vbox{\hrule
width{\rm Tr}iCDarrw@\vskip3\ex@\hrule width
{\rm Tr}iCDarrw@}}}
\def\>##1>>{\setbox\z@\hbox{$\scriptstyle
\;{##1}\;\;$}\global\bigaw@{\rm Tr}iCDarrw@
\ifdim\wd\z@>\bigaw@\global\bigaw@\wd\z@\fi
\hskip.5em
\mathrel{\mathop{\hbox to {\rm Tr}iCDarrw@
{\rightarrowfill}}\limits^{##1}}
\hskip.5em}
\def\<##1<<{\setbox\z@\hbox{$\scriptstyle
\;{##1}\;\;$}\global\bigaw@{\rm Tr}iCDarrw@
\ifdim\wd\z@>\bigaw@\global\bigaw@\wd\z@\fi
\mathrel{\mathop{\hbox to\bigaw@{\leftarrowfill}}\limits^{##1}}
}
{\mathbb C}D@true\vcenter\bgroup\relax\mbox{ $\Longleftrightarrow$ }alse{\fi\let\\=\cr\mbox{ $\Longleftrightarrow$ }alse}\fi
\tabskip\z@skip\baselineskip20\ex@
\lineskip3\ex@\lineskiplimit3\ex@
\ifTriV@
\halign\bgroup
&
$\m@th##$
\cr
#1&\multispan3
$#2$
\\
\\
&\cr\egroup
\else
\halign\bgroup
&
$\m@th##$
\cr
&\\%
\\
#4&\multispan3
$#5$
\cr\egroup
\fi}
\def\egroup{\egroup}
\newcommand{{\mathcal A}}{{\mathcal A}}
\newcommand{{\mathcal B}}{{\mathcal B}}
\newcommand{{\mathcal C}}{{\mathcal C}}
\newcommand{{\mathcal D}}{{\mathcal D}}
\newcommand{{\mathcal E}}{{\mathcal E}}
\newcommand{{\mathcal F}}{{\mathcal F}}
\newcommand{{\mathcal G}}{{\mathcal G}}
\newcommand{{\mathcal H}}{{\mathcal H}}
\newcommand{{\mathcal I}}{{\mathcal I}}
\newcommand{{\mathcal J}}{{\mathcal J}}
\newcommand{{\mathcal K}}{{\mathcal K}}
\newcommand{{\mathcal L}}{{\mathcal L}}
\newcommand{{\mathcal M}}{{\mathcal M}}
\newcommand{{\mathcal N}}{{\mathcal N}}
\newcommand{{\mathcal O}}{{\mathcal O}}
\newcommand{{\mathcal P}}{{\mathcal P}}
\newcommand{{\mathcal Q}}{{\mathcal Q}}
\newcommand{{\mathcal R}}{{\mathcal R}}
\newcommand{{\mathcal S}}{{\mathcal S}}
\newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{\mathcal U}}{{\mathcal U}}
\newcommand{{\mathcal V}}{{\mathcal V}}
\newcommand{{\mathcal W}}{{\mathcal W}}
\newcommand{{\mathcal X}}{{\mathcal X}}
\newcommand{{\mathcal Y}}{{\mathcal Y}}
\newcommand{{\mathcal Z}}{{\mathcal Z}} | 3,943 | 14,099 | en |
train | 0.4993.1 | \newdimen{\rm Tr}iCDarrw@
\newif\ifTriV@
\newenvironment{TriCDV}{\TriV@true\def\TriCDpos@{6}\@TriCD}{\egroup}
\newenvironment{TriCDA}{\TriV@false\def\TriCDpos@{10}\@TriCD}{\egroup}
\def\TriV@true\def\TriCDpos@{6}\@TriCD{{\rm Tr}iV@true\def{\rm Tr}iCDpos@{6}\@TriCD}
\def\TriV@false\def\TriCDpos@{10}\@TriCD{{\rm Tr}iV@false\def{\rm Tr}iCDpos@{10}\@TriCD}
\def\@TriCD#1#2#3#4#5#6{
\setbox0\hbox{$\ifTriV@#6\else#1\fi$}
{\rm Tr}iCDarrw@=\wd0 \advance{\rm Tr}iCDarrw@ 24pt
\advance{\rm Tr}iCDarrw@ -1em
\def\SE##1E##2E{\slantedarrow(0,18)(2,-3){##1}{##2}&}
\def\SW##1W##2W{\slantedarrow(12,18)(-2,-3){##1}{##2}&}
\def{\mathbb N}E##1E##2E{\slantedarrow(0,0)(2,3){##1}{##2}&}
\def{\mathbb N}W##1W##2W{\slantedarrow(12,0)(-2,3){##1}{##2}&}
\def\slantedarrow(##1)(##2)##3##4{\thinlines\unitlength1pt
\lower 6.5pt\hbox{\begin{picture}(12,18)
\put(##1){\vector(##2){12}}
\put(-4,{\rm Tr}iCDpos@){$\scriptstyle##3$}
\put(12,{\rm Tr}iCDpos@){$\scriptstyle##4$}
\end{picture}}}
\def\={\mathrel {\vbox{\hrule
width{\rm Tr}iCDarrw@\vskip3\ex@\hrule width
{\rm Tr}iCDarrw@}}}
\def\>##1>>{\setbox\z@\hbox{$\scriptstyle
\;{##1}\;\;$}\global\bigaw@{\rm Tr}iCDarrw@
\ifdim\wd\z@>\bigaw@\global\bigaw@\wd\z@\fi
\hskip.5em
\mathrel{\mathop{\hbox to {\rm Tr}iCDarrw@
{\rightarrowfill}}\limits^{##1}}
\hskip.5em}
\def\<##1<<{\setbox\z@\hbox{$\scriptstyle
\;{##1}\;\;$}\global\bigaw@{\rm Tr}iCDarrw@
\ifdim\wd\z@>\bigaw@\global\bigaw@\wd\z@\fi
\mathrel{\mathop{\hbox to\bigaw@{\leftarrowfill}}\limits^{##1}}
}
{\mathbb C}D@true\vcenter\bgroup\relax\mbox{ $\Longleftrightarrow$ }alse{\fi\let\\=\cr\mbox{ $\Longleftrightarrow$ }alse}\fi
\tabskip\z@skip\baselineskip20\ex@
\lineskip3\ex@\lineskiplimit3\ex@
\ifTriV@
\halign\bgroup
&
$\m@th##$
\cr
#1&\multispan3
$#2$
\\
\\
&\cr\egroup
\else
\halign\bgroup
&
$\m@th##$
\cr
&\\%
\\
#4&\multispan3
$#5$
\cr\egroup
\fi}
\def\egroup{\egroup}
\newcommand{{\mathcal A}}{{\mathcal A}}
\newcommand{{\mathcal B}}{{\mathcal B}}
\newcommand{{\mathcal C}}{{\mathcal C}}
\newcommand{{\mathcal D}}{{\mathcal D}}
\newcommand{{\mathcal E}}{{\mathcal E}}
\newcommand{{\mathcal F}}{{\mathcal F}}
\newcommand{{\mathcal G}}{{\mathcal G}}
\newcommand{{\mathcal H}}{{\mathcal H}}
\newcommand{{\mathcal I}}{{\mathcal I}}
\newcommand{{\mathcal J}}{{\mathcal J}}
\newcommand{{\mathcal K}}{{\mathcal K}}
\newcommand{{\mathcal L}}{{\mathcal L}}
\newcommand{{\mathcal M}}{{\mathcal M}}
\newcommand{{\mathcal N}}{{\mathcal N}}
\newcommand{{\mathcal O}}{{\mathcal O}}
\newcommand{{\mathcal P}}{{\mathcal P}}
\newcommand{{\mathcal Q}}{{\mathcal Q}}
\newcommand{{\mathcal R}}{{\mathcal R}}
\newcommand{{\mathcal S}}{{\mathcal S}}
\newcommand{{\mathcal T}}{{\mathcal T}}
\newcommand{{\mathcal U}}{{\mathcal U}}
\newcommand{{\mathcal V}}{{\mathcal V}}
\newcommand{{\mathcal W}}{{\mathcal W}}
\newcommand{{\mathcal X}}{{\mathcal X}}
\newcommand{{\mathcal Y}}{{\mathcal Y}}
\newcommand{{\mathcal Z}}{{\mathcal Z}}
\newcommand{{\mathbb A}}{{\mathbb A}}
\newcommand{{\mathbb B}}{{\mathbb B}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb D}}{{\mathbb D}}
\newcommand{{\mathbb E}}{{\mathbb E}}
\newcommand{{\mathbb F}}{{\mathbb F}}
\newcommand{{\mathbb G}}{{\mathbb G}}
\newcommand{{\mathbb H}}{{\mathbb H}}
\newcommand{{\mathbb I}}{{\mathbb I}}
\newcommand{{\mathbb J}}{{\mathbb J}}
\newcommand{{\mathbb M}}{{\mathbb M}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\renewcommand{{\mathbb P}}{{\mathbb P}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb T}}{{\mathbb T}}
\newcommand{{\mathbb U}}{{\mathbb U}}
\newcommand{{\mathbb V}}{{\mathbb V}}
\newcommand{{\mathbb W}}{{\mathbb W}}
\newcommand{{\mathbb X}}{{\mathbb X}}
\newcommand{{\mathbb Y}}{{\mathbb Y}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\title{A Note on an Asymptotically Good Tame Tower}
\author{Siman Yang}
\thanks{The author was partially supported by the program for Chang Jiang Scholars and Innovative Research Team in University}
\begin{abstract}
The explicit construction of towers of function fields with many rational points relative to the genus plays a key role in the construction of asymptotically good algebraic-geometric codes. In 1997 Garcia, Stichtenoth and Thomas [6] exhibited two recursive asymptotically good Kummer towers over any non-prime field. Wulftange determined the limit of one of these towers in his PhD thesis [13]. In this paper we determine the limit of the other tower [14].
\end{abstract}
\maketitle
\noindent \underline{Keywords:} Function fields tower, rational
places, genus.
\ \\ | 1,708 | 14,099 | en |
train | 0.4993.2 | \section{Introduction}
Let $K={\mathbb F}_q$ be the finite field of cardinality $q$, and let $\mathcal{F}=(F_i)_{i\geq 0}$ be a sequence of
algebraic function fields each defined over $K$. If $F_i\varsubsetneqq F_{i+1}$ and $K$ is the full constant field of $F_i$ for all $i\geq 0$, and $g(F_j)>1$ for some $j\geq 0$, we call $\mathcal{F}$ a tower.
Denote by $g(F)$ the genus of a function field $F/{\mathbb F}_q$ and by $N(F)$ the number of ${\mathbb F}_q$-rational places of $F$.
It is well known that for a given genus $g$ and finite field ${\mathbb F}_q$, the number of ${\mathbb F}_q$-rational places of a function field is bounded above, by Weil's theorem (cf. [11]). Let $N_q(g):=\max \{N(F)\,|\,F\,\,\mbox{is a function field of genus}\, g\,\mbox{over}\,{\mathbb F}_q\}$ and let
\begin{equation*}
A(q)=\displaystyle\limsup_{g\rightarrow \infty}N_q(g)/g,
\end{equation*}
the Drinfeld-Vladut bound [2] provides a general upper bound for $A(q)$:
\begin{equation*}
A(q)\leq \sqrt{q}-1.
\end{equation*}
Ihara [7] and Tsfasman, Vladut and Zink [12] independently showed that this bound is attained when $q$ is a square, using the theory of Shimura modular curves and elliptic modular curves, respectively. For non-square $q$ the exact value of $A(q)$ is unknown. Serre [10] first showed that $A(q)$ is positive for any prime power $q$:
$$
A(q)\geq c\cdot \log q
$$
with some constant $c>0$ independent of $q$. It was proved in [6]
that for any tower $\mathcal{F}=(F_i)_{i\geq 0}$ defined over
${\mathbb F}_q$ the sequence $(N(F_n)/g(F_n))_{n\geq 0}$ is convergent. We
define the limit of the tower as
$$
\lambda(\mathcal{F})=\displaystyle\lim _{i\rightarrow \infty} N(F_i)/g(F_i).
$$
Clearly, $0\leq \lambda(\mathcal{F})\leq A(q)$. We call a tower $\mathcal{F}$ asymptotically good if
$\lambda(\mathcal{F})>0$.
To be useful towards the aim of yielding asymptotically good codes, a tower must be asymptotically good.
Practical implementation of the codes also requires explicit equations for each extension step in the tower. In 1995, Garcia and Stichtenoth [4] exhibited the first explicit tower of Artin-Schreier extensions over any finite field of square cardinality which met the upper bound of Drinfeld and Vladut.
In 1997 Garcia, Stichtenoth and Thomas [6] exhibited two explicit asymptotically good Kummer towers over any non-prime field which were later generalized by Deolalikar [1].
For other explicit tame towers, see [3], [5], [9].
The two asymptotically good Kummer towers in [6] are given as follows.
Let $q=p^e$ with $e>1$, and let $F_n={\mathbb F}_q(x_0, \cdots, x_n)$ with
\begin{equation}
x_{i+1}^{\frac{q-1}{p-1}}+(x_i+1)^{\frac{q-1}{p-1}}=1 \,\,\,\,(i=0, \cdots, n-1).
\end{equation}
Then $\mathcal{F}=(F_0, F_1, \cdots)$ is an asymptotically good
tower over ${\mathbb F}_q$ with $\lambda (\mathcal{F})\geq 2/(q-2)$.
Let $q$ be a prime power larger than two, and let $F_n={\mathbb F}_q(x_0, \cdots, x_n)$ with
\begin{equation}
x_{i+1}^{q-1}+(x_i+1)^{q-1}=1 \,\,\,\,(i=0, \cdots, n-1).
\end{equation}
Then $\mathcal{F}=(F_0, F_1, \cdots)$ is an asymptotically good tower over ${\mathbb F}_{q^2}$ with $\lambda (\mathcal{F})\geq 2/(q-2)$.
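For example, taking $q=3$ in Eq. (2) gives the tower over ${\mathbb F}_9$ defined recursively by $x_{i+1}^{2}+(x_i+1)^{2}=1,$ and the bound above reads
$$\lambda(\mathcal{F})\geq \frac{2}{q-2}=2=\sqrt{9}-1,$$
so in this case the tower attains the Drinfeld-Vladut bound over ${\mathbb F}_9$.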
Wulftange showed in [13] that $\lambda (\mathcal{F})=2/(q-2)$ for the first tower; we will show in the next section
that the limit of the second tower is also $2/(q-2)$.
train | 0.4993.3 | \section{The limit of the tower}
\begin{lemma}
Let $F_1=K(x,y)$ be the function field defined by Eq. (2).\\
Over $K(x)$ exactly the zeroes of $x-\alpha$, $\alpha \in {\mathbb F}_q \backslash \{-1\}$ are ramified in $F_1$, each of ramification index $q-1$.\\
Over $K(y)$ exactly the zeroes of $y-\alpha$, $\alpha \in {\mathbb F}_q^*$ are ramified in $F_1$, each of ramification index $q-1$.
\end{lemma}
\begin{proof} This follows from the theory of Kummer extensions (cf. [11, Chap. III.7.3]).
\end{proof}
\begin{proposition}
Let $P_\alpha \in {\mathcal P} (F_0)$ be a zero of $x_0-\alpha$, $\alpha \in {\mathbb F}_q\backslash \{-1\}$. Then $P_\alpha$ is totally ramified in $F_{n+1}/F_n$ for any $n\geq 0$.
\end{proposition}
\begin{proof} Let $P\in {\mathcal P} (F_n)$ be a place lying above $P_\alpha$ for some $\alpha \in {\mathbb F}_q\backslash \{-1\}$. From Eq. (2) one checks that $x_1(P)=x_2(P)=\cdots=x_n(P)=0$. Thus, for $i=0, 1, \cdots, n$, the restriction of (an extension of) $P$ to $K(x_i, x_{i+1})$ has ramification index $q-1$ over $K(x_i)$ and ramification index $1$ over $K(x_{i+1})$. The proof is finished by diagram chasing and repeated application of Abhyankar's lemma.
\end{proof}
Let $Q \in {\mathcal P} (F_n)$ be a place ramified in $F_{n+1}$. Then $P:=Q\cap K(x_n)$ is ramified in $K(x_n, x_{n+1})$ due to Abhyankar's lemma. From Lemma 2.1, $x_n(P)=\alpha$ for some $\alpha \in {\mathbb F}_q\backslash \{-1\}$. If $\alpha \not= 0$, $P$ is ramifed in $K(x_{n-1}, x_n)$ of ramification index $q-1$ due to Lemma 2.1, and due to Abhyankar's lemma, the place in $K(x_{n-1},x_n)$ lying above $P$ is unramified in $K(x_{n-1}, x_n, x_{n+1})$, again by Abhyankar's lemma, $Q$ is unramified in $F_{n+1}$. Thus $Q$ is a zero of $x_n$. This implies $Q$ is a zero of $x_{n-1}-\beta$ for some $\beta \in {\mathbb F}_q\backslash \{-1\}$. From Eq. (2), one has the following possibilities for a place $Q \in {\mathcal P} (F_n)$ ramified in $F_{n+1}$.
(a) The place $Q$ is a common zero of $x_0, x_1, \cdots, x_n$.
(b) There is some $t$, $-1\leq t <n-1$ such that
(b1) $Q$ is a common zero of $x_{t+2}, x_{t+3}, \cdots, x_n$.
(b2) $Q$ is a zero of $x_{t+1}-\alpha $ for some $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$.
(b3) $Q$ is a common zero of $x_0 +1, x_1 +1, \cdots, x_t +1$.
(Note that condition (b2) implies (b1) and (b3)).
\begin{lemma}
Let $-1\leq t <n$ and $Q\in {\mathcal P} (F_n)$ be a place which is a zero of $x_{t+1}-\alpha$ for some $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$. Then one has
(i)\,\, If $n<2t+2$, then $Q$ is unramified in $F_{n+1}$.
(ii) If $n\geq 2t+2$, then $Q$ is ramified in $F_{n+1}$ of ramification index $q-1$.
\end{lemma}
\begin{proof}
The assertion in (i) and (ii) follow by diagram chasing with the help of Lemma 2.1 and repeated applications of Abhyankar's lemma.
\end{proof}
For $0 \leq t <\lfloor n/2 \rfloor $ and $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$, set
$X_{t, \alpha}:=\{ Q\in {\mathcal P} (F_n)| Q \,\,\text{is a zero of}\,\, x_{t+1}-\alpha \}$
and $A_{t, \alpha}:=\displaystyle\sum _{Q\in X_{t, \alpha}}Q$. Denote by $Q_{t+1}$ the restriction of $Q$ to $K(x_{t+1})$, we have $[F_n:K(x_{t+1})]=(q-1)^n$ and $e(Q|Q_{t+1})=(q-1)^{n-t-1}$. Then deg $A_{t, \alpha}=(q-1)^{t+1}$ follows from the fundemental equality $\sum e_i f_i =n$. Combining the above results one obtains
\begin{align}
\text{deg Diff}(F_{n+1}/F_n)&=(q-1)(q-2)+\displaystyle\sum _{\alpha \in {\mathbb F}_q^*\backslash \{-1\}} \displaystyle\sum _{t=0}^{\lfloor n/2 \rfloor -1}(q-2)(q-1)^{t+1}\\
&=(q-2)(q-1)^{\lfloor n/2 \rfloor +1}.
\end{align}
Now we can easily determine the genus of $F_n$ by applying the transitivity of different exponents and Hurwitz genus formula. The result is:
\[g(F_{n+1})=\left\{
\begin{array}{cccccc}
(q-2)(q-1)^{n+1}/2-(q-1)^{n/2+1}+1,\,\, \mbox{if}\,\, n\,\, \mbox{is even}, \\
(q-2)(q-1)^{n+1}/2-q(q-1)^{(n+1)/2}/2+1 ,\,\, \mbox{if}\,\, n\,\, \mbox{is odd}.
\end{array}\right.
\]
Thus $\gamma (\mathcal{F}):=\displaystyle\lim_{n\rightarrow \infty} g(F_n)/[F_n:F_0]=(q-2)/2$.
\begin{remark}
Note that from the proof of [6, Theorem 2.1 and Example 2.4],
$\gamma(\mathcal{F})$ is upper bounded by $(q-2)/2$.
\end{remark}
Next we consider the rational places in each function field $F_n$.
First we consider places over $P_{\infty}$. It is easy to see that
$P_{\infty}$ splits completely in the tower. From Prop. 2.2,
there's a unique ${\mathbb F}_q$-rational place in $F_n$ over $P_{\alpha}$
for any $\alpha \in {\mathbb F}_q\backslash \{-1\}$. Then we consider the
$K$-rational place over $P_{-1}$ in $F_n$. Let $0 \leq t <n$ and
$Q\in {\mathcal P} (F_n)$ be a place which is a zero of $x_{t+1}-\alpha$
for some $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$. We study the
condition for such place $Q$ to be $K$-rational.
\begin{lemma}
Let $Q'$ be a place of $F_2$ and $Q'$ is a zero of $x_1-\beta$ for some $\beta \in {\mathbb F}_q^*\backslash \{-1\}$. Then, if char$(\mathcal{F})\not=2$, $Q'$ is not a ${\mathbb F}_q$-rational place and $Q'$ is a ${\mathbb F}_{q^2}$-rational place if and only if $\beta =-1/2$; if
char$(\mathcal{F})=2$, $Q'$ is not a ${\mathbb F}_{q^2}$-rational place.
\end{lemma}
\begin{proof}
Note that $x_2$ and $x_0+1$ both are $Q'$-prime elements. Eq. (2) implies $(\frac{x_2}{x_0+1})^{q-1}=\frac{x_1}{1+x_1}$, which is equivalent to $\beta /(1+\beta)$. $(\frac{x_2}{x_0+1})^{q-1}(Q')\not=1$ implies $Q'$ is not ${\mathbb F}_q$-rational, and $(\frac{x_2}{x_0+1})^{q^2-1}(Q')=1$ if and only if $\beta =-1/2$ as $\beta \in {\mathbb F}_q^*\backslash \{-1\}$.
\end{proof}
We generalize this result to the following proposition.
\begin{proposition}
Assume char$(\mathcal{F})$ is odd. Fix positive integers $t\leq m$. There are $2^{t-1}(q-1)$ many ${\mathbb F}_{q^2}\backslash {\mathbb F}_q$-rational places $Q$ in ${\mathbb F}_{q^2}(x_{m-t}, x_{m-t+1}, \cdots, x_{m+t})$ which are zeroes of $x_m-\beta$ for some $\beta \in {\mathbb F}_q^*\backslash \{-1\}$ if $q\equiv -1$ (mod $2^t$), with each of them corresponds to a tuple $(\alpha _1, \alpha _2, \cdots, \alpha _t)$ satisfying
\[\left\{
\begin{array}{cccccc}
x_m\equiv -1/2,\\
x_{m+1}/(x_{m-1}+1)&\equiv& \alpha _1,\,\,\mbox{with}\,\, \alpha _1 ^2&=&-1,\\
x_{m+2}/(x_{m-2}+1)&\equiv& \alpha _2,\,\,\mbox{with}\,\,\alpha _2 ^2&=&-1/\alpha_1,\\
\cdots &\equiv&\cdots, \,\, \cdots&=&\cdots,\\
x_{m+t-1}/(x_{m-t+1}+1)&\equiv& \alpha _{t-1},\,\,\mbox{with}\,\,\alpha_{t-1}^2&=&-1/\alpha_{t-2},\\
x_{m+t}/(x_{m-t}+1)&\equiv& \alpha _t,\,\,\mbox{with}\,\,\alpha _t ^{q-1}&=&-\alpha_{t-1}.
\end{array}\right.
\]
\end{proposition}
\begin{proof}
Prove by induction on $t$. For $t=1$, this is the case in Lemma 2.5, here we take $\alpha _0=1$. For $t\geq 1$, it is easily checked $(\frac{x_{m+t+1}}{x_{m-t-1}+1})^{q-1} \equiv \frac{-x_{m+t}}{x_{m-t}+1}$ from definition. Thus, $\alpha _{t+1}\in {\mathbb F}_{q^2}$ if and only if $\alpha_t^{q+1}=1$. By induction hypothesis on $t$, $\alpha _t ^{q-1}=-\alpha _{t-1}$. Therefore $Q$ is a ${\mathbb F}_{q^2}$-rational place implies $\alpha _t^2=-1/\alpha_{t-1}$. Note $\alpha_{t-1}^{2^{t-1}}=-1$. Let $q=2^tk-1$, we have $(-1)^k=1$, thus $k$ is even, i.e., $q\equiv -1$ (mod $2^{t+1}$). This finishes the induction on $t+1$.
\end{proof}
Using this proposition and Lemma 2.3, we yield the following result.
\begin{proposition}
Assume char$(\mathcal{F})$ is odd. Suppose $2^l||(q+1)$. The number of ${\mathbb F}_{q^2}$-rational place in $F_n$ which is a zero of $x_m-\alpha (0<m\leq n)$ for any $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$ is counted as below.
\[\left\{
\begin{array}{cccccc}
2^{m-1}(q-1),\,\, &\mbox{when}&\, 1\leq m \leq n/2\,\, \mbox{and}\,\, m\leq l, \\
0,\,\, &\mbox{when}&\, 1\leq m \leq n/2\,\, \mbox{and}\,\, m>l, \\
2^{n-m-1}(q-1),\,\, &\mbox{when}&\, n>m>n/2\,\, \mbox{and}\,\, n-m\leq l, \\
0,\,\, &\mbox{when}&\, n>m>n/2\,\, \mbox{and}\,\, n-m>l, \\
q-2,\,\, &\mbox{when}&\, m=n.
\end{array}\right.
\]
\end{proposition}
\begin{proof}
Let $0<m<n$ and $a=\min \{m, n-m \}$. If $a=m$ (resp. $a=n-m$), from Lemma 2.5, there exists ${\mathbb F}_{q^2}$-rational place in $F_{2m}$ (resp. $K(x_{2m-n}, \cdots, x_n)$) with $x_m\equiv \alpha$ for some $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$ if and only if $2^a||(q+1)$, the number of such places is $2^{a-1}(q-1)$, and all these places totally ramified in $F_n$ according to Lemma 2.1.
\end{proof} | 3,536 | 14,099 | en |
train | 0.4993.4 | We generalize this result to the following proposition.
\begin{proposition}
Assume char$(\mathcal{F})$ is odd. Fix positive integers $t\leq m$. There are $2^{t-1}(q-1)$ many ${\mathbb F}_{q^2}\backslash {\mathbb F}_q$-rational places $Q$ in ${\mathbb F}_{q^2}(x_{m-t}, x_{m-t+1}, \cdots, x_{m+t})$ which are zeroes of $x_m-\beta$ for some $\beta \in {\mathbb F}_q^*\backslash \{-1\}$ if $q\equiv -1$ (mod $2^t$), with each of them corresponds to a tuple $(\alpha _1, \alpha _2, \cdots, \alpha _t)$ satisfying
\[\left\{
\begin{array}{cccccc}
x_m\equiv -1/2,\\
x_{m+1}/(x_{m-1}+1)&\equiv& \alpha _1,\,\,\mbox{with}\,\, \alpha _1 ^2&=&-1,\\
x_{m+2}/(x_{m-2}+1)&\equiv& \alpha _2,\,\,\mbox{with}\,\,\alpha _2 ^2&=&-1/\alpha_1,\\
\cdots &\equiv&\cdots, \,\, \cdots&=&\cdots,\\
x_{m+t-1}/(x_{m-t+1}+1)&\equiv& \alpha _{t-1},\,\,\mbox{with}\,\,\alpha_{t-1}^2&=&-1/\alpha_{t-2},\\
x_{m+t}/(x_{m-t}+1)&\equiv& \alpha _t,\,\,\mbox{with}\,\,\alpha _t ^{q-1}&=&-\alpha_{t-1}.
\end{array}\right.
\]
\end{proposition}
\begin{proof}
Prove by induction on $t$. For $t=1$, this is the case in Lemma 2.5, here we take $\alpha _0=1$. For $t\geq 1$, it is easily checked $(\frac{x_{m+t+1}}{x_{m-t-1}+1})^{q-1} \equiv \frac{-x_{m+t}}{x_{m-t}+1}$ from definition. Thus, $\alpha _{t+1}\in {\mathbb F}_{q^2}$ if and only if $\alpha_t^{q+1}=1$. By induction hypothesis on $t$, $\alpha _t ^{q-1}=-\alpha _{t-1}$. Therefore $Q$ is a ${\mathbb F}_{q^2}$-rational place implies $\alpha _t^2=-1/\alpha_{t-1}$. Note $\alpha_{t-1}^{2^{t-1}}=-1$. Let $q=2^tk-1$, we have $(-1)^k=1$, thus $k$ is even, i.e., $q\equiv -1$ (mod $2^{t+1}$). This finishes the induction on $t+1$.
\end{proof}
Using this proposition and Lemma 2.3, we yield the following result.
\begin{proposition}
Assume char$(\mathcal{F})$ is odd. Suppose $2^l||(q+1)$. The number of ${\mathbb F}_{q^2}$-rational place in $F_n$ which is a zero of $x_m-\alpha (0<m\leq n)$ for any $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$ is counted as below.
\[\left\{
\begin{array}{cccccc}
2^{m-1}(q-1),\,\, &\mbox{when}&\, 1\leq m \leq n/2\,\, \mbox{and}\,\, m\leq l, \\
0,\,\, &\mbox{when}&\, 1\leq m \leq n/2\,\, \mbox{and}\,\, m>l, \\
2^{n-m-1}(q-1),\,\, &\mbox{when}&\, n>m>n/2\,\, \mbox{and}\,\, n-m\leq l, \\
0,\,\, &\mbox{when}&\, n>m>n/2\,\, \mbox{and}\,\, n-m>l, \\
q-2,\,\, &\mbox{when}&\, m=n.
\end{array}\right.
\]
\end{proposition}
\begin{proof}
Let $0<m<n$ and $a=\min \{m, n-m \}$. If $a=m$ (resp. $a=n-m$), from Lemma 2.5, there exists ${\mathbb F}_{q^2}$-rational place in $F_{2m}$ (resp. $K(x_{2m-n}, \cdots, x_n)$) with $x_m\equiv \alpha$ for some $\alpha \in {\mathbb F}_q^*\backslash \{-1\}$ if and only if $2^a||(q+1)$, the number of such places is $2^{a-1}(q-1)$, and all these places totally ramified in $F_n$ according to Lemma 2.1.
\end{proof}
Hence, the number of ${\mathbb F}_{q^2}$-rational place in $F_n$ lying
above $P_{-1}$ is
\[\left\{
\begin{array}{cccccc}
(q-1)(2^{l+1}-1),\,\, &\mbox{if}& \,\, n>2l, \\
(q-1)(2^{(n+1)/2}-1), \,\, &\mbox{if n is odd}&\mbox{and}\, n\leq 2l, \\
(q-1)(3\times 2^{n/2-1}-1), \,\, &\mbox{if n is even}&\mbox{and}\, n\leq 2l.
\end{array}\right.
\]
\begin{remark}
If char$(\mathcal{F})>2$, among all ${\mathbb F}_{q^2}$-rational place in $F_n$ lying above $P_{-1}$, exactly $q-1$ are ${\mathbb F}_q$-rational, corresponding to $x_n\equiv \alpha$ for some $\alpha \in {\mathbb F}_q^*$, respectively.
If char$(\mathcal{F})=2$, from Lemma 2.5, there are exactly $q-1$ ${\mathbb F}_{q^2}$-rational places in $F_n$ lying above $P_{-1}$, which are all ${\mathbb F}_q$-rational, corresponding to $x_n\equiv \alpha$ for some $\alpha \in {\mathbb F}_q^*$, respectively.
\end{remark}
Next we determine the ${\mathbb F}_{q^2}$-rational place $Q$ in $F_n$ lying
above $P_\alpha$ for some $\alpha \in {\mathbb F}_{q^2}\backslash {\mathbb F}_q$.
Direct calculation gives $x_1(Q)=\alpha _1$ for some $\alpha _1
\not\in {\mathbb F}_q$. Similarly, $x_2(Q)=\alpha -2$, $\cdots$,
$x_n(Q)=\alpha _n$, with $\alpha _i \in \overline{{\mathbb F}}_q \backslash
{\mathbb F}_q$. We observe that $Q$ is ${\mathbb F}_{q^2}$-rational in $F_n$ if and
only if $\alpha, \alpha _1, \cdots, \alpha _n$ are all in
${\mathbb F}_{q^2}$. To verify it, assume $\alpha, \alpha _1, \cdots,
\alpha _n$ are all in ${\mathbb F}_{q^2}$. Then $Q$ is completely splitting
in each extension $F_i/F_{i-1} (i=1, 2, \cdots , n)$, with
$x_i\equiv c\alpha _i$ for some $c\in {\mathbb F}_q ^*$ in each place
respectively.
We have $\alpha _1\in {\mathbb F}_{q^2}$ if and only if $(1+\alpha)^{q-1}+(1+\alpha)^{1-q}=1$. Similarly, $\alpha _i (i=1, 2, \cdots, n-1) \in {\mathbb F}_{q^2}$ if and only if $(1+\alpha _{i-1})^{q-1}+(1+\alpha _{i-1})^{1-q}=1$. Thus, $Q$ is $F_{q^2}$-rational implies $(1+\alpha)^{q-1}, (1+\alpha _1)^{q-1}, \cdots, (1+\alpha _{n-1})^{q-1}$ all are the root of $x^2-x+1=0$.
{\bf Claim.} $(1+\alpha)^{q-1}, (1+\alpha _1)^{q-1}, \cdots, (1+\alpha _{n-1})^{q-1}$ are equal.
{\bf Proof of the claim.} Prove by contradiction. For simplicity assume $(1+\alpha)^{q-1}\not=(1+\alpha _1)^{q-1}$.
Thus $x^2-x+1=(x-(1+\alpha)^{q-1})(x-(1+\alpha _1)^{q-1})$. Comparing the coefficient of $x^1$,
one has $1=(1+\alpha)^{q-1}+(1+\alpha _1)^{q-1}=(2+\alpha _1-\alpha _1^{q-1})/(1+\alpha_1)$. This implies $\alpha _1 \in {\mathbb F}_q$, which is a contradiction.
Let $p=$char$({\mathbb F}_q)$, we consider the following two cases respectively.
{\bf Case 1: $p=3$.}
Since the unique root of $x^2-x+1=0$ is $-1$, $1-\alpha _1^{q-1}=-1$, and $(1+\alpha_1)^{q-1}=-1$. It is easily checked these two equalities lead to a contradiction.
{\bf Case 2: $p\not=3$.}
Thus, $-1$ is not a root of $x^2-x+1=0$, which implies $(1+\alpha)^{q-1}$ and $(1+\alpha)^{1-q}$ are distinct roots of $x^2-x+1=0$. Thus, $(1+\alpha)^{q-1}+(1+\alpha)^{1-q}=1$. By assuming $(1+\alpha)^{q-1}+(1+\alpha)^{1-q}=1$, we have $x_1(Q)=\alpha_1$, with $\alpha_1 ^{q-1}=(1+\alpha)^{1-q}$. Hence, $\alpha _1=c/(1+\alpha)$ for some $c\in {\mathbb F}_q^*$. Since $(1+\alpha)^{q-1}=(1+\alpha _1)^{q-1}$, direct calculation gives $c=\frac{(1+\alpha)^{2q-1}-(1+\alpha)^q}{1-(1+\alpha)^{2q-2}}$. Iterating this procedure, we have a ${\mathbb F}_{q^2}$-rational place $Q$ in $F_n$ lying above $P_\alpha$ for some $\alpha \in {\mathbb F}_{q^2}\backslash {\mathbb F}_q$ is one-one corresponding to a tuple $(c_1, c_2, \cdots, c_n)$ satisfying
\[\left\{
\begin{array}{cccccc}
x_0\equiv \alpha,\,\,&\mbox{with}&\,\, (1+\alpha)^{q-1}+(1+\alpha)^{1-q}=1, \\
x_1\equiv c_1/(1+\alpha):=\alpha_1 \,\, &\mbox{with}& \,\,c_1=\frac{(1+\alpha)^{2q-1}-(1+\alpha)^q}{1-(1+\alpha)^{2q-2}}\in {\mathbb F}_q^*, \\
x_2\equiv c_2/(1+\alpha_1):=\alpha_2 \,\, &\mbox{with}& \,\,c_2=\frac{(1+\alpha_1)^{2q-1}-(1+\alpha_1)^q}{1-(1+\alpha_1)^{2q-2}}\in {\mathbb F}_q^*, \\
\cdots &\mbox{with}& \,\,\cdots \\
x_{n-1}\equiv c_{n-1}/(1+\alpha_{n-2}):=\alpha_{n-1} \,\, &\mbox{with}& \,\,c_{n-1}=\frac{(1+\alpha_{n-2})^{2q-1}-(1+\alpha_{n-2})^q}{1-(1+\alpha_{n-2})^{2q-2}}\in {\mathbb F}_q^*, \\
x_n\equiv c_n/(1+\alpha_{n-1}),\,\, &\mbox{with}& \,\, c_n\in{\mathbb F}_q^*.
\end{array}\right.
\]
Therefore, for any $\alpha \in {\mathbb F}_{q^2}\backslash {\mathbb F}_q$, the number of ${\mathbb F}_{q^2}$-rational places in $F_n$ lying above $P_\alpha$ is zero if char$(\mathcal{F})=3$; and $(q-1)\#\{\alpha \in {\mathbb F}_{q^2}\backslash {\mathbb F}_q: (1+\alpha)^{q-1}+(1+\alpha)^{1-q}=1\}$ if char$(\mathcal{F})\not=3$.
As we have determined all ${\mathbb F}_{q}$-rational places and ${\mathbb F}_{q^2}$-rational places in $F_n$, we are now able to determine the value of $\nu(\mathcal{F})$. If char$(\mathcal{F})\not=2$ and the constant field is ${\mathbb F}_q$, then $\nu(\mathcal{F})=0$, and $\nu(\mathcal{F})=1$ if the constant field is ${\mathbb F}_{q^2}$.
If char$(\mathcal{F})=2$, and the constant field is ${\mathbb F}_q$ ($q>2$), then $\nu(\mathcal{F})=1$.
\begin{remark}
One can check that the function field tower recursively defined by Eq. (2) is isomorphic in some extension field of ${\mathbb F}_{q^2}$, to a tower recursively defined by $y^{q-1}=1-(x+\alpha)^{q-1}$, where $\alpha$ is any nonzero element of ${\mathbb F}_q$.
\end{remark}
From above discussion, Eq.(2) defines an asymptotically bad tower over any prime field (it does not define a tower over ${\mathbb F}_2$). Lenstra showed in [8, Theorem 2] that there does not exist a tower of function fields $\mathcal{F}=(F_0, F_1, \cdots)$ over a prime field which is recursively defined by $y^m=f(x)$, where $f(x)$ is a polynomial $f(x)$, $m$ and $q$ are coprime, such that the infinity place of $F_0$ splits completely in the tower, and the set $V(\mathcal{F})=\{P\in {\mathcal P}(F_0)|P \,\,\mbox{is ramified in}\,\,F_n/F_0\,\, \mbox{for some}\,\, n\geq 1\}$ is finite. A tower recursively defined by Eq. (2) falls in this form with a finite set $V(\mathcal{F})$, but no place of $F_0$ splits completely in the tower. Thus arises a problem: can one find an asymptotically good, recursive tower of the above form, over a prime field, with a finite set $V(\mathcal{F})$ and a finite place splitting completely in the tower?
\vskip .2in
\noindent
Siman Yang\\
Department of Mathematics, East China Normal University,\\
500, Dongchuan Rd., Shanghai, P.R.China 200241. \ \ e-mail: smyang@math.ecnu.edu.cn
\\ \\
\end{document} | 3,732 | 14,099 | en |
train | 0.4994.0 | \begin{document}
\title[On the smallest trees with the same restricted $U$-polynomial]{On the smallest trees with the same restricted $U$-polynomial and the rooted $U$-polynomial}
\author{Jos\'e Aliste-Prieto \and Anna de Mier \and Jos\'e Zamora}
\address{Jos\'e Aliste-Prieto. Departamento de Matematicas, Universidad Andres Bello, Republica 498, Santiago, Chile}
\email{jose.aliste@unab.cl}
\address{Anna de Mier. Departament de Matem\`atiques, Universitat Polit\`ecnica de Catalunya, Jordi Girona 1-3, 08034 Barcelona, Spain
}
\email{anna.de.mier@upc.edu}
\address{Jos\'e Zamora. Departamento de Matem\'aticas, Universidad Andres Bello, Republica 498, Santiago, Chile
}
\email{josezamora@unab.cl}
\maketitle
\begin{abstract}
In this article, we construct explicit examples of pairs of non-isomorphic trees with the same restricted $U$-polynomial for every $k$; by this we mean that the polynomials agree on terms with degree at most $k+1$. The main tool for this construction is a generalization of the $U$-polynomial to rooted graphs, which we introduce and study in this article. Most notably, we show that rooted trees can be reconstructed from their rooted $U$-polynomials.
\end{abstract}
\tikzstyle{every node}=[circle, draw, fill=black,
inner sep=0pt, minimum width=4pt,font=\small]
\section{Introduction}\label{sec:intro}
The chromatic symmetric function \cite{stanley95symmetric} and the $U$-polynomial \cite{noble99weighted} are powerful graph invariants, as they generalize many other invariants such as the chromatic polynomial, the matching polynomial and the Tutte polynomial. It is well known that the chromatic symmetric function and the $U$-polynomial are equivalent when restricted to trees. There are examples of non-isomorphic graphs with cycles having the same $U$-polynomial (see \cite{brylawski1981intersection} for examples of graphs with the same polychromate and \cite{sarmiento2000polychromate,merino2009equivalence} for the equivalence between the polychromate and the $U$-polynomial), and the same is true for the chromatic symmetric function (see \cite{stanley95symmetric}). However, it is an open question whether there exist non-isomorphic trees with the same chromatic symmetric function (or, equivalently, the same $U$-polynomial). The negative answer to the latter question, that is, the assertion that two trees with the same chromatic symmetric function must be isomorphic, is sometimes referred to in the literature as \emph{Stanley's (tree isomorphism) conjecture}. This conjecture has been verified for trees with up to 29 vertices~\cite{heil2018algorithm} and also for some classes of trees, most notably caterpillars \cite{aliste2014proper,loebl2019isomorphism} and spiders \cite{martin2008distinguishing}.
A natural simplification of Stanley's conjecture is to define a truncation of the $U$-polynomial and then search for non-isomorphic trees with the same truncation. Studying such examples could help to better understand the obstacles to solving Stanley's conjecture.
To be more precise, suppose that $T$ is a tree with $n$ vertices. Recall that a partition $\lambda$ of $n$ is a sequence $\lambda_1,\lambda_2,\ldots,\lambda_l$ of positive integers with $\lambda_1\geq \lambda_2\geq\cdots\geq \lambda_l$ and $\lambda_1+\lambda_2+\cdots+\lambda_l=n$. Recall that $U(T)$ can be expanded as
\begin{equation}
\label{eq:Uintro}
U(T)=\sum_{\lambda} c_\lambda \mathbf x_\lambda,
\end{equation}
where the sum is over all partitions $\lambda$ of $n$, $\mathbf x_\lambda=x_{\lambda_1}x_{\lambda_2}\cdots x_{\lambda_l}$ and the $c_\lambda$ are non-negative integer coefficients (for details of this expansion see Section~\ref{sec:rooted}).
In a previous work~\cite{Aliste2017PTE}, the authors studied the $U_k$-polynomial, defined by restricting the sum in \eqref{eq:Uintro} to the partitions of length at most $k+1$, and showed the existence of non-isomorphic trees with the same $U_k$-polynomial for every $k$. This result is based on a remarkable connection between the $U$-polynomial of a special class of trees and the Prouhet-Tarry-Escott problem in number theory. Although the Prouhet-Tarry-Escott problem is known to have solutions for every $k$, in general it is difficult to find explicit solutions, especially if $k$ is large. Hence, it was difficult to use this result to find explicit examples of trees with the same $U_k$-polynomial.
The main result of this paper is an explicit and simple construction of non-isomorphic trees with the same $U_k$-polynomial for every $k$. It turns out that for $k=2,3,4$ our examples coincide with the minimal examples already found by Smith, Smith and Tian~\cite{smith2015symmetric}. This leads us to conjecture that for every $k$ our construction yields the smallest non-isomorphic trees with the same $U_k$-polynomial. We also observe that if this conjecture is true, then Stanley's conjecture is also true.
To prove our main result, we first introduce and study a generalization of the $U$-polynomial to rooted graphs, which we call the rooted $U$-polynomial or $U^r$-polynomial. As is the case for several invariants of rooted graphs, the rooted $U$-polynomial distinguishes rooted trees up to isomorphism. Under the correct interpretation, it can also be seen as a generalization of the pointed chromatic symmetric function introduced in \cite{pawlowski2018chromatic} (see Remark~\ref{pawlowski}). The key fact for us is that the rooted $U$-polynomial exhibits simple product formulas when applied to certain joinings of rooted graphs. These formulas, together with some non-commutativity, are what allow our construction to work.
Very recently, another natural truncation of the $U$-polynomial was considered in \cite{heil2018algorithm}. There, the sum in \eqref{eq:Uintro} is restricted to partitions whose parts are at most $k$. The authors verified that trees with up to $29$ vertices are distinguished by the truncation with $k=3$, and conjectured that $k=3$ in fact suffices to distinguish all trees.
This paper is organized as follows. In Section \ref{sec:rooted}, we introduce the rooted $U$-polynomial and prove our main product formulas. In Section \ref{sec:dist}, we show that the rooted $U$-polynomial distinguishes rooted trees up to isomorphism. In Section \ref{sec:main}, we recall the definition of the $U_k$-polynomial and prove our main result. | 1,822 | 13,506 | en |
train | 0.4994.1 | \section{The rooted $U$-polynomial}\label{sec:rooted}
We give the definition of the $U$-polynomial first introduced by
Noble and Welsh~\cite{noble99weighted}.
We consider graphs where we allow loops and parallel edges.
Let $G = (V, E)$ be a graph. Given $A\subseteq E$, the restriction $G|_A$ of $G$ to $A$ is the subgraph of $G$ obtained from $G$ after deleting every edge that is not contained in $A$ (but keeping all the vertices). The \emph{rank} of $A$ is defined as $r(A) = |V| - k(G|_A)$, where $k(G|_A)$ is the number of connected components of $G|_A$.
The \emph{partition induced by $A$}, denoted by $\lambda(A)$, is the partition of $|V|$ whose parts are the sizes of the connected components of $G|_A$.
Let $y$ be an indeterminate and $\mathbf{x} = x_1,x_2,\ldots$ be an infinite set of commuting indeterminates that commute with $y$. Given an integer partition $\lambda=\lambda_1,\lambda_2,\cdots,\lambda_l$, define $\mathbf{x}_\lambda:=x_{\lambda_1}\cdots x_{\lambda_l}$. The \emph{$U$-polynomial} of a graph $G$ is defined as
\begin{equation}
\label{def:W_poly}
U(G;\mathbf x, y)=\sum_{A\subseteq E}\mathbf x_{\lambda(A)}(y-1)^{|A|-r(A)}.
\end{equation}
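The expansion above can be evaluated mechanically on small examples. The following Python sketch (our illustration, not part of the paper) does so for forests, where $|A|=r(A)$ and hence the $(y-1)$-factor is always $1$; the polynomial is encoded as a \texttt{Counter} mapping the sorted tuple of component sizes $\lambda(A)$ to its coefficient $c_\lambda$.

```python
from collections import Counter
from itertools import combinations

def components(vertices, edge_subset):
    """Connected components of the spanning subgraph (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edge_subset:
        parent[find(u)] = find(w)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def U(vertices, edges):
    """U-polynomial of a forest by the subset expansion: Counter mapping
    the partition lambda(A) (sorted tuple of component sizes) to its
    coefficient.  On forests |A| = r(A), so the (y-1)-factor is 1."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            sizes = tuple(sorted(len(c) for c in components(vertices, A)))
            poly[sizes] += 1
    return poly

# Path on three vertices: U = x1^3 + 2*x1*x2 + x3.
print(U([0, 1, 2], [(0, 1), (1, 2)]))
```

For the path on three vertices the four edge subsets give the expansion $U=x_1^3+2x_1x_2+x_3$, matching the printed counts.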
Now we recall the definition of the $W$-polynomial for weighted graphs, of which the $U$-polynomial is a specialization. A \emph{weighted graph} is a pair $(G,\omega)$ where $G$ is a graph and $\omega:V(G)\rightarrow {\mathbb P}$ is a function. We say that $\omega(v)$ is the weight of the vertex $v$.
Given a weighted graph $(G,\omega)$ and an edge $e$,
the graph $(G-e,\omega)$ is defined by deleting the edge $e$ and leaving $\omega$ unchanged. If $e$ is not a loop, then the graph $(G/e,\omega)$ is defined by first deleting $e$ and then identifying the vertices $u$ and $u'$ incident to $e$ into a new vertex $v$. We set $\omega(v)=\omega(u)+\omega(u')$ and leave all other weights unchanged.
The $W$-polynomial of a weighted graph $(G,\omega)$ is defined by the following properties:
\begin{enumerate}
\item If $e$ is not a loop, then $W(G,\omega)$ satisfies
\[W(G,\omega) = W(G-e,\omega) + W(G/e,\omega);\]
\item If $e$ is a loop, then
\[W(G,\omega) = y W(G-e,\omega);\]
\item If $G$ consists only of isolated vertices $v_1,\ldots,v_n$ with weights $\omega_1,\ldots,\omega_n$, then
\[W(G,\omega) = x_{\omega_1}\cdots x_{\omega_n}.\]
\end{enumerate}
In \cite{noble99weighted}, it is proven that the $W$-polynomial is well-defined and that $U(G)=W(G,1_G)$ where $1_G$ is the weight function assigning weight $1$ to all vertices of $G$. The deletion-contraction formula is very powerful, but in this paper we will only use it in the beginning of the proof of Theorem \ref{theo:YZ} in Section \ref{sec:main}.
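On forests, properties (1) and (3) alone determine the $W$-polynomial, since no loops arise under contraction; this gives a quick sanity check of the identity $U(G)=W(G,1_G)$. The following Python sketch is our illustration, not part of the paper; a polynomial with no $y$-dependence is again encoded as a \texttt{Counter} over weight partitions.

```python
from collections import Counter
from itertools import combinations

def components(vertices, edge_subset):
    """Connected components of the spanning subgraph (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edge_subset:
        parent[find(u)] = find(w)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def U_subsets(vertices, edges):
    """U-polynomial of a forest via the subset expansion."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            poly[tuple(sorted(len(c) for c in components(vertices, A)))] += 1
    return poly

def W_forest(weights, edges):
    """W-polynomial of a weighted forest by deletion-contraction.
    weights: dict vertex -> positive weight.  In a forest no loops arise,
    so rules (1) and (3) suffice; the result is a Counter over the sorted
    weight tuples of the edgeless graphs reached at the recursion leaves."""
    if not edges:
        return Counter({tuple(sorted(weights.values())): 1})   # rule (3)
    (u, v), rest = edges[0], edges[1:]
    deleted = W_forest(weights, rest)                          # rule (1): G - e
    merged = dict(weights)
    merged[u] += merged.pop(v)                                 # contract e = uv
    relabeled = [(u if a == v else a, u if b == v else b) for a, b in rest]
    return deleted + W_forest(merged, relabeled)               # rule (1): G / e

# U(G) = W(G, 1_G) on the path with three vertices.
verts, edges = [0, 1, 2], [(0, 1), (1, 2)]
assert W_forest({v: 1 for v in verts}, edges) == U_subsets(verts, edges)
```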
A \emph{rooted graph} is a pair $(G,v_0)$, where $G$ is a graph and $v_0$ is a vertex of $G$ that we call the \emph{root} of $G$.
Given $A\subseteq E$, define $\lambda_r(A)$ to be the size of the component of $G|_A$ that contains the root $v_0$, and $\lambda_-(A)$ to be
the partition induced by the sizes of all the other components. The \emph{rooted $U$-polynomial} is
\begin{equation}
\label{def:U_poly_rooted}
U^r({G,v_0};\mathbf x, y, z)=\sum_{A\subseteq E}\mathbf x_{\lambda_{-}(A)}z^{\lambda_{r}(A)}(y-1)^{|A|-r(A)},
\end{equation}
where $z$ is a new indeterminate that commutes with $y$ and $x_1,x_2,\ldots$.
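As before, the expansion can be checked mechanically on forests, where the $(y-1)$-factor disappears. In the sketch below (our illustration, not part of the paper) a monomial $\mathbf x_{\lambda_-(A)}z^{\lambda_r(A)}$ is encoded as the pair $(\lambda_-(A),\lambda_r(A))$.

```python
from collections import Counter
from itertools import combinations

def components(vertices, edge_subset):
    """Connected components of the spanning subgraph (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edge_subset:
        parent[find(u)] = find(w)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def U_rooted(vertices, edges, root):
    """Rooted U-polynomial of a forest: Counter mapping the monomial
    x_{lambda_-(A)} z^{lambda_r(A)}, encoded as (sorted tuple, int),
    to its coefficient; the (y-1)-factor is 1 on forests."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            comps = components(vertices, A)
            lam_r = next(len(c) for c in comps if root in c)
            lam_minus = tuple(sorted(len(c) for c in comps if root not in c))
            poly[(lam_minus, lam_r)] += 1
    return poly

# Single edge rooted at 0: U^r = x1*z + z^2.
print(U_rooted([0, 1], [(0, 1)], 0))
```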
We often write $G$ instead of $(G,v_0)$ when $v_0$ is clear from the context, and so we will write
$U^r(G)$ instead of $U^r(G,v_0)$. Also, if $(G,v_0)$ is a rooted graph, we will write $U(G)$ for the $U$-polynomial of $G$ (seen as an unrooted graph). If we compare $U^r(G)$ with $U(G)$, then we see that for each term of the form $\mathbf{x}_\lambda y^n z^m$ appearing in $U^r(G)$ there is a corresponding term of the form $\mathbf{x}_\lambda y^n x_m$ in $U(G)$. This motivates the following notation and lemma, whose proof follows directly from the latter observation.
\begin{notation}
If $P(\mathbf{x}, y, z)$ is a polynomial, then $(P(\mathbf{x},y,z))^*$ is the polynomial obtained by expanding $P$ as a polynomial in $z$ (with coefficients that are polynomials in $\mathbf{x}$ and $y$) and then substituting $z^n\mapsto x_n$ for every $n\in{\mathbb N}$. For instance,
if $P(\mathbf{x},y,z) = x_1yz-x_2x_3z^3$, then $P(\mathbf{x},y,z)^* = x_1^2y-x_2x_3^2$. Note that in general
$(P(\mathbf{x},y,z)Q(\mathbf{x},y,z))^* \neq P(\mathbf{x},y,z)^*Q(\mathbf{x},y,z)^*$.
\end{notation}
\begin{lemma}
For every graph $G$ we have
\begin{equation}
(U^r(G))^* = U(G).
\end{equation}
\end{lemma}
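The lemma can be tested on small examples: in the \texttt{Counter} encoding of $U^r$ used above, the $*$ substitution simply folds the $z$-exponent back into the partition. A sketch under the same forest-only assumptions as before (our illustration):

```python
from collections import Counter
from itertools import combinations

def components(vertices, edge_subset):
    """Connected components of the spanning subgraph (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edge_subset:
        parent[find(u)] = find(w)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def U(vertices, edges):
    """U-polynomial of a forest via the subset expansion."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            poly[tuple(sorted(len(c) for c in components(vertices, A)))] += 1
    return poly

def U_rooted(vertices, edges, root):
    """Rooted U-polynomial of a forest, monomials encoded as pairs."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            comps = components(vertices, A)
            lam_r = next(len(c) for c in comps if root in c)
            lam_minus = tuple(sorted(len(c) for c in comps if root not in c))
            poly[(lam_minus, lam_r)] += 1
    return poly

def star(rooted_poly):
    """The * operation: substitute z^m -> x_m, i.e. fold the root-component
    size lambda_r back into the partition."""
    poly = Counter()
    for (lam_minus, lam_r), c in rooted_poly.items():
        poly[tuple(sorted(lam_minus + (lam_r,)))] += c
    return poly

# (U^r(G))^* = U(G) on a path with four vertices rooted at an end.
verts, edges = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]
assert star(U_rooted(verts, edges, 0)) == U(verts, edges)
```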
\begin{remark}
We could also define a rooted version of the $W$-polynomial, but we will not need this degree of generality for the purposes of this article.
\end{remark}
\subsection{Joining of rooted graphs and product formulas}
In this section we show two product formulas for the rooted $U$-polynomial. These will play a central role in the proofs of the results in the following sections.
Let $(G,v)$ and $(H,v')$ be two rooted graphs. Define
$G\odot H$ to be the rooted graph obtained after first taking the disjoint union of $G$ and $H$ and then by identifying $v$ and $v'$. We refer to $G\odot H$ as \emph{the joining} of $G$ and $H$. Note that from the definition it is clear that $G\odot H = H\odot G$.
We also define $G\cdot H$ to be the rooted graph obtained after first taking the disjoint union of $G$ and $H$, then adding an edge between $v$ and $v'$ and finally declaring $v$ as the root of the resulting graph. Since we made a choice for the root, in general $G\cdot H$ and $H\cdot G$ are isomorphic as unrooted graphs, but not as rooted graphs.
\begin{figure}
\caption{Example of two rooted graphs $G$ and $H$ and their different products $G\cdot H$ and $G\odot H$.}
\end{figure}
\begin{lemma}
\label{lemma:joining}
Let $G$ and $H$ be two rooted graphs. We have
\begin{equation}
\label{eq:pseudo}
U^r(G\odot H) = \frac{1}{z}U^r(G)U^r(H).
\end{equation}
\end{lemma}
\begin{proof}
Substituting the definition of $U^r$ for $G$ and $H$ into the r.h.s. of \eqref{eq:pseudo} yields
\begin{equation}
\label{W_poly_rooted}
\sum_{A_G\subseteq E(G)}\sum_{A_H\subseteq E(H)}\mathbf x_{\lambda_{-}(A_G)\cup\lambda_{-}(A_H)}z^{\lambda_{r}(A_G)+\lambda_{r}(A_H)-1}(y-1)^{|A_G|+|A_H|-r(A_G)-r(A_H)}.
\end{equation}
Given $A_G\subseteq E(G)$ and $A_H\subseteq E(H)$, set $A = A_G\cup A_H$. By the definition of the joining, there is a set $A'\subseteq E(G\odot H)$ corresponding to $A$ such that $\lambda_{-}(A') = \lambda_{-}(A_G)\cup\lambda_{-}(A_H)$ and $\lambda_r(A') = \lambda_r(A_G)+\lambda_r(A_H)-1$. From these equations, one checks that $r(A')=r(A_G)+r(A_H)$. Plugging these relations into \eqref{W_poly_rooted} and rearranging the sum yields $U^r(G\odot H)$, and the conclusion follows.
\end{proof}
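Equation \eqref{eq:pseudo} can be verified numerically on small trees. In the monomial encoding used earlier, multiplication concatenates the $x$-parts and adds the $z$-exponents, and division by $z$ lowers every $z$-exponent by one. A sketch (our illustration, forests only):

```python
from collections import Counter
from itertools import combinations

def components(vertices, edge_subset):
    """Connected components of the spanning subgraph (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edge_subset:
        parent[find(u)] = find(w)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def U_rooted(vertices, edges, root):
    """Rooted U-polynomial of a forest, monomials encoded as pairs."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            comps = components(vertices, A)
            lam_r = next(len(c) for c in comps if root in c)
            lam_minus = tuple(sorted(len(c) for c in comps if root not in c))
            poly[(lam_minus, lam_r)] += 1
    return poly

def mul(p, q):
    """Product of two rooted polynomials in this encoding:
    x-parts concatenate, z-exponents add."""
    out = Counter()
    for (lm1, r1), c1 in p.items():
        for (lm2, r2), c2 in q.items():
            out[(tuple(sorted(lm1 + lm2)), r1 + r2)] += c1 * c2
    return out

def div_z(p):
    """Divide by z: lower every z-exponent by one."""
    return Counter({(lm, r - 1): c for (lm, r), c in p.items()})

# G = path 0-1-2 rooted at 0, H = edge 0-3 rooted at 0; identifying the
# two roots gives the joining with vertex set {0, 1, 2, 3}.
G = U_rooted([0, 1, 2], [(0, 1), (1, 2)], 0)
H = U_rooted([0, 3], [(0, 3)], 0)
joined = U_rooted([0, 1, 2, 3], [(0, 1), (1, 2), (0, 3)], 0)
assert joined == div_z(mul(G, H))
```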
\begin{lemma}
Let $G$ and $H$ be two rooted graphs. Then we have
\begin{equation}
\label{eq:sep_concat}
U^r(G\cdot H) = U^r({G})(U^r(H) + U(H)).
\end{equation}
\end{lemma}
\begin{proof}
By definition, $E(G\cdot H) = E(G)\cup E(H)\cup\{e\}$, where $e$ is the edge joining the roots of $G$ and $H$. Thus, given $A\subseteq E(G\cdot H)$, we can write it as $A = A_G\cup A_H\cup F$ where $A_G\subseteq E(G)$, $A_H\subseteq E(H)$ and $F$
is either empty or $\{e\}$. Let $\delta_F$ be one if $F=\{e\}$ and zero otherwise. The following relations are easy to check:
\begin{eqnarray*}
\lambda_{-}(A)&=&\begin{cases}
\lambda_{-}(A_G)\cup \lambda(A_H), &\text{if $F=\emptyset$,}\\
\lambda_{-}(A_G)\cup \lambda_{-}(A_H),&\text{otherwise};
\end{cases}\\
\lambda_r(A)&=& \lambda_r(A_G)+\lambda_r(A_H)\delta_{F};\\
r(A)&=& r(A_G)+r(A_H)+\delta_{F};\\
|A|&=&|A_G|+|A_H|+\delta_F.
\end{eqnarray*}
Now replacing the expansions of $U^r(G)$, $U^r(H)$ and $U(H)$ into the r.h.s. of
\eqref{eq:sep_concat} yields
\begin{multline}
\sum_{A_G\subseteq E(G),A_H\subseteq E(H)} \mathbf x_{\lambda_{-}(A_G)\cup\lambda_{-}(A_H)} z^{\lambda_{r}(A_G)+\lambda_{r}(A_H)}(y-1)^{|A_G|-r(A_G)+|A_H|-r(A_H)}
\\+
\sum_{A_G\subseteq E(G),A_H\subseteq E(H)} \mathbf x_{\lambda_{-}(A_G)\cup\lambda(A_H)}z^{\lambda_{r}(A_G)}(y-1)^{|A_G|-r(A_G)+|A_H|-r(A_H)}
\end{multline}
Using the previous relations we can simplify the last equation to
\begin{multline}
\sum_{A = A_G\cup A_H\cup\{e\}} \mathbf x_{\lambda_{-}(A)} z^{\lambda_{r}(A)}(y-1)^{|A|-r(A)}
\\+
\sum_{A = A_G\cup A_H} \mathbf x_{\lambda_{-}(A)}z^{\lambda_{r}(A)}(y-1)^{|A|-r(A)},
\end{multline}
where in both sums $A_G$ ranges over all subsets of $E(G)$ and $A_H$ ranges over all subsets of $E(H)$. Finally, we can combine the sums to get $U^r(G\cdot H)$, which finishes the proof.
\end{proof}
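Equation \eqref{eq:sep_concat} can be checked the same way; the only new ingredient is viewing the unrooted polynomial $U(H)$ as a rooted one with $z$-exponent $0$ before multiplying. A sketch under the same forest-only assumptions (our illustration):

```python
from collections import Counter
from itertools import combinations

def components(vertices, edge_subset):
    """Connected components of the spanning subgraph (V, A), via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edge_subset:
        parent[find(u)] = find(w)
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def U(vertices, edges):
    """U-polynomial of a forest via the subset expansion."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            poly[tuple(sorted(len(c) for c in components(vertices, A)))] += 1
    return poly

def U_rooted(vertices, edges, root):
    """Rooted U-polynomial of a forest, monomials encoded as pairs."""
    poly = Counter()
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            comps = components(vertices, A)
            lam_r = next(len(c) for c in comps if root in c)
            lam_minus = tuple(sorted(len(c) for c in comps if root not in c))
            poly[(lam_minus, lam_r)] += 1
    return poly

def mul(p, q):
    """Product of rooted polynomials: x-parts concatenate, z-exponents add."""
    out = Counter()
    for (lm1, r1), c1 in p.items():
        for (lm2, r2), c2 in q.items():
            out[(tuple(sorted(lm1 + lm2)), r1 + r2)] += c1 * c2
    return out

def as_rooted(poly):
    """View an unrooted U-polynomial as a rooted one with z-exponent 0."""
    return Counter({(lam, 0): c for lam, c in poly.items()})

# G = edge 0-1 rooted at 0, H = edge 2-3 rooted at 2; G . H adds the
# edge (0, 2) and keeps 0 as root.
G = U_rooted([0, 1], [(0, 1)], 0)
H = U_rooted([2, 3], [(2, 3)], 2)
UH = U([2, 3], [(2, 3)])
concat = U_rooted([0, 1, 2, 3], [(0, 1), (0, 2), (2, 3)], 0)
assert concat == mul(G, H + as_rooted(UH))
```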
\begin{remark}
\label{pawlowski}
It is well-known (see \cite{stanley95symmetric,noble99weighted}) that the chromatic symmetric function of a graph can be recovered from the $U$-polynomial by
\[X(G) = (-1)^{|V(G)|}U(G;x_i=-p_i,y=0).\]
In \cite{pawlowski2018chromatic}, Pawlowski introduced the rooted chromatic symmetric function. It is not difficult to check that
\[X^r(G,v_0) = (-1)^{|V(G)|}\frac{1}{z}U^r(G,v_0;x_i=-p_i,y=0).\]
By performing this substitution on \eqref{eq:pseudo} we obtain Proposition 3.4 in \cite{pawlowski2018chromatic}.
\end{remark} | 3,470 | 13,506 | en |
train | 0.4994.2 | \section{The rooted $U$-polynomial distinguishes rooted trees}
\label{sec:dist}
In this section we will show that the rooted $U$-polynomial distinguishes rooted trees up to isomorphism. Similar results for other invariants of rooted trees appear in \cite{bollobas2000polychromatic,gordon1989greedoid,hasebe2017order}. The proof given here follows closely the one in \cite{gordon1989greedoid} but one can also adapt the proof of \cite{bollobas2000polychromatic}. Before stating the result we need the two following lemmas.
\begin{lemma}
\label{lem:degree}
Let $(T,v)$ be a rooted tree. Then, the number of vertices of $T$ and the degree of $v$ can be recognized from $U^r(T)$.
\end{lemma}
\begin{proof}
It is easy to see that $U^r(T)=z^{n}+q(z)$, where $q(z)$ is a polynomial in $z$ of degree less than $n$ with coefficients in ${\mathbb Z}[y,\mathbf x]$ and $n$ is the number of vertices of $T$. Hence, to recognize the number of vertices of $T$, it suffices to take the term of the form $z^j$ with the largest exponent in $U^r(T)$; this exponent is the number of vertices. To recognize the degree of $v$, observe that a term of $U^r(T)$ has the form $zx_\lambda$ for some $\lambda$ corresponding to $A$ if and only if no edge of $A$ is incident with $v$. In particular, the term of this form with smallest degree corresponds to $A=E\setminus I(v)$, where $I(v)$ denotes the set of edges that are incident with $v$; in fact, this term is $zx_{n_1}x_{n_2}\ldots x_{n_d}$, where $n_1,n_2,\ldots,n_d$ are the numbers of vertices of the connected components of $T-v$. Since each connected component is connected to $v$ by an edge, the degree of $v$ is equal to $d$, that is, to the degree of this term minus one.
\end{proof}
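The read-off procedure of the lemma is easy to test numerically. The sketch below (illustrative, with our own naming) expands $U^r$ of a small rooted tree by enumerating all edge subsets and then recovers $|V(T)|$ and the degree of the root exactly as in the proof:

```python
from itertools import combinations

def u_r_terms(n, edges, root):
    """Brute-force expansion of U^r(T, root) for a tree on vertices 0..n-1.

    Returns a dict mapping (lambda_minus, lambda_r) -> coefficient, where
    lambda_minus is the multiset of non-root component sizes of (V, A) and
    lambda_r (the z-exponent) is the size of the root's component.  For a
    tree, |A| = r(A) for every A, so no power of y - 1 appears.
    """
    terms = {}
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v in A:
                parent[find(u)] = find(v)
            sizes = {}
            for v in range(n):
                sizes[find(v)] = sizes.get(find(v), 0) + 1
            lam_r = sizes.pop(find(root))
            lam = tuple(sorted(sizes.values(), reverse=True))
            terms[(lam, lam_r)] = terms.get((lam, lam_r), 0) + 1
    return terms

def read_off(terms):
    """Recover |V(T)| and deg(root) as in the lemma."""
    n = max(z for (_, z) in terms)                      # exponent of the z^n term
    d = min(len(lam) for (lam, z) in terms if z == 1)   # parts of the minimal z x_lambda term
    return n, d

# Example: the 3-vertex path rooted at a leaf gives U^r = x_1^2 z + x_2 z + x_1 z^2 + z^3
leaf_path = u_r_terms(3, [(0, 1), (1, 2)], root=0)
```

For the star $K_{1,3}$ rooted at its center, `read_off` returns $(4,3)$, as expected.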
\begin{lemma}
\label{lem:irreducible}
Let $(T,v)$ be a rooted tree. Then,
$\frac{1}{z}U^r(T,v)$ is irreducible if and only if the degree of $v$ is one.
\end{lemma}
\begin{proof}
Let $n$ denote the number of vertices of $T$. Suppose that the degree of $v$ is one. We will show that $\frac{1}{z}U^r(T,v)$
is irreducible. Denote by $e$ the only edge of $T$ that is incident with $v$. It is easy to check that $\lambda_r(A)\geq 1$ for all $A\subseteq E$ and that, if $A=E-e$, then $\lambda(A)=(n-1,1)$. Consequently,
\[\frac{1}{z}U^r(T,v) = x_{n-1}+
\sum_{A\subseteq E, A\neq E-e}\mathbf x_{\lambda_{-}(A)}z^{\lambda_{r}(A)-1} \]
where the second sum is a polynomial in ${\mathbb Z}[z,x_1,x_2,\ldots,x_{n-2}]$. This implies that $\frac{1}{z}U^r(T,v)$ is a monic polynomial in $x_{n-1}$ of degree one, and hence it is irreducible.
To see the converse, it suffices to observe that if the degree of $v$ is equal to $l>1$ then there are $T_1,T_2,\ldots, T_l$ rooted trees having a root of degree one and
$(T,v) = T_1\odot T_2\odot\cdots\odot T_l$. This implies that
\[\frac{1}{z}U^r(T) = \frac{1}{z}U^r(T_1)\frac{1}{z}U^r(T_2)\ldots\frac{1}{z}U^r(T_l)\]
and hence $\frac{1}{z}U^r(T)$ is not irreducible.
\end{proof}
We say that a rooted tree $(T,v)$ can be reconstructed from its $U^r$-polynomial if we can determine $(T,v)$ up to rooted isomorphism from $U^r(T,v)$. We show the following result.
\begin{theorem}
\label{teo8}
Every rooted tree can be reconstructed from its $U^r$-polynomial.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:degree} we can recognize the number of vertices of a rooted tree from its $U^r$-polynomial. Thus, we proceed by induction on the number of vertices. For the base case, there is only one tree with $1$ vertex, hence the assertion is trivially true. Now suppose that all rooted trees with $k-1$ vertices can be reconstructed from their $U^r$-polynomial and let $U^r(T,v)$ be the $U^r$-polynomial of some unknown tree $(T,v)$ with $k$ vertices. Again by Lemma \ref{lem:degree} we can determine the degree $d$ of $v$ from $U^r(T)$. We distinguish two cases:
\begin{itemize}
\item $d=1$: In this case, let $T'=T-v$, rooted at the unique vertex of $T$ that is adjacent to $v$. This means that $T = 1\cdot T'$, where $1$ is the rooted tree with only one vertex. From \eqref{eq:pseudo} it follows
that \[U^r(T) = z(U^r(T') + U(T'))=(\frac{1}{z}U^r(T'))z^2+U(T')z.\]
Since the variable $z$ does not appear in $U(T')$, we can determine $U^r(T')$
from $U^r(T)$ by collecting all the terms in the expansion of $U^r(T)$ that are divisible by $z^2$ and then dividing them by $z$. Since $T'$ has $k-1$ vertices, by the induction hypothesis, we can reconstruct $T'$ and hence the equality $T=1\cdot T'$ allows us to reconstruct $T$.
\item $d>1$: In this case, we know that $\frac{1}{z}U^r(T)$ is not irreducible by Lemma \ref{lem:irreducible} and hence it decomposes as
\[\frac{1}{z}U^r(T) = P_1P_2\cdots P_d,\]
where the $P_i$ are the irreducible factors in ${\mathbb Z}[z,x_1,\ldots]$. On the other hand, as in the proof of Lemma \ref{lem:irreducible}, $T$ can be decomposed into $d$ branches $T_1,T_2,\ldots, T_d$, which are
rooted trees whose roots have degree one, so that $T = T_1\odot T_2\odot\cdots\odot T_d$ and
\[\frac{1}{z}U^r(T) =\frac{1}{z}U^r(T_1)\frac{1}{z}U^r(T_2)\cdots \frac{1}{z}U^r(T_d).\]
Since ${\mathbb Z}[z,x_1,\ldots]$ is a unique factorization domain, up to reordering factors, we have $U^r(T_i) = zP_i$ for all $i\in\{1,\ldots,d\}$. Since $d>1$ and each $T_i$ has at least one edge (and hence at least two vertices) by definition, each $T_i$ has at most $k-1$ vertices. Since we know each of their $U^r$-polynomials, by the induction hypothesis we can reconstruct each of them, and so we can reconstruct $T$.
\end{itemize}
\end{proof}
\begin{corollary}
The $U^r$-polynomial distinguishes rooted trees up to isomorphism.
\end{corollary}
\begin{figure}
\caption{The reconstructed tree from Example \ref{example}}
\end{figure}
\begin{example}
\label{example}
Suppose $U^r(T,v)=x_{1}^{5} z + 3 \, x_{1}^{4} z^{2} + 4 \, x_{1}^{3} z^{3} + 4 \, x_{1}^{2} z^{4} + 3 \, x_{1} z^{5} + z^{6} + 2 \, x_{1}^{3} x_{2} z + 5 \, x_{1}^{2} x_{2} z^{2} + 4 \, x_{1} x_{2} z^{3} + x_{2} z^{4} + x_{1}^{2} x_{3} z + 2 \, x_{1} x_{3} z^{2} + x_{3} z^{3}$. From the term $z^6$, we know that $T$ has $6$ vertices. The terms of the form $z\mathbf{x}_\lambda$ are $x_1^5z+2x_1^3x_2z+x_1^2x_3z$. Thus, the degree of $v$ is $3$. Moreover, if we factorize $\frac{1}{z}U^r(T,v)$ into irreducible factors we obtain
\[\frac{1}{z}U^r(T,v)={\left(x_{1}^{3} + x_{1}^{2} z + x_{1} z^{2} + z^{3} + 2 \, x_{1} x_{2} + x_{2} z + x_{3}\right)} {\left(x_{1} + z\right)}{\left(x_{1} + z\right)}.\]
This means that
\begin{eqnarray*}
U^r(T_1,v_1)&=& x_{1}^{3}z + x_{1}^{2} z^2 + x_{1} z^{3} + z^{4} + 2 \, x_{1} x_{2} z + x_{2} z^2 + x_{3}z,\\
U^r(T_2,v_2)&=&x_{1}z + z^2,\\
U^r(T_3,v_3)&=&x_{1}z + z^2.
\end{eqnarray*}
From the terms $z^4$ and $x_3z$ in $U^r(T_1)$ it is easy to see that $T_1$ has $4$ vertices and $v_1$ has degree 1. Hence, $T_1=1\cdot T_1'$,
where
\[U^r(T_1') = \frac{1}{z}\left(x_{1}^{2} z^2 + x_{2} z^2 + x_{1} z^{3} + z^{4}\right) = x_{1}^{2} z + x_{2} z + x_{1} z^{2} + z^{3}. \]
Similarly $T_1'=1\cdot T_1''$, where
\[U^r(T_1'') = \frac{1}{z}\left( x_{1} z^{2} + z^{3}\right)=x_{1} z + z^{2}.\]
From this, it is not difficult to see that $T_2,T_3$ and $T_1''$ are rooted isomorphic to $1\cdot 1$. Finally, we have
\[T = (1\cdot (1\cdot (1\cdot 1)))\odot (1\cdot 1)\odot (1\cdot 1).\]
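The reconstruction can be double-checked by brute force: expanding $U^r$ of the tree in which the root carries a pendant path on three further vertices and two pendant leaves reproduces the polynomial we started from. A small Python sketch (our own vertex labelling, purely illustrative); for instance, the coefficient of $x_1x_2z^3$ comes out as $4$:

```python
from itertools import combinations

def u_r_terms(n, edges, root):
    """U^r(T, root) by summing over all edge subsets A; key = (x-parts, z-exponent)."""
    terms = {}
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v in A:
                parent[find(u)] = find(v)
            sizes = {}
            for v in range(n):
                sizes[find(v)] = sizes.get(find(v), 0) + 1
            lam_r = sizes.pop(find(root))
            lam = tuple(sorted(sizes.values(), reverse=True))
            terms[(lam, lam_r)] = terms.get((lam, lam_r), 0) + 1
    return terms

# The reconstructed tree: root 0 carries the path 0-1-2-3 and the two leaves 4, 5.
T = u_r_terms(6, [(0, 1), (1, 2), (2, 3), (0, 4), (0, 5)], root=0)
```

The $2^5=32$ edge subsets distribute exactly as in the polynomial above, and the read-off of Lemma \ref{lem:degree} gives $6$ vertices and root degree $3$.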
\end{example} | 2,915 | 13,506 | en |
train | 0.4994.3 | \section{The restricted $U$-polynomial}
\label{sec:main}
Let $T$ be a tree with $n$ vertices.
It is well known that in this case $r(A)=|A|$ for every $A\subseteq E(T)$. Hence, $U(T)$ and $U^r(T)$ (if $T$ is rooted) do not depend on $y$. Given an integer $k$, the $U_k$-polynomial of $T$ is defined by
\begin{equation}
U_k(T;\mathbf x)=\sum_{A\subseteq E,\,|A|\geq n-k-1}\mathbf x_{\lambda(A)}.
\end{equation}
Observe that since $T$ is a tree, every term in $U_k(T)$ has degree at most $k+1$ and that restricting the terms in the expansion of $U(T)$ to those of degree at most $k+1$ yields $U_k(T)$.
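The degree bound can be traced to a simple counting fact: since $T$ is a tree, every $A\subseteq E$ is a forest and $\lambda(A)$ has exactly $n-|A|$ parts, so bounding $|A|$ from below is the same as bounding the degree of $\mathbf x_{\lambda(A)}$ from above. A minimal check (plain Python, with a path on five vertices as an illustrative example):

```python
from itertools import combinations

def partition_of(n, A):
    """The partition lambda(A): sorted component sizes of (V, A) on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in A:
        parent[find(u)] = find(v)
    sizes = {}
    for v in range(n):
        sizes[find(v)] = sizes.get(find(v), 0) + 1
    return tuple(sorted(sizes.values(), reverse=True))

n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4)]      # a path on 5 vertices
for k in range(len(edges) + 1):
    for A in combinations(edges, k):
        # every A is a forest, so lambda(A) has exactly n - |A| parts
        assert len(partition_of(n, A)) == n - len(A)
```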
As noted in the introduction, it is proved in \cite{Aliste2017PTE} that for every integer $k$ there are non-isomorphic trees $T$ and $T'$ that have the same $U_k$-polynomial but distinct $U_{k+1}$-polynomial. However, the trees found in \cite{Aliste2017PTE} are not explicit. In this section, with the help of the tools developed in previous sections, we will explicitly construct such trees.
We start by defining two sequences of rooted trees. Let us denote the path on three vertices, rooted at the central vertex, by $A_0$ and the path on three vertices, rooted at one of the leaves, by $B_0$. The trees $A_k$ and $B_k$ for $k\in{\mathbb N}$ are defined inductively as follows:
\begin{equation}
\label{AK}
A_k := A_{k-1}\cdot B_{k-1}\quad\text{and}\quad
B_k := B_{k-1}\cdot A_{k-1}.
\end{equation}
\begin{figure}
\caption{The rooted trees $A_2$ and $B_2$}
\end{figure}
We first observe that $A_0$ and $B_0$ are isomorphic as unrooted trees but not isomorphic as rooted trees; in fact, they have different $U^r$-polynomials, since a direct calculation shows that
\[\Delta_0:=U^r(A_0)-U^r(B_0) = x_1z^2-x_2z.\]
By applying Lemma \ref{lemma:joining} we deduce:
\begin{proposition}
\label{prop}
For all $k\in{\mathbb N}$, the trees $A_k$ and $B_k$ are isomorphic but not rooted-isomorphic. Moreover, we have
\begin{equation}
\label{DeltaK}
U^r(A_k) - U^r(B_k) = \Delta_0P_k,
\end{equation}
where $P_k := U(A_0)U(A_1)\cdots U(A_{k-1})$.
\end{proposition}
\begin{proof}
The proof is done by induction. The basis step is clear from the definition of $\Delta_0$. For the induction step,
we assume that for a given $k$, the graphs $A_{k-1}$ and $B_{k-1}$ are isomorphic and that $U^r(A_{k-1}) - U^r(B_{k-1}) = \Delta_0P_{k-1}$.
From \eqref{AK}, it is easy to see that $A_k$ and $B_k$ are isomorphic as unrooted trees. Also, combining \eqref{AK} with \eqref{eq:sep_concat} we get
\[U^r(A_k) = U^r(A_{k-1})(U^r(B_{k-1})+U(B_{k-1})).\]
Similarly for $B_k$ we get
\[U^r(B_k) = U^r(B_{k-1})(U^r(A_{k-1})+U(A_{k-1})).\]
Subtracting these two equations, using that $U(A_{k-1})=U(B_{k-1})$, and plugging in the induction hypothesis yields
\[U^r(A_k) - U^r(B_k) = U(A_{k-1})\big(U^r(A_{k-1})-U^r(B_{k-1})\big) = U(A_{k-1})P_{k-1}\Delta_0 = P_{k}\Delta_0.\]
Hence, by induction, \eqref{DeltaK} holds for every $k$. To finish the proof, notice that since $A_k$ and $B_k$ have distinct $U^r$, they are not rooted-isomorphic by Theorem~\ref{teo8}.
\end{proof}
Observe that all the terms of $P_k$ have degree at least $k$. Now we can state our main result.
\begin{theorem}\label{theo:YZ}
Given $k,l\in{\mathbb N}$, let
\begin{equation}
Y_{k,l}=(A_k\odot A_l)\cdot (B_k\odot B_l)\quad \text{and}\quad
Z_{k,l} = (A_l \odot B_k)\cdot (B_l\odot A_k).
\end{equation}
Then the graphs $Y_{k,l}$ and $Z_{k,l}$ (seen as unrooted trees) are not isomorphic, have the same $U_{k+l+2}$-polynomial and distinct $U_{k+l+3}$-polynomial.
\end{theorem}
Before giving the proof, we need the following lemma, which is
a corollary of Lemma \ref{lemma:joining} and Proposition \ref{prop}.
\begin{lemma}\label{l:D}
Let $T$ be a rooted tree and $i$ an integer. Then
\begin{equation}
\label{eq:lD}
U(A_i\odot T) - U(B_i\odot T) = P_i\mathcal{D}(T),
\end{equation}
where \begin{equation}
\label{eq:DT}
\mathcal{D}(T) = x_1(z U^r(T))^* - x_2 U(T).
\end{equation} In particular all the terms in $\mathcal{D}(T)$ have degree at least $2$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:joining}, we have
\[
U^r(A_i \odot T)- U^r(B_i \odot T) = z^{-1}U^r(T) \big(U^r(A_i)-U^r(B_i)\big).
\]
Applying Proposition \ref{prop} to the last term yields
\[U^r(A_i \odot T)- U^r(B_i \odot T) = P_iU^r(T)\frac{\Delta_0}{z}.
\]
The conclusion now follows by taking the specialization $z^n\rightarrow x_n$ in the last equation to obtain (note that $P_i$ does not depend on $z$)
\[U(A_i\odot T) - U(B_i\odot T) = P_i \left[U^r(T)(x_1z-x_2)\right]^* = P_i\mathcal{D}(T).\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{theo:YZ}]
We start by applying the deletion-contraction formula to the edges corresponding to the $\cdot$ operation in the definitions of $Y_{k,l}$ and $Z_{k,l}$; it is easy to see that
\begin{equation} \label{eq:first_difference}
U(Y_{k,l}) - U(Z_{k,l}) = U(A_k \odot A_l) U(B_k\odot B_l) - U(A_l\odot B_k)U(B_l\odot A_k),
\end{equation}
since after contracting the respective edges we get isomorphic weighted trees.
We apply Lemma~\ref{l:D} twice, to $T = A_k$ and $i=l$ first, and then to $T=B_k$ and $i=l$, and replace the terms $U(A_k\odot A_l)$ and $U(A_l\odot B_k)$ in~\eqref{eq:first_difference}. Recalling that $\odot$ is commutative and after some cancellations, we obtain
$$U(Y_{k,l})-U(Z_{k,l}) = P_l\Big( \mathcal{D}(A_k) U(B_k\odot B_l) -\mathcal{D}(B_k) U(B_l\odot A_k) \Big).$$
We use Lemma~\ref{l:D} once more, with $T=B_l$ and $i=k$, to arrive at
\begin{equation}
\label{eq:ykzk:final}
U(Y_{k,l})-U(Z_{k,l}) = P_l \Big(\big(\mathcal{D}(A_k)-\mathcal{D}(B_k)\big) U(B_l\odot A_k) - \mathcal{D}(A_k)\mathcal{D}(B_l)P_k\Big)
\end{equation}
Using \eqref{eq:DT} and Proposition~\ref{prop} we get
\[\mathcal{D}(A_k)-\mathcal{D}(B_k) = x_1P_k(z\Delta_0)^* = x_1(x_1x_3-x_2^2)P_k,\]
and substituting this into \eqref{eq:ykzk:final} yields
\begin{equation}
\label{eq:last}
U(Y_{k,l})-U(Z_{k,l}) = P_lP_k\Big((x_1^2x_3-x_1x_2^2)U(B_l\odot A_k) - \mathcal{D}(A_k)\mathcal{D}(B_l)\Big).
\end{equation}
This implies that all the terms that appear in the difference have degree at least $l+ k + 4$. Hence $Y_{k,l}$ and $Z_{k,l}$ have the same $U_{k+l+2}$-polynomial. To see that they have distinct $U_{k+l+3}$-polynomial, from \eqref{eq:last} we can deduce that the only terms of degree $l+k+4$ come from terms of degree $4$ in the difference
\[
\Big((x_1^2x_3-x_1x_2^2)U(B_l\odot A_k) - \mathcal{D}(A_k)\mathcal{D}(B_l)\Big).\]
An explicit computation of these terms yields
\[ (x_1^2x_3-x_1x_2^2)x_{n(l)+n(k)-1}-(x_1 x_{n(k)+1}-x_2x_{n(k)})(x_1x_{n(l)+1}-x_2x_{n(l)}),\]
where $n(k)$ is the number of vertices of $A_k$ (and also $B_k$). From this last equation, the conclusion follows.
\end{proof}
We may consider the following quantity:
\[\Phi(m) := \min\{l: \exists \text{ non-isomorphic trees $H,G$ of size $l$ s.t. $U_m(H)=U_m(G)$}\}.\]
\begin{proposition}
\label{prop:Phi}
We have
\[\Phi(m)\leq
\begin{cases}
6\cdot 2^{\frac{m}{2}}-2, &\text{if $m$ is even}\\
6\cdot 3\cdot 2^{\lfloor\frac{m}{2}\rfloor-1}-2,& \text{if $m$ is odd.}
\end{cases}\]
In particular $\Phi(m)$ is finite.
\end{proposition}
\begin{proof}
By Theorem \ref{theo:YZ}, we see that $\Phi(m)\leq|Y_{k,l}|$ for all $(k,l)$ such that $k+l+2=m$. It is easy to check that $|A_i|=|B_i|=3\cdot 2^i$ for all $i$. Thus,
\[|Y_{k,l}|= 2(|A_k|+|B_l|-1)=6(2^k+2^l)-2\quad\text{for all $(k,l)$}.\]
If $m=k+l+2$ is fixed, then we see that $|Y_{k,l}|$ is minimized
when $k=l=\frac{m}{2}-1$ if $m$ is even and otherwise is minimized when $k=\lfloor\frac{m}{2}\rfloor$ and $l=\lfloor\frac{m}{2}\rfloor-1$. Replacing the values of $k$ and $l$ yields the desired inequality.
\end{proof}
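The minimization in the proof is elementary and can be checked numerically. The sketch below (our own naming) compares the minimum of $|Y_{k,l}|$ over $k+l+2=m$ with the closed form, using that $6\cdot 3\cdot 2^{\lfloor m/2\rfloor-1}-2 = 9\cdot 2^{\lfloor m/2\rfloor}-2$ for odd $m$:

```python
def y_size(k, l):
    """|Y_{k,l}| = 2(|A_k| + |B_l| - 1) with |A_i| = |B_i| = 3 * 2**i."""
    return 2 * (3 * 2**k + 3 * 2**l - 1)

def phi_upper(m):
    """The bound min |Y_{k,l}| over k, l >= 0 with k + l + 2 = m (requires m >= 2)."""
    return min(y_size(k, m - 2 - k) for k in range(m - 1))

def closed_form(m):
    # 6 * 2^(m/2) - 2 for even m; 6 * 3 * 2^(floor(m/2) - 1) - 2 = 9 * 2^floor(m/2) - 2 for odd m
    return 6 * 2**(m // 2) - 2 if m % 2 == 0 else 9 * 2**(m // 2) - 2
```

For example, `phi_upper(2)` gives $10$, the size of $Y_{0,0}$ and $Z_{0,0}$.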
Observe that for $(k,l) = (0,0), (1,0), (1,1)$, the graphs $Y_{k,l}$ and $Z_{k,l}$ are the smallest examples of non-isomorphic trees with the same $U_m$-polynomial for $m=2,3,4$, respectively. This fact was verified computationally in \cite{smith2015symmetric}. This leads us to make the following conjecture.
\begin{conjecture}\label{c:YZ}
If $m$ is even, then $Y_{m/2-1,m/2-1}$ and $Z_{m/2-1,m/2-1}$ are the smallest non-isomorphic trees with the same $U_m$-polynomial and if $m$ is odd, then
the same is true for $Y_{\lfloor m/2\rfloor,\lfloor m/2\rfloor-1}$ and $Z_{\lfloor m/2\rfloor,\lfloor m/2\rfloor-1}$. In other words,
\[\Phi(m) =
\begin{cases}
6\cdot 2^{\frac{m}{2}}-2, &\text{if $m$ is even}\\
6\cdot 3\cdot 2^{\lfloor\frac{m}{2}\rfloor-1}-2,& \text{if $m$ is odd.}
\end{cases}\]
\end{conjecture}
The following proposition relates $\Phi$ with Stanley's conjecture.
\begin{proposition}
The following assertions are true:
\begin{enumerate}[a)]
\item For every $m$, Stanley's conjecture is true for trees with at most $\Phi(m)-1$ vertices.
\item Stanley's conjecture is true if and only if $\lim_m\Phi(m)=\infty$.
\item Conjecture \ref{c:YZ} implies Stanley's conjecture.
\end{enumerate}
\end{proposition}
\begin{proof}
To show a), observe that the existence of non-isomorphic trees $T$ and $T'$ of size smaller than $\Phi(m)$ with the same $U$-polynomial would contradict the definition of $\Phi(m)$. To see b), if $\lim_m\Phi(m)=\infty$, then by a) Stanley's conjecture is clearly true for all (finite) trees. For the converse, suppose that $\Phi(m)$ is uniformly bounded by $N$, and for each $m$ let $T_m,T_m'$ be two non-isomorphic trees of size at most $N$ with the same $U_m$-polynomial. Since there are finitely many pairs of trees of size at most $N$, there exist two trees $T$ and $T'$ such that $T=T_m$ and $T'=T'_m$ for infinitely many $m$. This implies that $U(T)=U(T')$, which would contradict Stanley's conjecture. This finishes the proof of b). Assertion c) follows directly from Conjecture \ref{c:YZ} and b).
\end{proof} | 3,998 | 13,506 | en |
train | 0.4994.4 | We may consider the following quantity:
\[\Phi(m) := \min\{l: \exists \text{ non-isomorphic trees $H,G$ of size $l$ s.t. $U_m(H)=U_m(G)$}\}.\]
\begin{proposition}
\label{prop:Phi}
We have
\[\Phi(m)\leq
\begin{cases}
6\cdot 2^{\frac{m}{2}}-2, &\text{if $m$ is even}\\
6\cdot 3\cdot 2^{\lfloor\frac{m}{2}\rfloor-1}-2,& \text{if $m$ is odd.}
\end{cases}\]
In particular $\Phi(m)$ is finite.
\end{proposition}
\begin{proof}
By Theorem \ref{theo:YZ}, we see that $\Phi(m)\leq|Y_{k,l}|$ for all $(k,l)$ such that $k+l+2=m$. It is easy to check that $|A_i|=|B_i|=3\cdot 2^i$ for all $i$. Thus,
\[|Y_{k,l}|= 2(|A_k|+|B_l|-1)=6(2^k+2^l)-2\quad\text{for all $(k,l)$}.\]
If $m=k+l+2$ is fixed, then we see that $|Y_{k,l}|$ is minimized
when $k=l=\frac{m}{2}-1$ if $m$ is even and otherwise is minimized when $k=\lfloor\frac{m}{2}\rfloor$ and $l=\lfloor\frac{m}{2}\rfloor-1$. Replacing the values of $k$ and $l$ yields the desired inequality.
\end{proof}
Observe that when $(k,l) \in \{ (0,0), (1,0), (1,1)\}$ (respectively), the graphs $Y_{k,l}$ and $Z_{k,l}$ are the smallest examples of non-isomorphic trees with the same $U_m$ for $m\in \{2,3,4\}$ (respectively). This fact was verified computationally in \cite{smith2015symmetric}. This leads us to make the following conjecture
\begin{conjecture}\label{c:YZ}
If $m$ is even, then $Y_{m/2-1,m/2-1}$ and $Z_{m/2-1,m/2-1}$ are the smallest non-isomorphic trees with the same $U_m$-polynomial and if $m$ is odd, then
the same is true for $Y_{\lfloor m/2\rfloor,\lfloor m/2\rfloor-1}$ and $Z_{\lfloor m/2\rfloor,\lfloor m/2\rfloor-1}$. In other words,
\[\Phi(m) =
\begin{cases}
6\cdot 2^{\frac{m}{2}}-2, &\text{if $m$ is even}\\
6\cdot 3\cdot 2^{\lfloor\frac{m}{2}\rfloor-1}-2,& \text{if $m$ is odd.}
\end{cases}\]
\end{conjecture}
The following proposition relates $\Phi$ with Stanley's conjecture.
\begin{proposition}
The following assertions are true:
\begin{enumerate}[a)]
\item For every $m$, Stanley's conjecture is true for trees with at most $\Phi(m)-1$ vertices.
\item Stanley's conjecture is true if and only if $\lim_m\Phi(m)=\infty$.
\item Conjecture \ref{c:YZ} implies Stanley's conjecture.
\end{enumerate}
\end{proposition}
\begin{proof}
To show a), observe that the existence of non-isomorphic trees $T$ and $T'$ of size smaller than $\Phi(m)$ with the same $U$-polynomial contradicts the definition of $\Phi(m)$. To see b), if $\lim_m\Phi(m)=\infty$, then by a), then clearly Stanley's conjecture is true for all (finite) trees. For the converse, suppose that $\Phi(m)$ is uniformly bounded by $N$, and let $T_m,T_m'$ be two non-isomorphic trees of size smaller or equal than $N$ with the same $U_m$-polynomial. Since there finitely many pairs of trees of size smaller or equal than $N$, it follows that there exist $T$ and $T'$ two trees such that $T=T_m$ and $T'=T'_m$ for infinitely many $m$. This implies that $U(T)=U(T')$ and this would contradict Stanley's conjecture. This finish the proof of b). Assertion c) follows directly from Conjecture \ref{c:YZ} and b).
\end{proof}
\section*{Acknowledgments}
The first and third author are partially supported by CONICYT FONDECYT Regular 1160975 and Basal PFB-03 CMM Universidad de Chile. The second author is partially supported by the Spanish
Ministerio de Economía y Competitividad project MTM2017-82166-P. A short version of this work appeared in \cite{aliste2018dmd}.
\end{document} | 1,301 | 13,506 | en |
train | 0.4995.0 | \begin{document}
\title{Generating optimal states for a homodyne Bell test}
\author{Sonja Daffer}
\email{s.daffer@imperial.ac.uk}
\author{Peter L. Knight}
\affiliation{
Blackett Laboratory,
Imperial College London,
Prince Consort Road,
London SW7 2BW,
United Kingdom
}
\date{\today}
\begin{abstract}
\noindent We present a protocol that produces a
conditionally prepared state that can be used for a Bell test
based on homodyne detection. Based on the results of Munro [PRA
1999], the state is near-optimal for Bell-inequality violations
based on quadrature-phase homodyne measurements that use
correlated photon-number states. The scheme utilizes the Gaussian
entanglement distillation protocol of Eisert \textit{et al.}
[Annals of Phys. 2004] and uses only beam splitters and
photodetection to conditionally prepare a non-Gaussian state from
a source of two-mode squeezed states with low squeezing parameter,
permitting a loophole-free test of Bell inequalities.
\\
\end{abstract}
\pacs{03.65.Ud, 42.50.Xa, 42.50.Dv, 03.65.Ta}
\maketitle
Bell's theorem is regarded by some as one of the most profound
discoveries of science in the twentieth century. Not only does it
provide a quantifiable measure of correlations stronger than any
allowed classically, which is a key resource in many quantum
information processing applications, it also addresses fundamental
questions in the foundations of quantum mechanics. In 1964, Bell
quantified Bohm's version of the Einstein, Podolsky, and Rosen
(EPR) gedanken experiment, by introducing an inequality that
provides a test of local hidden variable (LHV) models
\cite{bell1964}. A violation of Bell's inequality forces one to
conclude that, contrary to the view held by EPR, quantum mechanics
cannot be both local and real. In order to experimentally support
this conclusion in a strict sense, a Bell test that is free from
loopholes is required. Although it is still quite remarkable that
such seemingly metaphysical questions can even be put to the test
in the laboratory, a loophole-free Bell test has yet to be
achieved.
For more than three decades, numerous experiments have confirmed
the predictions of the quantum theory, thereby disproving local
realistic models as providing a correct description of physical
reality\cite{aspect1982}. However, all experiments performed to
date suffer from at least one of the two primary loopholes -- the
detection loophole and the locality loophole. The detection
loophole arises due to low detector efficiencies that may not
permit an adequate sampling of the ensemble space while the
locality loophole suggests that component parts of the
experimental apparatus that are not space-like separated could
influence each other. The majority of Bell tests have used
optical systems to measure correlations, some achieving space-like
separations but still subjected to low efficiency photodetectors
(see, \textit{e.g.}, Ref. \cite{weihs1998}). Correlations in the
properties of entangled ions were shown to violate a Bell
inequality using high efficiency detectors eliminating the
detection loophole; however, the ions were not space-like
separated \cite{rowe2001}. A major challenge that has yet to be
achieved is to experimentally realize a single Bell test that
closes these loopholes.
The ease with which optical setups address the locality loophole
coupled with the currently achievable high efficiencies ($> 0.95$)
of homodyne detectors make Bell tests using quadrature-phase
measurements good candidates for a loophole-free experiment.
Furthermore, continuous quadrature amplitudes are the optical
analog of position and momentum and more closely resemble the
original state considered by EPR. Unlike photon counting
experiments which deal with the microscopic resolution of a small
number of photons, by mixing the signal with a strong field,
homodyne measurements allow one to detect a macroscopic current
\cite{reid1997}.
In this article, we propose a test of Bell inequalities using
homodyne detection on a conditional non-Gaussian ``source" state,
prepared using only passive optics and photon detection. Events
are pre-selected -- using event-ready detection one knows with
certainty that the desired source state has been produced --
requiring no post-processing. Photon detectors are only used in
the pre-selection process and only affect the probability of
successfully creating the source state whereas the actual
correlation measurements are performed using high efficiency
homodyne detectors. The source is a correlated photon-number
state that is near-optimal for Bell tests using homodyne
detection, opening the possibility of a conclusive, loophole-free
test.
We consider a two-mode quantum state of light that can be written
as
\begin{equation} \label{eq:correlated photon state}
| \Psi \rangle = \sum_{n=0}^{\infty} c_n | n,n \rangle,
\end{equation}
which is correlated in photon number $| n,n \rangle = | n
\rangle_A \otimes | n \rangle_B$ for modes $A$ and $B$. For
example, the two-mode squeezed state $| \psi_\lambda \rangle$ has
coefficients given by $c_n=\lambda^n \sqrt{1-\lambda^2} $, where
$\lambda=\tanh(s)$ is determined by the squeezing parameter $s$
\cite{knight1985}. Such states are experimentally easy to
generate; however, because they possess a Gaussian Wigner
distribution in phase space, they are unsuitable for tests of Bell
inequalities using quadrature-phase measurements
as it is a requirement that the Wigner function possesses negative
regions \cite{bell1964}. Alternative, theoretically predicted
two-mode quantum superposition states called circle states, also
generated from vacuum fields through nondegenerate parametric
oscillation, having coefficients given by
$c_n=r^{2n}/\bigl(n!\sqrt{I_0(2r^2)}\bigr)$, do exhibit a violation for quadrature-phase
measurements with a maximum violation occurring for $r=1.12$
\cite{Gilchrist1998}. Unfortunately, unlike the two-mode squeezed
states, circle states are difficult to realize experimentally. A
recently proposed solution towards an experimentally realizable
state of light that is suitable for a homodyne Bell test is the
photon-subtracted two-mode squeezed state
\cite{Nha2004,GarciaPatron2004}, having coefficients
$c_n=\sqrt{(1-\lambda^2)^3/(1+\lambda^2)}(n+1)\lambda^n$, which
utilizes non-Gaussian operations on a Gaussian state. In this
scheme, a photon is detected from each mode of a two-mode squeezed
state and only the resulting conditional state is used for
correlation measurements in the Bell test. While the two-mode
squeezed state has a positive-everywhere Wigner function, the
conditional state after photon subtraction does not.
To date, all proposed states for a Bell test using
quadrature-phase measurements are not optimal states, meaning that
they do not produce the maximum possible violation of Bell
inequalities. The scheme presented here produces a two-mode
photon-entangled state that is near-optimal, using only beam
splitters and photon detection. The beam splitter may be described
by the unitary operator \cite{wodkiewicz1985}
\begin{equation} \label{eq:BS unitary}
U_{ab}=T^{a^\dag a}e^{-R^* b^\dag a}e^{R b
a^\dag}T^{-b^\dagger b},
\end{equation}
which describes the mixing of two modes $a$ and $b$ at a beam
splitter with transmissivity $T$ and reflectivity $R$. On-off
photon detection is described by the positive operator-valued
measure (POVM) of each detector, given by
\begin{equation}
\Pi_0 = |0 \rangle \langle 0 |, \hspace{.2in} \Pi_1 = I-|0 \rangle \langle 0
|.
\end{equation}
The on-off detectors distinguish between vacuum and the presence
of any number of photons. The procedure is event-ready, a term
introduced by Bell, in the sense that one has a classical signal
indicating whether a measurable system has been produced. The
states demonstrating a violation of local realism presented here
do not rely on the production of exotic states of light; in fact,
only a parametric source generating states with a low squeezing
parameter is required, making the procedure experimentally
feasible with current technology. As depicted by Fig. 1, there
are three parties involved: Alice, Bob, and Sophie. Sophie
prepares the source states that are sent to Alice and Bob, who
perform correlation measurements. We first describe the procedure
Sophie uses to generate the source states, which is shown by the
diagram in Fig. 2, and then discuss the measurements performed by
Alice and Bob.
\begin{figure}\label{fig:f1BellSchematic8}
\end{figure}
In the first step, two-mode squeezed states are mixed pairwise at
unbalanced beam splitters followed by the non-Gaussian operation
associated with the POVM element $\Pi_1$. Specifically, a
non-Gaussian state is generated by
\begin{equation} \label{eq:BellFockStateOperation}
( \Pi_{1,c} \otimes \Pi_{1,d})
(U_{ac} \otimes U_{bd}) |\psi_\lambda \rangle
|\psi_\lambda \rangle,
\end{equation}
where $|\psi_\lambda \rangle$ denotes the two-mode squeezed state
with $c_n=\lambda^n \sqrt{1-\lambda^2}$. For sufficiently small
$\lambda$, the operator $\Pi_1$ describing the presence of photons
at the detector approaches the rank-one projection onto the single
photon number subspace $|1\rangle\langle 1|$, which is still a
non-Gaussian operation. Under this condition, (un-normalized)
states of the form
\begin{equation} \label{eq:BellFockState}
| \psi^{(0)} \rangle = | 0,0 \rangle + \xi | 1,1 \rangle
\end{equation}
can be produced. That is, even though the output state of
(\ref{eq:BellFockStateOperation}) will in general be a mixed
state, when $\lambda \in [0,1)$ is very small, the resulting
states can be made arbitrarily close in trace-norm to an entangled
state with state vector given by Eq. (\ref{eq:BellFockState}),
provided the appropriate choice of beam splitter transmissivity
$|T(\lambda)|=|\xi-\sqrt{\xi^2+8 \lambda^2}|/4 \lambda$ is used
\cite{browne2003}. It should be emphasized that the state $|
\psi^{(0)}\rangle$, having a Bell-state form, can be generated for
arbitrary $\xi$.
It is interesting to note that the state given by Eq.
(\ref{eq:BellFockState}) does not violate a Bell inequality for
quadrature-phase measurements for any $\xi$, even when it has the
form of a maximally entangled Bell state, as was shown in Ref.
\cite{munro1999}, in which a numerical study of the optimal
coefficients for Eq. (\ref{eq:correlated photon state}) was
performed. For certain values of $\xi$, Eq.
(\ref{eq:BellFockState}) describes a state that possesses a Wigner
distribution that has negative regions, showing that negativity of
the Wigner function is a necessary but not sufficient condition
for a violation of Bell inequalities using quadrature-phase
measurements.
The second step is to combine two copies of the state given by Eq.
(\ref{eq:BellFockState}) pairwise and locally at 50:50 beam
splitters described by the unitary operator of Eq. (\ref{eq:BS
unitary}). Detectors that distinguish only between the absence and
presence of photons are placed at the output port of each beam
splitter and when no photons are detected, the state is retained.
The resulting un-normalized state is
\begin{equation}
|\psi^{(i+1)} \rangle = \langle 0,0 | U_{ac} \otimes U_{bd}
| \psi^{(i)} \rangle | \psi^{(i)} \rangle = \sum_{n=0}^\infty
c_n^{(i+1)}
|n,n \rangle,
\end{equation}
where the coefficients are given by \cite{opatrny2000}
\begin{equation}
c_n^{(i+1)}=2^{-n} \sum_{r=0}^n
\left(
\begin{array}{c}
n \\
r
\end{array}
\right)
c_{r}^{(i)} c_{n-r}^{(i)}.
\end{equation}
It is optimal to iterate this procedure three times so that Sophie
prepares the state $|\psi^{(3)} \rangle$. Each iteration leads to
a Gaussification of the initial state
\cite{browne2003,eisert2004}, which builds up correlated photon
number pairs in the sum of Eq. (\ref{eq:correlated photon state}).
Further iterations would Gaussify the state too much and destroy
the nonlocal features for phase space measurements.
The final step is to reduce the vacuum contribution by subtracting
a photon from each mode of the state $|\psi^{(3)} \rangle$,
obtaining a state proportional to $ab |\psi^{(3)} \rangle$. This
is done by mixing each mode with vacuum at two beam splitters with
low reflectivity. A very low reflectivity results in single
photon counts at each detector with a high probability when a
detection event has occurred. Thus, the unitary operation
describing the action of the beam splitter is expanded to second
order in the reflectivity and the state is conditioned on the
result $N=1$ at each detector. The final photon-subtracted state,
given by
\begin{equation}
|\psi^{(3)} \rangle_{PS}= {\mathcal N} \sum_{n=0}^\infty (n+1)c^{(3)}_{n+1}
|n,n \rangle,
\end{equation}
where ${\mathcal N}$ is a normalization factor, is a near-optimal
state for homodyne detection. Figure 3 compares the previously
proposed states -- the circle state and the photon-subtracted
two-mode squeezed state -- with the near-optimal state
$|\psi^{(3)} \rangle_{PS}$, as well as the numerically optimized
state in Ref. \cite{munro1999}. The conditioning procedure alters
the photon number distribution of the input state and behaves
similarly to entanglement distillation.
\begin{figure}\label{fig:f2BellTree}
\end{figure} | 3,801 | 7,340 | en |
train | 0.4995.1 | \begin{figure}\label{fig:f1BellSchematic8}
\end{figure}
In the first step, two-mode squeezed states are mixed pairwise at
unbalanced beam splitters followed by the non-Gaussian operation
associated with the POVM element $\Pi_1$. Specifically, a
non-Gaussian state is generated by
\begin{equation} \label{eq:BellFockStateOperation}
( \Pi_{1,c} \otimes \Pi_{1,d})
(U_{ac} \otimes U_{bd}) |\psi_\lambda \rangle
|\psi_\lambda \rangle,
\end{equation}
where $|\psi_\lambda \rangle$ denotes the two-mode squeezed state
with $c_n=\lambda^n \sqrt{1-\lambda^2}$. For sufficiently small
$\lambda$, the operator $\Pi_1$ describing the presence of photons
at the detector approaches the rank-one projection onto the single
photon number subspace $|1\rangle\langle 1|$, which is still a
non-Gaussian operation. Under this condition, (un-normalized)
states of the form
\begin{equation} \label{eq:BellFockState}
| \psi^{(0)} \rangle = | 0,0 \rangle + \xi | 1,1 \rangle
\end{equation}
can be produced. That is, even though the output state of
(\ref{eq:BellFockStateOperation}) will in general be a mixed
state, when $\lambda \in [0,1)$ is very small, the resulting
states can be made arbitrarily close in trace-norm to an entangled
state with state vector given by Eq. (\ref{eq:BellFockState}),
provided the appropriate choice of beam splitter transmittivity
$|T(\lambda)|=|\xi-\sqrt{\xi^2+8 \lambda^2}|/4 \lambda$ is used
\cite{browne2003}. It should be emphasized that the state $|
\psi^{(0)}\rangle$, having a Bell-state form, can be generated for
arbitrary $\xi$.
It is interesting to note that the state given by Eq.
(\ref{eq:BellFockState}) does not violate a Bell inequality for
quadrature-phase measurements for any $\xi$, even when it has the
form of a maximally entangled Bell state, as was shown in Ref.
\cite{munro1999}, in which a numerical study of the optimal
coefficients for Eq. (\ref{eq:correlated photon state}) was
performed. For certain values of $\xi$, Eq.
(\ref{eq:BellFockState}) describes a state that possesses a Wigner
distribution that has negative regions, showing that negativity of
the Wigner function is a necessary but not sufficient condition
for a violation of Bell inequalities using quadrature-phase
measurements.
The second step is to combine two copies of the state given by Eq.
(\ref{eq:BellFockState}) pairwise and locally at 50:50 beam
splitters described by the unitary operator of Eq. (\ref{eq:BS
unitary}). Detectors that distinguish only between the absence and
presence of photons are placed at the output port of each beam
splitter and when no photons are detected, the state is retained.
The resulting un-normalized state is
\begin{equation}
|\psi^{(i+1)} \rangle = \langle 0,0 | U_{ac} \otimes U_{bd}
| \psi^{(i)} \rangle | \psi^{(i)} \rangle = \sum_{n=0}^\infty
c_n^{(i+1)}
|n,n \rangle,
\end{equation}
where the coefficients are given by \cite{opatrny2000}
\begin{equation}
c_n^{(i+1)}=2^{-n} \sum_{r=0}^n
\left(
\begin{array}{c}
n \\
r
\end{array}
\right)
c_{r}^{(i)} c_{n-r}^{(i)}.
\end{equation}
It is optimal to iterate this procedure three times so that Sophie
prepares the state $|\psi^{(3)} \rangle$. Each iteration leads to
a Gaussification of the initial state
\cite{browne2003,eisert2004}, which builds up correlated photon
number pairs in the sum of Eq. (\ref{eq:correlated photon state}).
Further iterations would Gaussify the state too much and destroy
the nonlocal features for phase space measurements.
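The Gaussification recursion above is easy to iterate numerically. The following Python sketch uses the illustrative values $\xi = 1/\sqrt{2}$ and a truncation at $n_{\max}=8$; note that $c_n^{(i+1)}$ depends only on $c_0^{(i)},\dots,c_n^{(i)}$, so the truncation introduces no error in the coefficients that are kept.

```python
from math import comb

def gaussify(c, iterations):
    # c_n^(i+1) = 2^{-n} * sum_{r=0}^{n} C(n,r) c_r^(i) c_{n-r}^(i)
    for _ in range(iterations):
        c = [2.0 ** -n * sum(comb(n, r) * c[r] * c[n - r]
                             for r in range(n + 1))
             for n in range(len(c))]
    return c

xi = 2 ** -0.5                      # illustrative value of xi
c0 = [1.0, xi] + [0.0] * 7          # |psi^(0)> = |0,0> + xi |1,1>
c3 = gaussify(c0, 3)                # three iterations, as in the text
```

With $c_0 = 1$, both $c_0$ and $c_1$ are left invariant by the map, while the higher coefficients build up toward the geometric values of a Gaussian (two-mode squeezed) state; for instance $c_2^{(3)} = 7\xi^2/8$, approaching $\xi^2$.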
The final step is to reduce the vacuum contribution by subtracting
a photon from each mode of the state $|\psi^{(3)} \rangle$,
obtaining a state proportional to $ab |\psi^{(3)} \rangle$. This
is done by mixing each mode with vacuum at two beam splitters with
low reflectivity. A very low reflectivity results in single
photon counts at each detector with a high probability when a
detection event has occurred. Thus, the unitary operation
describing the action of the beam splitter is expanded to second
order in the reflectivity and the state is conditioned on the
result $N=1$ at each detector. The final photon-subtracted state,
given by
\begin{equation}
|\psi^{(3)} \rangle_{PS}= {\mathcal N} \sum_{n=0}^\infty (n+1)c^{(3)}_{n+1}
|n,n \rangle,
\end{equation}
where ${\mathcal N}$ is a normalization factor, is a near-optimal
state for homodyne detection. Figure 3 compares the previously
proposed states -- the circle state and the photon-subtracted
two-mode squeezed state -- with the near-optimal state
$|\psi^{(3)} \rangle_{PS}$, as well as the numerically optimized
state in Ref. \cite{munro1999}. The conditioning procedure alters
the photon number distribution of the input state and behaves
similarly to entanglement distillation.
\begin{figure}\label{fig:f2BellTree}
\end{figure}
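The photon-subtraction map $c_n \mapsto {\mathcal N}\,(n+1)\,c_{n+1}$ is a one-line transformation on the coefficient list, as the following sketch shows; the input coefficients are illustrative placeholders, not those of $|\psi^{(3)}\rangle$.

```python
from math import sqrt

def photon_subtract(c):
    # c_n -> N (n+1) c_{n+1}: coefficients of a b |psi>, renormalized
    ps = [(n + 1) * c[n + 1] for n in range(len(c) - 1)]
    norm = sqrt(sum(a * a for a in ps))
    return [a / norm for a in ps]

c = [0.7, 0.5, 0.3, 0.15, 0.05]     # illustrative real coefficients
ps = photon_subtract(c)

# the vacuum contribution is suppressed relative to the one-photon term
assert ps[0] / ps[1] < c[0] / c[1]
```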
Although the procedure used to create the correlated photon source
is probabilistic, with the success probability determined by the
amount of two-mode squeezing and the transmittivity of the
unbalanced beam splitters, it is event-ready -- Sophie has a
record of when the source state was successfully prepared. Low
efficiency photon detectors used in the state preparation only
affect the success probability and do not constitute a detection
loophole. Each mode of the source state $|\psi^{(3)}\rangle_{PS}$
is distributed to a separate location where correlation
measurements using high efficiency homodyne detectors are
performed by the two distant (space-like separated) parties, Alice
and Bob. Alice and Bob each mix their light modes with independent
local oscillators (LO) and randomly measure the relative phase
between the beam and the LO, taking into account the timing
constraint that ensures fair sampling. Alice measures the rotated
quadrature $x_{\theta}^A=x^A \cos \theta + p^A \sin \theta $ and
Bob measures the rotated quadrature $x_{\phi}^B=x^B \cos \phi +
p^B \sin \phi.$ Correlations are considered for two choices of
relative phase: $\theta_1$ or $\theta_2$ for Alice and $\phi_1$ or
$\phi_2$ for Bob. Finally, Alice, Bob and Sophie compare their
experimental results to determine when the source state was
successfully generated and which correlation measurements to use
for the Bell inequalities.
Two types of Bell inequalities will be examined -- the
Clauser-Horne-Shimony-Holt (CHSH) and Clauser-Horne (CH)
inequalities \cite{clauser1969}. To apply these inequalities,
which are for dichotomous variables, the measurement outcomes for
Alice and Bob are discretized by assigning the value $+1$ if $x
\geq 0$ and $-1$ if $x<0$. Let $P^{AB}_{++}(\theta,\phi)$ denote
the joint probability that Alice and Bob realize the value $+1$
upon measuring $\theta$ and $\phi$, respectively and
$P^{A}_{+}(\theta)$ denote the probability that Alice realizes the
value $+1$ regardless of Bob's outcome, with similar notation for
the remaining possible outcomes. From local-hidden-variable (LHV) theories, the following
joint probability distribution can be derived:
\begin{equation}
P^{AB}_{ij}(\theta,\phi)=\int \rho(\lambda) p_i^A(\theta,\lambda)
p_j^B(\phi,\lambda) d \lambda
\end{equation}
with $i,j=\pm$, by postulating the existence of hidden variables
$\lambda$ and independence of outcomes for Alice and Bob. Quantum
mechanically, the joint probability distribution is given by the
Born rule $P(x^A_\theta,x^B_\phi)=|\langle x^A_\theta,x^B_\phi
|\psi^{(3)} \rangle_{PS}|^2.$ The probability for Alice and Bob
to both obtain the value $+1$ is
$P^{AB}_{++}(\theta,\phi)=\int_0^\infty \int_0^\infty
P(x^A_\theta,x^B_\phi) d x^A_\theta d x^B_\phi$. The joint
distribution is symmetric and a function of only the sum of the
angles $\chi=\theta+\phi$ permitting the identification
$P^{AB}_{++}(\theta,\phi)=P^{AB}_{++}(\chi)=P^{AB}_{++}(-\chi)$
and $P^{AB}_{++}(\chi)=P^{AB}_{--}(\chi)$. The marginal
distributions $P_+^{A}(\theta)=P_+^{B}(\phi)=1/2$ are independent
of the angle. Given the probability distributions, the predictions
of quantum theory can be tested with those of LHV theory.
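As a check of this construction, the quadrant probability $P^{AB}_{++}$ at $\theta=\phi=0$ can be evaluated numerically for the simpler state of Eq. (\ref{eq:BellFockState}). The Python sketch below assumes the quadrature convention $x=(a+a^\dagger)/\sqrt{2}$ for the oscillator wavefunctions and the illustrative value $\xi=1/\sqrt{2}$; the midpoint-rule grid is likewise illustrative.

```python
from math import exp, pi, sqrt, isclose

phi0 = lambda x: pi ** -0.25 * exp(-x * x / 2)                   # n = 0
phi1 = lambda x: pi ** -0.25 * sqrt(2.0) * x * exp(-x * x / 2)   # n = 1

xi = 2 ** -0.5
norm = 1.0 / sqrt(1 + xi * xi)

def psi(xa, xb):
    # position representation of (|0,0> + xi |1,1>) / sqrt(1 + xi^2)
    return norm * (phi0(xa) * phi0(xb) + xi * phi1(xa) * phi1(xb))

# P_{++} at theta = phi = 0: Born-rule integral over the positive quadrant
n, L = 400, 6.0
h = L / n
pts = [(i + 0.5) * h for i in range(n)]
p_pp = sum(psi(xa, xb) ** 2 for xa in pts for xb in pts) * h * h

# closed form for this state: 1/4 + xi / (pi (1 + xi^2))
assert isclose(p_pp, 0.25 + xi / (pi * (1 + xi * xi)), rel_tol=1e-3)
```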
First, we consider the Bell inequality of the CHSH type, which
arises from linear combination of correlation functions having the
form
\begin{equation} \label{eq:bellcombo}
\emph{B}= E(\theta_1,\phi_1)+E(\theta_1,\phi_2)+E(\theta_2,\phi_1)-E(\theta_2,\phi_2),
\end{equation}
where $E(\theta_i,\phi_j)$ is the correlation function for Alice
measuring $\theta_i$ and Bob measuring $\phi_j$. These
correlations are in turn determined by
\begin{equation} \label{eq:correlation function}
E(\theta,\phi)=P^{AB}_{++}(\theta,\phi)+P^{AB}_{--}(\theta,\phi)
-P^{AB}_{+-}(\theta,\phi)-P^{AB}_{-+}(\theta,\phi),
\end{equation}
obtained through the many measurements that infer the
distributions $P^{AB}_{ij}(\theta,\phi)$. With the aid of the
symmetry and angle factorization properties, the CHSH inequality
takes the simple form $\emph{B}= 3 E(\chi)-E(3 \chi)$ with LHV
models demanding that $|\emph{B}| \leq 2$. The strongest
violation of the inequality is obtained for the value
$\chi=\pi/4$, thus, a good choice of relative phases for Alice and
Bob's measurements is $\theta_1=0$, $\theta_2=\pi/2,$
$\phi_1=-\pi/4$, and $\phi_2=\pi/4$. Using homodyne detection
with optimal correlated photon number states, the maximum
achievable violation is 2.076 whereas using the source states
presented here, a Bell inequality violation of $\emph{B}=2.071$ is
achievable.
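The reduction of Eq. (\ref{eq:bellcombo}) to $\emph{B} = 3E(\chi)-E(3\chi)$ for the quoted phase choices follows from the symmetry properties alone. The sketch below verifies this with an illustrative even function of $\chi$ (a cosine, which stands in for, and is not, the homodyne $E(\chi)$).

```python
from math import cos, pi, isclose

def chsh(E, t1, t2, p1, p2):
    # B = E(t1,p1) + E(t1,p2) + E(t2,p1) - E(t2,p2)
    return E(t1, p1) + E(t1, p2) + E(t2, p1) - E(t2, p2)

f = cos                          # illustrative even function of chi
E = lambda t, p: f(t + p)        # correlations depend only on theta + phi

t1, t2, p1, p2 = 0.0, pi / 2, -pi / 4, pi / 4
B = chsh(E, t1, t2, p1, p2)
assert isclose(B, 3 * f(pi / 4) - f(3 * pi / 4))   # B = 3E(chi) - E(3chi)
```

For this ideal cosine correlation the combination evaluates to $2\sqrt{2}$, the Tsirelson bound, whereas the binned homodyne correlations of the text reach only 2.071.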
Let us also consider the Clauser-Horne (strong) Bell inequality
formed by the linear combination
\begin{equation} \label{eq:chcombo}
\frac{P^{AB}_{++}(\theta_1,\phi_1)+P^{AB}_{++}(\theta_1,\phi_2)+
P^{AB}_{++}(\theta_2,\phi_1)-P^{AB}_{++}(\theta_2,\phi_2)}
{P_+^{A}(\theta_2)+P_+^{B}(\phi_1)},
\end{equation}
denoted by $\emph{S}$, for which local realism imposes the bound
$|\emph{S}| \leq 1$. Again, using the properties of the
probability distributions, the simplification $\emph{S}= 3
P^{AB}_{++}(\chi)-P^{AB}_{++}(3 \chi)$ is possible. With the
following choice of the phases: $\theta_1=0,$ $\theta_2=\pi/2$,
$\phi_1=-\pi/4$, and $\phi_2=\pi/4$, a violation of
$\emph{S}=1.018$ is attainable given the states in Eq.
(\ref{eq:BellFockState}) with parameter value $\xi=1/\sqrt{2}$,
which is quite close to the maximum value of 1.019 achieved by the
numerically optimized states in Ref.~\cite{munro1999}.
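The same symmetry argument reduces the CH ratio to $\emph{S}= 3P^{AB}_{++}(\chi)-P^{AB}_{++}(3\chi)$. The sketch below assumes the standard CH sign pattern (matching that of the CHSH combination) and an illustrative even $P^{AB}_{++}(\chi)$ with marginals $1/2$; it is not the homodyne distribution of the text.

```python
from math import cos, pi, isclose

def ch_ratio(P, PA, PB, t1, t2, p1, p2):
    # CH ratio: [P(t1,p1) + P(t1,p2) + P(t2,p1) - P(t2,p2)] / [PA(t2) + PB(p1)]
    num = P(t1, p1) + P(t1, p2) + P(t2, p1) - P(t2, p2)
    return num / (PA(t2) + PB(p1))

g = lambda chi: 0.25 * (1 + cos(chi))   # illustrative even P_{++}(chi)
P = lambda t, p: g(t + p)
half = lambda angle: 0.5                # marginals independent of the angle

S = ch_ratio(P, half, half, 0.0, pi / 2, -pi / 4, pi / 4)
assert isclose(S, 3 * g(pi / 4) - g(3 * pi / 4))   # S = 3P(chi) - P(3chi)
```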
\begin{figure}\label{fig:OptimalStateCompare}
\end{figure}
We have shown how it is possible to prepare a near-optimal
state for a Bell test that uses quadrature-phase homodyne
measurements. Only very low squeezed states, passive optical
elements and photon detectors are required, making the procedure
experimentally feasible at present.
An initial state with an everywhere-positive Wigner function was
transformed by both non-Gaussian and Gaussian operations into a
state that exhibits a strong violation of both the CHSH and CH
Bell inequalities. Efforts are currently being made towards an
experimental realization of entanglement distillation for Gaussian
states. The procedure presented here offers the opportunity for
another possible experiment, as it utilizes a subset of an
entanglement distillation procedure. Of course, any observed
violation of a Bell inequality is sensitive to inefficiencies in
the experiment that tend to deplete correlations. A full analysis
involving dark counts and detection inefficiencies as addressed in
Ref. \cite{eisert2004} is necessary. Near-optimal states for
homodyne detection may allow a larger window for experimental
imperfections and offer the opportunity for a conclusive,
loophole-free Bell test.
We thank Bill Munro and Stefan Scheel for useful comments and
discussions. This work was supported by the U.S. National Science
Foundation under the program NSF01-154, by the U.K. Engineering
and Physical Sciences Research Council, and by the European Union.
\end{document} | 3,539 | 7,340 | en |
train | 0.4996.0 | \begin{document}
\title{\rqmtitle}
\preprint{Version 2}
\author{Ed Seidewitz}
\email{seidewitz@mailaps.org}
\affiliation{14000 Gulliver's Trail, Bowie MD 20720 USA}
\date{\today}
\pacs{03.65.Pm, 03.65.Fd, 03.30.+p, 11.10.Ef}
\begin{abstract}
Earlier work presented a spacetime path formalism for
relativistic quantum mechanics arising naturally from the
fundamental principles of the Born probability rule,
superposition, and spacetime translation invariance. The
resulting formalism can be seen as a foundation for a number
of previous parameterized approaches to relativistic quantum
mechanics in the literature. Because time is treated similarly
to the three space coordinates, rather than as an evolution
parameter, such approaches have proved particularly useful in
the study of quantum gravity and cosmology. The present paper
extends the foundational spacetime path formalism to include
massive, nonscalar particles of any (integer or half-integer)
spin. This is done by generalizing the principle of
translational invariance used in the scalar case to the
principle of full Poincar\'e invariance, leading to a
formulation for the nonscalar propagator in terms of a path
integral over the Poincar\'e group. Once the difficulty of the
non-compactness of the component Lorentz group is dealt with,
the subsequent development is remarkably parallel to the
scalar case. This allows the formalism to retain a clear
probabilistic interpretation throughout, with a natural
reduction to non-relativistic quantum mechanics closely
related to the well known generalized Foldy-Wouthuysen
transformation.
\end{abstract}
\maketitle
\section{Introduction} \label{sect:intro}
Reference \onlinecite{seidewitz06a} presented a foundational formalism
for relativistic quantum mechanics based on path integrals over
parametrized paths in spacetime. As discussed there, such an approach
is particularly suited for further study of quantum gravity and
cosmology, and it can be given a natural interpretation in terms of
decoherent histories \cite{seidewitz06b}. However, the formalism as
given in \refcite{seidewitz06a} is limited to scalar particles. The
present paper extends this spacetime path formalism to non-scalar
particles, although the present work is still limited to massive
particles.
There have been several approaches proposed in the literature for
extending the path integral formulation of the relativistic scalar
propagator \cite{feynman50,feynman51,teitelboim82,hartle92} to the
case of non-scalar particles, particularly spin-1/2 (see, for example,
\refcites{bordi80, henneaux82, barut84, mannheim85, forte05}). These
approaches generally proceed by including in the path integral
additional variables to represent higher spin degrees of freedom.
However, there is still a lack of a comprehensive path integral
formalism that treats all spin values in a consistent way, in the
spirit of the classic work of Weinberg \cite{weinberg64a, weinberg64b,
weinberg69} for traditional quantum field theory. Further, most
earlier references assume that the path integral approach is basically
a reformulation of an \emph{a priori} traditional Hamiltonian
formulation of quantum mechanics, rather than being foundational in
its own right.
The approach to be considered here extends the approach from
\refcite{seidewitz06a} to non-scalar particles by expanding the
configuration space of a particle to be the Poincar\'{e} group (also
known as the inhomogeneous Lorentz group). That is, rather than just
considering the position of a particle, the configuration of a
particle will be taken to be both a position \emph{and} a Lorentz
transformation. Choosing various representations of the group of
Lorentz transformations then allows all spins to be handled in a
consistent way.
The idea of using a Lorentz group variable to represent spin degrees
of freedom is not new. For example, Hanson and Regge \cite{hanson74}
describe the physical configuration of a relativistic spherical top as
a Poincar\'e element whose degrees of freedom are then restricted.
Similarly, Hannibal \cite{hannibal97} proposes a full canonical
formalism for classical spinning particles using the Lorentz group for
the spin configuration space, which is then quantized to describe both
spin and isospin. Rivas \cite{rivas89, rivas94, rivas01} has made a
comprehensive study in which an elementary particle is defined as ``a
mechanical system whose kinematical space is a homogeneous space of
the Poincar\'e group''.
Rivas actually proposes quantization using path integrals, but he does
not provide an explicit derivation of the non-scalar propagator by
evaluating such an integral. A primary goal of this paper is to provide
such a derivation.
Following a similar approach to \refcite{seidewitz06a}, the form of
the path integral for non-scalar particles will be deduced from the
fundamental principles of the Born probability rule, superposition,
and Poincar\'e invariance. After a brief overview in
\sect{sect:background} of some background for this approach,
\sect{sect:non-scalar:propagator} generalizes the postulates from
\refcite{seidewitz06a} to the non-scalar case, leading to a path
integral over an appropriate Lagrangian function on the Poincar\'e
group variables.
The major difficulty with evaluating this path integral is the
non-compactness of the Lorentz group. Previous work on evaluating
Lorentz group path integrals (going back to \refcite{bohm87}) is based
on the irreducible unitary representations of the group. This is
awkward, since, for a non-compact group, these representations are
continuous \cite{vilenkin68} and the results do not generalize easily
to the covering group $SL(2,\cmplx)$ that includes half-integral
spins.
Instead, we will proceed by considering a Wick rotation to Euclidean
space, which replaces the non-compact Lorentz group $SO(3,1)$ by the
compact group $SO(4)$ of rotations in four dimensions, in which it is
straightforward to evaluate the path integral. It will then be argued
that, even though the $SO(4)$ propagator cannot be assumed the same as
the true Lorentz propagator, the propagators should be the same when
restricted to the common subgroup $SO(3)$ of rotations in three
dimensions. This leads directly to considerations of the spin
representations of $SO(3)$.
Accordingly, \sect{sect:non-scalar:euclidean} develops the Euclidean
$SO(4)$ propagator and \sect{sect:non-scalar:spin} then considers the
reduction to the three-dimensional rotation group and its spin
representations. However, rather than using the usual Wigner approach
of reduction along the momentum vector \cite{wigner39}, we will reduce
along an independent time-like four-vector \cite{piron78, horwitz82}.
This allows for a very parallel development to \refcite{seidewitz06a}
for antiparticles in \sect{sect:non-scalar:antiparticles} and for a
clear probability interpretation in
\sect{sect:non-scalar:probability}.
Interactions of non-scalar particles can be included in the formalism
by a straightforward generalization of the approach given in
\refcite{seidewitz06a}. \Sect{sect:non-scalar:interactions} gives an
overview of this, though full details are not included where they are
substantially the same as the scalar case.
Natural units with $\hbar = 1 = c$ are used throughout the following
and the metric has a signature of $(- + + +)$. | 2,035 | 35,173 | en |
train | 0.4996.1 | \section{Background}
\label{sect:background}
Path integrals were originally introduced by Feynman \cite{feynman48,
feynman65} to represent the non-relativistic propagation kernel
$\kersym(\threex_{1} - \threex_{0}; t_{1}-t_{0})$. This kernel gives the
transition amplitude for a particle to propagate from the position
$\threex_{0}$ at time $t_{0}$ to the position $\threex_{1}$ at time
$t_{1}$. That is, if $\psi(\threex_{0}; t_{0})$ is the probability
amplitude for the particle to be at position $\threex_{0}$ at time
$t_{0}$, then the amplitude for it to propagate to another position
at a later time is
\begin{equation*}
\psi(\threex; t) = \intthree \xz\,
\kersym(\threex - \threex_{0}; t-t_{0})
\psi(\threex_{0}; t_{0}) \,.
\end{equation*}
A specific \emph{path} of a particle in space is given by a
position function $\threevec{q}(t)$ parametrized by time (or, in
coordinate form, the three functions $q^{i}(t)$ for $i = 1,2,3$).
Now consider all possible paths starting at $\threevec{q}(t_{0}) =
\threex_{0}$ and ending at $\threevec{q}(t_{1}) = \threex_{1}$. The
path integral form for the propagation kernel is then given by
integrating over all these paths as follows:
\begin{equation} \label{eqn:A0a}
\kersym(\threex_{1} - \threex_{0}; t_{1}-t_{0})
= \zeta \intDthree q\,
\delta^{3}(\threevec{q}(t_{1}) - \threex_{1})
\delta^{3}(\threevec{q}(t_{0}) - \threex_{0})
\me^{\mi S[\threevec{q}]} \,,
\end{equation}
where the phase function $S[\threevec{q}]$ is given by the classical
action
\begin{equation*}
S[\threevec{q}] \equiv
\int_{t_{0}}^{t_{1}} \dt\, L(\dot{\threevec{q}}(t)) \,,
\end{equation*}
with $L(\dot{\threevec{q}})$ being the non-relativistic Lagrangian in
terms of the three-velocity $\dot{\threevec{q}} \equiv
\dif\threevec{q} / \dt$.
In \eqn{eqn:A0a}, the notation $\Dthree q$ indicates a path integral
over the three functions $q^{i}(t)$. The Dirac delta functions
constrain the paths integrated over to start and end at the
appropriate positions. Finally, $\zeta$ is a normalization factor,
including any limiting factors required to keep the path integral
finite (which are sometimes incorporated into the integration measure
$\Dthree q$ instead).
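The composition property underlying the time-sliced path integral -- propagating over $[t_0,t_1]$ and then over $[t_1,t_2]$ reproduces the kernel over $[t_0,t_2]$ -- is easy to check numerically. The sketch below uses the Wick-rotated (Euclidean) free-particle kernel to avoid oscillatory integrals; the mass, times, and integration grid are illustrative.

```python
from math import exp, pi, sqrt, isclose

def kernel(x, t, m=1.0):
    # Wick-rotated free-particle kernel, hbar = 1
    return sqrt(m / (2 * pi * t)) * exp(-m * x * x / (2 * t))

def compose(x2, x0, t01, t12, n=4001, L=20.0):
    # Chapman-Kolmogorov composition: integrate over the intermediate point
    dx = 2 * L / (n - 1)
    return sum(kernel(x2 - (-L + i * dx), t12) * kernel(-L + i * dx - x0, t01)
               for i in range(n)) * dx

lhs = compose(0.7, -0.3, 0.5, 1.5)   # propagate 0.5, then 1.5, units of time
rhs = kernel(1.0, 2.0)               # single propagation over 2.0
assert isclose(lhs, rhs, rel_tol=1e-6)
```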
As later noted by Feynman himself \cite{feynman51}, it is possible to
generalize the path integral approach to the relativistic case. To do
this, it is necessary to consider paths in \emph{spacetime}, rather
than just space. Such a path is given by a four dimensional position
function $q(\lambda)$, parametrized by an invariant \emph{path
parameter} $\lambda$ (or, in coordinate form, the four functions
$\qmul$, for $\mu = 0,1,2,3$).
The propagation amplitude for a free scalar particle in spacetime is
given by the Feynman propagator
\begin{equation} \label{eqn:A0b}
\prop = -\mi(2\pi)^{-4}\intfour p\,
\frac{\me^{\mi p\cdot(x-\xz)}}
{p^{2}+m^{2}-\mi\epsilon} \,.
\end{equation}
It can be shown (in addition to \refcite{feynman51}, see also, e.g.,
\refcites{seidewitz06a, teitelboim82}) that this propagator can be
expressed in path integral form as
\begin{equation} \label{eqn:A0c}
\prop = \int_{\lambdaz}^{\infty} \dif\lambda\,
\zeta \intDfour q\,
\delta^{4}(q(\lambda) - x) \delta^{4}(q(\lambdaz) - \xz)
\me^{\mi S[q]} \,,
\end{equation}
where
\begin{equation*}
S[q] \equiv
\int_{\lambdaz}^{\lambda} \dif\lambda'\, L(\qdot(\lambda')) \,,
\end{equation*}
and $L(\qdot)$ is now the relativistic Lagrangian in terms of the
four-velocity $\qdot \equiv \dif q / \dl$.
Notice that the form of the relativistic expression differs from the
non-relativistic one by having an additional integration over
$\lambda$. This is necessary, since the propagator must, in the end,
depend only on the change in position, independent of $\lambda$.
However, as noted in \refcite{seidewitz06a}, \eqn{eqn:A0c} can be
written as
\begin{equation} \label{eqn:A0d}
\prop = \int_{\lambdaz}^{\infty} \dif\lambda\, \kerneld \,,
\end{equation}
where the \emph{relativistic kernel}
\begin{equation} \label{eqn:A0e}
\kerneld = \zeta \intDfour q\,
\delta^{4}(q(\lambda) - x) \delta^{4}(q(\lambdaz) - \xz)
\me^{\mi S[q]}
\end{equation}
now has a form entirely parallel with the non-relativistic case. The
relativistic kernel can be considered to represent propagation over
paths of the specific length $\lambda - \lambdaz$, while \eqn{eqn:A0d}
then integrates over all possible path lengths.
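The structure of \eqn{eqn:A0d} -- a propagator obtained by integrating a fixed-length kernel over all path lengths -- mirrors the Schwinger proper-time identity $1/(p^2+m^2) = \int_0^\infty \dif\lambda\, \me^{-\lambda(p^2+m^2)}$ in its Wick-rotated form. A numerical sketch with illustrative momentum and mass values:

```python
from math import exp, isclose

def schwinger(p2, m2, n=100000, lam_max=40.0):
    # midpoint-rule evaluation of the proper-time integral
    d = lam_max / n
    return sum(exp(-(i + 0.5) * d * (p2 + m2)) for i in range(n)) * d

p2, m2 = 1.3, 0.8
lhs = schwinger(p2, m2)
assert isclose(lhs, 1.0 / (p2 + m2), rel_tol=1e-6)
```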
Given the parallel with the non-relativistic case, define the
\emph{parametrized} probability amplitudes $\psixl$ such that
\begin{equation*}
\psixl = \intfour \xz\, \kerneld \psixlz \,.
\end{equation*}
Parametrized amplitudes were introduced by Stueckelberg
\cite{stueckelberg41, stueckelberg42}, and parametrized approaches to
relativistic quantum mechanics have been developed by a number of
subsequent authors \cite{nambu50, schwinger51, cooke68, horwitz73,
collins78, piron78, fanchi78, fanchi83, fanchi93}. The approach is
developed further in the context of spacetime paths of scalar
particles in \refcite{seidewitz06a}.
In the traditional presentation, however, it is not at all clear
\emph{why} the path integrals of \eqns{eqn:A0a} and \eqref{eqn:A0c}
should reproduce the expected results for non-relativistic and
relativistic propagation. The phase functional $S$ is simply chosen to
have the form of the classical action, such that this works. In
contrast, \refcite{seidewitz06a} makes a more fundamental argument
that the exponential form of \eqn{eqn:A0e} is a consequence of
translation invariance in Minkowski spacetime. This allows for
development of the spacetime path formalism as a foundational
approach, rather than just a re-expression of already known results.
The full invariant group of Minkowski spacetime is not the
translation group, though, but the Poincar\'e group consisting of both
translations \emph{and} Lorentz transformations. This leads one to
consider the implications of applying the argument of
\refcite{seidewitz06a} to the full Poincar\'e group.
Now, while a translation applies to the position of a particle, a
Lorentz transformation applies to its \emph{frame of reference}. Just
as we can consider the position $x$ of a particle to be a translation
by $x$ from some fixed origin $O$, we can consider the frame of
reference of a particle to be given by a Lorentz transformation
$\Lambda$ from a fixed initial frame $I$. The full configuration of a
particle is then given by $(x,\Lambda)$, for a position $x$ and a
Lorentz transformation $\Lambda$---that is, the configuration space of
the particle is also just the Poincar\'e group. The application of an
arbitrary Poincar\'e transformation $(\Delta x,\Lambda')$ to a
particle configuration $(x,\Lambda)$ results in the transformed
configuration $(\Lambda' x + \Delta x, \Lambda' \Lambda)$.
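The composition rule just stated can be exercised directly: applying two transformations in sequence agrees with applying their product $(\Lambda_2'\Delta x_1 + \Delta x_2,\ \Lambda_2'\Lambda_1')$. The Python sketch below uses spatial rotations purely for simplicity; any Lorentz matrices would do, and all numerical values are illustrative.

```python
from math import cos, sin, isclose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_xy(a):
    # spatial rotation in the x-y plane (a valid Lorentz transformation)
    return [[1.0, 0, 0, 0],
            [0, cos(a), -sin(a), 0],
            [0, sin(a), cos(a), 0],
            [0, 0, 0, 1.0]]

def act(g, cfg):
    # (dx, L) . (x, M) = (L x + dx, L M)
    (dx, L), (x, M) = g, cfg
    return ([dx[i] + sum(L[i][j] * x[j] for j in range(4)) for i in range(4)],
            matmul(L, M))

def compose(g2, g1):
    # product transformation: first g1, then g2
    (dx2, L2), (dx1, L1) = g2, g1
    return ([dx2[i] + sum(L2[i][j] * dx1[j] for j in range(4)) for i in range(4)],
            matmul(L2, L1))

I4 = [[1.0 * (i == j) for j in range(4)] for i in range(4)]
cfg = ([0.0, 1.0, 2.0, 3.0], I4)
g1 = ([1.0, 0.0, 0.0, 0.0], rot_xy(0.3))
g2 = ([0.0, -2.0, 0.5, 0.0], rot_xy(1.1))

xa, Ma = act(g2, act(g1, cfg))
xb, Mb = act(compose(g2, g1), cfg)
assert all(isclose(xa[i], xb[i], abs_tol=1e-12) for i in range(4))
assert all(isclose(Ma[i][j], Mb[i][j], abs_tol=1e-12)
           for i in range(4) for j in range(4))
```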
A particle path will now be a path through the Poincar\'e group, not
just through spacetime. Such a path is given by both a position
function $q(\lambda)$ \emph{and} a Lorentz transformation function
$M(\lambda)$ (in coordinate form, a Lorentz transformation is
represented by a matrix, so there are \emph{sixteen} functions
$\hilo{M}{\mu}{\nu}(\lambda)$, for $\mu,\nu = 0,1,2,3$). The
remainder of this paper will re-develop the spacetime path formalism
introduced in \refcite{seidewitz06a} in terms of this expanded
conception of particle paths. As we will see, this naturally leads to
a model for non-scalar particles. | 2,391 | 35,173 | en |
train | 0.4996.2 | \section{The Non-scalar Propagator}
\label{sect:non-scalar:propagator}
This section develops the path-integral form of the non-scalar
propagator from the conception of Poincar\'e group particle paths
introduced in the previous section. The argument parallels that of
\refcite{seidewitz06a} for the scalar case, motivating a set of
postulates that lead to the appropriate path integral form.
To begin, let $\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz)$
be the transition amplitude for a particle to go from the
configuration $(\xz, \Lambdaz)$ at $\lambdaz$ to the configuration
$(x, \Lambda)$ at $\lambda$. By Poincar\'e invariance, this amplitude
only depends on the relative quantities $x-\xz$ and
$\Lambda\Lambdaz^{-1}$. By parameter shift invariance, it only depends
on $\lambda-\lambdaz$. Similarly to the scalar case (\eqn{eqn:A0d}),
the full propagator is given by integrating over the kernel path
length parameter:
\begin{equation} \label{eqn:A1a}
\propsym(x-\xz,\Lambda\Lambdaz^{-1})
= \int_{0}^{\infty} \dl\,
\kersym(x-\xz,\Lambda\Lambdaz^{-1};\lambda) \,.
\end{equation}
The fundamental postulate of the spacetime path approach is that a
particle's transition amplitude between two points is a superposition
of the transition amplitudes for all possible paths between those
points. Let the functional $\propsym[q, M]$ give the transition
amplitude for a path $q(\lambda), M(\lambda)$. Then the transition
amplitude $\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz)$
must be given by a path integral over $\propsym[q, M]$ for all paths
starting at $(\xz, \Lambdaz)$ and ending at $(x, \Lambda)$ with the
parameter interval $[\lambdaz, \lambda]$.
\begin{postulate}
For a free, non-scalar particle, the transition amplitude
$\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz)$ is given
by the superposition of path transition amplitudes $\propsym[q,
M]$, for all possible Poincar\'e path functions $q(\lambda),
M(\lambda)$ beginning at $(\xz, \Lambdaz)$ and ending at $(x,
\Lambda)$, parametrized over the interval $[\lambdaz, \lambda]$.
That is,
\begin{multline} \label{eqn:A1}
\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz)
= \zeta \intDfour q\, \intDsix M\, \\
\delta^{4}(q(\lambda) - x)
\delta^{6}(M(\lambda)\Lambda^{-1} - I)
\delta^{4}(q(\lambdaz) - \xz)
\delta^{6}(M(\lambdaz)\Lambdaz^{-1} - I)
\propsym[q, M] \,,
\end{multline}
where $\zeta$ is a normalization factor as required to keep the
path integral finite.
\end{postulate}
As previously noted, the notation $\Dfour q$ in \eqn{eqn:A1} indicates
a path integral over the four path functions $\qmul$. Similarly,
$\Dsix M$ indicates a path integral over the Lorentz group functions
$\hilo{M}{\mu}{\nu}(\lambda)$. While a Lorentz transformation matrix
$[\hilo{\Lambda}{\mu}{\nu}]$ has sixteen elements, any such matrix is
constrained by the condition
\begin{equation} \label{eqn:A2}
\hilo{\Lambda}{\alpha}{\mu}\eta_{\alpha\beta}
\hilo{\Lambda}{\beta}{\nu} = \eta_{\mu\nu} \,,
\end{equation}
where $[\eta_{\mu\nu}]=\mathrm{diag}(-1,1,1,1)$ is the flat Minkowski
space metric tensor. This equation is symmetric, so it introduces ten
constraints, leaving only six actual degrees of freedom for a Lorentz
transformation. The Lorentz group is thus six dimensional, as
indicated by the notation $\Dsix$ in the path integral.
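The constraint of \eqn{eqn:A2} can be checked numerically; the sketch below verifies $\Lambda^{T}\eta\Lambda = \eta$, with $\eta = \mathrm{diag}(-1,1,1,1)$ as in the text, for a pure boost with an illustrative rapidity.

```python
from math import cosh, sinh, isclose

ETA = [[-1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def boost_x(chi):
    # pure boost along x with rapidity chi
    return [[cosh(chi), sinh(chi), 0, 0],
            [sinh(chi), cosh(chi), 0, 0],
            [0, 0, 1.0, 0],
            [0, 0, 0, 1.0]]

L = boost_x(0.9)                        # illustrative rapidity
Lt = [list(r) for r in zip(*L)]         # transpose
check = matmul(Lt, matmul(ETA, L))      # should reproduce ETA

assert all(isclose(check[i][j], ETA[i][j], abs_tol=1e-12)
           for i in range(4) for j in range(4))
```

Since the constrained matrix $\Lambda^{T}\eta\Lambda$ is symmetric, only ten of its sixteen entries are independent, leaving the six degrees of freedom counted in the text.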
To further deduce the form of $\propsym[q, M]$, consider a family of
particle paths $q_{\xz,\Lambdaz}, M_{\xz,\Lambdaz}$, indexed by the
starting configuration $(\xz, \Lambdaz)$, such that
\begin{equation*}
q_{\xz,\Lambdaz}(\lambda)
= \xz + \Lambdaz \tilde{q}(\lambda)
\quad \text{and} \quad
M_{\xz,\Lambdaz}(\lambda) = \Lambdaz \tilde{M}(\lambda) \,,
\end{equation*}
where $\tilde{q}(\lambdaz) = 0$ and $\tilde{M}(\lambdaz) = I$. These
paths are constructed by, in effect, applying the Poincar\'e
transformation given by $(\xz, \Lambdaz)$ to the specific functions
$\tilde{q}$ and $\tilde{M}$ defining the family. (Note how the
ability to do this depends on the particle configuration space being
the same as the Poincar\'e transformation group.)
Consider, though, that the particle propagation embodied in
$\propsym[q,M]$ must be Poincar\'e invariant. That is, $\propsym[q',M']
= \propsym[q,M]$ for any $q',M'$ related to $q,M$ by a fixed Poincar\'e
transformation. Thus, all members of the family $q_{\xz,\Lambdaz},
M_{\xz,\Lambdaz}$, which are all related to $\tilde{q}, \tilde{M}$ by
Poincar\'e transformations, must have the same amplitude
$\propsym[q_{\xz,\Lambdaz}, M_{\xz,\Lambdaz}] = \propsym[\tilde{q},
\tilde{M}]$, depending only on the functions $\tilde{q}$ and
$\tilde{M}$.
Suppose that a probability amplitude $\psi(\xz, \Lambdaz)$ is given
for a particle to be in an initial configuration $(\xz,\Lambdaz)$
and that the transition amplitude is known to be $\propsym[\tilde{q},
\tilde{M}]$ for specific relative configuration functions $\tilde{q},
\tilde{M}$. Then, the probability amplitude for the particle to
traverse a specific path $(q_{\xz,\Lambdaz}(\lambda),
M_{\xz,\Lambdaz}(\lambda))$ from the family defined by the functions
$\tilde{q}, \tilde{M}$ should be just $\propsym[q_{\xz,\Lambdaz},
M_{\xz,\Lambdaz}] \psi(\xz, \Lambdaz) = \propsym[\tilde{q}, \tilde{M}]
\psi(\xz, \Lambdaz)$.
However, the very meaning of being on a specific path is that the
particle must propagate from the given starting configuration to the
specific ending configuration of the path. Further, since the paths in
the family are parallel in configuration space, the ending
configuration is uniquely determined by the starting configuration.
Therefore, the probability for reaching the ending configuration must
be the same as the probability for having started out at the given
initial configuration $(\xz,\Lambdaz)$. That is,
\begin{equation*}
\sqr{\propsym[\tilde{q}, \tilde{M}]\psi(\xz,\Lambdaz)}
= \sqr{\psi(\xz,\Lambdaz)} \,.
\end{equation*}
But, since $\propsym[\tilde{q}, \tilde{M}]$ is independent of $\xz$ and
$\Lambdaz$, we must have $\sqr{\propsym[q, M]} = 1$ in general.
This argument therefore suggests the following postulate.
\begin{postulate}
For any path $(q(\lambda),M(\lambda))$, the transition amplitude
$\propsym[q,M]$ preserves the probability density for the particle
along the path. That is, it satisfies
\begin{equation} \label{eqn:A3}
\sqr{\propsym[q,M]} = 1 \,.
\end{equation}
\end{postulate}
The requirements of \eqn{eqn:A3} and Poincar\'e invariance mean that
$\propsym[q,M]$ must have the exponential form
\begin{equation} \label{eqn:A4}
\propsym[q,M] = \me^{\mi S[\tilde{q}, \tilde{M}]} \,,
\end{equation}
for some phase functional $S$ of the \emph{relative} path functions
\begin{equation*}
\tilde{q}(\lambda) \equiv M(\lambdaz)^{-1}(q(\lambda)-q(\lambdaz))
\quad \text{and} \quad
\tilde{M}(\lambda) \equiv M(\lambdaz)^{-1}M(\lambda) \,.
\end{equation*}
As discussed in \refcite{seidewitz06a}, we are actually justified in
replacing these relative functions with path derivatives under the
path integral, even though the path functions $q(\lambda)$ and
$M(\lambda)$ may not themselves be differentiable in general. This is
because a path integral is defined as the limit of discretized
approximations in which path derivatives are approximated as mean
values, and the limit is then taken over the path integral as a
whole, not each derivative individually. Thus, even though the
individual path derivative limits may not be defined, the path
integral has a well-defined value so long as the overall path
integral limit is defined.
However, the quantities $\tilde{q}$ and $\tilde{M}$ are expressed in a
frame that varies with the $M(\lambdaz)$ of the specific path under
consideration. We wish instead to construct differentials in the fixed
``laboratory'' frame of the $q(\lambda)$. Transforming $\tilde{q}$ and
$\tilde{M}$ to this frame gives
\begin{equation*}
M(\lambdaz)\tilde{q}(\lambda) = q(\lambda)-q(\lambdaz)
\quad \text{and} \quad
M(\lambdaz)\tilde{M}(\lambda)M(\lambdaz)^{-1}
= M(\lambda)M(\lambdaz)^{-1} \,.
\end{equation*}
Clearly, the corresponding derivative for $q$ is simply
$\qdot(\lambda) \equiv \dif q / \dl$, which is the tangent vector to
the path $q(\lambda)$. The derivative for $M$ needs to be treated a
little more carefully. Since the Lorentz group is a \emph{Lie group}
(that is, a continuous, differentiable group), the tangent to a path
$M(\lambda)$ in the Lorentz group space is given by an element of the
corresponding \emph{Lie algebra} \cite{warner83, frankel97}. For the
Lorentz group, the proper such tangent is given by the matrix
$\Omega(\lambda) = \dot{M}(\lambda)M(\lambda)^{-1}$, where
$\dot{M}(\lambda) \equiv \dif M / \dl$.
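To make this concrete, here is a small numerical check (illustrative only; the particular path $M(\lambda)$, a rotation composed with a boost, is an arbitrary choice, and NumPy is assumed). It verifies that $\Omega = \dot{M}M^{-1}$ along a Lorentz-group path becomes antisymmetric once an index is lowered with $\eta$, which is just the differentiated form of \eqn{eqn:A2}.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def M(lam):
    # A sample Lorentz-group path: rotation by angle lam about z,
    # composed with a boost of rapidity 0.3*lam along x.
    c, s = np.cos(lam), np.sin(lam)
    rot = np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])
    ch, sh = np.cosh(0.3 * lam), np.sinh(0.3 * lam)
    boost = np.array([[ch, sh, 0, 0], [sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])
    return boost @ rot

lam, h = 0.7, 1e-6
Mdot = (M(lam + h) - M(lam - h)) / (2 * h)   # numerical dM/dlambda
Omega = Mdot @ np.linalg.inv(M(lam))         # tangent Omega = Mdot M^{-1}

# With one index lowered by eta, Omega_{mu nu} is antisymmetric.
Omega_low = eta @ Omega
asym_err = np.max(np.abs(Omega_low + Omega_low.T))
print(asym_err)  # approximately 0, up to finite-difference error
```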
Together, the quantities $(\qdot, \Omega)$ form a tangent along the
path in the full Poincar\'e group space. We can then take the
arguments of the phase functional in \eqn{eqn:A4} to be $(\qdot,
\Omega)$. Substituting this into \eqn{eqn:A1} gives
\begin{multline} \label{eqn:A5}
\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz)
= \zeta \intDfour q\, \intDsix M\, \\
\delta^{4}(q(\lambda) - x)
\delta^{6}(M(\lambda)\Lambda^{-1} - I)
\delta^{4}(q(\lambdaz) - \xz)
\delta^{6}(M(\lambdaz)\Lambdaz^{-1} - I)
\me^{\mi S[\qdot, \Omega]} \,,
\end{multline}
which reflects the typical form of a Feynman sum over paths.
Now, by dividing a path $(q(\lambda), M(\lambda))$ into two paths at
some arbitrary parameter value $\lambda$ and propagating over each
segment, one can see that
\begin{equation}
S[\qdot, \Omega;\lambda_{1},\lambdaz]
= S[\qdot, \Omega;\lambda_{1},\lambda]
+ S[\qdot, \Omega;\lambda,\lambdaz] \,,
\end{equation}
where $S[\qdot, \Omega;\lambda',\lambda]$ denotes the value of
$S[\qdot,\Omega]$ for the path parameter range restricted to
$[\lambda,\lambda']$. Using this property to build the total value of
$S[\qdot,\Omega]$ from infinitesimal increments leads to the following result
(whose full proof is a straightforward generalization of the proof
given in \refcite{seidewitz06a} for the scalar case).
\begin{theorem}[Form of the Phase Functional]
The phase functional $S$ must have the form
\begin{equation*}
S[\qdot,\Omega] = \int^{\lambda_{1}}_{\lambdaz} \dl' \,
L[\qdot,\Omega;\lambda'] \,,
\end{equation*}
where the parametrization domain is $[\lambdaz,\lambda_{1}]$ and
$L[\qdot, \Omega;\lambda]$ depends only on $\qdot$, $\Omega$ and
their higher derivatives evaluated at $\lambda$.
\end{theorem} | 3,487 | 35,173 | en |
train | 0.4996.3 | This argument therefore suggests the following postulate.
\begin{postulate}
For any path $(q(\lambda),M(\lambda))$, the transition amplitude
$\propsym[q,M]$ preserves the probability density for the particle
along the path. That is, it satisfies
\begin{equation} \label{eqn:A3}
\sqr{\propsym[q,M]} = 1 \,.
\end{equation}
\end{postulate}
The requirements of \eqn{eqn:A3} and Poincar\'e invariance mean that
$\propsym[q,M]$ must have the exponential form
\begin{equation} \label{eqn:A4}
\propsym[q,M] = \me^{\mi S[\tilde{q}, \tilde{M}]} \,,
\end{equation}
for some phase functional $S$ of the \emph{relative} path functions
\begin{equation*}
\tilde{q}(\lambda) \equiv M(\lambdaz)^{-1}(q(\lambda)-q(\lambdaz))
\quad \text{and} \quad
\tilde{M}(\lambda) \equiv M(\lambdaz)^{-1}M(\lambda) \,.
\end{equation*}
As discussed in \refcite{seidewitz06a}, we are actually justified in
replacing these relative functions with path derivatives under the
path integral, even though the path functions $q(\lambda)$ and
$M(\lambda)$ may not themselves be differentiable in general. This is
because a path integral is defined as the limit of discretized
approximations in which path derivatives are approximated as mean
values, and the limit is then taken over the path integral as a
whole, not each derivative individually. Thus, even though the
individual path derivative limits may not be defined, the path
integral has a well-defined value so long as the overall path
integral limit is defined.
However, the quantities $\tilde{q}$ and $\tilde{M}$ are expressed in a
frame that varies with the $M(\lambdaz)$ of the specific path under
consideration. We wish instead to construct differentials in the fixed
``laboratory'' frame of the $q(\lambda)$. Transforming $\tilde{q}$ and
$\tilde{M}$ to this frame gives
\begin{equation*}
M(\lambdaz)\tilde{q}(\lambda) = q(\lambda)-q(\lambdaz)
\quad \text{and} \quad
M(\lambdaz)\tilde{M}(\lambda)M(\lambdaz)^{-1}
= M(\lambda)M(\lambdaz)^{-1} \,.
\end{equation*}
Clearly, the corresponding derivative for $q$ is simply
$\qdot(\lambda) \equiv \dif q / \dl$, which is the tangent vector to
the path $q(\lambda)$. The derivative for $M$ needs to be treated a
little more carefully. Since the Lorentz group is a \emph{Lie group}
(that is, a continuous, differentiable group), the tangent to a path
$M(\lambda)$ in the Lorentz group space is given by an element of the
corresponding \emph{Lie algebra} \cite{warner83, frankel97}. For the
Lorentz group, the proper such tangent is given by the matrix
$\Omega(\lambda) = \dot{M}(\lambda)M(\lambda)^{-1}$, where
$\dot{M}(\lambda) \equiv \dif M / \dl$.
Together, the quantities $(\qdot, \Omega)$ form a tangent along the
path in the full Poincar\'e group space. We can then take the
arguments of the phase functional in \eqn{eqn:A4} to be $(\qdot,
\Omega)$. Substituting this into \eqn{eqn:A1} gives
\begin{multline} \label{eqn:A5}
\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz)
= \zeta \intDfour q\, \intDsix M\, \\
\delta^{4}(q(\lambda) - x)
\delta^{6}(M(\lambda)\Lambda^{-1} - I)
\delta^{4}(q(\lambdaz) - \xz)
\delta^{6}(M(\lambdaz)\Lambdaz^{-1} - I)
\me^{\mi S[\qdot, \Omega]} \,,
\end{multline}
which reflects the typical form of a Feynman sum over paths.
Now, by dividing a path $(q(\lambda), M(\lambda))$ into two paths at
some arbitrary parameter value $\lambda$ and propagating over each
segment, one can see that
\begin{equation}
S[\qdot, \Omega;\lambda_{1},\lambdaz]
= S[\qdot, \Omega;\lambda_{1},\lambda] +
Ê S[\qdot, \Omega;\lambda,\lambdaz] \,,
\end{equation}
where $S[\qdot, \Omega;\lambda',\lambda]$ denotes the value of
$S[\qdot,\Omega]$ for the path parameter range restricted to
$[\lambda,\lambda']$. Using this property to build the total value of
$S[\qdot,\Omega]$ from infinitesimal increments leads to the following result
(whose full proof is a straightforward generalization of the proof
given in \refcite{seidewitz06a} for the scalar case).
\begin{theorem}[Form of the Phase Functional]
The phase functional $S$ must have the form
\begin{equation*}
S[\qdot,\Omega] = \int^{\lambda_{1}}_{\lambdaz} \dl' \,
L[\qdot,\Omega;\lambda'] \,,
\end{equation*}
where the parametrization domain is $[\lambdaz,\lambda_{1}]$ and
$L[\qdot, \Omega;\lambda]$ depends only on $\qdot$, $\Omega$ and
their higher derivatives evaluated at $\lambda$.
\end{theorem}
Clearly, the functional $L[\qdot,\Omega; \lambda]$ plays the traditional role
of the Lagrangian. The simplest non-trivial form for this functional
would be for it to depend only on $\qdot$ and $\Omega$ and no higher
derivatives. Further, suppose that it separates into uncoupled parts
dependent on $\qdot$ and $\Omega$:
\begin{equation*}
L[\qdot,\Omega; \lambda] =
L_{q}[\qdot;\lambda] + L_{M}[\Omega; \lambda] \,.
\end{equation*}
The path integral of \eqn{eqn:A5} then factors into independent parts
in $q$ and $M$, such that
\begin{equation} \label{eqn:A5a}
\kersym(x-\xz, \Lambda\Lambdaz^{-1}; \lambda-\lambdaz) =
\kerneld \kersym(\Lambda\Lambdaz^{-1}; \lambda-\lambdaz) \,.
\end{equation}
If we take $L_{q}$ to have the classical Lagrangian form
\begin{equation*}
L_{q}[\qdot;\lambda] = L_{q}(\qdot(\lambda))
= \frac{1}{4}\qdot(\lambda)^{2} - m^{2} \,,
\end{equation*}
for a particle of mass $m$, then the path integral in $q$ can be
evaluated to give \cite{seidewitz06a,teitelboim82}
\begin{equation} \label{eqn:A5b}
\kerneld = (2\pi)^{-4}\intfour p\, \me^{\mi p\cdot(x-\xz)}
\me^{-\mi(\lambda-\lambdaz)(p^{2}+m^{2})} \,.
\end{equation}
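The additivity of the phase functional is reflected in a composition property of this kernel: since the kernel is a pure phase in momentum space, propagating for $\lambda_{1}$ and then $\lambda_{2}$ equals propagating once for $\lambda_{1}+\lambda_{2}$. A minimal numerical illustration in a $1{+}0$-dimensional analogue (the grid, mass, and packet are arbitrary choices; NumPy assumed):

```python
import numpy as np

# 1+0-dimensional analogue of the parametrized kernel: in momentum
# space it acts as the phase exp(-i*lam*(p^2 + m^2)).
N, dx, m = 1024, 0.05, 1.0
x = (np.arange(N) - N // 2) * dx
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi0 = np.exp(-x**2)  # initial packet

def evolve(psi, lam):
    # Convolution with the kernel in x-space = phase multiplication in p-space.
    return np.fft.ifft(np.exp(-1j * lam * (p**2 + m**2)) * np.fft.fft(psi))

a = evolve(evolve(psi0, 0.3), 0.5)  # lambda_1 then lambda_2
b = evolve(psi0, 0.8)               # lambda_1 + lambda_2 at once
comp_err = np.max(np.abs(a - b))
print(comp_err)  # approximately 0, up to roundoff
```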
Similarly, take $L_{M}$ to be a Lorentz-invariant scalar function of
$\Omega(\lambda)$. $\Omega$ is an antisymmetric matrix (this can be
shown by differentiating the constraint \eqn{eqn:A2}), so the scalar
$\tr(\Omega) = \hilo{\Omega}{\mu}{\mu} = 0$. The next simplest choice
is
\begin{equation*}
L_{M}[\Omega;\lambda] = L_{M}(\Omega(\lambda))
= \frac{1}{2}\tr(\Omega(\lambda)\Omega(\lambda)\T)
= \frac{1}{2}\Omega^{\mu\nu}(\lambda)
\Omega_{\mu\nu}(\lambda) \,.
\end{equation*}
\begin{postulate}
For a free non-scalar particle of mass $m$, the Lagrangian
function is given by
\begin{equation*}
L(\qdot,\Omega) = L_{q}(\qdot) + L_{M}(\Omega) \,,
\end{equation*}
where
\begin{equation*}
L_{q}(\qdot) = \frac{1}{4}\qdot^{2} - m^{2}
\end{equation*}
and
\begin{equation*}
L_{M}(\Omega) = \frac{1}{2}\tr(\Omega\Omega\T) \,.
\end{equation*}
\end{postulate}
Evaluating the path integral in $M$ is complicated by the fact that
the Lorentz group is not \emph{compact}, and integration over the
group is not, in general, bounded. The Lorentz group is denoted
$SO(3,1)$ for the three plus signs and one minus sign of the Minkowski
metric $\eta$ in the defining pseudo-orthogonality condition
\eqn{eqn:A2}. It is the minus sign on the time component of $\eta$
that leads to the characteristic Lorentz boosts of special relativity.
But since such boosts are parametrized by the boost velocity,
integration over this sector of the Lorentz group is unbounded. This is
in contrast to the three-dimensional rotation subgroup $SO(3)$ of the
Lorentz group, which is parametrized by rotation angles that are bounded.
To avoid this problem, we will \emph{Wick rotate} \cite{wick50} the
time axis in complex space. This replaces the physical $t$ coordinate
with $\mi t$, turning the minus sign in the metric to a plus sign,
resulting in the normal Euclidean metric $\mathrm{diag}(1,1,1,1)$. The
symmetry group of Lorentz transformations in Minkowski space then
corresponds to the symmetry group $SO(4)$ of rotations in
four-dimensional Euclidean space. The group $SO(4)$ \emph{is} compact,
and the path integration over $SO(4)$ can be done \cite{bohm87}.
Rather than dividing into boost and rotational parts, like the Lorentz
group, $SO(4)$ instead divides into two $SO(3)$ subgroups of rotations
in three dimensions. Actually, rather than $SO(3)$ itself, it is more
useful to consider its universal covering group $SU(2)$, the group of
two-dimensional unitary matrices, because $SU(2)$ allows for
representations with half-integral spin \cite{weyl50, weinberg95,
frankel97}. (The covering group $SU(2) \times SU(2)$ for $SO(4)$ in
Euclidean space corresponds to the covering group $SL(2,\cmplx)$ of
two-dimensional complex matrices for the Lorentz group $SO(3,1)$ in
Minkowski space.)
Typically, Wick rotations have been used to simplify the evaluation of
path integrals parametrized in time, like the non-relativistic
integral of \eqn{eqn:A0a}. In this case, replacing $t$ by $\mi t$
causes the exponent in the integrand of the path integral to
become real. Unlike this case, the exponent in the integrand of a
spacetime path integral remains imaginary, since the Wick rotation
does not affect the path parameter $\lambda$. Nevertheless, the path
integral can be evaluated, giving the following result (proved in the
Appendix).
\begin{theorem}[Evaluation of the $SO(4)$ Path Integral]
Consider the path integral
\begin{multline} \label{eqn:A6a}
\kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz)
= \euc{\eta} \intDsix \ME\,
\delta^{6}(\ME(\lambda)\LambdaE^{-1}-I)
\delta^{6}(\ME(\lambdaz)\LambdaEz^{-1}-I) \\
\exp \left[
\mi\int^{\lambda}_{\lambdaz}\dl'\,
\frac{1}{2} \tr(\OmegaE(\lambda')
\OmegaE(\lambda')\T)
\right]
\end{multline}
over the six dimensional group $SO(4) \sim SU(2) \times SU(2)$,
where $\OmegaE(\lambda')$ is the element of the Lie algebra $so(4)$
tangent to the path $\ME(\lambda)$ at $\lambda'$. This path integral
may be evaluated to get
\begin{multline} \label{eqn:A6}
\kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz) \\
= \sum_{\ell_{A},\ell_{B}}
\me^{-\mi( \Delta m_{\ell_{A}}^{2}
+ \Delta m_{\ell_{B}}^{2})
(\lambda - \lambdaz)}
(2\ell_{A}+1)(2\ell_{B}+1)
\chi^{(\ell_{A},\ell_{B})}(\LambdaE\LambdaEz^{-1}) \,,
\end{multline}
where the summation over $\ell_{A}$ and $\ell_{B}$ is from $0$ to
$\infty$ in steps of $1/2$, $\Delta m_{\ell}^{2} = \ell(\ell+1)$
and $\chi^{(\ell_{A},\ell_{B})}$ is the group character for the
$(\ell_{A},\ell_{B})$ $SU(2) \times SU(2)$ group representation.
\end{theorem}
The result of \eqn{eqn:A6} is in terms of the \emph{representations}
of the covering group $SU(2) \times SU(2)$. A (matrix) representation
$L$ of a group assigns to each group element $g$ a matrix $D^{(L)}(g)$
that respects the group operation, that is, such that
$D^{(L)}(g_{1}g_{2}) = D^{(L)}(g_{1}) D^{(L)}(g_{2})$. The
\emph{character function} $\chi^{(L)}$ for the representation $L$ of a
group is a function from the group to the reals such that
\begin{equation*}
\chi^{(L)}(g) \equiv \tr(D^{(L)}(g)) \,.
\end{equation*}
The group $SU(2)$ has the well known \emph{spin representations},
labeled by spins $\ell = 0, 1/2, 1, 3/2, \ldots$ \cite{weyl50,
weinberg95} (for example, spin 0 is the trivial scalar representation,
spin 1/2 is the spinor representation and spin 1 is the vector
representation). A $(\ell_{A},\ell_{B})$ representation of $SU(2)
\times SU(2)$ then corresponds to a spin-$\ell_{A}$ representation for
the first $SU(2)$ component and a spin-$\ell_{B}$ representation for
the second $SU(2)$ component.
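The characters entering \eqn{eqn:A6} obey orthogonality relations that are used repeatedly below. As an illustrative check (assuming NumPy; the quadrature grid is an arbitrary choice), the following sketch verifies numerically that the $SU(2)$ characters $\chi^{(\ell)}$ of distinct spins are orthonormal under the class-function (Weyl) form of the normalized Haar measure.

```python
import numpy as np

def chi(l, theta):
    # Character of the spin-l SU(2) representation on the conjugacy
    # class with rotation angle theta: tr D^(l) = sum_{m=-l..l} e^{i m theta}.
    ms = np.arange(-l, l + 1)
    return np.sum(np.exp(1j * np.outer(theta, ms)), axis=1).real

# On class functions, the normalized SU(2) Haar measure reduces to
# (1/pi) sin^2(theta/2) dtheta for theta in [0, 2*pi].
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
w = np.sin(theta / 2) ** 2 / np.pi
dtheta = theta[1] - theta[0]

def inner(l1, l2):
    return np.sum(chi(l1, theta) * chi(l2, theta) * w) * dtheta

i_half = inner(0.5, 0.5)   # expect 1
i_one = inner(1.0, 1.0)    # expect 1
i_mix = inner(0.5, 1.5)    # expect 0
print(i_half, i_one, i_mix)
```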
Of course, it is not immediately clear that this result for $SO(4)$
applies directly to $SO(3,1)$. In some cases, it can be shown that the
evolution propagator for a non-compact group is, in fact, the same as
the propagator for a related compact group. Unfortunately, the
relationship between $SO(4)$ and $SO(3,1)$ (in which an odd number,
three, of the six generators of $SO(4)$ are multiplied by $\mi$ to get
the boost generators for $SO(3,1)$) is such that the evolution
propagator of the non-compact group does not coincide with that of the
compact group \cite{krausz00}. | 4,014 | 35,173 | en |
train | 0.4996.4 | Rather than dividing into boost and rotational parts, like the Lorentz
group, $SO(4)$ instead divides into two $SO(3)$ subgroups of rotations
in three dimensions. Actually, rather than $SO(3)$ itself, it is more
useful to consider its universal covering group $SU(2)$, the group of
two-dimensional unitary matrices, because $SU(2)$ allows for
representations with half-integral spin \cite{weyl50, weinberg95,
frankel97}. (The covering group $SU(2) \times SU(2)$ for $SO(4)$ in
Euclidean space corresponds to the covering group $SL(2,\cmplx)$ of
two-dimensional complex matrices for the Lorentz group $SO(3,1)$ in
Minkowski space.)
Typically, Wick rotations have been used to simplify the evaluation of
path integrals parametrized in time, like the non-relativistic
integral of \eqn{eqn:A0a}. In this case, replacing $t$ by $\mi t$
results in the exponent in the integrand of the path integral to
become real. Unlike this case, the exponent in the integrand of a
spacetime path integral remains imaginary, since the Wick rotation
does not affect the path parameter $\lambda$. Nevertheless, the path
integral can be evaluated, giving the following result (proved in the
Appendix).
\begin{theorem}[Evaluation of the $SO(4)$ Path Integral]
Consider the path integral
\begin{multline} \label{eqn:A6a}
\kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz)
= \euc{\eta} \intDsix \ME\,
\delta^{6}(\ME(\lambda)\LambdaE^{-1}-I)
\delta^{6}(\ME(\lambdaz)\LambdaEz^{-1}-I) \\
\exp \left[
\mi\int^{\lambda}_{\lambdaz}\dl'\,
\frac{1}{2} \tr(\OmegaE(\lambda')
\OmegaE(\lambda')\T)
\right]
\end{multline}
over the six dimensional group $SO(4) \sim SU(2) \times SU(2)$,
where $\OmegaE(\lambda')$ is the element of the Lie algebra $so(4)$
tangent to the path $\ME(\lambda)$ at $\lambda'$. This path integral
may be evaluated to get
\begin{multline} \label{eqn:A6}
\kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz) \\
= \sum_{\ell_{A},\ell_{B}}
\exp^{-\mi( \Delta m_{\ell_{A}}^{2}
+ \Delta m_{\ell_{B}}^{2})
(\lambda - \lambdaz)}
(2\ell_{A}+1)(2\ell_{B}+1)
\chi^{(\ell_{A}\ell_{B})}(\LambdaE\LambdaEz^{-1}) \,,
\end{multline}
where the summation over $\ell_{A}$ and $\ell_{B}$ is from $0$ to
$\infty$ in steps of $1/2$, $\Delta m_{\ell}^{2} = \ell(\ell+1)$
and $\chi^{(\ell_{A},\ell_{B})}$ is the group character for the
$(\ell_{A},\ell_{B})$ $SU(2) \times SU(2)$ group representation.
\end{theorem}
The result of \eqn{eqn:A6} is in terms of the \emph{representations}
of the covering group $SU(2) \times SU(2)$. A (matrix) representation
$L$ of a group assigns to each group element $g$ a matrix $D^{(L)}(g)$
that respects the group operation, that is, such that
$D^{(L)}(g_{1}g_{2}) = D^{(L)}(g_{1}) D^{(L)}(g_{2})$. The
\emph{character function} $\chi^{(L)}$ for the representation $L$ of a
group is a function from the group to the reals such that
\begin{equation*}
\chi^{(L)}(g) \equiv \tr(D^{(L)}(g)) \,.
\end{equation*}
The group $SU(2)$ has the well known \emph{spin representations},
labeled by spins $\ell = 0, 1/2, 1, 3/2, \ldots$ \cite{weyl50,
weinberg95} (for example, spin 0 is the trivial scalar representation,
spin 1/2 is the spinor representation and spin 1 is the vector
representation). A $(\ell_{A},\ell_{B})$ representation of $SU(2)
\times SU(2)$ then corresponds to a spin-$\ell_{A}$ representation for
the first $SU(2)$ component and a spin-$\ell_{B}$ representation for
the second $SU(2)$ component.
Of course, it is not immediately clear that this result for $SO(4)$
applies directly to $SO(3,1)$. In some cases, it can be shown that the
evolution propagator for a non-compact group is, in fact, the same as
the propagator for a related compact group. Unfortunately, the
relationship between $SO(4)$ and $SO(3,1)$ (in which an odd number,
three, of the six generators of $SO(4)$ are multiplied by $\mi$ to get
the boost generators for $SO(3,1)$) is such that the evolution
propagator of the non-compact group does not coincide with that of the
compact group \cite{krausz00}.
Nevertheless, $SO(4)$ and $SO(3,1)$ both have compact $SO(3)$
subgroups, which are isomorphic. Therefore, the restriction of the
$SO(4)$ propagator to its $SO(3)$ subgroup should correspond to the
restriction of the $SO(3,1)$ propagator to its $SO(3)$ subgroup. This
will prove sufficient for our purposes. In the next section, we will
continue to freely work with the Wick rotated Euclidean space and the
$SO(4)$ propagator as necessary. To show clearly when this is being
done, quantities affected by Wick rotation will be given a subscript
$E$, as in \eqn{eqn:A6}. | 1,552 | 35,173 | en |
train | 0.4996.5 | \section{The Euclidean Propagator} \label{sect:non-scalar:euclidean}
For a scalar particle, one can define the probability amplitude
$\psixl$ for the particle to be at position $x$ at the point $\lambda$
in its path \cite{seidewitz06a, stueckelberg41, stueckelberg42}. For a
non-scalar particle, this can be extended to a probability amplitude
$\psixLl$ for the particle to be in the Poincar\'{e} configuration
$(x,\Lambda)$, at the point $\lambda$ in its path. The transition
amplitude given in \eqn{eqn:A1} acts as a propagation kernel for
$\psixLl$:
\begin{equation*}
\psixLl = \intfour \xz\, \intsix \Lambdaz\,
\kersym(x-\xz,\Lambda\Lambdaz^{-1};\lambda-\lambdaz)
\psilz{\xz, \Lambdaz} \,.
\end{equation*}
The Euclidean version of this equation has an identical form, but in
terms of Euclidean configuration space quantities:
\begin{equation} \label{eqn:B1}
\psixLEl = \intfour \xEz\, \intsix \LambdaEz\,
\kersym(\xE-\xEz,\LambdaE\LambdaEz^{-1};\lambda-\lambdaz)
\psixLElz \,.
\end{equation}
Using \eqn{eqn:A5a}, substitute into \eqn{eqn:B1} the Euclidean scalar
kernel (as in \eqn{eqn:A5b}, but with a leading factor of $\mi$) and
the $SO(4)$ kernel (\eqn{eqn:A6}), giving
\begin{multline} \label{eqn:B1a}
\psixLEl = \sum_{\ell_{A},\ell_{B}}
\intfour \xEz\, \intsix \LambdaEz\, \\
\kersym^{(\ell_{A},\ell_{B})}(\xE-\xEz;\lambda-\lambdaz)
\chi^{(\ell_{A},\ell_{B})}(\LambdaE\LambdaEz^{-1})
\psixLElz \,,
\end{multline}
where
\begin{equation*}
\kersym^{(\ell_{A},\ell_{B})}(\xE-\xEz;\lambda-\lambdaz)
\equiv \mi(2\pi)^{-4}\intfour \pE\, \me^{\mi\pE\cdot(\xE-\xEz)}
\me^{-\mi(\lambda-\lambdaz)
(\pE^{2}+m^{2}+\Delta m_{\ell_{A}}^{2}+\Delta m_{\ell_{B}}^{2})}\,.
\end{equation*}
Since the group characters provide a complete set of orthogonal
functions \cite{weyl50}, the function $\psi(\xEz,\LambdaEz;\lambdaz)$
can be expanded as
\begin{equation*}
\psixLElz = \sum_{\ell_{A},\ell_{B}}
\chi^{(\ell_{A},\ell_{B})}(\LambdaEz)
\psi^{(\ell_{A},\ell_{B})}(\xEz;\lambdaz) \,.
\end{equation*}
Substituting this into \eqn{eqn:B1a} and using
\begin{equation*}
\chi^{(\ell_{A},\ell_{B})}(\LambdaE) = \intsix \LambdaEz\,
\chi^{(\ell_{A},\ell_{B})}(\LambdaE\LambdaEz^{-1})
\chi^{(\ell_{A},\ell_{B})}(\LambdaEz)
\end{equation*}
(see \refcite{weyl50}) gives
\begin{equation*}
\psixLEl = \sum_{\ell_{A},\ell_{B}}
\chi^{(\ell_{A},\ell_{B})}(\LambdaE)
\psi^{(\ell_{A},\ell_{B})}(\xE;\lambda) \,,
\end{equation*}
where
\begin{equation} \label{eqn:B1b}
\psi^{(\ell_{A},\ell_{B})}(\xE;\lambda)
= \intfour \xEz\,
\kersym^{(\ell_{A},\ell_{B})}(\xE-\xEz;\lambda-\lambdaz)
\psi^{(\ell_{A},\ell_{B})}(\xEz;\lambdaz) \,.
\end{equation}
The general amplitude $\psixLEl$ can thus be expanded into a sum of
terms in the various $SU(2) \times SU(2)$ representations, the
coefficients $\psi^{(\ell_{A},\ell_{B})} (\xE;\lambdaz)$ of which each
evolve separately according to \eqn{eqn:B1b}. As is well known,
reflection symmetry requires that a real particle amplitude must
transform according to a $(\ell,\ell)$ or $(\ell_{A},\ell_{B}) \oplus
(\ell_{B},\ell_{A})$ representation. That is, the amplitude function
$\psixLEl$ must either have the form
\begin{equation*}
\psixLEl = \chi^{(\ell,\ell)}(\LambdaE)
\psi^{(\ell,\ell)} (\xE;\lambda)
\end{equation*}
or
\begin{equation*}
\psixLEl
= \chi^{(\ell_{A},\ell_{B})}(\LambdaE)
\psi^{(\ell_{A},\ell_{B})} (\xE;\lambda)
+ \chi^{(\ell_{B},\ell_{A})}(\LambdaE)
\psi^{(\ell_{B},\ell_{A})} (\xE;\lambda) \,.
\end{equation*}
Assuming one of the above two forms, shift the particle mass to
$m'^{2} = m^{2} + 2\Delta m_{\ell}^{2}$ or $m'^{2} = m^{2} + 2\Delta
m_{\ell_{A}}^{2} + 2\Delta m_{\ell_{B}}^{2}$, so that
\begin{equation*}
\psixLEl = \intfour \xEz\, \intsix \LambdaEz\,
\chi^{(L)}(\LambdaE\LambdaEz^{-1})
\kersym(\xE-\xEz;\lambda-\lambdaz) \psixLElz \,,
\end{equation*}
where $\kersym$ here is (the Euclidean version of) the scalar
propagator of \eqn{eqn:A5b}, but now for the shifted mass $m'$, and
$(L)$ is either $(\ell,\ell)$ or $(\ell_{A},\ell_{B})$. That is, the
full kernel must have the form
\begin{equation} \label{eqn:B1c}
\kersym^{(L)}(\xE-\xEz,\LambdaE\LambdaEz^{-1};\lambda-\lambdaz)
= \chi^{(L)}(\LambdaE\LambdaEz^{-1})
\kersym(\xE-\xEz;\lambda-\lambdaz) \,.
\end{equation}
As is conventional, from now on we will use four-dimensional spinor
indices for the $(1/2,0) \oplus (0,1/2)$ representation and vector
indices (also four dimensional) for the $(1,1)$ representation, rather
than the $SU(2) \times SU(2)$ indices $(\ell_{A},\ell_{B})$ (see, for
example, \refcite{weinberg95}). Let $\DLElpl$ be a matrix
representation of the $SO(4)$ group using such indices. Define
correspondingly indexed amplitude functions by
\begin{equation} \label{eqn:B2}
\lpl{\psi}(\xE;\lambda) \equiv \intsix \LambdaE\, \DLElpl \psixLEl
\end{equation}
(note the \emph{double} indexing of $\psi$ here).
These $\lpl{\psi}$ are the elements of an algebra over the $SO(4)$
group for which, given $\xE$ and $\lambda$, the $\psixLEl$ are the
\emph{components}, ``indexed'' by the group elements $\LambdaE$ (see
Section III.13 of \refcite{weyl50}). The product of two such algebra
elements is (with summation implied over repeated up and down indices)
\begin{equation*}
\begin{split}
\hilo{\psi_{1}}{l'}{\lbar}(\xE; \lambda)
\hilo{\psi_{2}}{\lbar}{l}(\xE; \lambda)
&= \intsix {\LambdaE}_{1}\, \intsix {\LambdaE}_{2}\,
\hilo{\Dmat}{l'}{\lbar}({\LambdaE}_{1})
\hilo{\Dmat}{\lbar}{l}({\LambdaE}_{2})
\psi_{1}(\xE, {\LambdaE}_{1};\lambda)
\psi_{2}(\xE, {\LambdaE}_{2}; \lambda) \\
&= \intsix \LambdaE\, \DLElpl \intsix {\LambdaE}_{1} \,
\psi_{1}(\xE, {\LambdaE}_{1}; \lambda)
\psi_{2}(\xE, {\LambdaE}_{1}^{-1}\LambdaE; \lambda) \\
&= \lpl{(\psi_{1}\psi_{2})}(\xE;\lambda) \,,
\end{split}
\end{equation*}
where the second equality follows after setting ${\LambdaE}_{2} =
{\LambdaE}_{1}^{-1}\LambdaE$ from the invariance of the integration
measure of a Lie group (see, for example, \refcite{warner83}, Section
4.11, and \refcite{weyl50}, Section III.12---this property will be
used regularly in the following), and the product components
$(\psi_{1}\psi_{2})(\xE, \LambdaE; \lambda)$ are defined to be
\begin{equation*}
(\psi_{1}\psi_{2})(\xE, \LambdaE; \lambda)
\equiv \intsix \LambdaE'\, \psi_{1}(\xE, \LambdaE'; \lambda)
\psi_{2}(\xE, \LambdaE^{\prime -1}\LambdaE; \lambda) \,.
\end{equation*}
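The relation just derived, that the product of indexed elements corresponds to group convolution of their components, is the group-algebra analogue of the ordinary convolution theorem. A toy check on the finite cyclic group $\mathbb{Z}_{n}$ (a stand-in for $SO(4)$ chosen purely for illustration; NumPy assumed), with one-dimensional representations $D^{(l)}(g) = e^{2\pi i l g/n}$:

```python
import numpy as np

# Components are values psi(g); "indexed elements" are the coefficients
# psi_hat(l) = sum_g D_l(g) psi(g).  Group convolution of components
# then becomes the product of coefficients.
n = 12
rng = np.random.default_rng(0)
psi1 = rng.normal(size=n)
psi2 = rng.normal(size=n)

# (psi1 psi2)(g) = sum_h psi1(h) psi2(h^{-1} g), as in the definition above.
conv = np.array([sum(psi1[h] * psi2[(g - h) % n] for h in range(n))
                 for g in range(n)])

def hat(psi):
    g = np.arange(n)
    return np.array([np.sum(np.exp(2j * np.pi * l * g / n) * psi)
                     for l in range(n)])

conv_err = np.max(np.abs(hat(conv) - hat(psi1) * hat(psi2)))
print(conv_err)  # approximately 0, up to roundoff
```

For the non-abelian group the coefficients become matrices and the product becomes matrix multiplication, but the structure is the same.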
Now substitute \eqn{eqn:B1} into \eqn{eqn:B2} to get
\begin{multline*}
\lpl{\psi}(\xE;\lambda)
= \intsix \LambdaE\, \intfour \xEz\, \intsix \LambdaEz\, \\
\DLElpl
\kersym(\xE-\xEz,\LambdaE\LambdaEz^{-1};\lambda-\lambdaz)
\psixLElz \,.
\end{multline*}
Changing variables $\LambdaE \to \LambdaE'\LambdaEz$ then gives
\begin{equation*}
\begin{split}
\lpl{\psi}(\xE;\lambda)
&= \intfour \xEz\,
\left[
\intsix \LambdaE'\,
\hilo{\Dmat}{l'}{\lbar}(\LambdaE')
\kersym(\xE-\xEz,\LambdaE';\lambda - \lambdaz)
\right] \\
&\qquad\qquad\qquad\qquad\qquad\qquad
\intsix \LambdaEz\, \hilo{\Dmat}{\lbar}{l}(\LambdaEz)
\psixLElz \\
&= \intfour \xEz\,
\hilo{\kersym}{l'}{\lbar}(\xE-\xEz;\lambda-\lambdaz)
\hilo{\psi}{\lbar}{l}(\xEz;\lambdaz) \,,
\end{split}
\end{equation*}
where the kernel for the algebra elements $\lpl{\psi}(\xE;\lambda)$ is
thus
\begin{equation*}
\kersymlpl(\xE-\xEz;\lambda-\lambdaz)
= \intsix \LambdaE\, \DLElpl
\kersym(\xE-\xEz,\LambdaE;\lambda-\lambdaz) \,.
\end{equation*}
Substituting \eqn{eqn:B1c} into this, and using the definition of the
character for a specific representation, $\chi(\LambdaE) \equiv
\tr(\DLE)$, gives
\begin{equation*}
\kersymlpl(\xE-\xEz;\lambda-\lambdaz)
= \left[
\intsix \LambdaE\, \DLElpl
\hilo{\Dmat}{\lbar}{\lbar}(\LambdaE)
\right]
\kersym(\xE-\xEz;\lambda-\lambdaz) \,.
\end{equation*}
Use the orthogonality property
\begin{equation*}
\intsix \LambdaE\, \DLElpl \lohi{\Dmat}{\lbar'}{\lbar}(\LambdaE)
= \hilo{\delta}{l'}{\lbar'} \lohi{\delta}{l}{\lbar} \,,
\end{equation*}
where the $SO(4)$ integration measure has been normalized so that $\intsix
\LambdaE = 1$ (see \refcite{weyl50}, Section 11), to get
\begin{equation} \label{eqn:B3}
\kersymlpl(\xE-\xEz;\lambda-\lambdaz)
= \lpl{\delta}\kersym(\xE-\xEz;\lambda-\lambdaz) \,.
\end{equation}
The $SO(4)$ group propagator is thus simply $\lpl{\delta}$. This is
not the form expected for the $SO(3,1)$ Lorentz group propagator,
as anticipated. However, as argued at the end
of \sect{sect:non-scalar:propagator}, the propagator restricted to the
compact $SO(3)$ subgroup of $SO(3,1)$ \emph{is} expected to have the
same form as for the $SO(3)$ subgroup of $SO(4)$. So we turn now to
the reduction of $SO(3,1)$ to $SO(3)$. | 3,435 | 35,173 | en |
train | 0.4996.6 | \section{Spin} \label{sect:non-scalar:spin}
In traditional relativistic quantum mechanics, the Lorentz-group
dependence of non-scalar states is reduced to a rotation representation
that is amenable to interpretation as the intrinsic particle spin.
Since, in the usual approach, physical states are considered to have
on-shell momentum, it is natural to use the 3-momentum as the vector
around which the spin representation is induced, using Wigner's
classic ``little group'' argument \cite{wigner39}.
However, in the spacetime path approach used here, the fundamental
states are not naturally on-shell; rather, the on-shell states are
given as the time limits of off-shell states \cite{seidewitz06a}.
Further, there are well-known issues with the localization of on-shell
momentum states \cite{newton49, hegerfeldt74}. Therefore, instead of
assuming on-shell states to start, we will adopt the approach of
\refcite{piron78, horwitz82}, in which the spin representation is
induced about an arbitrary timelike vector. This will allow for a
straightforward generalization of the interpretation obtained in the
spacetime path formalism for the scalar case \cite{seidewitz06a}.
First, define the probability amplitudes $\lpl{\psi}(x;\lambda)$ for a
given Lorentz group representation similarly to the correspondingly
indexed amplitudes for $SO(4)$ representations from
\sect{sect:non-scalar:euclidean}. Corresponding to such amplitudes,
define a \emph{set} of ket vectors $\ketpsi\lol$, with a \emph{single}
Lorentz-group representation index. The $\ketpsi\lol$ define a
\emph{vector bundle} (see, for example, \refcite{frankel97}), of the
same dimension as the Lorentz-group representation, over the
scalar-state Hilbert space.
The basis position states for this vector bundle then have the form
$\ketxll$, such that
\begin{equation*}
\lpl{\psi}(x;\lambda)
= \gmat^{l'\lbar}\,{}_{\lbar}\innerxlpsi\lol \,,
\end{equation*}
with summation assumed over repeated upper and lower indices and
$\gmat$ being the invariant matrix of a given Lorentz group
representation such that
\begin{equation*}
\Dmat\adj \gmat \Dmat = \Dmat \gmat \Dmat\adj = \gmat \,,
\end{equation*}
for any member $\Dmat$ of the representation, where $\Dmat\adj$ is the
Hermitian transpose of the matrix $\Dmat$. For the scalar
representation, $\gmat$ is $1$, for the (Weyl) spinor representation
it is the Dirac matrix $\beta$ and for the vector representation it is
the Minkowski metric $\eta$.
In the following, $\gmat$ will be used (usually implicitly) to
``raise'' and ``lower'' group representation indices. For instance,
\begin{equation*}
\lpbraxl \equiv \gmat^{l'l}\,{}\lol\braxl \,,
\end{equation*}
so that
\begin{equation} \label{eqn:C0}
\lpl{\psi}(x;\lambda) = \hilp\innerxlpsi\lol \,.
\end{equation}
The states $\ketxll$ are then normalized so that
\begin{equation} \label{eqn:C0d}
\hilp\inner{x';\lambda}{x;\lambda}\lol
= \lpl{\delta}\, \delta^{4}(x' - x) \,,
\end{equation}
that is, they are orthogonal at equal $\lambda$.
Consider an arbitrary Lorentz transformation $M$. Since $\psixLl$ is a
scalar, it should transform as $\psipxLl = \psil{M^{-1}x',
M^{-1}\Lambda'}$. In terms of algebra elements,
\begin{equation} \label{eqn:C0e}
\begin{split}
\lpl{\psi'}(x';\lambda)
&= \intsix \Lambda'\, \Dmatlpl(\Lambda')
\psil{M^{-1}x', M^{-1}\Lambda'} \\
&= \intsix \Lambda\,
\hilo{\Dmat}{l'}{\lbar'}(M)
\hilo{\Dmat}{\lbar'}{l}(\Lambda)
\psil{M^{-1}x', \Lambda} \\
&= \hilo{\Dmat}{l'}{\lbar'}(M)
\hilo{\psi}{\lbar'}{l}(M^{-1}x'; \lambda) \,.
\end{split}
\end{equation}
Let $\UL$ denote the unitary operator on Hilbert space corresponding
to the Lorentz transformation $\Lambda$. Then, from \eqn{eqn:C0},
\begin{equation*}
\lpl{\psi'}(x';\lambda) = \hilp\inner{x';\lambda}{\psi'}\lol
= \hilp\bral{x'} \UL \ketpsi\lol \,.
\end{equation*}
This and \eqn{eqn:C0e} imply that
\begin{equation*}
\UL^{-1} \ketll{x'} = \ketllp{\Lambda^{-1}x'}\,
\lpl{[\DL^{-1}]} \,,
\end{equation*}
or
\begin{equation} \label{eqn:C0f}
\UL \ketxll = \ketllp{\Lambda x}\, \DLlpl \,.
\end{equation}
Thus, the $\ketxll$ are localized position states that transform
according to a representation of the Lorentz group.
Now, for any future-pointing, timelike, unit vector $n$ ($n^{2} = -1$
and $n^{0} > 0$) define the standard Lorentz transformation
\begin{equation*}
L(n) \equiv R(\threen) B(|\threen|) R^{-1}(\threen) \,,
\end{equation*}
where $R(\threevec{n})$ is a rotation that takes the $z$-axis into the
direction of $\threevec{n}$ and $B(|\threevec{n}|)$ is a boost of
velocity $|\threevec{n}|$ in the $z$ direction. Then $n = L(n) e$,
where $e \equiv (1, 0, 0, 0)$.
Define the \emph{Wigner rotation} for $n$ and an arbitrary Lorentz
transformation $\Lambda$ to be
\begin{equation} \label{eqn:C1}
\WLn \equiv L(\Lambda n)^{-1} \Lambda L(n) \,,
\end{equation}
such that $\WLn e = e$. That is, $\WLn$ is a member of the
\emph{little group} of transformations that leave $e$ invariant. Since
$e$ is along the time axis, its little group is simply the rotation
group $SO(3)$ of the three space axes.
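Both constructions are easy to exercise numerically. The following Python sketch (our own illustration, not part of the formalism; function names and the test vectors are ours, and the Rodrigues construction is one conventional choice for $R(\threen)$) builds $L(n)$ and checks that $L(n)e = n$ and that $\WLn$ fixes $e$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, signature (-,+,+,+)
e = np.array([1.0, 0.0, 0.0, 0.0])    # unit vector along the time axis

def rot_z_to(nhat):
    """3x3 rotation taking the z-axis into the unit 3-vector nhat (Rodrigues)."""
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(z, nhat), z @ nhat
    if np.allclose(v, 0.0):
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def standard_L(n):
    """L(n) = R(n) B(|n_vec|) R(n)^{-1} for future-pointing timelike unit n."""
    m = np.linalg.norm(n[1:])
    B = np.eye(4)                     # boost along z: cosh = n^0, sinh = |n_vec|
    B[0, 0] = B[3, 3] = n[0]
    B[0, 3] = B[3, 0] = m
    R = np.eye(4)
    if m > 0:
        R[1:, 1:] = rot_z_to(n[1:] / m)
    return R @ B @ np.linalg.inv(R)

def wigner(Lam, n):
    """W(Lam, n) = L(Lam n)^{-1} Lam L(n), as in eqn (C1)."""
    return np.linalg.inv(standard_L(Lam @ n)) @ Lam @ standard_L(n)

# a future-pointing timelike unit vector (n^2 = -1, n^0 > 0)
nvec = np.array([0.3, -0.4, 1.2])
n = np.concatenate(([np.sqrt(1.0 + nvec @ nvec)], nvec))

# an arbitrary Lorentz transformation: x-boost composed with a y-rotation
ch, sh = np.cosh(0.7), np.sinh(0.7)
Bx = np.array([[ch, sh, 0, 0], [sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
cy, sy = np.cos(0.5), np.sin(0.5)
Ry = np.array([[1, 0, 0, 0], [0, cy, 0, sy], [0, 0, 1, 0], [0, -sy, 0, cy]])
Lam = Bx @ Ry

L = standard_L(n)
W = wigner(Lam, n)
assert np.allclose(L.T @ eta @ L, eta)  # L(n) is a Lorentz transformation
assert np.allclose(L @ e, n)            # L(n) e = n
assert np.allclose(W @ e, e)            # W is in the little group of e
assert np.allclose(W.T @ W, np.eye(4))  # and acts as a spatial rotation
```

The last assertion reflects the fact that a Lorentz transformation fixing $e$ is necessarily block diagonal, with an orthogonal $3\times 3$ spatial block.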
Substituting the transformation
\begin{equation*}
\Lambda = L(\Lambda n) \WLn L(n)^{-1} \,,
\end{equation*}
into \eqn{eqn:C0f} gives
\begin{equation*}
\UL \ketxll = \ketl{\Lambda x}\lolp\,
\lpl{\left[
\Dmat \left(
L(\Lambda n) \WLn L(n)^{-1}
\right)
\right]} \,.
\end{equation*}
Defining
\begin{equation} \label{eqn:C2}
\ketlml{x,n} \equiv \ketxllp \lpl{[\Lmat(n)]} \,,
\end{equation}
where $\Lmat(n) \equiv \Dmat(L(n))$, we see that $\ketlml{x,n}$
transforms under $\UL$ as
\begin{equation} \label{eqn:C3}
\UL \ketlml{x,n}
= \ketlmlp{\Lambda x, \Lambda n}\,
\lpl{\left[
\Dmat \left( \WLn \right)
\right]} \,,
\end{equation}
that is, according to the Lorentz representation subgroup given by
$\Dmat(\WLn)$, which is isomorphic to some representation of the
rotation group.
The irreducible representations of the rotation group (or, more
exactly, its covering group $SU(2)$) are just the spin
representations, with members given by matrices $\Dsps$, where the
$\sigma$ are spin indices. Let $\ketpsi\los$ be a member of a Hilbert
space vector bundle indexed by spin indices. Then there is a linear,
surjective mapping from $\ketpsi\lol$ to $\ketpsi\los$ given by
\begin{equation*}
\ketpsi\los = \ketpsi\lol \uls \,,
\end{equation*}
where
\begin{equation} \label{eqn:C4}
(\ulspt)^{*} \uls = \hilo{\delta}{\sigma'}{\sigma} \,.
\end{equation}
The isomorphism between the rotation subgroup of the Lorentz
group and the rotation group then implies that, for any
rotation $W$, for all $\ketpsi\lol$,
\begin{equation*}
\ketpsi\lolp \ulpsp \sps{[D(W)]} = \ketpsi\lolp \lpl{[\Dmat(W)]} \uls
\end{equation*}
(with summation implied over repeated $\sigma$ indices, as well as
$l$ indices) or
\begin{equation} \label{eqn:C5}
\ulpsp \sps{[D(W)]} = \lpl{[\Dmat(W)]} \uls \,,
\end{equation}
where $D(W)$ is the spin representation matrix corresponding to $W$.
Define
\begin{equation} \label{eqn:C6}
\ketls{x,n} \equiv \ketlml{x,n}\, \uls \,.
\end{equation}
Substituting from \eqn{eqn:C2} gives
\begin{equation} \label{eqn:C7}
\ketls{x,n} = \ketxll\, \uls(n) \,,
\end{equation}
where
\begin{equation} \label{eqn:C8}
\uls(n) \equiv \hilo{[\Lmat(n)]}{l}{l'}\,
\ulps \,.
\end{equation}
Then, under a Lorentz transformation $\Lambda$, using \eqns{eqn:C3}
and \eqref{eqn:C5},
\begin{equation*}
\begin{split}
\UL \ketls{x,n}
&= \ketlmlp{\Lambda x, \Lambda n}\, \lpl{[\Dmat(\WLn)]} \uls \\
&= \ketlmlp{\Lambda x, \Lambda n}\, \hilo{u}{l'}{\sigma'}
\sps{[D(\WLn)]} \\
&= \ketlsp{\Lambda x, \Lambda n}\, \sps{[D(\WLn)]} \,,
\end{split}
\end{equation*}
that is, $\ketls{x,n}$ transforms according to the appropriate spin
representation.
Now consider a past-pointing $n$ ($n^{2} = -1$ and $n^{0} < 0$). In
this case, $-n$ is future pointing so that $-n = L(-n)e$, or $n =
L(-n)(-e)$. Taking $L(-n)$ to be the standard Lorentz transformation
for past-pointing $n$, it is thus possible to construct spin states
in terms of the future-pointing $-n$. However, since the spatial
part of $n$ is also reversed in $-n$, it is conventional to consider
the spin sense reversed, too. Therefore, define
\begin{equation} \label{eqn:C9}
\vls(n) \equiv (-1)^{j+\sigma} \hilo{u}{l}{-\sigma}(-n) \,,
\end{equation}
for a spin-$j$ representation, and, for past-pointing $n$, take
\begin{equation*}
\ketls{x,n} = \ketxll\, \vls(n) \,.
\end{equation*}
The matrices $\uls$ and $\vls$ are the same as the spin coefficient
functions in Weinberg's formalism in the context of traditional field
theory \cite{weinberg64a} (see also Chapter 5 of
\refcite{weinberg95}). Note that, from \eqn{eqn:C5}, using
\eqn{eqn:C1},
\begin{equation*}
\ulpsp \sps{[D(\WLn)]}
= \lpl{[\Dmat(\WLn)]} \uls
= \lpl{[\Lmat(\Lambda n)^{-1} \DL \Lmat(n)]} \uls \,,
\end{equation*}
so, using \eqn{eqn:C8},
\begin{equation} \label{eqn:C10}
\ulsp(\Lambda n) \sps{[D(\WLn)]} = \lpl{[\DL]} \uls(n) \,.
\end{equation}
Using this with \eqn{eqn:C9} gives
\begin{equation*}
\lpl{[\DL]} \vls(n)
= (-1)^{\sigma-\sigma'} \vlpsp(\Lambda n)
\hilo{[D(\WLn)]}{-\sigma'}{-\sigma}\,.
\end{equation*}
Since
\begin{equation*}
(-1)^{\sigma - \sigma'} \hilo{D(W)}{-\sigma'}{-\sigma}
= [\sps{D(W)}]^{*}
\end{equation*}
(which can be derived by integrating the infinitesimal case), this
gives,
\begin{equation} \label{eqn:C11}
\vlpsp(\Lambda n) [\sps{D(\WLn)}]^{*}
= \lpl{[\DL]} \vls(n) \,.
\end{equation}
As shown by Weinberg \cite{weinberg64a, weinberg95}, \eqns{eqn:C10}
and \eqref{eqn:C11} can be used to completely determine the $u$ and
$v$ matrices, along with the usual relationship of the Lorentz group
scalar, spinor and vector representations to the rotation group
spin-0, spin-1/2 and spin-1 representations. | 3,663 | 35,173 | en |
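For the spin-1/2 representation, the conjugation identity above is easy to confirm numerically. In the sketch below (our own illustration), $D(W)$ is taken in the standard $SU(2)$ parametrization $\exp(-\mi\theta\,\hat{a}\cdot\vec{\sigma}/2)$, and the spin indices $\pm 1/2$ are mapped to matrix rows $0,1$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def d_half(theta, axis):
    """Spin-1/2 rotation matrix D(W) = exp(-i theta (axis . sigma)/2)."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    ns = a[0] * sx + a[1] * sy + a[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * ns

D = d_half(0.83, [0.3, -0.5, 0.8])   # an arbitrary rotation
row = {0.5: 0, -0.5: 1}              # spin index sigma -> matrix row/column

for sp in (0.5, -0.5):               # sigma'
    for s in (0.5, -0.5):            # sigma
        lhs = (-1) ** int(s - sp) * D[row[-sp], row[-s]]
        assert np.isclose(lhs, np.conj(D[row[sp], row[s]]))
```

The same relation holds for any rotation angle and axis, since it is preserved under matrix multiplication once it holds for the generators.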
train | 0.4996.7 | Define
\begin{equation} \label{eqn:C6}
\ketls{x,n} \equiv \ketlml{x,n}\, \uls \,.
\end{equation}
Substituting from \eqn{eqn:C2} gives
\begin{equation} \label{eqn:C7}
\ketls{x,n} = \ketxll\, \uls(n) \,.
\end{equation}
where
\begin{equation} \label{eqn:C8}
\uls(n) \equiv \hilo{[\Lmat(n)]}{l}{l'}\,
\ulps \,.
\end{equation}
Then, under a Lorentz transformation $\Lambda$, using \eqns{eqn:C3}
and \eqref{eqn:C5},
\begin{equation*}
\begin{split}
\UL \ketls{x,n}
&= \ketlmlp{\Lambda x, \Lambda n}\, \lpl{[\Dmat(\WLn)]} \uls \\
&= \ketlmlp{\Lambda x, \Lambda n}\, \hilo{u}{l'}{\sigma'}
\sps{[D(\WLn)]} \\
&= \ketlsp{\Lambda x, \Lambda n}\, \sps{[D(\WLn)]} \,,
\end{split}
\end{equation*}
that is, $\ketls{x,n}$ transforms according to the appropriate spin
representation.
Now consider a past-pointing $n$ ($n^{2} = -1$ and $n^{0} < 0$). In
this case, $-n$ is future pointing so that $-n = L(-n)e$, or $n =
L(-n)(-e)$. Taking $L(-n)$ to be the standard Lorentz transformation
for past-pointing $n$, it is thus possible to construct spin states
in terms of the future-pointing $-n$. However, since the spacial
part of $n$ is also reversed in $-n$, it is conventional to consider
the spin sense reversed, too. Therefore, define
\begin{equation} \label{eqn:C9}
\vls(n) \equiv (-1)^{j+\sigma} \hilo{u}{l}{-\sigma}(-n) \,,
\end{equation}
for a spin-$j$ representation, and, for past-pointing $n$, take
\begin{equation*}
\ketls{x,n} = \ketxll\, \vls(n) \,.
\end{equation*}
The matrices $\uls$ and $\vls$ are the same as the spin coefficient
functions in Weinberg's formalism in the context of traditional field
theory \cite{weinberg64a} (see also Chapter 5 of
\refcite{weinberg95}). Note that, from \eqn{eqn:C5}, using
\eqn{eqn:C1},
\begin{equation*}
\ulpsp \sps{[D(\WLn)]}
= \lpl{[\Dmat(\WLn)]} \uls
= \lpl{[\Lmat(\Lambda n)^{-1} \DL \Lmat(n)]} \uls \,,
\end{equation*}
so, using \eqn{eqn:C8},
\begin{equation} \label{eqn:C10}
\ulsp(\Lambda n) \sps{[D(\WLn)]} = \lpl{[\DL]} \uls(n) \,.
\end{equation}
Using this with \eqn{eqn:C9} gives
\begin{equation*}
\lpl{[\DL]} \vls(n)
= (-1)^{\sigma-\sigma'} \vlpsp(\Lambda n)
\hilo{[D(\WLn)]}{-\sigma'}{-\sigma}\,.
\end{equation*}
Since
\begin{equation*}
(-1)^{\sigma - \sigma'} \hilo{D(W)}{-\sigma'}{-\sigma}
= [\sps{D(W)}]^{*}
\end{equation*}
(which can be derived by integrating the infinitesimal case), this
gives,
\begin{equation} \label{eqn:C11}
\vlpsp(\Lambda n) [\sps{D(\WLn)}]^{*}
= \lpl{[\DL]} \vls(n) \,.
\end{equation}
As shown by Weinberg \cite{weinberg64a, weinberg95}, \eqns{eqn:C10}
and \eqref{eqn:C11} can be used to completely determine the $u$ and
$v$ matrices, along with the usual relationship of the Lorentz group
scalar, spinor and vector representations to the rotation group
spin-0, spin-1/2 and spin-1 representations.
Since, from \eqns{eqn:C4} and \eqref{eqn:C8},
\begin{equation*}
\begin{split}
\ulspt(n)^{*}\uls(n) &= [\lohi{\Lmat(n)}{l}{\lbar'}]^{*}
(\lohi{u}{\lbar'}{\sigma'})^{*}
\hilo{[\Lmat(n)]}{l}{\lbar}\,
\hilo{u}{\lbar}{\sigma} \\
&= (\lohi{u}{\lbar'}{\sigma'})^{*}
\hilo{[\Lmat(n)^{-1}]}{\lbar'}{l}
\hilo{[\Lmat(n)]}{l}{\lbar}\,
\hilo{u}{\lbar}{\sigma} \\
&= (\lohi{u}{\lbar}{\sigma'})^{*}
\hilo{u}{\lbar}{\sigma} \\
&= \sps{\delta} \,,
\end{split}
\end{equation*}
\eqns{eqn:C0d} and \eqref{eqn:C7} give
\begin{equation} \label{eqn:C12a}
\hisp\inner{x',n;\lambda}{x,n;\lambda}\los
= \sps{\delta} \delta^{4}(x'-x)
\end{equation}
(and similarly for past-pointing $n$ with $\vls$), so that, for given $n$
and $\lambda$, the $\ketls{x,n}$ form an orthogonal basis. However,
for different $\lambda$, the inner product is
\begin{equation} \label{eqn:C12b}
\hisp\inner{x,n;\lambda}{\xz,n;\lambdaz}\los
= \kernelsps \,,
\end{equation}
where $\kernelsps$ is the kernel for the rotation group. As previously
argued, this should have the same form as the Euclidean kernel of
\eqn{eqn:B3}, restricted to the rotation subgroup of $SO(4)$. That is
\begin{equation}
\kernelsps = \sps{\delta} \kerneld \,.
\end{equation}
As in \eqn{eqn:A1a}, the propagator is given by integrating the
kernel over $\lambda$:
\begin{equation*}
\propsps = \sps{\delta} \prop \,,
\end{equation*}
where (using \eqn{eqn:A5b})
\begin{equation*}
\prop = \int_{\lambdaz}^{\infty} \dl\, \kerneld
= -\mi(2\pi)^{-4}\intfour p\,
\frac{\me^{\mi p\cdot(x-\xz)}}
{p^{2}+m^{2}-\mi\epsilon} \,,
\end{equation*}
the usual Feynman propagator \cite{seidewitz06a}. Defining
\begin{equation*}
\kets{x,n} \equiv \int_{\lambdaz}^{\infty} \dl\, \ketls{x,n}
\end{equation*}
then gives
\begin{equation} \label{eqn:C13}
\hisp\inner{x,n}{\xz,n;\lambdaz}\los = \propsps \,.
\end{equation}
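The $\lambda$-integration that produces the Feynman propagator acts mode by mode in momentum space through the elementary regularized integral $\int_{0}^{\infty}\dif\lambda\, \me^{-\mi(a-\mi\epsilon)\lambda} = -\mi/(a-\mi\epsilon)$, with $a = p^{2}+m^{2}$ for each mode. A quick numerical check of this identity for a single mode (our own sketch; the values of $a$ and $\epsilon$ are arbitrary):

```python
import numpy as np

a, eps = 1.3, 0.1                          # one momentum mode: a = p^2 + m^2
lam = np.linspace(0.0, 400.0, 400_001)     # the regulator makes the tail decay
integrand = np.exp(-1j * (a - 1j * eps) * lam)

# trapezoidal quadrature (kept explicit for portability)
dlam = lam[1] - lam[0]
numeric = dlam * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

exact = -1j / (a - 1j * eps)
assert np.allclose(numeric, exact, atol=1e-4)
```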
Finally, we can inject the spin-representation basis states
$\ketls{x,n}$ back into the Lorentz group representation by
\begin{equation*}
\ketll{x,n} \equiv \ketls{x,n}\ulst(n)^{*} \,,
\end{equation*}
(and similarly for past-pointing $n$ with $\vlst$). Substituting
\eqn{eqn:C7} into this gives
\begin{equation} \label{eqn:C13a}
\ketll{x,n} = \ketxllp\Plpl(n) \,,
\end{equation}
where
\begin{equation} \label{eqn:C14}
\Plpl(n) \equiv \ulps(n)\ulst(n)^{*} = \vls(n)\vlst(n)^{*}
\end{equation}
(the last equality following from \eqn{eqn:C9}). Using
\eqns{eqn:C12a} and \eqref{eqn:C12b}, the kernel for these states is
\begin{equation*}
\hilp\inner{x,n;\lambda}{\xz,n;\lambdaz}\lol
= \Plpl(n) \kerneld \,.
\end{equation*}
However, using \eqns{eqn:C10} and \eqref{eqn:C11}, it can be shown
that the $\ketll{x,n}$ transform like the $\ketxll$:
\begin{equation*}
\UL \ketll{x,n}
= \ketllp{\Lambda x, \Lambda n}\, \Dmatlpl(\Lambda) \,.
\end{equation*}
Taking
\begin{equation*}
\ket{x,n}\lol \equiv \int_{\lambdaz}^{\infty} \dl\, \ketll{x,n}
\end{equation*}
and using \eqn{eqn:C13} gives the propagator
\begin{equation} \label{eqn:C15}
\hilp\inner{x,n}{\xz,n;\lambdaz}\lol = \Plpl(n) \prop \,.
\end{equation}
Now, the $\ketll{x,n}$ do not span the full Lorentz group Hilbert
space vector bundle of the $\ketxll$, but they do span the subspace
corresponding to the rotation subgroup. Therefore, using
\eqn{eqn:C13a} and the idempotency of $\Plpl(n)$ as a projection
matrix,
\begin{equation} \label{eqn:C16}
\begin{split}
\ket{x,n}\lol &= \intfour \xz\,
\hilp\inner{\xz,n;\lambdaz}{x,n}\lol
\ketlz{\xz,n}\lolp \\
&= \intfour \xz\, \Plpl(n)\prop^{*}
\hilo{P}{\lbar'}{l'}(n) \ketxlz_{\lbar'} \\
&= \intfour \xz\, \Plpl(n)\prop^{*} \ketxlzlp \,.
\end{split}
\end{equation} | 2,785 | 35,173 | en |
train | 0.4996.8 | \section{Particles and Antiparticles}
\label{sect:non-scalar:antiparticles}
Because of \eqn{eqn:C13}, the states $\kets{x,n}$ allow for a
straightforward generalization of the treatment of particles and
antiparticles from \refcite{seidewitz06a} to the non-scalar case. As in
that treatment, consider particles to propagate \emph{from} the past
\emph{to} the future while antiparticles propagate from the
\emph{future} into the \emph{past} \cite{stueckelberg41,
stueckelberg42, feynman49}. Therefore, postulate non-scalar particle
states $\ketans{x}$ and antiparticle states $\ketrns{x}$ as follows.
\begin{postulate}
Normal particle states $\ketans{x}$ are such that
\begin{equation*}
\hisp\inner{\adv{x},n}{\xz,n;\lambdaz}\los
= \thetaax \propsps
= \thetaax \propasps \,,
\end{equation*}
and antiparticle states $\ketrns{x}$ are such that
\begin{equation*}
\hisp\inner{\ret{x},n}{\xz,n;\lambdaz}\los
= \thetarx \propsps
= \thetarx \proprsps \,,
\end{equation*}
where $\theta$ is the Heaviside step function, $\theta(x) = 0$,
for $x < 0$, and $\theta(x) = 1$, for $x > 0$, and
\begin{equation*}
\proparsps = \sps{\delta} (2\pi)^{-3}
\intthree p\, (2\Ep)^{-1}
\me^{\mi[\mp\Ep(x^{0}-\xz^{0})
+\threep\cdot(\threex-\threex_{0})]} \,,
\end{equation*}
with $\Ep \equiv \sqrt{\threep^{2} + m^{2}}$.
\end{postulate}
Note that the vector $n$ used here is timelike but otherwise
arbitrary, with no commitment that it be, e.g., future-pointing for
particles and past-pointing for antiparticles.
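These postulates are consistent with closing the $p^{0}$ contour of the Feynman propagator: for $x^{0}-\xz^{0} > 0$ only the positive-energy pole contributes, yielding $\propasps$. The following numerical sketch (ours) checks this for a single $\threep$ mode, keeping the regulator $\epsilon$ finite and therefore comparing against the exact regularized pole position $\sqrt{\Ep^{2}-\mi\epsilon}$, which tends to $\Ep$ as $\epsilon \to 0$:

```python
import numpy as np

E, t, eps = 2.0, 1.0, 0.1              # E = E_p, t = x^0 - x0^0 > 0
p0 = np.linspace(-300.0, 300.0, 600_001)
# one p-vector mode of the Feynman propagator: p^2 + m^2 = E^2 - (p^0)^2
integrand = np.exp(-1j * p0 * t) / (E**2 - p0**2 - 1j * eps)

dp0 = p0[1] - p0[0]
numeric = (-1j / (2 * np.pi)) * dp0 * (
    integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

pole = np.sqrt(E**2 - 1j * eps)        # regularized pole, -> E as eps -> 0
exact = np.exp(-1j * pole * t) / (2 * pole)
assert np.allclose(numeric, exact, atol=1e-3)
```

For $t > 0$ the contour closes in the lower half plane, picking up only the pole near $+\Ep$ and reproducing the $(2\Ep)^{-1}\me^{-\mi\Ep t}$ structure of $\propasps$; for $t < 0$ the opposite pole contributes.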
This division into particle and antiparticle paths depends, of course,
on the choice of a specific coordinate system in which to define the
time coordinate. However, if we take the time limit of the end point
of the path to infinity for particles and negative infinity for
antiparticles, then the particle/antiparticle distinction will be
coordinate system independent.
In taking this time limit, one cannot expect to hold the 3-position of
the path end point constant. However, for a free particle, it is
reasonable to take the particle \emph{3-momentum} as being fixed.
Therefore, consider the state of a particle or antiparticle with a
3-momentum $\threep$ at a certain time $t$.
\begin{postulate}
The state of a particle ($+$) or antiparticle ($-$) with
3-momentum $\threep$ is given by
\begin{equation*}
\ketarns{t,\threep}
\equiv (2\pi)^{-3/2} \intthree x\,
\me^{\mi(\mp\Ep t + \threep\cdot\threex)}
\ketarns{t,\threex} \,.
\end{equation*}
\end{postulate}
Now, following the derivation in \refcite{seidewitz06a}, but carrying
along the spin indices, gives
\begin{equation} \label{eqn:D1}
\begin{split}
\ketans{t,\threep} &=
(2\Ep)^{-1} \int_{-\infty}^{t} \dt_{0}\,
\ketanlzs{t_{0},\threep} \quad \text{and} \\
\ketrns{t,\threep} &=
(2\Ep)^{-1} \int_{t}^{+\infty} \dt_{0}\,
\ketrnlzs{t_{0},\threep} \,,
\end{split}
\end{equation}
where
\begin{equation} \label{eqn:D1a}
\ketarnlzs{t,\threep}
\equiv (2\pi)^{-3/2} \intthree x\,
\me^{\mi(\mp\Ep t + \threep\cdot\threex)}
\ketlzs{t,\threex,n} \,.
\end{equation}
Since
\begin{equation*}
\hisp\inner{\advret{t',\threepp},n; \lambdaz}
{\advret{t,\threep},n; \lambdaz}\los =
\sps{\delta} \delta(t'-t) \delta^{3}(\threepp - \threep) \,,
\end{equation*}
we have, from \eqn{eqn:D1},
\begin{equation*}
\hisp\inner{\advret{t,\threep},n}
{\advret{t_{0}, \threep_{0}{}},n; \lambdaz}\los =
(2\Ep)^{-1} \sps{\delta} \theta(\pm(t-t_{0}))
\delta^{3}(\threep - \threep_{0}) \,.
\end{equation*}
Defining the time limit particle and antiparticle states
\begin{equation} \label{eqn:D2}
\ketarthreepns \equiv \lim_{t \to \pm\infty}
\ketarns{t,\threep} \,,
\end{equation}
then gives
\begin{equation} \label{eqn:D3}
\hisp\inner{\advret{\threep},n}
{\advret{t_{0}, \threep_{0}{},n}; \lambdaz}\los
= (2\Ep)^{-1} \sps{\delta}
\delta^{3}(\threep - \threep_{0}) \,,
\end{equation}
for \emph{any} value of $t_{0}$.
Further, writing
\begin{equation*}
\ketarnlzs{t_{0}, \threep}
= (2\pi)^{-1/2} \me^{\mp\mi\Ep t_{0}}
\int \dif p^{0}\, \me^{\mi p^{0}t_{0}}
\ketlzs{p,n} \,,
\end{equation*}
where
\begin{equation} \label{eqn:D4}
\ketlzs{p,n} \equiv (2\pi)^{-2} \intfour x\, \me^{\mi p \cdot x}
\ketlzs{x,n}
\end{equation}
is the corresponding 4-momentum state, it is straightforward to see
from \eqn{eqn:D1} that the time limit of \eqn{eqn:D2} is
\begin{equation} \label{eqn:D5}
\ketarthreepns \equiv \lim_{t \to \pm\infty} \ketarns{t,\threep}
= (2\pi)^{1/2} (2\Ep)^{-1} \ketarnlzs{\pm\Ep,\threep} \,.
\end{equation}
Thus, a normal particle ($+$) or antiparticle ($-$) that has
3-momentum $\threep$ as $t \to \pm\infty$ is \emph{on-shell}, with
energy $\pm\Ep$. Such on-shell particles are unambiguously normal
particles or antiparticles.
For the on-shell states $\ketarthreepns$, it now becomes reasonable to
introduce the usual convention of taking the on-shell momentum vector
as the spin vector. That is, set $\npar \equiv (\pm\Ep, \threep)/m$
and define
\begin{equation*}
\varketarthreep\los \equiv \kets{\advret{\threep},\npar}
\end{equation*}
and
\begin{equation*}
\varketar{t,\threep}\los \equiv \kets{t,\advret{\threep},\npar} \,,
\end{equation*}
so that
\begin{equation*}
\varketarthreep\los =
\lim_{t\to\pm\infty} \varketar{t,\threep}\los \,.
\end{equation*}
Further, define the position
states
\begin{equation} \label{eqn:D6}
\begin{split}
\varketax\lol
&\equiv (2\pi)^{-3/2}\intthree p\,
\me^{\mi(\Ep x^{0} - \threep\cdot\threex)}
\varketa{x^{0},\threep}\los \ulst(\npa)^{*}
\text{ and } \\
\varketrx\lol
&\equiv (2\pi)^{-3/2}\intthree p\,
\me^{\mi(-\Ep x^{0} - \threep\cdot\threex)}
\varketr{x^{0},\threep}\los \vlst(\npr)^{*} \,.
\end{split}
\end{equation}
Then, working the previous derivation backwards gives
\begin{equation*}
\hilp(\advret{x}\ketxlzl = \thetaarx \proparlpl \,,
\end{equation*}
where
\begin{equation*}
\proparlpl \equiv
(2\pi)^{-3} \intthree p\, \Plpl(\npar)
(2\Ep)^{-1} \me^{\mi[\pm\Ep (x^{0}-\xz^{0}) -
\threep\cdot(\threex-\threex_{0})]} \,.
\end{equation*}
Now, it is shown in \refcites{weinberg64a, weinberg95} that the
covariant non-scalar propagator
\begin{equation*}
\proplpl = -\mi(2\pi)^{-4} \intfour p\, \Plpl(p/m)
\frac{\me^{\mi p\cdot(x-\xz)}}{p^{2}+m^{2}-\mi\varepsilon} \,,
\end{equation*}
in which $\Plpl(p/m)$ has the polynomial form of $\Plpl(n)$, but $p$
is not constrained to be on-shell, can be decomposed into
\begin{equation*}
\proplpl = \thetaax\propalpl + \thetarx\proprlpl
+ \Qlpl\left(-\mi\pderiv{}{x}\right)
\mi\delta^{4}(x-\xz) \,,
\end{equation*}
where the form of $\Qlpl$ depends on any non-linearity of $\Plpl(p/m)$
in $p^{0}$. Then, defining
\begin{equation*}
\varketx\lol \equiv \intfour \xz\, \proplpl^{*} \ketxlz\lolp \,,
\end{equation*}
$\varketax\lol$ and $\varketrx\lol$ can be considered as a
particle/antiparticle partitioning of $\varketx\lol$, in a similar
way as the partitioning of $\ket{x,n}\los$ into $\keta{x,n}\los$ and
$\ketr{x,n}\los$:
\begin{equation*}
\begin{split}
\thetaarx\hilp(x\ketxlzl &= \thetaarx \proplpl \\
&= \thetaarx \proparlpl \\
&= \hilp(\advret{x}\ketxlzl \,.
\end{split}
\end{equation*}
Because of the delta function, the term in $\Qlpl$ does not
contribute for $x \neq \xz$.
The states $\ket{x,n}\lol$ and $\varketx\lol$ both transform according
to a representation $\Dmatlpl$ of the Lorentz group, but it is
important to distinguish between them. The $\ket{x,n}\lol$ are
projections back into the Lorentz group of the states $\kets{x,n}$
defined on the rotation subgroup, in which that subgroup is obtained
by uniformly reducing the Lorentz group about the axis given by $n$.
The $\varketx\lol$, on the other hand, are constructed by
inverse-transforming from the momentum states
$\varketar{t,\threep}\los$, with each superposed state defined over a
rotation subgroup reduced along a different on-shell momentum vector.
One can further highlight the relationship of the $\varketx\lol$ to
the momentum in the position representation by the formal equation
(using \eqn{eqn:C16})
\begin{equation*}
\varketx\lol = \intfour \xz\,
\Plpl\left(
\mi m^{-1} \pderiv{}{x}
\right)
\prop^{*} \ketxlzlp
= \ket{x, \mi m^{-1} \partial/\partial x}\lol
= \Plpl\left(
\mi m^{-1} \pderiv{}{x}
\right) \ketx\lolp \,.
\end{equation*}
The $\varketx\lol$ correspond to the position states used in
traditional relativistic quantum mechanics, with associated on-shell
momentum states $\varketarthreep$. However, we will see in the next
section that the states $\ket{x,n}\lol$ provide a better basis for
generalizing the scalar probability interpretation discussed in
\refcite{seidewitz06a}. | 3,489 | 35,173 | en |
train | 0.4996.9 | \section{On-Shell Probability Interpretation}
\label{sect:non-scalar:probability}
Similarly to the scalar case \cite{seidewitz06a}, let $\HilbH^{(j,n)}$
be the Hilbert space of the $\ketnlzs{x}$ for the spin-$j$
representation of the rotation group and a specific timelike vector
$n$, and let $\HilbH^{(j,n)}_{t}$ be the subspaces spanned by the
$\ketnlzs{t,\threex}$, for each $t$, forming a foliation of
$\HilbH^{(j,n)}$. Now, from \eqn{eqn:D1a}, it is clear that the
particle and antiparticle 3-momentum states $\ketarnlzs{t,\threep}$
also span $\HilbH^{(j,n)}_{t}$. Using these momentum bases, states in
$\HilbH^{(j,n)}_{t}$ have the form
\begin{equation*}
\ketarnlzs{t, \psi}
= \intthree p\, \sps{\psi}(\threep) \ketarnlzsp{t, \threep} \,,
\end{equation*}
for matrix functions $\psi$ such that $\tr(\psi\adj\psi)$ is
integrable. Conversely, it follows from \eqn{eqn:D3} that the
probability amplitude $\sps{\psi}(\threep)$ is given by
\begin{equation} \label{eqn:E0}
\sps{\psi}(\threep) = (2\Ep)\hisp\inner{\advret{\threep},n}
{\advret{t,\psi},n; \lambdaz}\los \,.
\end{equation}
Let $\HilbH^{\prime (j,n)}_{t}$ be the space of linear functions dual
to $\HilbH^{(j,n)}_{t}$. Via \eqn{eqn:E0}, the bra states
$\his\braathreep$ can be considered as spanning subspaces
$\advret{\HilbH}^{\prime (j,n)}$ of the $\HilbH^{\prime (j,n)}_{t}$,
with states of the form
\begin{equation*}
\his\bra{\advret{\psi},n}
= \intthree p\, \lohi{\psi}{\sigma'}{\sigma}(\threep)^{*}\;
\hisp\bra{\advret{\threep},n} \,.
\end{equation*}
The inner product
\begin{equation*}
(\psi_{1},\psi_{2})
\equiv \his\inner{\advret{\psi_{1}{}},n}
{\advret{t,\psi_{2}{}},n;\lambdaz}\los
= \int \frac{\dthree p}{2\Ep}
\lohi{\psi_{1}}{\sigma'}{\sigma}(\threep)^{*}
\sps{\psi_{2}}(\threep)
\end{equation*}
gives
\begin{equation*}
(\psi,\psi)
= \int \frac{\dthree p}{2\Ep}
\sum_{\sigma'\sigma} \sqr{\sps{\psi}(\threep)}
\geq 0 \,,
\end{equation*}
so that, with this inner product, the $\HilbH^{(j,n)}_{t}$ actually
are Hilbert spaces in their own right.
Further, \eqn{eqn:D3} is a \emph{bi-orthonormality relation} with the
corresponding resolution of the identity (see \refcite{akhiezer81} and
App.\ A.8.1 of \refcite{muynk02})
\begin{equation*} \label{eqn:E1}
\intthree p\,
(2\Ep) \ketarnlzs{t, \threep}\;\his\bra{\advret{\threep},n}
= 1 \,.
\end{equation*}
The operator $(2\Ep) \ketarnlzs{t, \threep}\;\his\braar{\threep,n}$
represents the quantum proposition that an on-shell, non-scalar
particle or antiparticle has 3-momentum $\threep$.
Like the $\lpl{\psi}$ discussed in \sect{sect:non-scalar:euclidean} for
the Lorentz group, the $\sps{\psi}$ form an algebra over the rotation
group with components $\psi(\threep, B)$, where $\Bsps$ is a member
of the appropriate representation of the rotation group, such that
\begin{equation} \label{eqn:E2}
\sps{\psi}(\threep)
= \intthree B\, \Bsps \psi(\threep, B) \,,
\end{equation}
with the integration taken over the 3-dimensional rotation group.
Unlike the Lorentz group, however, components can also be reconstructed
from the $\sps{\psi}(\threep)$ by
\begin{equation} \label{eqn:E3}
\psi(\threep, B) = \beta^{-1}\hilo{(B^{-1})}{\sigma}{\sigma'}
\sps{\psi}(\threep) \,,
\end{equation}
where
\begin{equation*}
\beta \equiv \frac{1}{2j+1} \intthree B \,,
\end{equation*}
for a spin-$j$ representation, is finite because the rotation group is
closed. Plugging \eqn{eqn:E3} into the right side of \eqn{eqn:E2}
and evaluating the integral does, indeed, give $\sps{\psi}(\threep)$,
as required, because of the orthogonality property
\begin{equation*}
\intthree B\, \Bsps \hilo{(B^{-1})}{\sbar}{\sbar'}
= \beta \hilo{\delta}{\sigma'}{\sbar'} \lohi{\delta}{\sigma}{\sbar}
\end{equation*}
(see \refcite{weyl50}, Section 11). We can now adjust the group
volume measure $\dthree B$ so that $\beta = 1$.
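With the Haar measure normalized instead to unit total volume, $\beta = 1/(2j+1)$, and the orthogonality property can be checked by Monte Carlo for spin-1/2, sampling Haar-random $SU(2)$ matrices as uniformly random unit quaternions (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Haar-random SU(2): uniformly random unit quaternions (a, b, c, d)
q = rng.normal(size=(N, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
a, b, c, d = q.T
B = np.empty((N, 2, 2), dtype=complex)
B[:, 0, 0] = a + 1j * b
B[:, 0, 1] = c + 1j * d
B[:, 1, 0] = -c + 1j * d
B[:, 1, 1] = a - 1j * b
Binv = np.conj(np.swapaxes(B, 1, 2))   # B^{-1} = B^dagger for SU(2)

# Monte Carlo estimate of  int dB  B^{s'}_{s} (B^{-1})^{sbar}_{sbar'}
avg = np.einsum('nab,ncd->abcd', B, Binv) / N

beta = 1 / (2 * 0.5 + 1)               # 1/(2j+1) for j = 1/2, unit Haar volume
expected = beta * np.einsum('ad,bc->abcd', np.eye(2), np.eye(2))
assert np.allclose(avg, expected, atol=0.02)
```

The Monte Carlo error scales as $N^{-1/2}$, so the tolerance above is comfortably above the expected statistical fluctuation.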
The set of all $\psi(\threep, B)$ constructed as in \eqn{eqn:E3} forms
a subalgebra such that each $\psi(\threep, B)$ is uniquely determined
by the corresponding $\sps{\psi}(\threep)$ (see \refcite{weyl50},
pages 167ff). We can then take $\sqr{\psi(\threep, B)} =
\sqr{\hilo{(B^{-1})}{\sigma}{\sigma'}\sps{\psi}(\threep)}$ to be the
probability density for the particle or antiparticle to have
3-momentum $\threep$ and to be rotated as given by $B$ about the axis
given by the spatial part of the unit timelike 4-vector $n$. The
probability density for the particle or antiparticle in 3-momentum
space is
\begin{equation*}
\intthree B\, \sqr{\psi(\threep, B)}
= \lohi{\psi}{\sigma'}{\sigma}(\threep)^{*}
\sps{\psi}(\threep) \,,
\end{equation*}
with the normalization
\begin{equation*}
(\psi,\psi)
= \int \frac{\dthree p}{2\Ep}
\lohi{\psi}{\sigma'}{\sigma}(\threep)^{*}
\sps{\psi}(\threep)
= 1 \,.
\end{equation*}
Next, consider that $\ketnlzs{t,\threex}$ is an eigenstate of the
three-position operator $\op{\threevec{X}}$, representing a particle
localized at the three-position $\threex$ at time $t$. From
\eqn{eqn:E0}, and using the inverse Fourier transform of \eqn{eqn:D4}
with \eqn{eqn:D5}, its three momentum wave function is
\begin{equation} \label{eqn:E4}
(2\Ep)\, \hisp\inner{\advret{\threep},n}
{t,\threex;\lambdaz}\los
= (2\pi)^{-3/2} \sps{\delta}
\me^{\mi(\pm\Ep t - \threep\cdot\threex)} \,.
\end{equation}
This is just a plane wave, and it is an eigenfunction of the operator
\begin{equation*}
\me^{\pm\mi\Ep t} \mi \pderiv{}{\threep} \me^{\mp\mi\Ep t} \,,
\end{equation*}
which acts as the identity on the spin indices and is otherwise the
traditional momentum representation $\mi \partial/\partial\threep$ of
the three-position operator $\op{\threevec{X}}$, translated to time
$t$.
This result exactly parallels that of the scalar case
\cite{seidewitz06a}. Note that this is only so because of the use of
the independent vector $n$ for reduction to the rotation group,
rather than the traditional approach of using the three-momentum
vector $\threep$. Indeed, it is not even possible to define a
spin-indexed position eigenstate in the traditional approach,
because, of course, the momentum is not sharply defined for such a
state \cite{piron78, horwitz82}.
On the other hand, consider the three-position states $\varketarx\lol$
introduced at the end of \sect{sect:non-scalar:antiparticles}. Even
though these are Lorentz-indexed, they only span the rotation
subgroup. Therefore, we can form their three-momentum wave functions
in the $\his\varbra{\advret{\threep}}$ bases. Using \eqns{eqn:D6} and
\eqref{eqn:D3},
\begin{equation} \label{eqn:E5}
(2\Ep)\, \his\varinner{\advret{\threep}}{\advret{x}}\lol
= (2\pi)^{-3/2} \ulst(\np)^{*}
\me^{\mi(\pm\Ep t - \threep\cdot\threex)} \,.
\end{equation}
At $t = 0$, up to normalization factors of powers of $(2\Ep)$, this is
just the Newton-Wigner wave function for a localized particle of
non-zero spin \cite{newton49}. It is an eigenfunction of the position
operator represented as
\begin{equation} \label{eqn:E6}
\ulpspt(\np)^{*} \me^{\mi\Ep t} \mi \pderiv{}{\threep}
\me^{-\mi\Ep t} \ulsp(\np)
\end{equation}
for the particle case, with a similar expression using $\vls$ in the
antiparticle case. Other than the time translation, this is
essentially the Newton-Wigner position operator for non-zero spin
\cite{newton49}.
Note that \eqn{eqn:E4} is effectively related to \eqn{eqn:E5} by a
generalized Foldy-Wouthuysen transformation \cite{foldy50, case54}.
However, in the present approach it is \eqn{eqn:E4} that is seen to
be the primary result, with a natural separation of particle and
antiparticle states and a reasonable non-relativistic limit, just as
in the scalar case \cite{seidewitz06a}. | 2,827 | 35,173 | en |
train | 0.4996.10 | \section{Interactions} \label{sect:non-scalar:interactions}
It is now straightforward to extend the formalism to multiparticle
states and introduce interactions, quite analogously to the scalar
case \cite{seidewitz06a}. In order to allow for multiparticle states
with different types of particles, extend the position state of each
individual particle with a \emph{particle type index} $\nbase$, such
that
\begin{equation*}
\hilp\inner{x',\nbase';\lambda}{x,\nbase;\lambda}\lol
= \delta^{l'}_{l}\delta^{\nbase'}_{\nbase}\delta^{4}(x'-x) \,.
\end{equation*}
Then, construct a basis for the Fock space of multiparticle states as
sym\-me\-trized/anti\-sym\-me\-trized products of $N$ single particle
states:
\begin{multline*}
\ket{\xnliN}\lolN
\equiv (N!)^{-1/2} \sum_{\text{perms }\Perm} \delta_{\Perm}
\ket{\xni{\Perm 1};\lambda_{\Perm 1}}\loli{\Perm 1} \cdots \\
\ket{\xni{\Perm N};\lambda_{\Perm N}}\loli{\Perm N} \,,
\end{multline*}
where the sum is over permutations $\Perm$ of $1,\ldots,N$, and
$\delta_{\Perm}$ is $+1$ for permutations with an even number of
interchanges of fermions and $-1$ for an odd number of interchanges.
Define multiparticle states $\ket{\xniN}\lolN$ as similarly
sym\-me\-trized/anti\-sym\-me\-trized products of $\ketx\lol$ states.
Then,
\begin{equation} \label{eqn:F1}
\hilpN\inner{\xnpiN}{\seqN{\xnlzi}}\lolN
= \sum_{\text{perms }\Perm} \delta_{\Perm}
\prod_{i = 1}^{N}
\proplplij{\Perm i}{i} \,,
\end{equation}
where each propagator is also implicitly a function of the mass of the
appropriate type of particle. Note that the use of the same parameter
value $\lambdaz$ for the starting point of each particle path is
simply a matter of convenience. The intrinsic length of each particle
path is still integrated over \emph{separately} in $\ket{\xniN}\lolN$,
which is important for obtaining the proper particle propagator
factors in \eqn{eqn:F1}. Nevertheless, by using $\lambdaz$ as a common
starting parameter, we can adopt a similar notation simplification as
in \refcite{seidewitz06a}, defining
\begin{equation*}
\ket{\xnlziN}\lolN \equiv \ket{\seqN{\xnlzi}}\lolN \,.
\end{equation*}
It is also convenient to introduce the formalism of creation and
annihilation fields for these multiparticle states. Specifically,
define the creation field $\oppsitl(x,\nbase;\lambda)$ by
\begin{equation*}
\oppsitl(x,\nbase;\lambda)\ket{\xnliN}\lolN
= \ket{x,\nbase,\lambda;\xnliN}_{l,\listN{l}} \,,
\end{equation*}
with the corresponding annihilation field $\oppsil(x,\nbase;\lambda)$
having the commutation relation
\begin{equation*}
[\oppsilp(x',\nbase';\lambda), \oppsitl(x,\nbase;\lambdaz)]_{\mp}
= \delta^{\nbase'}_{\nbase}\propsymlpl(x'-x;\lambda-\lambdaz) \,,
\end{equation*}
where the upper $-$ is for bosons and the lower $+$ is for fermions.
Further define
\begin{equation*}
\oppsil(x,\nbase) \equiv
\int_{\lambdaz}^{\infty} \dl\, \oppsil(x,\nbase;\lambda) \,,
\end{equation*}
so that
\begin{equation*}
[\oppsilp(x',\nbase'), \oppsitl(x,\nbase;\lambdaz)]_{\mp}
= \delta^{\nbase'}_{\nbase}\propsymlpl(x'-x) \,,
\end{equation*}
which is consistent with the multi-particle inner product as given in
\eqn{eqn:F1}. Finally, as in \refcite{seidewitz06a}, define a
\emph{special adjoint} $\oppsi\dadj$ by
\begin{equation} \label{eqn:F2}
\oppsi\dadj\lol(x,\nbase) = \oppsitl(x,\nbase;\lambdaz) \text{ and }
\oppsi\dadj\lol(x,\nbase;\lambdaz) = \oppsitl(x,\nbase) \,,
\end{equation}
which allows the commutation relation to be expressed in the more
symmetric form
\begin{equation*}
[\oppsilp(x',\nbase'), \oppsi\dadj\lol(x,\nbase)]_{\mp}
= \delta^{\nbase'}_{\nbase}\propsymlpl(x'-x) \,.
\end{equation*}
We can now readily generalize the postulated interaction vertex
operator of \refcite{seidewitz06a} to the non-scalar case.
\begin{postulate}
An interaction vertex, possibly occurring at any position in
spacetime, with some number $a$ of incoming particles and some
number $b$ of outgoing particles, is represented by the operator
\begin{equation} \label{eqn:F3}
\opV \equiv g\hilpn{a}{}\loln{b} \intfour x\,
\prod_{i = 1}^{a} \oppsi\dadj_{l'_{i}}(x,\nbase'_{i})
\prod_{j = 1}^{b} \oppsi^{l_{j}}(x,\nbase_{j}) \,,
\end{equation}
where the coefficients $g\hilpn{a}{}\loln{b}$ represent the
relative probability amplitudes of various combinations of indices
in the interaction and $\oppsi\dadj$ is the special adjoint
defined in \eqn{eqn:F2}.
\end{postulate}
Given a vertex operator defined as in \eqn{eqn:F3}, the interacting
transition amplitude, with any number of intermediate interactions,
is then
\begin{multline} \label{eqn:F4}
G(\xnpiNp | \xniN)\hilpn{N'}{}\lolN \\
= \hilpn{N'}\bra{\xnpiN} \opG \ket{\xnlziN}\lolN \,,
\end{multline}
where
\begin{equation*}
    \opG \equiv \sum_{m=0}^{\infty} \frac{(-\mi)^{m}}{m!}\opV^{m}
    = \me^{-\mi\opV} \,.
\end{equation*}
Each term in this sum gives the amplitude for $m$ interactions,
represented by $m$ applications of $\opV$. The $(m!)^{-1}$ factor
accounts for all possible permutations of the $m$ identical factors
of $\opV$.
Clearly, we can also construct on-shell multiparticle states
$\ket{\parnpiN}\lospn{N'}$ and $\ket{\tparnlziN}\losN$ from the
on-shell particle and antiparticle states $\ketarthreep\los$ and
$\ketarlz{t,\threep}\los$. Using these with the operator $\opG$:
\begin{multline} \label{eqn:F5}
G(\parnpiN | \parniN)\hispn{N'}{}\losN \\
\equiv \left[ \prod_{i=1}^{N'} 2\E{\threepp_{i}} \right]
\hispn{N'}\bra{\parnpiN} \opG \ket{\tparnlziN}\losN \,,
\end{multline}
results in a sum of Feynman diagrams with the given momenta on
external legs. Note that use of the on-shell states requires
specifically identifying external lines as particles and
antiparticles. For each incoming and outgoing particle, $+$ is chosen
if it is a normal particle and $-$ if it is an antiparticle. (Note
that ``incoming'' and ``outgoing'' here are in terms of the path
evolution parameter $\lambda$, \emph{not} time.)
The inner products of the on-shell states for individual incoming and
outgoing particles with the off-shell states for interaction vertices
give the proper factors for the external lines of a Feynman diagram.
For example, the on-shell state $\keta{\threepp}\los$ is obtained in
the $+\infty$ time limit and thus represents a \emph{final} (i.e.,
outgoing in \emph{time}) particle. If the external line for this
particle starts at an interaction vertex $x$, then the line
contributes a factor
\begin{equation*}
(2\Epp) \hisp\inner{\adv{\threepp}}{x;\lambdaz}\lol
= (2\pi)^{-3/2}
\me^{\mi(+\Epp x^{0} - \threepp \cdot \threex)}
\ulspt(\threepp)^{*} \,.
\end{equation*}
For an incoming particle on an external line ending at an
interaction vertex $x'$, the factor for this line is (assuming
$x^{\prime 0} > t$)
\begin{equation*}
(2\Ep) \hilp\inner{x'}{\adv{t,\threep};\lambdaz}\los
= (2\pi)^{-3/2}
\me^{\mi(-\Ep x^{\prime 0} + \threep \cdot \threexp)}
\ulps(\threep) \,.
\end{equation*}
Note that this expression is independent of $t$, so we can take $t \to
-\infty$ and treat the particle as \emph{initial} (i.e., incoming in
time). The factors for antiparticles are similar, but with the time
sense reversed. Thus, the effect is to remove the propagator factors
from external lines, exactly in the sense of the usual LSZ reduction
\cite{lsz55}.
Now, the formulation of \eqn{eqn:F5} is still not that of the usual
scattering matrix, since the incoming state involves initial particles
but final antiparticles, and vice versa for the outgoing state. To
construct the usual scattering matrix, it is necessary to have
multi-particle states that involve either all initial particles and
antiparticles (that is, they are composed of individual asymptotic
particle states that are all consistently for $t \to -\infty$) or all
final particles and antiparticles (with individual asymptotic states
all for $t \to +\infty$). The result is a formulation in terms of the
more familiar scattering operator $\opS$, which can be expanded in a
Dyson series in terms of a time-dependent version $\opV(t)$ of the
interaction operator. The procedure for doing this is exactly
analogous to the scalar case. For details see \refcite{seidewitz06a}.
\section{Conclusion} \label{sect:conclusion}
The extension made here of the scalar spacetime path approach
\cite{seidewitz06a} begins with the argument in \sect{sect:background}
on the form of the path propagator based on Poincar\'e invariance.
This motivates the use of a path integral over the Poincar\'e group,
with both position and Lorentz group variables, for computation of the
non-scalar propagator. Once the difficulty with the non-compactness of
the Lorentz group is overcome, the development for the non-scalar case
is remarkably parallel to the scalar case.
A natural further generalization of the approach, particularly given
its potential application to quantum gravity and cosmology, would be
to consider paths in curved spacetime. Of course, in this case it is
not in general possible to construct a family of parallel paths over
the entire spacetime, as was done in \sect{sect:non-scalar:propagator}.
Nevertheless, it is still possible to consider infinitesimal
variations along a path corresponding to arbitrary coordinate
transformations. And one can certainly construct a family of
``parallel'' paths at least over any one coordinate patch on the
spacetime manifold. The implications of this for piecing together a
complete path integral will be explored in future work.
Another direction for generalization is to consider massless
particles, leading to a complete spacetime path formulation for
Quantum Electrodynamics. However, as has been shown in previous work
on relativistically parametrized approaches to QED (e.g.,
\refcite{shnerb93}), the resulting gauge symmetries need to be handled
carefully. This will likely be even more so if consideration is
further extended to non-Abelian interactions. Nevertheless, the
spacetime path approach may provide some interesting opportunities for
addressing renormalization issues in these cases \cite{seidewitz06a}.
In any case, the present paper shows that the formalism proposed in
\refcite{seidewitz06a} can naturally include non-scalar particles. This
is, of course, critical if the approach is to be given the
foundational status considered in \refcite{seidewitz06a} and the
cosmological interpretation discussed in \refcite{seidewitz06b}.
\endinput
\appendix*
\section{Evaluation of the $SO(4)$ Path Integral}
\label{app:path}
\begin{theorem*}
Consider the path integral
\begin{multline*}
\kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz)
= \euc{\zeta} \intDsix \ME\,
\delta^{6}(\ME(\lambda)\LambdaE^{-1}-I)
\delta^{6}(\ME(\lambdaz)\LambdaEz^{-1}-I) \\
\exp \left[
\mi\int^{\lambda}_{\lambdaz}\dl'\,
\frac{1}{2} \tr(\OmegaE(\lambda')
\OmegaE(\lambda')\T)
\right]
\end{multline*}
over the six dimensional group $SO(4) \sim SU(2) \times SU(2)$,
where $\OmegaE(\lambda')$ is the element of the Lie algebra $so(4)$
tangent to the path $\ME(\lambda)$ at $\lambda'$. This path integral
may be evaluated to get
  \begin{multline} \label{eqn:A1A}
    \kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz) \\
    = \sum_{\ell_{A},\ell_{B}}
        \me^{-\mi( \Delta m_{\ell_{A}}^{2}
                     + \Delta m_{\ell_{B}}^{2})
                  (\lambda - \lambdaz)}
        (2\ell_{A}+1)(2\ell_{B}+1)
        \chi^{(\ell_{A},\ell_{B})}(\LambdaE\LambdaEz^{-1}) \,,
  \end{multline}
where the summation over $\ell_{A}$ and $\ell_{B}$ is from $0$ to
$\infty$ in steps of $1/2$, $\Delta m_{\ell}^{2} = \ell(\ell+1)$
and $\chi^{(\ell_{A},\ell_{B})}$ is the group character for the
$(\ell_{A},\ell_{B})$ $SU(2) \times SU(2)$ group representation.
\end{theorem*}
\begin{proof}
Parametrize a group element $\ME$ by a six-vector $\theta$ such
that
\begin{equation*}
\ME = \exp(\sum_{i=1}^{6}\theta_{i}J_{i}) \,,
\end{equation*}
where the $J_{i}$ are $so(4)$ generators for $SO(4)$. Then
$\tr(\OmegaE\OmegaE\T) = \dot{\theta}^{2}$, where the dot denotes
differentiation with respect to $\lambda$. Dividing the six
generators $J_{i}$ into two sets of three $SU(2)$ generators, the
six-vector $\theta$ may be divided into two three-vectors
$\theta_{A}$ and $\theta_{B}$, parametrizing the two $SU(2)$
subgroups. The path integral then factors into two path integrals
over $SU(2)$:
    \begin{multline*}
      \kersym(\LambdaE\LambdaEz^{-1};\lambda-\lambdaz) \\
      = \euc{\zeta}^{1/2} \intDthree W_{A}\,
          \delta^{3}(W_{A}(\lambda)B_{A}^{-1}-I)
          \delta^{3}(W_{A}(\lambdaz)B_{A0}^{-1}-I)
          \exp \left[
                 \mi\int^{\lambda}_{\lambdaz}\dl'\,
                   \frac{1}{2} \dot{\theta}_{A}^{2}
               \right] \\
        \times \euc{\zeta}^{1/2} \intDthree W_{B}\,
          \delta^{3}(W_{B}(\lambda)B_{B}^{-1}-I)
          \delta^{3}(W_{B}(\lambdaz)B_{B0}^{-1}-I)
          \exp \left[
                 \mi\int^{\lambda}_{\lambdaz}\dl'\,
                   \frac{1}{2} \dot{\theta}_{B}^{2}
               \right] \,,
    \end{multline*}
where $\LambdaE = B_{A} \otimes B_{B}$ and $\LambdaEz = B_{A0}
\otimes B_{B0}$.
The $SU(2)$ path integrals may be computed by expanding the
exponential in group characters \cite{kleinert06,bohm87}. The
result is
    \begin{multline} \label{eqn:A2A}
      \euc{\zeta}^{1/2} \intDthree W\,
        \delta^{3}(W(\lambda)B^{-1}-I)
        \delta^{3}(W(\lambdaz)B_{0}^{-1}-I)
        \exp \left[
               \mi\int^{\lambda}_{\lambdaz}\dl'\,
                 \frac{1}{2} \dot{\theta}^{2}
             \right] \\
      = \sum_{\ell}\me^{-\mi\Delta m_{\ell}^{2}
                            (\lambda - \lambdaz)}
          (2\ell+1)
          \chi^{(\ell)}(B B_{0}^{-1}) \,,
    \end{multline}
where $\chi^{(\ell)}$ is the character for the spin-$\ell$
representation of $SU(2)$ and the result includes the correction
for integration ``on'' the group space, as given by Kleinert
\cite{kleinert06}. The full $SO(4)$ path integral is then given by
the product of the two factors of the form \eqn{eqn:A2A}, which is
just \eqn{eqn:A1A}, since \cite{weyl50}
\begin{equation*}
\chi^{(\ell_{A},\ell_{B})}(\LambdaE\LambdaEz^{-1}) =
\chi^{(\ell_{A})}(B_{A}B_{A0}^{-1})
\chi^{(\ell_{B})}(B_{B}B_{B0}^{-1}) \,.
\end{equation*}
\end{proof}
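The character expansion used in \eqn{eqn:A2A} rests on the orthogonality of the $SU(2)$ characters with respect to Haar measure. As an independent numerical check (our own sketch, not part of the original text, using the standard class-function form $\chi^{(\ell)}(\theta)=\sin((2\ell+1)\theta/2)/\sin(\theta/2)$ and the Weyl measure $(1-\cos\theta)\,\mathrm{d}\theta/(2\pi)$ on conjugacy classes):

```python
import numpy as np

def chi(l, theta):
    # SU(2) character of the spin-l representation on the conjugacy
    # class parametrized by the rotation angle theta in (0, 2*pi)
    return np.sin((2 * l + 1) * theta / 2) / np.sin(theta / 2)

m = 20000
theta = (np.arange(m) + 0.5) * 2 * np.pi / m    # midpoint grid, avoids theta = 0
weight = (1 - np.cos(theta)) / (2 * np.pi)      # Weyl (Haar) measure density
spins = [0, 0.5, 1, 1.5]
gram = np.array([[np.sum(chi(a, theta) * chi(b, theta) * weight) * 2 * np.pi / m
                  for b in spins] for a in spins])
print(np.round(gram, 4))  # approximately the 4x4 identity matrix
```

The Gram matrix of the first few characters comes out (numerically) as the identity, which is exactly the orthonormality underlying the expansion of the heat kernel in group characters.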
\endinput
\end{document}
\begin{document}
\title{On finite groups with few automorphism orbits}
\begin{abstract}
Denote by $\omega(G)$ the number of orbits of the action of $Aut(G)$ on the finite group $G$. We prove that if $G$ is a finite nonsolvable group in which $\omega(G) \leqslant 5$, then $G$ is isomorphic to one of the groups $A_5,A_6,PSL(2,7)$ or $PSL(2,8)$. We also consider the case when $\omega(G) = 6$ and show that if $G$ is a nonsolvable finite group with $\omega(G) = 6$, then either $G \simeq PSL(3,4)$ or there exists a characteristic elementary abelian $2$-subgroup $N$ of $G$ such that $G/N \simeq A_5$.
\end{abstract}
\section{Introduction}
The groups considered in the following are finite. The problem of the classification of the groups with a prescribed number of \emph{conjugacy classes} was suggested in \cite{B}. For more details on this problem we refer the reader to \cite{VS}. In this paper we consider another related invariant. Denote by $\omega(G)$ the number of orbits of the action of $Aut(G)$ on the finite group $G$. If $\omega(G) = n$, then we say that $G$ has $n$ automorphism orbits. The trivial group is the only group with $\omega(G) = 1$. It is clear that $\omega(G) = 2$ if and only if $G$ is an elementary abelian $p$-group, for some prime number $p$ \cite[3.13]{D}. In \cite{LD}, Laffey and MacHale give the following results:
\begin{itemize}
\item[(i)] Let $G$ be a finite group which is not of prime-power order. If $\omega(G) = 3$, then $|G| = p^nq$ and $G$ has a normal elementary abelian Sylow $p$-subgroup $P$, for some primes $p$, $q$, and for some integer $n > 1$. Furthermore, $p$ is a primitive root mod $q$.
\item[(ii)] If $\omega(G)\leqslant 4$ in a group $G$, then either $G$ is solvable or $G$ is isomorphic to $A_5$.
\end{itemize}
Stroppel in \cite[Theorem 4.5]{S} has shown that if $G$ is a nonabelian simple group with $\omega(G) \leqslant 5$, then $G$ is isomorphic to one of the groups $A_5$, $A_6$, $PSL(2,7)$ or $PSL(2,8)$. In the same work he suggested the following problem: \\
{\noindent}{\bf Problem.}{
(Stroppel \cite[Problem 9.9]{S}) Determine the finite nonsolvable groups $G$ in which $\omega(G) \leqslant 6$.} \\
In answer to Stroppel's question, we give a complete classification for the case $\omega(G) \leqslant 5$ in Theorem A and provide a characterization of $G$ when $\omega(G) = 6$ in Theorem B. Precisely:
\begin{thmA}\label{th.A}
Let $G$ be a nonsolvable group in which $\omega(G) \leqslant 5$. Then $G$ is isomorphic to one of the groups $A_5,A_6,PSL(2,7)$ or $PSL(2,8)$.
\end{thmA}
Using GAP, we obtained an example of a nonsolvable and non-simple group $G$ in which $\omega(G) = 6$ and $|G|=960$. Moreover, there exists a characteristic subgroup $N$ of $G$ such that $G/N \simeq A_5$, where $N$ is an elementary abelian $2$-subgroup. Actually, we will prove that this is the case for any nonsolvable non-simple group with $6$ automorphism orbits.
\begin{thmB} \label{th.B}
Let $G$ be a nonsolvable group in which $\omega(G) = 6$. Then one of the following holds:
\end{thmB}
\begin{itemize}
\item[(i)] $G \simeq PSL(3,4)$;
\item[(ii)] There exists a characteristic elementary abelian $2$-subgroup $N$ of $G$ such that $G/N \simeq A_5$.
\end{itemize}
According to Landau's result \cite[Theorem 4.31]{Rose}, for every positive integer $n$ there are only finitely many groups with exactly $n$ conjugacy classes. It is easy to see that no similar result holds for automorphism orbits. Nevertheless, using the classification of finite simple groups, Kohl has been able to prove that for every positive integer $n$ there are only finitely many nonabelian simple groups with exactly $n$ automorphism orbits \cite[Theorem 2.1]{Kohl}. This suggests the following question. \\
{\noindent}{\it Are there only finitely many nonsolvable groups with $6$ automorphism orbits?}
\section{Preliminary results}
A group $G$ is called an AT-group if all elements of the same order are conjugate under the automorphism group. The following result is a straightforward corollary of \cite[Theorem 3.1]{Z}.
\begin{lem} \label{Z.th}
Let $G$ be a nonsolvable AT-group in which $\omega(G) \leqslant 6$. Then $G$ is simple. Moreover, $G$ is isomorphic to one of the groups $A_5$, $A_6$, $PSL(2,7)$, $PSL(2,8)$ or $PSL(3,4)$.
\end{lem}
The spectrum of a group is the set of orders of its elements. Let us denote by $spec(G)$ the spectrum of the group $G$.
\begin{rem} \label{rem.spec} The maximal subgroups of $A_5$, $A_6$, $PSL(2,7)$, $PSL(2,8)$ and $PSL(3,4)$ are well known (see for instance \cite{Atlas}). Then
\begin{itemize}
\item[(i)] $spec(A_5) = \{1,2,3,5\}$;
\item[(ii)] $spec(A_6) = \{1,2,3,4,5\}$;
\item[(iii)] $spec(PSL(2,7)) = \{1,2,3,4,7\}$;
\item[(iv)] $spec(PSL(2,8)) = \{1,2,3,7,9\}$;
\item[(v)] $spec(PSL(3,4)) = \{1,2,3,4,5,7\}$.
\end{itemize}
\end{rem}
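The spectra of $A_5$ and $A_6$ listed above can be checked directly by brute force; the following Python sketch is our own illustration and is not part of the paper (the three $PSL$ cases would need a dedicated group-theory system such as GAP):

```python
from itertools import permutations

def sign(p):
    # parity of a permutation given as a tuple of 0-indexed images
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def perm_order(p):
    # order of the permutation p under composition
    e = tuple(range(len(p)))
    q, k = p, 1
    while q != e:
        q = tuple(q[i] for i in p)   # next power of p
        k += 1
    return k

def spec_alternating(n):
    # set of element orders of the alternating group A_n
    return sorted({perm_order(p) for p in permutations(range(n)) if sign(p) == 1})

print(spec_alternating(5))  # [1, 2, 3, 5]
print(spec_alternating(6))  # [1, 2, 3, 4, 5]
```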
For a group $G$ we denote by $\pi(G)$ the set of prime divisors of the orders of the elements of $G$.
Recall that a group $G$ is a characteristically simple group if $G$ has no proper nontrivial characteristic subgroups.
\begin{lem} \label{ch.simple}
Let $G$ be a nonabelian group. If $G$ is a characteristically simple group in which $\omega(G) \leqslant 6$, then $G$ is simple.
\end{lem}
\begin{proof}
Suppose that $G$ is not simple. By \cite[Theorem 1.5]{G}, there exist a nonabelian simple subgroup $H$ and an integer $k \geqslant 2$ such that
$$
G = \underbrace{H \times \ldots \times H}_{k \text{ times}}.
$$
By Burnside's Theorem \cite[p. 131]{G}, $\pi(G) =\{p_1,\ldots, p_s\}$, where $s \geqslant 3$. Then, there are elements in $G$ of order $p_ip_j$, where $i,j \in \{1,\ldots,s\}$ and $i \neq j$. Thus, $\omega(G) \geqslant 7$.
\end{proof}
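The final counting step of the proof can be illustrated numerically for $H = A_5$ and $k = 2$ (our own sketch, not part of the paper): the order of $(g,h) \in H \times H$ is the least common multiple of the orders of $g$ and $h$, and already seven distinct element orders occur, so at least seven automorphism orbits are needed.

```python
from itertools import permutations, product
from math import lcm

def sign(p):
    # parity of a permutation given as a tuple of 0-indexed images
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def perm_order(p):
    # order of the permutation p under composition
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q = tuple(q[i] for i in p)
        k += 1
    return k

# element orders of A_5, then of A_5 x A_5 via lcm of pairs
a5_orders = {perm_order(p) for p in permutations(range(5)) if sign(p) == 1}
pair_orders = {lcm(a, b) for a, b in product(a5_orders, repeat=2)}
print(sorted(pair_orders))  # [1, 2, 3, 5, 6, 10, 15]
```

Since elements of different orders lie in different orbits (and the identity forms its own orbit), this forces $\omega(A_5 \times A_5) \geqslant 7$.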
\begin{lem} \label{simple-lem}
Let $G$ be a nonsolvable group and $N$ a characteristic subgroup of $G$. Assume that $|\pi(G)| = 4$ and $N$ is isomorphic to one of the groups $A_5, A_6, PSL(2,7)$ or $PSL(2,8)$. Then $\omega(G) \geqslant 8$.
\end{lem}
\begin{proof}
Let $P$ be a Sylow $p$-subgroup of $G$, where $p \not\in \pi(N)$. Set $M = NP$. Since $p$ and $|Aut(N)|$ are coprime, we conclude that $M = N \times P$. Arguing as in the proof of Lemma \ref{ch.simple} we deduce that $\omega(G) \geqslant 8$.
\end{proof}
\begin{rem} \label{prop.spec}
(Stroppel, \cite[Lemma 1.2]{S}) Let $G$ be a nontrivial group and $K$ a characteristic subgroup of $G$. Then
$$
\omega(G) \geqslant \omega(K)+ \omega(G/K) - 1.
$$
\end{rem}
\begin{lem} \label{lem.spec}
Let $G$ be a nonsolvable group in which $|spec(G)|\geqslant 6$. Then either $G$ is simple or $\omega(G) \geqslant 7$.
\end{lem}
\begin{proof}
Assume that $\omega(G) \leqslant 6$. Since $|spec(G)| \geqslant 6$ and distinct element orders lie in distinct orbits, we have $\omega(G) = 6$ and each order class is a single orbit, that is, $G$ is an AT-group. By Lemma \ref{Z.th}, $G$ is simple. Moreover, $G \simeq PSL(3,4)$.
\end{proof}
\begin{prop} \label{A5-prop}
Let $G$ be a group and $N$ a proper characteristic subgroup of $G$. If $N$ is isomorphic to one of the following groups $A_5,A_6,PSL(2,7)$ or $PSL(2,8)$, then $\omega(G) \geqslant 7$.
\end{prop}
\begin{proof}
By Lemma \ref{simple-lem}, there is no loss of generality in assuming $\pi(G) = \pi(N)$. In particular, there is a subgroup $M$ in $G$ such that $|M| = p|N|$, for some prime $p \in \pi(G)$. Excluding the case $N \simeq PSL(2,8)$ and $p=7$, a GAP computation shows that $|spec(M)| \geqslant 6$. By Lemma \ref{lem.spec}, $\omega(G) \geqslant 7$. Finally, if $|M| = 7|N|$, where $N \simeq PSL(2,8)$, then $M/C_M(N) \lesssim Aut(N)$. Since $\pi(Aut(N)) = \pi(N)$, we have $C_M(N) \neq \{1\}$. Thus, $\omega(G) \geqslant 7$.
\end{proof}
The following result gives us a description of all nonabelian simple groups with at most $5$ automorphism orbits.
\begin{thm} (Stroppel, \cite[Theorem 4.5]{S}) \label{S.th}
Let $G$ be a non-abelian simple group in which $\omega(G) \leqslant 5$. Then $G$ is isomorphic to one of the groups $A_5$, $A_6$, $PSL(2,7)$ or $PSL(2,8)$.
\end{thm}
\section{Proofs of the main results}
\begin{thmA}
Let $G$ be a nonsolvable group in which $\omega(G) \leqslant 5$. Then $G$ is isomorphic to one of the following groups $A_5$, $A_6$, $PSL(2,7)$ or $PSL(2,8)$.
\end{thmA}
\begin{proof}
According to Theorem \ref{S.th}, all simple groups with at most $5$ automorphism orbits are $A_5, A_6, PSL(2,7)$ and $PSL(2,8)$. We need to show that every non simple group $G$ with $\omega(G) \leqslant 5$ is solvable.
Suppose that $G$ is not simple. Note that, if $G$ is characteristically simple and $\omega(G) \leqslant 5$, then $G$ is simple (Lemma \ref{ch.simple}). Thus, we may assume that $G$ contains a proper nontrivial characteristic subgroup, say $N$. By Remark \ref{prop.spec}, $\omega(N) \leqslant 4$ and $\omega(G/N) \leqslant 4$. By \cite[Theorem 3]{LD}, it suffices to prove that $N$ and $G/N$ cannot be isomorphic to $A_5$. If $N \simeq A_5$, then $\omega(G) \geqslant 7$ by Proposition \ref{A5-prop}. Suppose that $G/N \simeq A_5$. Then $N$ is an elementary abelian $p$-group, for some prime $p$. For convenience, the next steps of the proof are numbered.
\begin{itemize}
\item[(1)] Assume $p\neq 2$.
\end{itemize}
Since a Sylow 2-subgroup of $G$ is not cyclic, we have an element in $G$ of order $2p$ \cite[p. 225]{G}. Therefore $\omega(G) \geqslant 6$.
\begin{itemize}
\item[(2)] Assume $p=2$.
\end{itemize}
In particular, $|g| \in \{2,4\}$ for any element $g$ of $2$-power order outside of $N$. Note that, if $|g|=4$, then $G$ is an AT-group. By Lemma \ref{Z.th}, $G$ is simple, a contradiction. So, we may assume that there exists an involution $g$ outside of $N$. We have $(gh)^2 \in N$, for any $h \in N$. In particular, $gh = hg$, for any $h \in N$. Therefore $N < C_G(N)$. Since $G/N \simeq A_5$, it follows that $N \subseteq Z(G)$. So $\omega(G) \geqslant 6$. Thus $G$ is solvable, which completes the proof.
\end{proof}
It is convenient to first prove Theorem B under the hypothesis that $|\pi(G)| > 3$ and then extend the result to the general case.
\begin{prop} \label{aux-prop}
Let $G$ be a nonsolvable group in which $\omega(G) = 6$. If $|\pi(G)| > 3$, then $G \simeq PSL(3,4)$.
\end{prop}
\begin{proof}
Assume that $G$ is not characteristically simple. Let $N$ be a proper nontrivial characteristic subgroup of $G$. By Remark \ref{prop.spec}, $N$ and $G/N$ have at most $5$ automorphism orbits. Since $G$ is nonsolvable, either $N$ or $G/N$ is nonsolvable.
Suppose that $N$ is nonsolvable. By Theorem A, $N$ is isomorphic to one of the groups $A_5,A_6,PSL(2,7)$ or $PSL(2,8)$. By Lemma \ref{simple-lem}, $\omega(G) \geqslant 8$. Thus, we may assume that $G/N$ is nonsolvable. By Theorem A, $G/N$ is isomorphic to one of the groups $A_5,A_6,PSL(2,7)$ or $PSL(2,8)$. Let $\pi(G) = \{2,3,p,q\}$ and $\pi(G/N) = \{2,3,p\}$. Since $\omega(N) \leqslant 3$, it follows that there exists a characteristic elementary abelian $q$-subgroup $Q$ in $N$ (\cite[Theorem 2]{LD}). Without loss of generality we can assume that $Q=N$. By the Schur-Zassenhaus Theorem \cite[p. 221]{G}, there exists a complement for $N$ in $G$ (that is, there exists a subgroup $K$ such that $G = KN$ and $K \cap N = 1$). In particular, $K \simeq G/N$. Since $|Aut(K)|$ and $|N|$ are coprime, it follows that $G$ is the direct product of $N$ and $K$. Arguing as in the proof of Lemma \ref{ch.simple} we deduce that $\omega(G) \geqslant 8$.
We may assume that $G$ is characteristically simple. By Lemma \ref{ch.simple}, $G$ is simple. Using Kohl's classification \cite{Kohl}, $G \simeq PSL(3,4)$. The result follows.
\end{proof}
\begin{ex} \label{ex.N}
Using GAP we obtained an example of a nonsolvable and non-simple group $G$ such that $|G| = 960$ and $\omega(G)=6$. Moreover, there exists a normal subgroup $N$ of $G$ such that
$$
G/N \simeq A_5 \ \mbox{and} \ N \simeq C_2 \times C_2 \times C_2 \times C_2.
$$
\end{ex}
\begin{thmB} \label{th.B}
Let $G$ be a nonsolvable group in which $\omega(G) = 6$. Then one of the following holds:
\end{thmB}
\begin{itemize}
\item[(i)] $G \simeq PSL(3,4)$;
\item[(ii)] There exists a characteristic elementary abelian $2$-subgroup $N$ of $G$ such that $G/N \simeq A_5$.
\end{itemize}
\begin{proof}
By Lemma \ref{ch.simple}, if $G$ is characteristically simple, then $G$ is simple. According to Kohl's classification \cite{Kohl}, $G \simeq PSL(3,4)$. In particular, by Proposition \ref{aux-prop}, if $|\pi(G)| \geqslant 4$, then $G \simeq PSL(3,4)$. So, we may assume that $|\pi(G)| = 3$ and $G$ is not characteristically simple. We need to show that for every non-simple and nonsolvable group $G$ with $\omega(G) = 6$, there exists a proper characteristic subgroup $N$ such that $G/N \simeq A_5$, where $N$ is an elementary abelian $2$-subgroup.
Let $N$ be a proper characteristic subgroup of $G$. For convenience, the next steps of the proof are numbered.
\begin{itemize}
\item[(1)] Assume that $\omega(N) = 2$.
\end{itemize}
So, $\omega(G/N) = 4$ or $5$ and $N$ is an elementary abelian $p$-group, for some prime $p$. According to Theorem A and Example \ref{ex.N}, it is sufficient to consider $G/N$ isomorphic to one of the groups $A_6, PSL(2,7)$ or $PSL(2,8)$. Since the Sylow 2-subgroup of $G/N$ is not cyclic, it follows that $N$ is an elementary abelian $2$-subgroup \cite[p.\ 225]{G}. Suppose that $G/N \simeq PSL(2,8)$. Arguing as in the proof of Theorem A we deduce that $G$ is an AT-group, a contradiction. Now, we may assume that $G/N \in \{ A_6, PSL(2,7)\}$. Without loss of generality we can assume that there are elements $a \in G \setminus N$ and $h \in N$ such that $|a| = 2$ and $|ah| = 4$. Then there exists exactly one automorphism orbit whose elements have order $4$, namely $\{(ah)^{\varphi} \mid \ \varphi \in Aut(G) \}$. On the other hand, $aN$ has order $2$ and $\omega(G) = 6$. Therefore $G/N$ cannot contain elements of order $4$, a contradiction.
\begin{itemize}
\item[(2)] Assume that $\omega(N) = 3$.
\end{itemize}
There exists a characteristic subgroup $Q$ of $N$ and of $G$ (\cite[Theorem 2]{LD}). As $G/N$ and $G/Q$ are simple, we have a contradiction.
\begin{itemize}
\item[(3)] Assume that $\omega(N) = 4$ or $5$.
\end{itemize}
In particular, $\omega(G/N) \leqslant 3$. Arguing as in $(2)$ we deduce that $\omega(G/N) = 2$. By Theorem A, $N$ is
simple. Hence $$N \in \{A_5, A_6, PSL(2,7), PSL(2,8)\}.$$ By Proposition \ref{A5-prop}, $\omega(G) \geqslant 7$.
\end{proof}
\subsection*{Acknowledgment}
The authors wish to express their thanks to S\'ilvio Sandro for several helpful comments concerning ``GAP''.
\end{document}
\begin{document}
\title{Exact order of extreme $L_p$ discrepancy of infinite sequences in arbitrary dimension}
\author{Ralph Kritzinger and Friedrich Pillichshammer\thanks{The first author is supported by the Austrian Science Fund (FWF), Project F5509-N26, which is a part of the Special Research Program ``Quasi-Monte Carlo Methods: Theory and Applications''.}}
\date{}
\maketitle
\begin{abstract}
We study the extreme $L_p$ discrepancy of infinite sequences in the $d$-dimensional unit cube, which uses arbitrary sub-intervals of the unit cube as test sets. This is in contrast to the classical star $L_p$ discrepancy, which uses exclusively intervals that are anchored in the origin as test sets. We show that for any dimension $d$ and any $p>1$ the extreme $L_p$ discrepancy of every infinite sequence in $[0,1)^d$ is at least of order of magnitude $(\log N)^{d/2}$, where $N$ is the number of considered initial terms of the sequence. For $p \in (1,\infty)$ this order of magnitude is best possible.
\end{abstract}
\centerline{\begin{minipage}[hc]{130mm}{
{\em Keywords:} extreme $L_p$ discrepancy, lower bounds, van der Corput sequence\\
{\em MSC 2020:} 11K38, 11K06, 11K31}
\end{minipage}}
\section{Introduction}
Let $\mathcal{P}=\{\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots,\boldsymbol{x}_{N-1}\}$ be an arbitrary $N$-element point set in the $d$-dimensional unit cube $[0,1)^d$. For any measurable subset $B$ of $[0,1]^d$ the {\it counting function} $$A_N(B,\mathcal{P}):=|\{n \in \{0,1,\ldots,N-1\} \ : \ \boldsymbol{x}_n \in B\}|$$ counts the number of elements from $\mathcal{P}$ that belong to the set $B$. The {\it local discrepancy} of $\mathcal{P}$ with respect to a given measurable ``test set'' $B$ is then given by $$\Delta_N(B,\mathcal{P}):=A_N(B,\mathcal{P})-N \lambda (B),$$ where $\lambda$ denotes the Lebesgue measure of $B$. A global discrepancy measure is then obtained by considering a norm of the local discrepancy with respect to a fixed class of test sets.
In the following let $p \in [1,\infty)$.
The classical {\it (star) $L_p$ discrepancy} uses as test sets the class of axis-parallel rectangles contained in the unit cube that are anchored in the origin. The formal definition is
$$ L_{p,N}^{{\rm star}}(\mathcal{P}):=\left(\int_{[0,1]^d}\left|\Delta_N([\boldsymbol{0},\boldsymbol{t}),\mathcal{P})\right|^p\,\mathrm{d} \boldsymbol{t}\right)^{1/p}, $$
where for $\boldsymbol{t}=(t_1,t_2,\ldots,t_d)\in [0,1]^d$ we set $[\boldsymbol{0},\boldsymbol{t})=[0,t_1)\times [0,t_2)\times \ldots \times [0,t_d)$ with area $\lambda([\boldsymbol{0},\boldsymbol{t}))=t_1t_2\cdots t_d$.
The {\it extreme $L_p$ discrepancy} uses as test sets arbitrary axis-parallel rectangles contained in the unit cube. For $\boldsymbol{u}=(u_1,u_2,\ldots,u_d)$ and $\boldsymbol{v}=(v_1,v_2,\ldots,v_d)$ in $[0,1]^d$ and $\boldsymbol{u} \leq \boldsymbol{v}$ let $[\boldsymbol{u},\boldsymbol{v})=[u_1,v_1)\times [u_2,v_2) \times \ldots \times [u_d,v_d)$, where $\boldsymbol{u} \leq \boldsymbol{v}$ means $u_j\leq v_j$ for all $j \in \{1,2,\ldots,d\}$. Obviously, $\lambda([\boldsymbol{u},\boldsymbol{v}))=\prod_{j=1}^d (v_j-u_j)$. The extreme $L_p$ discrepancy of $\mathcal{P}$ is then defined as
$$L_{p,N}^{\mathrm{extr}}(\mathcal{P}):=\left(\int_{[0,1]^d}\int_{[0,1]^d,\, \boldsymbol{u}\leq \boldsymbol{v}}\left|\Delta_N([\boldsymbol{u},\boldsymbol{v}),\mathcal{P})\right|^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p}. $$
Note that the only difference between the star and the extreme $L_p$ discrepancy is the use of anchored and arbitrary rectangles in $[0,1]^d$, respectively.
For an infinite sequence $\mathcal{S}_d$ in $[0,1)^d$ the star and the extreme $L_p$ discrepancies $L_{p,N}^{\bullet}(\mathcal{S}_d)$ are defined as $L_{p,N}^{\bullet}(\mathcal{P}_{d,N})$, $N \in \mathbb{N}$, of the point set $\mathcal{P}_{d,N}$ consisting of the initial $N$ terms of $\mathcal{S}_d$, where $\bullet \in \{{\rm star},{\rm extr}\}$.
Of course, with the usual adaptions the above definitions can be extended also to the case $p=\infty$. However, it is well known that in this case the star and extreme $L_{\infty}$ discrepancies are equivalent in the sense that $L_{\infty,N}^{{\rm star}}(\mathcal{P}) \le L_{\infty,N}^{{\rm extr}}(\mathcal{P}) \le 2^d L_{\infty,N}^{{\rm star}}(\mathcal{P})$ for every $N$-element point set $\mathcal{P}$ in $[0,1)^d$. For this reason we restrict the following discussion to the case of finite $p$.
Recently it has been shown that the extreme $L_p$ discrepancy is dominated -- up to a multiplicative factor that depends only on $p$ and $d$ -- by the star $L_p$ discrepancy (see \cite[Corollary~5]{KP21}), i.e., for every $d \in \mathbb{N}$ and $p \in [1,\infty)$ there exists a positive quantity $c_{d,p}$ such that for every $N \in \mathbb{N}$ and every $N$-element point set in $[0,1)^d$ we have
\begin{equation}\label{monLpstex}
L_{p,N}^{{\rm extr}}(\mathcal{P})\le c_{d,p} L_{p,N}^{{\rm star}}(\mathcal{P}).
\end{equation}
For $p=2$ we even have $c_{d,2}=1$ for all $d \in \mathbb{N}$; see \cite[Theorem~5]{HKP20}. A corresponding estimate the other way round is in general not possible (see \cite[Section~3]{HKP20}). So, in general, the star and the extreme $L_p$ discrepancy for $p< \infty$ are not equivalent, which is in contrast to the $L_{\infty}$-case.
\paragraph{Bounds for finite point sets.} For finite point sets the order of magnitude of the considered discrepancies is more or less known. For every $p \in(1,\infty)$ and $d \in \mathbb{N}$ there exists a $c_{p,d}>0$ such that for every finite $N$-element point set $\mathcal{P}$ in $[0,1)^d$ with $N \ge 2$ we have
\begin{equation*}
L_{p,N}^{\bullet}(\mathcal{P}) \ge c_{p,d} (\log N)^{\frac{d-1}{2}}, \quad \mbox{where } \bullet \in \{{\rm star},{\rm extr}\}.
\end{equation*}
For the star $L_p$ discrepancy the result for $p \ge 2$ is a celebrated result by Roth~\cite{Roth} from 1954 that was extended later by Schmidt~\cite{S77} to the case $p \in (1,2)$. For the extreme $L_p$ discrepancy the result for $p \ge 2$ was first given in \cite[Theorem~6]{HKP20} and for $p \in (1,2)$ in \cite{KP21}. Hal\'{a}sz~\cite{H81} for the star discrepancy and the authors \cite{KP21} for the extreme discrepancy proved that the lower bound is even true for $p=1$ and $d=2$, i.e., there exists a positive number $c_{1,2}$ with the following property: for every $N$-element point set $\mathcal{P}$ in $[0,1)^2$ with $N \ge 2$ we have
\begin{equation}\label{lbdl1D2dipts}
L_{1,N}^{\bullet}(\mathcal{P}) \ge c_{1,2} \sqrt{\log N}, \quad \mbox{where } \bullet \in \{{\rm star},{\rm extr}\}.
\end{equation}
On the other hand, it is known that for every $d,N \in \mathbb{N}$ and every $p \in [1,\infty)$ there exist $N$-element point sets $\mathcal{P}$ in $[0,1)^d$ such that
\begin{equation}\label{uplpps}
L_{p,N}^{{\rm star}}(\mathcal{P}) \lesssim_{d,p} (\log N)^{\frac{d-1}{2}}.
\end{equation}
(For $f,g: D \subseteq \mathbb{N} \rightarrow \mathbb{R}^+$ we write $f(N) \lesssim g(N)$, if there exists a positive number $C$ such that $f(N) \le C g(N)$ for all $N \in D$. Possible implied dependencies of $C$ are indicated as lower indices of the $\lesssim$ symbol.)
Due to \eqref{monLpstex} the upper bound \eqref{uplpps} even applies to the extreme $L_p$ discrepancy. Hence, for $p \in (1,\infty)$ and arbitrary $d \in \mathbb{N}$ (and also for $p=1$ and $d=2$) we have matching lower and upper bounds. The result in \eqref{uplpps} was proved by Davenport~\cite{D56} for $p= 2$, $d= 2$, by Roth~\cite{R80} for $p= 2$ and arbitrary $d$ and finally by Chen~\cite{C80} in the general case. Other proofs were found by Frolov~\cite{Frolov}, Chen~\cite{C83}, Dobrovol'ski{\u\i}~\cite{Do84}, Skriganov~\cite{Skr89, Skr94}, Hickernell and Yue~\cite{HY00}, and Dick and Pillichshammer~\cite{DP05b}. For more details on the early history of the subject see the monograph \cite{BC}. Apart from Davenport, who gave an explicit construction in dimension $d=2$, these results are pure existence results and explicit constructions of point sets were not known until the beginning of this millennium. First explicit constructions of point sets with optimal order of star $L_2$ discrepancy in arbitrary dimensions have been provided in 2002 by Chen and Skriganov \cite{CS02} for $p=2$ and in 2006 by Skriganov \cite{S06} for general $p$. Other explicit constructions are due to Dick and Pillichshammer \cite{DP14a} for $p=2$, and Dick \cite{D14} and Markhasin \cite{M15} for general $p$.
\paragraph{Bounds for infinite sequences.} For the star $L_p$ discrepancy the situation is also more or less clear. Using a method from Pro{\u\i}nov~\cite{pro86} (see also \cite{DP14b}) the results about lower bounds on the star $L_p$ discrepancy for finite point sets can be transferred to the following lower bounds for infinite sequences: for every $p \in(1,\infty]$ and every $d \in \mathbb{N}$ there exists a $C_{p,d}>0$ such that for every infinite sequence $\mathcal{S}_d$ in $[0,1)^d$
\begin{equation}\label{lbdlpdiseq}
L_{p,N}^{{\rm star}}(\mathcal{S}_d) \ge C_{p,d} (\log N)^{d/2} \ \ \ \ \mbox{for infinitely many $N \in \mathbb{N}$.}
\end{equation}
For $d=1$ the result holds also for the case $p=1$, i.e., for every $\mathcal{S}$ in $[0,1)$ we have
\begin{equation*}
L_{1,N}^{{\rm star}}(\mathcal{S}) \ge C_{1,1} \sqrt{\log N} \ \ \ \ \mbox{for infinitely many $N \in \mathbb{N}$.}
\end{equation*}
Matching upper bounds on the star $L_p$ discrepancy of infinite sequences have been shown in \cite{DP14a} (for $p=2$) and in \cite{DHMP} (for general $p$). For every $d \in \mathbb{N}$ there exist infinite sequences $\mathcal{S}_d$ in $[0,1)^d$ such that for any $p \in [1,\infty)$ we have $$L_{p,N}^{{\rm star}}(\mathcal{S}_d) \lesssim_{d,p} (\log N)^{d/2} \quad \mbox{ for all $N \in \mathbb{N}$.}$$
So far, the extreme $L_p$ discrepancy of infinite sequences has not yet been studied. Obviously, due to \eqref{monLpstex} the upper bounds on the star $L_p$ discrepancy also apply to the extreme $L_p$ discrepancy. However, a similar reasoning for obtaining a lower bound is not possible. In this paper we show that the lower bound \eqref{lbdlpdiseq} also holds true for the extreme $L_p$ discrepancy. Thereby we prove that for fixed dimension $d$ and for $p\in (1,\infty)$ the minimal extreme $L_p$ discrepancy is, like the star $L_p$ discrepancy, of exact order of magnitude $(\log N)^{d/2}$ when $N$ tends to infinity.
\section{The result}
We extend the lower bound \eqref{lbdlpdiseq} for the star $L_p$ discrepancy of infinite sequences to extreme $L_p$ discrepancy.
\begin{thm}\label{thm2}
For every dimension $d \in \mathbb{N}$ and any $p>1$ there exists a real $\alpha_{d,p} >0$ with the following property: For any infinite sequence $\mathcal{S}_d$ in $[0,1)^d$ we have $$L_{p,N}^{{\rm extr}}(\mathcal{S}_{d}) \ge \alpha_{d,p} (\log N)^{d/2} \ \ \ \ \mbox{ for infinitely many }\ N \in \mathbb{N}.$$ For $d=1$ the result even holds true for the case $p=1$.
\end{thm}
For the proof we require the following lemma. For the usual star discrepancy this lemma goes back to Roth~\cite{Roth}. We require a similar result for the extreme discrepancy.
\begin{lem}\label{le1}
For $d\in \mathbb{N}$ let $\mathcal{S}_d=(\boldsymbol{y}_k)_{k\ge 0}$, where $\boldsymbol{y}_k=(y_{1,k},\ldots,y_{d,k})$ for $k \in \mathbb{N}_0$, be an arbitrary sequence in the $d$-dimensional unit cube with extreme $L_p$ discrepancy $L_{p,N}^{{\rm extr}}(\mathcal{S}_d)$ for $p \in [1,\infty]$. Then for every $N\in \mathbb{N}$ there exists an $n \in \{1,2,\ldots,N\}$ such that $$L_{p,n}^{{\rm extr}}(\mathcal{S}_d) \ge \frac{2^{1/p}}{2}\, L_{p,N}^{{\rm extr}}(\mathcal{P}_{N,d+1})- \frac{1}{2^{d/p}},$$ with the adaptation that $2^{1/p}$ and $2^{d/p}$ have to be set equal to $1$ if $p =\infty$, where $\mathcal{P}_{N,d+1}$ is the finite point set in $[0,1)^{d+1}$ consisting of the $N$ points $$\boldsymbol{x}_k=(y_{1,k},\ldots,y_{d,k},k/N) \ \ \mbox{ for }\ k \in \{0,1,\ldots ,N-1\}.$$
\end{lem}
\begin{proof}
We present the proof for finite $p$. For $p=\infty$ the proof is similar.
Choose $n \in \{1,2,\ldots,N\}$ such that $$L_{p,n}^{{\rm extr}}(\mathcal{S}_d) =\max_{k=1,2,\ldots,N} L_{p,k}^{{\rm extr}}(\mathcal{S}_d).$$
Consider a sub-interval of the $(d+1)$-dimensional unit cube of the form $E=\prod_{i=1}^{d+1}[u_i,v_i)$ with $\boldsymbol{u}=(u_1,u_2,\ldots,u_{d+1}) \in [0,1)^{d+1}$ and $\boldsymbol{v}=(v_1,v_2,\ldots,v_{d+1}) \in [0,1)^{d+1}$ satisfying $\boldsymbol{u}\leq\boldsymbol{v}$. Put $\overline{m}=\overline{m}(v_{d+1}):=\lceil N v_{d+1}\rceil$ and $\underline{m}=\underline{m}(u_{d+1}):=\lceil N u_{d+1}\rceil$. Then a point $\boldsymbol{x}_k$, $k \in \{0,1,\ldots, N-1\}$, belongs to $E$, if and only if $\boldsymbol{y}_k \in \prod_{i=1}^d[u_i,v_i)$ and $N u_{d+1} \le k < N v_{d+1}$. Denoting $E'=\prod_{i=1}^d[u_i,v_i)$ we have $$A_N(E,\mathcal{P}_{N,d+1})=A_{\overline{m}}(E',\mathcal{S}_d)-A_{\underline{m}}(E',\mathcal{S}_d)$$ and therefore
\begin{align*}
\Delta_N(E,\mathcal{P}_{N,d+1}) = & A_N(E,\mathcal{P}_{N,d+1}) -N \prod_{i=1}^{d+1} (v_i - u_i)\\
= & A_{\overline{m}}(E',\mathcal{S}_d)-A_{\underline{m}}(E',\mathcal{S}_d) - \overline{m} \prod_{i=1}^d (v_i-u_i) + \underline{m} \prod_{i=1}^d (v_i-u_i)\\
& + \overline{m} \prod_{i=1}^d (v_i-u_i) - \underline{m} \prod_{i=1}^d (v_i-u_i) - N \prod_{i=1}^{d+1} (v_i - u_i)\\
= & \Delta_{\overline{m}}(E',\mathcal{S}_d) - \Delta_{\underline{m}}(E',\mathcal{S}_d) + \left(\overline{m}-\underline{m}-N(v_{d+1}-u_{d+1})\right) \prod_{i=1}^d (v_i-u_i).
\end{align*}
We obviously have $|\overline{m}-N v_{d+1}| \le 1$, $|\underline{m}-N u_{d+1}| \le 1$ and $|\prod_{i=1}^d (v_i-u_i)| \le 1$. Hence $$|\Delta_N(E,\mathcal{P}_{N,d+1})| \le |\Delta_{\overline{m}}(E',\mathcal{S}_d)| + |\Delta_{\underline{m}}(E',\mathcal{S}_d)| +2,$$
which yields
\begin{align*}
L_{p,N}^{{\rm extr}}(\mathcal{P}_{N,d+1}) \le & \left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}}\Big| |\Delta_{\overline{m}}(E',\mathcal{S}_d)| + |\Delta_{\underline{m}}(E',\mathcal{S}_d)| +2\Big|^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p}\\
\le & \left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}} |\Delta_{\overline{m}}(E',\mathcal{S}_d)|^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p}\\
& + \left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}} |\Delta_{\underline{m}}(E',\mathcal{S}_d)|^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p}\\
& + \left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}} 2^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p},
\end{align*}
where the last step easily follows from the triangle inequality for the $L_p$-seminorm.
For every $u_{d+1},v_{d+1} \in [0,1]$ we have $L_{p,\overline{m}}^{{\rm extr}}(\mathcal{S}_d) \le L_{p,n}^{{\rm extr}}(\mathcal{S}_d)$ and $L_{p,\underline{m}}^{{\rm extr}}(\mathcal{S}_d) \le L_{p,n}^{{\rm extr}}(\mathcal{S}_d)$.
Setting $\boldsymbol{u}'=(u_1,\dots,u_d)$ and $\boldsymbol{v}'=(v_1,\dots,v_d)$, we obtain
\begin{eqnarray*}
\lefteqn{\left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}} |\Delta_{\overline{m}}(E',\mathcal{S}_d)|^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p}}\\
& = & \left( \int_0^1 \int_{0,\, u_{d+1} \le v_{d+1}}^1 \int_{[0,1]^d}\int_{[0,1]^d,\, \boldsymbol{u}'\leq \boldsymbol{v}'} |\Delta_{\overline{m}}(E',\mathcal{S}_d)|^p\,\mathrm{d} \boldsymbol{u}'\,\mathrm{d}\boldsymbol{v}' \,\mathrm{d} u_{d+1} \,\mathrm{d} v_{d+1}\right)^{1/p}\\
& = & \left( \int_0^1 \int_{0,\, u_{d+1} \le v_{d+1}}^1 (L_{p,\overline{m}}^{{\rm extr}}(\mathcal{S}_d))^p \,\mathrm{d} u_{d+1} \,\mathrm{d} v_{d+1}\right)^{1/p}\\
& \le & \left( \int_0^1 \int_{0,\, u_{d+1} \le v_{d+1}}^1 (L_{p,n}^{{\rm extr}}(\mathcal{S}_d))^p \,\mathrm{d} u_{d+1} \,\mathrm{d} v_{d+1}\right)^{1/p}\\
& = & \frac{1}{2^{1/p}}\, L_{p,n}^{{\rm extr}}(\mathcal{S}_d).
\end{eqnarray*}
Likewise we also have $$\left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}} |\Delta_{\underline{m}}(E',\mathcal{S}_d)|^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p} \le \frac{1}{2^{1/p}}\, L_{p,n}^{{\rm extr}}(\mathcal{S}_d).$$ Also $$\left(\int_{[0,1]^{d+1}}\int_{[0,1]^{d+1},\, \boldsymbol{u}\leq \boldsymbol{v}} 2^p\,\mathrm{d} \boldsymbol{u}\,\mathrm{d}\boldsymbol{v}\right)^{1/p}= \frac{2}{2^{(d+1)/p}}.$$
Therefore we obtain
$$L_{p,N}^{{\rm extr}}(\mathcal{P}_{N,d+1}) \le \frac{2}{2^{1/p}}\, L_{p,n}^{{\rm extr}}(\mathcal{S}_d) + \frac{2}{2^{(d+1)/p}}.$$
From here the result follows immediately.
\end{proof}
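The counting identity $A_N(E,\mathcal{P}_{N,d+1})=A_{\overline{m}}(E',\mathcal{S}_d)-A_{\underline{m}}(E',\mathcal{S}_d)$ at the heart of this proof is easy to check numerically. The following Python sketch is an illustration added for the reader, not part of the original argument; the choice of the sequence (van der Corput in base $2$), of $N$, and of the randomly drawn boxes is ours. It lifts a one-dimensional sequence to $[0,1)^2$ via the extra coordinate $k/N$ and verifies the identity:

```python
import math
import random

def vdc(k):
    """Van der Corput radical inverse of k in base 2."""
    x, f = 0.0, 0.5
    while k:
        x += f * (k & 1)
        k >>= 1
        f *= 0.5
    return x

N = 37
y = [vdc(k) for k in range(N)]               # 1D sequence S_1
lifted = [(y[k], k / N) for k in range(N)]   # point set P_{N,2} in [0,1)^2

rng = random.Random(0)
mismatches = 0
for _ in range(500):
    u1, v1 = sorted(rng.random() for _ in range(2))
    u2, v2 = sorted(rng.random() for _ in range(2))
    # direct count of lifted points in E = [u1,v1) x [u2,v2)
    lhs = sum(u1 <= p < v1 and u2 <= t < v2 for p, t in lifted)
    # difference of two initial-segment counts in E' = [u1,v1)
    m_hi, m_lo = math.ceil(N * v2), math.ceil(N * u2)
    rhs = sum(u1 <= y[k] < v1 for k in range(m_hi)) \
        - sum(u1 <= y[k] < v1 for k in range(m_lo))
    mismatches += (lhs != rhs)
```

Every drawn box satisfies the identity, since $u_{2} \le k/N < v_{2}$ holds for an integer $k$ exactly when $\lceil N u_{2}\rceil \le k < \lceil N v_{2}\rceil$.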
Now we can give the proof of Theorem~\ref{thm2}.
\begin{proof}[Proof of Theorem~\ref{thm2}]
We use the notation from Lemma~\ref{le1}. For the extreme $L_p$ discrepancy of the finite point set $\mathcal{P}_{N,d+1}$ in $[0,1)^{d+1}$ we obtain from \cite[Corollary~4]{KP21} (for $d \in \mathbb{N}$ and $p>1$) and \cite[Theorem~7]{KP21} (for $d=1$ and $p=1$) that $$L_{p,N}^{{\rm extr}}(\mathcal{P}_{N,d+1}) \ge c_{d+1,p} (\log N)^{d/2}$$ for some real $c_{d+1,p}>0$ which is independent of $N$. Let $\alpha_{d,p} \in (0,2^{\frac{1}{p}-1}c_{d+1,p})$ and let $N \in \mathbb{N}$ be large enough such that $$\frac{2^{1/p} c_{d+1,p}}{2} \, (\log N)^{d/2} -\frac{1}{2^{d/p}} \ge \alpha_{d,p} (\log N)^{d/2}.$$ According to Lemma~\ref{le1} there exists an $n \in \{1,2,\ldots,N\}$ such that
\begin{eqnarray}\label{eq1}
L_{p,n}^{{\rm extr}}(\mathcal{S}_d) & \ge & \frac{2^{1/p}}{2}\, L_{p,N}^{{\rm extr}}(\mathcal{P}_{N,d+1})-\frac{1}{2^{d/p}}\nonumber\\
& \ge & \frac{2^{1/p} c_{d+1,p}}{2} \, (\log N)^{d/2} -\frac{1}{2^{d/p}}\nonumber\\
& \ge & \alpha_{d,p} (\log n)^{d/2}.
\end{eqnarray}
Thus we have shown that for every large enough $N \in \mathbb{N}$ there exists an $n \in \{1,2,\ldots,N\}$ such that
\begin{equation}\label{eq2}
L_{p,n}^{{\rm extr}}(\mathcal{S}_d) \ge \alpha_{d,p} (\log n)^{d/2}.
\end{equation}
It remains to show that \eqref{eq2} holds for infinitely many $n \in \mathbb{N}$. Assume on the contrary that \eqref{eq2} holds for only finitely many $n \in \mathbb{N}$ and let $m$ be the largest integer for which \eqref{eq2} holds. Then choose $N \in \mathbb{N}$ large enough such that $$\frac{2^{1/p} c_{d+1,p}}{2} \, (\log N)^{d/2} -\frac{1}{2^{d/p}} \ge \alpha_{d,p} (\log N)^{d/2} > \max_{k=1,2,\ldots,m} L_{p,k}^{{\rm extr}}(\mathcal{S}_d).$$ For this $N$ we can find an $n \in \{1,2,\ldots,N\}$ for which \eqref{eq1} and \eqref{eq2} hold true. However, \eqref{eq1} implies that $n > m$, which leads to a contradiction since $m$ is the largest integer such that \eqref{eq2} is true. Thus we have shown that \eqref{eq2} holds for infinitely many $n \in \mathbb{N}$ and this completes the proof.
\end{proof}
As already mentioned, there exist explicit constructions of infinite sequences $\mathcal{S}_d$ in $[0,1)^d$ with the property that
\begin{align}\label{ub:extrlpinf}
L_{p,N}^{{\rm extr}}(\mathcal{S}_d) \lesssim_{p,d} (\log N)^{d/2}\ \ \ \ \mbox{ for all $N\ge 2$ and all $p \in [1,\infty)$.}
\end{align}
This result follows from \eqref{monLpstex} together with \cite[Theorem~1.1]{DHMP}. These explicitly constructed sequences are so-called order~$2$ digital $(t,d)$-sequences over the finite field $\mathbb{F}_2$; see \cite[Section~2.2]{DHMP}. The result \eqref{ub:extrlpinf} implies that the lower bound from Theorem~\ref{thm2} is best possible in the order of magnitude in $N$ for fixed dimension $d$.
\begin{rem}\rm
Although the optimality of the lower bound in Theorem~\ref{thm2} is shown by means of matching upper bounds on the star $L_p$ discrepancy, we point out that in general the extreme $L_p$ discrepancy can be of strictly smaller order than the star $L_p$ discrepancy. An easy example is the van der Corput sequence $\mathcal{S}^{{\rm vdC}}$ in dimension $d=1$, whose extreme $L_p$ discrepancy is of the optimal order of magnitude
\begin{equation}\label{optvdclpex}
L_{p,N}^{{\rm extr}}(\mathcal{S}^{{\rm vdC}}) \lesssim_p \sqrt{\log N}\quad \mbox{for all $N\geq 2$ and all $p\in [1,\infty)$,}
\end{equation}
but its star $L_p$ discrepancy is of exact order of magnitude $\log N$, since
\begin{equation}\label{exordvdc}
\limsup_{N \rightarrow \infty} \frac{L_{p,N}^{{\rm star}}(\mathcal{S}^{{\rm vdC}})}{\log N}=\frac{1}{6 \log 2} \quad \mbox{ for all $p \in [1,\infty)$.}
\end{equation}
For a proof of \eqref{exordvdc} see, e.g., \cite{chafa,proat} for $p=2$ and \cite{pil04} for general $p$. A proof of \eqref{optvdclpex} can be given by means of a Haar series representation of the extreme $L_p$ discrepancy as given in \cite[Proposition~3, Eq.~(9)]{KP21}. One only requires good estimates for all Haar coefficients of the discrepancy function of the first $N$ elements of the van der Corput sequence, and these can be found in~\cite{KP2015}. Employing these estimates, a few lines of algebra yield the optimal-order result \eqref{optvdclpex}.
\end{rem}
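For $p=2$ and $d=1$ both discrepancies admit short closed forms, obtained by expanding the square in the defining integral and integrating term by term (a Warnock-type computation; the following Python sketch and the formulas in it are our own derivation and should be read as an illustration only). For the first $8$ points of the van der Corput sequence, which are exactly the points $j/8$ for $j=0,\ldots,7$, one obtains $(L_{2,8}^{{\rm star}})^2=1/3$ and $(L_{2,8}^{{\rm extr}})^2=1/12$, so the domination of the extreme by the star $L_2$ discrepancy is visible directly:

```python
def vdc(k):
    """Van der Corput radical inverse of k in base 2."""
    x, f = 0.0, 0.5
    while k:
        x += f * (k & 1)
        k >>= 1
        f *= 0.5
    return x

def star_L2_sq(x):
    # (L_2^star)^2 = int_0^1 (A([0,t)) - N t)^2 dt, expanded term by term
    N = len(x)
    s1 = sum(1 - max(a, b) for a in x for b in x)
    s2 = sum(1 - a * a for a in x)
    return s1 - N * s2 + N * N / 3.0

def extreme_L2_sq(x):
    # (L_2^extr)^2 = int int_{u<=v} (A([u,v)) - N (v-u))^2 du dv
    N = len(x)
    s1 = sum(min(a, b) * (1 - max(a, b)) for a in x for b in x)
    s2 = sum(1 - (1 - a) ** 3 - a ** 3 for a in x)
    return s1 - N * s2 / 3.0 + N * N / 12.0

pts = [vdc(k) for k in range(8)]   # the first 8 vdC points, i.e. {j/8 : j = 0..7}
star_sq, extr_sq = star_L2_sq(pts), extreme_L2_sq(pts)   # 1/3 and 1/12
```

The inequality $L_{2,N}^{{\rm extr}}(\mathcal{P}) \le L_{2,N}^{{\rm star}}(\mathcal{P})$ (constant $c_{d,2}=1$, \cite[Theorem~5]{HKP20}) can be observed for other values of $N$ as well.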
\begin{rem}\rm
The periodic $L_p$ discrepancy is another type of discrepancy that is based on the class of periodic intervals modulo one as test sets; see \cite{HKP20,KP21}. Denote it by $L_{p,N}^{{\rm per}}$. The periodic $L_p$ discrepancy dominates the extreme $L_p$ discrepancy because the range of integration in the definition of the extreme $L_p$ discrepancy is a subset of the range of integration in the definition of the periodic $L_p$ discrepancy, as already noted in \cite[Eq.~(1)]{HKP20} for the special case $p=2$. Furthermore, it is well known that the periodic $L_2$ discrepancy, normalized by the number of elements of the point set, is equivalent to the diaphony, which was introduced by Zinterhof~\cite{zint} and which is yet another quantitative measure for the irregularity of distribution; see \cite[Theorem~1]{Lev} or \cite[p.~390]{HOe}. For $\mathcal{P}=\{\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots,\boldsymbol{x}_{N-1}\}$ in $[0,1)^d$ it is defined as $$F_N(\mathcal{P})=\left(\sum_{\boldsymbol{h} \in \mathbb{Z}^d \setminus \{\boldsymbol{0}\}} \frac{1}{r(\boldsymbol{h})^2} \left| \frac{1}{N} \sum_{k=0}^{N-1} {\rm e}^{2 \pi \mathtt{i} \boldsymbol{h} \cdot \boldsymbol{x}_k}\right|^2\right)^{1/2},$$ where for $\boldsymbol{h} =(h_1,h_2,\ldots,h_d)\in \mathbb{Z}^d$ we set $r(\boldsymbol{h})= \prod_{j=1}^d \max(1,|h_j|)$. Now, for every $p>1$ and every infinite sequence $\mathcal{S}_d$ in $[0,1)^d$ we have for infinitely many $N \in \mathbb{N}$ the lower bound
$$\frac{(\log N)^{d/2}}{N} \lesssim_{p,d} \frac{1}{N}\, L_{p,N}^{{\rm extr}}(\mathcal{S}_d) \le \frac{1}{N}\, L_{p,N}^{{\rm per}}(\mathcal{S}_d).$$ Choosing $p=2$ we obtain $$\frac{(\log N)^{d/2}}{N} \lesssim_{d} \frac{1}{N}\, L_{2,N}^{{\rm per}}(\mathcal{S}_d) \lesssim_d F_N(\mathcal{S}_d)\quad \mbox{ for infinitely many $N \in \mathbb{N}$.}$$ Thus, there exists a positive $C_d$ such that for every sequence $\mathcal{S}_d$ in $[0,1)^d$ we have
\begin{equation}\label{lb:dia}
F_N(\mathcal{S}_d) \ge C_d \, \frac{(\log N)^{d/2}}{N} \quad \mbox{ for infinitely many $N \in \mathbb{N}$.}
\end{equation}
This result was first shown by Pro{\u\i}nov~\cite{pro2000} by means of a different argument. The publication \cite{pro2000} is only available in Bulgarian; a survey presenting the relevant results was published by Kirk~\cite{kirk}. At least in dimension $d=1$ the lower bound \eqref{lb:dia} is best possible, since, for example, $F_N(\mathcal{S}^{{\rm vdC}}) \lesssim \sqrt{\log N}/N$ for all $N \in \mathbb{N}$ as shown in \cite{progro} (see also \cite{chafa,F05}). Note that this also yields another proof of \eqref{optvdclpex} for the case $p=2$. A corresponding result for dimensions $d>1$ is still missing.
\end{rem}
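In dimension $d=1$ the diaphony can be evaluated in closed form: inserting the Fourier expansion $\sum_{h \neq 0} {\rm e}^{2\pi \mathtt{i} h y}/h^2 = 2\pi^2 B_2(\{y\})$, where $B_2(t)=t^2-t+1/6$ is the second Bernoulli polynomial, into the definition gives $F_N(\mathcal{P})^2=\frac{2\pi^2}{N^2}\sum_{k,l=0}^{N-1}B_2(\{x_k-x_l\})$. The following Python sketch (our own illustration, not part of the paper) compares this closed form with a truncation of the defining series:

```python
import math

def vdc(k):
    """Van der Corput radical inverse of k in base 2."""
    x, f = 0.0, 0.5
    while k:
        x += f * (k & 1)
        k >>= 1
        f *= 0.5
    return x

def diaphony(x):
    # closed form via the Bernoulli polynomial B_2(t) = t^2 - t + 1/6
    N = len(x)
    b2 = lambda t: t * t - t + 1.0 / 6.0
    s = sum(b2((a - b) % 1.0) for a in x for b in x)
    return math.sqrt(2.0 * math.pi ** 2 * s) / N

def diaphony_truncated(x, H=2000):
    # defining series, truncated at |h| <= H (pairs +h, -h give the factor 2)
    N = len(x)
    s = 0.0
    for h in range(1, H + 1):
        re = sum(math.cos(2 * math.pi * h * a) for a in x)
        im = sum(math.sin(2 * math.pi * h * a) for a in x)
        s += 2.0 * (re * re + im * im) / (h * h)
    return math.sqrt(s) / N

pts = [vdc(k) for k in range(16)]
```

For a single point the closed form gives $F_1 = \pi/\sqrt{3}$ independently of the point, which serves as a quick sanity check.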
\noindent {\bf Authors' address:} Institute of Financial Mathematics and Applied Number Theory, Johannes Kepler University Linz, Altenberger Stra{\ss}e 69, 4040 Linz, Austria. Email: ralph.kritzinger@yahoo.de, friedrich.pillichshammer@jku.at
\end{document}
\begin{document}
\title{Image Segmentation with Eigenfunctions of an Anisotropic Diffusion Operator\thanks{This work was supported in part by the NSF under Grant DMS-1115118.}}
\newcommand{\slugmaster}{
\slugger{siims}{xxxx}{xx}{x}{x--x}}
\begin{abstract}
We propose the eigenvalue problem of an anisotropic diffusion operator for image segmentation.
The diffusion matrix is defined based on the input image. The eigenfunctions and the projection
of the input image in some eigenspace capture key features of the input image.
An important property of the model is that for many input images, the first few eigenfunctions are close to being piecewise
constant, which makes them useful as the basis for a variety of applications such as image segmentation and edge detection.
The eigenvalue problem is shown to be related to the algebraic eigenvalue problems resulting from
several commonly used discrete spectral clustering models.
The relation provides a better understanding and helps in developing more efficient numerical implementations
and rigorous numerical analysis for discrete spectral segmentation methods.
The new continuous model is also different from energy-minimization methods such as geodesic active contours
in that no initial guess is required in the current model. The multi-scale feature is a natural consequence of
the anisotropic diffusion operator, so there is no need to solve the eigenvalue problem at multiple levels.
A numerical implementation based on a finite element method with an anisotropic mesh adaptation strategy
is presented. It is shown that the numerical scheme gives much more accurate results on eigenfunctions than
uniform meshes. Several interesting features of the model are examined in numerical examples and
possible applications are discussed.
\end{abstract}
\noindent{\bf Key Words.}
eigenvalue problem, image segmentation, finite-element schemes, mesh adaptation, anisotropic diffusion.
\noindent{\bf AMS 2010 Mathematics Subject Classification.}
65N25, 68U10, 94A08
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{IMAGE SEGMENTATION WITH EIGENFUNCTIONS}{IMAGE SEGMENTATION WITH EIGENFUNCTIONS}
\section{Introduction}
We are concerned with image segmentation using
the eigenvalue problem of an anisotropic linear diffusion operator,
\begin{equation}
-\nabla \cdot (\mathbb{D}\nabla u)=\lambda u, \quad \text{ in }\Omega
\label{eq:HW-eigen}
\end{equation}
subject to a homogeneous Dirichlet or Neumann boundary condition, where the diffusion matrix
$\mathbb{D}$ is symmetric and uniformly positive definite on $\Omega$. In this study, we consider
an anisotropic diffusion situation where $\mathbb{D}$ has different eigenvalues and is defined based
on the gray level of the input image.
A method employing an eigenvalue problem to study image segmentation is referred to as a spectral clustering
method in the literature. Methods of this type have attracted great interest from researchers
in the past decade; e.g., see \cite{Grady-01,shi_normalized_2000,Wu-Leahy-1993}.
They are typically derived from a minimum-cut criterion on a graph.
One of the most noticeable spectral clustering methods is the normalized cut method proposed
by Shi and Malik \cite{shi_normalized_2000} (also see Section~\ref{SEC:relation-1} below)
which is based on the eigenvalue problem
\begin{equation}
(D-W) {\bf u} = \lambda D {\bf u} ,
\label{eq:Malik-Shi-eigen}
\end{equation}
where ${\bf u}$ is a vector representing the gray level value on the pixels, $W$ is a matrix defining
pairwise similarity between pixels, and $D$ is a diagonal matrix formed with the degree
of pixels (cf. Section 2.2 below). The operator $L=D-W$ corresponds to the graph Laplacian in graph spectral
theory. An eigenvector associated with the second eigenvalue is used
as a continuous approximation to a binary or $k$-way vector that
indicates the partitions of the input image. Shi and Malik suggested that
image segmentation be done on a hierarchical basis where
low-level coherence of brightness, texture, etc.\ guides a binary (or
$k$-way) segmentation that provides a big picture while high
level knowledge is used to further partition the low-level segments.
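A minimal numerical sketch of (\ref{eq:Malik-Shi-eigen}) may help fix ideas. This is our own toy example, not taken from \cite{shi_normalized_2000}: the similarity $w_{ij}=\exp(-(I_i-I_j)^2/\sigma^2)$ uses brightness only, and the value of $\sigma$ and the toy image are assumptions. The generalized eigenproblem $(D-W)\boldsymbol{u}=\lambda D\boldsymbol{u}$ is solved through the symmetrically normalized Laplacian $D^{-1/2}(D-W)D^{-1/2}$ (whose eigenvectors $\boldsymbol{z}$ give generalized eigenvectors $\boldsymbol{u}=D^{-1/2}\boldsymbol{z}$), and thresholding the eigenvector of the second smallest eigenvalue at zero yields a binary partition:

```python
import numpy as np

# toy "image": 8 pixels forming two flat regions of gray level 0 and 1
I = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
sigma = 0.3
W = np.exp(-(I[:, None] - I[None, :]) ** 2 / sigma ** 2)  # pairwise similarity
d = W.sum(axis=1)                                         # degrees
d_isqrt = 1.0 / np.sqrt(d)
# (D - W) u = lam D u  <=>  L_sym z = lam z  with  L_sym = D^{-1/2} (D - W) D^{-1/2}
L_sym = np.eye(len(I)) - d_isqrt[:, None] * W * d_isqrt[None, :]
lam, Z = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
u2 = d_isqrt * Z[:, 1]                  # second generalized eigenvector
labels = u2 > 0.0                       # zero-threshold binary partition
```

For this piecewise constant toy image the two regions are recovered exactly; the smallest eigenvalue is (numerically) zero with the constant generalized eigenvector.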
While discrete spectral clustering methods give impressive partitioning results in general,
they have several drawbacks. Those methods are typically defined and operated
on a graph or a data set. Their implementation cost depends on the size of the graph or data set.
For a large data set, they can be very expensive to implement.
Moreover, since they are discrete, sometimes their physical and/or geometrical meanings
are not so clear. As we shall see in Section~\ref{SEC:relation-1}, the normalized cut
method of Shi and Malik \cite{shi_normalized_2000} is linked to an anisotropic diffusion
differential operator which, in certain situations, can reduce to isotropic diffusion.
The objective of this paper is to investigate the use of the eigenvalue problem
(\ref{eq:HW-eigen}) of an anisotropic diffusion operator for image segmentation.
This anisotropic model can be viewed as a continuous, improved anisotropic generalization
of discrete spectral clustering models such as (\ref{eq:Malik-Shi-eigen}).
The model is also closely related to the Perona-Malik anisotropic filter.
The advantages of using a continuous model for image segmentation
include (i) It has a clear physical interpretation (heat diffusion or
Fick's laws of diffusion in our case);
(ii) Many well developed theories of partial differential equations can be used;
(iii) Standard discretization methods such as finite differences, finite elements,
finite volumes, and spectral methods can be employed;
and (iv) The model does not have to be discretized on a mesh associated with the given
data set and indeed, mesh adaptation can be used to improve accuracy and efficiency.
As mentioned earlier, we shall define the diffusion matrix $\mathbb{D}$ using
the input image and explore properties of the eigenvalue problem.
One interesting property is that for many input images, the first few eigenfunctions of the model
are close to being piecewise constant, which are very useful for image segmentation.
However, this also means that these eigenfunctions change abruptly between
objects and their efficient numerical approximation requires mesh adaptation.
In this work, we shall use an anisotropic mesh adaptation strategy developed
by the authors \cite{Huang-Wang-13} for differential eigenvalue problems.
Another property of (\ref{eq:HW-eigen}) is that
eigenfunctions associated with small eigenvalues possess coarse, global
features of the input image whereas eigenfunctions associated with larger eigenvalues
carry more detailed, localized features.
The decomposition of features agrees with the view of Shi and Malik \cite{shi_normalized_2000}
on the hierarchical structure of image segmentation but in a slightly different sense
since all eigenfunctions come from low level brightness knowledge.
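The near-piecewise-constant behavior of the leading eigenfunctions can already be observed in one dimension. The following finite-difference sketch is our own illustration with assumed values: homogeneous Neumann conditions and a scalar conductivity equal to $10^{-4}$ on the ``edge'' interval $(0.45,0.55)$ and to $1$ elsewhere. The eigenfunction of the second smallest eigenvalue is then nearly constant on each side of the edge and changes sign across it:

```python
import numpy as np

n = 100                                    # cells on [0,1]
h = 1.0 / n
xm = (np.arange(n) + 0.5) * h              # cell midpoints
cond = np.where((xm > 0.45) & (xm < 0.55), 1e-4, 1.0)  # conductivity D(x)

# assemble -(D u')' with homogeneous Neumann conditions on the n+1 nodes
A = np.zeros((n + 1, n + 1))
for i in range(n):
    c = cond[i] / h ** 2
    A[i, i] += c
    A[i + 1, i + 1] += c
    A[i, i + 1] -= c
    A[i + 1, i] -= c

lam, U = np.linalg.eigh(A)
u1 = U[:, 1]                               # eigenfunction of 2nd smallest eigenvalue
xs = np.arange(n + 1) * h
left, right = u1[xs < 0.4], u1[xs > 0.6]
jump = abs(left.mean() - right.mean())
```

Away from the weak-conductivity interval the eigenfunction is flat (its within-region standard deviation is a small fraction of the jump across the edge), which is exactly the property exploited for segmentation.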
The paper is organized as follows. In Section~\ref{SEC:eigen}, we give a detailed description
of the eigenvalue problem based on an anisotropic diffusion operator and
discuss its relations to some commonly used discrete spectral clustering models
and diffusion filters and some other models in image segmentation.
Section~\ref{SEC:implement} is devoted to
the description of the finite element implementation of the model and
an anisotropic mesh adaptation strategy.
In Section~\ref{SEC:numerics}, we present a number of applications in image segmentation
and edge detection and demonstrate several properties of the model.
Some explanations of the piecewise-constant property of eigenfunctions are given in
Section~\ref{SEC:piecewise}.
Concluding remarks are given in Section~\ref{SEC:conclusion}.
\section{Description of the eigenvalue problem}
\label{SEC:eigen}
\subsection{Eigenvalue problem of an anisotropic diffusion operator}
We shall use the eigenvalue problem (\ref{eq:HW-eigen}) subject to a Dirichlet or Neumann boundary condition
for image segmentation.
We are inspired by the physics of anisotropic heat transport
process (e.g., see \cite{Gunter-Yu-Kruger-Lackner-05,Sharma-Hammett-07}),
treating the dynamics of image diffusion as the transport of energy
(pixel values) and viewing the eigenvalue problem as the steady state of the dynamic process.
Denote the principal diffusion direction by
$v$ (a unit direction field) and its perpendicular unit direction by $v^{\perp}$.
Let the conductivity coefficients along these directions be
$\chi_{\parallel}$ and $\chi_{\perp}$. ($v$, $\chi_{\parallel}$, and $\chi_{\perp}$ will be defined below.)
Then the diffusion matrix can be written as
\begin{equation}
\mathbb{D}=\chi_{\parallel}vv^{T}+\chi_{\perp}v^{\perp}(v^{\perp})^{T} .
\label{D-1}
\end{equation}
When $\chi_{\parallel}$ and $\chi_{\perp}$ do not depend on $u$, the diffusion operator
in (\ref{eq:HW-eigen}) is simply a linear symmetric second order elliptic operator.
The anisotropy of the diffusion tensor $\mathbb{D}$ depends on the choice of the conductivity coefficients.
For example, if $\chi_{\parallel}\gg\chi_{\perp}$, the diffusion
is preferred along the direction of $v$. Moreover, if $\chi_{\parallel}=\chi_{\perp}$,
the diffusion is isotropic, having no preferred diffusion direction.
To define $\mathbb{D}$, we assume that an input image is given. Denote its gray level by $u_0$.
In image segmentation, pixels with similar values of gray level will be grouped and
the interfaces between those groups provide object boundaries. Since those interfaces are orthogonal
to $\nabla u_0$, it is natural to choose the principal diffusion direction as $v=\nabla u_{0}/|\nabla u_{0}|$.
With this choice, we can rewrite (\ref{D-1}) into
\begin{equation}
\mathbb{D} = \frac{\chi_{\parallel}}{|\nabla u_{0}|^{2}}
\begin{bmatrix}|\partial_x u_{0}|^{2}+\mu |\partial_y u_{0}|^{2} & (1-\mu)\left|\partial_x u_{0}\partial_y u_{0}\right|\\
(1-\mu)\left|\partial_x u_{0} \partial_y u_{0}\right| & |\partial_y u_{0}|^{2}+\mu |\partial_x u_{0}|^{2} \end{bmatrix}
\label{D-2}
\end{equation}
where $\mu = \chi_{\perp}/\chi_{\parallel}$.
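As a quick numerical sanity check (our own, not part of the paper's development; the gradient and conductivity values below are arbitrary test data, chosen with both gradient components positive so that the absolute values in (\ref{D-2}) are immaterial), the rotated form (\ref{D-1}) and the componentwise form (\ref{D-2}) can be verified to coincide:

```python
import numpy as np

# Arbitrary test data; chi_par and mu are illustrative values.
ux, uy = 0.6, 0.8          # components of grad u_0, |grad u_0| = 1
chi_par, mu = 0.5, 2.0     # chi_perp = mu * chi_par

g = np.array([ux, uy])
v = g / np.linalg.norm(g)          # principal diffusion direction
v_perp = np.array([-v[1], v[0]])   # perpendicular unit direction
chi_perp = mu * chi_par

# Rotated form (D-1)
D1 = chi_par * np.outer(v, v) + chi_perp * np.outer(v_perp, v_perp)

# Componentwise form (D-2)
s = ux**2 + uy**2
D2 = (chi_par / s) * np.array([[ux**2 + mu * uy**2, (1 - mu) * ux * uy],
                               [(1 - mu) * ux * uy, uy**2 + mu * ux**2]])

print(np.allclose(D1, D2))  # True
```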
We consider two choices of $\chi_{\parallel}$ and $\mu$. The first one is
\begin{equation}
\chi_{\parallel}=g(|\nabla u_{0}|), \quad \mu = 1,
\label{D-3}
\end{equation}
where $g(x)$ is a conductance function that governs the behavior of diffusion.
This corresponds to linear isotropic diffusion. As in \cite{Perona-Malik-90},
we require $g$ to satisfy $g(0)=1$, $g(x)\ge0$, and $g(x)\to 0$ as $x \to \infty$.
For this choice, both $\chi_{\parallel}$ and $\chi_{\perp}$ become very small
across the interfaces of the pixel groups and therefore, almost no diffusion is allowed
along the normal and tangential directions of the interfaces.
The second choice is
\begin{equation}
\chi_{\parallel}=g(|\nabla u_{0}|), \quad \mu=1+|\nabla u_{0}|^{2}.
\label{D-4}
\end{equation}
This choice results in an anisotropic diffusion process. Like the first case, almost no diffusion is allowed
across the interfaces of the pixel groups but, depending on the choice of $g$, some degree of diffusion
is allowed in the tangential direction of the interfaces.
We shall show later that with a properly chosen $g$ the eigenfunctions of (\ref{eq:HW-eigen})
capture certain ``grouping'' features of the input image $u_{0}$
very well. This phenomenon has already been observed and explored
in many applications such as shape analysis \cite{Reuter-09,Reuter-06},
image segmentation and data clustering \cite{Grady-01, shi_normalized_2000, Shi-Malik-2001, Wu-Leahy-1993},
and high dimensional data analysis and machine learning
\cite{Belkin_towards_2005,Nadler_diffusion_2005,Nadler_diffusion_2006,Luxburg-2007}.
In these applications, all eigenvalue problems are formulated on a
discrete graph using the graph spectral theory, which is different from what is
considered here, i.e., eigenvalue problems of differential operators.
The application of the latter to image segmentation is much less known.
We shall discuss the connection of these discrete eigenvalue problems
with continuous ones in the next subsection.
It is noted that the gray level function $u_0$ is defined only at pixels.
Even if we view $u_{0}$ as the ``ground truth'' function (assuming
there is one function whose discrete sample is the input image),
it may not be smooth and the gradient cannot be defined
in the classical sense. Following \cite{Alvarez-Lions-92,Catte-Lions-92},
we may treat $u_{0}$ as a properly regularized approximation of the ``true image''
so that the solution to the eigenvalue problem (\ref{eq:HW-eigen}) exists.
In the following, we simply take $u_{0}$ as the linear interpolation
of the sampled pixel values (essentially an implicit regularization
from the numerical scheme). More sophisticated regularization
methods can also be employed.
We only deal with gray level images in this work. The approach
can be extended to color or texture images when a diffusion matrix
can be defined appropriately based on all channels. In our computation, we use
both Dirichlet and Neumann boundary conditions, with the latter
being more common in image processing.
\subsection{Relation to discrete spectral clustering models}
\label{SEC:relation-1}
The eigenvalue problem (\ref{eq:HW-eigen}) is closely related to a family of discrete spectral
clustering models, with the earliest one being the normalized
cut method proposed by Shi and Malik \cite{shi_normalized_2000}.
To describe it, we define the degree of dissimilarity (called $cut$) between
any two disjoint sets $A,B$ of a weighted undirected graph $G=(V,E)$ (where $V$ and $E$ denote the sets of the nodes
and edges of the graph) as the total weight of
the edges connecting nodes in the two sets, i.e.,
\[
cut(A,B)=\sum_{p\in A,\; q\in B}w(p,q).
\]
Wu and Leahy \cite{Wu-Leahy-1993} proposed to find $k$-subgraphs by
minimizing the maximum cut across the subgroups and use them for
a segmentation of an image. However, this approach
usually favors small sets of isolated nodes in the graph. To address
this problem, Shi and Malik \cite{shi_normalized_2000} used
the normalized cut defined as
\[
Ncut(A,B)=\frac{cut(A,B)}{assoc(A,A\cup B)}+\frac{cut(A,B)}{assoc(B,A\cup B)} ,
\]
where $assoc(A,A\cup B)=\sum_{p\in A,\; q\in A\cup B}w(p,q)$. They sought the minimum
of the functional $Ncut(A,B)$ recursively to obtain a $k$-partition
of the image. The edge weight $w(p,q)$ is chosen as
\[
w(p,q)=\begin{cases}
e^{-|u_{q}-u_{p}|^2/\sigma^2}, & q\in\mathcal{N}_{p},\\
0, & {\rm otherwise,}
\end{cases}
\]
where $\mathcal{N}_{p}$ is a neighborhood of pixel $p$ and $\sigma$ is a positive parameter.
Shi and Malik showed that the above optimization problem is NP-hard
but a binary solution to the normalized cut problem can be
mapped to a binary solution to the algebraic eigenvalue problem (\ref{eq:Malik-Shi-eigen})
with $D$ being a diagonal matrix with diagonal entries $d_{p}=\sum_{q}w(p,q)$
and $W$ being the weight matrix $(w(p,q))_{p,q}^{N\times N}$. Eigenvectors of
this algebraic eigenvalue problem are generally not binary. They are used to approximate
binary solutions of the normalized cut problem through certain partitioning.
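The discrete eigenproblem (\ref{eq:Malik-Shi-eigen}) is easy to assemble on a toy image. The sketch below (our own illustration; the image, $\sigma$, and function names are not from the paper) builds $W$ with 4-neighborhood Gaussian affinities and solves $(D-W)u=\lambda D u$; on a two-region image the second generalized eigenvector is nearly piecewise constant with opposite signs on the two regions:

```python
import numpy as np
from scipy.linalg import eigh

def ncut_matrices(img, sigma=0.3):
    """Dense weight matrix W (4-neighborhood Gaussian affinities) and
    degree matrix D for the eigenproblem (D - W) u = lambda D u."""
    m, n = img.shape
    N = m * n
    idx = lambda i, j: i * n + j
    W = np.zeros((N, N))
    for i in range(m):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):          # each edge once
                ii, jj = i + di, j + dj
                if ii < m and jj < n:
                    wgt = np.exp(-(img[i, j] - img[ii, jj])**2 / sigma**2)
                    W[idx(i, j), idx(ii, jj)] = wgt
                    W[idx(ii, jj), idx(i, j)] = wgt
    return W, np.diag(W.sum(axis=1))

# Toy image: left half dark, right half bright.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
W, D = ncut_matrices(img)
vals, vecs = eigh(D - W, D)            # generalized symmetric eigenproblem
u = vecs[:, 1].reshape(8, 8)           # second ("Fiedler-type") eigenvector
# u is nearly piecewise constant, with opposite signs on the two halves.
```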
To see the connection between the algebraic eigenvalue problem (\ref{eq:Malik-Shi-eigen})
(and therefore, the normalized cut method) with the continuous eigenvalue problem (\ref{eq:HW-eigen}),
we consider an eigenvalue problem in the form of (\ref{eq:HW-eigen}) with the diffusion matrix defined as
\begin{equation}
\mathbb{D} = \begin{bmatrix}e^{-|\partial_{x}u_{0}|^2/\sigma^2} & 0\\
0 & e^{-|\partial_{y}u_{0}|^2/\sigma^2} \end{bmatrix}
\label{D-5}
\end{equation}
A standard central finite difference discretization of this problem on a rectangular mesh gives rise to
\begin{equation}
\frac{(c_{E_{i,j}}+c_{W_{i,j}}+c_{N_{i,j}}+c_{S_{i,j}})u_{i,j}-c_{E_{i,j}}u_{i+1,j}-c_{W_{i,j}}u_{i-1,j}-c_{N_{i,j}}u_{i,j+1}-c_{S_{i,j}}u_{i,j-1}}{h^{2}}=\lambda u_{i,j},
\label{eq:orthotropic-fe-scheme}
\end{equation}
where $h$ is the grid spacing and the coefficients $c_{E_{i,j}},\; c_{W_{i,j}},\; c_{N_{i,j}},\; c_{S_{i,j}}$ are given as
\begin{eqnarray*}
c_{E_{i,j}} = e^{-|u_{i+1,j}-u_{i,j}|^2/\sigma^2},\quad
c_{W_{i,j}} = e^{-|u_{i-1,j}-u_{i,j}|^2/\sigma^2},\\
c_{N_{i,j}} = e^{-|u_{i,j+1}-u_{i,j}|^2/\sigma^2},\quad
c_{S_{i,j}} = e^{-|u_{i,j-1}-u_{i,j}|^2/\sigma^2}.
\end{eqnarray*}
It is easy to see that (\ref{eq:orthotropic-fe-scheme}) is almost the same as (\ref{eq:Malik-Shi-eigen})
with the neighborhood $\mathcal{N}_{i,j}$ of a pixel location $(i,j)$ being chosen to include the four closest
pixel locations $\{(i+1,j),(i-1,j),(i,j+1),(i,j-1)\}$. The difference is that (\ref{eq:Malik-Shi-eigen}) has
a weight function on its right-hand side. Moreover, it can be shown that (\ref{eq:orthotropic-fe-scheme})
gives {\it exactly} the algebraic eigenvalue problem for the average cut problem
\[
{\rm min}\frac{cut(A,B)}{|A|}+\frac{cut(A,B)}{|B|} ,
\]
where $|A|$ and $|B|$ denote the total numbers of nodes in $A$ and $B$, respectively.
Notice that this problem is slightly different from the normalized cut problem; its relaxed solution
is given by the Fiedler vector. Furthermore, if we consider the following generalized eigenvalue problem
(by multiplying the right-hand side of (\ref{eq:HW-eigen}) with a mass-density function),
\begin{equation}
-\nabla \cdot \left(\begin{bmatrix}e^{-|\partial_{x}u_{0}|^2/\sigma^2} & 0\\
0 & e^{-|\partial_{y}u_{0}|^2/\sigma^2} \end{bmatrix}
\nabla u\right)= (e^{-|\partial_x u_{0}|^2/\sigma^2} + e^{-|\partial_y u_{0}|^2/\sigma^2}) \lambda u,
\label{eq:pm-aniso-1}
\end{equation}
we can obtain (\ref{eq:Malik-Shi-eigen}) exactly with a proper central finite difference discretization.
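The stencil-to-Laplacian correspondence above can be confirmed numerically. In the sketch below (our own; neighbor terms falling outside the grid are simply omitted, and the image values are random test data), assembling (\ref{eq:orthotropic-fe-scheme}) row by row reproduces the average-cut matrix $D-W$ up to the factor $1/h^{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 6
u0 = rng.random((m, n))               # arbitrary "image" samples
h, sigma = 1.0 / n, 1.0
w = lambda a, b: np.exp(-(a - b)**2 / sigma**2)

N = m * n
idx = lambda i, j: i * n + j
nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

# Graph Laplacian D - W with 4-neighborhood Gaussian weights.
W = np.zeros((N, N))
for i in range(m):
    for j in range(n):
        for di, dj in nbrs:
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < n:
                W[idx(i, j), idx(ii, jj)] = w(u0[i, j], u0[ii, jj])
L = np.diag(W.sum(axis=1)) - W

# Central finite-difference stencil (eq:orthotropic-fe-scheme):
# diagonal gets (c_E + c_W + c_N + c_S)/h^2, neighbors get -c/h^2.
S = np.zeros((N, N))
for i in range(m):
    for j in range(n):
        for di, dj in nbrs:
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < n:
                c = w(u0[i, j], u0[ii, jj])
                S[idx(i, j), idx(i, j)] += c / h**2
                S[idx(i, j), idx(ii, jj)] -= c / h**2

print(np.allclose(S, L / h**2))  # True: the stencil is the average-cut matrix
```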
The above analysis shows that either the average cut or normalized cut model can be approximated
by a finite difference discretization of the continuous eigenvalue problem
(\ref{eq:HW-eigen}) with the diffusion matrix (\ref{D-5})
which treats diffusion differently in the $x$ and $y$ directions.
While (\ref{D-5}) is anisotropic in general, it results in isotropic diffusion
near oblique interfaces where $\partial_x u_0 \approx \partial_y u_0$ or
$\partial_x u_0 \approx - \partial_y u_0$. This can be avoided
with the diffusion matrix (\ref{D-2}) which defines diffusion differently along
the normal and tangential directions of group interfaces.
In this sense, our method consisting of (\ref{eq:HW-eigen}) with (\ref{D-2})
can be regarded as an improved version of (\ref{eq:HW-eigen}) with (\ref{D-5}),
and thus, an improved continuous generalization of the normalized cut or the average cut method.
It should be pointed out that there is a fundamental difference between discrete spectral clustering
methods and those based on continuous eigenvalue problems.
The former are defined and operated directly on a graph or data set
and their cost depends very much on the size of the graph or data.
On the other hand, methods based on continuous eigenvalue problems
treat an image as a sampled function and are defined by
a discretization of some differential operators. They have the advantage
that many standard discretization methods such as finite difference, finite
element, finite volume, and spectral methods can be used.
Another advantage is that they do not have to be operated directly
on the graph or the data set. As shown in \cite{Huang-Wang-13},
continuous eigenvalue problems can be solved efficiently on
adaptive, and especially anisotropic adaptive, meshes (also see Section~\ref{SEC:numerics}).
It is worth pointing out that the graph Laplacian
can be connected to a continuous diffusion operator by defining the latter on a
manifold and proving it to be the limit of the discrete Laplacian.
The interested reader is referred to the work of
\cite{Belkin_towards_2005,Nadler_diffusion_2005,Nadler_diffusion_2006,Singer-06,Luxburg-2007}. | 3,788 | 17,873 | en |
train | 0.4999.2 | \subsection{Relation to diffusion models}
The eigenvalue problem (\ref{eq:HW-eigen}) is related to several diffusion models used in image processing.
They can be cast in the form
\begin{equation}
\frac{\partial u}{\partial t}=\nabla \cdot \left(\mathbb{D}\nabla u\right)
\label{eq:linear-diffusion}
\end{equation}
with various definitions of the diffusion matrix. For example, the Perona-Malik nonlinear filter
\cite{Perona-Malik-90} is in this form with $\mathbb{D} = g(|\nabla u|) I$, where $g$ is the same
function as in (\ref{D-3}) and $I$ is the identity matrix. The above equation with $\mathbb{D}$ defined in (\ref{D-2})
with $\mu=1$ and $\chi_{\parallel}=g(|\nabla u_{0}|)$ gives rise to a linear diffusion process
that has similar effects as the affine Gaussian smoothing process \cite{Nitzberg-Shiota-92}.
The diffusion matrix we use in this paper in most cases is in the form (\ref{D-2}) with $\mu$ and $\chi_{\parallel}$
defined in (\ref{D-4}). A similar but not equivalent process
was studied as a structure adaptive filter by Yang et al. \cite{Yang-Burger-96}.
The diffusion matrix (\ref{D-2}) can be made $u$-dependent by choosing $\mu$ and $\chi_{\parallel}$
as functions of $\nabla u$.
Weickert \cite{Weickert-1996} considered a nonlinear anisotropic diffusion model with a diffusion matrix
in a similar form as (\ref{D-2}) but with $\nabla u_0$ being replaced by the gradient of a smoothed
gray level function $u_\sigma$ and with $\chi_{\parallel} = g(|\nabla u_\sigma|)$
and $\mu = 1/g(|\nabla u_\sigma|)$.
Interestingly, Perona and Malik \cite{Perona-Malik-90} considered
\begin{eqnarray}
\frac{\partial u}{\partial t} = \nabla \cdot \left(\begin{bmatrix}g(|\partial_{x}u |) & 0\\
0 & g(|\partial_{y} u|) \end{bmatrix}\nabla u\right)
\label{eq:pm-linear-diffusion}
\end{eqnarray}
as an easy-to-compute variant to the Perona-Malik diffusion model (with $\mathbb{D} = g(|\nabla u|) I$).
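A minimal explicit-Euler sketch of this componentwise variant (our own; unit grid spacing, reflecting boundaries, conductance (\ref{D-6}) with $\alpha=1$, and an illustrative time step):

```python
import numpy as np

def g(x, alpha=1.0):
    # Conductance (D-6): g(0) = 1 and g -> 0 as |x| -> infinity.
    return 1.0 / (1.0 + x**2)**alpha

def pm_variant_step(u, dt=0.2):
    """One explicit Euler step of
    u_t = d_x( g(|u_x|) u_x ) + d_y( g(|u_y|) u_y )
    with reflecting boundaries on a unit-spacing grid."""
    # Differences to the four neighbors (boundary differences clamped to 0).
    dN = np.roll(u, 1, axis=0) - u;  dN[0, :] = 0.0
    dS = np.roll(u, -1, axis=0) - u; dS[-1, :] = 0.0
    dW = np.roll(u, 1, axis=1) - u;  dW[:, 0] = 0.0
    dE = np.roll(u, -1, axis=1) - u; dE[:, -1] = 0.0
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)

u = np.zeros((16, 16)); u[:, 8:] = 1.0   # step edge
v = pm_variant_step(u)
```

The conductance damps the flux across the step edge, so the edge largely survives the smoothing step, while the pairwise flux form conserves the total mass exactly.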
Zhang and Hancock in \cite{Zhang-Hancock-08} considered
\begin{eqnarray}
\frac{\partial u}{\partial t} = -\mathcal{L}(u_0) u,
\label{eq:Zhang-Hancock}
\end{eqnarray}
where $\mathcal{L}$ is the graph Laplacian defined on the input image $u_0$
and image pixels are treated as the nodes of a graph. The weight between
two nodes $i,j$ is defined as
\[
w_{i,j}=\begin{cases}
e^{-(u_{0}(i)-u_{0}(j))^{2}/\sigma^{2}}, & \quad \text{ for }\|i-j\|\le r\\
0, & \quad \text{otherwise}
\end{cases}
\]
where $r$ is a prescribed positive integer and $\sigma$ is a positive parameter.
As in Section~\ref{SEC:relation-1}, it can be shown that this model can be regarded
as a discrete form of a linear anisotropic diffusion model.
It has been reported in \cite{Buades-Chien-08, Nitzberg-Shiota-92, Yang-Burger-96,Zhang-Hancock-08}
that the image denoising effect with this type of linear diffusion model is
comparable to or in some cases better than nonlinear evolution models. | 948 | 17,873 | en |
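Because the Laplacian in (\ref{eq:Zhang-Hancock}) is built once from the input $u_0$ and then applied linearly, the model reduces to a fixed linear smoother. A 1-D sketch (our own; the signal, $\sigma$, step size, and step count are illustrative, with $r=1$):

```python
import numpy as np

sigma = 0.5
u0 = np.array([0.0, 0.1, 0.05, 0.9, 1.0, 0.95])   # two flat groups + an edge
n = len(u0)

# Graph Laplacian built once from the *input* signal u0 (path graph, r = 1).
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = np.exp(-(u0[i] - u0[i + 1])**2 / sigma**2)
L = np.diag(W.sum(axis=1)) - W

u, dt = u0.copy(), 0.1
for _ in range(50):
    u = u - dt * (L @ u)       # explicit Euler for u_t = -L u

# The edge weight between samples 2 and 3 is small, so the two groups
# flatten internally while the jump across the edge largely survives.
```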
train | 0.4999.3 | \section{Numerical Implementation}
\label{SEC:implement}
The eigenvalue problem (\ref{eq:HW-eigen}) is discretized using the standard
linear finite element method with a triangular mesh for $\Omega$. The finite element
method preserves the symmetry of the underlying continuous problem
and can readily be implemented with (anisotropic) mesh adaptation.
As will be seen in Section~\ref{SEC:numerics}, the eigenfunctions
of (\ref{eq:HW-eigen}) can have very strong anisotropic behavior,
and (anisotropic) mesh adaptation is essential to improving the efficiency
of their numerical approximation.
While both Dirichlet and Neumann boundary conditions are considered in our
computation, to be specific we consider only a Dirichlet boundary condition
in the following. The case with a Neumann boundary condition can be discussed similarly.
We assume that a triangular mesh $\mathcal{T}_{h}$ is given for $\Omega$.
Denote the number of the elements of $\mathcal{T}_h$ by $N$ and
the linear finite element space associated with $\mathcal{T}_h$ by $V^{h}\subset H_{0}^{1}\left(\Omega\right)$.
Then the finite element approximation to the eigenvalue problem (\ref{eq:HW-eigen}) subject
to a Dirichlet boundary condition is to find $0 \not\equiv u^h \in V^{h}$ and $\lambda^h \in \mathbb{R}$ such that
\begin{equation}
\int_{\Omega}(\nabla v^h)^t \mathbb{D}\nabla u^h =\lambda^h \int_{\Omega} u^h v^h,\qquad\forall v^h\in V^{h}.
\label{eq:fem-1}
\end{equation}
This equation can be written into a matrix form as
\[
A {\bf u} = \lambda^h M {\bf u},
\]
where $A$ and $M$ are the stiffness and mass matrices, respectively, and ${\bf u}$ is the vector
formed by the nodal values of the eigenfunction at the interior mesh nodes.
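For concreteness, the assembly of $A$ and $M$ with linear elements can be sketched on a structured triangulation (our own minimal version, not the code used for the experiments; for brevity it uses the identity diffusion matrix, for which the smallest Dirichlet eigenvalue of the negative Laplacian on the unit square is $2\pi^{2}\approx 19.74$, providing a built-in correctness check):

```python
import numpy as np
from scipy.linalg import eigh

def p1_eigs(n=16, D=np.eye(2)):
    """P1 finite elements for -div(D grad u) = lambda u on the unit square,
    homogeneous Dirichlet BC, structured triangulation with 2*n*n triangles."""
    xs = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([(x, y) for y in xs for x in xs])
    vid = lambda i, j: j * (n + 1) + i
    tris = []
    for j in range(n):
        for i in range(n):
            tris.append((vid(i, j), vid(i + 1, j), vid(i, j + 1)))
            tris.append((vid(i + 1, j), vid(i + 1, j + 1), vid(i, j + 1)))
    N = len(pts)
    A = np.zeros((N, N)); M = np.zeros((N, N))
    for t in tris:
        p = pts[list(t)]
        B = np.column_stack((p[1] - p[0], p[2] - p[0]))   # 2x2 edge matrix
        area = abs(np.linalg.det(B)) / 2.0
        # Gradients of the three barycentric basis functions (2x3).
        G = np.linalg.inv(B).T @ np.array([[-1.0, 1.0, 0.0],
                                           [-1.0, 0.0, 1.0]])
        Ak = area * G.T @ D @ G                           # local stiffness
        Mk = area / 12.0 * (np.ones((3, 3)) + np.eye(3))  # local mass
        for a in range(3):
            for b in range(3):
                A[t[a], t[b]] += Ak[a, b]
                M[t[a], t[b]] += Mk[a, b]
    # Keep interior nodes only (homogeneous Dirichlet boundary condition).
    interior = [vid(i, j) for j in range(1, n) for i in range(1, n)]
    A = A[np.ix_(interior, interior)]; M = M[np.ix_(interior, interior)]
    return eigh(A, M, eigvals_only=True)

lams = p1_eigs()
print(lams[0])   # close to 2*pi^2 ~ 19.74
```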
An error bound for the linear finite element approximation of the eigenvalues is given by
a classical result of Raviart and Thomas \cite{Raviart-Thomas-83}. It states that
for any given integer $k$ ($1\le k\le N$),
\[
0\le\frac{\lambda_{j}^{h}-\lambda_{j}}{\lambda_{j}^{h}}\le C(k)\sup_{v\in E_{k},\|v\|=1}\| v-\Pi_{h}v\|_E^{2},
\qquad1\le j\le k
\]
where $\lambda_j$ and $\lambda_j^h$ are the eigenvalues (ordered in an increasing order) of the continuous and
discrete problems, respectively, $E_{k}$ is the linear space spanned by the first $k$ eigenfunctions of the
continuous problem, $\Pi_{h}$ is the projection operator from $L^{2}(\Omega)$ to the finite element space $V^{h}$,
and $\| \cdot \|_E$ is the energy norm, namely,
\[
\| v-\Pi_{h}v\|_E^{2} = \int_\Omega \nabla (v-\Pi_{h}v)^t \mathbb{D} \nabla (v-\Pi_{h}v).
\]
It is easy to show (e.g., see \cite{Huang-Wang-13}) that the projection error can be bounded
by the error of the interpolation associated with the underlying finite element space,
with the latter depending directly on the mesh. When the eigenfunctions change abruptly
over the domain and exhibit strong anisotropic behavior, anisotropic mesh adaptation
is necessary to reduce the error or improve the computational efficiency (e.g. see \cite{Boff-10,Huang-Wang-13}).
An anisotropic mesh adaptation method was proposed for eigenvalue problems
by the authors \cite{Huang-Wang-13}, following the so-called $\mathbb{M}$-uniform mesh approach
developed in \cite{Huang-05,Huang-06,Huang-Russell-11} for the numerical solution
of PDEs. Anisotropic mesh adaptation has one advantage over isotropic adaptation
in that, in addition to the size, the orientation of triangles is also adapted to be aligned with
the geometry of the solution locally. In the context of image processing, this mesh alignment
will help better capture the geometry of edges than with isotropic meshes.
The $\mathbb{M}$-uniform mesh approach of anisotropic mesh adaptation views
and generates anisotropic adaptive meshes as uniform ones in the metric specified
by a metric tensor $\mathbb{M} = \mathbb{M}(x,y)$.
In a simplified scenario, we may consider a uniform mesh defined on the surface of
the gray level $u$ and obtain an anisotropic adaptive mesh by projecting the uniform mesh
into the physical domain. In the actual computation, instead of using the surface of $u$ we employ
a manifold associated with a metric tensor defined based on the Hessian of the eigenfunctions.
An optimal choice of the metric tensor (corresponding to the energy norm) is given in \cite{Huang-Wang-13} as
\[
\mathbb{M}_{K}=\det\left(H_{K}\right)^{-1/4}\max_{(x,y)\in K}\|H_{K}\mathbb{D}(x,y)\|^{1/2}\left(\frac{1}
{|K|}\|H_{K}^{-1}H\|_{L^{2}(K)}^{2}\right)^{1/2}H_{K},\qquad\forall K\in\mathcal{T}_{h}
\]
where $K$ denotes a triangle element of the mesh, $H$ is the intersection of the recovered Hessian
matrices of the computed first $k$ eigenfunctions, and $H_{K}$ is the average of $H$ over $K$.
A least squares fitting method is used for Hessian recovery. That is,
a quadratic polynomial is constructed locally for each node via least squares fitting to neighboring
nodal function values and then an approximate Hessian at the node is obtained by differentiating the
polynomial. The recovered Hessian is regularized with a prescribed small positive constant which is taken
to be $0.01$ in our computation.
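The least squares Hessian recovery can be sketched as follows (our own minimal version, omitting the regularization step): fit a local quadratic to neighboring nodal values and differentiate it twice. On samples of an exact quadratic the recovery is exact, which gives a simple check:

```python
import numpy as np

def recover_hessian(center, neighbors, values):
    """Recover an approximate Hessian at `center` by least squares fitting
    p(x,y) = c0 + c1 x + c2 y + c3 x^2 + c4 x y + c5 y^2
    to nodal values at neighboring points, then differentiating p twice."""
    d = neighbors - center                       # local coordinates
    A = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],
                         d[:, 0]**2, d[:, 0] * d[:, 1], d[:, 1]**2])
    c, *_ = np.linalg.lstsq(A, values, rcond=None)
    return np.array([[2 * c[3], c[4]], [c[4], 2 * c[5]]])

# Sanity check on the exact quadratic u = x^2 + 3 x y - 2 y^2,
# whose Hessian is [[2, 3], [3, -4]] everywhere.
rng = np.random.default_rng(1)
nb = rng.random((12, 2))
u = lambda p: p[:, 0]**2 + 3 * p[:, 0] * p[:, 1] - 2 * p[:, 1]**2
H = recover_hessian(np.array([0.4, 0.6]), nb, u(nb))
```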
An outline of the computational procedure of the anisotropic adaptive mesh finite
element approximation for the eigenvalue problem (\ref{eq:HW-eigen}) is given
in Algorithm~\ref{alg:aniso}. In Step 5, BAMG (Bidimensional Anisotropic Mesh Generator)
developed by Hecht \cite{Hecht-Bamg-98} is used to generate the new mesh
based on the computed metric tensor defined on the current mesh.
The resultant algebraic eigenvalue problems are solved using the Matlab eigenvalue solver
{\tt eigs} for large sparse matrices. Note that the algorithm is iterative. Ten iterations are used in our
computation, which was found to be enough to produce an adaptive mesh
with good quality (see \cite{Huang-05} for mesh quality measures).
\begin{algorithm}[h]
\begin{raggedright}
1. Initialize a background mesh.
\par\end{raggedright}
\begin{raggedright}
2. Compute the stiffness and mass matrices on the mesh.
\par\end{raggedright}
\begin{raggedright}
3. Solve the algebraic eigenvalue problem for the first $k$ eigenpairs.
\par\end{raggedright}
\begin{raggedright}
4. Use the eigenvectors obtained in Step 3 to compute the metric tensor.
\par\end{raggedright}
\begin{raggedright}
5. Use the metric tensor to generate a new mesh (anisotropic, adaptive) and go to Step 2.
\par\end{raggedright}
\caption{Anisotropic adaptive mesh finite element approximation for eigenvalue problems.}
\label{alg:aniso}
\end{algorithm} | 1,914 | 17,873 | en |
train | 0.4999.4 | \section{Numerical results}
\label{SEC:numerics}
In this section, all input images are of size $256\times256$ and
normalized so that the gray values are between 0 and 1. The domain
of input images is set to be $[0,1]\times[0,1]$. All eigenfunctions
are computed with a homogeneous Neumann boundary condition unless otherwise specified.
When we count the indices of eigenfunctions, we ignore the first trivial
constant eigenfunction and start the indexing from the second one.
\subsection{Properties of eigenfunctions}
\subsubsection{Almost piecewise constant eigenfunctions}
A remarkable feature of the eigenvalue problem (\ref{eq:HW-eigen}) with the diffusion matrix (\ref{D-2})
is that for certain input images, the first few eigenfunctions are close
to being piecewise constant. In Fig.~\ref{fig:synth1}, we display
a synthetic image containing 4 objects and the first 7 eigenfunctions.
The gaps between objects are 4 pixels wide. To make the problem more interesting,
the gray level is made to vary within each object (so the gray value of the
input image is not piecewise-constant).
We use the anisotropic diffusion tensor $\mathbb{D}$ defined in (\ref{D-2}) and (\ref{D-4}) with
\begin{equation}
g(x)=\frac{1}{(1+x^{2})^{\alpha}},
\label{D-6}
\end{equation}
where $\alpha$ is a positive parameter. Through numerical experiments (cf. Section~\ref{SEC:4.1.6}), we observe that
the larger $\alpha$ is, the closer to being piecewise constant the eigenfunctions are.
At the same time, the eigenvalue problem (\ref{eq:HW-eigen}) is also harder to solve numerically since
the eigenfunctions change more abruptly between the objects.
We use $\alpha = 1.5$ in the computation for Fig.~\ref{fig:synth1}.
The computed eigenfunctions are normalized such that
they have the range of $[0,255]$ and can be rendered as gray level images.
The results are obtained with an adaptive mesh of 65902 vertices and re-interpolated to
a $256\times256$ mesh for rendering.
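The rendering normalization amounts to a simple affine rescaling (sketch; assumes a nonconstant eigenfunction so the range is nondegenerate):

```python
import numpy as np

def to_gray(u):
    """Affinely rescale an eigenfunction to [0, 255] for rendering.
    Assumes u is nonconstant (u.max() > u.min())."""
    u = np.asarray(u, dtype=float)
    lo, hi = u.min(), u.max()
    return (u - lo) / (hi - lo) * 255.0

phi = np.array([[-0.3, 0.1], [0.5, -0.3]])
img = to_gray(phi)
print(img.min(), img.max())   # 0.0 255.0
```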
The histograms of the first 3 eigenfunctions together with the plot
of the first 10 eigenvalues are shown in Fig.~\ref{fig:synth7hist}.
It is clear that the first 3 eigenfunctions are almost piecewise constant.
In fact, the fourth, fifth, and sixth are also almost piecewise constant whereas the seventh
is clearly not. (Their histograms are not shown here to save space but this can be seen
in Fig.~\ref{fig:synth1}.)
Fig.~\ref{fig:nzsynth1x} shows the results obtained with an image containing a mild level of noise.
The computation is done under the same conditions as for Fig.~\ref{fig:synth1} except that
the input image is different. We can see that the first few eigenfunctions are also close to being
piecewise constant, and thus the phenomenon is relatively robust to noise.
\begin{figure}
\caption{The input synthetic image and the first 7 eigenfunctions (excluding
the trivial constant eigenfunction), from left to right, top to bottom. The results are obtained with
the diffusion matrix defined in (\ref{D-2}) with (\ref{D-4}) and (\ref{D-6}), $\alpha=1.5$.}
\label{fig:synth1}
\end{figure}
\begin{figure}
\caption{The first 10 eigenvalues and the histograms of the first 3 eigenfunctions in Fig.~\ref{fig:synth1}.}
\label{fig:synth7hist}
\end{figure}
\begin{figure}
\caption{A noisy synthetic image and the first 7 eigenfunctions, left to right,
top to bottom.}
\label{fig:nzsynth1x}
\end{figure}
\subsubsection{Eigenvalue problem (\ref{eq:HW-eigen}) versus Laplace-Beltrami operator}
Eigenfunctions of the Laplace-Beltrami operator (on surfaces) have been studied for image segmentation
\cite{Shah-00,Sochen-Kimmel-Malladi-98} and shape analysis \cite{Reuter-09,Reuter-06}.
Thus, it is natural to compare the performance of
the Laplace-Beltrami operator and that of the eigenvalue problem (\ref{eq:HW-eigen}).
For this purpose, we choose a surface such that the Laplace-Beltrami operator has the same
diffusion matrix as that defined in (\ref{D-2}), (\ref{D-4}), and (\ref{D-6}) and takes the form as
\begin{equation}
- \nabla \cdot \left(\frac{1}{\sqrt{1+|\nabla u_{0}|^{2}}}\begin{bmatrix}1+|\partial_y u_{0}|^{2} & -|\partial_x u_{0}\partial_y u_{0}|\\
-|\partial_x u_{0}\partial_y u_{0}| & 1+|\partial_x u_{0}|^{2}
\end{bmatrix}\nabla u\right) = \lambda \sqrt{1+|\nabla u_0|^{2}}\; u .
\label{LB-1}
\end{equation}
The main difference between this eigenvalue problem and (\ref{eq:HW-eigen}) is that there is a weight function on the right-hand side of (\ref{LB-1}), and
in our model the parameter $\alpha$ in (\ref{D-6}) is typically greater than 1.
The eigenfunctions of the Laplace-Beltrami operator obtained with a clean version of the input image of Fig.~\ref{fig:nzsynth1x}
are shown in Fig.~\ref{fig:LB-eigenfunctions}. From these figures one can see that the eigenfunctions of
the Laplace-Beltrami operator are far less close to being piecewise constant, and thus, less suitable for
image segmentation.
\begin{figure}
\caption{The clean input image of Fig.~\ref{fig:nzsynth1x} and the corresponding eigenfunctions of the Laplace-Beltrami operator (\ref{LB-1}).}
\label{fig:LB-eigenfunctions}
\end{figure}
\subsubsection{Open or closed edges}
We continue to study the piecewise constant property of eigenfunctions of (\ref{eq:HW-eigen}).
Interestingly, this property seems related to whether the edges of the input image form a closed curve.
We examine the two input images in Fig.~\ref{fig:openarc}, one containing
a few open arcs and the other having a closed curve that makes a jump in the gray level.
The first eigenfunction for the open-arc image changes gradually, whereas
that for the second image is close to being piecewise constant.
\begin{figure}
\caption{From left to right, input image with open arcs, the corresponding first eigenfunction,
input image with connected arcs, the corresponding first eigenfunction.}
\label{fig:openarc}
\end{figure}
\subsubsection{Anisotropic mesh adaptation}
For the purpose of image segmentation, we would like the eigenfunctions to be as close to
being piecewise constant as possible. This would mean that they change abruptly in narrow regions
between objects. As a consequence, their numerical approximation can be difficult, and
(anisotropic) mesh adaptation is then necessary for the sake of accuracy and efficiency.
The reader is referred to \cite{Huang-Wang-13} for the detailed studies of convergence and advantages
of using anisotropic mesh adaptation in finite element approximation of anisotropic eigenvalue problems
with anisotropic diffusion operators. Here, we demonstrate the advantage of using an anisotropic adaptive
mesh over a uniform one for the eigenvalue problem (\ref{eq:HW-eigen}) with the diffusion matrix defined
in (\ref{D-2}), (\ref{D-4}), and (\ref{D-6}) and subject to the homogeneous Dirichlet boundary condition.
The input image is taken as the Stanford bunny; see Fig.~\ref{fig:bunny41}. The figure also shows
the eigenfunctions obtained on an adaptive mesh and uniform meshes of several sizes.
It can be seen that the eigenfunctions obtained with the adaptive mesh have very sharp
boundaries, which are comparable to those obtained with a uniform mesh of more than
ten times as many vertices.
\begin{figure}
\caption{Top row: the image of the Stanford bunny and the first 3 eigenfunctions
computed with an anisotropic adaptive mesh with 45383 vertices; Bottom row: the
first eigenfunction on a uniform mesh with 93732, 276044, 550394 vertices and on
an adaptive mesh with 45383 vertices, respectively. All eigenfunctions are computed
with the same diffusion matrix defined in (\ref{D-2}) with (\ref{D-4}) and (\ref{D-6}).}
\label{fig:bunny41}
\end{figure}
\subsubsection{Anisotropic and less anisotropic diffusion}
Next, we compare the performance of the diffusion matrix (\ref{D-2}) (with
(\ref{D-4}), (\ref{D-6}), and $\alpha = 1.5$) and that of a less anisotropic diffusion matrix
(cf. (\ref{eq:pm-linear-diffusion}), with (\ref{D-6}) and $\alpha = 1.5$)
\begin{equation}
\mathbb{D} = \begin{bmatrix}g(|\partial_{x}u_0 |) & 0\\ 0 & g(|\partial_{y} u_0|) \end{bmatrix} .
\label{D-7}
\end{equation}
The eigenfunctions of (\ref{eq:HW-eigen}) with those diffusion matrices
with the Stanford bunny as the input image are shown in
Fig.~\ref{fig:ani-vs-iso}. For (\ref{D-7}), we compute the
eigenfunction on both a uniform mesh of size $256\times256$ and
an adaptive mesh of 46974 vertices. The computation with (\ref{D-2})
is done with an adaptive mesh of 45562 vertices.
The most perceptible difference in the results is that the right ear of the bunny
(not as bright as other parts) almost disappears in the first eigenfunction with
the less anisotropic diffusion matrix. This can be recovered if the conductance
is changed from $\alpha = 1.5$ to $\alpha = 1.0$, but in this case,
the eigenfunction becomes farther from being piecewise-constant.
The image associated with the first eigenfunction for (\ref{D-2}) seems sharper
than that with (\ref{D-7}).
\begin{figure}
\caption{The first eigenfunction of (\ref{eq:HW-eigen}) with the diffusion matrices (\ref{D-2}) and (\ref{D-7}).}
\label{fig:ani-vs-iso}
\end{figure}
\subsubsection{Effects of the conductance $g$}
\label{SEC:4.1.6}
We now examine the effects of the conductance and consider four cases:
$g_1$ ((\ref{D-6}) with $\alpha = 1.0$), $g_2$ ((\ref{D-6}) with $\alpha = 1.5$),
$g_3$ ((\ref{D-6}) with $\alpha = 3.0$), and
\[
g_{4}(x)=\begin{cases}
(1-(x/\sigma)^{2})^{2}/2, & \text{ for }|x|\le\sigma\\
\epsilon, & \text{ for }|x|>\sigma
\end{cases}
\]
where $\sigma$ and $\epsilon$ are positive parameters.
The last function is called Tukey's biweight function and considered
in \cite{Black-Sapiro-01} as a more robust choice of the edge-stopping
function in the Perona-Malik diffusion. We show the results with
(\ref{D-2}) on the Stanford bunny in Fig.~\ref{fig:g-choice}.
We take $\sigma=9$ and $\epsilon=10^{-6}$ for Tukey's biweight function. Increasing
the power $\alpha$ in $g(x)$ defined in (\ref{D-6}) will make
eigenfunctions steeper in the regions between different objects
and thus, closer to being piecewise constant. Tukey's biweight function
gives a sharp result but the body and legs are indistinguishable.
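For reference, the four conductance choices can be written down directly (our own function names; parameter values as stated above):

```python
import numpy as np

def g_alpha(x, alpha):
    # (D-6): larger alpha suppresses diffusion across strong gradients harder.
    return 1.0 / (1.0 + np.asarray(x, dtype=float)**2)**alpha

def g_tukey(x, sigma=9.0, eps=1e-6):
    # Tukey's biweight edge-stopping function (sigma = 9, eps = 1e-6 as above).
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= sigma, (1.0 - (x / sigma)**2)**2 / 2.0, eps)

# g1, g2, g3 correspond to alpha = 1.0, 1.5, 3.0; g4 is Tukey's biweight.
```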
\begin{figure}
\caption{Top row: the graphs of $g_1, g_2, g_3, g_4$. Middle row: the first eigenfunctions on the bunny image for $g_1, g_2, g_3, g_4$, respectively. Bottom row: the histograms of the corresponding first eigenfunctions.}
\label{fig:g-choice}
\end{figure} | 3,052 | 17,873 | en |
train | 0.4999.5 | \subsection{Applications in Edge Detection and Image Segmentation}
Eigenfunctions can serve as a low level image feature extraction device
to facilitate image segmentation or object edge detection. Generally speaking,
eigenfunctions associated with small eigenvalues contain ``global'' segmentation features
of an image while eigenfunctions associated with larger eigenvalues carry more information
on the detail. Once the eigenfunctions are obtained, one can use
numerous well developed edge detection or data clustering techniques
to extract edge or segmentation information. We point out that spectral
clustering methods also follow this paradigm. In this section,
we focus on the feature extraction step and employ only simple, well-known
techniques such as thresholding by hand, $k$-means
clustering, or the Canny edge detector in the partitioning step. More
sophisticated schemes can easily be integrated to automatically detect
edges or obtain the segmentations.
We point out that boundary conditions have an interesting effect on
the eigenfunctions. A homogeneous Dirichlet boundary condition
forces the eigenfunctions to be zero on the boundary and may wipe out
some structures there (and therefore, emphasize objects inside the domain),
whereas a homogeneous Neumann boundary condition tends to keep those structures.
A Dirichlet condition essentially plays the role of defining ``seeds'' that indicate
background pixels on the image border. The idea of using user-defined seeds or
intervention cues has been widely used in graph based image segmentation methods
\cite{Grady-01}, \cite{Rother-Blake-04}, \cite{Shi-Yu-04}, \cite{Malik-Martin-04}.
The PDE eigenvalue problem (\ref{eq:HW-eigen}) can also be solved with more sophisticated
boundary conditions that are defined either on the image border or inside the image.
Since we are mostly interested in objects inside the domain, we consider here
a homogeneous Dirichlet boundary condition. The diffusion matrix
defined in (\ref{D-2}), (\ref{D-4}), and (\ref{D-6}) ($\alpha = 1.5$) is used.
In Fig.~\ref{fig:Lenna_ef1}, we show the first eigenfunctions obtained with Dirichlet and Neumann
boundary conditions with Lenna as the input image.
For the edge detection for Lenna, it is natural to extract the ``big picture'' from the first eigenfunction
and get the edge information from it. We show the edges obtained by thresholding a few level lines
in the top row of Fig.~\ref{fig:lenna-contour}. Since any level line
with value $s$ is the boundary of the level set $L_{s}=\{(x,y):I(x,y)\ge s\}$
of an image $I$, and $L_{s}$ is non-increasing with respect to $s$,
the level line is ``shrinking'' from the boundary of a wider shape
to empty as $s$ increases from 0 to 255. Some intermediate steps
give salient boundaries of the interior figure. However, to make the ``shrinking''
automatically stop at the correct edge, other clues potentially from
mid- or high-level knowledge in addition to the low-level brightness
information should be integrated in the edge detection step. We also use
the MATLAB function {\tt imcontour} to get major contours, and apply $k$-means
clustering to the eigenfunctions with $k=2,3,4,5$, shown in the second
row of Fig.~\ref{fig:lenna-contour}.
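The monotone shrinking of the level sets can be illustrated on a synthetic image (a toy sketch of ours, assuming NumPy; the disc image merely stands in for an eigenfunction):

```python
import numpy as np

# Toy stand-in for an eigenfunction: a bright disc (200) on a dark background (20)
yy, xx = np.mgrid[0:64, 0:64]
img = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2, 200, 20)

def level_set(I, s):
    # L_s = {(x, y) : I(x, y) >= s}; the level line with value s is its boundary
    return I >= s

# |L_s| is non-increasing in s, so the level line "shrinks" as s grows to 255
sizes = [int(level_set(img, s).sum()) for s in (0, 50, 150, 250)]
```

The list `sizes` decreases from the whole domain (at $s=0$) to the empty set (once $s$ exceeds the maximum gray level), mirroring the shrinking described above.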
\begin{figure}
\caption{From left to right, Lenna, first eigenfunctions obtained with Dirichlet and Neumann
boundary conditions, respectively.}
\label{fig:Lenna_ef1}
\end{figure}
\begin{figure}
\caption{Top row: contour drawing by MATLAB (with no level parameters specified),
level lines 50, 240, 249; bottom row: segmentation with $k$-means, $k=2,3,4,5$.}
\label{fig:lenna-contour}
\end{figure}
We next consider an image with more textures from \cite{MartinFTM01} (Fig.~\ref{fig:tiger-gallery}).
This is a more difficult image for
segmentation or edge detection due to many open boundary arcs
and ill-defined boundaries. We display the first eigenfunction
and the $k$-means clustering results in Fig.~\ref{fig:tiger-gallery}.
The $k$-means clustering does not capture the object as well as in
the previous example. Better separation of the object and the background
can be obtained if additional information is integrated into the clustering strategy.
For instance, the edges detected by the Canny detector (which uses
the gradient magnitude of the image) on the
first eigenfunction clearly give the location of the tiger. Thus,
the use of the gradient map of the first eigenfunction
in the clustering process yields more accurate object boundaries.
For comparison, we also show the edges detected from the input image
with the Canny detector.
\begin{figure}
\caption{Top row: the input image, the edges of the input image with the Canny detector,
level lines with values 50, 240, 254. Bottom row: the first eigenfunction,
the edges of the first eigenfunction with the Canny detector, $k$-means
clustering with $k=2,3,4$.}
\label{fig:tiger-gallery}
\end{figure}
Another way to extract ``simple'' features is to change the conductance
$g$ (e.g., by increasing $\alpha$ in (\ref{D-6})) to make
the eigenfunctions closer to being piecewise constant. This makes eigenfunctions more clustered
but wipes out some detail of the image too. To avoid this difficulty,
we can employ a number of eigenfunctions and use the projection of
the input image onto the space spanned by the eigenfunctions to construct
a composite image. A much better result obtained in this way with 64 eigenfunctions
is shown in Fig.~\ref{fig:tigerfin}.
\begin{figure}
\caption{From left to right, the first eigenfunction with $\alpha = 2$ in (\ref{D-6}) and the composite image constructed with 64 eigenfunctions.}
\label{fig:tigerfin}
\end{figure}
It should be pointed out that the first few eigenfunctions do not always carry the most useful information
of the input image. Indeed, Fig.~\ref{fig:sports-gallery} shows that
the first eigenfunction carries very little information. Since the
eigenfunctions form an orthogonal set in $L^{2}$, we can project
the input image onto the computed eigenfunctions.
The coefficients are shown in Fig.~\ref{fig:sports-components}.
We can see that the coefficients for the first two eigenfunctions are very small
compared with those for the several following eigenfunctions.
It is reasonable to use the eigenfunctions with the greatest magnitudes of
the coefficients. These major eigenfunctions provide the most useful information;
see Fig.~\ref{fig:sports-gallery}.
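The selection of major eigenfunctions by projection coefficients can be sketched as follows (our illustration; a random orthonormal basis stands in for the computed eigenfunctions, and the coefficient vector is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 8
# Hypothetical stand-in for computed eigenfunctions: any orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
# Synthetic "input image" whose energy sits mostly in eigenfunctions 3 and 4
img = Q @ np.array([0.01, 0.02, 5.0, 4.0, 0.1, 0.3, 0.05, 0.2])

coeff = Q.T @ img                        # projection coefficients <img, phi_i>
major = np.argsort(-np.abs(coeff))[:2]   # indices with the largest |coefficient|
```

Since the basis is orthonormal, the coefficients are recovered exactly by inner products, and the two eigenfunctions carrying most of the energy (indices 2 and 3) are selected.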
\begin{figure}
\caption{Top row: from left to right, the input image and the first 6 eigenfunctions.
Bottom row: from left to right, the edges on the input image (Canny),
the edges on the 3rd and 4th eigenfunctions (Canny), the $k$-means
clustering results with $k=3$ for the 3rd and the 4th eigenfunctions,
respectively; Level line of value 205 of the 3rd eigenfunction, level
line of value 150 of the 4th eigenfunction.}
\label{fig:sports-gallery}
\end{figure}
\begin{figure}
\caption{The coefficients of the input image projected onto the first 64 eigenfunctions in
Fig.~\ref{fig:sports-gallery}.}
\label{fig:sports-components}
\end{figure} | 1,873 | 17,873 | en |
train | 0.4999.6 | \section{The piecewise constant property of eigenfunctions}
\label{SEC:piecewise}
As we have seen in Section~\ref{SEC:numerics},
the eigenfunctions of problem (\ref{eq:HW-eigen})
are localized in sub-regions of the input image and the first few of them are close
to being piecewise constant for most input images except for two types of images.
The first type consists of images containing
regions part of whose boundaries is not clearly defined (such
as open arcs, which are common in natural images). In this case, the first eigenfunction
is no longer piecewise-constant although the function values can still be well clustered.
The other type consists of images for which the gray level changes gradually and the gradient
is bounded (i.e., the image contrast is mild).
In this case, the diffusion operator simply behaves like the Laplace operator and has
smooth eigenfunctions. For other types of images, the gray level has an abrupt change
across the edges of objects, which causes the conductance $g(|\nabla u_0|)$
to become nearly zero on the boundaries between the objects. As a consequence,
the first few eigenfunctions are close to being constant within each object. This property
forms the basis for the use of the eigenvalue problem (\ref{eq:HW-eigen}) (and its eigenfunctions)
in image segmentation and edge detection.
In this section, we attempt to explain this property from the physical, mathematical, and graph spectral points of view.
We hope that the analysis, although not rigorous, provides some insight into the phenomenon.
From the physical point of view, when the conductance $g(|\nabla u_0|)$
becomes nearly zero across the boundaries between the objects, the diffusion flux will be nearly zero
and each object can be viewed as a separated region from other objects. As a consequence,
the eigenvalue problem can be viewed as a problem defined on multiple separated subdomains, subject to
homogeneous Neumann boundary conditions (a.k.a. insulated boundary conditions)
on the boundary of the whole image and the internal
boundaries between the objects. Then, it is easy to see that the eigenfunctions corresponding to the eigenvalue 0
include constant and piecewise constant (taking a different constant value on each object) functions.
This may explain why piecewise constant eigenfunctions have been observed for most input images.
On the other hand, for images with mild contrast or open-arc object edges, the portion of the domain associated
with any object is no longer totally separated from the other objects, and thus the eigenvalue problem may not have piecewise constant
eigenfunctions.
\begin{figure}
\caption{A piecewise smooth function representing an input image with two objects.}
\label{fig:a-fun}
\end{figure}
Mathematically, we consider a 1D example with an input image with two objects. The gray level of the image is sketched
in Fig.~\ref{fig:a-fun}. The edge is located at the origin, and the
segmentation of the image is a 2-partition of $[-L,0]$ and
$[0,L]$. The 1D version of the eigenvalue problem (\ref{eq:HW-eigen}) reads as
\begin{equation}
- \frac{d}{d x}\left(g(|u_0'(x) |)\frac{d u}{d x} \right) =\lambda u ,
\label{eq:example1}
\end{equation}
subject to the Neumann boundary conditions $u'(-L)=u'(L)=0$. We take the conductance
function as in (\ref{D-6}) with $\alpha = 2$. Although $u_0$ is not differentiable,
we could imagine that $u_0$ were replaced by a smoothed function which has a very large
or an infinite derivative at the origin. Then, (\ref{eq:example1}) is degenerate since $g(|u_0'(x)|)$
vanishes at $x=0$. As a consequence, its eigenfunctions can be non-smooth. Generally speaking,
studying the eigenvalue problem of a degenerate elliptic operator is a difficult task,
and this is also beyond the scope of the current work. Instead of performing a rigorous analysis, we
consider a simple approximation to $g(|u_0'(x)|)$,
\[
g_{\epsilon}(x)=\begin{cases}
g(|u_0'(x)|), & \text{ for }-L\le x<-\epsilon{\rm \ or\ }\epsilon<x\le L\\
0, & \text{ for }-\epsilon \le x\le \epsilon
\end{cases}
\]
where $\epsilon$ is a small positive number.
The corresponding approximate eigenvalue problem is
\begin{equation}
- \frac{d}{d x}\left(g_\epsilon(|u_0'(x) |)\frac{d u}{d x} \right) =\lambda u .
\label{eq:exampe1-approx}
\end{equation}
The variational formulation of this eigenvalue problem is given by
\begin{equation}
\min_{u\in H^1(-L,L)}\int_{-L}^{L}g_{\epsilon}(|u_0'(x)|)(u')^{2}, \quad \text{ subject to } \int_{-L}^{L}u^{2}=1 .
\label{eq:ex1-variation-prob}
\end{equation}
Once again, the eigenvalue problem (\ref{eq:exampe1-approx}) and the variational problem (\ref{eq:ex1-variation-prob})
are degenerate and should be allowed to admit non-smooth solutions.
The first eigenvalue of (\ref{eq:exampe1-approx}) is 0, and a trivial eigenfunction associated with this eigenvalue
is a constant function. To get other eigenfunctions associated with 0, we consider
functions that are orthogonal to constant eigenfunctions, i.e., we append to
the optimization problem (\ref{eq:ex1-variation-prob}) the constraint
\[
\int_{-L}^{L}u=0.
\label{eq:ex1-constraints}
\]
It can be verified that an eigenfunction is
\begin{equation}
u_{\epsilon}(x)=\begin{cases}
-c, & \text{ for }x\in [-L,-\epsilon)\\
\frac{c x}{\epsilon}, & \text{ for } x\in [-\epsilon, \epsilon]\\
c, & \text{ for } x\in (\epsilon, L]
\end{cases}
\label{1D-solution}
\end{equation}
where $c = (2 (L-2\epsilon/3))^{-1/2}$. This function is piecewise constant on most of the domain
except the small region $[-\epsilon, \epsilon]$. Since the original problem (\ref{eq:example1}) can be viewed
to some extent as the limit of (\ref{eq:exampe1-approx}) as $\epsilon \to 0$, the above analysis may
explain why some of the eigenfunctions of (\ref{eq:example1}) behave like piecewise constant functions.
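For completeness, the normalization constant $c$ can be verified directly (this short check is ours): the constraint $\int_{-L}^{L}u_{\epsilon}^{2}=1$ gives
\[
\int_{-L}^{L}u_{\epsilon}^{2}
=2c^{2}(L-\epsilon)+\int_{-\epsilon}^{\epsilon}\frac{c^{2}x^{2}}{\epsilon^{2}}\,dx
=2c^{2}(L-\epsilon)+\frac{2c^{2}\epsilon}{3}
=2c^{2}\left(L-\frac{2\epsilon}{3}\right)=1,
\]
which yields $c=(2(L-2\epsilon/3))^{-1/2}$ as stated in (\ref{1D-solution}).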
The piecewise constant property can also be understood in the context of the graph spectral
theory. We first state a result from \cite{Mohar-Alavi-91,Luxburg-2007}.
\begin{prop}[\cite{Mohar-Alavi-91,Luxburg-2007}]\;
Assume that $G$ is an undirected graph with $k$ connected components and
the edge weights between those components are zero. If the nonnegative weight matrix $W$
and the diagonal matrix $D$ are defined as in Section~\ref{SEC:relation-1},
then the multiplicity of the eigenvalue 0 of the matrix $D-W$ equals the number of the connected
components in the graph. Moreover, the eigenspace of
eigenvalue 0 is spanned by the indicator vectors $1_{A_{1}},\cdots,1_{A_{k}}$
of those components ${A_{1}},\cdots, {A_{k}}$.
\label{prop-1}
\end{prop}
This proposition shows that the eigenvalue zero of the algebraic eigenvalue problem (\ref{eq:Malik-Shi-eigen})
could admit multiple eigenvectors as indicators of the components. As shown in Section~\ref{SEC:relation-1},
(\ref{eq:Malik-Shi-eigen}) can be derived from a finite difference discretization of the continuous
eigenvalue problem (\ref{eq:HW-eigen}) (with a proper choice of $\mathbb{D}$). Thus,
the indicators of the components can also be regarded as discrete approximations of some continuous
eigenfunctions. This implies that the latter must behave like piecewise constant functions.
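Proposition~\ref{prop-1} can be checked numerically on a small graph (our illustration, assuming NumPy):

```python
import numpy as np

# Weight matrix of a graph with two connected components (zero weights across)
W = np.array([[0., 1., 1., 0., 0.],
              [1., 0., 1., 0., 0.],
              [1., 1., 0., 0., 0.],
              [0., 0., 0., 0., 2.],
              [0., 0., 0., 2., 0.]])
D = np.diag(W.sum(axis=1))
L = D - W                                # unnormalized graph Laplacian

evals = np.linalg.eigvalsh(L)
# multiplicity of eigenvalue 0 equals the number of connected components (= 2)
n_zero = int(np.sum(np.isclose(evals, 0.0, atol=1e-10)))
# the indicator vector of a component lies in the nullspace of D - W
indicator = np.array([1., 1., 1., 0., 0.])
```

Here `n_zero` equals 2, and the indicator vector of the first component is annihilated by $D-W$, exactly as the proposition asserts.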
Interestingly, Szlam and Bresson \cite{Szlam-Bresson-10} recently proved
that global binary minimizers exist for a graph based problem called Cheeger Cut where the minimum of
the cut is not necessarily zero.
In the continuous setting, a properly designed conductance $g(|\nabla u_0|)$
can act like cutting the input image $u_0$ into subregions along the boundary
of the subregions and forcing the eigenvalue problem to be solved
on each subregion. In a simplified case, we can have the following continuous
analogue of Proposition~\ref{prop-1}. The proof is straightforward.
\begin{prop}\;
Suppose $\Omega\subset \mathbb{R}^2 $ is a bounded Lipschitz domain, $u_0 \in SBV(\Omega)$
(the collection of special functions of bounded variation) and the discontinuity set $K$ of $u_0$ is a finite
union of $C^{1}$ closed curves, $g(\cdot)$ is a bounded positive continuous function.
We define $g(|\nabla u_0|)=0$
for $(x,y)\in K$, and $g(|\nabla u_0|)$ takes its usual meaning for $(x,y)\in\Omega\backslash K$.
For any function $u\in SBV(\Omega)$, assuming $\Gamma$ is the discontinuity
set of $u$, we define the Rayleigh quotient on $u$ as
\[
R(u)=\frac{\widetilde{\int}_{\Omega}(\nabla u)^{T}g(|\nabla u_0|)\nabla u}{\int_{\Omega}u^{2}} ,
\]
where
\[
\widetilde{\int}_{\Omega}(\nabla u)^{T}g(|\nabla u_0|)\nabla u=\begin{cases}
\int_{\Omega\backslash K}g(|\nabla u_0|)|\nabla u|^{2}, & \text{ for }\Gamma\subseteq K\\
\infty, & \text{ for } \Gamma\nsubseteq K .
\end{cases}
\]
Then, the minimum of $R(u)$ is zero and any piecewise constant function
in $SBV(\Omega)$ with discontinuity set in $K$ is a minimizer.
\label{prop-2}
\end{prop}
The eigenvalue problem related to the above variational problem can be formally written as
\begin{equation}
-\nabla \cdot \left(g(|\nabla u_0|)\nabla u\right)=\lambda u.
\label{eq:general-eigen-prob}
\end{equation}
The equation should be properly defined for all $u_0$, $u$ possibly in the space
of $BV$. This is a degenerate elliptic problem which could admit discontinuous
solutions, and it seems to be far from being fully understood.
In the following proposition, we suggest a definition of weak solution
in a simplified case. The property indicates that problem (\ref{eq:general-eigen-prob})
is quite different from a classical elliptic eigenvalue problem if
it has a solution in $BV$ that takes a non-zero constant value on an
open set. The proof is not difficult and thus omitted.
\begin{prop}\;
Suppose $\Omega$ is a bounded Lipschitz domain in $\mathbb{R}^{2}$, $u_0\in SBV(\Omega)$
and the discontinuity set $K$ of $u_0$ is a finite union of $C^{1}$
closed curves, $g(\cdot)$ is a bounded positive continuous function.
We define $g(|\nabla u_0|)=0$ for $(x,y)\in K$, and $g(|\nabla u_0|)$ takes its usual
meaning for $(x,y)\in\Omega\backslash K$.
We define $u\in SBV(\Omega)$ to be a weak eigenfunction of (\ref{eq:general-eigen-prob})
satisfying a homogeneous Dirichlet boundary condition if
\[
\int_{\Omega}(\nabla u)^{T}g(|\nabla u_0|)\nabla\phi=\int_{\Omega}\lambda u\phi,
\quad \forall \phi \in C_{0}^{1}(\Omega)
\]
where, assuming that $\Gamma$ is the discontinuity set of $u$, the integral
on the left side is defined by
\[
\int_{\Omega}(\nabla u)^{T}g(|\nabla u_0|)\nabla\phi=\begin{cases}
\int_{\Omega\backslash K}(\nabla u)^{T}g(|\nabla u_0|)\nabla\phi, & \Gamma\subseteq K\\
\infty, & \Gamma\nsubseteq K.
\end{cases}
\]
If a weak eigenfunction $u\in SBV(\Omega)$ exists and takes a non-zero constant value
on a ball $B_{\epsilon}(x_{0}, y_0)\subset\Omega$, then the corresponding eigenvalue $\lambda$ is zero.
\label{prop-3}
\end{prop}
If (\ref{eq:general-eigen-prob}) indeed admits non-zero
piecewise-constant eigenfunctions, one can see an interesting connection
between (\ref{eq:general-eigen-prob}) (for simplicity we assume
a homogeneous Dirichlet boundary condition is used) and Grady's Random Walk image
segmentation model \cite{Grady-01} where multiple combinatorial Dirichlet problems are solved
for a $k$-region segmentation with predefined seeds indicating segmentation labels.
Using an argument similar to that in Section 2.2, one can show that the numerical
implementation of the method is equivalent to solving a set of Laplace problems
which are subject to a Neumann boundary condition on the image border and Dirichlet boundary conditions
on the seeds and are discretized on a uniform mesh for potentials $u^{i}$,
$i=1,\cdots,k$. These boundary problems read as
\begin{eqnarray}
\nabla \cdot \left(\begin{bmatrix}g(|\partial_{x}u_{0}|) & 0\\
0 & g(|\partial_{y}u_{0}|)
\end{bmatrix}\nabla u^{i}\right) & = & 0,\ {\rm in}\ \Omega\backslash S
\label{eq:Grady-model}\\
\frac{\partial u^{i}}{\partial n} & = & 0,\ {\rm on}\ \partial\Omega \nonumber \\
u^{i} & = & 1,\ {\rm in}\ S_{i} \nonumber \\
u^{i} & = & 0,\ {\rm in}\ S\backslash S_{i}
\nonumber
\end{eqnarray}
where $S_{i}$ is the set of seeds for label $i$ and $S$ is the set of all seeds.
This problem with a proper choice of $g$ also gives
a solution that has well clustered function values, a phenomenon called
``histogram concentration'' in \cite{Buades-Chien-08}.
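The histogram-concentration behavior can be illustrated on a one-dimensional toy version of the combinatorial Dirichlet problem (our sketch, not Grady's implementation; NumPy assumed):

```python
import numpy as np

# 1D combinatorial Dirichlet problem on nodes 0..9 with seeds u=1 at node 0
# and u=0 at node 9; conductance is ~0 across the "edge" between nodes 4 and 5.
n = 10
g = np.full(n - 1, 1.0)
g[4] = 1e-8                              # near-zero conductance at the boundary

L = np.zeros((n, n))                     # weighted graph Laplacian of the chain
for i in range(n - 1):
    L[i, i] += g[i]; L[i + 1, i + 1] += g[i]
    L[i, i + 1] -= g[i]; L[i + 1, i] -= g[i]

interior = list(range(1, n - 1))         # solve only for the unseeded nodes
b = -L[np.ix_(interior, [0, n - 1])] @ np.array([1.0, 0.0])
u = np.linalg.solve(L[np.ix_(interior, interior)], b)
```

The computed potentials cluster tightly near $1$ on one side of the near-zero conductance and near $0$ on the other, i.e., the histogram of $u$ concentrates at two values.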
Note that when $\lambda=0$, the following equation, which
is an anisotropic generalization of (\ref{eq:general-eigen-prob}),
$$
-\nabla \cdot
\left(\begin{bmatrix}g(|\partial_{x}u_{0}|) & 0\\
0 & g(|\partial_{y}u_{0}|)
\end{bmatrix}\nabla u^{i}\right) = \lambda u^i,
$$
becomes exactly the Laplace equation in Grady's model which
is the Euler-Lagrange equation of the energy
\begin{equation}
\int_{\Omega}(\nabla u)^{T}\begin{bmatrix}g(|\partial_{x}u_{0}|) & 0\\
0 & g(|\partial_{y}u_{0}|)
\end{bmatrix}\nabla u .
\label{eq:bv-energy}
\end{equation}
While the proper definition of functional (\ref{eq:bv-energy})
for general $u$, $u_{0}$ possibly in $BV$ is missing, we can still
define it for a simpler case as in Proposition~\ref{prop-2}. Then, there is
no unique minimizer of this energy, and, as stated in
Proposition~\ref{prop-2}, any minimizer of the above energy in $SBV$ yields the minimum
value 0 in the ideal case (with proper $g$ and $u_{0}$ as in
Proposition~\ref{prop-2}). While the eigenvalue method considers the minimizer of the above
energy on the admissible set $\left\{ u:u|_{\partial\Omega}=0,\int u^{2}\ dx=1\right\} $,
the Random Walk method considers the minimizer satisfying boundary
conditions in (\ref{eq:Grady-model}). Both give piecewise-constant
minimizers that can be used for image segmentation. | 4,078 | 17,873 | en |
train | 0.4999.7 | \section{Concluding remarks}
\label{SEC:conclusion}
We have introduced an eigenvalue problem of an anisotropic differential operator
as a tool for image segmentation. It is a continuous and anisotropic generalization
of some commonly used, discrete spectral clustering models for image
segmentation. The continuous formulation of the eigenvalue problem allows
for accurate and efficient numerical implementation, which is crucial
in locating the boundaries in an image. An important observation
from numerical experiments is that non-trivial, almost piecewise constant
eigenfunctions associated with very small eigenvalues exist, and
this phenomenon seems to be an inherent property of the model.
These eigenfunctions can be used as the basis for image segmentation
and edge detection. The mathematical theory behind this is still unknown
and will be an interesting topic for future research.
We have implemented our model with a finite element method and shown
that anisotropic mesh adaptation is essential to the accuracy and efficiency
of the numerical solution of the model. Numerical tests on segmentation
of synthetic, natural or texture images based on computed eigenfunctions
have been conducted. It has been shown that the adaptive mesh implementation
of the model can lead to a significant gain in efficiency.
Moreover, numerical results also show that the anisotropic nature of the
model can enhance some nontrivial regions of eigenfunctions which may
not be captured by a less anisotropic or an isotropic model.
\end{document} | 340 | 17,873 | en |
train | 0.5000.0 | \begin{document}
\title{Principally Goldie*-Lifting Modules}
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{\emph{(\theenumi)}}
\textbf{ABSTRACT.} A module $M$ is called principally Goldie*-lifting if for every proper cyclic submodule $X$ of $M$, there is a direct summand $D$ of $M$ such that $X\beta^* D$. In this paper, we focus on principally Goldie*-lifting modules as generalizations of lifting modules. Various properties of these modules are given.
\begin{flushleft}
\textbf{Mathematics Subject Classification (2010)}: 16D10, 16D40, 16D70.\\
\end{flushleft}
\textbf{Keywords}: Principally supplemented, Principally lifting, Goldie*-lifting, Principally Goldie*-lifting.
\section{Introduction}
\quad Throughout this paper $R$ denotes an associative ring with identity and all modules are unital right $R$-modules. $Rad(M)$ will denote the Jacobson radical of $M$. Let $M$ be an $R$-module and $N,K$ be submodules of $M$. A submodule $K$ of a module $M$ is called \emph{small} (or superfluous) in $M$, denoted by $K\ll M$, if for every submodule $N$ of $M$ the equality $K+N=M$ implies $N=M$. $K$ is called a \emph{supplement} of $N$ in $M$ if $K$ is minimal with respect to $N+K=M$; equivalently, $K$ is a supplement (weak supplement) of $N$ in $M$ if and only if $K+N=M$ and $K\cap N\ll K$ ($K\cap N\ll M$). A module $M$ is called a \emph{supplemented module} (\emph{weakly supplemented module}) if every submodule of $M$ has a supplement (weak supplement) in $M$. A module $M$ is a $\oplus$-\emph{supplemented module} if every submodule of $M$ has a supplement which is a direct summand of $M$. \cite{Harmancý} defines principally supplemented modules and investigates their properties. A module $M$ is said to be \emph{principally supplemented} if for every cyclic submodule $X$ of $M$ there exists a submodule $N$ of $M$ such that $M=N+X$ and $N\cap X$ is small in $N$. A module $M$ is said to be \emph{$\oplus$-principally supplemented} if for each cyclic submodule $X$ of $M$ there exists a direct summand $D$ of $M$ such that $M=D+X$ and $D\cap X\ll D$. A nonzero module $M$ is said to be \emph{hollow} if every proper submodule of $M$ is small in $M$, and \emph{principally hollow} if every proper cyclic submodule of $M$ is small in $M$. Clearly, hollow modules are principally hollow. Given submodules $K\subseteq N \subseteq M$, the inclusion $K\hookrightarrow N$ is called \emph{cosmall} in $M$, denoted by $K\overset{cs}\hookrightarrow N$, if $N/K \ll M/K$.\\
\newline \quad Lifting modules play an important role in module theory, and their various generalizations are studied by many authors in \cite{Harmancý}, \cite{Býrkenmeýer}, \cite{Kamal}, \cite{Keskin}, \cite{Muller}, \cite{Talebý}, \cite{Yongduo}, etc. A module $M$ is called a \emph{lifting module} if for every submodule $N$ of $M$ there is a decomposition $M=D\oplus D'$ such that $D\subseteq N$ and $D'\cap N \ll M$. A module $M$ is called a \emph{principally lifting module} if for every cyclic submodule $X$ of $M$ there exists a decomposition $M=D\oplus D'$ such that $D\subseteq X$ and $D'\cap X \ll M$. G.~F.~Birkenmeier et al.\ \cite{Býrkenmeýer} define the $\beta^*$ relation to study the open problem ``Is every $H$-supplemented module supplemented?'' posed in \cite{Muller}. They say submodules $X$, $Y$ of $M$ are $\beta^*$ equivalent, $X\beta^* Y$, if and only if $\dfrac{X+Y}{X}$ is small in $\dfrac{M}{X}$ and $\dfrac{X+Y}{Y}$ is small in $\dfrac{M}{Y}$. $M$ is called \emph{Goldie*-lifting} (or briefly, \emph{$\mathcal{G}$*-lifting}) if and only if for each $X\leq M$ there exists a direct summand $D$ of $M$ such that $X\beta^*D$. $M$ is called \emph{Goldie*-supplemented} (or briefly, \emph{$\mathcal{G}$*-supplemented}) if and only if for each $X\leq M$ there exists a supplement submodule $S$ of $M$ such that $X\beta^*S$ (see \cite{Býrkenmeýer}).
In Section $2$, we recall the equivalence relation $\beta^*$ defined in \cite{Býrkenmeýer} and investigate some of its basic properties.
In Section $3$, we define principally Goldie*-lifting modules as a generalization of lifting modules. We give some necessary assumptions for a quotient module or a direct summand of a principally Goldie*-lifting module to be principally Goldie*-lifting. Principally lifting, principally Goldie*-lifting, and principally supplemented modules are compared. It is also shown that the notions of principally lifting, principally Goldie*-lifting, and $\oplus$-principally supplemented coincide for $\pi$-projective modules.
\section{Properties of $\beta^*$ Relation }
\begin{definition}(See \cite{Býrkenmeýer}) \label{rel}
Any submodules $X,Y$ of $M$ are $\beta^*$ equivalent, $X\beta^*Y$, if and only if $\dfrac{X+Y}{X}$ is small in $\dfrac{M}{X}$ and $\dfrac{X+Y}{Y}$ is small in $\dfrac{M}{Y}$.
\end{definition}
\begin{lemma}\normalfont{(See \cite{Býrkenmeýer})}
$\beta^*$ is an equivalence relation.
\end{lemma}
By [\cite{Býrkenmeýer}, page $43$], the zero submodule is $\beta^*$ equivalent to any small submodule.
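Indeed, this is immediate from Definition \ref{rel}: if $S\ll M$, then $\dfrac{S+0}{0}\cong S\ll M\cong\dfrac{M}{0}$ and $\dfrac{S+0}{S}=0\ll\dfrac{M}{S}$, so $S\beta^*0$. For instance (our illustration), in the $\mathbb{Z}$-module $\mathbb{Z}/8\mathbb{Z}$ every proper submodule is small, so $2\mathbb{Z}/8\mathbb{Z}$, $4\mathbb{Z}/8\mathbb{Z}$, and $0$ are all $\beta^*$ equivalent to one another, using that $\beta^*$ is an equivalence relation.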
\begin{theorem}\normalfont{(See \cite{Býrkenmeýer})} \label{thm1}
Let $X,Y$ be submodules of $M$. The following are equivalent:
\begin{itemize}
\item[(a)] $X\beta^*Y$.
\item[(b)] $X\overset{cs}\hookrightarrow X+Y$ and $Y\overset{cs}\hookrightarrow X+Y$.
\item[(c)] If $X+Y+A=M$ for a submodule $A$ of $M$, then $X+A=M$ and $Y+A=M$.
\item[(d)] If $K+X=M$ for any submodule $K$ of $M$, then $Y+K=M$ and if $Y+H=M$ for any submodule $H$ of $M$, then $X+H=M$.
\end{itemize}
\end{theorem}
\begin{lemma} \label{lemma1}
Let $M=D\oplus D'$ and $A,B\leq D$. Then $A\beta^*B$ in $M$ if and only if $A\beta^*B$ in $D$.
\end{lemma}
\begin{proof}
$(\Rightarrow)$ Let $A\beta^*B$ in $M$ and $A+B+N=D$ for some submodule $N$ of $D$. Let us show $A+N=D$ and $B+N=D$. Since $A\beta^*B$ in $M$, $$M=D\oplus D'=A+B+N+D'$$ implies $A+N+D'=M$ and $B+N+D'=M$. By [\cite{Wisbauer}, $41$], $A+N=D$ and $B+N=D$. From Theorem \ref{thm1}, we get $A\beta^*B$ in $D$.\\
$(\Leftarrow)$ Let $A\beta^*B$ in $D$. Then $\dfrac{A+B}{A}\ll \dfrac{D}{A}$ implies $\dfrac{A+B}{A}\ll \dfrac{M}{A}$. Similarly, $\dfrac{A+B}{B}\ll \dfrac{D}{B}$ implies $\dfrac{A+B}{B}\ll \dfrac{M}{B}$. This means that $A\beta^*B$ in $M$.
\end{proof}
\begin{lemma} \label{lemma2}
If a direct summand $D$ of $M$ is $\beta^*$ equivalent to a cyclic submodule $X$ of $M$, then $D$ is also cyclic.
\end{lemma}
\begin{proof}
Assume that $M=D\oplus D'$ for some submodules $D,D'$ of $M$ and $X$ is a cyclic submodule of $M$ which is $\beta^*$ equivalent to $D$. By Theorem \ref{thm1}, $M=X+D'$. Since $\dfrac{X+D'}{D'}=\dfrac{M}{D'}\cong D$ and $\dfrac{X+D'}{D'}$ is an epimorphic image of the cyclic module $X$, $D$ is cyclic.
\end{proof} | 2,434 | 10,100 | en |
train | 0.5000.1 | \section{Principally Goldie* - Lifting Modules}
\indent \indent In \cite{Býrkenmeýer}, the authors defined the $\beta^*$ relation and introduced two notions based on it, called Goldie*-supplemented modules and Goldie*-lifting modules. $M$ is called \emph{Goldie*-lifting} (or briefly, $\mathcal{G}$*-lifting) if and only if for each $N\leq M$ there exists a direct summand $D$ of $M$ such that $N\beta^*D$. $M$ is called \emph{Goldie*-supplemented} (or briefly, $\mathcal{G}$*-supplemented) if and only if for each $N\leq M$ there exists a supplement submodule $S$ of $M$ such that $N\beta^*S$. A module $M$ is said to be \emph{$H$-supplemented} if for every submodule $N$ of $M$ there is a direct summand $D$ of $M$ such that $M=N+B$ holds if and only if $M=D+B$ for any submodule $B$ of $M$. They showed that Goldie*-lifting modules and $H$-supplemented modules coincide in [\cite{Býrkenmeýer}, Theorem $3.6$]. In this section, we define principally Goldie*-lifting modules (briefly, principally $\mathcal{G}$*-lifting modules) as a generalization of $\mathcal{G}$*-lifting modules and investigate some of their properties.
\begin{definition}
A module $M$ is called \emph{principally Goldie*-lifting module} (briefly principally $\mathcal{G}$*-lifting) if for each cyclic submodule $X$ of $M$, there exists a direct summand $D$ of $M$ such that $X\beta^*D$.
\end{definition}
Clearly, every $\mathcal{G}$*-lifting module is principally $\mathcal{G}$*-lifting. However, the converse does not hold, as the following example shows.
\begin{example}
Consider the $\mathbb{Z}$-module $\mathbb{Q}$. Since $Rad (\mathbb{Q})= \mathbb{Q}$, every cyclic submodule of $\mathbb{Q}$ is small in $\mathbb{Q}$. By [\cite{Býrkenmeýer}, Example $2.15$], the $\mathbb{Z}$-module $\mathbb{Q}$ is principally $\mathcal{G}$*-lifting. But the $\mathbb{Z}$-module $\mathbb{Q}$ is not supplemented. So it is not $\mathcal{G}$*-lifting by [\cite{Býrkenmeýer}, Theorem $3.6$].
\end{example}
A module $M$ is said to be \emph{radical} if $Rad(M)=M$.
\begin{lemma}
Every radical module is principally $\mathcal{G}$*-lifting.
\end{lemma}
\begin{proof}
Let $m\in M$. Since $M$ is radical, $mR \subseteq Rad(M)$. By [\cite{Wisbauer}, $21.5$], $mR\ll M$. So we get $mR \beta^*0$. Thus $M$ is principally $\mathcal{G}$*-lifting.
\end{proof}
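For instance (our illustration, not taken from the sources cited above), the Prüfer group $\mathbb{Z}(p^{\infty})$ is a radical $\mathbb{Z}$-module, since it has no maximal submodules. Hence, by the lemma, every cyclic submodule $x\mathbb{Z}$ satisfies $x\mathbb{Z}\beta^*0$, and $\mathbb{Z}(p^{\infty})$ is principally $\mathcal{G}$*-lifting.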
\begin{theorem}\label{thm3}
Let $M$ be a module. Consider the following conditions:
\begin{itemize}
\item[(a)] $M$ is principally lifting,
\item[(b)] $M$ is principally $\mathcal{G}$*-lifting,
\item[(c)] $M$ is principally supplemented.
\end{itemize}
Then $(a)\Rightarrow (b)\Rightarrow (c)$.
\end{theorem}
\begin{proof}
$(a)\Rightarrow (b)$ Let $m\in M$. Then $mR$ is a cyclic submodule of $M$. From $(a)$, there is a decomposition $M=D\oplus D'$ with $D\leq mR$ and $mR \cap D' \ll M$. Since $D\leq mR$, $\dfrac{mR+D}{mR}\ll \dfrac{M}{mR}$. By modularity, $mR=D\oplus (mR\cap D')$. Then $\dfrac{mR}{D}\cong mR\cap D'$ and $\dfrac{M}{D}\cong D'$. If $mR \cap D' \ll M$, by [\cite{Wisbauer}, $19.3$], $mR \cap D' \ll D'$. It implies that $\dfrac{mR+D}{D}\ll \dfrac{M}{D}$. Therefore it is seen that $mR\beta^*D$ from Definition \ref{rel}. Hence $M$ is principally $\mathcal{G}$*-lifting.\\
$(b)\Rightarrow (c)$ Let $m \in M$. By hypothesis, there exists a direct summand $D$ of $M$ such that $mR\beta^*D$. Since $M=D\oplus D'$ for some submodule $D'$ of $M$ and $D'$ is a supplement of $D$, $D'$ is a supplement of $mR$ in $M$ by [\cite{Býrkenmeýer}, Theorem $2.6 (ii)$]. Thus $M$ is principally supplemented.
\end{proof}
\quad The following example shows that a principally $\mathcal{G}$*-lifting module need not be principally lifting in general:
\begin{example}
Consider the $\mathbb{Z}$-module $M=\mathbb{Z} / 2\mathbb{Z} \oplus \mathbb{Z}/ 8\mathbb{Z}$. By [\cite{Yongduo}, Example $3.7$], $M$ is an $H$-supplemented module. Then $M$ is $\mathcal{G}$*-lifting by [\cite{Býrkenmeýer}, Theorem $3.6$]. Since every $\mathcal{G}$*-lifting module is principally $\mathcal{G}$*-lifting, $M$ is principally $\mathcal{G}$*-lifting. But from [\cite{Harmancý}, Examples $7.(3)$], $M$ is not principally lifting.
\end{example}
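The module $M=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}$ is small enough that both properties can be verified by brute force. The following sketch (ours, not part of the paper; the names are ad hoc) identifies $M$ with the abelian group $\mathbb{Z}_2\times\mathbb{Z}_8$, enumerates its subgroups, and tests the definitions directly: smallness of $(S+X)/X$ in $M/X$ is checked via subgroups $T\supseteq X$, $\beta^*$ via Definition \ref{rel}, and direct summands via complements.

```python
# Brute-force check on M = Z/2 x Z/8 (illustration; names are ad hoc).
from itertools import product

M = [(a, b) for a in range(2) for b in range(8)]
ZERO = frozenset({(0, 0)})

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 8)

def span(gens):
    """Subgroup generated by gens (closure under addition)."""
    S = {(0, 0)}
    changed = True
    while changed:
        changed = False
        for x in list(S):
            for g in gens:
                y = add(x, g)
                if y not in S:
                    S.add(y)
                    changed = True
    return frozenset(S)

TOP = span(M)
# Z/2 + Z/8 is 2-generated, so pairs of elements yield every subgroup.
subgroups = {span({a, b}) for a, b in product(M, M)}

def plus(A, B):
    return span(set(A) | set(B))

def is_summand(D):
    return any(plus(D, T) == TOP and D & T == ZERO for T in subgroups)

def small_mod(S, X):
    """(S+X)/X is small in M/X: any T >= X with S+T = M must equal M."""
    return all(T == TOP for T in subgroups if X <= T and plus(S, T) == TOP)

def beta_star(X, D):
    return small_mod(D, X) and small_mod(X, D)

cyclics = {span({m}) for m in M}

# principally G*-lifting: each cyclic X is beta*-related to a direct summand.
pg_lifting = all(
    any(is_summand(D) and beta_star(X, D) for D in subgroups)
    for X in cyclics
)

# principally lifting: each cyclic X contains a direct summand D of M with
# M = D (+) D' and X n D' small in M.
def lifts(X):
    return any(
        D <= X and plus(D, Dp) == TOP and D & Dp == ZERO
        and small_mod(X & Dp, ZERO)
        for D in subgroups for Dp in subgroups
    )

p_lifting = all(lifts(X) for X in cyclics)
print(pg_lifting, p_lifting)  # expect: True False
```

The witness for the failure of principal lifting is the cyclic subgroup generated by $(\overline{1},\overline{2})$: it contains no nonzero direct summand of $M$ and is not itself a summand, yet it is $\beta^*$-related to the summand generated by $(\overline{1},\overline{0})$.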
\begin{theorem}\label{ind}
Let $M$ be an indecomposable module. Consider the following conditions:
\begin{itemize}
\item[(a)] $M$ is principally lifting,
\item[(b)] $M$ is principally hollow,
\item[(c)] $M$ is principally $\mathcal{G}$*-lifting.
\end{itemize}
Then $(a)\Leftrightarrow (b)\Leftrightarrow (c)$.
\end{theorem}
\begin{proof}
$(a)\Leftrightarrow(b)$ This follows from [\cite{Harmancý}, Lemma $14$].\\
$(b)\Rightarrow(c)$ Let $M$ be principally hollow and $m\in M$. Then $mR \ll M$, which implies that $mR\beta^*0$.\\
$(c) \Rightarrow (b)$ Let $mR$ be a proper cyclic submodule of $M$. By $(c)$, there exists a decomposition $M=D\oplus D'$ such that $mR \beta^*D$. Since $M$ is indecomposable, $D=M$ or $D=0$. If $D=M$, then from [\cite{Býrkenmeýer}, Corollary $2.8.(iii)$] we obtain $mR=M$, a contradiction. Thus $mR\beta^*0$ and so $mR\ll M$. That is, $M$ is principally hollow.
\end{proof}
The following example exhibits a module which is principally supplemented but not principally $\mathcal{G}$*-lifting.
\begin{example}
Let $F$ be a field and let $x$ and $y$ be commuting indeterminates over $F$. Let $R= F[x,y]$ be the polynomial ring, consider its ideals $I_{1}=(x^{2})$ and $I_{2}=(y^{2})$, and set $S=R/(x^{2},y^{2})$. Consider the $S$-module $M=\overline{x}S+\overline{y}S$. By [\cite{Harmancý}, Example $15$], $M$ is an indecomposable $S$-module and it is not principally hollow. Then $M$ is not principally $\mathcal{G}$*-lifting by Theorem \ref{ind}. Again by [\cite{Harmancý}, Example $15$], $M$ is principally supplemented.
\end{example}
A module $M$ is said to be \emph{principally semisimple} if every cyclic submodule of $M$ is a direct summand of $M$.
\begin{lemma}
Every principally semisimple module is principally $\mathcal{G}$*-lifting.
\end{lemma}
\begin{proof}
If $M$ is principally semisimple, then every cyclic submodule $mR$ of $M$ is a direct summand, and $mR\beta^*mR$ by the reflexivity of $\beta^*$. Thus $M$ is principally $\mathcal{G}$*-lifting.
\end{proof}
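For instance (our illustration, not from the source), every semisimple module is principally semisimple; a minimal example over $\mathbb{Z}$ is the following:

```latex
% Illustration (not from the source): Z/6Z as a Z-module.
% Every submodule, in particular every cyclic submodule, is a direct summand:
\[
  \mathbb{Z}/6\mathbb{Z} \;=\; 2\mathbb{Z}/6\mathbb{Z} \,\oplus\, 3\mathbb{Z}/6\mathbb{Z},
  \qquad
  2\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}/3\mathbb{Z},\quad
  3\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}/2\mathbb{Z}.
\]
% Each cyclic submodule mR is thus a summand, so mR beta* mR witnesses
% that Z/6Z is principally G*-lifting, as in the lemma above.
```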
Recall that a submodule $N$ of $M$ is called \emph{fully invariant} if $f(N)\leq N$ for each endomorphism $f$ of $M$. A module $M$ is said to be a \emph{duo module} if every submodule of $M$ is fully invariant. A module $M$ is called \emph{distributive} if for all submodules $A,B,C$ of $M$, $A+(B\cap C)=(A+B)\cap (A+C)$ or, equivalently, $A\cap (B+C)=(A\cap B)+(A\cap C)$.\\
\begin{proposition}
Let $M = M_{1}\oplus M_{2}$ be a duo module (or distributive module). Then $M$ is principally $\mathcal{G}$*-lifting if and only if $M_{1}$ and $M_{2}$ are principally $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof}
$(\Rightarrow)$ Let $m\in M_1$. Since $M$ is principally $\mathcal{G}$*-lifting, there is a decomposition $M=D\oplus D'$ such that $mR\beta^*D$ in $M$. As $M$ is a duo module, $M_1=(M_1\cap D)\oplus (M_1\cap D')$, $mR=(mR\cap D)\oplus (mR\cap D')$ and \linebreak $D'=(M_1\cap D')\oplus(M_2\cap D')$. We claim that $mR\beta^*(M_1\cap D)$ in $M_1$. Since $\dfrac{mR+(M_1\cap D)}{mR}\leq\dfrac{mR+D}{mR}$ and $\dfrac{mR+D}{mR}\ll\dfrac{M}{mR}$, we have $\dfrac{mR+(M_1\cap D)}{mR}\ll\dfrac{M}{mR}$ by [\cite{Wisbauer}, $19.3(2)$]. From the isomorphism theorem and the direct decomposition of $mR$,
$$\dfrac{mR+(M_1\cap D)}{M_1\cap D}\cong\dfrac{mR}{mR\cap(M_1\cap D)}=\dfrac{mR}{mR\cap D}\cong mR\cap D'.$$ Since $D'$ is a supplement of $mR$, $mR\cap D'\ll D'$. By [\cite{Wisbauer}, $19.3(5)$], \linebreak $mR\cap D'\ll M_1\cap D'$. Further, $M_1\cap D'\cong\dfrac{M_1}{M_1\cap D}$. This shows that $\dfrac{mR+(M_1\cap D)}{M_1\cap D}$ is small in $\dfrac{M_1}{M_1\cap D}$ and also in $\dfrac{M}{M_1\cap D}$. From Definition \ref{rel} we get $mR\beta^*(M_1\cap D)$ in $M$. Then $mR\beta^*(M_1\cap D)$ in $M_1$ by Lemma \ref{lemma1}.
$(\Leftarrow)$ Let $m\in M$. Since $M$ is a duo module, for the cyclic submodule $mR$ of $M$, \linebreak $mR=(mR\cap M_{1})\oplus (mR\cap M_{2})$. So $mR\cap M_{1}=m_{1}R$ and $mR\cap M_{2}=m_{2}R$ for some $m_1\in M_1$, $m_2\in M_2$. Since $M_{1}$ and $M_{2}$ are principally $\mathcal{G}$*-lifting, there are decompositions $M_1=D_1\oplus D_1'$ and $M_2=D_2\oplus D_2'$ such that $m_{1}R\beta^*D_1$ in $M_1$ and $m_{2}R\beta^*D_2$ in $M_2$. By Lemma \ref{lemma1}, $m_{1}R\beta^*D_1$ and $m_{2}R\beta^*D_2$ in $M$. Since $mR=m_{1}R+m_{2}R$, by [\cite{Býrkenmeýer}, Proposition $2.11$], $mR\beta^*(D_1\oplus D_2)$. As $D_1\oplus D_2$ is a direct summand of $M$, $M$ is principally $\mathcal{G}$*-lifting.
\end{proof}
\begin{proposition}
Let any cyclic submodule of $M$ have a supplement which is a relatively projective direct summand of $M$. Then $M$ is principally $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof} Let $m\in M$. By hypothesis, there exists a decomposition \linebreak $M=D\oplus D'$ such that $M=mR+D'$ and $mR\cap D'\ll D'$. Because $D$ is $D'$-projective, $M=A\oplus D'$ for some submodule $A$ of $mR$ by [\cite{Muller}, Lemma $4.47$]. So $M$ is principally lifting. It follows from Theorem \ref{thm3} that $M$ is principally $\mathcal{G}$*-lifting.
\end{proof}
\begin{proposition}\label{quo}
Let $M$ be principally $\mathcal{G}$*-lifting and $N$ be a submodule of $M$. If $\dfrac{N+D}{N}$ is a direct summand in $\dfrac{M}{N}$ for any cyclic direct summand $D$ of $M$, then $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof}
Let $\dfrac{mR+N}{N}$ be a cyclic submodule of $\dfrac{M}{N}$ for $m\in M$. Since $M$ is principally $\mathcal{G}$*-lifting, there exists a decomposition $M=D\oplus D'$ such that $mR\beta^*D$. Then $D$ is also cyclic by Lemma \ref{lemma2}. By hypothesis, $\dfrac{D+N}{N}$ is a direct summand in $\dfrac{M}{N}$. We claim that $\dfrac{mR+N}{N}\beta^*\dfrac{D+N}{N}$. Consider the canonical epimorphism $\theta : M\rightarrow M/N$. By [\cite{Býrkenmeýer}, Proposition $2.9(i)$], $\theta (mR)\beta^*\theta (D)$, that is, $\dfrac{mR+N}{N}\beta^*\dfrac{D+N}{N}$. Thus $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting.
\end{proof} | 3,861 | 10,100 | en |
train | 0.5000.2 | \begin{proposition}
Let any cyclic submodule of $M$ have a supplement which is a relatively projective direct summand of $M$. Then $M$ is principally $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof} Let $m\in M$. By hypothesis, there exsists a decomposition \linebreak $M=D\oplus D'$ such that $M=mR+D'$ and $mR\cap D'\ll D'$. Because $D$ is $D'$-projective, $M=A\oplus D'$ for some submodule $A$ of $mR$ by [\cite{Muller}, Lemma $4.47$]. So $M$ is principally lifting. It follows from Theorem \ref{thm3} that M is principally $\mathcal{G}$*-lifting.
\end{proof}
\begin{proposition}\label{quo}
Let $M$ be principally $\mathcal{G}$*-lifting and $N$ be a submodule of $M$. If $\dfrac{N+D}{N}$ is a direct summand in $\dfrac{M}{N}$ for any cyclic direct summand $D$ of $M$, then $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof}
Let $\dfrac{mR+N}{N}$ be a cyclic submodule of $\dfrac{M}{N}$ for $m\in M$. Since $M$ is principally $\mathcal{G}$*-lifting there exists a decomposition $M=D\oplus D'$ such that $mR\beta$*$D$. Then $D$ is also cyclic Lemma \ref{lemma2}. By hypothesis, $\dfrac{D+N}{N}$ is a direct summand in $\dfrac{M}{N}$. We claim that $\dfrac{mR+N}{N}\beta$*$\dfrac{D+N}{N}$. Consider the canonical epimorphism $\theta : M\rightarrow M/N$. By [\cite{Býrkenmeýer}, Proposition $2.9(i)$], $\theta (mR)\beta$*$\theta (D)$, that is, $\dfrac{mR+N}{N}\beta$*$\dfrac{D+N}{N}$. Thus $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting.
\end{proof}
\begin{corollary}
Let $M$ be principally $\mathcal{G}$*-lifting.
\begin{itemize}
\item[(a)] If $M$ is a distributive (or duo) module, then any quotient module of $M$ is principally $\mathcal{G}$*-lifting.
\item[(b)] Let $N$ be a projection invariant submodule of $M$, that is, $eN\subseteq N$ for all $e^2=e\in End(M)$. Then $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting. In particular, $\dfrac{M}{A}$ is principally $\mathcal{G}$*-lifting for every fully invariant submodule $A$ of $M$.
\end{itemize}
\end{corollary}
\begin{proof}
$(a)$ Let $N$ be any submodule of $M$ and $M=D\oplus D'$ for some submodules $D,D'$ of $M$. Then
$$\dfrac{M}{N}=\dfrac{D\oplus D'}{N}=\dfrac{D+N}{N}+\dfrac{D'+N}{N}.$$
Since $M$ is distributive, $N=(D+N)\cap(D'+N)$. We obtain $\dfrac{M}{N}=\dfrac{D+N}{N}\oplus \dfrac{D'+N}{N}$. By Proposition \ref{quo}, $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting.\\
$(b)$ Assume that $M=D\oplus D'$ for some $D,D'\leq M$. For the projection map $\pi_D:M\rightarrow D$, $\pi_D^2=\pi_D\in End(M)$ and $\pi_D(N)\subseteq N$. So $\pi_D(N)=N\cap D$. Similarly, $\pi_{D'}(N)=N\cap D'$ for the projection map $\pi_{D'}:M\rightarrow D'$. Hence we have $N=(N\cap D)+(N\cap D')$. By modularity,
$$M=[D+(N\cap D)+(N\cap D')]+(D'+N)=[D\oplus(N\cap D')]+(D'+N).$$
and
$$[D\oplus(N\cap D')]\cap(D'+N)=[D\cap(D'+N)]+(N\cap D')=(N\cap D)+(N\cap D')=N.$$
Thus $\dfrac{M}{N}=\dfrac{D\oplus(N\cap D')}{N}\oplus\dfrac{D'+N}{N}$. By Proposition \ref{quo}, $\dfrac{M}{N}$ is principally $\mathcal{G}$*-lifting.
\end{proof}
Another consequence of Proposition \ref{quo} is given in the next result.
A module $M$ is said to have the \emph{summand sum property} (SSP) if the sum of any two direct summands of $M$ is again a direct summand.
\begin{proposition}
Let $M$ be a principally $\mathcal{G}$*-lifting module. If $M$ has SSP, then any direct summand of $M$ is principally $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof}
Let $M=N\oplus N'$ for some submodules $N,N'$ of $M$. Take any cyclic direct summand $D$ of $M$. Since $M$ has SSP, $M=(D+N')\oplus T$ for some submodule $T$ of $M$. Then $$\dfrac{M}{N'}=\dfrac{D+N'}{N'}+\dfrac{T+N'}{N'}.$$ By modularity, $$(D+N')\cap(T+N') = N'+[(D+N')\cap T]= N'.$$ So we obtain $$\dfrac{M}{N'}=\dfrac{D+N'}{N'}\oplus\dfrac{T+N'}{N'}.$$ Since $N\cong \dfrac{M}{N'}$, $N$ is principally $\mathcal{G}$*-lifting by Proposition \ref{quo}.
\end{proof}
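For context (our remark, not from the source), the SSP hypothesis holds, for example, in any semisimple module:

```latex
% Illustration (not from the source): semisimple modules have SSP.
% If M is semisimple and D1, D2 are direct summands, then D1 + D2 is a
% submodule of M, and every submodule of a semisimple module is a direct
% summand; here "\leq^{\oplus}" abbreviates "is a direct summand of".
\[
  M \text{ semisimple},\quad D_1, D_2 \leq^{\oplus} M
  \ \Longrightarrow\
  D_1 + D_2 \leq^{\oplus} M .
\]
```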
Next we determine when $M/ Rad(M)$ is principally semisimple for a principally $\mathcal{G}$*-lifting module $M$.
\begin{proposition}
Let $M$ be a principally $\mathcal{G}$*-lifting and distributive module. Then $\dfrac{M}{Rad(M)}$ is principally semisimple.
\end{proposition}
\begin{proof}
Let $m\in M$. There exists a decomposition $M=D\oplus D'$ such that $mR\beta^*D$. By [\cite{Býrkenmeýer}, Theorem $2.6 (ii)$], $D'$ is a supplement of $mR$, that is, $M=mR+D'$ and $mR\cap D'\ll D'$. Then
$$\dfrac{M}{Rad(M)}=\dfrac{mR+Rad(M)}{Rad(M)} + \dfrac{D'+Rad(M)}{Rad(M)}.$$
Because $M$ is distributive and $mR\cap D'\ll D'$, $$(mR+Rad(M)) \cap (D'+ Rad(M))= (mR \cap D') + Rad(M)= Rad(M).$$ Hence $\dfrac{mR+Rad(M)}{Rad(M)}$ is a direct summand in $\dfrac{M}{Rad(M)}$, which means that $\dfrac{M}{Rad(M)}$ is a principally semisimple module.
\end{proof}
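A quick sanity check (ours, not from the source): take $M=\mathbb{Z}/12\mathbb{Z}$ as a $\mathbb{Z}$-module. Its submodule lattice is the divisor lattice of $12$, hence distributive, and $M$ is principally $\mathcal{G}$*-lifting since $\mathbb{Z}/12\mathbb{Z}\cong\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}$ is a lifting module:

```latex
% Illustration (not from the source): M = Z/12Z over Z.
% Rad(M) is the intersection of the maximal submodules 2M and 3M:
\[
  Rad(\mathbb{Z}/12\mathbb{Z})
  \;=\; 2\mathbb{Z}/12\mathbb{Z} \,\cap\, 3\mathbb{Z}/12\mathbb{Z}
  \;=\; 6\mathbb{Z}/12\mathbb{Z},
  \qquad
  \frac{\mathbb{Z}/12\mathbb{Z}}{6\mathbb{Z}/12\mathbb{Z}} \;\cong\; \mathbb{Z}/6\mathbb{Z},
\]
% and Z/6Z is semisimple, in agreement with the proposition above.
```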
\begin{proposition}\label{pro1}
Let $M$ be a principally $\mathcal{G}$*-lifting module and \linebreak $Rad(M)\ll M$. Then $\dfrac{M}{Rad(M)}$ is principally semisimple.
\end{proposition}
\begin{proof}
Let $\dfrac{X}{Rad(M)}$ be a cyclic submodule of $\dfrac{M}{Rad(M)}$. Then $X=mR+Rad(M)$ for some $m\in M$. There exists a decomposition $M=D\oplus D'$ such that $mR\beta^*D$. By [\cite{Býrkenmeýer}, Theorem $2.6 (ii)$], $D'$ is a supplement of $mR$ in $M$. It follows from [\cite{Býrkenmeýer}, Corollary $2.12$] that $(mR+Rad(M))\beta^*D$. Therefore $$\dfrac{M}{Rad(M)}=\dfrac{X}{Rad(M)}+\dfrac{D'+Rad(M)}{Rad(M)}.$$ By modularity and $X\cap D'\subseteq Rad(M)$, $$\dfrac{X}{Rad(M)} \cap \dfrac{D'+Rad(M)}{Rad(M)}=\dfrac{(X\cap D')+Rad(M)}{Rad(M)}=\dfrac{Rad(M)}{Rad(M)}.$$ Then we obtain that $\dfrac{M}{Rad(M)}=\dfrac{X}{Rad(M)}\oplus\dfrac{D'+Rad(M)}{Rad(M)}$. Thus $\dfrac{M}{Rad(M)}$ is principally semisimple.
\end{proof}
\begin{proposition}
Let $M$ be principally $\mathcal{G}$*-lifting. If $Rad(M)\ll M$, then $M=A\oplus B$ where $A$ is principally semisimple and $Rad(B) \ll B$.
\end{proposition}
\begin{proof}
Let $A$ be a submodule of $M$ such that the sum $Rad(M)+A$ is direct, and let $m\in A$. By hypothesis, there exists a decomposition $M=D\oplus D'$ such that $mR\beta^*D$. Then $D$ is cyclic by Lemma \ref{lemma2}. By [\cite{Býrkenmeýer}, Theorem $2.6(ii)$], $M=mR+D'$ and $mR\cap D'\ll D'$. So $mR\cap D'=0$ because $mR\cap D'$ is a submodule of both $Rad(M)$ and $A$. That is, $M=mR\oplus D'$. Hence $mR=D$. Since $D\cap Rad(M)=0$, $D$ is isomorphic to a submodule of $\dfrac{M}{Rad(M)}$. By Proposition \ref{pro1}, $\dfrac{M}{Rad(M)}$ is principally semisimple. Thus $D$ is principally semisimple.
\end{proof}
In general, principally lifting and principally $\mathcal{G}$*-lifting modules do not coincide. As the following result shows, they do coincide under a projectivity condition.
\begin{proposition}\label{pro}
Let $M$ be a $\pi$-projective module. The following are equivalent:
\begin{itemize}
\item[(a)] $M$ is principally lifting,
\item[(b)] $M$ is principally $\mathcal{G}$*-lifting,
\item[(c)] $M$ is $\oplus$-principally supplemented.
\end{itemize}
\end{proposition}
\begin{proof}
$(a)\Rightarrow (b)$ follows from Theorem \ref{thm3}.\\
$(b)\Rightarrow(c)$ follows from [\cite{Býrkenmeýer}, Theorem $2.6(ii)$].\\
$(c)\Rightarrow(a)$ Consider any $m\in M$. From $(c)$, $mR$ has a supplement $D$ which is a direct summand in $M$, that is, $M=mR+D$ and $mR\cap D\ll D$. Since $M$ is $\pi$-projective, there exists a complement $D'$ of $D$ such that $D'\subseteq mR$ by [\cite{Clark}, $4.14(1)$]. Thus $M$ is principally lifting.
\end{proof}
\begin{proposition}
Let $M$ be a $\pi$-projective module. Then $M$ is principally $\mathcal{G}$*-lifting if and only if every cyclic submodule $X$ of $M$ can be written as $X=D\oplus A$ such that $D$ is a direct summand in $M$ and $A\ll M$.
\end{proposition}
\begin{proof}
$(\Rightarrow)$ Let $M$ be a principally $\mathcal{G}$*-lifting and $\pi$-projective module. By Proposition \ref{pro}, $M$ is principally lifting. Then for any cyclic submodule $X$ of $M$ there exists a decomposition $M=D\oplus D'$ such that $D\leq X$ and $X \cap D'\ll M$. By modularity, we conclude that $X=D\oplus (X\cap D')$.
\newline $(\Leftarrow)$ By assumption and [\cite{Kamal}, Lemma $2.10$], $M$ is principally lifting. Hence, by Proposition \ref{pro}, $M$ is principally $\mathcal{G}$*-lifting.
\end{proof}
We now show that principally $\mathcal{G}$*-lifting and $\mathcal{G}$*-lifting modules coincide under certain conditions.
\begin{proposition}
Let $M$ be a Noetherian module with SSP. Then $M$ is principally $\mathcal{G}$*-lifting if and only if $M$ is $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof}
$(\Leftarrow)$ Clear.\\
$(\Rightarrow)$ Since $M$ is Noetherian, for any submodule $X$ of $M$ there exist $m_{1},m_{2},\ldots,m_{n}\in M$ such that $X=m_{1}R+m_{2}R+\cdots+m_{n}R$ by [\cite{Wisbauer}, $27.1$]. Since $M$ is principally $\mathcal{G}$*-lifting, there exist direct summands $D_{1}, D_{2},\ldots,D_{n}$ of $M$ such that $m_{1}R\beta^*D_{1}$, $m_{2}R\beta^*D_{2}$, \ldots, $m_{n}R\beta^*D_{n}$. Then $D=D_{1}+D_{2}+\cdots+D_{n}$ is also a direct summand in $M$ because of SSP. By [\cite{Býrkenmeýer}, Proposition $2.11$], $X\beta^*D$. Hence $M$ is $\mathcal{G}$*-lifting.
\end{proof}
\begin{proposition}
Suppose that every submodule $N$ of $M$ is the sum of a cyclic submodule $X$ and a small submodule $A$ of $M$. Then $M$ is principally $\mathcal{G}$*-lifting if and only if $M$ is $\mathcal{G}$*-lifting.
\end{proposition}
\begin{proof}
$(\Leftarrow)$ Clear.\\
$(\Rightarrow)$ Let $N=X+A$ for a cyclic submodule $X$ and a small submodule $A$ of $M$. Since $M$ is principally $\mathcal{G}$*-lifting, there exists a direct summand $D$ of $M$ such that $X\beta^*D$. From [\cite{Býrkenmeýer}, Corollary $2.12$], $(X+A)\beta^*D$, that is, $N\beta^*D$. Hence $M$ is $\mathcal{G}$*-lifting.
\end{proof}
\end{document} | 3,805 | 10,100 | en |