TITLE: "Complex" roots: when can we compute the result without imaginary numbers? QUESTION [2 upvotes]: The question is: how can we know whether we can solve an $n$-index root without imaginary numbers when the index $n$ is not an integer but some other real number? First, basic theory: $$\sqrt[2.0]{8} = 2.8284\ldots$$ or $$\sqrt[3.0]{8} = 2.0$$ By the relationship between roots and powers, this is the same as: $$8^\frac{1}{2.0} = 2.8284\ldots$$ or $$8^\frac{1}{3.0} = 2.0$$ Nevertheless, as we learn from powers (and this is very much related to the final question), it is difficult to imagine (or at least it is for me) how to represent this literally: $$8^{2.0} = 8 \cdot 8$$ but... $$8^\frac{1}{2.0} = \,?\quad 8 \cdot \text{how much?}$$ ...half of eight times half of eight (and not 4 by 4)? Besides, we know that when the index of a root is odd, we can compute the result for a negative number without "fighting" with complex numbers. Even: $$\sqrt[2.0]{-8} = 2.8284\ldots \cdot i = \sqrt[2.0]{8} \cdot \sqrt[2.0]{-1}$$ Odd: $$\sqrt[3.0]{-8} = -2.0$$ The index can't be zero, because in power form $$\sqrt[0.0]{n} = n^\frac{1}{0.0}$$ gives a number raised to an unbounded exponent, so the result is infinite.
But there is a little bit more: the index of a root can be negative too, which leads us to "invert" the result ($1/\text{result}$): $$\sqrt[-2.0]{8} = \frac{1}{2.8284\ldots}$$ $$\sqrt[-3.0]{8} = \frac{1}{2.0}$$ In power form, this means inverting the base so that the exponent stays a positive number, in this way: $$ n^{-exp} = \frac{1}{n^{exp}}$$ as we can see here: $$8^\frac{1}{-2} = \frac{1}{8^\frac{1}{2}} = \frac{1}{2.8284\ldots} = 0.35355\ldots$$ $$ 8^\frac{1}{-3} = \frac{1}{8^\frac{1}{3}} = \frac{1}{2.0} = 0.5$$ ...and of course, we can combine both "negative" numbers, in the radicand and in the index: $$ (-8)^\frac{1}{-2} = \frac{1}{(-8)^\frac{1}{2}} = \frac{1}{2.8284\ldots \cdot i}$$ $$ (-8)^\frac{1}{-3} = \frac{1}{(-8)^\frac{1}{3}} = \frac{1}{-2.0} = -0.5$$ Besides, while we are used to roots with integer index (square, cubic, etc.), there are also roots with float index: $$\sqrt[2.0]{8} = 2.8284\ldots$$ $$\sqrt[2.2]{8} = 2.5733\ldots$$ $$\sqrt[2.4]{8} = 2.3784\ldots$$ $$\sqrt[2.6]{8} = 2.2250\ldots$$ $$\sqrt[2.8]{8} = 2.1015\ldots$$ $$\sqrt[3.0]{8} = 2.0$$ And here is where the point of interest arises: we can compute roots of a negative radicand as long as the index stays odd, keeping ourselves away from the complex numbers ($i$), but what happens with the sign when the index of the root is a float number? We know that an index of $2$, for the signs, means $-\cdot- = +$ or $+\cdot+ = +$: the result can never be negative. For an index of $3$ it means $-\cdot-\cdot- = -$ or $+\cdot+\cdot+ = +$: both sign results are possible. But for a float index, how can we operate on the sign? We can't do something like "a fraction of $-$" times "a fraction of $-$". We can compute the value of roots with a float index, but what about the sign? Regards. REPLY [0 votes]: First, some terminology. A floating-point number (sometimes called a "float") is an object defined in computer science.
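The float-index values listed in the question can be checked numerically. A minimal Python sketch (the function name `float_index_root` is mine, not from the question; for a positive radicand this is just $x^{1/n}$):

```python
import math

def float_index_root(x, n):
    """n-th root of a positive radicand x for a real (possibly non-integer) index n."""
    # For x > 0 this is the same as x ** (1 / n).
    return math.exp(math.log(x) / n)

# Reproduce the table from the question (positive radicand only).
for n in (2.0, 2.2, 2.4, 2.6, 2.8, 3.0):
    print(f"index {n}: {float_index_root(8, n):.4f}")
```

A negative index works the same way and simply inverts the result, since $x^{1/(-n)} = 1/x^{1/n}$.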
It is a certain limited kind of rational number: it can represent only a finite number of values, those allowed by the particular encoding of whichever floating-point representation you are using at the time. For example, $\sqrt[2]8 = 2.8284\ldots$ is not a "float". In mathematical language, it is a real number. Arithmetically, taking an $n$th root is the inverse of taking an $n$th power. That means we perform exactly the same kind of operations, but with a known output instead of a known input. For example, to take the square root of $8$, we need to know that there is a positive real number $x$ such that $$ x^2 = 8. $$ To prove that such a number exists, you need a definition of the real numbers, which I think is outside the scope of this question. Given a good definition of "real number," however, it is possible to prove that such a number $x$ exists. We call that number $\sqrt[2]8.$ Similarly, if we want $\sqrt[5]{11},$ we need a positive real number $y$ such that $$ y^5 = 11. $$ There is only one positive real number that solves that equation, and one of its names is $\sqrt[5]{11}.$ Rational powers of positive real numbers can be computed by taking integer powers of integer roots (or integer roots of integer powers). Specifically, if $x$ is a positive real number and $m$ and $n$ are integers that have no common factor greater than $1$, with $n > 1,$ then $$ x^{m/n} = \left( \sqrt[n]x \right)^m = \sqrt[n]{x^m}. $$ It is conventional to extend this further to cover some cases where $x$ is negative. An example of such a definition is cited in What are the Laws of Rational Exponents? In short, if $x$ is positive and $n$ is odd, then we find that $$ \left(-\sqrt[n]x\right)^n = - \left(\sqrt[n]x\right)^n = -x, $$ and therefore we are able to define $$ \sqrt[n]{-x} = -\sqrt[n]x. $$ This does not work when $n$ is even.
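The identity $x^{m/n} = (\sqrt[n]x)^m$, including the odd-$n$ extension to negative $x$, can be sketched directly. The function name and structure here are mine, for illustration only:

```python
def rational_power(x, m, n):
    """x^(m/n) computed as an integer power of an integer root; requires n > 0.
    For x < 0 the real value exists only when n is odd."""
    if x >= 0:
        root = x ** (1.0 / n)
    elif n % 2 == 1:
        root = -((-x) ** (1.0 / n))  # sqrt[n]{-x} = -sqrt[n]{x} for odd n
    else:
        raise ValueError("even root of a negative number is not a real number")
    return root ** m

print(rational_power(8, 5, 11))   # 8^(5/11), i.e. the 2.2-index root of 8
print(rational_power(-8, 1, 3))   # the real cube root of -8
```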
In order to define $\sqrt[2]{-8}$ we have to define the complex numbers (using symbols such as $i$), and for the equation $z^n = x$ (with $x$ known) there are generally then $n$ distinct roots $z$ from which we may decide to select one as the value of $\sqrt[n]x.$ Considering your examples: \begin{align} \sqrt[2.2]{8} &= 8^{1/2.2} = 8^{5/11} = (\sqrt[11]8)^5,\\ \sqrt[2.4]{8} &= 8^{1/2.4} = 8^{5/12} = (\sqrt[12]8)^5,\\ \sqrt[2.6]{8} &= 8^{1/2.6} = 8^{5/13} = (\sqrt[13]8)^5,\\ \sqrt[2.8]{8} &= 8^{1/2.8} = 8^{5/14} = (\sqrt[14]8)^5. \end{align} If we replace $8$ by $-8$, we could write \begin{align} \sqrt[2.2]{-8} &= (-8)^{1/2.2} = (-8)^{5/11} = (\sqrt[11]{-8})^5 = -(\sqrt[11]{8})^5,\\ \sqrt[2.4]{-8} &= (-8)^{1/2.4} = (-8)^{5/12},\\ \sqrt[2.6]{-8} &= (-8)^{1/2.6} = (-8)^{5/13} = (\sqrt[13]{-8})^5 = -(\sqrt[13]{8})^5,\\ \sqrt[2.8]{-8} &= (-8)^{1/2.8} = (-8)^{5/14}, \end{align} but to work out the cases $\sqrt[2.4]{-8}$ and $\sqrt[2.8]{-8}$ we require some kind of agreement about how to take rational powers of a negative number when the power is not an integer divided by an odd integer. One convention says that $$ (-8)^{5/12} = \left(8\cos\pi + i8\sin\pi\right)^{5/12} = 8^{5/12}\cos\tfrac{5\pi}{12} + i8^{5/12}\sin\tfrac{5\pi}{12} $$ and $$ (-8)^{5/14} = \left(8\cos\pi + i8\sin\pi\right)^{5/14} = 8^{5/14}\cos\tfrac{5\pi}{14} + i8^{5/14}\sin\tfrac{5\pi}{14}. $$ But if we're going to do this for $(-8)^{5/12}$ and $(-8)^{5/14},$ it raises the question why we do not apply the same kind of arithmetic to $(-8)^{5/11}$ and $(-8)^{5/13}.$ Many authors avoid this issue by saying that you simply cannot take the $5/12$ or $5/14$ power of a negative number. 
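The convention described above (take the principal complex value) can be checked with Python's `cmath`; as the answer notes, this is only one possible branch choice:

```python
import cmath
import math

def principal_power(z, r):
    """Principal value of z^r, i.e. exp(r * Log z) with the principal complex log."""
    return cmath.exp(r * cmath.log(z))

w = principal_power(-8, 5 / 12)
# Polar form used in the answer: 8^{5/12} (cos(5*pi/12) + i sin(5*pi/12))
expected = 8 ** (5 / 12) * complex(math.cos(5 * math.pi / 12),
                                   math.sin(5 * math.pi / 12))
print(w, expected)
```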
When the exponent $r$ is a real number, not necessarily rational, if $x > 0$ we could define $x^r$ as the smallest number that is greater than $x^p$ for every rational number $p$ such that $p < r,$ or the largest number that is less than $x^q$ for every rational number $q$ such that $q > r.$ Another alternative is to construct the exponential function $\exp(x) = e^x$ and the natural logarithm function $\log(x),$ then define $$ x^r = \exp(r \log(x)). $$ The answers to What does $2^x$ really mean when $x$ is not an integer? discuss both of these approaches as well as some other ideas. Although floating-point numbers in a computer are actually rational numbers, so that in theory we could raise a number to a floating-point power using the techniques I described for rational powers, this is not really a practical method and computer math libraries tend to calculate $\exp(r \log(x))$ instead. (They would implement $\exp$ and $\log$ functions for other purposes anyway, so it is simply easier and more efficient to use those functions for this purpose too.) But the last few paragraphs apply only for positive numbers $x.$ When $x < 0,$ there are two answers one typically finds to the question of raising $x$ to an irrational power: Don't do it. Generalize to complex powers of complex numbers. The approach for complex powers of complex numbers is similar to the formula $\exp(r \log(x)),$ but not quite as simple. The difficulty is that there are many possible values of $\log(z)$ when $z$ is a complex number. Therefore it is necessary to come up with a rule for picking just one value of $\ln(z)$ for any complex number $z.$ If we say $\operatorname{Log}(z)$ is the value of $\log(z)$ selected according to the rule, then we can write $$ z^r = \exp(r \operatorname{Log}(z)). $$ The "rule" typically involves something called a branch cut, for example as discussed in the answers to Determination of complex logarithm and How do I formally define complex exponentiation? 
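Both definitions mentioned above can be illustrated numerically: rational exponents approaching an irrational one from below, and the exp/log route that libraries typically take. The specific values $x = 2$, $r = \pi$ are mine, chosen only for illustration:

```python
import math
from fractions import Fraction

x, r = 2.0, math.pi  # compute x^r for an irrational exponent r

# Rational approximations p <= r from below: x^p climbs toward x^r (for x > 1).
for denom in (10, 1000, 100000):
    p = Fraction(math.floor(r * denom), denom)
    print(float(p), x ** float(p))

# The exp/log route that math libraries tend to use internally.
print(math.exp(r * math.log(x)), x ** r)
```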
The way I attempted to define numbers such as $(-8)^{5/12}$ earlier amounted to a certain kind of branch cut; using such a branch cut, one could define a general real power of a negative number so that (for example) $$ (-8)^r = \left(8\cos\pi + i8\sin\pi\right)^r = 8^r\cos(r\pi) + i8^r\sin(r\pi). $$ This leads to the conclusion that $$ (-8)^{1/3} = 8^{1/3}\cos\tfrac\pi3 + i8^{1/3}\sin\tfrac\pi3 = 1 + i\sqrt3. $$ This does not look much like $-2,$ the result we might have liked to see for $\sqrt[3]{-8},$ so if you really want to define general real powers of negative numbers you may have to decide that $\sqrt[3]x$ is a different function than $x^{1/3}$ after all, giving up on the idea that $\sqrt[r]x$ could be a general function of real numbers $r$ and $x.$ These limitations carry over in to computer arithmetic. In C++, for example, the function std::pow for floating-point powers of floating-point numbers rules out taking any non-integer power of a negative number, so you cannot use std::pow to compute $n$th roots of negative numbers. But there is another function, std::cbrt, that allows you to take cube roots of negative numbers. You could extend that idea to other odd integer powers safely enough.
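Python's standard library splits things much the same way as the C++ functions described above; a small sketch of the behavior (worth verifying on your own interpreter):

```python
import math

# Like std::pow, math.pow refuses a non-integer power of a negative base.
try:
    math.pow(-8.0, 1.0 / 3.0)
except ValueError as e:
    print("math.pow refused:", e)

def real_cbrt(x):
    """Real cube root for either sign, in the spirit of std::cbrt."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

print(real_cbrt(-8.0))    # close to -2.0
print((-8) ** (1 / 3))    # Python's ** returns the principal complex value instead
```

Note that the last line reproduces the $1 + i\sqrt3$ value from the discussion above: the principal value and the real cube root genuinely disagree.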
{"set_name": "stack_exchange", "score": 2, "question_id": 2752970}
TITLE: $\int(x^{12}+x^8+x^4)(2x^8+3x^4+6)^{1/4}dx$ QUESTION [1 upvotes]: $\int(x^{12}+x^8+x^4)(2x^8+3x^4+6)^{1/4}dx$ I tried to solve this question but had no luck. My try: $$\int(x^{12}+x^8+x^4)(2x^8+3x^4+6)^{1/4}dx=\int x^4(x^8+x^4+1)(2x^8+3x^4+6)^{1/4}dx\\ =\int x^4(x^8+x^4+1)x^2(2+3x^{-4}+6x^{-8})^{1/4}dx$$ Now I got stuck; please help me reach the answer. The answer is $$\frac{x^5}{30}(2x^8+3x^4+6)^{\frac54}+C$$ REPLY [1 votes]: We start by factoring out $x^4$: $$ (x^{12}+x^8+x^4)(2x^8+3x^4+6)^{1/4}=x^4(x^{8}+x^4+1)(2x^8+3x^4+6)^{1/4}. $$ Next, we write $$ x^{8}+x^4+1=\frac{1}{6}(6+3x^4+2x^8)+\frac{1}{2}x^4+\frac{2}{3}x^8, $$ so the integrand can be written as $$ \Bigl(\frac{1}{2}x^8+\frac{2}{3}x^{12}\Bigr)(2x^8+3x^4+6)^{1/4} +\frac{1}{6}x^4(2x^8+3x^4+6)^{5/4}, $$ or, factoring out $x^5/24$ in the first term, $$ \frac{x^5}{24}\Bigl(12x^3+16x^{7}\Bigr)(2x^8+3x^4+6)^{1/4} +\frac{1}{6}x^4(2x^8+3x^4+6)^{5/4}. $$ Hooray (this is really lucky!): this is the derivative of a product. Since $$ D(2x^8+3x^4+6)=16x^7+12x^3, $$ we find that the expression above is $$ \frac{x^5}{30}D\bigl[(2x^8+3x^4+6)^{5/4}\bigr]+\bigl[D(x^5/30)\bigr](2x^8+3x^4+6)^{5/4}, $$ i.e. $$ D\Bigl[\frac{x^5}{30}(2x^8+3x^4+6)^{5/4}\Bigr]. $$ Hence, $$ \int (x^{12}+x^8+x^4)(2x^8+3x^4+6)^{1/4}\,dx=\frac{x^5}{30}(2x^8+3x^4+6)^{5/4}+c. $$
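Since the answer amounts to recognizing a product-rule derivative, a quick numerical sanity check is easy: differentiate the claimed antiderivative and compare it with the integrand. A short sketch:

```python
def integrand(x):
    return (x**12 + x**8 + x**4) * (2 * x**8 + 3 * x**4 + 6) ** 0.25

def antiderivative(x):
    return x**5 / 30 * (2 * x**8 + 3 * x**4 + 6) ** 1.25

# Central-difference derivative of the antiderivative should match the integrand.
h = 1e-6
for x in (0.5, 1.0, 1.3):
    approx = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    print(x, approx, integrand(x))
```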
{"set_name": "stack_exchange", "score": 1, "question_id": 1429571}
TITLE: Sum of sums of elements of subsets QUESTION [1 upvotes]: Let $n \ge 1$ be an integer. For each subset $S \subseteq \{1,2,\ldots,3n\}$, let $f(S)$ be the sum of the elements of $S$, with $f(\emptyset) = 0$. Determine, as a function of $n$, the sum $$\sum_{\substack{S \subseteq \{1,2,\ldots,3n\} \\ 3 \mid f(S)}} f(S),$$ where $S$ runs through all subsets of $\{1,2,\ldots,3n\}$ such that $f(S)$ is a multiple of $3$. Please help me; I'm really stuck on this one. Maybe it's induction, but I don't know how to find the general formula for $n$; the cases $n=1,2,3$ are trivial. REPLY [1 votes]: I don't know the answer, but here's an approach that might help: Let $A=\{1,\ldots,3n\}$ and let $M = \{S\subseteq A : f(S) \text{ is a multiple of } 3\}$. Notice that for any $S \subseteq A$ we have $f(S) + f(S^c) = f(A) = 3n(3n+1)/2$, which is itself a multiple of $3$. Hence $S \in M\iff S^c \in M$, so $$\sum_{S\in M} f(S) = \frac{1}{2}\sum_{S\in M} \left[f(S) + f(S^c)\right] = \frac{3n(3n+1)}{4}\big|M\big|.$$ So the problem is equivalent to finding the size of the set $M$. I don't know if that's easier.
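The reduction to $|M|$ is easy to sanity-check by brute force for small $n$. A short sketch (exponential in $n$, so only for tiny cases; the function name is mine):

```python
from itertools import combinations

def restricted_sum(n):
    """Sum of f(S) over all subsets S of {1,...,3n} with f(S) divisible by 3."""
    elems = range(1, 3 * n + 1)
    total = 0
    count_M = 0
    for k in range(3 * n + 1):
        for subset in combinations(elems, k):
            fs = sum(subset)
            if fs % 3 == 0:
                total += fs
                count_M += 1
    # The answer's identity: total == 3n(3n+1)/4 * |M|
    assert total * 4 == 3 * n * (3 * n + 1) * count_M
    return total

for n in (1, 2, 3):
    print(n, restricted_sum(n))
```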
{"set_name": "stack_exchange", "score": 1, "question_id": 3892479}
TITLE: Prove the inequality $\frac{a+c}{a+b}+\frac{b+d}{b+c}+\frac{c+a}{c+d}+\frac{d+b}{d+a}\geq 4$ QUESTION [4 upvotes]: $a, b, c, d$ are positive reals. How would I prove the inequality $$\frac{a+c}{a+b}+\frac{b+d}{b+c}+\frac{c+a}{c+d}+\frac{d+b}{d+a} \geq 4\,?$$ I have tried using the rearrangement inequality with $a\leq b\leq c\leq d$, but it doesn't seem to work well. Any hints, please? REPLY [3 votes]: By Cauchy-Schwarz (in the form $\frac{1}{x}+\frac{1}{y}\geq\frac{4}{x+y}$), $$\sum_{cyc}\frac{a+c}{a+b}=(a+c)\left(\frac{1}{a+b}+\frac{1}{c+d}\right)+(b+d)\left(\frac{1}{b+c}+\frac{1}{d+a}\right)\geq$$ $$\geq\frac{4(a+c)}{a+b+c+d}+\frac{4(b+d)}{b+c+d+a}=4.$$
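The Cauchy-Schwarz argument can be spot-checked numerically; this sketch samples random positive tuples and also tries the equality case $a=c$, $b=d$ (the sampling setup is mine, for illustration):

```python
import random

def lhs(a, b, c, d):
    return ((a + c) / (a + b) + (b + d) / (b + c)
            + (c + a) / (c + d) + (d + b) / (d + a))

random.seed(0)
worst = min(lhs(*(random.uniform(0.01, 10.0) for _ in range(4)))
            for _ in range(10000))
print("smallest value seen:", worst)       # stays at or above 4 (up to rounding)
print("equality case:", lhs(1, 2, 1, 2))   # a = c, b = d gives 4 (up to rounding)
```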
{"set_name": "stack_exchange", "score": 4, "question_id": 1766378}
theory Alternative_Semantics imports Semantics begin context begin (* the first thing (I think) we have to do is alter the Seq rule / merge it with NoMatch. Its properties make it hard to work with\<dots> *) private inductive iptables_bigstep_ns :: "'a ruleset \<Rightarrow> ('a, 'p) matcher \<Rightarrow> 'p \<Rightarrow> 'a rule list \<Rightarrow> state \<Rightarrow> state \<Rightarrow> bool" ("_,_,_\<turnstile> \<langle>_, _\<rangle> \<Rightarrow>\<^sub>s _" [60,60,60,20,98,98] 89) for \<Gamma> and \<gamma> and p where skip: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>[], t\<rangle> \<Rightarrow>\<^sub>s t" | accept: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m Accept # rs, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision FinalAllow" | drop: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m Drop # rs, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision FinalDeny" | reject: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m Reject # rs, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision FinalDeny" | log: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m Log # rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t" | empty: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m Empty # rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t" | nms: "\<not> matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m a # rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t" | (*decision: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Decision X\<rangle> 
\<Rightarrow>\<^sub>s Decision X" |*) call_return: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = Some (rs\<^sub>1 @ Rule m' Return # rs\<^sub>2); matches \<gamma> m' p; \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>1, Undecided\<rangle> \<Rightarrow>\<^sub>s Undecided; \<Gamma>,\<gamma>,p\<turnstile> \<langle>rrs, Undecided\<rangle> \<Rightarrow>\<^sub>s t \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m (Call chain) # rrs, Undecided\<rangle> \<Rightarrow>\<^sub>s t" | call_result: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = Some rs; \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision X \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m (Call chain) # rrs, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision X" | call_no_result: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = Some rs; \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>s Undecided; \<Gamma>,\<gamma>,p\<turnstile> \<langle>rrs, Undecided\<rangle> \<Rightarrow>\<^sub>s t \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>Rule m (Call chain) # rrs, Undecided\<rangle> \<Rightarrow>\<^sub>s t" private lemma a: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>s t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t" apply(induction rule: iptables_bigstep_ns.induct; (simp add: iptables_bigstep.intros;fail)?) 
apply (meson iptables_bigstep.decision iptables_bigstep.accept seq_cons) apply (meson iptables_bigstep.decision iptables_bigstep.drop seq_cons) apply (meson iptables_bigstep.decision iptables_bigstep.reject seq_cons) apply (meson iptables_bigstep.log seq_cons) apply (meson iptables_bigstep.empty seq_cons) apply (meson nomatch seq_cons) subgoal using iptables_bigstep.call_return seq_cons by fastforce apply (meson iptables_bigstep.decision iptables_bigstep.call_result seq_cons) apply (meson iptables_bigstep.call_result seq'_cons) done private lemma empty_rs_stateD: assumes "\<Gamma>,\<gamma>,p\<turnstile> \<langle>[], s\<rangle> \<Rightarrow>\<^sub>s t" shows "t = s" using assms by(cases rule: iptables_bigstep_ns.cases) private lemma decided: "\<lbrakk>\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>1, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision X\<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>1@rs\<^sub>2, Undecided\<rangle> \<Rightarrow>\<^sub>s Decision X" proof(induction rs\<^sub>1) case Nil then show ?case by (fast dest: empty_rs_stateD) next case (Cons a rs\<^sub>1) from Cons.prems show ?case by(cases rule: iptables_bigstep_ns.cases; simp add: Cons.IH iptables_bigstep_ns.intros) qed private lemma decided_determ: "\<lbrakk>\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>1, s\<rangle> \<Rightarrow>\<^sub>s t; s = Decision X\<rbrakk> \<Longrightarrow> t = Decision X" by(induction rule: iptables_bigstep_ns.induct; (simp add: iptables_bigstep_ns.intros;fail)?) 
private lemma seq_ns: "\<lbrakk>\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>1, Undecided\<rangle> \<Rightarrow>\<^sub>s t; \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>2, t\<rangle> \<Rightarrow>\<^sub>s t'\<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs\<^sub>1@rs\<^sub>2, Undecided\<rangle> \<Rightarrow>\<^sub>s t'" proof (cases t, goal_cases) case 1 from 1(1,2) show ?case unfolding 1 proof(induction rs\<^sub>1) case (Cons a rs\<^sub>3) then show ?case apply - apply(rule iptables_bigstep_ns.cases[OF Cons.prems(1)]; simp add: iptables_bigstep_ns.intros) done qed simp next case (2 X) hence "t' = Decision X" by (simp add: decided_determ) from 2(1) show ?case by (simp add: "2"(3) \<open>t' = Decision X\<close> decided) qed private lemma b: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t \<Longrightarrow> s = Undecided \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>s t" apply(induction rule: iptables_bigstep.induct; (simp add: iptables_bigstep_ns.intros;fail)?) 
apply (metis decided decision seq_ns seq_progress skipD state.exhaust) apply(metis call_no_result iptables_bigstep_ns.call_result iptables_bigstep_ns.skip state.exhaust) done private inductive iptables_bigstep_nz :: "'a ruleset \<Rightarrow> ('a, 'p) matcher \<Rightarrow> 'p \<Rightarrow> 'a rule list \<Rightarrow> state \<Rightarrow> bool" ("_,_,_\<turnstile> _ \<Rightarrow>\<^sub>z _" [60,60,60,20,98] 89) for \<Gamma> and \<gamma> and p where skip: "\<Gamma>,\<gamma>,p \<turnstile> [] \<Rightarrow>\<^sub>z Undecided" | accept: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Accept # rs \<Rightarrow>\<^sub>z Decision FinalAllow" | drop: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Drop # rs \<Rightarrow>\<^sub>z Decision FinalDeny" | reject: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Reject # rs \<Rightarrow>\<^sub>z Decision FinalDeny" | log: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Log # rs \<Rightarrow>\<^sub>z t" | empty: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Empty # rs \<Rightarrow>\<^sub>z t" | nms: "\<not> matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m a # rs \<Rightarrow>\<^sub>z t" | call_return: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = Some (rs\<^sub>1 @ Rule m' Return # rs\<^sub>2); matches \<gamma> m' p; \<Gamma>,\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided; \<Gamma>,\<gamma>,p\<turnstile> rrs \<Rightarrow>\<^sub>z t \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m (Call chain) # rrs \<Rightarrow>\<^sub>z t" | call_result: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = 
Some rs; \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z Decision X \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m (Call chain) # rrs \<Rightarrow>\<^sub>z Decision X" | call_no_result: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = Some rs; \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z Undecided; \<Gamma>,\<gamma>,p\<turnstile> rrs \<Rightarrow>\<^sub>z t \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m (Call chain) # rrs \<Rightarrow>\<^sub>z t" private lemma c: "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow>\<^sub>s t" by(induction rule: iptables_bigstep_nz.induct; simp add: iptables_bigstep_ns.intros) private lemma d: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow>\<^sub>s t \<Longrightarrow> s = Undecided \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t" by(induction rule: iptables_bigstep_ns.induct; simp add: iptables_bigstep_nz.intros) inductive iptables_bigstep_r :: "'a ruleset \<Rightarrow> ('a, 'p) matcher \<Rightarrow> 'p \<Rightarrow> 'a rule list \<Rightarrow> state \<Rightarrow> bool" ("_,_,_\<turnstile> _ \<Rightarrow>\<^sub>r _" [60,60,60,20,98] 89) for \<Gamma> and \<gamma> and p where skip: "\<Gamma>,\<gamma>,p \<turnstile> [] \<Rightarrow>\<^sub>r Undecided" | accept: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Accept # rs \<Rightarrow>\<^sub>r Decision FinalAllow" | drop: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Drop # rs \<Rightarrow>\<^sub>r Decision FinalDeny" | reject: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Reject # rs \<Rightarrow>\<^sub>r Decision FinalDeny" | return: "matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Return # rs \<Rightarrow>\<^sub>r Undecided" | log: 
"\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Log # rs \<Rightarrow>\<^sub>r t" | empty: "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Empty # rs \<Rightarrow>\<^sub>r t" | nms: "\<not> matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m a # rs \<Rightarrow>\<^sub>r t" | call_result: "\<lbrakk> matches \<gamma> m p; \<Gamma> chain = Some rs; \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r Decision X \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m (Call chain) # rrs \<Rightarrow>\<^sub>r Decision X" | call_no_result: "\<lbrakk> \<Gamma> chain = Some rs; \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r Undecided; \<Gamma>,\<gamma>,p\<turnstile> rrs \<Rightarrow>\<^sub>r t \<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m (Call chain) # rrs \<Rightarrow>\<^sub>r t" private lemma returning: "\<lbrakk>\<Gamma>,\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>r Undecided; matches \<gamma> m' p\<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs\<^sub>1 @ Rule m' Return # rs\<^sub>2 \<Rightarrow>\<^sub>r Undecided" proof(induction rs\<^sub>1) case Nil then show ?case by (simp add: return) next case (Cons a rs\<^sub>3) then show ?case by - (rule iptables_bigstep_r.cases[OF Cons.prems(1)]; simp add: iptables_bigstep_r.intros) qed private lemma e: "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> s = Undecided \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t" by(induction rule: iptables_bigstep_nz.induct; simp add: iptables_bigstep_r.intros returning) definition "no_call_to c rs \<equiv> (\<forall>r \<in> set rs. 
case get_action r of Call c' \<Rightarrow> c \<noteq> c' | _ \<Rightarrow> True)" definition "all_chains p \<Gamma> rs \<equiv> (p rs \<and> (\<forall>l rs. \<Gamma> l = Some rs \<longrightarrow> p rs))" private lemma all_chains_no_call_upd: "all_chains (no_call_to c) \<Gamma> rs \<Longrightarrow> (\<Gamma>(c \<mapsto> x)),\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t" proof (rule iffI, goal_cases) case 1 from 1(2,1) show ?case by(induction rule: iptables_bigstep_nz.induct; (simp add: iptables_bigstep_nz.intros no_call_to_def all_chains_def split: if_splits;fail)?) next case 2 from 2(2,1) show ?case by(induction rule: iptables_bigstep_nz.induct; (simp add: iptables_bigstep_nz.intros no_call_to_def all_chains_def split: action.splits;fail)?) qed lemma updated_call: "\<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> matches \<gamma> m p \<Longrightarrow> \<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z t" by(cases t; simp add: iptables_bigstep_nz.call_no_result iptables_bigstep_nz.call_result iptables_bigstep_nz.skip) private lemma shows log_nz: "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Log # rs \<Rightarrow>\<^sub>z t" and empty_nz: "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> Rule m Empty # rs \<Rightarrow>\<^sub>z t" by (meson iptables_bigstep_nz.log iptables_bigstep_nz.empty iptables_bigstep_nz.nms)+ private lemma nz_empty_rs_stateD: assumes "\<Gamma>,\<gamma>,p\<turnstile> [] \<Rightarrow>\<^sub>z t" shows "t = Undecided" using assms by(cases rule: iptables_bigstep_nz.cases) private lemma upd_callD: "\<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z t \<Longrightarrow> matches \<gamma> m p \<Longrightarrow> (\<Gamma>(c 
\<mapsto> rs),\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>z t \<or> (\<exists>rs\<^sub>1 rs\<^sub>2 m'. rs = rs\<^sub>1 @ Rule m' Return # rs\<^sub>2 \<and> matches \<gamma> m' p \<and> \<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided \<and> t = Undecided))" by(subst (asm) iptables_bigstep_nz.simps) (auto dest!: nz_empty_rs_stateD) private lemma partial_fun_upd: "(f(x \<mapsto> y)) x = Some y" by(fact fun_upd_same) lemma f: "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<Longrightarrow> matches \<gamma> m p \<Longrightarrow> all_chains (no_call_to c) \<Gamma> rs \<Longrightarrow> (\<Gamma>(c \<mapsto> rs)),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z t" proof(induction rule: iptables_bigstep_r.induct; (simp add: iptables_bigstep_nz.intros;fail)?) case (return m rs) then show ?case by (metis append_Nil fun_upd_same iptables_bigstep_nz.call_return iptables_bigstep_nz.skip) next case (log rs t mx) have ac: "all_chains (no_call_to c) \<Gamma> rs" using log(4) by(simp add: all_chains_def no_call_to_def) have *: "\<Gamma>(c \<mapsto> Rule mx Log # rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z Undecided" if "rs = rs\<^sub>1 @ Rule m' Return # rs\<^sub>2" "matches \<gamma> m' p" "\<Gamma>(c \<mapsto> rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided" for rs\<^sub>1 rs\<^sub>2 m' proof - have ac2: "all_chains (no_call_to c) \<Gamma> rs\<^sub>1" using log(4) that by(simp add: all_chains_def no_call_to_def) hence "\<Gamma>(c \<mapsto> Rule mx Log # rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided" using that(3) unfolding that by(simp add: all_chains_no_call_upd) hence "\<Gamma>(c \<mapsto> Rule mx Log # rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> Rule mx Log # rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided" by 
(simp add: log_nz) thus ?thesis using that(1,2) by(elim iptables_bigstep_nz.call_return[where rs\<^sub>2=rs\<^sub>2, OF \<open>matches \<gamma> m p\<close>, rotated]; simp add: iptables_bigstep_nz.skip) qed from log(2)[OF log(3) ac] show ?case apply - apply(drule upd_callD[OF _ \<open>matches \<gamma> m p\<close>]) apply(erule disjE) subgoal apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) apply(rule log_nz) apply(simp add: ac all_chains_no_call_upd) done using * by blast next case (empty rs t mx) text\<open>analogous\<close> (*<*) have ac: "all_chains (no_call_to c) \<Gamma> rs" using empty(4) by(simp add: all_chains_def no_call_to_def) have *: "\<Gamma>(c \<mapsto> Rule mx Empty # rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z Undecided" if "rs = rs\<^sub>1 @ Rule m' Return # rs\<^sub>2" "matches \<gamma> m' p" "\<Gamma>(c \<mapsto> rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided" for rs\<^sub>1 rs\<^sub>2 m' proof - have ac2: "all_chains (no_call_to c) \<Gamma> rs\<^sub>1" using empty(4) that by(simp add: all_chains_def no_call_to_def) hence "\<Gamma>(c \<mapsto> Rule mx Empty # rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided" using that(3) unfolding that by(simp add: all_chains_no_call_upd) hence "\<Gamma>(c \<mapsto> Rule mx Empty # rs\<^sub>1 @ Rule m' Return # rs\<^sub>2),\<gamma>,p\<turnstile> Rule mx Empty # rs\<^sub>1 \<Rightarrow>\<^sub>z Undecided" by (simp add: empty_nz) thus ?thesis using that(1,2) by(elim iptables_bigstep_nz.call_return[where rs\<^sub>2=rs\<^sub>2, OF \<open>matches \<gamma> m p\<close>, rotated]; simp add: iptables_bigstep_nz.skip) qed from empty(2)[OF empty(3) ac] show ?case apply - apply(drule upd_callD[OF _ \<open>matches \<gamma> m p\<close>]) apply(erule disjE) subgoal apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) 
apply(rule empty_nz) apply(simp add: ac all_chains_no_call_upd) done using * by blast (*>*) next case (nms m' rs t a) have ac: "all_chains (no_call_to c) \<Gamma> rs" using nms(5) by(simp add: all_chains_def no_call_to_def) from nms.IH[OF nms(4) ac] show ?case apply - apply(drule upd_callD[OF _ \<open>matches \<gamma> m p\<close>]) apply(erule disjE) subgoal apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) apply(rule iptables_bigstep_nz.nms[OF \<open>\<not> matches \<gamma> m' p\<close>]) apply(simp add: ac all_chains_no_call_upd) done apply safe subgoal for rs\<^sub>1 rs\<^sub>2 r apply(subgoal_tac "all_chains (no_call_to c) \<Gamma> rs\<^sub>1") (* I can do it another way, too. *) apply(subst (asm) all_chains_no_call_upd, assumption) apply(subst (asm) all_chains_no_call_upd[symmetric], assumption) apply(drule iptables_bigstep_nz.nms[where a=a, OF \<open>\<not> matches \<gamma> m' p\<close>]) apply(erule (1) iptables_bigstep_nz.call_return[where rs\<^sub>2=rs\<^sub>2, OF \<open>matches \<gamma> m p\<close>, rotated]) apply(insert ac; simp add: all_chains_def no_call_to_def iptables_bigstep_nz.skip)+ done done next case (call_result m' c' rs X rrs) have acrs: "all_chains (no_call_to c) \<Gamma> rs" using call_result(2,6) by(simp add: all_chains_def no_call_to_def) have cc: "c \<noteq> c'" (* okay, this one is a bit nifty\<dots> *) using call_result(6) by(simp add: all_chains_def no_call_to_def) have "\<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z Decision X" using call_result.IH call_result.prems(1) acrs by blast then show ?case apply - apply(drule upd_callD[OF _ \<open>matches \<gamma> m p\<close>]) apply(erule disjE) subgoal apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) apply(rule iptables_bigstep_nz.call_result[where rs=rs, OF \<open>matches \<gamma> m' p\<close> ]) apply(simp add: cc[symmetric] call_result(2);fail) apply(simp add: acrs all_chains_no_call_upd;fail) done apply safe (* oh.
Didn't expect that. :) *) done next case (call_no_result c' rs rrs t m') have acrs: "all_chains (no_call_to c) \<Gamma> rs" using call_no_result(1,7) by(simp add: all_chains_def no_call_to_def) have acrrs: "all_chains (no_call_to c) \<Gamma> rrs" using call_no_result(7) by(simp add: all_chains_def no_call_to_def) have acrs1: "all_chains (no_call_to c) \<Gamma> rs\<^sub>1" if "rs = rs\<^sub>1 @ rs\<^sub>2" for rs\<^sub>1 rs\<^sub>2 using acrs that by(simp add: all_chains_def no_call_to_def) have acrrs1: "all_chains (no_call_to c) \<Gamma> rs\<^sub>1" if "rrs = rs\<^sub>1 @ rs\<^sub>2" for rs\<^sub>1 rs\<^sub>2 using acrrs that by(simp add: all_chains_def no_call_to_def) have cc: "c \<noteq> c'" (* okay, this one is a bit nifty\<dots> *) using call_no_result(7) by(simp add: all_chains_def no_call_to_def) have *: "\<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z Undecided" using call_no_result.IH call_no_result.prems(1) acrs by blast have **: "\<Gamma>(c \<mapsto> rrs),\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>z t" by (simp add: acrrs call_no_result.IH(2) call_no_result.prems(1)) show ?case proof(cases \<open>matches \<gamma> m' p\<close>) case True from call_no_result(5)[OF \<open>matches \<gamma> m p\<close> acrrs] * show ?thesis apply - apply(drule upd_callD[OF _ \<open>matches \<gamma> m p\<close>])+ apply(elim disjE) (* 4 sg *) apply safe subgoal apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) apply(rule iptables_bigstep_nz.call_no_result[where rs=rs, OF \<open>matches \<gamma> m' p\<close> ]) apply(simp add: cc[symmetric] call_no_result(1);fail) apply(simp add: acrs all_chains_no_call_upd;fail) apply(simp add: acrrs all_chains_no_call_upd) done subgoal for rs\<^sub>1 rs\<^sub>2 r apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) apply(rule call_return[OF \<open>matches \<gamma> m' p\<close>]) apply(simp add: cc[symmetric] call_no_result(1);fail) apply(simp;fail) 
apply(simp add: acrs1 all_chains_no_call_upd;fail) apply(simp add: acrrs all_chains_no_call_upd) done subgoal for rs\<^sub>1 rs\<^sub>2 r apply(rule call_return[where rs\<^sub>1="Rule m' (Call c') # rs\<^sub>1", OF \<open>matches \<gamma> m p\<close>]) apply(simp;fail) apply(simp;fail) apply(rule iptables_bigstep_nz.call_no_result[OF \<open>matches \<gamma> m' p\<close>]) apply(simp add: cc[symmetric] call_no_result(1);fail) apply (meson acrs all_chains_no_call_upd) apply(subst all_chains_no_call_upd; simp add: acrrs1 all_chains_no_call_upd; fail) apply (simp add: iptables_bigstep_nz.skip;fail) done subgoal for rrs\<^sub>1 rs\<^sub>1 rrs\<^sub>2 rs\<^sub>2 rr r apply(rule call_return[where rs\<^sub>1="Rule m' (Call c') # rrs\<^sub>1", OF \<open>matches \<gamma> m p\<close>]) apply(simp;fail) apply(simp;fail) apply(rule call_return[OF \<open>matches \<gamma> m' p\<close>]) apply(simp add: cc[symmetric] call_no_result(1);fail) apply blast apply (meson acrs1 all_chains_no_call_upd) apply(subst all_chains_no_call_upd; simp add: acrrs1 all_chains_no_call_upd; fail) apply (simp add: iptables_bigstep_nz.skip;fail) done done next case False from iptables_bigstep_nz.nms[OF False] ** show ?thesis apply - apply(drule upd_callD[OF _ \<open>matches \<gamma> m p\<close>])+ apply(elim disjE) subgoal apply(rule updated_call[OF _ \<open>matches \<gamma> m p\<close>]) apply(rule iptables_bigstep_nz.nms[OF False]) apply(simp add: acrrs all_chains_no_call_upd) done apply safe subgoal for rs\<^sub>1 rs\<^sub>2 r apply(rule call_return[where rs\<^sub>1="Rule m' (Call c') # rs\<^sub>1", OF \<open>matches \<gamma> m p\<close>]) apply(simp;fail) apply(simp;fail) apply(rule iptables_bigstep_nz.nms[OF False]) apply(subst all_chains_no_call_upd; simp add: acrrs1 all_chains_no_call_upd; fail) apply(simp add: iptables_bigstep_nz.skip;fail) done done qed qed lemma r_skip_inv: "\<Gamma>,\<gamma>,p\<turnstile> [] \<Rightarrow>\<^sub>r t \<Longrightarrow> t = Undecided" by(subst (asm) 
iptables_bigstep_r.simps) auto (* why did I do all this? essentially, because I thought this should be derivable: *) lemma r_call_eq: "\<Gamma> c = Some rs \<Longrightarrow> matches \<gamma> m p \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> [Rule m (Call c)] \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t" (* and yes, there is a more general form of this lemma, but\<dots> meh. *) apply(rule iffI) subgoal apply(subst (asm) iptables_bigstep_r.simps) apply(auto dest: r_skip_inv) done subgoal apply(cases t) apply(erule iptables_bigstep_r.call_no_result) apply(simp;fail) apply(simp add: iptables_bigstep_r.skip;fail) apply(simp) apply(erule (2) iptables_bigstep_r.call_result) done by - (* we can make the same formulation for the original semantics if we tread a bit more carefully *) lemma call_eq: "\<Gamma> c = Some rs \<Longrightarrow> matches \<gamma> m p \<Longrightarrow> \<forall>r \<in> set rs. get_action r \<noteq> Return \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>[Rule m (Call c)],s\<rangle> \<Rightarrow> t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs,s\<rangle> \<Rightarrow> t" apply(rule iffI) subgoal apply(subst (asm) iptables_bigstep.simps) apply (auto) apply (simp add: decision) apply(erule rules_singleton_rev_E; simp; metis callD in_set_conv_decomp rule.sel(2) skipD) done by (metis decision iptables_bigstep.call_result iptables_bigstep_deterministic state.exhaust) theorem r_eq_orig: "\<lbrakk>all_chains (no_call_to c) \<Gamma> rs; \<Gamma> c = Some rs\<rbrakk> \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>[Rule MatchAny (Call c)], Undecided\<rangle> \<Rightarrow> t" apply(rule iffI) subgoal apply(drule f[where m=MatchAny, THEN c, THEN a]) apply(simp;fail) apply(simp;fail) apply (metis fun_upd_triv) done subgoal apply(subst r_call_eq[where m=MatchAny, symmetric]) 
apply(simp;fail) apply(simp;fail) apply(erule b[THEN d, THEN e, OF _ refl refl refl]) done done lemma r_no_call: "\<Gamma>,\<gamma>,p\<turnstile> Rule MatchAny (Call c)#rs \<Rightarrow>\<^sub>r t \<Longrightarrow> \<Gamma> c = None \<Longrightarrow> False" by(subst (asm) iptables_bigstep_r.simps) simp lemma no_call: "\<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, s\<rangle> \<Rightarrow> t \<Longrightarrow> rs = [Rule MatchAny (Call c)] \<Longrightarrow> s = Undecided \<Longrightarrow> \<Gamma> c = None \<Longrightarrow> False" by (meson b d e r_no_call) (*by(induction rule: iptables_bigstep.induct; clarsimp) (metis list_app_singletonE skipD)*) private corollary r_eq_orig': assumes "\<forall>rs \<in> ran \<Gamma>. no_call_to c rs" shows "\<Gamma>,\<gamma>,p\<turnstile> [Rule MatchAny (Call c)] \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>[Rule MatchAny (Call c)], Undecided\<rangle> \<Rightarrow> t" (* if you really like symmetry *) proof - show ?thesis proof (cases "\<Gamma> c") fix rs assume "\<Gamma> c = Some rs" moreover hence "all_chains (no_call_to c) \<Gamma> rs" using assms by (simp add: all_chains_def ranI) ultimately show ?thesis by(simp add: r_call_eq r_eq_orig) next assume "\<Gamma> c = None" thus ?thesis using r_no_call no_call by metis qed qed (* btw, we can still formulate a seq rules, but we have to tread a bit more carefully *) lemma r_tail: assumes "\<Gamma>,\<gamma>,p\<turnstile> rs1 \<Rightarrow>\<^sub>r Decision X" shows "\<Gamma>,\<gamma>,p\<turnstile> rs1 @ rs2 \<Rightarrow>\<^sub>r Decision X" proof - have "\<Gamma>,\<gamma>,p\<turnstile> rs1 \<Rightarrow>\<^sub>r t \<Longrightarrow> t = Decision X \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs1 @ rs2 \<Rightarrow>\<^sub>r Decision X" for t by(induction rule: iptables_bigstep_r.induct; simp add: iptables_bigstep_r.intros) thus ?thesis using assms by blast qed lemma r_seq: "\<Gamma>,\<gamma>,p\<turnstile> rs1 \<Rightarrow>\<^sub>r Undecided 
\<Longrightarrow> \<forall>r \<in> set rs1. \<not>(get_action r = Return \<and> matches \<gamma> (get_match r) p) \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs2 \<Rightarrow>\<^sub>r t \<Longrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs1 @ rs2 \<Rightarrow>\<^sub>r t" proof(induction rs1) case Nil then show ?case by simp next case (Cons r rs1) have p2: "\<forall>r\<in>set rs1. \<not> (get_action r = Return \<and> matches \<gamma> (get_match r) p)" "\<not>(get_action r = Return \<and> matches \<gamma> (get_match r) p)" by (simp_all add: Cons.prems(2)) from Cons.prems(1) p2(2) Cons.IH[OF _ p2(1) Cons.prems(3)] show ?case by(cases rule: iptables_bigstep_r.cases; simp add: iptables_bigstep_r.intros) qed lemma r_appendD: "\<Gamma>,\<gamma>,p\<turnstile> rs1 @ rs2 \<Rightarrow>\<^sub>r t \<Longrightarrow> \<exists>s. \<Gamma>,\<gamma>,p\<turnstile> rs1 \<Rightarrow>\<^sub>r s" proof(induction rs1) case (Cons r rs1) from Cons.prems Cons.IH show ?case by(cases rule: iptables_bigstep_r.cases) (auto intro: iptables_bigstep_r.intros) qed (meson iptables_bigstep_r.skip) corollary iptables_bigstep_r_eq: assumes "\<forall>rs \<in> ran \<Gamma>. no_call_to c rs" "A = Accept \<or> A = Drop" shows "\<Gamma>,\<gamma>,p\<turnstile> [Rule MatchAny (Call c), Rule MatchAny A] \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>[Rule MatchAny (Call c), Rule MatchAny A], Undecided\<rangle> \<Rightarrow> t" (* if you really like the way we do our analyses *) proof - show ?thesis proof (cases "\<Gamma> c") fix rs assume "\<Gamma> c = Some rs" moreover hence "all_chains (no_call_to c) \<Gamma> rs" using assms by (simp add: all_chains_def ranI) show ?thesis (* if this proof breaks, don't fix it. 
say 'meh' and re-prove this as a corollary of r_eq_orig''' with a stronger assumption *) apply(rule iffI[rotated]) apply(erule seqE_cons) apply(subst (asm) r_eq_orig'[symmetric]) apply (simp add: assms(1);fail) apply (meson assms(1) b d e r_eq_orig' seq'_cons) (* holy shi\<dots> *) apply(frule r_appendD[of _ _ _ "[Rule MatchAny (Call c)]" "[Rule MatchAny A]", simplified]) apply(subst (asm) r_eq_orig') apply (simp add: assms(1);fail) apply(clarsimp) apply(subst (asm) r_eq_orig'[symmetric]) apply (simp add: assms(1);fail) apply(subst (asm)(2) iptables_bigstep_r.simps) apply(subst (asm)(1) iptables_bigstep_r.simps) apply auto apply (metis append_Cons append_Nil assms(1) decision matches.simps(4) r_call_eq r_eq_orig' seq) apply (metis \<open>all_chains (no_call_to c) \<Gamma> rs\<close> calculation iptables_bigstep_deterministic option.inject r_eq_orig state.distinct(1)) subgoal using \<open>all_chains (no_call_to c) \<Gamma> rs\<close> calculation iptables_bigstep_deterministic r_eq_orig by fastforce apply(subst (asm) r_eq_orig[rotated]) apply(assumption) subgoal using \<open>all_chains (no_call_to c) \<Gamma> rs\<close> calculation by simp apply(erule seq'_cons) apply(subst (asm)(1) iptables_bigstep_r.simps) apply(insert assms(2); auto simp add: iptables_bigstep.intros) done next assume "\<Gamma> c = None" thus ?thesis using r_no_call no_call by (metis seqE_cons) qed qed (* now, you don't like that no_call_to assumption? this one's for you: *) lemma ex_no_call: "finite S \<Longrightarrow> \<exists>c. \<forall>(rs :: 'a rule list) \<in> S. no_call_to c rs" (* If you want, you can put in \<open>ran \<Gamma>\<close> for S. *) proof - assume fS: \<open>finite S\<close> define called_c where "called_c rs = {c. \<exists>m. Rule m (Call c) \<in> set rs}" for rs :: "'a rule list" define called_c' where "called_c' rs = set [c. 
r \<leftarrow> rs, c \<leftarrow> (case get_action r of Call c \<Rightarrow> [c] | _ \<Rightarrow> [])]" for rs :: "'a rule list" have cc: "called_c' rs = called_c rs" for rs unfolding called_c'_def called_c_def by(induction rs; simp add: Un_def) (auto; metis rule.collapse) have f: "finite (called_c rs)" for rs unfolding cc[symmetric] called_c'_def by blast have ncc: "no_call_to c rs \<longleftrightarrow> c \<notin> called_c rs" for c rs by(induction rs; auto simp add: no_call_to_def called_c_def split: action.splits) (metis rule.collapse) have isu: "infinite (UNIV :: string set)" by (simp add: infinite_UNIV_listI) have ff: "finite (\<Union>rs \<in> S. called_c rs)" using f fS by simp then obtain c where ne: "c \<notin> (\<Union>rs \<in> S. called_c rs)" by (blast dest: ex_new_if_finite[OF isu]) thus ?thesis by(intro exI[where x=c]) (simp add: ncc) (* stupid way of proving something, once again\<dots> *) qed private lemma ex_no_call': "finite (dom \<Gamma>) \<Longrightarrow> \<exists>c. \<Gamma> c = None \<and> (\<forall>(rs :: 'a rule list) \<in> (ran \<Gamma>). no_call_to c rs)" (* I want a corollary, and I need something a tad stronger\<dots> *) proof - have *: "finite S \<Longrightarrow> (dom M) = S \<Longrightarrow> \<exists>m. M = map_of m" for M S proof(induction arbitrary: M rule: finite.induct) case emptyI then show ?case by(intro exI[where x=Nil]) simp next case (insertI A a) show ?case proof(cases "a \<in> A") (* stupid induction rule *) case True then show ?thesis using insertI by (simp add: insert_absorb) next case False hence "dom (M(a := None)) = A" using insertI.prems by simp from insertI.IH[OF this] obtain m where "M(a := None) = map_of m" .. 
then show ?thesis by(intro exI[where x="(a, the (M a)) # m"]) (simp; metis domIff fun_upd_apply insertCI insertI.prems option.collapse) qed qed (* hm, thought that would give me what I want\<dots> *) have ran_alt: "ran f = (the o f) ` dom f" for f by(auto simp add: ran_def dom_def image_def) assume fD: \<open>finite (dom \<Gamma>)\<close> hence fS: \<open>finite (ran \<Gamma>)\<close> by(simp add: ran_alt) define called_c where "called_c rs = {c. \<exists>m. Rule m (Call c) \<in> set rs}" for rs :: "'a rule list" define called_c' where "called_c' rs = set [c. r \<leftarrow> rs, c \<leftarrow> (case get_action r of Call c \<Rightarrow> [c] | _ \<Rightarrow> [])]" for rs :: "'a rule list" have cc: "called_c' rs = called_c rs" for rs unfolding called_c'_def called_c_def by(induction rs; simp add: Un_def) (auto; metis rule.collapse) have f: "finite (called_c rs)" for rs unfolding cc[symmetric] called_c'_def by blast have ncc: "no_call_to c rs \<longleftrightarrow> c \<notin> called_c rs" for c rs by(induction rs; auto simp add: no_call_to_def called_c_def split: action.splits) (metis rule.collapse) have isu: "infinite (UNIV :: string set)" by (simp add: infinite_UNIV_listI) have ff: "finite (\<Union>rs \<in> ran \<Gamma>. called_c rs)" using f fS by simp hence fff: "finite (dom \<Gamma> \<union> (\<Union>rs \<in> ran \<Gamma>. called_c rs))" using fD by simp then obtain c where ne: "c \<notin> (dom \<Gamma> \<union> (\<Union>rs \<in> ran \<Gamma>. 
called_c rs))" thm ex_new_if_finite by (metis UNIV_I isu set_eqI) thus ?thesis by(fastforce simp add: ncc) qed lemma all_chains_no_call_upd_r: "all_chains (no_call_to c) \<Gamma> rs \<Longrightarrow> (\<Gamma>(c \<mapsto> x)),\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t" proof (rule iffI, goal_cases) case 1 from 1(2,1) show ?case by(induction rule: iptables_bigstep_r.induct; (simp add: iptables_bigstep_r.intros no_call_to_def all_chains_def split: if_splits;fail)?) next case 2 from 2(2,1) show ?case by(induction rule: iptables_bigstep_r.induct; (simp add: iptables_bigstep_r.intros no_call_to_def all_chains_def split: action.splits;fail)?) qed (* in a sense, this is code duplication with Ruleset_Update, but it's different enough that I can't use it. *) lemma all_chains_no_call_upd_orig: "all_chains (no_call_to c) \<Gamma> rs \<Longrightarrow> (\<Gamma>(c \<mapsto> x)),\<gamma>,p\<turnstile> \<langle>rs,s\<rangle> \<Rightarrow> t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs,s\<rangle> \<Rightarrow> t" proof (rule iffI, goal_cases) case 1 from 1(2,1) show ?case by(induction rs s t rule: iptables_bigstep.induct; (simp add: iptables_bigstep.intros no_call_to_def all_chains_def split: if_splits;fail)?) next case 2 from 2(2,1) show ?case by(induction rule: iptables_bigstep.induct; (simp add: iptables_bigstep.intros no_call_to_def all_chains_def split: action.splits;fail)?) qed corollary r_eq_orig''': assumes "finite (ran \<Gamma>)" and "\<forall>r \<in> set rs. get_action r \<noteq> Return" shows "\<Gamma>,\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>,\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> t" proof - from assms have "finite ({rs} \<union> (ran \<Gamma>))" by simp from ex_no_call[OF this] obtain c where c: "(\<forall>rs\<in>ran \<Gamma>. 
no_call_to c rs)" "no_call_to c rs" by blast hence acnc: "all_chains (no_call_to c) \<Gamma> rs" unfolding all_chains_def by (simp add: ranI) have ranaway: "\<forall>rs\<in>ran (\<Gamma>(c \<mapsto> rs)). no_call_to c rs" proof - { (* hammer *) fix rsa :: "'a rule list" assume a1: "rsa \<in> ran (\<Gamma>(c \<mapsto> rs))" have "\<And>R. rs \<in> R \<union> Collect (no_call_to c)" using c(2) by force then have "rsa \<in> ran (\<Gamma>(c := None)) \<union> Collect (no_call_to c)" using a1 by (metis (no_types) Un_iff Un_insert_left fun_upd_same fun_upd_upd insert_absorb ran_map_upd) then have "no_call_to c rsa" by (metis (no_types) Un_iff c(1) mem_Collect_eq ranI ran_restrictD restrict_complement_singleton_eq) } thus ?thesis by simp qed have "\<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> rs \<Rightarrow>\<^sub>r t \<longleftrightarrow> \<Gamma>(c \<mapsto> rs),\<gamma>,p\<turnstile> \<langle>rs, Undecided\<rangle> \<Rightarrow> t" apply(subst r_call_eq[where c=c and m=MatchAny,symmetric]) apply(simp;fail) apply(simp;fail) apply(subst call_eq[where c=c and m=MatchAny,symmetric]) apply(simp;fail) apply(simp;fail) apply(simp add: assms;fail) apply(rule r_eq_orig') apply(fact ranaway) done thus ?thesis apply - apply(subst (asm) all_chains_no_call_upd_r[where x=rs, OF acnc]) apply(subst (asm) all_chains_no_call_upd_orig[where x=rs, OF acnc]) . qed end end
TITLE: Is Newtonian physics invariant under fast-forward? QUESTION [1 upvotes]: I am aware of how the laws of physics are invariant under a time-reversal transformation, so that (ignoring entropy) a film of billiard balls running backwards cannot be distinguished from one running forwards. What about a film of billiard balls running in fast-forward motion? Ignoring relativistic effects, I think that there would be no way to tell the playback speed. OK, then imagine that one of the billiard balls were made of delicate, thin glass, and that in the regular-speed film the motions are slow enough that it does not break. Playing the film in fast-forward, the delicate ball still would not break despite being hit by very fast-moving ones. Does this mean that Newtonian physics is not invariant under fast-forwarding? Is there a way of expressing this mathematically (analogous to saying $t$ goes to $-t$ for time reversal)? And finally, since the glass ball does not break, is this violation of intuition an illusion having to do with entropy, like running a film in reverse, or is there some kind of modification that needs to be made to the forces of the molecular bonds under fast-forward motion to prevent the glass from shattering? REPLY [1 votes]: The laws of physics themselves will be unchanged, but in a problem such as the one you mention there will be some parameters with dimensions that determine what happens. This is in fact the same question as asking what happens if you change the unit of time you use: speeding up by a factor of 60 is the same as measuring in minutes rather than seconds. Then $g$, the local acceleration due to gravity, changes its value by a factor of $60^2=3600$ to around 35,000 metres per minute squared, rather than 9.8 metres per second squared. The sped-up video looks like the same physics, but with the strength of gravity much greater.
There may be many other parameters that you will have to adjust in a similar manner: resistive properties of air, for example, or material properties of objects. Or if you are worried about relativity, the speed of light! Note that this all applies equally well if you "zoom in" and look on shorter distance scales too, though the scalings required will be different depending on the units of your quantities.
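The unit-change argument can be made concrete with a few lines of arithmetic. The sketch below is my own illustration (not part of the original answer): under a fast-forward $t \mapsto t/k$, velocities pick up a factor $k$ and accelerations a factor $k^2$, which is exactly the rescaling of $g$ described above.

```python
# Sketch: how kinematic quantities transform under a fast-forward t -> t/k
# (equivalently, measuring time in units that are k times larger).

def fast_forward(velocity, acceleration, k):
    """Apparent velocity and acceleration in a film sped up by a factor k."""
    return velocity * k, acceleration * k ** 2

g = 9.8   # local acceleration due to gravity, m/s^2
k = 60    # seconds -> minutes, i.e. a 60x fast-forward
_, g_apparent = fast_forward(0.0, g, k)
# g_apparent is about 35,280 m/min^2, matching the factor 60^2 = 3600 above
```

The same factor of $k^2$ multiplies every acceleration, and hence (for fixed masses) every force — which is why the glass ball in the sped-up film is implicitly being credited with a $k^2$-times larger breaking strength.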
\begin{document} \allowdisplaybreaks \newcommand{\arXivNumber}{1901.01566} \renewcommand{\thefootnote}{} \renewcommand{\PaperNumber}{034} \FirstPageHeading \ShortArticleName{Jacobian Conjecture via Differential Galois Theory} \ArticleName{Jacobian Conjecture via Differential Galois Theory\footnote{This paper is a~contribution to the Special Issue on Algebraic Methods in Dynamical Systems. The full collection is available at \href{https://www.emis.de/journals/SIGMA/AMDS2018.html}{https://www.emis.de/journals/SIGMA/AMDS2018.html}}} \Author{El\.zbieta ADAMUS~$^\dag$, Teresa CRESPO~$^\ddag$ and Zbigniew HAJTO~$^\S$} \AuthorNameForHeading{E.~Adamus, T.~Crespo and Z.~Hajto} \Address{$^\dag$~Faculty of Applied Mathematics, AGH University of Science and Technology,\\ \hphantom{$^\dag$}~al.~Mickiewicza 30, 30-059 Krak\'ow, Poland} \EmailD{\href{mailto:esowa@agh.edu.pl}{esowa@agh.edu.pl}} \Address{$^\ddag$~Departament de Matem\`atiques i Inform\`atica, Universitat de Barcelona,\\ \hphantom{$^\ddag$}~Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain} \EmailD{\href{mailto:teresa.crespo@ub.edu}{teresa.crespo@ub.edu}} \Address{$^\S$~Faculty of Mathematics and Computer Science, Jagiellonian University,\\ \hphantom{$^\S$}~ul.~{\L}ojasiewicza 6, 30-348 Krak\'ow, Poland} \EmailD{\href{mailto:zbigniew.hajto@uj.edu.pl}{zbigniew.hajto@uj.edu.pl}} \ArticleDates{Received January 23, 2019, in final form May 01, 2019; Published online May 03, 2019} \Abstract{We prove that a polynomial map is invertible if and only if some associated differential ring homomorphism is bijective. 
To this end, we use a theorem of Crespo and Hajto linking the invertibility of polynomial maps with Picard--Vessiot extensions of partial differential fields, the theory of strongly normal extensions as presented by Kovacic and the characterization of Picard--Vessiot extensions in terms of tensor products given by Levelt.} \Keywords{polynomial automorphisms; Jacobian problem; strongly normal extensions} \Classification{14R10; 14R15; 13N15; 12F10} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} The Jacobian conjecture originates from the problem posed by Keller in~\cite{Ke}. It is the 16th problem in Stephen Smale's list of mathematical problems for the twenty-first century (cf.~\cite{Sma}). Let us recall the precise statement of the Jacobian conjecture. Let $C$ denote an algebraically closed field of characteristic zero. Let $n>0$ be a fixed integer and let $F=(F_1, \ldots, F_n)\colon C^n \rightarrow C^n$ be a polynomial map, i.e., $F_i \in C[X]$, for $i=1,\ldots, n$, where $X=(X_1, \ldots, X_n)$. We consider the Jacobian matrix $J_F = \big[\frac{\partial F_i}{\partial X_j}\big]_{1 \leq i,j \leq n}$. The Jacobian conjecture states that if $\det(J_F)$ is a non-zero constant, then $F$ has an inverse map, which is also polynomial. In spite of many different approaches involving various mathematical tools the question is still open. In 1980 S.S.-S.~Wang \cite{W} proved the Jacobian conjecture for quadratic maps. The same year H.~Bass, E.~Connell and D.~Wright in~\cite{Bass} and A.V.~Yagzhev in~\cite{Y} independently reduced the Jacobian problem to maps of degree 3 at the cost of enlarging the number of variables. In~\cite{Bass} an interesting differential approach to the Jacobian problem due to Wright is also presented. An account of the research on the Jacobian conjecture may be found in \cite{E} and in the survey~\cite{E2}.
In recent years some new achievements have been reached, such as the negative answer to the long-standing dependence problem given by M.~de Bondt~\cite{Bo} and results by several authors on the classification of special types of Keller maps. Recently, in~\cite{ABCH} and~\cite{ABCH2} we have considered the class of Pascal finite automorphisms. On the other hand, the conjecture holds under strong additional assumptions. As an example, let us recall the result of L.A.~Campbell (see~\cite{C}), which states that the thesis holds if $C(X)/C(F)$ is a Galois extension. In a previous paper, using Picard--Vessiot theory of partial differential fields, Crespo and Hajto obtained a differential version of the classical theorem of Campbell. Let us consider the field~$C(X)$ with the differential structure given by the Nambu derivations (see Section~\ref{section3}). Then Crespo and Hajto proved that if the differential extension~$C(X)/C$ is Picard--Vessiot, then~$F$ is invertible (cf.~\cite[Theorem~2]{CH}). A computational approach to this result using wronskians has been given in~\cite{ABH}. The use of wronskians makes the calculations longer but allows application to the more general context of dominant polynomial maps, without the assumption of the Jacobian determinant being a non-zero constant, to check the Galois character of the associated field extension.
\section{Inverting a theorem of A.H.M.~Levelt}\label{section2} In this section we prove that Levelt's necessary condition for a differential field extension $K/k$ to be Picard--Vessiot is sufficient for $K/k$ to be strongly normal in the case in which the base field is a field of constants. In the next section we shall apply this result to the extension $C(X)/C$ endowed with the Nambu derivations associated to a polynomial map $F$ in order to obtain an equivalent condition to the invertibility of~$F$. Let $C$ be an algebraically closed field of characteristic zero. Let $R$ be an integral domain containing $C$ with the differential ring structure given by a finite set $\Delta$ of commuting derivations. Let $K$ be the field of fractions of $R$ with the differential structure given by extending the derivations in $\Delta$ in the standard way. Let us assume that $C$ is the field of constants of $K$ and that $K$ is differentially finitely generated over $C$. Any derivation $\delta \in \Delta$ extends to $K\otimes_C R$ by the formula $\delta (x\otimes y)=\delta(x) \otimes y + x \otimes \delta(y)$ on elementary tensors and by linearity to the whole tensor product. Let $E$ denote the differential subring of constants in $K \otimes_C R$. We consider the differential ring homomorphism \begin{gather*} \phi \colon \ K \otimes_C E \rightarrow K \otimes_C R, \qquad \phi (a \otimes d)=(a \otimes 1)d .\end{gather*} Our aim is to prove that if $\phi$ is an isomorphism, then the extension $K/C$ is strongly normal. We shall prove first that under the above hypothesis, $\phi$ is injective. To this end, we need the following lemma. \begin{Lemma} \label{ideal} The map \begin{gather*} h\colon \ E \rightarrow K \otimes_C E, \qquad h(d ) = 1 \otimes d,\end{gather*} induces a bijection between the set $\mathcal{I}(E)$ of ideals of $E$ and the set $\mathcal{I}(K \otimes_C E)$ of differential ideals of $K \otimes_C E$. 
\end{Lemma} \begin{proof}To an ideal $\mathfrak{a}$ of $E$ we associate its extension $\mathfrak{a}^e=K \otimes_C \mathfrak{a}$ and to an ideal $\mathfrak{b}$ of $K \otimes_C E$ its contraction $\mathfrak{b}^c=h^{-1}(\mathfrak{b})$. The inclusion $\mathfrak{a}^{ec} \supset \mathfrak{a}$ is well known. Let us prove $\mathfrak{a}^{ec} \subset \mathfrak{a}$. We take a basis $\Lambda$ of the $C$-vector space $\mathfrak{a}$ and extend it to a basis $M$ of the $C$-vector space $E$. Then $1 \otimes M$ is a basis of the $K$-vector space $K \otimes_C E$. Let $d$ be any element in $\mathfrak{a}^{ec}$. Then $1 \otimes d \in \mathfrak{a}^{ece}=\mathfrak{a}^{e}$, so \begin{gather*} 1 \otimes d = \sum_{\lambda \in \Lambda} r_{\lambda} \otimes \lambda, \qquad r _{\lambda} \in K.\end{gather*} On the other hand $d=\sum\limits_{\mu \in M} c_{\mu} \mu$, $c_{\mu} \in C$, so \begin{gather*}1 \otimes d=\sum_{\mu \in M} 1 \otimes c_{\mu} \mu=\sum_{\mu \in M} c_{\mu} \otimes \mu.\end{gather*} Comparing the coefficients in both expressions, we get that $c_{\mu}=0 $ for $\mu \in M {\setminus} \Lambda$ and $r_{\lambda}=c_{\lambda}$, for $\lambda \in \Lambda$. Hence $d \in \mathfrak{a}$. The inclusion $\mathfrak{b}^{ce} \subset \mathfrak{b}$ is well known. Suppose now $\mathfrak{b} {\setminus} \mathfrak{b}^{ce} \neq \varnothing$. We take a $C$-vector space basis~$\Lambda$ of $\mathfrak{b}^c$ and extend it to a basis $M$ of the $C$-vector space~$E$. Let us choose an element $a \in \mathfrak{b} {\setminus} \mathfrak{b}^{ce}$ such that its representation in the form \begin{gather*} a = \sum_{\mu \in M} r_{\mu} \otimes \mu, \qquad r_{\mu} \in K\end{gather*} has the smallest number of nonzero terms. First let us consider the case when $a$ is an elementary tensor, i.e., $a=r \otimes \mu$, for $a \in K$, $\mu \in M$. If $\mu \in \Lambda$, then $a \in \mathfrak{b}^{ce}$ and we have a contradiction. So let us assume that $\mu \in M {\setminus} \Lambda$. 
Then we multiply $a$ by $r^{-1} \in K$ and obtain that $1 \otimes \mu \in \mathfrak{b}$ and consequently $\mu \in \mathfrak{b}^c$, hence again $a \in \mathfrak{b}^{ce}$ and we have a contradiction. Let us assume now that the representation $a = \sum\limits_{\mu \in M} r_{\mu} \otimes \mu$, $r_{\mu} \in K,$ has at least two nonzero terms. Since $\mathfrak{b}$ is a differential ideal, for any derivation $\delta \in \Delta$ we have \begin{gather*} \delta a = \sum_{\mu \in M} \delta r_{\mu} \otimes \mu \in \mathfrak{b}.\end{gather*} Since $a \neq 0$, we can choose $\mu_0$ such that $r_{\mu_0} \neq 0$. Then $\delta r_{\mu_0}a-r_{\mu_0} \delta a \in \mathfrak{b}$ and \begin{align*}\delta r_{\mu_0}a-r_{\mu_0} \delta a &= \delta r_{\mu_0} \bigg( \sum_{\mu \in M} r_{\mu} \otimes \mu\bigg)- r_{\mu_0} \bigg( \sum_{\mu \in M} \delta r_{\mu} \otimes \mu \bigg)\\ &= \sum_{\mu \in M,\, \mu \neq \mu_0} (\delta r_{\mu_0} r_{\mu} - r_{\mu_0} \delta r_{\mu} ) \otimes \mu,\end{align*} since the coefficient of $\mu_0$ is equal to $(\delta r_{\mu_0})r_{\mu_0}-r_{\mu_0} (\delta r_{\mu_0})=0$. By the minimality assumption on $a \in \mathfrak{b} {\setminus} \mathfrak{b}^{ce} $, we have $\delta r_{\mu_0}a-r_{\mu_0} \delta a \in \mathfrak{b}^{ce}$, hence \begin{gather*}\delta r_{\mu_0}a-r_{\mu_0}\delta a=\sum_{\mu \in \Lambda, \,\mu \neq \mu_0} (\delta r_{\mu_0} r_{\mu} - r_{\mu_0} \delta r_{\mu} ) \otimes \mu.\end{gather*} We have then $\delta r_{\mu_0} r_{\mu} - r_{\mu_0} \delta r_{\mu}=0$ for $\mu \in M {\setminus} \Lambda$. Hence $\delta\big(\frac{r_{\mu}}{r_{\mu_0}}\big)=0$ for $\mu \in M {\setminus} \Lambda$. This means $\frac{r_{\mu}}{r_{\mu_0}}$ is a constant in $K$. Hence there exists $c_{\mu} \in C$ such that $r_{\mu}=c_{\mu} r_{\mu_0}$ for $\mu \in M {\setminus} \Lambda$.
We obtain \begin{gather*}a=\sum_{\mu \in M {\setminus} \Lambda} c_{\mu} r_{\mu_0} \otimes \mu+\sum_{\mu \in \Lambda} r_{\mu} \otimes \mu.\end{gather*} Observe that $\sum\limits_{\mu \in \Lambda} r_{\mu} \otimes \mu \in \mathfrak{b}^{ce} \subset \mathfrak{b}$. Hence \begin{gather*} \mathfrak{b} \ni \sum_{\mu \in M {\setminus} \Lambda} c_{\mu} r_{\mu_0} \otimes \mu=(r_{\mu_0} \otimes 1) \bigg(1 \otimes \sum_{\mu \in M {\setminus} \Lambda} c_{\mu} \mu\bigg).\end{gather*} Since $K$ is a field, we get that $1 \otimes d_0 \in \mathfrak{b}$, for $d_0=\sum\limits_{\mu \in M {\setminus} \Lambda} c_{\mu} \mu$. So $d_0 \in \mathfrak{b}^c$ and $(r_{\mu_0} \otimes 1)(1 \otimes d_0) \in \mathfrak{b}^{ce} $. And we have a contradiction with the minimality assumption on $a \in \mathfrak{b} {\setminus} \mathfrak{b}^{ce} $. \end{proof} \begin{Proposition}\label{injective}The map \begin{gather*} \phi \colon \ K \otimes_C E \rightarrow K \otimes_C R, \qquad \phi (a \otimes d)=(a \otimes 1)d\end{gather*} is injective. \end{Proposition} \begin{proof} Denote $\mathfrak{b}= \ker \phi$. Using Lemma~\ref{ideal} we can assume that $\mathfrak{b} = K \otimes \mathfrak{c}$, where $\mathfrak{c} \in \mathcal{I}(E)$. Take $c \in \mathfrak{c}$. Then \begin{gather*}0=\phi(1 \otimes c) = (1 \otimes 1)c=c.\end{gather*} So $c=0$ and $\mathfrak{b}=(0)$. \end{proof} We recall the notion of almost constant differential ring which will be used in the sequel. \begin{Definition}[{\cite[Definition~5.1]{Kov2}}] Let $A$ be a differential ring and~$C_A$ its ring of constants. We say that $A$ is almost constant if the inclusion $C_A\subset A$ induces a bijection between the set of radical ideals of $C_A$ and the set of radical differential ideals of~$A$. \end{Definition} \begin{Proposition} Let $C$ be an algebraically closed field of characteristic zero. Let $R$ be an integral differential ring containing $C$ and let $K$ be the field of fractions of~$R$. 
We assume that~$C$ is the field of constants of $K$ and that $K$ is differentially finitely generated over $C$. If the differential morphism \begin{gather*} \phi \colon \ K \otimes_C E \rightarrow K \otimes_C R, \qquad \phi (a \otimes d)=(a \otimes 1)d\end{gather*} is an isomorphism, then the differential ring $K\otimes_C R$ is almost constant. \end{Proposition} \begin{proof} If $\phi$ is a differential isomorphism, there is a bijection between the set of radical differential ideals of $K\otimes_C R$ and the set of radical differential ideals of $K \otimes_C E$. By Lemma~\ref{ideal} and \cite[Proposition~3.4]{Kov2}, this last set is in bijection with the set of radical ideals of the ring of constants~$E$ of $K\otimes_C R$. \end{proof} \begin{Theorem} \label{mainthm} Let $C$ be an algebraically closed field of characteristic zero. Let $R$ be an integral differential ring containing $C$ and let $K$ be the field of fractions of~$R$. We assume that $C$ is the field of constants of $K$ and that $K$ is differentially finitely generated over~$C$. If the differential morphism \begin{gather*} \phi \colon \ K \otimes_C E \rightarrow K \otimes_C R, \qquad \phi (a \otimes d)=(a \otimes 1)d\end{gather*} is an isomorphism, then $K/C$ is a strongly normal extension. \end{Theorem} \begin{proof}To prove that $K/C$ is strongly normal, we shall apply \cite[Proposition 12.5]{Kov2}. Let $\sigma$ be an arbitrary $\Delta$-isomorphism of $K$ over $C$. We put $\sigma\colon K \rightarrow M$, where $M$ is any differential field extension of $K$ and denote by $D_{\sigma}$ the field of constants of $M$. Define $\overline{\sigma}\colon K \otimes_C R \rightarrow M$ by the formula $\overline{\sigma}(a \otimes b) = a \sigma(b)$. Set $\psi=\overline{\sigma} \circ \phi\colon K \otimes_C E \rightarrow M$. Observe that $\psi(K \otimes 1) \subset K$. 
Indeed for $a \in K$ we have \begin{gather*}\psi(a \otimes 1)=(\overline{\sigma} \circ \phi)(a \otimes 1)=\overline{\sigma} (a \otimes 1) =a.\end{gather*} Since $E$ consists of constants, $\psi(1 \otimes E) \subset D_{\sigma}$ (regardless of the choice of the differential isomorphism $\sigma$). So \begin{gather*}\psi(K \otimes_C E)=\psi \big( (K \otimes 1)(1 \otimes E)\big) \subset K D_{\sigma}.\end{gather*} We then have the commutative diagram \begin{gather*} \xymatrix{K \otimes_C E \ar[rr]^{\phi} \ar[rrdd]^{\psi}&& K \otimes_C R \ar[dd]^{\overline{\sigma}}\\&& \\ && K D_{\sigma}. } \end{gather*} Because $\phi$ is surjective, $\overline{\sigma}(K\otimes_C R) \subset KD_{\sigma}$, which implies that $\sigma K \subset K D_\sigma$. We can then use \cite[Proposition 12.5]{Kov2} and conclude that $K/C$ is a strongly normal extension. \end{proof} \begin{Remark}Let us observe that in order to prove that $K/C$ is a Picard--Vessiot extension it would be sufficient to know that $R$ is a differentially simple ring. In this case, the facts that $K/C$ is strongly normal and that $K\otimes_C R$ is almost constant imply that $K/C$ is Picard--Vessiot. \end{Remark} \section{Application to polynomial automorphisms}\label{section3} Let $C$ be a field of characteristic zero and let $F=(F_1, \ldots, F_n)\colon C^n \rightarrow C^n$ be a polynomial map such that $\mathrm{det}(J_F) \in C{\setminus} \{0\}$. We can equip $C(X_1, \ldots, X_n)$ with the Nambu derivations, i.e., the derivations $\delta_1, \ldots, \delta_n$ given by \begin{gather*} \left( \begin{matrix} \delta_1 \\ \vdots \\ \delta_n \end{matrix} \right) = \big(J_F^{-1}\big)^T \left( \begin{matrix} \frac{\partial}{\partial X_1} \\ \vdots \\ \frac{\partial}{\partial X_n} \end{matrix} \right). \end{gather*} Observe that both the field $C(F_1, \ldots, F_n)$ and the polynomial ring $C[X_1,\dots,X_n]$ are stable under $\delta_1, \ldots, \delta_n$.
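The stability of $C(F_1,\ldots,F_n)$ rests on the fact that the Nambu derivations treat the components of $F$ as coordinate functions, i.e., $\delta_i(F_j)=\delta_{ij}$. This is easy to check symbolically; the sketch below is purely illustrative (the example map and helper names are ours, not from the paper):

```python
import sympy as sp

# Hypothetical example map F : C^2 -> C^2 with constant Jacobian determinant.
x, y = sp.symbols('x y')
F = sp.Matrix([x + y**2, y])
X = sp.Matrix([x, y])

J = F.jacobian(X)
assert J.det() == 1  # det(J_F) is a nonzero constant

# Nambu derivations: (delta_1, ..., delta_n)^T = (J_F^{-1})^T (d/dX_1, ..., d/dX_n)^T,
# i.e. delta_i(h) = sum_k (J_F^{-1})[k, i] * dh/dX_k.
Jinv = J.inv()

def delta(i, h):
    return sp.expand(sum(Jinv[k, i] * sp.diff(h, X[k]) for k in range(2)))

# The components of F behave like coordinates for these derivations:
# delta_i(F_j) = 1 if i == j else 0.
for i in range(2):
    for j in range(2):
        assert delta(i, F[j]) == (1 if i == j else 0)
```

Here the triangular map $F(x,y)=(x+y^2,\,y)$ is just a convenient toy example; any polynomial map with $\det(J_F)\in C{\setminus}\{0\}$ would do.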
\begin{Theorem}\label{th2}Let $C$ be an algebraically closed field of characteristic zero and let $F=(F_1, \ldots, \allowbreak F_n)\colon C^n \rightarrow C^n$ be a polynomial map such that $\det(J_F) \in C{\setminus} \{0\}$. Let $R$ $($respectively~$K)$ denote the polynomial ring $C[X]$ $($respectively the rational function field~$C(X))$ with the partial differential structure given by the Nambu derivations. We extend these derivations to the tensor product $K\otimes_C R$ and denote by $E$ the ring of constants of $K\otimes_C R$. If the differential ring homomorphism \begin{gather*} \phi \colon \ K \otimes_C E \rightarrow K \otimes_C R, \qquad \phi (a \otimes d)=(a \otimes 1)d \end{gather*} is an isomorphism, then $F$ is invertible and its inverse is a polynomial map. \end{Theorem} \begin{proof}By Theorem~\ref{mainthm}, the differential field extension $K/C$ is a strongly normal extension. If we consider the intermediate differential field $k=C(F_1,\ldots, F_n)$, then $K/k$ is again strongly normal. Now, since $\det(J_F) \in C{\setminus} \{0\}$ and the field $C$ has characteristic zero, the image of~$F$ is a Zariski open subset of the affine space $C^n$. Hence the fields $K=C(X)$ and $k=C(F)$ have the same transcendence degree over $C$. This implies that $K/k$ is an algebraic extension, and so it is a Galois extension. Then by Campbell's theorem~\cite{C}, $F$ is invertible and its inverse is a~polynomial map. \end{proof} \begin{Remark}By Proposition \ref{injective}, the map $\phi$ is injective. In order to prove that it is also surjective, it is enough to prove that the elements $1\otimes X_i$, $1\leq i \leq n$, lie in the image of~$\phi$. Hence Theorem~\ref{th2} provides an effective criterion to check the invertibility of polynomial maps. Finally, when we know that~$F$ has a polynomial inverse, the ring $C[X]$ is the same as $C[F]$ and therefore it is the Picard--Vessiot ring over~$C$.
\end{Remark} \begin{Remark} The criterion given in \cite[Proposition 3.1.4(i)]{E} establishes the equivalence of the invertibility of a polynomial map and the nilpotency of the derivation $D=Y_1\delta_1+\dots +Y_n \delta_n$, where $Y_1,\dots,Y_n$ are additional variables and $\delta_1,\dots,\delta_n$ are the Nambu derivations. We have compared this criterion with our criterion from~\cite{ABCH} by applying both to the polynomial map associated to $g_1$ in \cite[Example~5.6.8]{Bot}. We have observed that with our criterion the computation of the inverse took less than one second, whereas with the other criterion the computation had not finished after one hour of running time. Implementing the criterion in \cite[Proposition 3.1.4]{E} requires computing both the Nambu derivations and the powers of the derivation $D$, which involves a large number of products and is therefore rather time-consuming. That criterion is quite useful for more general rings of coefficients, whereas our criterion works very well in positive characteristic. For more details, see our recent paper~\cite{ABCH2}. \end{Remark} \subsection*{Acknowledgments} This paper is dedicated to the memory of Jerald Joseph Kovacic. During his visit to Barcelona in the summer of 2008 he discussed with us algebraic aspects of the theory of strongly normal extensions. This work was partially supported by the Faculty of Applied Mathematics AGH UST statutory tasks within subsidy of the Ministry of Science and Higher Education. Crespo and Hajto acknowledge support by grant MTM2015-66716-P (MINECO/FEDER, UE). We thank the anonymous referees for their valuable remarks and suggestions. \pdfbookmark[1]{References}{ref}
TITLE: Definition of the pushforward measure QUESTION [2 upvotes]: Given a map $f:X \to Y$, and two $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$ on $X$ and $Y$ respectively, and a measure $\mu$ on $(X, \mathcal{A})$, we can define the pushforward measure $f_\#\mu$ on $B \in \mathcal{B}$ as $$f_\#\mu(B):=\mu(f^{-1}(B)).$$ I can't seem to understand the most basic property of this measure, which is that for any measurable $g$ on $Y$, we have $$\int g df_\#\mu=\int g \circ f d\mu.$$ I think I'm missing something really obvious here. My thought process: It suffices to prove the case for $g=1$, as we can approximate by simple functions and then apply the monotone convergence theorem. So we want to show $$\int_{f(A)} df_\#\mu=\int_A f d\mu,$$ where $A$ is the support of $f$. We have $\int_{f(A)} df_\#\mu=f_\#\mu(f(A))=\mu(f^{-1}(f(A)))$, and I can't see why this is equal to $\int_A f d\mu$. REPLY [4 votes]: I would say the most fundamental case to start with is $g = 1_E$, for some measurable subset $E$ of $Y$. Then we can compute both $$ \int_Y 1_E(y)\, f_\#\mu(dy) = \mu(f^{-1}(E)) = \int_X 1_{f^{-1}(E)}(x)\,\mu(dx). $$ This is all by the definitions. Now notice (check) that the function $1_{f^{-1}(E)}(x) = (1_{E}\circ f)(x)$. This observation shows that $$ \int_Y g(y)\,f_\#\mu(dy) = \int_X (g\circ f)(x)\,\mu(dx), $$ when $g$ is the indicator of a measurable subset of $Y$. Now, you can extend this to the case when $g$ is a simple function using linearity of integration, and to non-negative functions by approximation and monotone convergence. Finally, the identity holds for an arbitrary integrable $g$ using linearity and the decomposition of a function into its positive and negative parts: $$ g = g^+ - g^-. $$
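As a quick sanity check of the identity (an illustration, not part of the answer above), one can work with a finite discrete measure, where the pushforward is just a regrouping of point masses:

```python
from fractions import Fraction
from collections import defaultdict

# A discrete measure mu on X = {0, 1, 2, 3}, given by point masses.
mu = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}

f = lambda x: x % 2          # a non-injective map X -> Y = {0, 1}
g = lambda y: 10 * y + 1     # an arbitrary test function on Y

# Pushforward: (f_# mu)(B) = mu(f^{-1}(B)); for point masses this means
# summing the weights of all preimages of each point of Y.
push = defaultdict(Fraction)
for x, w in mu.items():
    push[f(x)] += w

lhs = sum(g(y) * w for y, w in push.items())      # int_Y g d(f_# mu)
rhs = sum(g(f(x)) * w for x, w in mu.items())     # int_X (g o f) dmu
assert lhs == rhs
```

Both sums regroup exactly the same terms, which is the content of the indicator-function case $g = 1_E$ in the answer.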
TITLE: Prime-like elements of rings QUESTION [5 upvotes]: An element $p$ of a commutative ring $R$ is called "prime" if, for any $a,b\in R$, whenever $ab$ is a multiple of $p$, either $a$ or $b$ is a multiple of $p$. Is there a word for the "prime-like" property that, whenever $ab$ is a multiple of $p^2$, either $a$ or $b$ is divisible by $p$? Or another, more usual concept in ring theory that this is connected to? I ask because the "prime-likeness" of $2$ in $R$ seems to control whether the quadratic formula can be made to work for monic polynomials over $R$ (as long as $2$ is also not a zero-divisor). This is because, if the discriminant of $x^2 + bx + c$ is a square $b^2 - 4c = d^2$, then $(-b+d)(-b-d) = 4c$, so at least one (and hence both) of $(-b+d)$ and $(-b-d)$ are multiples of $2$ in $R$. Their halves are the two roots of $x^2 + bx + c$. For example, $2$ is "prime-like" in $\mathbb{Z}[\sqrt{2}]$, which is easy to verify elementarily. Hence a monic quadratic over $\mathbb{Z}[\sqrt{2}]$ factors iff its discriminant is a square. But $2$ is not "prime-like" in $\mathbb{Z}[\sqrt{5}]$, since $(\sqrt{5}-1)(\sqrt{5}+1) = 4$. And indeed, the discriminant of $x^2 -x-1$ is a square in $\mathbb{Z}[\sqrt{5}]$, but the polynomial doesn't factor there. REPLY [2 votes]: I assume (based on your example) that you're primarily interested in the case where $R$ is the ring of integers in an algebraic extension of $\mathbb Q$. Then your property of being "prime-like" is equivalent to the property of generating a primary ideal. So assume $R$ is a Dedekind domain (in particular ideals factor uniquely into products of prime ideals), and let $p\in R$ be an arbitrary element. Then $p$ is "prime-like" if and only if $pR=P^k$ for some prime ideal $P$ of $R$ and some $k\in \mathbb N$ i.e. "prime-like" in Dedekind domains is the same as "primary". An element $x$ in the fraction field of $R$ lies in $R$ iff the valuations $\nu_Q(x)$ are non-negative for all prime ideals $Q$ of $R$. 
So let $a,b\in R$ and assume $p^2\mid ab$. Then $\nu_Q(a/p)=\nu_Q(a)\geq 0$ and $\nu_Q(b/p)=\nu_Q(b)\geq 0$ for all primes $Q\neq P$. So to see that either $a/p$ or $b/p$ lies in $R$, one just has to check that one of them has non-negative $P$-valuation. But $p^2\mid ab$ implies $\nu_P(a)+\nu_P(b)=\nu_P (ab) \geq \nu_P(p^2)=2\nu_P(p)$, which implies $\nu_P(a) \geq \nu_P(p)$ or $\nu_P(b) \geq \nu_P(p)$, so either $\nu_P(a/p)\geq 0$ or $\nu_P(b/p)\geq 0$. On the other hand, if the ideal $pR$ isn't primary then $p$ is not prime-like (the construction in Julian's comment can be generalized). Of course I'm not sure what exactly you're looking for, but at least this clears up what is going on in your last example: while $2$ isn't "prime-like" in $\mathbb Z[\sqrt{5}]$, it is prime-like in its integral closure $\mathbb Z[\frac{1+\sqrt{5}}{2}]$ (which is, however, unspectacular because it remains prime in that ring).
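To make the statement tangible (illustrative code, not part of the answer): in $\mathbb Z$ the primary elements are exactly the prime powers, and the question's property can be checked by brute force on a finite range:

```python
def prime_like(p, bound=200):
    """Check the question's property in Z, for a, b up to `bound`:
    whenever p^2 | a*b, either p | a or p | b."""
    return all(a % p == 0 or b % p == 0
               for a in range(1, bound)
               for b in range(1, bound)
               if (a * b) % (p * p) == 0)

# 8 = 2^3 generates a primary ideal of Z, so it is "prime-like" ...
assert prime_like(8)
# ... while 6 = 2*3 is not: 6^2 divides 36 = 4*9, but 6 divides neither factor.
assert not prime_like(6)
```

For $p = 8$ this matches the valuation argument above: $\nu_2(a)+\nu_2(b)\geq 6$ forces $\max(\nu_2(a),\nu_2(b))\geq 3$.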
TITLE: How to prove that $2^\omega=\mathfrak{c}$? QUESTION [0 upvotes]: Let $\mathfrak{c}$ denote the continuum. My textbook says that $2^\omega=\mathfrak{c}$. How can one prove this equality? Thanks ahead:) REPLY [2 votes]: Clearly $2^\omega$ has cardinality equal to $|P(N)|$. Think of a subset of the naturals as a sequence of zeros and ones. (A one indicates a particular element is in the subset, a zero indicates it isn't.) Between consecutive digits insert a 3, and at the beginning put a decimal point. (E.g., $11001$... becomes $0.131303031$...; this is done to overcome the problem of ambiguity in representations.) Now every subset has been put in a one-to-one correspondence with a real number. So $|P(N)|\le |(0,1)|$. Conversely, every real number between $0$ and $1$ has a binary expansion, albeit sometimes redundantly; and removing the point gives us a 0-1 sequence, i.e. a subset of the naturals; hence that particular set of reals has cardinality $\le |P(N)|$. By the Schröder–Bernstein theorem we have $|P(N)|=|(0,1)|$. It is now trivial to conclude that $|P(N)|=|R|$. (Hint: Think of the tan function.) As an added bonus, by Cantor's theorem ($|A|<|P(A)|$) we have also established that the reals are uncountable.
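The interleaving construction is concrete enough to script; the sketch below (illustrative only, names are ours) reproduces the answer's example and checks injectivity on finite indicator prefixes:

```python
from itertools import product

def encode(bits):
    """Map a finite 0-1 sequence (an initial segment of a subset's
    indicator sequence) to the decimal described in the answer:
    interleave the digit 3 between entries and prepend "0."."""
    return "0." + "3".join(str(b) for b in bits)

# The answer's example: 11001... -> 0.131303031...
assert encode([1, 1, 0, 0, 1]) == "0.131303031"

# Distinct indicator prefixes give distinct decimals; the interleaved 3s
# avoid the 0.0999... = 0.1000... ambiguity, since the digit 9 never occurs.
codes = {encode(bits) for bits in product([0, 1], repeat=6)}
assert len(codes) == 2 ** 6
```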
\begin{document} \begin{abstract} We give a rigorous proof for the linear stability of the Skyrmion. In addition, we provide new proofs for the existence of the Skyrmion and the GGMT bound. \end{abstract} \maketitle \section{Introduction} \noindent In the 1960s and 1970s there was a lot of interest in classical relativistic nonlinear field theories as models for the interaction of elementary particles. The idea was to describe particles by solitons, i.e., static solutions of finite energy. Due to the success of the standard model, where particles are described by \emph{linear} (but quantized) fields, this original motivation became somewhat moot. However, classical nonlinear field theories continue to be an active area of research, albeit for different reasons. They are interesting as models for Einstein's equation of general relativity, in the context of nonperturbative quantum field theory or in the description of ferromagnetism. Furthermore, there is an ever-growing interest from the pure mathematical perspective. A rich source for field theories with ``natural'' nonlinearities are geometric action principles. One of the most prominent examples of this kind is the SU(2) sigma model \cite{GelLev60} that arises from the wave maps action \[ \mc S_{\mathrm{WM}}(u)=\int_{\R^{1,d}}\eta^{\mu\nu}(u^* g)_{\mu\nu}=\int_{\R^{1,d}}\eta^{\mu\nu}\partial_\mu u^A \partial_\nu u^B g_{AB}\circ u. \] Here, the field $u$ is a map from $(1+d)$-dimensional Minkowski space $(\R^{1,d},\eta)$ to a Riemannian manifold $(M,g)$ with metric $g$. Geometrically, the wave maps Lagrangian is the trace of the pull-back of the metric $g$ under the map $u$. A typical choice is $M=\S^d$ with $g$ the standard round metric and in the following, we restrict ourselves to this case. For $d=3$, one obtains the classical SU(2) sigma model. In general, the Euler-Lagrange equation associated to the action $\mc S_{\mathrm{WM}}$ is called the wave maps equation. 
Unfortunately, the SU(2) sigma model does not admit solitons and it develops singularities in finite time \cite{Sha88, BizChmTab00, Don11}. One way to recover solitons is to lower the spatial dimension to $d=2$ but this is less interesting from a physical point of view and, even worse, the corresponding model still develops singularities in finite time \cite{BizChmTab01, KriSchTat08, RodSte10, RapRod12}. Consequently, Skyrme \cite{Sky61} proposed to modify the wave maps Lagrangian by adding higher-order terms. This leads to the (generalized) Skyrme action \cite{ MakRybSan93} \[ \mc S_{\mathrm{Sky}}(u)=\mc S_{\mathrm{WM}}(u)+\frac12 \int_{\R^{1,d}}\Big [ [\eta^{\mu\nu}(u^* g)_{\mu\nu}]^2-(u^* g)_{\mu\nu}(u^* g)^{\mu\nu} \Big ]. \] Skyrme's modification breaks the scaling invariance which makes the model more rigid. Heuristically speaking, rigidity favors the existence of solitons and makes finite-time blowup less likely. The original Skyrme model arises from the action $\mc S_{\mathrm{Sky}}$ in the case $d=3$ and $M=\S^3$. By using standard spherical coordinates $(t,r,\theta,\varphi)$ on $\R^{1,3}$, one may consider so-called co-rotational maps $u: \R^{1,3}\to \S^3$ of the form $u(t,r,\theta,\varphi)=(\psi(t,r), \theta,\varphi)$. Under this symmetry reduction the Skyrme model reduces to the scalar quasilinear wave equation \begin{equation} \label{eq:Skyevol} (w\psi_t)_t-(w\psi_r)_r+\sin(2\psi)+\sin(2\psi)\left (\frac{\sin^2 \psi}{r^2}+\psi_r^2-\psi_t^2\right )=0 \end{equation} for the function $\psi=\psi(t,r)$, where $w=r^2+2\sin^2 \psi$. It is well-known that there exists a static solution $F_0\in C^\infty[0,\infty)$ to Eq.~\eqref{eq:Skyevol} with the property that $F_0(0)=0$ and $\lim_{r\to\infty}F_0(r)=\pi$. This was proved by variational methods \cite{KapLad83} and ODE techniques \cite{McLTro91}. In fact, $F_0$ is the \emph{unique} static solution with these boundary values \cite{McLTro91} and called the \emph{Skyrmion}. 
Unfortunately, the Skyrmion is not known in closed form and, as a consequence, even the most basic questions concerning its role in the dynamics remain unanswered to this day. \subsection{Stability of the Skyrmion} Numerical studies \cite{BizChmRos07} strongly suggest that the Skyrmion is a global attractor for the nonlinear flow. In particular, $F_0$ should be stable under nonlinear perturbations. A first step in approaching this problem from a rigorous point of view is to consider the \emph{linear} stability of $F_0$. To this end, one inserts the ansatz $\psi(t,r)=F_0(r)+\phi(t,r)$ into Eq.~\eqref{eq:Skyevol} and linearizes in $\phi$. This leads to the linear wave equation \[ \varphi_{tt}-\varphi_{rr}+\frac{2}{r^2}\varphi+V(r)\varphi=0 \] for the auxiliary variable $\varphi(t,r)=\sqrt{r^2+2\sin^2F_0(r)}\,\phi(t,r)$. The potential $V$ is given by \[ V=-4a^2\frac{1+3a^2+3a^4}{(1+2a^2)^2},\qquad a(r)=\frac{\sin F_0(r)}{r}. \] Consequently, the linear stability of the Skyrmion is governed by the $\ell=1$ Schr\"odinger operator \[ \mc A f(r):=-f''(r)+\frac{2}{r^2}f(r)+V(r)f(r) \] on $L^2(0,\infty)$. More precisely, the Skyrmion is linearly stable if and only if $\mc A$ has no negative eigenvalues. Unfortunately, the analysis of $\mc A$ is difficult since the potential $V$ is negative and not known explicitly. Consequently, the linear stability of $F_0$ hinges on the particular shape of $V$ and this renders the application of general soft arguments hopeless. Our main result is the following. \begin{theorem} \label{thm:main} The Schr\"odinger operator $\mc A$ does not have eigenvalues. In particular, the Skyrmion $F_0$ is linearly stable. \end{theorem} \subsection{Related work} Due to the complexity of the field equation, there are not many rigorous results on dynamical aspects of the Skyrme model. In \cite{GebNakRaj12}, small-data global well-posedness and scattering are proved and \cite{Li12} establishes large-data global well-posedness.
There is also some recent activity on the related but simpler Adkins-Nappi model, see e.g.~\cite{GebRaj10a, GebRaj10b, Law15}. From a numerical point of view, the linear stability of the Skyrmion is addressed in \cite{HeuDroStr91} and \cite{BizChmRos07} studies the nonlinear stability. As far as the method of proof is concerned, we note that our approach is in parts inspired by \cite{CosHuaSch12}. \subsection{Outline of the proof} According to the GGMT bound, see \cite{GlaGroMarThi75, GlaGroMar78, ReeSim78} or Appendix \ref{app:GGMT}, the number of negative eigenvalues of $\mc A$ is bounded by \[ \nu(V):=3^{-7}\frac{3^3\Gamma(8)}{4^4\Gamma(4)^2}\int_0^\infty r^7 |V(r)|^4\d r. \] Consequently, our aim is to show that $\nu(V)<1$. In fact, by a perturbative argument this also excludes the eigenvalue $0$ and there cannot be threshold resonances at zero energy since the decay of the recessive solution of $\mc A f=0$ is $1/r$ at infinity. In Appendix \ref{app:GGMT} we elaborate on this and give a new proof of the GGMT bound. In order to show $\nu(V)<1$, we proceed by an explicit construction of the Skyrmion $F_0$. In particular, this yields a new proof for the existence of the Skyrmion. Our approach is mildly computer-assisted in the sense that one has to perform a large number of elementary operations involving fractions. It is worth noting that all computations are done in $\Q$, i.e., they are free of rounding or truncation errors. We also emphasize that the proof does not require a computer algebra system. Consequently, the necessary computations can easily be carried out using any programming language that supports fraction arithmetic. A natural choice is {\tt Python} which is open source and freely available for all common operating systems. In the following, we give a brief outline of the main steps in the proof. 
\begin{itemize} \item We consider Eq.~\eqref{eq:Skyevol} for static solutions $\psi(t,r)=F(r)$ and change variables according to \[ F(r)=2\arctan\left (r(1+r)g\left (\frac{r-1}{r+1}\right )\right ). \] The new independent variable $x=\frac{r-1}{r+1}$ allows us to compactify the problem by considering $x\in [-1,1]$. Furthermore, the $\arctan$ removes the trigonometric functions in Eq.~\eqref{eq:Skyevol}. Consequently, we obtain an equation of the form \[ \mc R(g)(x):=g''(x)+\Phi(x,g(x),g'(x))=0 \] where $\Phi$ is a (fairly complicated) rational function of $3$ variables. \item We numerically construct a very precise approximation to the Skyrmion. This is done by employing a Chebyshev pseudospectral method \cite{Boy01}. The expansion coefficients are rationalized to allow for error-free computations in the sequel. This leads to a polynomial $\gT(x)$ with rational coefficients and we rigorously prove that $\|\mc R(\gT)\|_{L^\infty(-1,1)}\leq \frac{1}{500}$. As a consequence, the construction of the Skyrmion reduces to finding a (small) correction $\delta(x)$ such that $\mc R(\gT+\delta)=0$. \item Next, we obtain bounds on second derivatives of $\Phi$ by employing rational interval arithmetic. As a consequence, we obtain the representation \[ \mc R(\gT+\delta)=\mc R(\gT)+\mc L \delta+\mc N(\delta) \] with explicit bounds on the nonlinear remainder $\mc N$. The linear operator $\mc L$ is also given explicitly in terms of $\gT$ and first derivatives of $\Phi$. \item Again, by a Chebyshev pseudospectral method, we numerically construct an approximate fundamental system $\{u_-,u_+\}$ for the linear equation $\mc Lu=0$. The functions $u_\pm$ satisfy $\tilde{\mc L}u_\pm=0$ for another linear operator $\tilde{\mc L}$ that is close to $\mc L$ in a suitable sense. 
Using $u_\pm$ we construct an inverse $\tilde{\mc L}^{-1}$ to $\tilde{\mc L}$ which allows us to rewrite the equation $\mc R(\gT+\delta)=0$ as a fixed point problem \[ \delta=-\tilde{\mc L}^{-1}\mc R(\gT)-\tilde{\mc L}^{-1}(\mc L-\tilde{\mc L})\delta-\tilde{\mc L}^{-1}\mc N(\delta)=:\mc K(\delta). \] From the explicit form of $u_\pm$ we obtain rigorous and explicit bounds on the operator $\tilde{\mc L}^{-1}$. \item Finally, we prove that $\mc K$ is a contraction on a small closed ball in $W^{1,\infty}(-1,1)$. This yields the existence of a small correction $\delta(x)$ such that $\gT+\delta$ solves the transformed Skyrmion equation. From the uniqueness of the Skyrmion we conclude that \[ F_0(r)=2\arctan\left (r(1+r)(\gT+\delta)\left (\frac{r-1}{r+1}\right )\right ) \] and the desired $\nu(V)<1$ follows by elementary estimates. \end{itemize} \subsection{Notation} Throughout the paper we abbreviate $L^\infty:=L^\infty(-1,1)$ and also $W^{1,\infty}:=W^{1,\infty}(-1,1)$. For the norm in $W^{1,\infty}$ we use the convention \[ \|f\|_{W^{1,\infty}}:=\sqrt{\|f'\|_{L^\infty}^2+\|f\|_{L^\infty}^2}. \] The Wronskian $W(f,g)$ of two functions $f$ and $g$ is defined as $W(f,g):=fg'-f'g$. \section{Preliminary transformations} \noindent Static solutions $\psi(t,r)=F(r)$ of Eq.~\eqref{eq:Skyevol} satisfy the Skyrmion equation \begin{equation} \label{eq:sky} \frac{\d}{\d r}\Big [\big (r^2+2\sin^2 F(r)\big )F'(r)\Big ]-\sin(2 F(r))\left [ F'(r)^2+\frac{\sin^2 F(r)}{r^2}+1 \right ]=0. \end{equation} The Skyrmion $F_0$ is the unique solution of Eq.~\eqref{eq:sky} satisfying $F_0(0)=0$ and $\lim_{r\to\infty}F_0(r)=\pi$. More precisely, we have $F_0(r)=\pi+O(r^{-2})$ as $r\to\infty$. Furthermore, it is known that the Skyrmion is monotonically increasing \cite{McLTro91}. In order to remove the trigonometric functions it is thus natural to define a new dependent variable $f: [0,\infty)\to \R$ by \[ F(r)=:2\arctan f(r). 
\] Then we have \begin{align*} F'=\frac{2f'}{1+f^2},\qquad F''=\frac{2f''}{1+f^2}-\frac{4f'^2 f}{(1+f^2)^2} \end{align*} as well as \[ \sin^2 F=\frac{4f^2}{(1+f^2)^2},\qquad \sin(2 F)=\frac{4f(1-f^2)}{(1+f^2)^2}. \] Consequently, Eq.~\eqref{eq:sky} is equivalent to \begin{equation} \label{eq:skyf} f''+\frac{\mc W(f)'}{\mc W(f)}f'-\frac{2f'^2f}{1+f^2}-\frac{2f(1-f^2)}{\mc W(f)(1+f^2)}\left [ \frac{4f'^2}{(1+f^2)^2}+\frac{4f^2}{r^2(1+f^2)^2}+1\right ]=0 \end{equation} where \[ \mc W(f)(r):=r^2+\frac{8f(r)^2}{[1+f(r)^2]^2}. \] Eq.~\eqref{eq:skyf} may be slightly simplified to give \begin{equation} \label{eq:skyf2} f''+\frac{2rf'}{\mc W(f)}-\frac{2f'^2f}{1+f^2}+\frac{2f(1-f^2)}{\mc W(f)(1+f^2)}\left [ \frac{4f'^2}{(1+f^2)^2}-\frac{4f^2}{r^2(1+f^2)^2}-1\right ]=0 \end{equation} Next, we set \[ f(r)=:r(1+r)g\left (\frac{r-1}{r+1}\right ). \] This yields \begin{align*} f\left (\frac{1+x}{1-x}\right )&=2\frac{1+x}{(1-x)^2}g(x) \\ f'\left (\frac{1+x}{1-x}\right )&=(1+x)g'(x)+\frac{3+x}{1-x}g(x) \\ f''\left (\frac{1+x}{1-x}\right )&=\tfrac12(1+x)(1-x)^2g''(x)+2(1-x)g'(x)+2g(x) \end{align*} for $x\in [-1,1)$. We compactify the problem by allowing $x\in [-1,1]$. 
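The three substitution formulas above can be sanity-checked symbolically. The following sketch is ours (not part of the paper's computer-assisted proof); it verifies the formulas for a generic cubic $g$, which is enough to catch coefficient slips since the identities are linear in $g$ and its derivatives:

```python
import sympy as sp

# Illustrative check (not from the paper): with f(r) = r(1+r) g((r-1)/(r+1))
# and r = (1+x)/(1-x), reproduce the stated formulas for f, f' and f''.
r, x, a, b, c, d = sp.symbols('r x a b c d')
g = lambda t: a + b*t + c*t**2 + d*t**3

f = r * (1 + r) * g((r - 1) / (r + 1))
sub = {r: (1 + x) / (1 - x)}
gx, dgx, d2gx = g(x), sp.diff(g(x), x), sp.diff(g(x), x, 2)

claims = [
    (f,                2*(1 + x)/(1 - x)**2 * gx),
    (sp.diff(f, r),    (1 + x)*dgx + (3 + x)/(1 - x) * gx),
    (sp.diff(f, r, 2), sp.Rational(1, 2)*(1 + x)*(1 - x)**2*d2gx
                       + 2*(1 - x)*dgx + 2*gx),
]
for expr, claimed in claims:
    assert sp.simplify(expr.subs(sub) - claimed) == 0
```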
In these new variables, Eq.~\eqref{eq:skyf} can be written as \begin{equation} \label{eq:skyg} \mc R(g)(x):=g''(x)+\Phi\big (x,g(x),g'(x)\big )=0 \end{equation} where $\Phi: (-1,1)\times \R^2\to \R$ is given by \begin{align} \label{def:Phi} \Phi(x,y,z):=\frac{1}{\Psi(x,y)}\sum_{k=0}^2\Phi_k(x,y)z^k \end{align} with \begin{align} \label{def:Phik} \Phi_0(x,y)&:=2^{-5}(1+x)^5(3+x)y^7 -2^{-6}(1+x)(1-x)^3(33-58x-16x^2+18x^3+7x^4)y^5 \nonumber \\ &\qquad +2^{-9}(1-x)^7(47-51x+33x^2+3x^3)y^3 +2^{-9}(1-x)^{11}y \nonumber \\ \Phi_1(x,y)&:=-2^{-4}(1+x)^7y^6-2^{-5}(1+x)^2(1-x)^4(14-21x+4x^2+7x^3)y^4 \nonumber \\ &\qquad +2^{-8}(1-x)^8(23-31x+13x^2+3x^3)y^2 +2^{-9}(1-x)^{12} \nonumber \\ \Phi_2(x,y)&:=-(1-x^2)\big [2^{-5}(1+x)^6y^5 +2^{-6}(1+x)^2(1-x)^4(7-10x+7x^2)y^3 \nonumber \\ &\qquad -2^{-9}(1-x)^8(3-10x+3x^2)y \big ] \end{align} and \begin{align} \label{def:Psi} \Psi(x,y):=&(1-x^2)\big [2^{-6}(1+x)^6y^6 +2^{-8}(1+x)^2(1-x)^4(11-10x+11x^2)y^4 \nonumber \\ &+2^{-10}(1-x)^8(11-10x+11x^2)y^2 +2^{-12}(1-x)^{12}\big ] . \end{align} Obviously, $\Psi(-1,y)=\Psi(1,y)=0$ for all $y$ and, since \begin{align} \label{eq:Psi1boundary} \sum_{k=0}^2 \Phi_k(-1,y)z^k&=4(1+8y^2)(y+2z) \nonumber \\ \sum_{k=0}^2 \Phi_k(1,y)z^k&=4y^6(y-2z), \end{align} we obtain the regularity conditions \begin{equation} \label{eq:RegRem} g'(-1)=-\tfrac12 g(-1),\qquad g'(1)=\tfrac12 g(1) \end{equation} for solutions of $\mc R(g)=0$ (at least if $g(1)\not=0$, which is the case we are interested in). \section{Numerical approximation of the Skyrmion} \subsection{Description of the numerical method} \label{sec:num} We will require a fairly precise approximation to the Skyrmion. Already from a numerical point of view this is not entirely trivial since a brute force approach is doomed to fail. That is why we employ a more sophisticated Chebyshev pseudospectral method. 
To this end, we use the basis functions $\phi_n: [-1,1]\to \R$, $n\in \N_0$, given by \begin{equation} \label{def:phi} \phi_n(x):=T_n(x)+a_n(1+x)+b_n(1-x), \end{equation} where $T_n$ are the standard Chebyshev polynomials. The constants $a_n$ and $b_n$ are chosen in such a way that the regularity conditions Eq.~\eqref{eq:RegRem} are satisfied, i.e., we require \begin{equation} \label{eq:Regphi} \phi_n'(-1)+\tfrac12 \phi_n(-1)=\phi_n'(1)-\tfrac12\phi_n(1)=0 \end{equation} for all $n\in \N_0$. This yields $\phi_0=\phi_1=0$ and \begin{align*} a_n&=-T_n'(-1)-\tfrac12 T_n(-1)=(-1)^{n}(n^2-\tfrac12) \\ b_n&=T_n'(1)-\tfrac12 T_n(1)=n^2-\tfrac12 \end{align*} for $n\geq 2$. Then we numerically solve the ($N_0-1$)-dimensional nonlinear root finding problem \[ \mc R\left (\sum_{n=2}^{N_0}\tilde c_n\phi_n\right )(x_k)=0, \quad x_k=\cos\left (\frac{k\pi}{N_0}\right ), \quad k=1,2,\dots,N_0-1 \] for $N_0=43$ with $\mc R$ given in Eq.~\eqref{eq:skyg}. The points $(x_k)_{k=1}^{N_0-1}$ are the standard Gau\ss-Lobatto collocation points for the Chebyshev pseudospectral method \cite{Boy01} with endpoints removed (we only have $N_0-1$ unknown coefficients due to $\phi_0=\phi_1=0$; in the standard Chebyshev method one has $N_0+1$ coefficients to determine). Finally, we rationalize the numerically obtained coefficients $(\tilde c_n)$. The $42$ coefficients $(c_n)_{n=2}^{43}\subset \Q$ obtained in this way are listed in Table \ref{tab:c}. \subsection{Methods for rigorous estimates} In order to obtain good estimates for the complicated rational functions that will show up in the sequel, the following elementary observation is useful. \begin{lemma} \label{lem:estf} Let $f\in C^1([-1,1])$ and set \[ \Omega_N:=\{-1+\tfrac{2k}{N}: k=0,1,2,\dots,N\}\subset [-1,1]\cap \Q,\qquad N\in \N . 
\] Then we have the bounds \begin{align*} \max_{[-1,1]}f&\leq \max_{\Omega_N}f+\tfrac{2}{N}\|f'\|_{L^\infty} \\ \min_{[-1,1]}f&\geq \min_{\Omega_N}f-\tfrac{2}{N}\|f'\|_{L^\infty} \\ \|f\|_{L^\infty}&\leq \max_{\Omega_N}|f|+\tfrac{2}{N}\|f'\|_{L^\infty} \end{align*} for any $N\in \N$. \end{lemma} \begin{proof} The statements are simple consequences of the mean value theorem. \end{proof} \begin{remark} In a typical application one first obtains a rigorous but crude bound on $f'$ by elementary estimates. Then one uses a computer to evaluate $f$ sufficiently many times in order to obtain a good bound on $f$. \end{remark} Another powerful method for estimating complicated functions is provided by interval arithmetic \cite{AleMay00, HicJuEmd01}. We use the following elementary rules for operations involving intervals. \begin{definition} Let $a,b,c,d\in \R$ with $a\leq b$ and $c\leq d$. \emph{Interval arithmetic} is defined by the following operations. \begin{align*} [a,b]+[c,d]&:=[a+c, b+d] \\ [a,b]-[c,d]&:=[a-d, b-c] \\ [a,b]\cdot [c,d]&:=[\min\{ac, ad, bc, bd\}, \max\{ac, ad, bc, bd\}] \\ \frac{[a,b]}{[c,d]}&:=[a,b]\cdot [\tfrac{1}{d}, \tfrac{1}{c}]\quad \mbox{provided }0\notin [c,d]. \end{align*} If $a,b,c,d\in \Q$, we speak of \emph{rational interval arithmetic}. Furthermore, standard (rational) arithmetic is embedded by identifying $a\in \R$ with $[a,a]$. \end{definition} \begin{lemma} \label{lem:interval} Let $x\in [a,b]$ and $y\in [c,d]$ and denote by $*$ any of the elementary operations $+, -, \cdot, /$. Then we have $x*y\in [a,b]*[c,d]$. \end{lemma} \begin{proof} The proof is an elementary exercise. \end{proof} \begin{remark} \label{rem:interval} If $f$ is a complicated rational function of several variables (with rational coefficients), rational interval arithmetic is an effective way to obtain a rigorous and reasonable bound on $f(\Omega)$, provided $\Omega$ is a product of closed intervals with rational endpoints. 
The necessary computations can easily be carried out on a computer as they only involve elementary operations in $\Q$. The quality of the bound, however, depends on the particular algebraic form that is used to represent $f$. Furthermore, in typical applications the bound can be improved considerably by splitting the domain $\Omega$ in smaller subdomains $\Omega_k$, i.e., $\Omega=\bigcup_k \Omega_k$, and by estimating each $f(\Omega_k)$ separately by interval arithmetic. \end{remark} \subsection{Rigorous bounds on the approximate Skyrmion} \begin{definition} We set \[ \gT(x):=\sum_{k=2}^{43}c_n\phi_n(x) \] where $(c_n)_{n=2}^{43}\subset \Q$ are given in Table \ref{tab:c}. \end{definition} \begin{proposition} \label{prop:Rem} The function $\gT$ satisfies \begin{align*} \tfrac{1}{100}+\tfrac{11}{20}&\leq \gT(x) \leq \tfrac{21}{20}-\tfrac{1}{100} \\ \tfrac{1}{100}-\tfrac{11}{20}&\leq \gT'(x)\leq \tfrac{1}{2}-\tfrac{1}{100} \end{align*} for all $x\in [-1,1]$. Furthermore, \[ \|\mc R(\gT)\|_{L^\infty}\leq \tfrac{1}{500}. \] \end{proposition} \begin{proof} From the bound $\|T_n''\|_{L^\infty}\leq \tfrac13 n^2(n^2-1)$ we infer \[ \|\gT''\|_{L^\infty}\leq \sum_{n=2}^{43}|c_n|\|T_n''\|_{L^\infty}\leq \tfrac13\sum_{n=2}^{43}n^2(n^2-1)|c_n|\leq 36 \] and Lemma \ref{lem:estf} with $N=7200$ yields \begin{align*} \max_{[-1,1]} \gT'&\leq \max_{\Omega_N} \gT'+\tfrac{2}{N}\|\gT''\|_{L^\infty} \leq \tfrac{47}{100}+\tfrac{1}{100}\leq \tfrac{1}{2}-\tfrac{1}{100} \\ \min_{[-1,1]}\gT'&\geq \min_{\Omega_N}\gT'-\tfrac{2}{N}\|\gT''\|_{L^\infty} \geq -\tfrac{51}{100}-\tfrac{1}{100}\geq -\tfrac{11}{20}+\tfrac{1}{100}. 
\end{align*} In particular, we obtain $\|\gT'\|_{L^\infty}\leq 1$ and with $N=200$ we find \begin{align*} \max_{[-1,1]} \gT&\leq \max_{\Omega_N} \gT+\tfrac{2}{N}\|\gT'\|_{L^\infty} \leq \tfrac{101}{100}+\tfrac{1}{100}\leq \tfrac{21}{20}-\tfrac{1}{100} \\ \min_{[-1,1]}\gT&\geq \min_{\Omega_N}\gT-\tfrac{2}{N}\|\gT'\|_{L^\infty} \geq \tfrac{58}{100}-\tfrac{1}{100}\geq \tfrac{11}{20}+\tfrac{1}{100}. \end{align*} This proves the first part of the Proposition. Next, we consider \[ \hat\Psi(x,y):=\frac{\Psi(x,y)}{1-x^2}. \] Rational interval arithmetic yields \[ \hat\Psi\left ([-1,0],[\tfrac{11}{20},\tfrac{21}{20}]\right )\subset \left [10^{-3},13\right ],\qquad \hat\Psi\left ([0,1],[\tfrac{11}{20},\tfrac{21}{20}]\right )\subset \left [10^{-4},2\right] \] and thus, $\hat \Psi(x,\gT(x))>0$ for all $x\in [-1,1]$. We set \begin{align*} P(x)&:=\frac{(\frac{21}{10}+\frac13 x-x^2)^7 }{1-x^2}\sum_{k=0}^2 \Phi_k\left (x,\gT(x)\right )[\gT'(x)]^k \\ \qquad Q(x)&:=\frac{(\tfrac{21}{10}+\tfrac13 x-x^2)^7}{1-x^2}\Psi\left (x,\gT(x)\right )=(\tfrac{21}{10}+\tfrac13 x-x^2)^7\hat\Psi(x,\gT(x)), \end{align*} which yields the representation \[ \Phi\left (x,\gT(x),\gT'(x)\right )=\frac{P(x)}{Q(x)}. \] The prefactor $(\frac{21}{10}+\frac13 x-x^2)^7$ is introduced \emph{ad hoc}. It is empirically found to improve some of the estimates that follow. By Eq.~\eqref{def:Psi}, $Q$ is a polynomial with rational coefficients and by the regularity conditions Eq.~\eqref{eq:Regphi} together with Eq.~\eqref{eq:Psi1boundary}, the same is true for $P$. Furthermore, $Q(x)>0$ for all $x\in [-1,1]$ and from the explicit expressions for $\Phi_k$ and $\Psi$, Eqs.~\eqref{def:Phik} and \eqref{def:Psi}, we read off the estimates $\deg P\leq 319$ and $\deg Q\leq 278$. For the following it is advantageous to straighten the denominator. 
To this end we obtain a truncated Chebyshev expansion of $1/Q$, \[ \frac{1}{Q(x)}\approx \sum_{n=0}^{14} r_n T_n(x)=:R(x), \] where \[ (r_n)=(\tfrac{11}{37},-\tfrac{1}{23}, -\tfrac{5}{44}, -\tfrac{3}{13}, \tfrac{9}{44}, \tfrac{1}{12}, -\tfrac{1}{766}, -\tfrac{3}{25}, \tfrac{1}{101}, \tfrac{1}{23}, \tfrac{1}{35}, -\tfrac{1}{36}, -\tfrac{1}{66}, \tfrac{1}{307}, \tfrac{1}{125}). \] The coefficients $(r_n)$ can be obtained numerically by a standard pseudospectral method as explained in Section \ref{sec:num}. Thus, we may write \begin{align*} \mc R(\gT)(x)&=\gT''(x)+\Phi\left (x,\gT(x),\gT'(x)\right )=\gT''(x)+\frac{P(x)}{Q(x)} \\ &=\frac{R(x)Q(x)\gT''(x)+R(x)P(x)}{R(x)Q(x)} \end{align*} and this modification is expected to improve the situation since the denominator $RQ$ is now approximately constant. Note further that $RP$ and $RQ$ are polynomials with rational coefficients and \[ \deg(RP)\leq 333,\quad \deg (RQ)\leq 292,\quad \deg(RQ\gT'')\leq 333. \] For brevity we set \[ \hat P:=RQ\gT''+RP,\qquad \hat Q:=RQ. \] We now re-expand $\hat P$ and $\hat Q$ as \[ \hat P(x)=\sum_{n=0}^{333}\hat p_nT_n(x), \qquad \hat Q(x)=\sum_{n=0}^{292}\hat q_nT_n(x). \] The expansion coefficients $(\hat p_n),(\hat q_n)\subset \Q$ are obtained by solving the linear equations\footnote{The choice of the evaluation points $(x_k)$ is arbitrary but since $\hat P$ has removable singularities at $-1$ and $1$, we prefer to avoid the endpoints. Furthermore, the equation for $(\hat q_n)$ is overdetermined so that one can re-use the computationally expensive LU decomposition.} \begin{align*} \sum_{n=0}^{333}\hat p_nT_n(x_k)=\hat P(x_k),\qquad \sum_{n=0}^{333}\hat q_nT_n(x_k)=\hat Q(x_k),\qquad x_k=-\tfrac12+\tfrac{k}{333} \end{align*} for $k=0,1,\dots,333$. 
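Such re-expansions are plain linear solves over $\Q$. As a toy illustration (our own code, not the routine used to produce the tables; the nodes mirror the choice $x_k=-\tfrac12+\tfrac{k}{N}$ above), re-expanding $x^3$ recovers the exact identity $x^3=\tfrac34 T_1(x)+\tfrac14 T_3(x)$:

```python
from fractions import Fraction as Q

def cheb(n, x):
    # T_n(x) via the three-term recursion, exact over the rationals
    t0, t1 = Q(1), Q(x)
    for _ in range(n):
        t0, t1 = t1, 2 * Q(x) * t1 - t0
    return t0

def solve(A, b):
    # Gauss-Jordan elimination over Q (pivot on the first nonzero entry)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i and M[r][i] != 0:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

# collocation points x_k = -1/2 + k/N, here with N = 3
xs = [Q(-1, 2) + Q(k, 3) for k in range(4)]
A = [[cheb(n, x) for n in range(4)] for x in xs]
coeffs = solve(A, [x**3 for x in xs])   # expansion of x^3 in T_0,...,T_3
```

Since all data are rational and the collocation matrix is invertible, the computed coefficients are exact.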
From the bounds $\|T_n\|_{L^\infty}\leq 1$ and $\|T_n'\|_{L^\infty}\leq n^2$ we infer \begin{align*} \|\hat P\|_{L^\infty}\leq \sum_{n=0}^{333}|\hat p_n| \leq \tfrac{12}{10000}, \qquad \|\hat Q'\|_{L^\infty}\leq\sum_{n=0}^{292}n^2|\hat q_n|\leq 22. \end{align*} Consequently, Lemma \ref{lem:estf} with $N=500$ yields \begin{align*} \min_{[-1,1]} \hat Q\geq \min_{\Omega_N}\hat Q-\tfrac{2}{N}\|\hat Q'\|_{L^\infty}\geq \tfrac{93}{100}-\tfrac{44}{500}\geq \tfrac{4}{5} \end{align*} and, since $\mc R(\gT)=\hat P/\hat Q$, we obtain the estimate \[ \|\mc R(\gT)\|_{L^\infty}\leq \frac{\|\hat P\|_{L^\infty}}{\min_{[-1,1]}\hat Q}\leq \tfrac54\tfrac{12}{10000}=\tfrac{3}{2000}\leq \tfrac{4}{2000}=\tfrac{1}{500}. \] \end{proof} \section{Estimates for the nonlinearity} \noindent By employing rational interval arithmetic, we prove bounds on second derivatives of the function $\Phi$. This leads to explicit bounds for the nonlinear operator. All of the polynomials of two variables $x, y$ that appear in the sequel are implicitly assumed to be given in the following \emph{canonical form} \[ \sum_{k=0}^{k_0}(1+x)^{\alpha_k}(1-x)^{\beta_k}P_k(x)y^k \] where $k_0,\alpha_k,\beta_k \in \N_0$ and $P_k$ are polynomials with rational coefficients and $P_k(\pm 1)\not= 0$. This is important since the outcome of interval arithmetic depends on the representation of the function. \subsection{Pointwise estimates} \begin{lemma} \label{lem:pwN} Let $\Omega=[-1,1]\times [\tfrac{11}{20},\tfrac{21}{20}]\times [-\tfrac{11}{20},\tfrac12]$. Then we have the bounds \begin{align*} \|\partial_2^2 \Phi\|_{L^\infty(\Omega)}&\leq 70 \\ \|\partial_2\partial_3 \Phi\|_{L^\infty(\Omega)}&\leq 22 \\ \|\partial_3^2 \Phi\|_{L^\infty(\Omega)}&\leq 8. \end{align*} \end{lemma} \begin{proof} We begin with the simplest estimate, that is, the bound on $\partial_3^2\Phi$. 
We set \[ \hat \Phi_k(x,y):=\frac{\Phi_k(x,y)}{1-x^2},\qquad \hat\Psi(x,y):=\frac{\Psi(x,y)}{1-x^2} \] with $\Phi_k$ and $\Psi$ from Eqs.~\eqref{def:Phik} and \eqref{def:Psi}, respectively. Observe that $\hat\Phi_2$ is a polynomial. From Eq.~\eqref{def:Phi} we infer \[ \partial_z^2 \Phi(x,y,z)=\frac{2\Phi_2(x,y)}{\Psi(x,y)} =\frac{2\hat\Phi_2(x,y)}{\hat\Psi(x,y)} \] and from the proof of Proposition \ref{prop:Rem} we recall that $\hat\Psi([-1,1],[\frac{11}{20},\frac{21}{20}])\subset [10^{-4},13]$. Consequently, $\partial_3^2 \Phi$ is a rational function without poles in $\Omega$. Rational interval arithmetic then yields\footnote{Here and in the following, the domain $\Omega$ needs to be divided in sufficiently small subdomains $\Omega_k\subset \Omega$ such that $\Omega=\bigcup_k \Omega_k$, see Remark \ref{rem:interval}.} $\partial_3^2\Phi(\Omega) \subset [-8,8]$ and this proves the stated bound for $\partial_3^2\Phi$. Next, we consider $\partial_2\partial_3\Phi$. We have \begin{align*} \partial_y\partial_z\Phi(x,y,z)&=\partial_y \frac{\hat \Phi_1(x,y)+2\hat \Phi_2(x,y)z}{\hat \Psi(x,y)} \\ &=\frac{\hat\Psi(x,y)\partial_y\hat\Phi_1(x,y) -\partial_y\hat\Psi(x,y)\hat\Phi_1(x,y)}{\hat\Psi(x,y)^2} \\ &\quad+2z\frac{\hat\Psi(x,y)\partial_y\hat\Phi_2(x,y) -\partial_y\hat\Psi(x,y)\hat\Phi_2(x,y)}{\hat\Psi(x,y)^2} \end{align*} and, since $\hat\Phi_2$ is a polynomial, the last term is a rational function without poles in $\Omega$. Note further that the numerator of the second to last term appears to be singular at $x\in \{-1,1\}$, but in fact there is a cancellation so that \begin{align*} \hat\Psi(x,y)&\partial_y\hat\Phi_1(x,y)-\partial_y\hat\Psi(x,y)\hat\Phi_1(x,y) \\ &=2^{-11}(1+x)^7(1-x)^3(17-43x+7x^2+3x^3)y^9 \\ &\quad -2^{-11}(1+x)^5(1-x)^7(17-15x+7x^2+7x^3)y^7 \\ &\quad -2^{-14}(1+x)(1-x)^{11}(285-637x+794x^2-386x^3+41x^4+95x^5)y^5 \\ &\quad -2^{-15}(1+x)(1-x)^{15}(25-31x+15x^2+7x^3)y^3 \\ &\quad +2^{-19}(1-x)^{19}(1-12x+3x^2)y. 
\end{align*} We conclude that $\partial_2\partial_3\Phi$ is a rational function without poles in $\Omega$ and rational interval arithmetic yields $\partial_2\partial_3\Phi(\Omega)\subset [-22, 22]$. Finally, we turn to $\partial_2^2 \Phi$. We have \begin{align*} \partial_y \Phi(x,y,z)&=\sum_{k=0}^2 \frac{\hat\Psi(x,y)\partial_y\hat\Phi_k(x,y)z^k -\partial_y\hat\Psi(x,y)\hat\Phi_k(x,y)z^k}{\hat\Psi(x,y)^2} \\ &=\frac{1}{\hat\Psi(x,y)^2}\sum_{k=0}^2 \hat\Psi_k(x,y)z^k \end{align*} where $\hat\Psi_k:=\hat\Psi\partial_2\hat\Phi_k-\partial_2\hat\Psi\hat\Phi_k$. From above we recall that $\hat\Psi_1$ and $\hat\Psi_2$ are polynomials. We obtain \begin{align*} \partial_y^2 \Phi(x,y,z)=\sum_{k=0}^2 \frac{\hat\Psi(x,y)^2\partial_y\hat\Psi_k(x,y)z^k -2\hat\Psi(x,y)\partial_y\hat\Psi(x,y)\hat\Psi_k(x,y)z^k}{\hat\Psi(x,y)^4}. \end{align*} Again, the apparently singular term \[ \hat\Psi(x,y)^2\partial_y\hat\Psi_0(x,y) -2\hat\Psi(x,y)\partial_y\hat\Psi(x,y)\hat\Psi_0(x,y) \] is in fact a polynomial since it exhibits a special cancellation. Consequently, $\partial_2^2\Phi$ is a rational function without poles in $\Omega$ and rational interval arithmetic yields the desired bound. \end{proof} \subsection{The nonlinear operator} In this section we employ Einstein's summation convention, i.e., we sum over repeated indices (the range follows from the context). \begin{lemma} \label{lem:Taylor} Let $U\subset \R^d$ be open and convex and $f\in C^2(U)\cap W^{2,\infty}(U)$. Set \[ M:=\tfrac12\left (\sum_{j=1}^d\sum_{k=1}^d \|\partial_j\partial_k f\|_{L^\infty(U)}^2 \right )^{1/2}. \] Then we have \[ f(x_0+x)=f(x_0)+x^j\partial_j f(x_0)+N(x_0, x) \] where $N$ satisfies the bound \begin{align*} |N(x_0, x)-N(x_0, y)|&\leq M(|x|+|y|)|x-y| \end{align*} for all $x_0, x, y\in \R^d$ such that $x_0, x_0+x, x_0+y\in U$. 
\end{lemma} \begin{proof} From the fundamental theorem of calculus we infer \begin{align*} N(x_0,x)-N(x_0,y)&=f(x_0+x)-f(x_0+y)-(x^j-y^j)\partial_jf(x_0) \\ &=\int_0^1 \partial_t f\big (x_0+y+t(x-y)\big )\d t-(x^j-y^j)\partial_jf(x_0) \\ &=(x^j-y^j)\int_0^1 \big [\partial_j f\big (x_0+y+t(x-y)\big )-\partial_j f(x_0)\big ]\d t \\ &=(x^j-y^j)\int_0^1\int_0^1 \partial_s \partial_j f\big (x_0+sy+st(x-y)\big )\d s \d t \\ &=(x^j-y^j)\int_0^1[y^k+t(x^k-y^k)]\int_0^1 \partial_k\partial_j f\big (x_0+sy+st(x-y)\big )\d s \d t \end{align*} and Cauchy-Schwarz yields \begin{align*} |N(x_0,x)-N(x_0,y)|&\leq |x^j-y^j|\|\partial_j\partial_k f\|_{L^\infty(U)} \int_0^1 \big [t|x^k|+(1-t)|y^k|\big ]\d t \\ &=\tfrac12 |x^j-y^j|(|x^k|+|y^k|)\|\partial_j\partial_k f\|_{L^\infty(U)} \\ &\leq \tfrac12 |x-y|(|x^k|+|y^k|)\left (\sum_{j=1}^d \|\partial_j\partial_k f\|_{L^\infty(U)}^2\right )^{1/2} \\ &\leq M|x-y||x|+M|x-y||y|. \end{align*} \end{proof} \begin{proposition} \label{prop:CN} We have \[ \mc R(\gT+\delta)=\mc R(\gT)+\mc L\delta+\mc N(\delta) \] where \[ \mc Lu(x):=u''(x)+\partial_3\Phi\big(x,\gT(x),\gT'(x)\big )u'(x)+ \partial_2\Phi\big (x,\gT(x),\gT'(x)\big )u(x) \] and $\mc N$ satisfies the bounds \begin{align*} \|\mc N(u)\|_{L^\infty}&\leq 39\,\|u\|_{W^{1,\infty}}^2 \\ \|\mc N(u)-\mc N(v)\|_{L^\infty} &\leq 39\left (\|u\|_{W^{1,\infty}}+\|v\|_{W^{1,\infty}}\right ) \|u-v\|_{W^{1,\infty}} \end{align*} for all $u,v \in C^1[-1,1]$ with $\|u\|_{W^{1,\infty}}, \|v\|_{W^{1,\infty}}\leq \frac{1}{100}$. \end{proposition} \begin{proof} Let $\Omega=[-1,1]\times [\tfrac{11}{20},\tfrac{21}{20}]\times [-\tfrac{11}{20},\tfrac12]$.
Lemma \ref{lem:Taylor} implies \[ \Phi(x,y_0+y,z_0+z)=\Phi(x, y_0, z_0) +\partial_2 \Phi(x, y_0, z_0)y+\partial_3 \Phi(x, y_0, z_0)z +N(x, y_0, z_0, y, z) \] where $N$ satisfies the bound \begin{align*} |N(x, y_0, z_0, y, z)-N(x,y_0,z_0,\tilde y,\tilde z)|&\leq M\sqrt{(y-\tilde y)^2+(z-\tilde z)^2} \left (\sqrt{y^2+z^2}+\sqrt{\tilde y^2+\tilde z^2}\right ) \end{align*} with \[ M=\tfrac12\sqrt{\|\partial_2^2\Phi\|_{L^\infty(\Omega)}^2 +2\|\partial_2\partial_3 \Phi\|_{L^\infty(\Omega)}^2 +\|\partial_3^2\Phi\|_{L^\infty(\Omega)}^2}. \] From Lemma \ref{lem:pwN} we infer $M\leq 39$ and thus, the claim follows from Proposition \ref{prop:Rem} by setting \[ \mc N(u)(x):=N\big (x, \gT(x), \gT'(x), u(x), u'(x)\big ). \] \end{proof} \section{Analysis of the linear operator} \noindent In this section we construct a linear operator $\tilde{\mc L}$ with an explicit fundamental system such that $\mc L-\tilde{\mc L}$ is small in $L^\infty(-1,1)$. Then we invert $\tilde{\mc L}$ and prove an explicit bound on the inverse. \subsection{Asymptotics} First, we study the asymptotic behavior of $\partial_2\Phi$ and $\partial_3 \Phi$. \begin{lemma} \label{lem:asymPhi} We have \begin{align*} \partial_2\Phi\big (x,\gT(x),\gT'(x)\big )&=\frac{2}{1+x}+O(x^0) \\ \partial_3\Phi\big (x,\gT(x),\gT'(x)\big )&=\frac{4}{1+x}+O(x^0) \end{align*} for $x\in (-1,0]$, as well as \begin{align*} \partial_2\Phi\big (x,\gT(x),\gT'(x)\big)&=\frac{2}{1-x}+O(x^0) \\ \partial_3\Phi\big (x,\gT(x),\gT'(x)\big)&=-\frac{4}{1-x}+O(x^0) \end{align*} for $x\in [0,1)$. \end{lemma} \begin{proof} As before, we set \[ \hat\Psi(x,y):=\frac{\Psi(x,y)}{1-x^2} \] with $\Psi$ from Eq.~\eqref{def:Psi}. Then we have \[ \Phi(x,y,z)=\frac{1}{(1-x^2)\hat\Psi(x,y)}\sum_{k=0}^2 \Phi_k(x,y)z^k \] with $\Phi_k$ given in Eq.~\eqref{def:Phik}. Recall that $\hat\Psi$ is a polynomial with no zeros in $[-1,1]\times [\frac{11}{20},\frac{21}{20}]$, see the proof of Proposition \ref{prop:Rem}. 
From Eqs.~\eqref{def:Phik} and \eqref{def:Psi} we obtain \begin{align*} \Phi_0(-1,y)&=4y+32y^3 & \Phi_0(1,y)&=4y^7 \\ \Phi_1(-1,y)&=8+64y^2 & \Phi_1(1,y)&=-8y^6 \\ \Phi_2(-1,y)&=0 & \Phi_2(1,y)&=0 \\ \hat\Psi(-1,y)&=1+8y^2 & \hat\Psi(1,y)&=y^6. \end{align*} Consequently, \begin{align*} \lim_{x\to -1}\left [(1+x)\partial_z \Phi(x,y,z)\right ] &=\frac{\Phi_1(-1,y)}{2\hat\Psi(-1,y)}=4 \\ \lim_{x\to 1}\left [(1-x)\partial_z \Phi(x,y,z)\right ] &=\frac{\Phi_1(1,y)}{2\hat\Psi(1,y)}=-4 . \end{align*} The other assertions are proved similarly. \end{proof} In order to isolate the singular behavior it is natural to write \[ \mc Lu=\mc L_0u +pu'+qu \] where \begin{align*} \mc L_0u(x)&=u''(x)+\left (\frac{4}{1+x}-\frac{4}{1-x}\right )u'(x)+\left (\frac{2}{1+x}+\frac{2}{1-x}\right )u(x) \\ &=u''(x)-\frac{8x}{1-x^2}u'(x)+\frac{4}{1-x^2}u(x) \\ p(x)&=\partial_3\Phi\big (x,\gT(x),\gT'(x)\big )-\frac{4}{1+x}+\frac{4}{1-x} \\ q(x)&=\partial_2\Phi\big (x,\gT(x),\gT'(x)\big )-\frac{2}{1+x}-\frac{2}{1-x}. \end{align*} Lemma \ref{lem:asymPhi} implies that $p$ and $q$ are rational functions with no poles in $[-1,1]$. \begin{lemma} \label{lem:fsasym} The equation $\mc Lu=0$ has fundamental systems $\{u_-,v_-\}$ and $\{u_+,v_+\}$ on $(-1,1)$ which satisfy \begin{align*} u_{-}(x)&=1+O(1+x) \\ u_{-}'(x)&=-\tfrac12+O(1+x) \\ v_{-}(x)&=O((1+x)^{-3}) \end{align*} for $x\in (-1,0]$, as well as \begin{align*} u_+(x)&=1+O(1-x) \\ u_+'(x)&=\tfrac12+O(1-x) \\ v_+(x)&=O((1-x)^{-3}) \end{align*} for $x \in [0,1)$. Furthermore, $u_-,v_-,u_+,v_+\in C^\infty(-1,1)$ and $u_- \in C^\infty([-1,1))$, $u_+\in C^\infty((-1,1])$. \end{lemma} \begin{proof} The coefficients of the equation $\mc Lu=0$ are rational functions and the only poles in $[-1,1]$ are at $x=-1$ and $x=1$. These poles are regular singular points of the equation with Frobenius indices $\{-3,0\}$. Consequently, the statements follow by Frobenius' method. 
\end{proof} \subsection{Numerical construction of an approximate fundamental system} We obtain an approximate fundamental system $\{u_-,u_+\}$, where $u_\pm$ is smooth at $\pm 1$, by a Chebyshev pseudospectral method. As always, special care has to be taken near the singular endpoints $\pm 1$. Solutions $u$ of $\mc Lu=0$ that are regular at $-1$ must satisfy $u'(-1)+\frac12 u(-1)=0$. Similarly, regularity at $1$ requires $u'(1)-\frac12 u(1)=0$, cf.~Eq.~\eqref{eq:RegRem}. If one sets \[ u_\pm(x)=\frac{w_\pm(x)}{(1\pm x)^3}, \] the regularity conditions $u_\pm'(\pm 1)=\pm \frac12 u_\pm(\pm 1)$ translate into $w_\pm'(\pm 1)=\pm 2w_\pm(\pm 1)$. Consequently, we use the basis functions $\psi_{\pm,n}: [-1,1]\to \R$, $n\in \N$, given by \begin{align} \label{def:psi} \psi_{\pm,n}(x)&:=T_n(x)\pm [T_n'(\pm 1)\mp 2T_n(\pm 1)](1\mp x) \end{align} which have the necessary regularity conditions automatically built in, i.e., $\psi_{\pm,n}'(\pm 1)=\pm 2\psi_{\pm,n}(\pm 1)$ for all $n\in \N$. Observe that $w_\pm$ is expected to be bounded on $[-1,1]$, see Lemma \ref{lem:fsasym}. For brevity, we also set \begin{equation} \hat\psi_{\pm,n}(x):=\frac{\psi_{\pm,n}(x)}{(1\pm x)^3}. \end{equation} We enforce the normalization \[ \sum_{n=1}^{N_\pm}c_{\pm,n}\hat\psi_{\pm,n}(\pm 1)=1, \] which is used to fix the coefficients $c_{\pm, 1}$. The remaining coefficients are obtained numerically by solving the root-finding problem \[ \mc L\left (\sum_{n=1}^{N_\pm}c_{\pm,n}\hat\psi_{\pm,n}\right )(x_k)=0,\quad x_k=\cos\left (\frac{k\pi}{N_\pm}\right ),\quad k=1,2,\dots,N_\pm-1 \] with $N_\pm=30$. Finally, we rationalize the floating-point coefficients. The resulting coefficients are listed in Tables \ref{tab:cm} and \ref{tab:cp}. \subsection{Rigorous bounds on the approximate fundamental system} The numerical approximation leads to the following definition.
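The built-in conditions $\psi_{\pm,n}'(\pm 1)=\pm 2\psi_{\pm,n}(\pm 1)$ are easily confirmed mechanically; the following Python sketch (our own check, independent of the construction above) verifies them in exact arithmetic for small $n$:

```python
from fractions import Fraction as Q

def T(n):
    # ascending-power coefficient list of the Chebyshev polynomial T_n over Q
    a, b = [Q(1)], [Q(0), Q(1)]
    for _ in range(n):
        c = [Q(0)] + [2 * x for x in b]
        a = a + [Q(0)] * (len(c) - len(a))
        c = [ci - ai for ci, ai in zip(c, a)]
        a, b = b, c
    return a

def val(p, x):
    return sum(ck * x**k for k, ck in enumerate(p))

def der(p):
    return [k * ck for k, ck in enumerate(p)][1:]

def psi(s, n):
    # psi_{s,n} = T_n + lam*(1 - s*x) with lam = s*(T_n'(s) - 2*s*T_n(s)), s = +-1
    t = T(n)
    lam = s * (val(der(t), Q(s)) - 2 * s * val(t, Q(s)))
    out = t[:]
    out[0] += lam
    out[1] -= lam * s
    return out
```

Each `psi(s, n)` then satisfies $\psi'(s)=2s\,\psi(s)$ exactly, as required.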
\begin{definition} \label{def:fs} We set \[ u_\pm(x):=\frac{w_\pm(x)}{(1\pm x)^3}:=\frac{1}{(1\pm x)^3}\sum_{n=1}^{30} c_{\pm,n}\psi_{\pm,n}(x) \] where the coefficients $(c_{\pm,n})_{n=2}^{30}\subset \Q$ are given in Tables \ref{tab:cm} and \ref{tab:cp}, respectively. The coefficients $c_{\pm,1}$ are determined by the requirement $u_\pm(\pm 1)=1$. \end{definition} Next, we analyze the approximate fundamental system $\{u_-,u_+\}$. \begin{proposition} \label{prop:fs} We have $W(u_-,u_+)(x)=(1-x^2)^{-4} W_0(x)$, where $W_0$ is a polynomial with no zeros in $[-1,1]$. Furthermore, the functions $u_\pm$ satisfy \[ \tilde{\mc L}u_{\pm}=0, \] where $\mc{\tilde L} u:=\mc L_0u+\tilde pu'+\tilde q u$, and \[ \|\tilde p-p\|_{L^\infty}\leq \tfrac{3}{100},\qquad \|\tilde q-q\|_{L^\infty}\leq \tfrac{1}{20}. \] \end{proposition} \begin{proof} We temporarily set $p_\pm(x):=(1\pm x)^{-3}$. Then we have \[ W(u_-,u_+)=W(p_- w_-,p_+ w_+)=W(p_-,p_+)w_-w_++p_-p_+W(w_-,w_+) \] and, since $W(p_-,p_+)(x)=-6(1-x^2)^{-4}$, we infer $W(u_-,u_+)(x)=(1-x^2)^{-4} W_0(x)$ with \[ W_0(x)=-6 w_-(x)w_+(x)+(1-x^2)W(w_-,w_+)(x). \] Obviously, $W_0$ is a polynomial with $\deg W_0\leq 61$, see Definition \ref{def:fs}. We re-expand $W_0$ in Chebyshev polynomials, \[ W_0(x)=\sum_{n=0}^{61} w_{0,n}T_n(x), \] by solving the (possibly overdetermined) system \[ \sum_{n=0}^{61} w_{0,n}T_n(x_k)=W_0(x_k),\quad x_k=-\tfrac12+\tfrac{k}{61}, \quad k=0,1,2,\dots,61 \] for the coefficients $(w_{0,n})_{n=0}^{61}\subset \Q$. From the re-expansion we obtain the estimate \[ \|W_0'\|_{L^\infty}\leq \sum_{n=0}^{61} |w_{0,n}|\|T_n'\|_{L^\infty} \leq \sum_{n=0}^{61} n^2 |w_{0,n}|\leq 400 \] and Lemma \ref{lem:estf} with $N=2000$ yields \[ \max_{[-1,1]}W_0\leq \max_{\Omega_N}W_0+\tfrac{2}{N}\|W_0'\|_{L^\infty} \leq -\tfrac{94}{100}+\tfrac{400}{1000}\leq -\tfrac12. \] This shows that $W_0$ has no zeros in $[-1,1]$. 
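Grid bounds of this type (Lemma \ref{lem:estf}) recur throughout; a minimal Python sketch (our own illustration, assuming $\Omega_N$ denotes the uniform grid $\{-1+\tfrac{2k}{N}: k=0,\dots,N\}$):

```python
from fractions import Fraction as Q

def upper_bound(f, dbound, N):
    # max_{[-1,1]} f  <=  max_{Omega_N} f + (2/N) * (bound on ||f'||_inf),
    # with Omega_N the uniform grid of mesh width 2/N
    grid = [Q(2 * k, N) - 1 for k in range(N + 1)]
    return max(f(x) for x in grid) + Q(2, N) * dbound

bound = upper_bound(lambda x: x * x, Q(2), 10)   # toy case f(x) = x^2, |f'| <= 2
```

The returned value is a guaranteed upper bound as long as `dbound` rigorously dominates $\|f'\|_{L^\infty}$ and $f$ is evaluated exactly, e.g.\ over $\Q$.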
We set \[ \tilde p:=\frac{u_+\mc L_0u_- - u_-\mc L_0 u_+}{W(u_-,u_+)},\qquad \tilde q:=\frac{u_-'\mc L_0u_+ - u_+'\mc L_0 u_-}{W(u_-,u_+)}. \] By construction, we have $\tilde{\mc L} u_\pm=\mc L_0u_\pm+\tilde pu_\pm'+\tilde qu_\pm=0$. In order to estimate $p-\tilde p$, we first note that \[ u_+(x)\mc L_0 u_-(x)-u_-(x)\mc L_0 u_+(x)=O((1-x^2)^{-4}) \] since the most singular terms cancel. Consequently, \[ P_1(x):=(1-x^2)^4[u_+(x)\mc L_0u_-(x) - u_-(x)\mc L_0 u_+(x) ] \] is a polynomial of degree at most $66$. Furthermore, recall that \begin{align*} p(x)&=\partial_3 \Phi\big (x,\gT(x), \gT'(x)\big )+\frac{8x}{1-x^2} =\frac{\Phi_1(x,\gT(x))+2\Phi_2(x,\gT(x))\gT'(x)} {\Psi(x,\gT(x))}+\frac{8x}{1-x^2} \\ &=2\gT'(x)\frac{\hat\Phi_2(x,\gT(x))}{\hat\Psi(x,\gT(x))} +\frac{1}{1-x^2}\frac{\Phi_1(x,\gT(x))+8x\hat\Psi(x,\gT(x))}{\hat\Psi(x,\gT(x))}, \end{align*} where we use the notation \[ \hat\Psi(x,y)=\frac{\Psi(x,y)}{1-x^2},\qquad \hat\Phi_k(x,y)=\frac{\Phi_k(x,y)}{1-x^2}. \] From Eqs.~\eqref{def:Phik}, \eqref{def:Psi} it follows that $\hat\Psi$ and $\hat\Phi_2$ are polynomials. Moreover, we have \[ \Phi_1(x, y)+8x\hat\Psi(x,y)=0 \] for $x\in \{-1,1\}$ and this shows that $p$ is of the form $p(x)=\frac{P_2(x)}{P_3(x)}$ where \[ P_2(x):=2\gT'(x)\hat\Phi_2(x,\gT(x))+\frac{\Phi_1(x,\gT(x))+8x\hat\Psi(x,\gT(x))}{1-x^2} \] is a polynomial of degree at most $263$ and $P_3(x):=\hat\Psi(x,\gT(x))$. Recall that $P_3$ has no zeros on $[-1,1]$ and $\deg P_3\leq 264$. Consequently, we obtain \[ p-\tilde p=\frac{P_2}{P_3}-\frac{P_1}{W_0}=\frac{P_2 W_0-P_1 P_3}{P_3W_0}. \] In order to estimate this expression, we proceed as in the proof of Proposition \ref{prop:Rem}. First, we straighten the denominator, i.e., we try to find an approximation to $\frac{1}{W_0P_3}$ as a truncated Chebyshev expansion. To improve the numerical convergence, it is advantageous to multiply the numerator and denominator by the polynomial $(\frac{13}{10}-x^2)^8$ (this factor is found empirically).
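The effect of straightening the denominator can be seen in a toy example (our own sketch using NumPy's Chebyshev module; the actual polynomials $W_0P_3$ here have far higher degree): for the zero-free denominator $Q(x)=2+x$, a degree-$8$ Chebyshev fit $R$ of $1/Q$ renders $RQ$ nearly constant on $[-1,1]$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# toy denominator Q(x) = 2 + x, zero-free on [-1,1]
deg = 8
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # Chebyshev nodes
R = C.chebfit(nodes, 1.0 / (2.0 + nodes), deg)       # truncated expansion of 1/Q
RQ = C.chebmul(R, C.poly2cheb([2.0, 1.0]))           # R*Q in the Chebyshev basis
xs = np.linspace(-1.0, 1.0, 201)
flatness = np.max(np.abs(C.chebval(xs, RQ) - 1.0))   # deviation of R*Q from 1
```

In the proof, the same idea is carried out over $\Q$ after rationalizing the coefficients of $R$.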
Consequently, we write $p-\tilde p=\frac{P_4}{P_5}$ where \[ P_4(x)=(\tfrac{13}{10}-x^2)^8[P_2(x)W_0(x)-P_1(x)P_3(x)],\qquad P_5(x)=(\tfrac{13}{10}-x^2)^8 P_3(x)W_0(x). \] Note that $P_4$ and $P_5$ are polynomials with rational coefficients and $\deg P_4\leq 346$, $\deg P_5\leq 341$. Next, we obtain an approximation to $1/P_5$ of the form \[ \frac{1}{P_5(x)}\approx \sum_{n=0}^{30}r_n T_n(x)=:R(x) \] where the coefficients $(r_n)_{n=1}^{30}\subset \Q$, obtained by a pseudospectral method, are given in Table \ref{tab:r1} and $r_0=-\frac{623}{23}$. We write $p-\tilde p=\frac{RP_4}{RP_5}$ and note that $\deg (RP_4)\leq 376$, $\deg (RP_5)\leq 371$. We re-expand $RP_4$ and $RP_5$ as \[ RP_4=\sum_{n=0}^{376}p_{4,n}T_n,\qquad RP_5=\sum_{n=0}^{376} p_{5,n}T_n \] by solving the linear equations \begin{align*} \sum_{n=0}^{376}p_{4,n}T_n(x_k)&=RP_4(x_k), \qquad \sum_{n=0}^{376}p_{5,n}T_n(x_k)=RP_5(x_k) \end{align*} for $x_k=-\tfrac12+\tfrac{k}{376}$ and $k=0,1,\dots,376$. This yields the bound \[ \|(RP_5)'\|_{L^\infty}\leq \sum_{n=0}^{376} |p_{5,n}|\|T_n'\|_{L^\infty} \leq \sum_{n=0}^{376}n^2 |p_{5,n}|\leq 17 \] and from Lemma \ref{lem:estf} with $N=1000$ we infer \[ \min_{[-1,1]}RP_5\geq \min_{\Omega_N}RP_5-\tfrac{2}{N}\|(RP_5)'\|_{L^\infty} \geq \tfrac{98}{100}-\tfrac{34}{1000}\geq \tfrac{94}{100}. \] Consequently, we find \[ \|p-\tilde p\|_{L^\infty}=\left \|\tfrac{RP_4}{RP_5}\right \|_{L^\infty} \leq \tfrac{100}{94}\sum_{n=0}^{376}|p_{4,n}|\leq \tfrac{3}{100}. \] The bound for $q-\tilde q$ is proved analogously. \end{proof} \begin{proposition} \label{prop:fsbounds} The approximate fundamental system $\{u_-,u_+\}$ satisfies the bounds \begin{align*} |u_-(x)|\int_x^1 \frac{|u_+(y)|}{|W(y)|}\d y+|u_+(x)|\int_{-1}^x \frac{|u_-(y)|}{|W(y)|}\d y&\leq \tfrac{7}{10} \\ |u_-'(x)|\int_x^1 \frac{|u_+(y)|}{|W(y)|}\d y+ |u_+'(x)|\int_{-1}^x \frac{|u_-(y)|}{|W(y)|}\d y&\leq \tfrac12 \end{align*} for all $x\in (-1,1)$, where $W(y):=W(u_-,u_+)(y)$. 
\end{proposition} \begin{proof} As before, we write $u_\pm(x)=(1\pm x)^{-3}w_\pm(x)$ and recall that $w_\pm$ are polynomials of degree $30$, see Definition \ref{def:fs}. First, we obtain an approximation to $1/W_0$, where $W(x)=(1-x^2)^{-4} W_0(x)$, see Proposition \ref{prop:fs}. By employing the usual pseudospectral method, we find \[ \frac{1}{W_0(x)}\approx \sum_{n=0}^{22}r_n T_n(x)=:R(x) \] with the coefficients $(r_n)_{n=0}^{22}\subset \Q$ given in Table \ref{tab:r2}. Next, we note that \[ |\psi_{-,n}'(x)|\leq |T_n'(x)|+|T_n'(-1)|+2|T_n(-1)|\leq 2n^2+2 \] for all $x\in [-1,1]$, see Eq.~\eqref{def:psi}, and thus, \[ \|w_-'\|_{L^\infty}\leq \sum_{n=1}^{30}|c_{-,n}|\|\psi_{-,n}'\|_{L^\infty} \leq 2\sum_{n=1}^{30}(n^2+1)|c_{-,n}|\leq 60. \] Consequently, Lemma \ref{lem:estf} with $N=600$ yields \[ \min_{[-1,1]}w_-\geq \min_{\Omega_N}w_- - \tfrac{2}{N}\|w_-'\|_{L^\infty}\geq \tfrac{7}{10}-\tfrac{1}{5}=\tfrac12 \] and in particular, $w_->0$. Analogously, we see that $w_+>0$ on $[-1,1]$. Furthermore, from the proof of Proposition \ref{prop:fs} we recall that $W_0<0$ on $[-1,1]$. Consequently, we find \begin{align*} A(x):&=|u_-(x)|\int_x^1 \frac{|u_+(y)|}{|W(y)|}\d y+|u_+(x)|\int_{-1}^x \frac{|u_-(y)|}{|W(y)|}\d y \\ &=-\frac{w_-(x)}{(1-x)^3}\int_x^1 (1-y)^4(1+y)\frac{R(y) w_+(y)}{R(y) W_0(y)}\d y \\ &\quad -\frac{w_+(x)}{(1+x)^3}\int_{-1}^x (1+y)^4(1-y)\frac{R(y) w_-(y)}{R(y) W_0(y)}\d y. \end{align*} Note that $RW_0$ is a polynomial of degree at most $22+61=83$, see the proof of Proposition \ref{prop:fs}. We re-expand $RW_0$ by solving the linear system \[ \sum_{n=0}^{83} a_n T_n(x_k)=R(x_k)W_0(x_k), \quad x_k=-\tfrac12+\tfrac{k}{83},\quad k=0,1,\dots,83 \] over $\Q$, which yields the estimate \[ \|(RW_0)'\|_{L^\infty}\leq \sum_{n=0}^{83}n^2 |a_n|\leq 3. 
\] Thus, from Lemma \ref{lem:estf} with $N=600$ we infer \[ \min_{[-1,1]}RW_0\geq \min_{\Omega_N}RW_0-\tfrac{2}{N}\|(RW_0)'\|_{L^\infty} \geq \tfrac{99}{100}-\tfrac{1}{100}=\tfrac{98}{100} \] and this yields \begin{align*} A(x)&\leq \tfrac{100}{98}\left [\frac{w_-(x)}{(1-x)^3}I_+(x)+\frac{w_+(x)}{(1+x)^3}I_-(x)\right ], \end{align*} where \begin{align} \label{def:Ipm} I_-(x)&:=\int_{-1}^x (1+y)^4(1-y)[-R(y)] w_-(y)\d y \nonumber \\ I_+(x)&:=\int_x^1 (1-y)^4(1+y)[-R(y)]w_+(y)\d y. \end{align} The integrands of $I_\pm$ are polynomials and hence, $I_\pm$ can be computed explicitly. More precisely, we write \[ P_\pm(y):=(1\mp y)^4(1\pm y)[-R(y)]w_\pm(y) \] and note that $\deg P_\pm \leq 57$. Consequently, we may re-expand $P_\pm$ as $P_\pm(y)=\sum_{n=0}^{57}p_{\pm,n}y^n$ by solving the linear systems \[ \sum_{n=0}^{57} p_{\pm,n}x_k^n=P_\pm(x_k),\quad x_k=-\tfrac12+\tfrac{k}{57},\quad k=0,1,2,\dots,57 \] over $\Q$. From this we obtain the explicit expressions \begin{align*} I_-(x)&=\sum_{n=0}^{57}\frac{p_{-,n}}{n+1}x^{n+1}-\sum_{n=0}^{57}\frac{p_{-,n}}{n+1}(-1)^{n+1} \\ I_+(x)&=\sum_{n=0}^{57}\frac{p_{+,n}}{n+1}-\sum_{n=0}^{57}\frac{p_{+,n}}{n+1}x^{n+1}. \end{align*} Furthermore, directly from Eq.~\eqref{def:Ipm} we see that $I_\pm(x)=O((1\mp x)^5)$. Consequently, \[ P(x):=\frac{w_-(x)}{(1-x)^3}I_+(x)+\frac{w_+(x)}{(1+x)^3}I_-(x) \] is a polynomial of degree at most $85$. Thus, another re-expansion yields the Chebyshev representation $P(x)=\sum_{n=0}^{85}p_n T_n(x)$ and we obtain the bound \[ \|P'\|_{L^\infty}\leq \sum_{n=0}^{85} n^2 |p_n|\leq 3. \] Consequently, Lemma \ref{lem:estf} with $N=1000$ yields \[ A(x)\leq \tfrac{100}{98}\|P\|_{L^\infty}\leq \tfrac{100}{98}\left (\max_{\Omega_N}|P|+\tfrac{2}{N}\|P'\|_{L^\infty}\right ) \leq \tfrac{100}{98}\left (\tfrac{591}{1000}+\tfrac{6}{1000} \right )\leq \tfrac{7}{10}. 
\] To prove the second bound, we set $Q_{\pm}(x):=u_\pm'(x)I_{\mp}(x)$ and note that \[ u_\pm'(x)=\frac{w_\pm'(x)}{(1\pm x)^3}\mp 3\frac{w_\pm(x)}{(1\pm x)^4}. \] Consequently, $Q_\pm$ are polynomials with $\deg Q_\pm\leq 84$ and a Chebyshev re-expansion yields \[ \|Q_-'\|_{L^\infty}+\|Q_+'\|_{L^\infty}\leq 20. \] Thus, from Lemma \ref{lem:estf} with $N=800$ we infer\footnote{Strictly speaking, a slight variant of Lemma \ref{lem:estf} is necessary here since the function $|Q_-|+|Q_+|$ is only piecewise $C^1$.} \begin{align*} \max_{[-1,1]} \left (|Q_-|+|Q_+|\right )&\leq \max_{\Omega_N}\left (|Q_-|+|Q_+|\right )+\tfrac{2}{N}\left (\|Q_-'\|_{L^\infty}+\|Q_+'\|_{L^\infty}\right ) \\ &\leq \tfrac{41}{100}+\tfrac{5}{100}=\tfrac{46}{100} \end{align*} which implies \begin{align*} |u_-'(x)|\int_x^1 \frac{|u_+(y)|}{|W(y)|}\d y+ |u_+'(x)|\int_{-1}^x \frac{|u_-(y)|}{|W(y)|}\d y&\leq \tfrac{100}{98}\left (|u_-'(x)I_+(x)|+|u_+'(x)I_-(x)|\right ) \\ &=\tfrac{100}{98}\left (|Q_-(x)|+|Q_+(x)|\right ) \\ &\leq \tfrac{100}{98}\tfrac{46}{100}\leq \tfrac12 \end{align*} for all $x\in (-1,1)$. \end{proof} \subsection{Construction of the Green function} Based on Proposition \ref{prop:fs} we can now invert the operator $\tilde{\mc L}$. A solution of the equation $\tilde{\mc L}u=f \in L^\infty(-1,1)$ is given by \[ u(x)=\int_{-1}^1 G(x,y)f(y)\d y, \] with the Green function \[ G(x,y)=\frac{1}{W(u_{-},u_+)(y)}\left \{\begin{array}{lr}u_{-}(x)u_+(y) & x\leq y \\ u_+(x)u_{-}(y) & x\geq y \end{array} \right . . \] In fact, this is the \emph{unique} solution that belongs to $L^\infty(-1,1)$. Consequently, we have \[ \tilde{\mc L}^{-1}f(x)=\int_{-1}^1 G(x,y)f(y)\d y. \] The bounds from Proposition \ref{prop:fsbounds} immediately imply the following estimate. \begin{corollary} \label{cor:CL} We have the bound \[ \|\tilde{\mc L}^{-1}f\|_{W^{1,\infty}}\leq \|f\|_{L^\infty} \] for all $f\in L^\infty(-1,1)$. 
\end{corollary} \begin{proof} By definition we have \[ \tilde{\mc L}^{-1}f(x)=u_-(x)\int_x^1 \frac{u_+(y)}{W(y)}f(y)\d y +u_+(x)\int_{-1}^x \frac{u_-(y)}{W(y)}f(y)\d y \] and thus, \[ (\tilde{\mc L}^{-1} f)'(x)=u_-'(x)\int_x^1 \frac{u_+(y)}{W(y)}f(y)\d y +u_+'(x)\int_{-1}^x \frac{u_-(y)}{W(y)}f(y)\d y, \] where $W(y)=W(u_-,u_+)(y)$. Consequently, from Proposition \ref{prop:fsbounds} we infer \begin{align*} \|\tilde{\mc L}^{-1}f\|_{W^{1,\infty}}= \left ( \|(\tilde{\mc L}^{-1}f)'\|_{L^\infty}^2+\|\tilde{\mc L}^{-1}f\|_{L^\infty}^2 \right )^{1/2} \leq \sqrt{(\tfrac12)^2+(\tfrac{7}{10})^2}\,\|f\|_{L^\infty}\leq \|f\|_{L^\infty}. \end{align*} \end{proof} \section{Linear stability of the Skyrmion} \noindent Now we are ready to conclude the proof of Theorem \ref{thm:main}. \subsection{The main contraction argument} Recall that we aim to solve the equation $\mc R(\gT+\delta)=0$, i.e., \[ \mc L\delta=-\mc R(\gT)-\mc N(\delta), \] see Proposition \ref{prop:CN}. We rewrite this equation as \[ \tilde{\mc L}\delta=-\mc R(\gT)+(\tilde{\mc L}-\mc L)\delta-\mc N(\delta) \] and apply $\tilde{\mc L}^{-1}$, which yields \[ \delta=-\tilde{\mc L}^{-1}\mc R(\gT)+\tilde{\mc L}^{-1}(\tilde{\mc L}-\mc L)\delta -\tilde{\mc L}^{-1}\mc N(\delta)=:\mc K(\delta). \] Thus, our goal is to prove that $\mc K$ has a fixed point. \begin{lemma} \label{lem:contr} Let $X:=\{u\in C^1[-1,1]: \|u\|_{W^{1,\infty}}\leq \frac{1}{150}\}$. Then $\mc K$ has a unique fixed point in $X$.
\end{lemma} \begin{proof} From Propositions \ref{prop:Rem}, \ref{prop:CN}, \ref{prop:fs}, and Corollary \ref{cor:CL} we obtain the estimate \begin{align*} \|\mc K(u)\|_{W^{1,\infty}}&\leq \|\mc R(\gT)\|_{L^\infty} +\|\mc Lu-\tilde{\mc L}u\|_{L^\infty} +\|\mc N(u)\|_{L^\infty} \\ &\leq \tfrac{1}{500}+\|p-\tilde p\|_{L^\infty}\|u'\|_{L^\infty}+\|q-\tilde q\|_{L^\infty}\|u\|_{L^\infty}+39\|u\|_{W^{1,\infty}}^2 \\ &\leq \tfrac{1}{500}+\tfrac{3}{100}\tfrac{1}{150}+\tfrac{1}{20}\tfrac{1}{150}+39(\tfrac{1}{150})^2 \\ &\leq \tfrac{1}{150}. \end{align*} Consequently, $\mc K(u)\in X$ for all $u\in X$. Furthermore, \begin{align*} \|\mc K(u)-\mc K(v)\|_{W^{1,\infty}}&\leq \|(\mc L-\tilde{\mc L})(u-v)\|_{L^\infty} +\|\mc N(u)-\mc N(v)\|_{L^\infty} \\ &\leq \|p-\tilde p\|_{L^\infty}\|u'-v'\|_{L^\infty}+\|q-\tilde q\|_{L^\infty}\|u-v\|_{L^\infty}+39\tfrac{2}{150}\|u-v\|_{W^{1,\infty}} \\ &\leq \left (\tfrac{3}{100}+\tfrac{1}{20}+\tfrac{78}{150}\right )\|u-v\|_{W^{1,\infty}} \\ &=\tfrac35 \|u-v\|_{W^{1,\infty}} \end{align*} for all $u,v\in X$. Thus, the claim follows from the contraction mapping principle. \end{proof} Finally, we obtain the desired approximation to the Skyrmion. \begin{corollary} \label{cor:F0} There exists a $\delta \in C^1[-1,1]$ with $\|\delta\|_{W^{1,\infty}}\leq \tfrac{1}{150}$ such that the Skyrmion is given by \[ F_0(r)=2\arctan\left (r(1+r)\left (\gT\left (\frac{r-1}{r+1}\right ) +\delta\left ( \frac{r-1}{r+1}\right )\right )\right ). \] \end{corollary} \begin{proof} By construction, Lemma \ref{lem:contr}, and standard ODE regularity theory, there exists a $\delta$ with the stated properties such that $F_0$ is a smooth solution to the original Skyrmion equation \eqref{eq:sky}. Obviously, we have $F_0(0)=0$ and from $\gT(x)\in [\frac12,\frac32]$ for all $x\in [-1,1]$, see Proposition \ref{prop:Rem}, we infer $\lim_{r\to\infty}F_0(r)=\pi$. 
Since the Skyrmion is the unique solution of Eq.~\eqref{eq:sky} with these boundary values \cite{McLTro91}, the claim follows. \end{proof} \subsection{Spectral stability} Recall that the linear stability of the Skyrmion is governed by the Schr\"odinger operator \[ \mc Af(r)=-f''(r)+\frac{2}{r^2}f(r)+V(r)f(r) \] on $L^2(0,\infty)$, where the potential is given by \[ V=-4a^2\frac{1+3a^2+3a^4}{(1+2a^2)^2}, \qquad a(r)=\frac{\sin F_0(r)}{r}. \] From Corollary \ref{cor:F0} and the identity $\sin(2\arctan y)=\frac{2y}{1+y^2}$ we obtain \[ a(r)=\frac{2(1+r)\left [\gT \left(\frac{r-1}{r+1}\right )+\delta\left (\frac{r-1}{r+1}\right )\right ]} {1+r^2(1+r)^2 \left [\gT \left(\frac{r-1}{r+1}\right )+\delta\left (\frac{r-1}{r+1}\right )\right ]^2}. \] Furthermore, from $\|\delta\|_{L^\infty}\leq \frac{1}{150}$ and $\gT(x)\in [\frac12,\frac32]$ for all $x\in [-1,1]$, see Proposition \ref{prop:Rem}, we infer the bounds \begin{align*} |a(r)|&\leq \frac{2(1+r)\left [\gT \left(\frac{r-1}{r+1}\right )+\frac{1}{150}\right ]} {1+r^2(1+r)^2 \left [\gT \left(\frac{r-1}{r+1}\right )-\frac{1}{150}\right ]^2}=:A(r) \\ |a(r)|&\geq \frac{2(1+r)\left [\gT \left(\frac{r-1}{r+1}\right )-\frac{1}{150}\right ]} {1+r^2(1+r)^2 \left [\gT \left(\frac{r-1}{r+1}\right )+\frac{1}{150}\right ]^2}=:B(r) \end{align*} Consequently, we obtain the estimate \[ |V|\leq 4A^2\frac{1+3A^2+3A^4}{(1+2B^2)^2}. \] \begin{lemma} \label{lem:intV} We have the bound \[ \int_0^\infty r^7 |V(r)|^4\d r\leq 130.\] \end{lemma} \begin{proof} By employing the techniques introduced before, it is straightforward to obtain the stated estimate. More precisely, we introduce the new integration variable $x\in [-1,1]$, given by $r=\frac{1+x}{1-x}$, and write \[ \int_0^\infty r^7 |V(r)|^4\d r\leq \int_0^\infty r^7 \left [4A(r)^2\frac{1+3A(r)^2+3A(r)^4}{(1+2B(r)^2)^2}\right ]^4\d r=\int_{-1}^1 \frac{P(x)}{Q(x)}\d x, \] where $P$ and $Q$ are polynomials with rational coefficients. 
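Evaluating the resulting polynomial integrals rests only on the identity $\int_{-1}^1 T_n(x)\d x=\frac{2}{1-n^2}$ for even $n$ (the odd integrals vanish); a minimal helper over $\Q$ (our own code, not the paper's routine):

```python
from fractions import Fraction as Q

def cheb_integral(c):
    # exact value of  int_{-1}^{1} sum_n c[n] T_n(x) dx  over the rationals:
    # int T_n = 2/(1-n^2) for even n, and 0 for odd n
    return sum(cn * Q(2, 1 - n * n) for n, cn in enumerate(c) if n % 2 == 0)

value = cheb_integral([Q(1), Q(0), Q(1)])   # integral of T_0 + T_2
```

Applied to a rational Chebyshev coefficient vector, the result is an exact rational number, so no further rounding analysis is needed.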
As before, by a pseudospectral method, we construct a truncated Chebyshev expansion $R(x)$ of $1/Q(x)$. Next, by a Chebyshev re-expansion we obtain an estimate for $\|(RQ)'\|_{L^\infty}$ and Lemma \ref{lem:estf} yields a lower bound on $\min_{[-1,1]}RQ$ which is close to $1$. From this we find \[ \int_{-1}^1 \frac{P(x)}{Q(x)}\d x=\int_{-1}^1 \frac{R(x)P(x)}{R(x)Q(x)}\d x \leq \frac{1}{\min_{[-1,1]}RQ}\int_{-1}^1 R(x)P(x)\d x \] and the last integral can be evaluated explicitly since the integrand is a polynomial. \end{proof} We can now conclude the main result. \begin{proof}[Proof of Theorem \ref{thm:main}] From Lemma \ref{lem:intV} we obtain \[ 3^{-7}\frac{3^3\Gamma(8)}{4^4\Gamma(4)^2}\int_0^\infty r^7 |V(r)|^4\d r\leq \frac{2275}{2592}<1. \] Consequently, the GGMT bound, see Appendix \ref{app:GGMT}, implies that $\mc A$ has no eigenvalues. \end{proof} \begin{appendix} \section{The GGMT bound} \label{app:GGMT} \noindent Consider $H=-\Delta + V$ in $\R^3$ where $V\in L^1\cap L^\infty(\R^3)$ (say) and radial. The GGMT bound~\cite{GlaGroMarThi75} is as follows (see also~\cite{GlaGroMar78}). We restrict ourselves to a smaller range of $p$ than necessary since it is technically easier and sufficient. \begin{theorem}\label{thm:GGMT} Write $V=V_+ - V_-$ where $V_{\pm}\ge0$. For any $\f32\le p<\I$, if \EQ{\label{eq:GGMT} \f{(p-1)^{p-1} \Gamma(2p)}{p^p \Gamma^2(p)} \int_{0}^\infty r^{2p-1} V_{-}^p(r)\, dr <1 } then $H$ has no negative eigenvalues. Furthermore, zero energy is neither an eigenvalue nor a resonance. \end{theorem} \begin{proof} Suppose $H$ has negative spectrum. Then there exists a ground state, $H\psi=E\psi$ with $\psi\in H^2(\R^3)$, $\|\psi\|_2=1$, and radial, $E<0$. 
So \EQ{\label{eq:0} \LR{ H\psi,\psi} <0 } which implies in particular that for any $\al\in \R$, \EQ{ \label{eq:V-} \int_{\R^3} |\nabla\psi(x)|^2\, dx &< \int_{\R^3} V_-(x) |\psi(x)|^2\, dx \\ &\leq \| r^\al V_-\|_p \| r^{-\f{\al}{2}} \psi\|_{2q}^2 } by H\"older, $\f1p+\f1q=1$ (which is only meaningful if the right-hand side is finite). We set $p(2-\al)=3$, $q(1+\al)=3$, which requires that $-1\leq \al\leq 2$. In fact, $\I\geq p\ge \f32$ means precisely that $2\geq \al\geq0$, and $1\le q\le 3$. Set \EQ{\label{eq:mu q} \mu_q := \inf_{\psi\in H^1_{\mathrm{rad}}\setminus\{0\}} \f{ \| \nabla \psi\|_2^2}{\| r^{\f{q-3}{2q}} \psi \|_{2q}^2} } Note that the denominator here is always a positive finite number. Indeed, it suffices to check this for the endpoint cases $q=1$ and $q=3$. This amounts to \[ \| r^{-1} \psi\|_2 + \|\psi\|_6 \leq C\|\nabla \psi\|_2 \qquad \forall\; \psi\in H^1(\R^3) \] which is true by the Hardy and Sobolev inequalities. By Lemma~\ref{lem:calV}, $\mu_q>0$ and its value can be explicitly computed. Thus, by \eqref{eq:V-}, \[ \|\nabla\psi\|_2^2 < \mu_q^{-1} \| r^\al V_-\|_p \|\nabla\psi\|_2^2 \] which contradicts $\mu_q^{-1} \| r^\al V_-\|_p<1$, the latter being precisely condition~\eqref{eq:GGMT}. It remains to discuss the case where $H$ has no negative spectrum but a zero eigenvalue or a zero resonance. If $0$ is an eigenvalue, then we have a solution $\psi\in H^2$ of \[ -\Delta \psi = V\psi \] which means that \[ \psi(x) = - \f{1}{4\pi} \int_{\R^3} \f{V(y)\psi(y)}{|x-y|}\, dy \] If $\int V\psi\ne0$, then $\psi(x)\simeq |x|^{-1}$ for large $x$, which is not $L^2$. So $\int V\psi=0$ and $\psi(x)=O(|x|^{-2})$ as $x\to\I$. One has $\LR{H\psi,\psi}=0$ instead of~\eqref{eq:0}. Replacing $H$ with $H_\eps=H-\eps e^{-|x|^2}$ for small $\eps>0$, we conclude that \[ \LR{ H_\eps \psi,\psi}<0 \] and $H_\eps$ therefore has negative spectrum, while~\eqref{eq:GGMT} still holds for small~$\eps$. By the previous case, this gives a contradiction.
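Here the strict negativity of the perturbed quadratic form is immediate from $\LR{H\psi,\psi}=0$:
\[ \LR{ H_\eps \psi,\psi}=\LR{H\psi,\psi}-\eps\int_{\R^3}e^{-|x|^2}|\psi(x)|^2\, dx = -\eps\int_{\R^3}e^{-|x|^2}|\psi(x)|^2\, dx<0, \]
since $\psi$ does not vanish identically.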
If $0$ is a resonance, this means that there is a solution $\psi \in H^2_{\mathrm{loc}}(\R^3)$ with $\psi(x)\simeq |x|^{-1}$ as $x\to\I$ (and by the reasoning above this holds if and only if $\int V\psi\ne 0$). In particular, since $\nabla \psi \in L^2$ and since $\int V\psi^2$ is absolutely convergent, we still arrive at the conclusion that $\LR{H\psi,\psi}=0$. Substituting $H_\eps$ for $H$ as above again gives a contradiction. To be precise, we evaluate the quadratic form of $H_\eps$ on the functions $$\psi_R(x):=\chi(x/R) \psi(x)$$ where $\chi$ is a standard bump function of compact support and equal to~$1$ on the unit ball. Sending $R\to\I$ then shows that $H_\eps$ has negative spectrum. \end{proof} The following lemma establishes the constant $\mu_q$ in the previous proof. The variational problem \eqref{eq:mu q} is invariant under a two-dimensional group of symmetries: $S_{\xi,\eta}(\psi)(x) = e^{\xi}\psi(e^{\eta} x)$, $\xi,\eta\in\R$. The scaling of the independent variable leads to a loss of compactness. To make it easier to apply the standard methods of concentration-compactness, we employ the same change of variables as in~\cite{GlaGroMarThi75}. \begin{lemma}\label{lem:calV} For $1< q\le 3$ we have \[ \mu_q = \f{p}{p-1} \Big[ 4\pi \f{(p-1)\Gamma^2(p)}{\Gamma(2p)}\Big]^{\f1p}, \] with $p$ the dual exponent to $q$. Equality in \eqref{eq:mu q} is attained by the radial functions \[ \psi_q(x) = \f{a}{\big(1+b r^{\f{1}{p-1}}\big)^{p-1}} \] where $a,b>0$ are arbitrary. \end{lemma} \begin{proof} We begin with the following claim \EQ{\label{eq:claim} \mu_q = \inf_{\fy\in H^1(\R),\fy\ne0} (4\pi)^{\f1p}\f{ \int_{-\I}^\I \big( \fy'^2 +\f14 \fy^2)(x)\, dx}{\Big( \int_{-\I}^\I \fy^{2q}(x)\, dx\Big)^{\f1q}} } To prove it, first note that we may take the infimum in \eqref{eq:mu q} over radial functions $\psi \in C^1(\R^3)$ of compact support. For this we use that $1\le q\le 3$ to control the denominator by the $\dot H^1(\R^3)$ norm. 
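More precisely, with $\theta=\f{3-q}{2q}\in [0,1]$ for $1\le q\le 3$, H\"older's inequality gives the standard interpolation bound
\[ \| r^{\f{q-3}{2q}} \psi \|_{2q} \leq \| r^{-1}\psi\|_2^{\theta}\, \| \psi\|_6^{1-\theta}, \]
and both factors on the right-hand side are controlled by $\|\nabla\psi\|_2$ through the Hardy and Sobolev inequalities.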
Then set $\fy(x)= \sqrt{r}\psi(r)$, $r=e^x$ and calculate \EQ{ \nonumber \| \nabla \psi\|_2^2 &= 4\pi \int_{-\I}^\I \big( \fy'^2 +\f14 \fy^2)(x)\, dx \\ \| r^{\f{q-3}{2q}} \psi \|_{2q}^2 &= (4\pi)^{\f1q} \Big( \int_{-\I}^\I \fy^{2q}(x)\, dx\Big)^{\f1q} } Note that $\fy(x)\to0$ as $x\to\pm\I$ (exponentially as $x\to-\I$, and identically vanishing for large $x>0$). This gives~\eqref{eq:claim} by density. Let $\fy_n\in H^1(\R)$ be a minimizing sequence for~\eqref{eq:claim} with $\|\fy_n\|_{2q}=1$. Clearly, $\fy_n$ is bounded in $H^1(\R)$ and by Sobolev embedding, it follows that $\mu_q>0$. By the concentration compactness method, see Proposition~3.1 in~\cite{HmiKer05}, there exist $V_j\in H^1(\R)$ for all $j\ge 1$, and $x_{j,n}\in \R$ such that (everything up to passing to subsequences) \EQ{\nn |x_{j,n} - x_{k,n} | &\to \I \text{\ \ \ \ for all $j\ne k$ as\ \ } n\to\I \\ \fy_n &= \sum_{j=1}^\ell V_j(\cdot-x_{j,n}) + g_{n,\ell} \text{\ \ \ \ for all\ \ } \ell\ge1 } where $$\limsup_{n\to\I} \|g_{n,\ell}\|_p \to 0$$ as $\ell\to\I$ for any $2<p<\I$. Moreover, \EQ{ \| \fy_n'\|_2^2 &= \sum_{j=1}^\ell \| V_j'\|_2^2 + \|g_{n,\ell}' \|_2^2 + o(1) \\ \| \fy_n \|_2^2 &= \sum_{j=1}^\ell \| V_j \|_2^2 + \|g_{n,\ell} \|_2^2 +o(1) } as $n\to\I$, and \[ 1= \|\fy_n \|_{2q}^{2q} = \sum_{j=1}^\ell \|V_j\|_{2q}^{2q} + o(1) \] as $n,\ell\to\I$. 
To be precise, for any $\eps>0$ we may find $\ell$ such that \[ \Big | 1- \sum_{j=1}^\ell \|V_j\|_{2q}^{2q} \Big| < \eps \] We have \EQ{ \label{eq:part} \| \fy_n'\|_2^2 + \f14 \|\fy_n\|_2^2 &\ge (4\pi)^{-\f1p}\mu_q \big( \sum_{j=1}^\ell \| V_j\|_{2q}^{2} + \|g_{n,\ell}\|_{2q}^2\big) -o(1) \\ &\ge (4\pi)^{-\f1p}\mu_q \big( \sum_{j=1}^\ell \| V_j\|_{2q}^{2q} + \|g_{n,\ell}\|_{2q}^{2q}\big)^{\f1q} -o(1) } If there were two nonzero profiles $V_j$, or if $$ \limsup_{n\to\I} \|g_{n,\ell}\|_{2q} \not\to0 $$ as $\ell\to\I$, then there exists $\delta>0$ (since $q>1$) so that \EQ{ \nn \| \fy_n'\|_2^2 + \f14 \|\fy_n\|_2^2 &\ge (4\pi)^{-\f1p}\mu_q (1+\delta ) -o(1) } as $n\to\I$, contradicting that $\fy_n$ is a minimizing sequence. So up to a translation, we may assume that $\fy_n$ is compact in $L^{2q}(\R)$ and in fact that $\fy_n\to \fy_\I$ in $L^{2q}(\R)$. In particular, $\|\fy_\I \|_{2q}=1$. Furthermore, we have the weak convergence $\fy_n'\rightharpoonup \fy_\I'$, $\fy_n\rightharpoonup \fy_\I$ in $L^2(\R)$ which implies that \[ \| \fy_\I'\|_2^2 + \f14 \|\fy_\I\|_2^2 \le \liminf_{n\to\I} \big(\| \fy_n'\|_2^2 + \f14 \|\fy_n\|_2^2\big) = (4\pi)^{-\f1p}\mu_q \] In conclusion, $\fy_n\to\fy_\I$ strongly in $H^1$, and $\fy_\I\in H^1(\R)\setminus\{0\}$ is a minimizer for $\mu_q$. Replacing $\fy_n$ by $|\fy_n|$, we may assume that $\fy_\I\ge0$. The associated Euler--Lagrange equation is \[ -2\fy_\I''+\f12 \fy_\I = k \fy_\I^{2q-1} \] first in the weak sense, but then in the classical one by basic regularity. Furthermore, $\fy_\I>0$, $\fy_\I\in C^\I(\R)$, and $k>0$. Since $q>1$ we may absorb the constant $k$ by rescaling, which leads to an exponentially decaying, positive smooth solution to the equation \[ -f''(x) +\f14 f(x) = f^{2q-1}(x) \] By the phase portrait, such an $f$ is unique up to translation in $x$. It is given by the homoclinic orbit emanating from the origin and encircling the positive equilibrium.
This homoclinic orbit (and its reflection together with the origin) makes up the algebraic curve \EQ{\label{eq:hom} - f'^2 + \f14 f^2 = \f{1}{q} f^{2q} } The explicit form of the solution is obtained by integrating the first-order ODE \eqref{eq:hom} which leads to \EQ{\label{eq:expl} f(x-x_0) = \Big( \f{q}{4} \Big)^{\f{1}{2(q-1)}} \big( \cosh((q-1)x/2)\big)^{-\f{1}{q-1}} } where $x_0\in\R$. Finally, \[ \mu_q^p = 4\pi \int_{-\I}^\I f(x)^{2q}\, dx \] with $f$ as on the right-hand side of \eqref{eq:expl}. Thus, \EQ{\label{eq:muq cosh} \mu_q^p = 4\pi \Big( \f{q}{4} \Big)^{\f{q}{q-1}} \int_{-\I}^\I \big( \cosh((q-1)x/2)\big)^{-\f{2q}{q-1}} \, dx } To proceed, we recall that for any $b>0$ \EQ{\nn \int_{-\I}^\I (\cosh x)^{-b}\, dx &= 2^b \int_0^\I \f{u^{b-1}}{(1+u^2)^b}\, du = \f{\sqrt{\pi}\,\Gamma(b/2)}{\Gamma((b+1)/2)} } Inserting this into \eqref{eq:muq cosh} yields \[ \mu_q^p = 4\pi \Big( \f{q}{4} \Big)^{p}\f{2\sqrt{\pi}}{q-1} \f{\Gamma(p)}{\Gamma(p+\f12)} \] Using $\Gamma(p)\Gamma(p+\f12)=2^{1-2p}\sqrt{\pi}\,\Gamma(2p)$ this turns into \[ \mu_q^p = 4\pi \f{q^p}{q-1} \f{\Gamma(p)^2}{\Gamma(2p )} = 4\pi \Big( \f{p}{p-1} \Big)^p (p-1) \f{\Gamma(p)^2}{\Gamma(2p )} \] which is what the lemma set out to prove. The minimizers are obtained by transforming \eqref{eq:expl} back to the original coordinates. \end{proof} Theorem~\ref{thm:GGMT} is insufficient for linearized Skyrme. The reason is that the Helmholtz equation associated with the latter is of the form \[ -\psi'' + \big(\frac{2}{r^2} + V(r)\big)\psi = k^2\psi \] which has extra repulsivity coming from the $\frac{2}{r^2}$ potential. On the level of the Schr\"odinger equation in $\R^3$ this precisely amounts to restricting to angular momentum $\ell=1$. So we expect that a weaker condition on $V$ than the one stated in Theorem~\ref{thm:GGMT} will suffice. This is essential for our applications to linearized Skyrme stability.
In fact, as already noted in \cite{GlaGroMarThi75}, for general angular momentum $\ell>0$ we are faced with the minimization problem which is obtained from~\eqref{eq:claim} by replacing $\frac14\fy^2$ with $\frac14(2\ell+1)^2 \fy^2$. However, the scaling $$\fy(x)=\fy_1((2\ell+1)x)$$ takes us back to the minimization problem~\eqref{eq:claim} with an extra factor of $(2\ell+1)^{1+\frac1q}$. Recall that Theorem~\ref{thm:GGMT} is nothing other than $\mu_q^{-p} \| r^\alpha V_-\|_p^p<1$. Therefore, to exclude eigenfunctions and threshold resonances of angular momentum $\ell$, condition \eqref{eq:GGMT} needs to be multiplied on the left by a factor of \[ (2\ell+1)^{-p(1+\frac1q)} = (2\ell+1)^{-(2p-1)} \] In summary, the sufficient GGMT criterion for absence of bound states and threshold resonances in angular momentum $\ell$ reads \EQ{\label{eq:GGMT*} \f{(p-1)^{p-1} \Gamma(2p)}{(2\ell+1)^{2p-1}p^p \Gamma^2(p)} \int_{0}^\infty r^{2p-1} V_{-}^p(r)\, dr <1 } for any $\frac32\le p<\infty$. For linear Skyrme stability we use this criterion with $\ell=1$ and $p=4$.
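For these values the constant in \eqref{eq:GGMT*} evaluates to
\[ \f{(p-1)^{p-1} \Gamma(2p)}{(2\ell+1)^{2p-1}p^p \Gamma^2(p)} = \f{3^3\,\Gamma(8)}{3^7\, 4^4\, \Gamma(4)^2} = \f{27\cdot 5040}{2187\cdot 256\cdot 36} = \f{35}{5184}, \]
which is the constant appearing in the proof of Theorem~\ref{thm:main}, where the bound $\int_0^\infty r^7|V(r)|^4\, dr\leq 130$ from Lemma~\ref{lem:intV} yields $\f{35}{5184}\cdot 130=\f{2275}{2592}<1$.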
\section{Tables of expansion coefficients} \begin{table}[ht] \centering \caption{Expansion coefficients for approximate Skyrmion} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$\\ \hline $c_n$ & $\frac{13039}{72146}$ & $\frac{2909}{229801}$ & $-\frac{11670}{500821}$ & $-\frac{301}{39257}$ & $\frac{621}{122813}$ & $\frac{871}{221909}$ & $-\frac{42}{55481}$\\[2pt] \hline $n$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$\\ \hline $c_n$ & $-\frac{64}{36275}$ & $-\frac{18}{77071}$ & $\frac{94}{139483}$ & $\frac{13}{40736}$ & $-\frac{31}{158602}$ & $-\frac{9}{42953}$ & $\frac{2}{100443}$\\[2pt] \hline $n$ & $16$ & $17$ & $18$ & $19$ & $20$ & $21$ & $22$\\ \hline $c_n$ & $\frac{11}{105144}$ & $\frac{2}{76485}$ & $-\frac{5}{121747}$ & $-\frac{5}{186976}$ & $\frac{1}{92977}$ & $\frac{2}{118683}$ & $\frac{1}{1805239}$\\[2pt] \hline $n$ & $23$ & $24$ & $25$ & $26$ & $27$ & $28$ & $29$\\ \hline $c_n$ & $-\frac{1}{122146}$ & $-\frac{1}{317774}$ & $\frac{1}{332077}$ & $\frac{1}{377050}$ & $-\frac{1}{1689008}$ & $-\frac{1}{640158}$ & $-\frac{1}{3975308}$\\[2pt] \hline $n$ & $30$ & $31$ & $32$ & $33$ & $34$ & $35$ & $36$\\ \hline $c_n$ & $\frac{1}{1402566}$ & $\frac{1}{2606123}$ & $-\frac{1}{4324868}$ & $-\frac{1}{3550160}$ & $\frac{1}{54392687}$ & $\frac{1}{6563655}$ & $\frac{1}{21696717}$\\[2pt] \hline $n$ & $37$ & $38$ & $39$ & $40$ & $41$ & $42$ & $43$\\ \hline $c_n$ & $-\frac{1}{16289508}$ & $-\frac{1}{21329884}$ & $\frac{1}{86396283}$ & $\frac{1}{36311458}$ & $\frac{1}{128282128}$ & $-\frac{1}{128832209}$ & $-\frac{1}{196527234}$\\[2pt] \hline \end{tabular} \label{tab:c} \end{table} \begin{table}[ht] \centering \caption{Expansion coefficients for $u_-$} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\ \hline $c_{-,n}$ & $c_{-,1}$ & $\frac{5384}{2621}$ & $-\frac{711}{1909}$ & $\frac{417}{3424}$ & $\frac{18}{1817}$ & $\frac{2}{3169}$\\[2pt] \hline 
$n$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$\\ \hline $c_{-,n}$ & $-\frac{23}{3399}$ & $-\frac{22}{4655}$ & $\frac{4}{4097}$ & $\frac{7}{2589}$ & $\frac{2}{3607}$ & $-\frac{8}{6937}$\\[2pt] \hline $n$ & $13$ & $14$ & $15$ & $16$ & $17$ & $18$\\ \hline $c_{-,n}$ & $-\frac{3}{4310}$ & $\frac{1}{2886}$ & $\frac{2}{4135}$ & $-\frac{1}{90728}$ & $-\frac{1}{3865}$ & $-\frac{1}{11699}$\\[2pt] \hline $n$ & $19$ & $20$ & $21$ & $22$ & $23$ & $24$\\ \hline $c_{-,n}$ & $\frac{1}{9323}$ & $\frac{1}{11955}$ & $-\frac{1}{36563}$ & $-\frac{1}{18412}$ & $-\frac{1}{192414}$ & $\frac{1}{37653}$\\[2pt] \hline $n$ & $25$ & $26$ & $27$ & $28$ & $29$ & $30$\\ \hline $c_{-,n}$ & $\frac{1}{79523}$ & $-\frac{1}{119499}$ & $-\frac{1}{105631}$ & $-\frac{1}{1857125}$ & $\frac{1}{285782}$ & $\frac{1}{619658}$\\[2pt] \hline \end{tabular} \label{tab:cm} \end{table} \begin{table}[ht] \centering \caption{Expansion coefficients for $u_+$} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\ \hline $c_{+,n}$ & $c_{+,1}$ & $\frac{1371}{769}$ & $\frac{1734}{3319}$ & $\frac{230}{3431}$ & $-\frac{167}{6071}$ & $\frac{33}{5231}$\\[2pt] \hline $n$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$\\ \hline $c_{+,n}$ & $\frac{59}{4580}$ & $-\frac{19}{7202}$ & $-\frac{19}{2849}$ & $\frac{1}{13495}$ & $\frac{11}{3203}$ & $\frac{4}{4737}$\\[2pt] \hline $n$ & $13$ & $14$ & $15$ & $16$ & $17$ & $18$\\ \hline $c_{+,n}$ & $-\frac{7}{4481}$ & $-\frac{2}{2217}$ & $\frac{1}{1808}$ & $\frac{1}{1529}$ & $-\frac{1}{11699}$ & $-\frac{1}{2637}$\\[2pt] \hline $n$ & $19$ & $20$ & $21$ & $22$ & $23$ & $24$\\ \hline $c_{+,n}$ & $-\frac{1}{12409}$ & $\frac{1}{5625}$ & $\frac{1}{9479}$ & $-\frac{1}{16801}$ & $-\frac{1}{12593}$ & $\frac{1}{300485}$\\[2pt] \hline $n$ & $25$ & $26$ & $27$ & $28$ & $29$ & $30$\\ \hline $c_{+,n}$ & $\frac{1}{21636}$ & $\frac{1}{56764}$ & $-\frac{1}{51904}$ & $-\frac{1}{51451}$ & $\frac{1}{307476}$ & $\frac{1}{121058}$\\[2pt] \hline \end{tabular} 
\label{tab:cp} \end{table} \begin{table}[ht] \centering \caption{Expansion coefficients for approximation to $1/P_5$ (Proposition \ref{prop:fs})} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\ \hline $r_n$ & $-\frac{437}{24}$ & $-\frac{811}{20}$ & $-\frac{229}{17}$ & $-\frac{2391}{61}$ & $-\frac{397}{30}$ & $-\frac{178}{7}$\\[2pt] \hline $n$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$\\ \hline $r_n$ & $-\frac{184}{27}$ & $-\frac{518}{27}$ & $-\frac{98}{15}$ & $-\frac{1345}{114}$ & $-\frac{86}{31}$ & $-\frac{284}{39}$\\[2pt] \hline $n$ & $13$ & $14$ & $15$ & $16$ & $17$ & $18$\\ \hline $r_n$ & $-\frac{59}{24}$ & $-\frac{156}{35}$ & $-\frac{107}{106}$ & $-\frac{86}{37}$ & $-\frac{23}{31}$ & $-\frac{23}{16}$\\[2pt] \hline $n$ & $19$ & $20$ & $21$ & $22$ & $23$ & $24$\\ \hline $r_n$ & $-\frac{9}{26}$ & $-\frac{73}{110}$ & $-\frac{5}{27}$ & $-\frac{13}{32}$ & $-\frac{1}{9}$ & $-\frac{2}{11}$\\[2pt] \hline $n$ & $25$ & $26$ & $27$ & $28$ & $29$ & $30$\\ \hline $r_n$ & $-\frac{1}{24}$ & $-\frac{3}{29}$ & $-\frac{1}{29}$ & $-\frac{2}{33}$ & $-\frac{1}{62}$ & $-\frac{1}{47}$\\[2pt] \hline \end{tabular} \label{tab:r1} \end{table} \begin{table}[ht] \centering \caption{Expansion coefficients for approximation to $1/W_0$ (Proposition \ref{prop:fsbounds})} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$\\ \hline $r_n$ & $-\frac{19}{69}$ & $\frac{11}{106}$ & $\frac{37}{103}$ & $-\frac{14}{107}$ & $-\frac{23}{128}$ & $\frac{7}{81}$ & $\frac{9}{109}$ & $-\frac{3}{67}$\\[2pt] \hline $n$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$\\ \hline $r_n$ & $-\frac{3}{79}$ & $\frac{2}{99}$ & $\frac{2}{111}$ & $-\frac{1}{124}$ & $-\frac{1}{114}$ & $\frac{1}{376}$ & $\frac{1}{233}$ & $-\frac{1}{1792}$\\[2pt] \hline $n$ & $16$ & $17$ & $18$ & $19$ & $20$ & $21$ & $22$ & $23$\\ \hline $r_n$ & $-\frac{1}{481}$ & $-\frac{1}{8890}$ & $\frac{1}{1025}$ & 
$\frac{1}{4569}$ & $-\frac{1}{2336}$ & $-\frac{1}{6718}$ & $\frac{1}{5790}$ & $0$\\[2pt] \hline \end{tabular} \label{tab:r2} \end{table} \end{appendix} \clearpage \bibliography{skyrme} \bibliographystyle{plain} \end{document}
TITLE: Test for Convergence $\sum_{n=1}^\infty\frac{7^{3n}}{n!}$ and $\sum_{n=1}^\infty\sqrt{\ln\frac{n+5}{n+2}}$ QUESTION [0 upvotes]: Test the following series for convergence $$\sum_{n=1}^\infty\frac{7^{3n}}{n!}$$ and $$\sum_{n=1}^\infty\sqrt{\ln\frac{n+5}{n+2}}$$ I need this for studying purposes: I have an exam next week and I am struggling with these. Can anyone assist? I tried using the ratio test; am I on the right track? REPLY [3 votes]: For the first question, use the ratio test. For the second question, notice that when $n$ is large, $\frac{n + 5}{n + 2} < e$, so $\log\frac{n + 5}{n + 2} < 1$. Now, taking the square root of a number between zero and one gives a larger number. That is, $$ 0 \leq x \leq 1 \implies \sqrt{x} \geq x. $$ Therefore, for $n$ large enough, say $n \geq 100$, $$ \sum_{n=100}^{\infty} \sqrt{\log\frac{n + 5}{n + 2}} \geq \sum_{n=100}^{\infty} \log\frac{n + 5}{n + 2} = \log\left(\prod_{n=100}^{\infty} \frac{n + 5}{n + 2}\right) = \infty, $$ since the product telescopes: the partial products equal $\frac{(N+3)(N+4)(N+5)}{102\cdot 103\cdot 104}\to\infty$ as $N\to\infty$.
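To complete the first part: with $a_n = \frac{7^{3n}}{n!}$, the ratio test gives $$ \frac{a_{n+1}}{a_n} = \frac{7^{3(n+1)}}{(n+1)!}\cdot\frac{n!}{7^{3n}} = \frac{343}{n+1} \to 0 < 1, $$ so the first series converges. So yes, the ratio test is exactly the right track there.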
\begin{document} \begin{center} {\LARGE An algebraic construction of quantum\\[0.25ex] flows with unbounded generators} \vspace*{1ex} \begin{multicols}{2} {\large Alexander C.~R.~Belton}\\[0.5ex] {\small Department of Mathematics and Statistics\\ Lancaster University, United Kingdom\\[0.5ex] \textsf{a.belton@lancaster.ac.uk}} \columnbreak {\large Stephen J.~Wills}\\[0.5ex] {\small School of Mathematical Sciences\\ University College Cork, Ireland\\[0.5ex] \textsf{s.wills@ucc.ie}} \end{multicols} {\small \today} \end{center} \begin{abstract} {\small \noindent It is shown how to construct $*$-homomorphic quantum stochastic Feller cocycles for certain unbounded generators, and so obtain dilations of strongly continuous quantum dynamical semigroups on $C^*$~algebras; this generalises the construction of a classical Feller process and semigroup from a given generator. The construction is possible provided the generator satisfies an invariance property for some dense subalgebra $\alg_0$ of the $C^*$~algebra $\alg$ and obeys the necessary structure relations; the iterates of the generator, when applied to a generating set for $\alg_0$, must satisfy a growth condition. Furthermore, it is assumed that either the subalgebra~$\alg_0$ is generated by isometries and $\alg$ is universal, or $\alg_0$ contains its square roots. 
These conditions are verified in four cases: classical random walks on discrete groups, Rebolledo's symmetric quantum exclusion processes and flows on the non-commutative torus and the universal rotation algebra.} \end{abstract} {\small\textit{Key words:} quantum dynamical semigroup; quantum Markov semigroup; CPC semigroup; strongly continuous semigroup; semigroup dilation; Feller cocycle; higher-order It\^{o} product formula; random walks on discrete groups; quantum exclusion process; non-commutative torus} {\small\textit{MSC 2000:} 81S25 (primary); 46L53, 46N50, 47D06, 60J27 (secondary).} \section{Introduction} The connexion between time-homogeneous Markov processes and one-parameter contraction semigroups is an excellent example of the interplay between probability theory and functional analysis. Given a measurable space $( E, \mathcal{E} )$, a \emph{Markov semigroup} $T$ with state space $E$ is a family $( T_t )_{t \ge 0}$ of positive contraction operators on $L^\infty( E )$ such that \[ T_{s + t} = T_s \comp T_t \quad \text{for all } s, t \ge 0 \qquad \text{and} \qquad T_0 f = f \quad \text{for all } f \in L^\infty( E ); \] the semigroup is \emph{conservative} if $T_t 1 = 1$ for all $t \ge 0$. Typically, such a semigroup is defined by setting \[ ( T_t f )( x ) = \int_E f( y ) p_t( x, \rd y ) \] for a family of transition kernels $p_t : E \times \mathcal{E} \to [ 0, 1 ]$. Given a time-homogeneous Markov process $( X_t )_{t \ge 0}$ with values in $E$, the associated Markov semigroup is obtained from the prescription \begin{equation}\label{eqn:markov} ( T_t f )( x ) = \expn[ f( X_t ) | X_0 = x ], \end{equation} so that $p_t( x, A ) = \mathbb{P}( X_t \in A | X_0 = x )$ is the probability of moving from $x$ into $A$ in time~$t$. 
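In terms of the transition kernels, the semigroup property is equivalent to the standard Chapman--Kolmogorov equation \[ p_{s + t}( x, A ) = \int_E p_t( y, A ) \, p_s( x, \rd y ) \qquad \text{for all } s, t \ge 0, \ x \in E \text{ and } A \in \mathcal{E}. \]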
When the state space~$E$ is a locally compact Hausdorff space we may specialise further: a \emph{Feller semigroup} is a Markov semigroup $T$ such that \[ T_t\bigl( C_0 ( E ) \bigr) \subseteq C_0 ( E ) \quad \text{for all } t \ge 0 \quad \text{ and } \quad \| T_t f - f \|_\infty \to 0 \text{ as } t \to 0 \quad \text{ for all } f \in C_0 ( E ). \] Any sufficiently nice Markov process, such as a L\'{e}vy process, gives rise to a Feller semigroup; conversely, if $E$ is separable then any Feller semigroup gives rise to a Markov process with \textit{c\`{a}dl\`{a}g} paths. A celebrated theorem of Gelfand and Naimark states that every commutative $C^*$~algebra is of the form $C_0( E )$ for some locally compact Hausdorff space $E$. Thus the first step in generalising Feller semigroups, and so Markov processes, to a non-commutative setting is to replace $C_0( E )$ with a general $C^*$~algebra~$\alg$. Moreover, a strengthening of positivity, called \emph{complete positivity}, is required for a satisfactory theory: a map $\phi : \alg \to \blg$ between $C^*$~algebras is completely positive if the ampliation \[ \phi^{(n)} : M_n( \alg ) \to M_n( \blg ); \ ( x_{i j} ) \mapsto \bigl( \phi( x_{i j} ) \bigr) \] is positive for all $n \ge 1$. This property is justified on physical grounds and is equivalent to the usual form of positivity when either algebra $\alg$ or $\blg$ is commutative. The resulting object, a semigroup of completely positive contractions on a $C^*$~algebra $\alg$, is known as a \emph{quantum dynamical semigroup} or, when conservative, a \emph{quantum Markov semigroup}; such semigroups are used to describe the evolution of quantum-mechanical systems which interact irreversibly with their environment. 
Any strongly continuous quantum dynamical semigroup $T$ is characterised by its infinitesimal generator $\tau$, the closed linear operator such that \[ \dom \tau = \Bigl\{ f \in \alg : \lim_{t \to 0} \frac{T_t f - f}{t} \text{ exists} \Bigr\} \quad \text{and} \quad \tau f = \lim_{t \to 0} \frac{T_t f - f}{t}. \] For a Feller semigroup, the form of the generator $\tau$ may reveal properties of the corresponding process; for instance, a classical L\'{e}vy process may be specified, \textit{via} the L\'{e}vy--Khintchine formula, by the characteristics of its generator, \textit{viz.}\ a drift vector, a diffusion matrix describing the Brownian-motion component and a L\'{e}vy measure characterising its jumps. If we start with a putative generator $\tau$ then operator-theoretic methods may be used to construct the semigroup, although there are often considerable analytical challenges to be met. Verifying that $\tau$ satisfies the hypotheses of the Hille--Yosida theorem, the key analytical tool for this construction, is often difficult. In this paper we provide, for a suitable class of generators, another method of constructing quantum dynamical semigroups and the corresponding non-commutative Markov processes. To understand how the relationship between semigroups and Markov processes generalises to the non-commutative framework, recall first that any locally compact Hausdorff space $E$ may be made compact by adjoining a point at infinity, which corresponds to adding an identity to the algebra $C_0( E )$ or adding a coffin state for an $E$-valued Markov process; it is sufficient, therefore, to restrict our attention to compact Hausdorff spaces or, equivalently, unital $C^*$~algebras. The correct analogue of an $E$-valued random variable $X$ is then a unital $*$-homomorphism $j$ from $\alg$ to some unital $C^*$~algebra $\blg$; classically, $j$ is the map $f \mapsto f \comp X$, where $f \in \alg = C_0( E )$ and~$\blg$ is~$L^\infty( \Pr )$ for some probability measure $\Pr$. 
A family of unital $*$-homomorphisms $( j_t : \alg \to \blg )_{t \ge 0}$, \ie a non-commutative stochastic process, is said to \emph{dilate} the quantum dynamical semigroup~$T$ if $\alg$ is a subalgebra of $\blg$ and $\expn \comp j_t = T_t$ for all $t \ge 0$, where~$\expn$ is a conditional expectation from $\blg$ to $\alg$; the relationship to \eqref{eqn:markov} is clear. Thus finding a dilation for a given semigroup is analogous to constructing a Markov process from a family of transition kernels. The tool used here for constructing semigroups and their dilations is a stochastic calculus: the quantum stochastic calculus introduced by Hudson and Parthasarathy in their 1984 paper \cite{HuP84}. In its simplest form, this is a non-commutative theory of stochastic integration with respect to three operator martingales which correspond to the creation, annihilation and gauge processes of quantum field theory. It generalises simultaneously the It\^{o}--Doob $L^2$ integral with respect to either Brownian motion or the compensated Poisson process; as emphasised by Meyer \cite{Mey95} and Attal \cite{Att98}, the $L^2$~theory of any normal martingale having the chaotic-representation property, such as Brownian motion, the compensated Poisson process or Az\'{e}ma's martingale, gives a classical probabilistic interpretation of Boson Fock space, the ambient space of quantum stochastic calculus. We develop below new techniques for obtaining $*$-homomorphic solutions to the Evans--Hudson quantum stochastic differential equation (QSDE) \begin{equation}\label{eqn:EHqsde} \rd j_t = ( j_t \otimes \iota_{\bop{\mmul}{}} ) \comp \phi \std \Lambda_t, \end{equation} where the solution $j_t$ acts on a unital $C^*$~algebra $\alg$. In this way, we obtain the process $j$ and the quantum dynamical semigroup $T$ simultaneously. 
The components of the \emph{flow generator}~$\phi$ include $\tau$, the restriction of a semigroup generator, and $\delta$, a bimodule derivation, which are related to one another through the Bakry--\'{E}mery \textit{carr\'{e} du champ} operator: see Remark~\ref{rem:BEcdc}. If~$\alg$ is commutative then, by Theorem~\ref{thm:abelian}, the process $j$ is classical, in the sense that the algebra generated by $\{ j_t( a ): t \ge 0, \ a \in \alg \}$ is also commutative. The recent expository paper \cite{Bia10} on quantum stochastic methods, written for an audience of probabilists, includes Parthasarathy and Sinha's method \cite{PaS90} for constructing continuous-time Markov chains with finite state spaces by solving quantum stochastic differential equations. To quote Biane, \begin{quote} ``It may seem strange to the classical probabilist to use noncommutative objects in order to describe a perfectly commutative situation, however, this seems to be necessary if one wants to deal with processes with jumps \ldots\ The right mathematical notion \ldots, which generalizes to the noncommutative situation, is that of a derivation into a bimodule \ldots\ Using this formalism, we can use the Fock space as a uniform source of noise, and construct general Markov processes (both continuous and discontinuous) using stochastic differential equations.'' \end{quote} The results herein give a further illustration of this philosophy. The use of quantum stochastic calculus to produce dilations has now been studied for nearly thirty years. Most results, by Hudson and Parthasarathy, Fagnola, Mohari, Sinha \etc, are obtained in the case that $\alg = \bop{\ini}{}$ by first solving an operator-valued QSDE, the Hudson--Parthasarathy equation, to obtain a unitary process~$U$, and defining~$j$ through conjugation by $U$; see~\cite{Fag99} and references therein.
The corresponding theory for the Heisenberg rather than the Schr\"{o}dinger viewpoint, solving the Evans--Hudson equation~\eqref{eqn:EHqsde}, has mainly been developed under the standing assumption that the generator $\phi$ is completely bounded, which is necessary if the corresponding semigroup~$T$ is norm continuous \cite{LiW00a}. When one deviates from this assumption, which is analytically convenient but very restrictive, there are few results. The earliest general method is due to Fagnola and Sinha \cite{FaS93}, with later results by Goswami, Sahu and Sinha for a particular model \cite{GSS05} and a more general method developed by Goswami and Sinha in \cite{SiG07}. Another approach based on semigroup methods has yet to yield existence results for the Evans--Hudson equation: see \cite{AcK01} and \cite{LiW12}. Our method here has an attractive simplicity, imposing minimal conditions on the generator $\phi$. It must be a $*$-linear map \[ \phi : \alg_0 \to \alg_0 \otimes \bop{\C \oplus \mul}{}, \] where $\alg_0$ is a dense $*$-subalgebra of the unital $C^*$~algebra $\alg \subseteq \bop{\ini}{}$ which contains $\id = \id_\ini$ and $\mul$ is a Hilbert space, called the \emph{multiplicity space}, the dimension of which measures the amount of noise available in the system. This incorporates an assumption that, if $\phi$ is viewed as a matrix of maps, its components leave~$\alg_0$ invariant, a hypothesis also used in~\cite{FaS93}. Furthermore, $\phi$ must be such that $\phi( \id ) = 0$ and the \emph{first-order It\^o product formula} holds: \begin{equation}\label{eqn:firstIto} \phi( x y ) = \phi( x ) ( y \otimes \id_\mmul ) + ( x \otimes \id_\mmul ) \phi( y ) + \phi( x ) \Delta \phi( y ) \qquad \text{for all } x, y \in \alg_0, \end{equation} where $\mmul := \C \oplus \mul$ and $\Delta \in \alg_0 \otimes \bop{\mmul}{}$ is the orthogonal projection onto $\ini \ctimes \mul$. 
Both these conditions are known to be necessary if $\phi$ is to generate a family of unital $*$-homomorphisms. Finally, a growth bound must be established for the iterates of $\phi$ applied to elements taken from a suitable subset of $\alg_0$. Our approach is an elementary one for those adept in quantum stochastic calculus, relying on familiar techniques such as representing the solution to the Evans--Hudson QSDE as a sum of quantum Wiener integrals. An essential tool is the higher-order It\^o product formula, presented in Section~\ref{sec:hopf}. This formula was first stated, for finite-dimensional noise, in \cite{CEH95}, was proved for that case in \cite{HPu98} and reached its definitive form in~\cite{LiW03}. In that last paper it was shown that~\eqref{eqn:firstIto} is but the first of a sequence of identities that must be satisfied in order to show that the solution $j$ of the QSDE is weakly multiplicative. However, there are many situations in which the validity of~\eqref{eqn:firstIto} implies that the other identities hold \cite[Corollary~4.2]{LiW03}, and this is the case for $\phi$ as above. Moreover, one of our main observations, Corollary~\ref{cor:bound}, is that, by exploiting the algebraic structure imposed by this sequence of identities, it is sufficient to establish pointwise growth bounds on a $*$-generating set of~$\alg_0$; this is a major simplification when compared with~\cite{FaS93}. Also, by using the coordinate-free approach to quantum stochastic analysis given in~\cite{Lin05}, we can take $\mul$ to be any Hilbert space, removing the restriction in~\cite{FaS93} that $\mul$ be finite dimensional. The growth bounds obtained in Section~\ref{sec:hopf} are employed in Section~\ref{sec:qf} to produce a family of weakly multiplicative $*$-linear maps from the algebra $\alg_0$ into the space of linear operators in~$\ini \ctimes \fock$, where $\fock$ is the Boson Fock space over $\elltwo$.
It is shown that these maps extend to unital $*$-homomorphisms in two distinct situations. Theorem~\ref{thm:extension1}, which includes the case of AF~algebras, exploits a square-root trick that is well known in the literature; Theorem~\ref{thm:extension2}, which applies to universal $C^*$~algebras such as the non-commutative torus or the Cuntz algebras, is believed to be novel. Uniqueness of the solution is proved, and it is also shown that $j$ is a cocycle, \ie it satisfies the evolution equation \begin{equation}\label{eqn:cocycle} j_{s + t} = ( j_s \ctimes \iota_{\bop{\fock_{[ s ,\infty )}}{}} ) \comp \sigma_s \comp j_t \qquad \text{for all } s, t \ge 0, \end{equation} where $( \sigma_t )_{t \ge 0}$ is the shift semigroup on the algebra of all bounded operators on~$\fock$. At this point we see another novel feature of our work in contrast to previous results, all of which start with a particular quantum dynamical semigroup $T$. In these other papers the generator~$\tau$ of~$T$ is then augmented to produce $\phi$, and the QSDE solved to give a dilation of $T$. For example, in~\cite{FaS93} it is assumed that $T$ is an analytic semigroup and that the composition of $\tau$ with the other components of $\phi$ is well behaved in a certain sense; in~\cite{SiG07} it is assumed that $T$ is covariant with respect to some group action on $\alg$. For us, the starting point is the map $\phi$, which yields the cocycle $j$, and hence, by compression, a quantum dynamical semigroup $T$ generated by the closure of $\tau$, which has core $\alg_0$; this semigroup, \textit{a fortiori}, is dilated by $j$. Thus we do not have to check that $\tau$ is a semigroup generator with good properties at the outset, thereby rendering our method easier to apply. Our first application of Theorem~\ref{thm:extension1}, in Section~\ref{sec:walks}, is to construct the Markov semigroups which correspond to certain random walks on discrete groups. 
Theorem~\ref{thm:extension1} is also employed in Section~\ref{sec:sqep} to produce a dilation of the symmetric quantum exclusion semigroup. This object, a model for systems of interacting quantum particles, was introduced by Rebolledo \cite{Reb05} as a non-commutative generalisation of the classical exclusion process \cite{Lig99} and has generated much interest: see~\cite{PMQ09} and~\cite{GQPM11}. The multiplicity space $\mul$ is required to be infinite dimensional for this process, as in previous work on processes arising from quantum interacting particle systems, \eg \cite{GSS05}. In Section~\ref{sec:uni} we use Theorem~\ref{thm:extension2} to obtain flows on some universal $C^*$~algebras, namely the non-commutative torus and the universal rotation algebra \cite{AnP89}; the former is a particularly important example in non-commutative geometry. Quantum flows on these algebras have previously been considered by Goswami, Sahu and Sinha \cite{GSS05} and by Hudson and Robinson \cite{HuR88}, respectively. \subsection{Conventions and notation} The symbol $:=$ is to be read as `is defined to be' or similarly. The quantity $\tfn{P}$ equals $1$ if the proposition $P$ is true and $0$ if $P$ is false, where $1$ and $0$ are the appropriate multiplicative and additive identities. The set of natural numbers is denoted by $\N := \{ 1, 2, 3, \ldots \}$; the set of non-negative integers is denoted by $\Z_+ := \{ 0, 1, 2, \ldots \}$. The linear span of the set $S$ in the vector space $V$ is denoted by $\lin S$; all vector spaces have complex scalar field and inner products are linear on the right. The \emph{algebraic} tensor product is denoted by $\otimes$; the Hilbert-space tensor product is denoted by $\ctimes$, as is the ultraweak tensor product. The domain of the linear operator~$T$ is denoted by $\dom T$. The identity transformation on the vector space~$V$ is denoted by~$\id_V$.
If $P$ is an orthogonal projection on the inner-product space $V$ then the complement $P^\perp := \id_V - P$ is the projection onto the orthogonal complement of the range of $P$. The Banach space of bounded operators from the Banach space $X_1$ to the Banach space $X_2$ is denoted by $\bop{X_1}{X_2}$, or by~$\bop{X_1}{}$ if $X_1$ and $X_2$ are equal. The identity automorphism on the algebra $\alg$ is denoted by~$\iota_\alg$. If $a$ and $b$ are elements in an algebra $\alg$ then $[ a, b ] := a b - b a$ and $\{ a, b \} := a b + b a$ denote their commutator and anti-commutator, respectively. If $\alg_0$ is a $*$-algebra, $\hilb_1$ and $\hilb_2$ are Hilbert spaces and $\alpha: \alg_0 \to \bop{\hilb_1}{\hilb_2}$ is a linear map then the \emph{adjoint} map $\alpha^\dag: \alg_0 \to \bop{\hilb_2}{\hilb_1}$ is such that $\alpha^\dag( a ) := \alpha( a^* )^*$ for all $a \in \alg_0$. \section{A higher-order product formula}\label{sec:hopf} \begin{notation} The Dirac bra-ket notation will be useful: for any Hilbert space $\hilb$ and vectors $\xi$, $\chi \in \hilb$, let \begin{alignat*}{3} \ket{\hilb} & := \bop{\C}{\hilb}, & \qquad & \ket{\xi} : \C \to \hilb; \ \lambda \mapsto \lambda \xi & \qquad \text{(\emph{ket})\hphantom{.}} \\[2ex] \text{and} \qquad \bra{\hilb} & := \bop{\hilb}{\C}, & & \bra{\chi} : \hilb \to \C; \ \eta \mapsto \langle \chi, \eta \rangle & \qquad \text{(\emph{bra})}. \end{alignat*} \end{notation} In particular, we have the linear map $\dyad{\xi}{\chi} \in \bop{\hilb}{}$ such that $\dyad{\xi}{\chi} \eta = \langle \chi, \eta \rangle \xi$ for all $\eta \in \hilb$. Let $\alg \subseteq \bop{\ini}{}$ be a unital $C^*$~algebra with identity $\id = \id_\ini$, whose elements act as bounded operators on the \emph{initial space} $\ini$, a Hilbert space. Let $\alg_0 \subseteq \alg$ be a norm-dense $*$-subalgebra of~$\alg$ which contains~$\id$.
Let the \emph{extended multiplicity space} $\mmul := \C \oplus \mul$, where the \emph{multiplicity space} $\mul$ is a Hilbert space, and distinguish the unit vector $\vac := ( 1, 0 )$. For brevity, let $\bopp := \bop{\mmul}{}$. Let $\Delta := \id \otimes P_\mul \in \alg_0 \otimes \bopp$, where $P_\mul := \dyad{\vac}{\vac}^\perp \in \bopp$ is the orthogonal projection onto $\mul \subset \mmul$. \begin{lemma}\label{lem:gen} The map $\phi : \alg_0 \to \alg_0 \otimes \bopp$ is $*$-linear, such that $\phi( \id ) = 0$ and such that \begin{equation}\label{eqn:gen} \phi( x y ) = \phi( x ) ( y \otimes \id_\mmul ) + ( x \otimes \id_\mmul ) \phi( y ) + \phi( x ) \Delta \phi( y ) \qquad \text{for all } x, y \in \alg_0 \end{equation} if and only if \begin{equation}\label{eqn:gencpts} \phi( x ) = \begin{bmatrix} \tau( x ) & \delta^\dagger( x ) \\[1ex] \delta( x ) & \pi( x ) - x \otimes \id_\mul \end{bmatrix} \qquad \text{for all } x \in \alg_0, \end{equation} where $\pi : \alg_0 \to \alg_0 \otimes \bop{\mul}{}$ is a unital $*$-homomorphism, $\delta : \alg_0 \to \alg_0 \otimes \ket{\mul}$ is a $\pi$-derivation, \ie a linear map such that \[ \delta( x y ) = \delta( x ) y + \pi( x ) \delta( y ) \qquad \text{for all } x, y \in \alg_0, \] and $\tau : \alg_0 \to \alg_0$ is a $*$-linear map such that \begin{equation}\label{eqn:tau} \tau( x y ) - \tau( x ) y - x \tau( y ) = \delta^\dagger( x ) \delta( y ) \qquad \text{for all } x, y \in \alg_0. \end{equation} \end{lemma} \begin{proof} This is a straightforward exercise in elementary algebra. \end{proof} \begin{definition} A $*$-linear map $\phi : \alg_0 \to \alg_0 \otimes \bopp$ such that $\phi( \id ) = 0$ and such that~\eqref{eqn:gen} holds is a \emph{flow generator}.
\end{definition} \begin{remark}\label{rem:BEcdc} Condition~\eqref{eqn:tau} may be expressed in terms of the Bakry--\'{E}mery \textit{carr\'{e} du champ} operator \[ \Gamma : \alg_0 \times \alg_0 \to \alg_0; \ ( x, y ) \mapsto \hlf \bigl( \tau( x y ) - \tau( x ) y - x \tau ( y ) \bigr); \] for \eqref{eqn:tau} to be satisfied, it is necessary and sufficient that $2 \Gamma( x, y ) = \delta^\dagger( x ) \delta( y )$ for all $x$, $y \in \alg_0$. The $\pi$-derivation $\delta$ becomes a bimodule derivation if $\alg_0 \otimes \ket{\mul}$ is made into an $\alg_0$-$\alg_0$ bimodule by setting $x \cdot z \cdot y := \pi( x ) z y$ for all $x$, $y \in \alg_0$ and $z \in \alg_0 \otimes \ket{\mul}$. \end{remark} \begin{lemma}\label{lem:bddphi} Let $\alg_0 = \alg$, let $\pi : \alg \to \alg \otimes \bop{\mul}{}$ be a unital $*$-homomorphism, let $z \in \alg \otimes \ket{\mul}$ and let $h \in \alg$ be self adjoint. Define \[ \delta : \alg \to \alg \otimes \ket{\mul}; \ x \mapsto z x - \pi( x ) z \] and \[ \tau : \alg \to \alg; \ x \mapsto \I [ h, x ] - \hlf \{ z ^* z, x \} + z^* \pi( x ) z. \] Then the map $\phi: \alg \to \alg \otimes \bopp$ defined in terms of $\pi$, $\delta$ and $\tau$ through~\eqref{eqn:gencpts} is a flow generator. \end{lemma} \begin{proof} This is another straightforward exercise. \end{proof} \begin{remark} Modulo important considerations regarding tensor products and the ranges of $\delta$ and $\tau$, the above form for $\phi$ is, essentially, the only one possible \cite[Lemma~6.4]{LiW00b}. The quantum exclusion process in Section~\ref{sec:sqep} has a generator of the same form but with unbounded $z$ and $h$. 
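For instance, in the case of one-dimensional noise, so that $\mul = \C$ and $\alg \otimes \bop{\mul}{}$ may be identified with $\alg$, taking $\pi = \iota_\alg$ and identifying $z$ with an element $\ell \in \alg$ gives \[ \delta( x ) = [ \ell, x ] \qquad \text{and} \qquad \tau( x ) = \I [ h, x ] - \hlf \{ \ell^* \ell, x \} + \ell^* x \ell \qquad \text{for all } x \in \alg, \] which is the familiar Lindblad form of generator with a single noise channel.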
\end{remark} \begin{definition}\label{dfn:qrw} Given a flow generator $\phi : \alg_0 \to \alg_0 \otimes \bopp$, the \emph{quantum random walk} $\bigl( \phi_n \bigr)_{n \in \Z_+}$ is a family of $*$-linear maps \[ \phi_n : \alg_0 \to \alg_0 \otimes \bopp^{\otimes n} \] defined by setting \[ \phi_0 := \iota_{\alg_0} \qquad \text{and} \qquad \phi_{n + 1} := \bigl( \phi_n \otimes \iota_{\bopp} \bigr) \comp \phi \qquad \text{for all } n \in \Z_+. \] The following identity is useful: if $\xi_1$, $\chi_1$, \ldots, $\xi_n$, $\chi_n \in \mmul$ and $x \in \alg_0$ then \begin{equation}\label{eqn:ncpts} \bigl( \id_\ini \otimes \bra{\xi_1} \otimes \cdots \otimes \bra{\xi_n} \bigr) \phi_n( x ) \bigl( \id_\ini \otimes \ket{\chi_1} \otimes \cdots \otimes \ket{\chi_n} \bigr) = \phi^{\xi_1}_{\chi_1} \comp \cdots \comp \phi^{\xi_n}_{\chi_n}( x ), \end{equation} where \[ \phi^\xi_\chi : \alg_0 \to \alg_0; \ x \mapsto ( \id_\ini \otimes \bra{\xi} ) \phi( x ) ( \id_\ini \otimes \ket{\chi} ) \] is a linear map for each choice of $\xi$, $\chi \in \mmul$. \end{definition} \begin{remark}\label{rem:ordering} The paper \cite{LiW03}, results from which will be employed below, uses a different convention to that adopted in Definition~\ref{dfn:qrw}: the components of the product $\bopp^{\otimes n}$ appear in the reverse order to how they do above. \end{remark} \begin{notation} Let $\alpha \subseteq \{ 1, \ldots, n \}$, with elements arranged in increasing order, and denote its cardinality by~$| \alpha |$. The unital $*$-homomorphism \[ \alg_0 \otimes \bopp^{\otimes | \alpha |} \to \alg_0 \otimes \bopp^{\otimes n}; \ T \mapsto T( n, \alpha ) \] is defined by linear extension of the map \[ A \otimes B_1 \otimes \cdots \otimes B_{| \alpha |} \mapsto A \otimes C_1 \otimes \cdots \otimes C_n, \] where \[ C_i := \left\{ \begin{array}{ll} B_j & \text{if $i$ is the $j$th element of $\alpha$}, \\[1ex] \id_\mmul & \text{if $i$ is not an element of $\alpha$}. \end{array}\right.
\] For example, if $\alpha = \{ 1, 3, 4 \}$ and $n = 5$ then \[ ( A \otimes B_1 \otimes B_2 \otimes B_3 )( 5, \alpha ) = A \otimes B_1 \otimes \id_\mmul \otimes B_2 \otimes B_3 \otimes \id_\mmul. \] Given a flow generator $\phi : \alg_0 \to \alg_0 \otimes \bopp$, for all $n \in \Z_+$ and $\alpha \subseteq \{ 1, \ldots, n \}$, let \[ \phi_{| \alpha |}( x; n, \alpha ) := \bigl( \phi_{| \alpha |}( x ) \bigr)( n, \alpha ) \qquad \text{for all } x \in \alg_0 \] and let \[ \Delta( n, \alpha ) := ( \id_\ini \otimes P_\mul^{\otimes | \alpha|} )( n, \alpha ), \] so that, in the latter, $P_\mul$ acts on the components of $\mmul^{\otimes n}$ which have indices in $\alpha$ and $\id_\mmul$ acts on the others. \end{notation} \begin{theorem} Let $\bigl( \phi_n \bigr)_{n \in \Z_+}$ be the quantum random walk given by the flow generator $\phi$. For all $n \in \Z_+$ and $x$,~$y \in \alg_0$, \begin{equation}\label{eqn:hoprod} \phi_n( x y ) = \sum_{\alpha \cup \beta = \{ 1, \ldots, n \} } \phi_{| \alpha |}( x ; n, \alpha ) \Delta( n, \alpha \cap \beta ) \phi_{| \beta |}( y ; n, \beta), \end{equation} where the summation is taken over all sets $\alpha$ and $\beta$ whose union is $\{ 1, \ldots, n \}$. \end{theorem} \begin{proof} This may be established inductively: see \cite[Proof of Theorem~4.1]{LiW03}. \end{proof} When $n = 1$, the admissible pairs $( \alpha, \beta )$ are $( \{ 1 \}, \emptyset )$, $( \emptyset, \{ 1 \} )$ and $( \{ 1 \}, \{ 1 \} )$, and \eqref{eqn:hoprod} reduces to \eqref{eqn:gen}. \begin{definition} The set $S \subseteq \alg_0$ is \emph{$*$-generating for $\alg_0$} if $\alg_0$ is the smallest unital $*$-algebra which contains~$S$. \end{definition} \begin{corollary}\label{cor:bound} For a flow generator $\phi : \alg_0 \to \alg_0 \otimes \bopp$, let \begin{equation}\label{eqn:growth} \alg_\phi := \{ x \in \alg_0 : \text{there exist $C_x$,~$M_x > 0$ such that $\| \phi_n( x ) \| \le C_x M_x^n$ for all } n \in \Z_+ \}. \end{equation} Then $\alg_\phi$ is a unital $*$-subalgebra of $\alg_0$, which is equal to $\alg_0$ if $\alg_\phi$ contains a $*$-generating set for~$\alg_0$.
\end{corollary} \begin{proof} It suffices to demonstrate that $\alg_\phi$ is closed under products. To see this, let $x$,~$y \in \alg_\phi$ and suppose $C_x$, $M_x$ and $C_y$, $M_y$ are as in~\eqref{eqn:growth}. Then~\eqref{eqn:hoprod} implies that \begin{align*} \| \phi_n( x y ) \| & \le \sum_{\alpha \cup \beta = \{ 1, \ldots, n \} } \| \phi_{| \alpha |}( x ) \| \, \| \phi_{| \beta |}( y ) \| \\[1ex] & \le C_x C_y \sum_{k = 0}^n \binom{n}{k} M_x^k \sum_{l = 0}^k \binom{k}{l} M_y^{n - k + l} \qquad (k = | \alpha |,\ l = | \alpha \cap \beta |) \\[1ex] & = C_x C_y \sum_{k = 0}^n \binom{n}{k} M_x^k M_y^{n - k} ( 1 + M_y )^k \\[1ex] & = C_x C_y ( M_x + M_x M_y + M_y )^n \end{align*} for all $n \in \Z_+$, as required; for the second inequality, note that each pair $( \alpha, \beta )$ is determined by choosing first $\alpha$, with $| \alpha | = k$, and then the subset $\alpha \cap \beta \subseteq \alpha$, with $| \alpha \cap \beta | = l$, so that $| \beta | = n - k + l$. \end{proof} \begin{lemma} If the flow generator $\phi$ is as defined in Lemma~\ref{lem:bddphi} then $\alg_\phi = \alg_0$. \end{lemma} \begin{proof} This follows immediately, since $\phi$ is completely bounded and $\| \phi_n \| \le \| \phi_n \|_\cb \le \| \phi \|_\cb^n$ for all~$n \in \Z_+$. \end{proof} The following result shows that, given a flow generator $\phi$ and vectors $\chi$, $\xi \in \mmul$, the elements of~$\alg_\phi$ are entire vectors for $\phi^\xi_\chi$. \begin{lemma}\label{lem:abe} Let $\phi : \alg_0 \to \alg_0 \otimes \bopp$ be a flow generator. For all $\xi$,~$\chi \in \mmul$ we have $\phi_\chi^\xi( \alg_\phi ) \subseteq \alg_\phi$, and the series \begin{equation}\label{eqn:series} \exp( z \phi_\chi^\xi ) := \sum_{n = 0}^\infty \frac{z^n ( \phi_\chi^\xi )^n}{n!} \end{equation} is strongly absolutely convergent on $\alg_\phi$ for all $z \in \C$. \end{lemma} \begin{proof} Suppose $\| \phi_n( x ) \| \le C_x M_x^n$ for all $n \in \Z_+$.
It follows from~\eqref{eqn:ncpts} that \begin{equation}\label{eqn:qrwbk} \bigl( \id_{\ini \ctimes \mmul^{\ctimes n}} \otimes \bra{\xi} \bigr) \phi_{n + 1}( x ) \bigl( \id_{\ini \ctimes \mmul^{\ctimes n}} \otimes \ket{\chi} \bigr) = \phi_n\bigl( \phi_\chi^\xi( x ) \bigr), \end{equation} so \[ \bigl\| \phi_n \bigl( \phi_\chi^\xi( x ) \bigr) \bigr\| \le \| \xi \| C_x M_x^{n + 1} \| \chi \| = ( \| \xi \| \, \| \chi \| C_x M_x ) M_x^n \] and $\phi_\chi^\xi( x ) \in \alg_\phi$. Moreover~\eqref{eqn:ncpts} also gives that \begin{equation}\label{eqn:qrwslicebound} \bigl\| \bigl( \phi_{\chi_1}^{\xi_1} \comp \cdots \comp \phi_{\chi_n}^{\xi_n} \bigr)( x ) \bigr\| \le \| \xi_1 \| \cdots \| \xi_n \| \| \chi_1 \| \cdots \| \chi_n \| C_x M_x^n, \end{equation} hence the series~\eqref{eqn:series} converges as claimed. \end{proof} \section{Quantum flows}\label{sec:qf} \begin{notation} Let $\fock$ denote Boson Fock space over $\elltwo$, the Hilbert space of $\mul$-valued, square-integrable functions on the half line, and let \[ \evecs := \lin\{ \evec{f} : f \in \elltwo \} \] denote the linear span of the total set of exponential vectors in $\fock$. As is customary, elementary tensors in $\ini \otimes \fock$ are written without a tensor-product sign: in other words, $u \evec{f} := u \otimes \evec{f}$ for all $u \in \ini$ and $f \in \elltwo$, \etc. If $f \in \elltwo$ and $t \ge 0$ then $\wh{f}( t ) := \wh{f( t )}$, where $\wh{\xi} := \vac + \xi \in \mmul$ for all $\xi \in \mul$. Given $f \in \elltwo$ and an interval $I \subseteq \R_+$, let $f_I \in \elltwo$ be defined to equal $f$ on~$I$ and~$0$ elsewhere, with $f_{t)} := f_{[ 0, t )}$ and $f_{[t} := f_{[ t, \infty )}$ for all $t \ge 0$. 
\end{notation} \begin{definition} A family of linear operators $( T_t )_{t \ge 0}$ in $\ini \ctimes \fock$ with domains including $\ini \otimes \evecs$ is \emph{adapted} if \[ \langle u \evec{f}, T_t v \evec{g} \rangle = \langle u \evec{f_{t)}}, T_t v \evec{g_{t)}} \rangle \langle \evec{f_{[t}}, \evec{g_{[t}} \rangle \] for all $u$,~$v \in \ini$, $f$,~$g \in \elltwo$ and~$t \ge 0$. \end{definition} \begin{theorem}\label{thm:qwi} For all $n \in \N$ and $T \in \bop{\ini \ctimes \mmul^{\ctimes n}}{}$ there exists a family $\Lambda^n( T ) = \bigl( \Lambda^n_t( T ) \bigr)_{t \ge 0}$ of linear operators in $\ini \ctimes \fock$, with domains including $\ini \otimes \evecs$, that is adapted and such that \begin{equation}\label{eqn:qwiip} \langle u \evec{f}, \Lambda^n_t( T ) v \evec{g} \rangle = \int_{\simp_n( t )} \langle u \otimes \wh{f}^{\otimes n}( \bt ), T v\otimes \wh{g}^{\otimes n}( \bt ) \rangle \std \bt \, \langle \evec{f}, \evec{g} \rangle \end{equation} for all $u$,~$v \in \ini$, $f$,~$g \in \elltwo$ and $t \ge 0$. Here the simplex \[ \simp_n( t ) := \{ \bt := ( t_1, \ldots, t_n ) \in [ 0 , t ]^n : t_1 < \cdots < t_n \} \] and \[ \wh{f}^{\otimes n}( \bt ) := \wh{f}( t_1 ) \otimes \cdots \otimes \wh{f}( t_n ), \qquad \text{\etc}. \] We extend this definition to include $n = 0$ by setting $\Lambda^0_t( T ) := T \otimes \id_\fock$ for all $t \ge 0$. If $f \in \elltwo$ then \begin{equation}\label{eqn:qwi} \| \Lambda^n_t( T ) u \evec{f} \| \le \frac{K_{f, t}^n}{\sqrt{n!}} \, \| T \| \, \| u \evec{f} \| \qquad \text{for all }t \ge 0 \text{ and } u \in \ini, \end{equation} where $K_{f, t} := \sqrt{(2 + 4 \| f \|^2 ) ( t + \| f \|^2 )}$, and the map \[ \R_+ \to \bop{\ini}{\ini \ctimes \fock}; \ t \mapsto \Lambda^n_t( T ) \bigl( \id_\ini \otimes \ket{\evec{f}} \bigr) \] is norm continuous. 
\end{theorem} \begin{proof} This is an extension of Proposition~3.18 of~\cite{Lin05}, from which we borrow the notation; as for Remark~\ref{rem:ordering}, the ordering of the components in tensor products is different but this is no more than a convention. For each $f \in \elltwo$ define $C_f \ge 0$ so that \[ C_f^2 = \bigl( \| f \| + \sqrt{1 + \| f \|^2} \bigr)^2 \le 2 + 4 \| f \|^2, \] and note that, by inequality~(3.21) of \cite{Lin05}, \begin{align*} \| \Lambda^n_t ( T ) u \evec{f} \|^2 & \le \bigl( C_{f_{t)}} \bigr)^{2n} \int_{\simp_n(t)} \| T u \otimes \wh{f}^{\otimes n}( \bt ) \|^2 \std \bt \, \| \evec{f} \|^2 \\ & \le \frac{K_{f, t}^{2 n}}{n!} \, \| T \|^2 \| u \evec{f} \|^2. \end{align*} To show continuity, let $\wt{T}$ denote $T$ considered as an operator on $( \ini \ctimes \mmul ) \ctimes \mmul^{\ctimes ( n - 1 )}$, where the right-most copy of $\mmul$ in the $n$-fold tensor product has moved next to the initial space $\ini$. Then \[ \Lambda^n_t( T ) - \Lambda^n_s( T ) = \Lambda_t \bigl( 1_{( s, t ]}( \cdot ) \Lambda^{n - 1}_\cdot( \wt{T} ) \bigr), \] and so, using Theorem~3.13 of~\cite{Lin05}, \begin{align*} \| \bigl( \Lambda^n_t( T ) - \Lambda^n_s( T ) \bigr) u \evec{f} \|^2 & \le 2 ( t + C_f^2 ) \int^t_s \| \Lambda^{n - 1}_r( \wt{T} ) \bigl( u \otimes \wh{f}( r ) \bigr) \evec{f} \|^2 \std r \\ & \le 2 ( t + C_f^2 ) \Bigl( \int^t_s \| \wh{f} (r) \|^2 \std r \Bigr) \frac{K_{f, t}^{2 n - 2}}{( n - 1 )!} \, \| T \|^2 \| u \evec{f} \|^2. \qedhere \end{align*} \end{proof} The family $\Lambda^n( T )$ is the \emph{$n$-fold quantum Wiener integral} of~$T$. \begin{remark} It may be shown \cite[Proof of Theorem~2.2]{LiW03} that \[ \dom \Lambda^l_t( S )^* \supseteq \Lambda^m_t( T )( \ini \otimes \evecs ) \] for all $l$,~$m \in \Z_+$, $S \in \bop{\ini \ctimes \mmul^{\ctimes l}}{}$, $T \in \bop{\ini \ctimes \mmul^{\ctimes m}}{}$ and $t \ge 0$. \end{remark} \begin{theorem}\label{thm:intrep} Let $\phi : \alg_0 \to \alg_0 \otimes \bopp$ be a flow generator. 
If $x \in \alg_\phi$ then the series \begin{equation}\label{eqn:jdef} j_t( x ) := \sum_{n = 0}^\infty \Lambda^n_t\bigl( \phi_n( x ) \bigr) \end{equation} is strongly absolutely convergent on $\ini \otimes \evecs$ for all $t \ge 0$, uniformly so on compact subsets of $\R_+$. The map \[ \R_+ \to \bop{\ini}{\ini \ctimes \fock}; \ t \mapsto j_t( x ) \bigl( \id_\ini \otimes \ket{\evec{f}} \bigr) \] is norm continuous for all $f \in \elltwo$, the family $\bigl( j_t( x ) \bigr)_{t \ge 0}$ is adapted and \begin{equation}\label{eqn:wqsde} \langle u \evec{f}, j_t( x ) v\evec{g} \rangle = \langle u \evec{f}, ( x v ) \evec{g} \rangle + \int_0^t \bigl\langle u \evec{f}, j_s \bigl( \phi_{\wh{g}( s )}^{\wh{f}( s )}( x ) \bigr) v \evec{g} \bigr\rangle \std s \end{equation} for all $u$,~$v \in \ini$, $f$,~$g \in \elltwo$, $x \in \alg_\phi$ and $t \ge 0$. Furthermore, \begin{equation}\label{eqn:matsp} \bigl( \id_\ini \otimes \bra{\evec{f}} \bigr) j_t( x ) \bigl( \id_\ini \otimes \ket{\evec{g}} \bigr) \in \alg \end{equation} for all~$x \in \alg_\phi$, $f$,~$g \in \elltwo$ and $t \ge 0$. \end{theorem} \begin{proof} The first two claims are a consequence of the estimate~\eqref{eqn:qwi}, the definition of~$\alg_\phi$ and the continuity result from Theorem~\ref{thm:qwi}; adaptedness is inherited from the adaptedness of the quantum Wiener integrals. 
Lemma~\ref{lem:abe} implies that the integrand on the right-hand side of~\eqref{eqn:wqsde} is well defined and, by~\eqref{eqn:qrwbk}, \begin{multline*} \bigl\langle u\evec{f}, \Lambda^n_s\bigl( \phi_n \bigl( \phi_{\wh{g}( s )}^{\wh{f}( s )}( x ) \bigr) \bigr) v \evec{g} \bigr\rangle \\[1ex] \begin{aligned} & = \int_{\simp_n( s )} \langle u \otimes \wh{f}^{\otimes n}( \bt ), \phi_n\bigl( \phi_{\wh{g}( s )}^{\wh{f}( s )}( x ) \bigr) v \otimes \wh{g}^{\otimes n}( \bt ) \rangle \std \bt \langle \evec{f}, \evec{g} \rangle \\[1ex] & = \int_{\simp_n( s )} \langle u \otimes \wh{f}^{\otimes n}( \bt ) \otimes \wh{f}( s ), \phi_{n + 1}( x ) v \otimes \wh{g}^{\otimes n}( \bt ) \otimes \wh{g}( s ) \rangle \std \bt \langle \evec{f}, \evec{g} \rangle; \end{aligned} \end{multline*} integrating with respect to $s$ then taking the sum of these terms gives~\eqref{eqn:wqsde}. For the final claim, note that for any $f$, $g \in \elltwo$, the $\alg_0$-valued map \[ \simp_n( t ) \ni \bt \mapsto \phi^{\wh{f}( t_1 )}_{\wh{g}( t_1 )} \comp \cdots \comp \phi^{\wh{f}( t_n )}_{\wh{g}( t_n )}( x ) = \bigl( \id_\ini \otimes \bra{ \wh{f}^{\otimes n}( \bt ) } \bigr) \phi_n( x ) \bigl( \id_\ini \otimes \ket{ \wh{g}^{\otimes n}( \bt ) } \bigr) \] is Bochner integrable, hence \begin{equation}\label{eqn:cdqwi} \bigl( \id_\ini \otimes \bra{\evec{f}} \bigr) \Lambda_t^n\bigl( \phi_n( x ) \bigr) \bigl( \id_\ini \otimes \ket{\evec{g}} \bigr) = e^{\langle f, g \rangle} \int_{\simp_n( t )} \bigl( \phi_{\wh{g}( t_1 )}^{\wh{f}( t_1 )} \comp \cdots \comp \phi_{\wh{g}( t_n )}^{\wh{f}( t_n )} \bigr)( x ) \std \bt \in \alg. \end{equation} By~\eqref{eqn:qrwslicebound}, we may sum~\eqref{eqn:cdqwi} over all $n \in \Z_+$, with the resulting series being norm convergent, and so the final claim follows. \end{proof} \begin{remark}\label{rmk:jid} For all $t \ge 0$, let $j_t$ be as in Theorem~\ref{thm:intrep}. 
Since $\alg_\phi$ is a subspace of $\alg_0$ containing~$\id$, and each $\phi_n$ is linear with $\phi_n( \id ) = 0$, it follows from~\eqref{eqn:jdef} and Theorem~\ref{thm:qwi} that each $j_t$ is linear and unital, as a map into the space of operators with domain $\ini \otimes \evecs$. Moreover, the maps $j_t$ are weakly $*$-homomorphic in the following sense. \end{remark} \begin{lemma}\label{lem:mult} Let $\phi : \alg_0 \to \alg_0 \otimes \bopp$ be a flow generator and let $j_t$ be as in Theorem~\ref{thm:intrep} for all~$t \ge 0$. If $x$,~$y \in \alg_\phi$ then $x^* y \in \alg_\phi$, with \begin{equation}\label{eqn:wmult} \langle j_t( x ) u \evec{f}, j_t( y ) v \evec{g} \rangle = \langle u \evec{f}, j_t( x^* y ) v \evec{g} \rangle \end{equation} for all $u$,~$v \in \ini$ and $f$,~$g \in \elltwo$. In particular, if $x \in \alg_\phi$ then $j_t( x )^* \supseteq j_t( x^* )$. \end{lemma} \begin{proof} As $\alg_\phi$ is a $*$-algebra, $x^* y \in \alg_\phi$. Let $N \in \Z_+$ and note that, by \cite[Theorem~2.2]{LiW03}, \begin{equation}\label{eqn:finiteIto} \sum_{l, m = 0}^N \Lambda^l_t\bigl( \phi_l( x ) \bigr)^* \Lambda^m_t\bigl( \phi_m( y ) \bigr) = \sum_{n = 0}^{2 N} \Lambda^n_t\bigl( \phi_{n, N]}( x^* y ) \bigr) \qquad \text{on } \ini \otimes \evecs, \end{equation} where \[ \phi_{n, N]}( x^* y ) := \sum_{\substack{\alpha \cup \beta = \{ 1, \ldots, n \}\\[0.5ex] | \alpha |, \, | \beta | \le N }} \phi_{| \alpha |}( x^* ; n, \alpha ) \Delta( n, \alpha \cap \beta ) \phi_{| \beta |}( y ; n, \beta ). \] Working as in the proof of Corollary~\ref{cor:bound} yields the inequality \[ \| \phi_{n, N]}( x^* y ) \| \le C_{x^*} C_y ( M_{x^*} + M_{x^*} M_y + M_y )^n, \] and so, by \eqref{eqn:qwi}, \begin{equation}\label{eqn:cutoffest} | \langle u \evec{f}, \Lambda^n_t\bigl( \phi_{n, N]}( x^* y ) \bigr) v \evec{g} \rangle | \le \frac{K_{g, t}^n ( M_{x^*} + M_{x^*} M_y + M_y )^n}{\sqrt{n!}} \, C_{x^*} C_y \, \| u \evec{f} \| \, \| v \evec{g} \|.
\end{equation} As $\phi_{n,N]} = \phi_n$ if $n \in \{0, 1, \ldots, N\}$, it follows that \begin{align*} \langle j_t( x ) u \evec{f}, j_t( y ) v \evec{g} \rangle & = \lim_{N \to \infty} \sum_{l, m = 0}^N \langle u \evec{f}, \Lambda^l_t\bigl( \phi_l( x ) \bigr)^* \Lambda^m_t\bigl( \phi_m( y ) \bigr) v \evec{g} \rangle \\[1ex] & = \lim_{N \to \infty} \sum_{n = 0}^N \langle u \evec{f}, \Lambda^n_t\bigl( \phi_n( x^* y ) \bigr) v \evec{g} \rangle \\[1.5ex] & \hspace{5em} + \lim_{N \to \infty} \smash[t]{\sum_{n = N + 1}^{2 N}} \langle u \evec{f}, \Lambda^n_t\bigl( \phi_{n, N]}( x^* y ) \bigr) v \evec{g} \rangle \\[1ex] & = \langle u \evec{f}, j_t( x^* y ) v \evec{g} \rangle, \end{align*} since the final limit is zero by~\eqref{eqn:cutoffest}. \end{proof} \begin{lemma}\label{lem:uniquesoln} If $\alg_\phi$ is dense in $\alg$ then there is at most one family of $*$-homomorphisms $( \jt_t )_{t \ge 0}$ from $\alg$ to $\bop{\ini \ctimes \fock}{}$ that satisfies~\eqref{eqn:wqsde}. \end{lemma} \begin{proof} Suppose that $j^{( 1 )}$ and $j^{( 2 )}$ are two families of $*$-homomorphisms from $\alg$ to $\bop{\ini \ctimes \fock}{}$ that satisfy~\eqref{eqn:wqsde}. Set $k_t := j^{( 1 )}_t - j^{( 2 )}_t$ and note that \[ \langle u \evec{f}, k_t( x ) v \evec{g} \rangle = \int_0^t \langle u \evec{f}, k_s \bigl( \phi^{\wh{f}( s )}_{\wh{g}( s )}( x ) \bigr) v \evec{g} \rangle \std s \] for all $u$, $v \in \ini$, $f$, $g \in \elltwo$ and $x \in \alg_\phi$. Iterating the above, and using the fact that $\| k_t \| \le 2$ for all $t \ge 0$, we obtain the inequality \[ | \langle u \evec{f}, k_t( x ) v \evec{g} \rangle | \le 2 \int_{\simp_n( t )} \| \phi^{\wh{f}( t_1 )}_{\wh{g}( t_1 )} \comp \cdots \comp \phi^{\wh{f}( t_n )}_{\wh{g}( t_n )}( x ) \| \std \bt \, \| u \evec{f}\| \, \| v \evec{g} \|.
\] However~\eqref{eqn:qrwslicebound} now gives that \[ | \langle u \evec{f}, k_t (x) v \evec{g} \rangle | \le 2 C_x \frac{\bigl( M_x \| \wh{f}_{t)} \| \|\wh{g}_{t)} \| \bigr)^n}{n!} \, \| u \evec{f} \| \, \| v \evec{g} \| \] and the result follows by letting $n \to \infty$. \end{proof} \begin{theorem}\label{thm:extension1} Let $\phi : \alg_0 \to \alg_0 \otimes \bopp$ be a flow generator and suppose $\alg_0$ contains its square roots: for all non-negative $x \in \alg_0$, the square root~$x^{1 / 2}$ lies in $\alg_0$. If $\alg_\phi = \alg_0$ then, for all~$t \ge 0$, there exists a unital $*$-homomorphism \[ \jt_t : \alg \to \bop{\ini \ctimes \fock}{} \] such that $\jt_t( x ) = j_t( x )$ on $\ini \otimes \evecs$ for all $x \in \alg_0$, where $j_t( x )$ is as defined in Theorem~\ref{thm:intrep}. \end{theorem} \begin{proof} Let $x \in \alg_0$ and suppose first that $x \ge 0$. If $y := ( \| x \| \id - x )^{1 / 2}$, which lies in $\alg_0$ by assumption, then Lemma~\ref{lem:mult} and Remark~\ref{rmk:jid} imply that \[ 0 \le \| j_t( y ) \theta \|^2 = \langle \theta, j_t( y^2 ) \theta \rangle = \| x \| \, \| \theta \|^2 - \langle \theta, j_t( x ) \theta \rangle \qquad \text{for all } \theta \in \ini \otimes \evecs. \] If $x$ is now an arbitrary element of $\alg_0$, it follows that \[ \| j_t( x ) \theta \|^2 = \langle \theta, j_t( x^* x ) \theta \rangle \le \| x^* x \| \, \| \theta \|^2 = \| x \|^2 \| \theta \|^2. \] Thus $j_t( x )$ extends to $\jt_t( x ) \in \bop{\ini \ctimes \fock}{}$, which has norm at most $\| x \|$, and the map \[ \alg_0 \to \bop{\ini \ctimes \fock}{}; \ x \mapsto \jt_t( x ) \] is a $*$-linear contraction, which itself extends to a $*$-linear contraction \[ \jt_t : \alg \to \bop{\ini \ctimes \fock}{}. 
\] Furthermore, if $x$,~$y \in \alg_0$ and $\theta$,~$\zeta \in \ini \otimes \evecs$ then, by Lemma~\ref{lem:mult}, \[ \langle \theta, \jt_t( x ) \jt_t( y ) \zeta \rangle = \langle \jt_t( x^* ) \theta, \jt_t( y ) \zeta \rangle = \langle j_t( x^* ) \theta, j_t( y ) \zeta \rangle = \langle \theta, j_t( x y ) \zeta \rangle = \langle \theta, \jt_t( x y ) \zeta \rangle, \] so $\jt_t$ is multiplicative on $\alg_0$. An approximation argument now gives that $\jt_t$ is multiplicative on the whole of~$\alg$. \end{proof} \begin{remark}\label{rmk:af} If $\alg$ is an AF algebra, \ie the norm closure of an increasing sequence of finite-dimensional $*$-subalgebras, then its local algebra $\alg_0$, the union of these subalgebras, contains its square roots, since every finite-dimensional $C^*$~algebra is closed in $\alg$. \end{remark} \begin{definition} The unital $C^*$~algebra $\alg$ has \emph{generators} $\{ a_i : i \in I \}$ if $\alg$ is the smallest unital $C^*$~algebra which contains $\{ a_i : i \in I \}$. These generators \emph{satisfy the relations} $\{ p_k : k \in K \}$ if each $p_k$ is a complex polynomial in the non-commuting indeterminates $\langle X_i, X_i^* : i \in I \rangle$ and, for all $k \in K$, the algebra element $p_k( a_i , a_i^* : i \in I )$, obtained from $p_k$ by replacing $X_i$ by $a_i$ and~$X_i^*$ by $a_i^*$ for all $i \in I$, is equal to $0$. Suppose $\alg$ has generators $\{ a_i : i \in I \}$ which satisfy the relations $\{ p_k : k \in K \}$. Then $\alg$ is \emph{generated by isometries} if $\{ X_i^* X_i - 1 : i \in I \} \subseteq \{ p_k : k \in K \}$ and is \emph{generated by unitaries} if~$\{ X_i^* X_i - 1, \ X_i X_i^* - 1 : i \in I \} \subseteq \{ p_k : k \in K \}$.
The algebra $\alg$ is \emph{universal} if, given any unital $C^*$~algebra $\blg$ containing a set of elements $\{ b_i : i \in I \}$ which satisfies the relations $\{ p_k : k \in K \}$, \ie~$p_k( b_i, b_i^* : i \in I ) = 0$ for all $k \in K$, there exists a unique $*$-homomorphism $\pi : \alg \to \blg$ such that~$\pi( a_i ) = b_i$ for all $i \in I$. \end{definition} \begin{theorem}\label{thm:extension2} Let $\alg$ be the universal $C^*$~algebra generated by isometries $\{ s_i : i \in I \}$ which satisfy the relations $\{ p_k : k \in K \}$, and let $\alg_0$ be the $*$-algebra generated by $\{ s_i : i \in I \}$. If~$\phi : \alg_0 \to \alg_0 \otimes \bopp$ is a flow generator such that $\alg_\phi = \alg_0$ then, for all $t \ge 0$, there exists a unital $*$-homomorphism \[ \jt_t : \alg \to \bop{\ini \ctimes \fock}{} \] such that $\jt_t( x ) = j_t( x )$ on $\ini \otimes \evecs$ for all $x \in \alg_0$, where $j_t( x )$ is as defined in Theorem~\ref{thm:intrep}. \end{theorem} \begin{proof} Remark~\ref{rmk:jid} and Lemma~\ref{lem:mult} imply that $j_t( s_i )$ is isometric and that $j_t( s_i^* )$ is contractive for all $i \in I$. Repeated application of~\eqref{eqn:wmult} then shows that $j_t( x )$ is bounded for each $x \in \alg_0$, and that $j_t$ extends to a unital $*$-homomorphism from $\alg_0$ to $\bop{\ini \ctimes \fock}{}$. Furthermore, the set~$\{ j_t( s_i ) : i \in I \}$ satisfies the relations $\{ p_k : k \in K \}$ so, by the universal nature of $\alg$, there exists a $*$-homomorphism $\pi$ from $\alg$ into $\bop{\ini \ctimes \fock}{}$ such that $\pi( s_i ) = j_t( s_i )$ for all $i \in I$ and $\jt_t := \pi$ is as required. \end{proof} \begin{corollary}\label{cor:stsoln} The family $\bigl( \jt_t: \alg \to \bop{\ini \ctimes \fock}{} \bigr)_{t \ge 0}$ constructed in Theorems~\ref{thm:extension1} and~\ref{thm:extension2} is a strong solution of the QSDE~\eqref{eqn:EHqsde}. 
\end{corollary} \begin{proof} Fix $x \in \alg_\phi$ and let \begin{equation}\label{eqn:integrand} L_t := \Sigma\bigl( ( \jt_t \otimes \iota_\bopp )( \phi( x ) ) \bigr) \end{equation} for all $t \ge 0$, where $\Sigma : \bop{\ini \ctimes \fock \ctimes \mmul}{} \to \bop{\ini \ctimes \mmul \ctimes \fock}{}$ is the isomorphism that swaps the last two components of simple tensors. If $f \in \elltwo$ then \[ \| L_t u \otimes \wh{f}( t ) \otimes \evec{f} \| \le \| \phi( x ) \| \, \| \wh{f}( t ) \| \, \| u \evec{f} \|, \] so if $t \mapsto L_t u \otimes \wh{f}( t ) \otimes \evec{f}$ is strongly measurable then $t \mapsto L_t$ is quantum stochastically integrable \cite[p.232]{Lin05} and $\jt$ satisfies the QSDE in the strong sense, since we already have from~\eqref{eqn:wqsde} that it is a weak solution. Now, Theorem~\ref{thm:intrep} implies that for each $x \in \alg_\phi = \alg_0$ and $\theta \in \ini \otimes \evecs$ the map $t \mapsto \jt_t( x ) \theta$ is continuous, hence so is \[ t \mapsto ( \jt_t \otimes \iota_\bopp )( y \otimes T ) ( \theta \otimes \xi ) = \jt_t( y ) \theta \otimes T \xi \] for all $y \in \alg_0$, $T \in \bop{\mmul}{}$ and $\xi \in \mmul$. As $\| L_t \| = \| \phi( x ) \|$ for all $t \ge 0$, it follows that $t \mapsto L_t$ and $t \mapsto L^*_t$ are strongly continuous on $\ini \ctimes \mmul \ctimes \fock$. Hence $t \mapsto L_t (u \otimes \wh{f}( t ) \otimes \evec{f})$ is separably valued and weakly measurable, so Pettis's theorem gives the result. \end{proof} \begin{remark} Property~\eqref{eqn:matsp} implies that the homomorphism $\jt_t$ given by Theorems~\ref{thm:extension1} and~\ref{thm:extension2} takes values in the matrix space $\alg \matten \bop{\fock}{}$ \cite{Lin05}. \end{remark} \begin{notation} For all $t \ge 0$, $f$,~$g \in \elltwo$ and $a \in \alg$, let \[ \jt_t[ f, g ]( a ) := \bigl( \id_\ini \otimes \bra{\evec{f_{t)}}} \bigr) \jt_t( a ) \bigl( \id_\ini \otimes \ket{\evec{g_{t)}}} \bigr). 
\] \end{notation} \begin{theorem}\label{thm:cocycle} The family of $*$-homomorphisms $( \jt_t )_{t \ge 0}$ given by Theorems~\ref{thm:extension1} and \ref{thm:extension2} forms a Feller cocycle~\emph{\cite[Section~2.4]{LiW01}} for the shift semigroup on~$\bop{\fock}{}$: for all $s$,~$t \ge 0$, $f$,~$g \in \elltwo$ and~$a \in \alg$, \[ \begin{array}{rl} \text{\rm (i)} & \jt_0[ 0, 0 ]( a ) = a, \\[1ex] \text{\rm (ii)} & \jt_t[ f, g ]( a ) \in \alg, \\[1ex] \text{\rm (iii)} & t \mapsto \jt_t[ f, g ]( a ) \text{ is norm continuous} \\[1ex] \text{and} \quad \text{\rm (iv)} & \jt_{s + t}[ f, g ] = \jt_s[ f, g ] \comp \jt_t[ f( \cdot + s ), g( \cdot + s ) ]. \end{array} \] Consequently, setting \[ T_t( a ) := \jt_t[ 0, 0 ]( a ) = \bigl( \id_\ini \otimes \bra{\evec{0}} \bigr) \jt_t( a ) \bigl( \id_\ini \otimes \ket{\evec{0}} \bigr) \qquad \text{for all } a \in \alg \] gives a strongly continuous semigroup $T = ( T_t )_{t \ge 0}$ of completely positive contractions on~$\alg$ such that $T_t( x ) = \exp( t \phi_\vac^\vac )( x )$ for all $x \in \alg_0$ and $t \ge 0$. In particular, $T_t( \id ) = \id$ for all $t \ge 0$ and~$\alg_0$ is a core for the generator of $T$. \end{theorem} \begin{proof} Properties (i) and (ii) are immediate consequences of~\eqref{eqn:wqsde} and~\eqref{eqn:matsp} respectively. For (iii), note that if $x \in \alg_0$ and $f$,~$g \in \elltwo$ then Theorem~\ref{thm:intrep} implies that \[ t \mapsto \jt_t[ f, g ]( x ) = \bigl( \id_\ini \otimes \bra{\evec{f}} \bigr) j_t( x ) \bigl( \id_\ini \otimes \ket{\evec{g}} \bigr) \exp\Bigl( -\int_t^\infty \langle f( s ), g( s ) \rangle \std s \Bigr) \] is norm continuous; the general case follows by approximation. In order to establish (iv), fix~$s \ge 0$ and continuous functions $f$,~$g \in \elltwo$, and let \[ J_t := \jt_s[ f, g ] \comp \jt_t[ f( \cdot + s ), g( \cdot + s ) ] \qquad \text{for all } t \ge 0. \] We will show that $J_t = \jt_{s + t}[ f, g ]$. 
First note that for any $x \in \alg_0$ and $t > 0$, the map \[ F : [ 0, t ] \to \alg; \ r \mapsto \jt_r [ f( \cdot + s ), g( \cdot + s ) ] \bigl( \phi^{\wh{f}( r + s )}_{\wh{g}( r + s )}( x ) \bigr) \langle \evec{f_{[ s + r , s + t )}}, \evec{g_{[ s + r, s + t )}} \rangle \] is continuous, hence Bochner integrable, and so \[ x \langle \evec{f_{[ s, s + t )}}, \evec{g_{[ s, s + t )}} \rangle + \int^t_0 F( r ) \std r \in \alg. \] By the adaptedness of $\jt_t( x )$ and~\eqref{eqn:wqsde}, \begin{multline*} \langle u, \Bigl( x \langle \evec{f_{[ s, s + t )}}, \evec{g_{[ s, s + t )}} \rangle + \int^t_0 F( r ) \std r \Bigr) v \rangle \\ \begin{aligned} & = \langle u, x v \rangle \langle \evec{f( \cdot + s )_{t)}}, \evec{g( \cdot + s )_{t)}} \rangle \\ & \quad + \int^t_0 \langle u \evec{f( \cdot + s )_{r)}}, j_r \bigl( \phi^{\wh{f}(r + s)}_{\wh{g}(r + s)}( x ) \bigr) v \evec{g( \cdot + s )_{r)}} \rangle \langle \evec{f( \cdot + s)_{[ r, t )}}, \evec{g( \cdot + s)_{[ r , t )}} \rangle \std r \\ & = \langle u \evec{f( \cdot + s )_{t)}}, j_t( x ) v \evec{g( \cdot + s )_{t)}} \rangle \\ & = \langle u, \jt_t[ f( \cdot + s ), g( \cdot + s )]( x ) v \rangle. \end{aligned} \end{multline*} Consequently, \begin{align*} \langle u, J_t( x ) v \rangle & = \langle u, \jt_s[ f, g ]( x ) v \rangle \langle \evec{f_{[ s, s + t )}}, \evec{g_{[ s, s + t )}} \rangle \\ & \quad + \int^t_0 \langle u, \jt_s[ f, g ] \comp \jt_r[ f( \cdot + s ), g( \cdot + s ) ]\bigl( \phi^{\wh{f}( r + s )}_{\wh{g}( r + s )}( x ) \bigr) v \rangle \langle \evec{f_{[ s + r, s + t )}}, \evec{g_{[ s + r, s + t )}} \rangle \std r \\ & = \langle u, \jt_s[ f, g ]( x ) v \rangle \langle \evec{f_{[ s, s + t )}}, \evec{g_{[ s, s + t )}} \rangle \\ & \quad + \int^t_0 \langle u, J_r\bigl( \phi^{\wh{f}(r + s )}_{\wh{g}(r + s )}( x ) \bigr) v \rangle \langle \evec{f_{[ s + r, s + t )}}, \evec{g_{[ s + r, s + t )}} \rangle \std r.
\end{align*} On the other hand, by~\eqref{eqn:wqsde}, \begin{align*} \langle u, \jt_{s + t}[ f, g ]( x ) v \rangle & = \langle u, x v \rangle \langle \evec{f_{s + t)}}, \evec{g_{s + t)}} \rangle + \int^s_0 \langle u \evec{f_{s + t)}}, j_r\bigl( \phi^{\wh{f}( r )}_{\wh{g}( r )}( x ) \bigr) v \evec{g_{s + t)}} \rangle \std r \\ & \quad + \int^{s + t}_s \langle u \evec{f_{s + t)}}, j_r\bigl( \phi^{\wh{f}( r )}_{\wh{g}( r )}( x ) \bigr) v \evec{g_{s + t)}} \rangle \std r \\ & = \langle u \evec{f_{s + t)}}, j_s( x ) v \evec{g_{s + t)}} \rangle + \int^t_0 \langle u \evec{f_{s + t)}}, j_{q + s}\bigl( \phi^{\wh{f}( q + s )}_{\wh{g}(q + s)}( x ) \bigr) v \evec{g_{s + t)}} \rangle \std q \\ & = \langle u, \jt_s[ f, g ]( x ) v \rangle \langle \evec{f_{[ s, s + t )}}, \evec{g_{[ s, s + t )}} \rangle \\ & \quad + \int^t_0 \langle u, \jt_{q + s}[ f, g ]\bigl( \phi^{\wh{f}( q + s )}_{\wh{g}( q + s )}( x ) \bigr) v \rangle \langle \evec{f_{[ s + q, s + t )}}, \evec{g_{[ s + q, s + t )}} \rangle \std q. \end{align*} Now set $K_t := J_t - \jt_{s+t}[ f, g ]$, so that \[ \langle u, K_t( x ) v \rangle = \int^t_0 \langle u, K_r \bigl( \phi^{\wh{f}( r + s )}_{\wh{g}( r + s )} ( x ) \bigr) v \rangle \, G( r ) \std r, \] where $G : r \mapsto \langle \evec{f_{[ s + r, s + t )}}, \evec{g_{[ s + r, s + t )}} \rangle$ is continuous. As \[ \| K_t \| \le 2 \exp \Bigl( \hlf \bigl( \| f \|^2 + \| g \|^2 \bigr) \Bigr) \qquad \text{for all } t \ge 0, \] iterating the above and estimating as in the proof of Lemma~\ref{lem:uniquesoln} shows that $K \equiv 0$. The density of $\alg_0$ in $\alg$ and of continuous functions in $\elltwo$ now gives (iv). That $T$ is a semigroup follows from this cocycle property (iv): note that \[ T_{s + t} = \jt_{s + t}[ 0, 0 ] = \jt_s[ 0, 0 ] \comp \jt_t[ 0, 0 ] = T_s \comp T_t \qquad \text{for all } s, t \ge 0.
\] Contractivity, complete positivity and strong continuity of $T$ are immediate; the exponential identity holds because \begin{equation}\label{eqn:sie} \langle u, T_t( x ) v \rangle = \langle u, x v \rangle + \int_0^t \langle u, T_s\bigl( \phi_\vac^\vac( x ) \bigr) v \rangle \std s \end{equation} for all $u$,~$v \in \ini$, $t \ge 0$ and $x \in \alg_0$, by~\eqref{eqn:wqsde}. That $\alg_0$ is a core for the generator of $T$ follows from Lemma~\ref{lem:abe} and~\cite[Corollary~3.1.20]{BrR02a}. \end{proof} \begin{remark} A $*$-homomorphic Feller cocycle as in Theorem~\ref{thm:cocycle} is called a \emph{quantum flow}; a strongly continuous semigroup $( T_t )_{t \ge 0}$ of completely positive contractions is known as a \emph{quantum dynamical semigroup}, and the condition $T_t( \id ) = \id$ for all $t \ge 0$ means that the semigroup is \emph{conservative}; conservative quantum dynamical semigroups are also known as \emph{quantum Markov semigroups}. Hence Theorem~\ref{thm:cocycle} gives the existence of a quantum flow which dilates a quantum Markov semigroup on the $C^*$~algebra $\alg$. \end{remark} \begin{remark} By Theorem~\ref{thm:cocycle}, the component $\phi_\vac^\vac = \tau$ of the flow generator $\phi$ is closable, with $\overline{\tau}$ being the generator of the quantum Markov semigroup $T$. However, closability of the bimodule map $\delta$ seems to be a much more delicate issue and remains an open question. \end{remark} \begin{theorem}\label{thm:abelian} Consider the family of $*$-homomorphisms $( \jt_t )_{t \ge 0}$ constructed in Theorems~\ref{thm:extension1} and~\ref{thm:extension2}.
If $\alg_c$ is a commutative $*$-subalgebra of $\alg$ such that \[ \begin{array}{rl} \text{\rm (i)} & \phi( \alg_c \cap \alg_0 ) \subseteq \alg_c \otimes \bopp \\[1ex] \text{and} \quad \text{\rm (ii)} & \alg_c \cap \alg_0 \text{ is dense in } \alg_c \end{array} \] then the family $\{ \jt_t( a ) : t \ge 0, \ a \in \alg_c \}$ is commutative, \ie the commutator $[ \jt_s( a ), \jt_t( b ) ] = 0$ for all $s$, $t \ge 0$ and $a$,~$b \in \alg_c$. \end{theorem} \begin{proof} The result is immediate when $s = t$, so assume without loss of generality that $s < t$ and let $b \in \alg_c \cap \alg_0$; if \[ K_t( b ) := \langle u \evec{f}, [\jt_s (a), \jt_t (b)] v \evec{g} \rangle = 0, \] where $u$, $v \in \ini$, $f$, $g \in \elltwo$ and $a \in \alg_c$ are arbitrary, then the result follows by~(ii) and the continuity of~$\jt_t$. Write $\jt_t( b ) = \jt_s( b ) + \int_s^t L_r \std \Lambda_r$, where $L = ( L_r )_{r \ge 0}$ is the process defined in~\eqref{eqn:integrand} with $x$ changed to $b$. It is straightforward, using adaptedness, to show that \[ A \int^t_s L_r \std \Lambda_r B = \int^t_s \Sigma( A \otimes 1_\mmul ) \, L_r \, \Sigma( B \otimes 1_\mmul ) \std \Lambda_r \] for any $A$, $B \in \bop{\ini \ctimes \fock_{[ 0, s )}}{} \ctimes 1_{\fock_{[ s, \infty )}}$, where $\Sigma$ is the swap isomorphism defined after~\eqref{eqn:integrand}. Since $\jt$ is a strong solution of the QSDE~\eqref{eqn:EHqsde}, by Corollary~\ref{cor:stsoln}, it follows that \[ K_t( b ) = \int^t_s K_r\bigl( \phi^{\wh{f}( r )}_{\wh{g}( r )}( b ) \bigr) \std r. 
\] Assumption~(i) allows us to iterate this identity; noting also that \[ | K_r( c ) | \le 2 \| u \| \, \| v \| \, \| \evec{f} \| \, \| \evec{g} \| \, \| a \| \, \| c \| \qquad \text{for all } c \in \alg_c \cap \alg_0, \] one readily obtains the estimate \[ | K_t( b ) | \le 2 \| u \| \, \| v \| \, \| \evec{f} \| \, \| \evec{g} \| \, \| a \| C_b M_b^n \frac{1}{n!} \Bigl( \int_s^t \| \wh{f}( r ) \| \, \| \wh{g}( r ) \| \std r \Bigr)^n, \] where $C_b$ and $M_b$ are constants associated to $b$ through its membership of $\alg_\phi$. Letting $n \to \infty$ gives the result. \end{proof} \begin{remark} If $\alg$ is commutative then conditions~(i) and~(ii) of Theorem~\ref{thm:abelian} are satisfied automatically when $\alg_c = \alg$, so Theorems~\ref{thm:extension1} and \ref{thm:extension2} produce classical Markov semigroups in this case. However, Theorem~\ref{thm:abelian} also allows for the possibility of dealing with different commutative subalgebras that do not commute with one another, a necessary feature of quantum dynamics. \end{remark} \section{Random walks on groups}\label{sec:walks} \begin{definition}\label{dfn:walks} Let $\alg = C_0( G ) \oplus \C 1 \subseteq \bopp\bigl( \ell^2( G ) \bigr)$, where $G$ is a discrete group and $x \in C_0( G )$ acts on~$\ell^2( G )$ by multiplication, and let $\alg_0 = \lin\{ 1, \, e_g : g \in G \}$, where $e_g( h ) := \tfn{g = h}$ for all $h \in G$. That is, $\alg$ is the unitisation of the $C^*$~algebra of functions on $G$ which vanish at infinity and $\alg_0$ is the dense unital subalgebra generated by the functions with finite support; as positivity in the $C^*$-algebraic sense corresponds here to the pointwise positivity of functions,~$\alg_0$ contains its square roots. 
Let $H$ be a non-empty finite subset of $G \setminus \{ e \}$ and let the Hilbert space~$\mul$ have orthonormal basis $\{ f_h : h \in H \}$; the maps \[ \lambda_h : G \to G; \ g \mapsto h g \qquad ( h \in H ) \] correspond to the permitted moves in the random walk constructed on~$G$. \end{definition} \begin{lemma}\label{lem:walks} Given a \emph{transition function} \[ t : H \times G \to \C; \ ( h, g ) \mapsto t_h( g ), \] the map \[ \phi : \alg_0 \to \alg_0 \otimes \bopp; \ x \mapsto \begin{bmatrix} \sum_{h \in H} | t_h |^2 ( x \comp \lambda_h - x ) & \sum_{h \in H} \overline{t_h} ( x \comp \lambda_h - x ) \otimes \bra{f_h} \\[1ex] \sum_{h \in H} t_h ( x \comp \lambda_h - x ) \otimes \ket{f_h} & \sum_{h \in H} ( x \comp \lambda_h - x ) \otimes \dyad{f_h}{f_h} \end{bmatrix} \] is a flow generator such that \[ \phi( e_g ) = e_g \otimes m_e( g ) + \sum_{h \in H} e_{h^{-1}g} \otimes m_h( h^{-1} g ) \qquad \text{for all } g \in G, \] where \[ m_e( g ) := \begin{bmatrix} -\sum_{h \in H} | t_h( g ) |^2 & -\sum_{h \in H} \overline{t_h( g )} \bra{f_h} \\[1ex] -\sum_{h \in H} t_h( g ) \ket{f_h} & -1_\mul \end{bmatrix} \quad \text{and} \quad m_h( g ) := \begin{bmatrix} | t_h( g ) |^2 & \overline{t_h( g )} \bra{f_h} \\[1ex] t_h( g ) \ket{f_h} & \dyad{f_h}{f_h} \end{bmatrix}. \] Hence \[ \phi_n( e_g ) = \sum_{h_1 \in H \cup \{ e \}} \cdots \sum_{h_n \in H \cup \{ e \}} e_{h_n^{-1} \cdots h_1^{-1} g} \otimes m_{h_n}( h_n^{-1} \cdots h_1^{-1} g ) \otimes \cdots \otimes m_{h_1}( h_1^{-1} g ) \] for all $n \in \N$ and $g \in G$. \end{lemma} \begin{proof} The first claim is readily verified with the aid of Lemma~\ref{lem:gen}; the second is immediate. \end{proof} \begin{theorem}\label{thm:walks} Let $\alg$ be as in Definition~\ref{dfn:walks} and $\phi$ as in Lemma~\ref{lem:walks}. 
If the transition function~$t$ is chosen such that $\alg_\phi = \alg_0$ then there exists an adapted family of unital $*$-homomorphisms $\bigl( \jt_t : \alg \to \bop{\ini \ctimes \fock}{} \bigr)_{t \ge 0}$ which forms a Feller cocycle in the sense of Theorem~\ref{thm:cocycle} and satisfies the quantum stochastic differential equation~\eqref{eqn:EHqsde} in the strong sense on~$\alg_0$ for all~$t \ge 0$. Setting \[ T_t( a ) := \bigl( \id_\ini \otimes \bra{\evec{0}} \bigr) \jt_t( a ) \bigl( \id_\ini \otimes \ket{\evec{0}} \bigr) \qquad \text{for all } a \in \alg \text{ and } t \ge 0 \] gives a classical Markov semigroup~$T$ on $\alg$ whose generator is the closure of \[ \tau : \alg_0 \to \alg_0; \ x \mapsto \sum_{h \in H} | t_h |^2 ( x \comp \lambda_h - x ). \] \end{theorem} \begin{proof} This follows from Theorems~\ref{thm:extension1} and~\ref{thm:abelian} together with Lemma~\ref{lem:walks}. \end{proof} \begin{remark}\label{rem:walks} Given $g \in G$, let $A := \bigl[ B \ 1_\mul \bigr] \in \bop{\C \oplus \mul}{\mul}$, where $B := \sum_{h \in H} t_h( g ) \ket{f_h}$. Then $m_e( g ) = -A^* A$ and \[ \| m_e( g ) \| = \| A A^* \| = \| B B^* + 1_\mul \| = \| B^* B \| + 1 = 1 + \sum_{h \in H} | t_h( g ) |^2. \] It may be shown similarly that $\| m_h( g ) \| = 1 + | t_h( g ) |^2$ for all $g \in G$ and $h \in H$, so if \begin{equation}\label{eqn:walks} M_g := \lim_{n \to \infty} \sup\bigl\{ | t_h( h_n^{-1} \cdots h_1^{-1} g ) | : h_1, \ldots, h_n \in H \cup \{ e \}, \ h \in H \bigr\} < \infty \end{equation} then \[ \| \phi_n( e_g ) \| \le ( 1 + | H | + 2 | H | M_g^2 )^n \qquad \text{for all } n \in \Z_+, \] where $| H |$ denotes the cardinality of $H$. Hence $\alg_\phi = \alg_0$ if \eqref{eqn:walks} holds for all $g \in G$. \end{remark} \begin{remark} If $t$ is bounded then clearly~\eqref{eqn:walks} holds for all $g \in G$. 
In this case, there exist bounded operators $L \in \bop{\ini}{\ini \ctimes \mul}$, $S \in \bop{\ini \ctimes \mul}{}$ and $F \in \bop{\ini \ctimes \mmul}{}$ such that \[ L = \sum_{h \in H} t_h \otimes \ket{f_h}, \quad S = \sum_{h \in H} S_h \otimes \dyad{f_h}{f_h} \quad \text{and} \quad F = \begin{bmatrix} -\frac{1}{2} L^* L & -L^* \\[1ex] S L & S - 1_{\ini \otimes \mul} \end{bmatrix}, \] where $t_h$ acts by multiplication and $S_h$ is the unitary operator on $\ell^2( G )$ such that $e_g \mapsto e_{h g}$. It follows from \cite[Theorems~7.1 and 7.5]{LiW00b} that the Hudson--Parthasarathy QSDE \[ U_0 = I_{\ini \otimes \fock}, \qquad \rd U_t = (F \otimes 1_\fock) \Sigma ( U_t \otimes I_\mmul ) \std \Lambda_t, \] where $\Sigma$ is the swap isomorphism defined after~\eqref{eqn:integrand}, has a unique solution which is a unitary cocycle. Furthermore, by \cite[Theorem~7.4]{LiW00b}, setting \[ k_t( a ) := U^*_t ( a \otimes 1_\fock ) U_t \quad \text{for all } a \in \bop{\ini}{} \text{ and } t \ge 0 \] defines a quantum flow $k$ with generator \[ \varphi : \bop{\ini}{} \to \bop{\ini \ctimes \mmul}{}; \ a \mapsto ( a \otimes 1_\mmul ) F + F^* ( a \otimes 1_\mmul ) + F^* \Delta ( a \otimes 1_\mmul ) F. \] A short calculation shows that $\varphi$ is of the form covered by Lemma~\ref{lem:bddphi}, with \[ \pi( a ) = S^* ( a \otimes 1_\mul ) S, \qquad \delta( a ) = -L a + \pi( a ) L \quad \text{and} \quad \tau( a ) = -\hlf\{ L^* L, a \} + L^* \pi( a ) L \] for all $a \in \bop{\ini}{}$. It follows that $\varphi|_{\alg_0} = \phi$, where $\phi$ is the flow generator of Lemma~\ref{lem:walks}, and so the cocycle $\jt$ given by Theorem~\ref{thm:walks} is the restriction of $k$ to $\alg$. However, this construction by conjugation does not give the Feller property, that $\alg$ is preserved by $k$. 
\end{remark} \begin{example} If $G = ( \Z, + )$, $H = \{ \pm 1 \}$ and the transition function $t$ is bounded, with $t_{+1}( g ) = 0$ for all $g < 0$ and $t_{-1}( g ) = 0$ for all $g \le 0$, then the Markov semigroup $T$ given by Theorem~\ref{thm:walks} corresponds to the classical birth-death process with birth and death rates $| t_{+1} |^2$ and $| t_{-1} |^2$, respectively. The cocycle constructed here is Feller, as it acts on $\alg = C_0( \Z ) \oplus \C 1$, in contrast to \cite[Example~3.3]{PaS90}, where the cocycle acts on the whole of $\ell^\infty( \Z )$. \end{example} \begin{remark} If $G = ( \Z, + )$, $H = \{ +1 \}$ and $t_{+1} : g \mapsto 2^g$ then $M_g = 2^g$ and the condition \eqref{eqn:walks} holds for all $g \in G$. Thus Theorem~\ref{thm:walks} applies to examples where the transition function $t$ is unbounded. \end{remark} \section{The symmetric quantum exclusion process}\label{sec:sqep} This section was inspired by Rebolledo's treatment of the quantum exclusion process: see~\cite[Examples~2.4.3 and~4.1.3]{Reb05}. \begin{definition} Let $I$ be a non-empty set. The \emph{CAR algebra} is the unital $C^*$~algebra $\alg$ with generators $\{ b_i : i \in I \}$, subject to the anti-commutation relations \begin{equation}\label{eqn:car} \{ b_i, b_j \} = 0 \qquad \text{and} \qquad \{ b_i, b_j^* \} = \tfn{i = j} \qquad \text{for all } i, j \in I. \end{equation} It follows from~\eqref{eqn:car} that the $b_i$ are nonzero partial isometries for all $i \in I$. As is well known \cite[Proposition~5.2.2]{BrR02b}, $\alg$ is represented faithfully and irreducibly on $\fock_-\bigl( \ell^2( I ) \bigr)$, the Fermionic Fock space over $\ell^2( I )$; in other words, we may (and do) suppose that $\alg \subseteq \bop{\ini}{}$, where $\ini := \fock_-\bigl( \ell^2 ( I ) \bigr)$, and the algebra identity $\id = \id_\ini$.
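To verify the partial-isometry claim above, note that taking $j = i$ in~\eqref{eqn:car} gives $b_i^2 = 0$ and $b_i^* b_i + b_i b_i^* = \id$, whence \[ b_i b_i^* b_i = b_i ( \id - b_i b_i^* ) = b_i - b_i^2 b_i^* = b_i; \] moreover $b_i \neq 0$, since otherwise $\{ b_i, b_i^* \} = 0 \neq \id$.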
\end{definition} \begin{remark} The elements of $I$ may be taken to correspond to sites at which Fermionic particles may exist, with the operators $b_i$ and $b_i^*$ representing the annihilation and creation, respectively, of a particle at site~$i$. \end{remark} \begin{notation} Let $\alg_0$ be the unital algebra generated by $\{ b_i, b_i^* : i \in I \}$; by definition, this is a norm-dense unital $*$-subalgebra of $\alg$. \end{notation} \begin{lemma}\label{lem:fdsa} For each $x \in \alg_0$ there exists a finite subset $J \subseteq I$ such that $x$ lies in the finite-dimensional $*$-subalgebra \[ \alg_J := \lin\bigl\{ b_{j_1}^* \cdots b_{j_q}^* b_{i_1} \cdots b_{i_p} : 0 \le p, q \le |J|, \ \{ i_1, \ldots, i_p \} \in J^{(p)}, \ \{ j_1, \ldots, j_q \} \in J^{(q)} \bigr\} \subseteq \alg_0, \] where $J^{(p)}$ denotes the set of subsets of~$J$ with cardinality $p$ \etc. Consequently, $\alg$ is an AF algebra and $\alg_0$ contains its square roots. \end{lemma} \begin{proof} By employing the anti-commutation relations~\eqref{eqn:car}, any finite product of terms from the generating set $\{ b_i, b_i^* : i \in I \}$ may be reduced to a linear combination of words of the form \begin{equation}\label{eqn:canonical} b_{j_1}^* \cdots b_{j_q}^* b_{i_1} \cdots b_{i_p}, \end{equation} where $i_1$, \ldots, $i_p$ are distinct elements of $I$, as are $j_1$, \ldots, $j_q$, and $p$,~$q \in \Z_+$, with an empty product equal to~$\id$. As every element of $\alg_0$ is a finite linear combination of such terms, the first claim follows. The second claim holds by Remark~\ref{rmk:af}. \end{proof} \begin{definition} Let $\{ \alpha_{i, j} : i, j \in I \} \subseteq \C$ be a fixed collection of \emph{amplitudes}.
We may view $( I, \{ \alpha_{i, j} : i, j \in I \} )$ as a weighted directed graph, where $I$ is the set of vertices, an edge exists from $i$ to~$j$ if $\alpha_{i, j} \neq 0$ and $\alpha_{i, j}$ is a complex weight on the edge from vertex~$i$ to vertex~$j$, which may differ from the weight $\alpha_{j, i}$ from $j$ to $i$. For all $i \in I$, let \[ \supp( i ) := \{ j \in I : \alpha_{i, j} \neq 0 \} \ \text{ and } \ \supp^+( i ) := \supp( i ) \cup \{ i \}. \] Thus $\supp( i )$ is the set of sites with which site $i$ interacts and $| \supp( i ) |$ is the valency of the vertex $i$. We require that the valencies are finite: \begin{equation}\label{eqn:finite} | \supp( i ) | < \infty \quad \text{for all } i \in I. \end{equation} The transport of a particle from site~$i$ to site~$j$ with amplitude $\alpha_{i, j}$ is described by the operator \[ t_{i, j} :=\alpha_{i, j} \, b_j^* b_i. \] \end{definition} \begin{definition} Let $\{ \eta_i : i \in I \} \subseteq \R$ be fixed. The total energy in the system is given by \[ h := \sum_{i \in I} \eta_i \, b_i^* b_i, \] where $\eta_i$ gives the energy of a particle at site~$i$. If the set $\{ i \in I: \eta_i \neq 0 \}$ is infinite then the proper interpretation of~$h$ involves issues of convergence; below it will only appear in a commutator with elements of $\alg_0$, which is sufficient to give a well-defined quantity. \end{definition} \begin{lemma}\label{lem:tau} Let \[ \tau_{i, j}( x ) := t_{i, j}^* [ t_{i, j}, x ] + [ x, t_{i, j}^* ] t_{i, j} = | \alpha_{i, j} |^2 \bigl( b_i^* b_j [ b_j^* b_i, x ] + [ x, b_i^* b_j ] b_j^* b_i \bigr) \] for all $i$,~$j \in I$ and $x \in \alg$, and let \begin{equation}\label{eqn:energy} [ h, x ] := \sum_{i \in I } \eta_i [ b_i^* b_i, x ] \end{equation} for all $x \in \alg_0$. Setting \begin{equation} \tau( x ) := \I [ h, x ] - \hlf \sum_{i, j \in I} \tau_{i, j}( x ) \end{equation} defines a $*$-linear map $\tau : \alg_0 \to \alg_0$. 
\end{lemma} \begin{proof} Let $x \in \alg_0$ and note that $x \in \alg_J$ for some finite set $J \subseteq I$, by Lemma~\ref{lem:fdsa}. Furthermore, \[ [ b_j^* b_i, x ] = b^*_j \{ b_i, x \} - \{ b^*_j, x \} b_i = 0 \qquad \text{whenever } i \not \in J \text{ and } j \not \in J, \] so \[ [ h, x ] = \sum_{i \in J} \eta_i [ b_i^* b_i , x ] \in \alg_J \qquad \text{and} \qquad \tau( x ) = \I [ h, x ] - \hlf \sum_{i, j \in J^+} \tau_{i, j}( x ) \in \alg_{J^{+}}, \] where \begin{equation}\label{eqn:Jdash} J^+ := \bigcup_{k \in J} \supp^+( k ). \end{equation} Hence $\tau( \alg_J ) \subseteq \alg_{J^+}$ and, as~\eqref{eqn:finite} implies that $J^+$ is finite, it follows that $\alg_0$ is invariant under~$\tau$. The $*$-linearity of $\tau$ is immediately verified. \end{proof} \begin{lemma}\label{lem:delta} Let \[ \delta_{i, j}( x ) := [ t_{i, j} , x ] = \alpha_{i, j} ( b_j^* b_i x - x b_j^* b_i ) \] for all $i$, $j \in I$ and $x \in \alg$, and let $\mul$ be a Hilbert space with orthonormal basis $\{ f_{i, j} : i, j \in I \}$. Setting \begin{equation}\label{eqn:deldef} \delta( x ) := \sum_{i, j \in I} \delta_{i, j}( x ) \otimes \ket{f_{i, j}} \end{equation} for all $x \in \alg_0$ defines a linear map $\delta : \alg_0 \to \alg_0 \otimes \ket{\mul}$ such that \begin{align} \delta( x y ) & = \delta( x ) y + ( x \otimes \id_\mul ) \delta( y ) \label{eqn:derv} \\[1ex] \text{and} \qquad \delta^\dagger( x ) \delta( y ) & = \tau( x y ) - \tau( x ) y - x \tau( y ) \end{align} for all $x$, $y \in \alg_0$, where $\tau$ is as defined in Lemma~\ref{lem:tau}. \end{lemma} \begin{proof} The series in~\eqref{eqn:deldef} contains only finitely many terms, since if $x \in \alg_J$ then \[ \delta_{i, j}( x ) = 0 \qquad \text{when } \{ i, j \} \not \subseteq J^+. \] Hence $\delta$ is well defined, and~\eqref{eqn:derv} holds because each $\delta_{i, j}$ is a derivation. 
A short calculation shows that \begin{equation}\label{eqn:diss} \tau_{i, j}( x y ) - \tau_{i, j}( x ) y - x \tau_{i, j}( y ) = -2 \delta_{i, j}^\dagger( x ) \delta_{i, j}( y ) \end{equation} for all $x$, $y \in \alg$. Since $x \mapsto [ b_i^* b_i, x ]$ is a derivation for all $i \in I$, it follows from~\eqref{eqn:diss} that \[ \tau( x y ) - \tau( x ) y - x \tau( y ) = \sum_{i, j \in I} \delta_{i, j}^\dagger( x ) \delta_{i, j}( y ) = \delta^\dagger( x ) \delta( y ) \qquad \text{for all } x, y \in \alg_0. \qed \] \renewcommand{\qed}{} \end{proof} \begin{lemma}\label{lem:sqepphi} The map \begin{equation}\label{eqn:sqepphi} \phi : \alg_0 \to \alg_0 \otimes \bopp; \ x \mapsto \begin{bmatrix} \tau( x ) & \delta^\dagger( x ) \\[1ex] \delta( x ) & 0 \end{bmatrix}, \end{equation} where $\tau$, $\delta$ and $\delta^\dagger$ are as defined in Lemmas~\ref{lem:tau} and~\ref{lem:delta}, is a flow generator. If the amplitudes satisfy the \emph{symmetry condition} \begin{equation}\label{eqn:symmetric} | \alpha_{i, j} | = | \alpha_{j, i} | \qquad \text{for all } i, j \in I \end{equation} then, for all $n \in \N$ and $i_0 \in I$, \begin{equation}\label{eqn:phid} \phi_n( b_{i_0} ) = \sum_{i_1 \in \supp^+( i_0 )} \cdots \sum_{i_n \in \supp^+( i_{n - 1} )} b_{i_n} \otimes B_{i_{n - 1}, i_n} \otimes \cdots \otimes B_{i_0, i_1}, \end{equation} where \[ B_{i, j} := \tfn{j = i} \lambda_i \dyad{\vac}{\vac} + \dyad{\vac}{\alpha_{i, j} f_{i, j}} - \dyad{\alpha_{j, i} f_{j, i}}{\vac} \] and \[ \lambda_i := -\I \eta_i - \hlf \sum_{j \in \supp( i )} | \alpha_{j, i} |^2 \] for all $i$, $j \in I$. \end{lemma} \begin{proof} The first claim is an immediate consequence of Lemmas~\ref{lem:tau}, \ref{lem:delta} and \ref{lem:gen}. 
If $i$,~$j$,~$k \in I$ then a short calculation shows that \[ \tau_{j, k}( b_i ) = \left\{ \begin{array}{ll} | \alpha_{i, i} |^2 b_i & ( j = i, k = i ), \\[1ex] | \alpha_{j, i} |^2 b_j^* b_j b_i & ( j \neq i, k = i ), \\[1ex] | \alpha_{i, k} |^2 b_k b_k^* b_i & ( j = i, k \neq i ), \\[1ex] 0 & ( j \neq i, k \neq i ). \end{array} \right. \] Since \[ [ h, b_i ] = \sum_{j \in I} \eta_j [ b_j^* b_j, b_i ] = \eta_i [ b_i^* b_i, b_i ] = - \eta_i b_i, \] the symmetry condition~\eqref{eqn:symmetric} implies that \[ \tau( b_i ) = \lambda_i b_i \qquad \text{for all } i \in I. \] Furthermore, if $i$, $j$, $k \in I$ then \[ \delta_{j, k}( b_i ) = \alpha_{j, k} ( b_k^* b_j b_i - b_i b_k^* b_j ) = -\alpha_{j, k} \{ b_k^*, b_i \} b_j = - \tfn{k = i} \alpha_{j, i} \, b_j \] and \[ \delta^\dagger_{j, k}( b_i ) = \overline{\alpha_{j, k}} ( b_i b_j^* b_k - b_j^* b_k b_i ) = \overline{\alpha_{j, k}} \{ b_i, b_j^* \} b_k = \tfn{j = i} \overline{\alpha_{i, k}} \, b_k; \] thus \[ \delta( b_i ) = \sum_{j, k \in I} \delta_{j, k}( b_i ) \otimes \ket{f_{j, k}} = -\sum_{j \in \supp( i )} \alpha_{j, i} \, b_j \otimes \ket{f_{j, i}} \] and \[ \delta^\dagger( b_i ) = \sum_{j, k \in I} \delta^\dagger_{j, k}( b_i ) \otimes \bra{f_{j, k}} = \sum_{k \in \supp( i )} \overline{\alpha_{i, k}} \, b_k \otimes \bra{f_{i, k}}. \] Hence \begin{align*} \phi( b_i ) & = \lambda_i b_i \otimes \dyad{\vac}{\vac} - \sum_{j \in \supp( i )} \alpha_{j, i} b_j \otimes \dyad{f_{j, i}}{\vac} + \sum_{k \in \supp( i )} \overline{\alpha_{i, k}} b_k \otimes \dyad{\vac}{f_{i, k}} \\[1ex] & = \sum_{j \in \supp^+( i )} b_j \otimes \bigl( \tfn{j = i} \lambda_i \dyad{\vac}{\vac} + \dyad{\vac}{\alpha_{i, j} f_{i, j}} - \dyad{\alpha_{j, i} f_{j, i}}{\vac} \bigr) \end{align*} and the identity~\eqref{eqn:phid} follows. \end{proof} \begin{theorem} Let $\alg$ be the CAR algebra and let $\phi$ be defined as in Lemma~\ref{lem:sqepphi}. 
If the amplitudes $\{ \alpha_{i , j} \}$ and energies $\{ \eta_i \}$ are chosen so that $\alg_\phi = \alg_0$ then there exists an adapted family of unital $*$-homomorphisms $\bigl( \jt_t : \alg \to \bop{\ini \ctimes \fock}{} \bigr)_{t \ge 0}$ which forms a Feller cocycle in the sense of Theorem~\ref{thm:cocycle} and satisfies the quantum stochastic differential equation~\eqref{eqn:EHqsde} in the strong sense on~$\alg_0$ for all~$t \ge 0$. Setting \[ T_t( a ) := \bigl( \id_\ini \otimes \bra{\evec{0}} \bigr) \jt_t( a ) \bigl( \id_\ini \otimes \ket{\evec{0}} \bigr) \qquad \text{for all } a \in \alg \text{ and } t \ge 0 \] gives a quantum Markov semigroup~$T$ on $\alg$ whose generator is the closure of \[ \tau : \alg_0 \to \alg_0; \ x \mapsto \I \sum_{i \in I} \eta_i [ b_i^* b_i , x ] - \hlf \sum_{i, j \in I} | \alpha_{i, j} |^2 \bigl( b_i^* b_j [ b_j^* b_i, x ] + [ x, b_i^* b_j ] b_j^* b_i \bigr). \] \end{theorem} \begin{proof} This is an immediate consequence of Theorem~\ref{thm:extension1}, Theorem~\ref{thm:cocycle} and Lemma~\ref{lem:sqepphi}. \end{proof} \begin{example}\label{ex:sym+bounds} Suppose that the amplitudes satisfy the symmetry condition~\eqref{eqn:symmetric}, and further that there are uniform bounds on the amplitudes, valencies and energies: \begin{equation}\label{eqn:allbdd} M := \sup_{ i, j \in I } | \alpha_{ i, j } | < \infty, \qquad V := \sup_{i \in I} | \supp( i ) | < \infty \qquad \text{and} \qquad H := \sup_{i \in I} | \eta_i | < \infty. \end{equation} It follows that \[ | \lambda_i | \le | \eta_i | + \hlf V M^2 \qquad \text{and} \qquad \| B_{i, j} \| \le | \lambda_i | + 2 M \le H + \hlf V M^2 + 2 M \] for all $i$,~$j \in I$. Hence, for all $n \in \Z_+$, \[ \| \phi_n( b_i ) \| \le ( V + 1 )^n \bigl( H + \hlf V M^2 + 2 M \bigr)^n \] and so $\alg_\phi = \alg_0$, by Corollary~\ref{cor:bound}. Hence there is a flow on $\alg$ for this generator.
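For instance, nearest-neighbour hopping on the integer lattice fits this scheme: taking $I = \Z$, with $\alpha_{i, j} = \alpha$ whenever $| i - j | = 1$ and $\alpha_{i, j} = 0$ otherwise, and $\eta_i = \eta$ for all $i \in \Z$, the symmetry condition~\eqref{eqn:symmetric} and the bounds~\eqref{eqn:allbdd} hold with $M = | \alpha |$, $V = 2$ and $H = | \eta |$, so that \[ \| \phi_n( b_i ) \| \le 3^n \bigl( | \eta | + | \alpha |^2 + 2 | \alpha | \bigr)^n \qquad \text{for all } n \in \Z_+. \]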
\end{example} \begin{example} We can lift the boundedness assumptions in Example~\ref{ex:sym+bounds} by taking $I$ to be a disjoint union of subsets, \[ I = \bigsqcup_{k \in K} I_k, \] such that there is no transport between any of these subsets, \ie \[ \alpha_{ i, j } \neq 0 \text{ only if there is some } k \in K \text{ such that } i, j \in I_k. \] Assume the symmetry condition~\eqref{eqn:symmetric} once again. Suppose that in each $I_k$ the conditions of~\eqref{eqn:allbdd} are satisfied, but with respect to constants $M_k$, $V_k$ and $H_k$ that depend on $k$. Then, if $i \in I_k$, we get the estimate \[ \| \phi_n( b_i ) \| \le ( V_k + 1 )^n \bigl( H_k + \hlf V_k M_k^2 + 2 M_k \bigr)^n \] and so $\alg_\phi = \alg_0$ once more, but now it is possible that $M = \infty$ \etc. \end{example} \begin{example} To create an example where the graph associated to $I$ has only one component, but where we do not assume $M < \infty$ as in Example~\ref{ex:sym+bounds}, assume once again that $I$ is decomposed into a disjoint union: \[ I = \bigsqcup_{k \in \Z_+} I_k \qquad \text{with } | I_k | < \infty \text{ for all } k \in \Z_+. \] This time assume, as well as the symmetry condition~\eqref{eqn:symmetric}, that $\alpha_{i,j} = 0$ unless there is some $k \in \Z_+$ such that $i \in I_k$ and $j \in I_{k + 1}$, or $j \in I_k$ and $i \in I_{k + 1}$, so that there is transport only between neighbouring levels in $I$. Set \[ a_k = \sup \{ |\alpha_{ i, j }|: i \in I_k, \ j \in I_{k + 1} \} \qquad \text{for all } k \in \Z_+, \] and furthermore assume that the energies are bounded, \ie $H < \infty$. Now if $k \in \N$ and $i \in I_k$ then \begin{align*} \sum_{j \in \supp^+( i )} \| B_{i, j} \| & \le \| B_{i, i} \| + \sum_{j \in I_{k - 1}} \| B_{i, j} \| + \sum_{j \in I_{k + 1}} \| B_{i, j} \| \\ & \le | \lambda_i | + 2 | I_{k - 1} | a_{k - 1} + 2 | I_{k + 1} | a_k, \end{align*} with a similar estimate holding if $i \in I_0$. 
Furthermore, \[ | \lambda_i | \le H + \hlf | I_{k - 1} | a_{k - 1}^2 + \hlf | I_{k + 1} | a_k^2. \] As in Example~\ref{ex:sym+bounds}, if it can be shown that \[ \sum_{j \in \supp^+( i )} \| B_{ i, j } \| \le C \] for some constant $C$ that does not depend on $i$, it follows that $\| \phi_n( b_i ) \| \le C^n$ for each $n \in \Z_+$ and $i \in I$, and so $\alg_\phi = \alg_0$ once more. Here, the previous working shows this will hold if there are constants $a > 0$, $b > 0$ and $p \ge 1$ such that \[ a_k \le \frac{a}{( k + 2 )^p} \quad \text{and} \quad | I_k | \le b ( k + 1 )^p \qquad \text{for all } k \in \Z_+. \] It is clear that this can yield an example where $M = \infty$, \ie there is no upper bound on the valencies. \end{example} \section[Flows on universal C* algebras] {Flows on universal \boldmath{$C^*$}~algebras}\label{sec:uni} \subsection{The non-commutative torus} \begin{definition}\label{dfn:nct} Let $\lambda \in \T$, the set of complex numbers with unit modulus. The \emph{non-commutative torus} is the universal $C^*$~algebra $\alg$ generated by unitaries $U$ and $V$ which satisfy the relation \[ U V = \lambda V U. \] Let $\alg_0$ denote the dense $*$-subalgebra of $\alg$ generated by $U$ and $V$. There is a faithful trace $\tr$ on~$\alg$ such that $\tr( U^m V^n ) = \tfn{m = n = 0}$ for all $m$, $n \in \Z$; the proof of this in \cite[pp.166--168]{Dav96} is valid for all $\lambda$. Consequently $\{U^m V^n : m, n \in \Z\}$ is a basis for~$\alg_0$. \end{definition} \begin{lemma}\label{lem:nct-real} Let $\ini := \ell^2( \Z^2 )$, let \[ ( U_c u )_{m, n} = u_{m + 1, n} \quad \text{and} \quad ( V_c u )_{m, n} = \lambda^m u_{m, n + 1} \qquad \text{for all } u \in \ini \text{ and } m, n \in \Z, \] and let $\alg_c \subseteq \bop{\ini}{}$ be the $C^*$~algebra generated by $U_c$ and $V_c$. There is a $C^*$~isomorphism from~$\alg$ to $\alg_c$ such that $U \mapsto U_c$ and $V \mapsto V_c$.
Moreover, under this map the trace $\tr$ corresponds to the vector state given by $e \in \ini$ such that $e_{m, n} = \tfn{m = n = 0}$ for all $m$, $n \in \Z$. \end{lemma} \begin{proof} Unitarity of $U_c$ and $V_c$ is immediately verified, as is the identity $U_c V_c = \lambda V_c U_c$, so the universality of~$\alg$ gives a surjective $*$-homomorphism from $\alg$ to $\alg_c$. Injectivity is a consequence of the final observation, that $\tr$ corresponds to the vector state given by~$e$. \end{proof} From now on we will identify $\alg$ and $\alg_c$. \begin{definition}\label{dfn:rotates} For each $(\mu, \nu) \in \T^2$, let $\pi_{\mu, \nu}$ be the automorphism of $\alg$ such that \[ \pi_{\mu, \nu}( U^m V^n ) = \mu^m \nu^n U^m V^n \qquad \text{for all } m, n \in \Z; \] the existence of $\pi_{\mu, \nu}$ is an immediate consequence of universality. \end{definition} The proofs of the next two lemmas are a matter of routine algebraic computation. \begin{lemma}\label{lem:newderivs} For all $a$, $b \in \Z$, define maps ${}_a\delta : \alg_0 \to \alg_0$ and $\delta_b : \alg_0 \to \alg_0$ by linear extension of the identities \[ {}_a\delta( U^m V^n ) = m U^{a + m} V^n \quad \text{and} \quad \delta_b( U^m V^n ) = n \lambda^{-b m} U^m V^{b + n} \qquad \text{for all } m, n \in \Z. \] Then ${}_a\delta$ is a $\pi_{1, \lambda^a}$-derivation and $\delta_b$ is a $\pi_{\lambda^{-b}, 1}$-derivation; moreover, their adjoints are such that \[ {}_a\delta^\dagger( U^m V^n ) = -m \lambda^{a n} U^{-a + m} V^n \quad \text{and} \quad \delta_b^\dagger( U^m V^n ) = -n U^m V^{-b + n} \] for all $m$, $n \in \Z$. \end{lemma} \begin{remark} The sufficient condition in Lemma~\ref{lem:newderivs} is also necessary. It is easy to show that if~${}_a\delta$ is a $\pi_{\mu, \nu}$-derivation then $\mu = 1$ and $\nu = \lambda^a$; similarly, if $\delta_b$ is a $\pi_{\mu, \nu}$-derivation then $\mu = \lambda^{-b}$ and $\nu = 1$. 
\end{remark} \begin{lemma}\label{lem:nct} With $\alg_0$ as in Definition~\ref{dfn:nct}, and ${}_a\delta$ and $\delta_b$ as in Lemma~\ref{lem:newderivs}, fix $c_1$,~$c_2 \in \C$ and let \[ \phi : \alg_0 \to \alg_0 \otimes \bop{\C^3}{}; \ x \mapsto \begin{bmatrix} \tau( x ) & \overline{c_1} \, {}_a\delta^\dagger( x ) & \overline{c_2} \, \delta^\dagger_b( x ) \\[1ex] c_1 \, {}_a\delta( x ) & \pi_{1, \lambda^a}( x ) - x & 0 \\[1ex] c_2 \, \delta_b( x ) & 0 & \pi_{\lambda^{-b}, 1}( x ) - x \end{bmatrix}, \] where the map \[ \tau : \alg_0 \to \alg_0; \ U^m V^n \mapsto -\hlf \bigl( | c_1 |^2 m^2 + | c_2 |^2 n^2 \bigr) U^m V^n. \] Then $\tau$ is $*$-linear and $\phi$ is a flow generator. \end{lemma} \begin{lemma}\label{lem:inorout} Let $\phi$ be as in Lemma~\ref{lem:nct}. If $a = b = 0$ then $\alg_\phi = \alg_0$; conversely, if $a \neq 0$ and $c_1 \neq 0$ then $U \notin \alg_\phi$, and if $b \neq 0$ and $c_2 \neq 0$ then $V \notin \alg_\phi$. \end{lemma} \begin{proof} When $a = b = 0$, note that $\phi( U ) = U \otimes m_U$ and $\phi( V ) = V \otimes m_V$, where \[ m_U := \begin{bmatrix} -\hlf | c_1 |^2 & -\overline{c_1} & 0 \\[1ex] c_1 & 0 & 0 \\[1ex] 0 & 0 & 0 \end{bmatrix} \qquad \text{and} \qquad m_V := \begin{bmatrix} -\hlf | c_2 |^2 & 0 & -\overline{c_2} \\[1ex] 0 & 0 & 0 \\[1ex] c_2 & 0 & 0 \end{bmatrix}. \] Hence $\phi_n( U ) = U \otimes m_U^{\otimes n}$ and $\phi_n( V ) = V \otimes m_V^{\otimes n}$, so $U$, $V \in \alg_\phi$, as claimed, and $\alg_\phi = \alg_0$, by Corollary~\ref{cor:bound}. If $a > 0$ then, by induction, one gets that \[ {}_a\delta^n( U ) = \prod_{i = 0}^{n - 1} \bigl( i a + 1 \bigr) U^{a n + 1} \qquad \text{for all } n \in \N. 
\] Let $e = [1 \ 0 \ 0]^T$ and $f = [0 \ 1 \ 0]^T$ be unit vectors in~$\C^3$, and note that \[ \bigl( \id_\ini \otimes \bra{f} \otimes \cdots \otimes \bra{f} \bigr) \phi_n( x ) \bigl( \id_\ini \otimes \ket{e} \otimes \cdots \otimes \ket{e} \bigr) = c_1^n \, {}_a\delta^n( x ) \qquad \text{for all } x \in \alg_0, \] so \[ \| \phi_n( U ) \| \ge | c_1 |^n \prod_{i = 0}^{n - 1} \bigl( i a + 1 \bigr) \ge | c_1 |^n n!. \] If $a < 0$ then, by considering ${}_a\delta^\dagger$ instead, we see that \[ \| \phi_n( U ) \| \ge \| \bigl( \id_\ini \otimes \bra{e} \otimes \cdots \otimes \bra{e} \bigr) \phi_n( U ) \bigl( \id_\ini \otimes \ket{f} \otimes \cdots \otimes \ket{f} \bigr) \| \ge | c_1 |^n n!. \] A similar proof shows that $V \notin \alg_\phi$ when $b \neq 0$. \end{proof} \begin{remark} The lower bounds obtained in Lemma~\ref{lem:inorout} when $a \neq 0$ or $b \neq 0$ show that our techniques do not apply in these cases. The same problem arises if one attempts to use the results of \cite{FaS93} instead. \end{remark} The following theorem gives the existence of a quantum flow used by Goswami, Sahu and Sinha \cite[Theorem~2.1(i)]{GSS05}. \begin{theorem}\label{thm:nctcgs} Let $\alg$ be as in Definition~\ref{dfn:nct} and $\phi$ as in Lemma~\ref{lem:nct} for $a = b = 0$. There exists an adapted family $j$ of unital $*$-homomorphisms from $\alg$ to $\bop{\ini \ctimes \fock}{}$ such that \[ \langle u \evec{f}, j_t( x ) v \evec{g} \rangle = \langle u \evec{f}, ( x v ) \evec{g} \rangle + \int_0^t \langle u \evec{f}, j_s\bigl( \phi^{\wh{f}( s )}_{\wh{g}( s )}( x ) \bigr) v \evec{g} \rangle \std s \] for all $u$,~$v \in \ini$, $f$,~$g \in \elltwo$, $x \in \alg_0$ and $t \ge 0$. \end{theorem} \begin{proof} This follows from Theorem~\ref{thm:extension2}, Lemma~\ref{lem:nct} and Lemma~\ref{lem:inorout}. 
\end{proof} \begin{remark} The cocycle constructed in Theorem~\ref{thm:nctcgs} is essentially a classical object: as noted in \cite[Theorem~2.1]{CGS03}, when $c_1 = c_2 = \I$ one may take \[ j_t( x ) := \beta\bigl( \exp( 2 \pi \I B^1_t ), \exp( 2 \pi \I B^2_t ) \bigr)( x ) \qquad \text{for all } x \in \alg \text{ and } t \ge 0, \] where $\beta : \T^2 \to \Aut( \alg )$ is the natural action of the $2$-torus $\T^2$ on $\alg$, so that \[ \beta( z, w )( U^m V^n ) = z^m w^n U^m V^n \qquad \text{for all } ( z, w ) \in \T^2, \] and the Fock space $\fock$ is identified in the usual manner with the $L^2$~space of the two-dimensional classical Brownian motion $( B^1, B^2 )$. \end{remark} The existence of flows where the generator has non-zero gauge part may also be established. \begin{lemma}\label{lem:gauge} Fix $( \mu, \nu ) \in \T^2$ with $\mu \neq 1$. Let $\alg_0$ be as in Definition~\ref{dfn:nct} and $\pi_{\mu,\nu}$ as in Definition~\ref{dfn:rotates}. There exists a flow generator \[ \phi : \alg_0 \to \alg_0 \otimes \bop{\C^2}{}; \ x \mapsto \begin{bmatrix} \tau( x ) & -\mu \delta( x ) \\[1ex] \delta( x ) & \pi_{\mu, \nu}( x ) - x \end{bmatrix}, \] where the $\pi_{\mu, \nu}$-derivation \begin{equation}\label{eqn:pidact} \delta : \alg_0 \to \alg_0; \ U^m V^n \mapsto \frac{1 - \mu^m \nu^n}{1 - \mu} U^m V^n \end{equation} is such that $\delta^\dagger = -\mu \delta$, and the map \[ \tau := \frac{\mu}{1 - \mu} \delta : \alg_0 \to \alg_0; \ U^m V^n \mapsto \frac{\mu ( 1 - \mu^m \nu^n )}{( 1 - \mu )^2} U^m V^n. \] Furthermore, $U$, $V \in \alg_\phi$ and so $\alg_\phi = \alg_0$. \end{lemma} \begin{proof} Using the basis $\{ U^m V^n: m, n \in \Z \}$, one can readily verify that $\delta$ is a $\pi_{\mu, \nu}$-derivation such that $\delta^\dagger = -\mu \delta$, and hence $\phi$ is a flow generator. 
Since \[ \phi( U ) = U \otimes \begin{bmatrix} \ds\frac{\mu}{1 - \mu} & -\mu \\[2ex] 1 & \mu - 1 \end{bmatrix} \qquad \text{and} \qquad \phi( V ) = V \otimes \frac{1 - \nu}{1 - \mu} \begin{bmatrix} \ds\frac{\mu}{1 - \mu} & -\mu \\[2ex] 1 & \mu - 1 \end{bmatrix}, \] the fact that $\{ U, V \} \subseteq \alg_\phi$ follows as in the proof of Lemma~\ref{lem:inorout}. \end{proof} \begin{remark} It is curious that for $\phi$ as in Lemma~\ref{lem:gauge} we have $\tau = \mu ( 1 - \mu )^{-1} \delta$, and so $\tau$ is first rather than second order. Whether or not $\phi$ or, equivalently, $\delta$ is bounded is an open question; our existence result obviates the need to determine this. \end{remark} \begin{theorem} Let $\alg$ be as in Definition~\ref{dfn:nct} and $\phi$ as in Lemma~\ref{lem:gauge}. There exists an adapted family $j$ of unital $*$-homomorphisms from $\alg$ to $\bop{\ini \ctimes \fock}{}$ such that \[ \langle u \evec{f}, j_t( x ) v \evec{g} \rangle = \langle u \evec{f}, ( x v ) \evec{g} \rangle + \int_0^t \langle u \evec{f}, j_s\bigl( \phi^{\wh{f}( s )}_{\wh{g}( s )}( x ) \bigr) v \evec{g} \rangle \std s \] for all $u$,~$v \in \ini$, $f$,~$g \in \elltwo$, $x \in \alg_0$ and $t \ge 0$. \end{theorem} As noted by Hudson and Robinson~\cite{HuR88}, the following result makes clear why in Theorem~\ref{thm:nctcgs} it is necessary to use two dimensions of noise to obtain a process whose flow generator includes both of the derivations $c_1 \, {}_0\delta$ and $c_2 \, \delta_0$: the linear combination $\delta = c_1 \, {}_0\delta + c_2 \, \delta_0$ can appear on the right-hand side of \eqref{eqn:tau} only when the coefficients $c_1$ and $c_2$ satisfy a particular algebraic relation. \begin{proposition}\label{prp:nogo} Let ${}_0\delta$ and $\delta_0$ be as in Lemma~\ref{lem:newderivs}, and let $\delta = c_1 \, {}_0 \delta + c_2 \, \delta_0$ for complex numbers $c_1$ and $c_2$. 
A necessary and sufficient condition for the existence of a linear map $\tau : \alg_0 \to \alg$ such that \[ \tau( x y ) - \tau( x ) y - x \tau( y ) = \delta^\dagger( x ) \delta( y ) \qquad \text{for all } x, y \in \alg_0 \] is the equality $c_1 \overline{c_2} = \overline{c_1} c_2$. \end{proposition} \begin{proof} This may be established by adapting slightly the proof of \cite[Theorem~2.2]{Rob90}. \end{proof} \subsection{The universal rotation algebra} To avoid the issue of Proposition~\ref{prp:nogo}, Hudson and Robinson work with the universal rotation algebra. \begin{definition}\label{dfn:ura} Let $\alg$ be the \emph{universal rotation algebra} \cite{AnP89}: this is the universal $C^*$~algebra with unitary generators $U$, $V$ and $Z$ satisfying the relations \[ U V = Z V U, \qquad U Z = Z U \quad \text{and} \quad V Z = Z V. \] It may be viewed as the group $C^*$~algebra corresponding to the discrete Heisenberg group \[ \Gamma := \langle u, v, z \mid u v = z v u, \ u z = z u, \ v z = z v \rangle; \] from this perspective, its universal nature is immediately apparent. Letting $\alg_0$ denote the $*$-subalgebra generated by $U$, $V$ and~$Z$, there are skew-adjoint derivations \[ \delta_1 : \alg_0 \to \alg_0; \ U^m V^n Z^p \mapsto m U^m V^n Z^p \quad \text{and} \quad \delta_2 : \alg_0 \to \alg_0; \ U^m V^n Z^p \mapsto n U^m V^n Z^p \] for all $m$, $n$, $p \in \Z$. \end{definition} \begin{remark} For a concrete version of the universal rotation algebra, let $\ini := \ell^2( \Z^3 )$ and define operators $U_c$, $V_c$ and $Z_c$ by setting \[ ( U_c u )_{m, n, p} = u_{m + 1, n, p}, \quad ( V_c u )_{m, n, p} = u_{m, n + 1, m + p} \quad \text{and} \quad ( Z_c u )_{m, n, p} = u_{m, n, p + 1} \] for all $u \in \ini$ and $m$, $n$, $p \in \Z$. It is readily verified that $U_c$, $V_c$ and $Z_c$ are unitary and satisfy the commutation relations as claimed; let $\alg_c$ be the $C^*$~algebra generated by these operators. 
Universality gives a surjective $*$-homomorphism from $\alg$ to $\alg_c$ such that $U \mapsto U_c$, $V \mapsto V_c$ and $Z \mapsto Z_c$, and injectivity may be established in the same manner as for the non-commutative torus: there is a faithful tracial state $\tr$ on $\alg$ such that $\tr( U^m V^n Z^p ) = \tfn{m = n = p = 0}$ and this corresponds to the vector state given by $e \in \ini$ such that $e_{m, n, p} = \tfn{m = n = p = 0}$. \end{remark} \begin{lemma}\label{lem:ura} With $\alg_0$, $\delta_1$ and $\delta_2$ as in Definition~\ref{dfn:ura}, fix $c_1$,~$c_2 \in \C$, let $\delta = c_1 \delta_1 + c_2 \delta_2$ and define the \emph{Bellissard map} \[ \tau : \alg_0 \to \alg_0; \ U^m V^n Z^p \mapsto - \bigl( \hlf | c_1 |^2 m^2 + \hlf | c_2 |^2 n^2 + \overline{c_1} c_2 m n + ( \overline{c_1} c_2 - c_1 \overline{c_2} ) p \bigr) U^m V^n Z^p. \] Then $\tau$ is $*$-linear and such that \[ \tau( x y ) - \tau( x ) y - x \tau( y ) = \delta^\dagger( x ) \delta( y ) \qquad \text{for all } x, y \in \alg_0, \] so the map \[ \phi : \alg_0 \to \alg_0 \otimes \bop{\C^2}{}; \ x \mapsto \begin{bmatrix} \tau( x ) & \delta^\dagger( x ) \\[1ex] \delta( x ) & 0 \end{bmatrix} \] is a flow generator. Furthermore, $U$, $V$, $Z \in \alg_\phi$ and $\alg_\phi = \alg_0$. \end{lemma} \begin{proof} The algebraic statements are readily verified, and a short calculation shows that \[ \phi( U ) = U \otimes m_U, \qquad \phi( V ) = V \otimes m_V \qquad \text{and} \qquad \phi( Z ) = Z \otimes m_Z, \] where \[ m_U = \begin{bmatrix} -\hlf | c_1 |^2 & -\overline{c_1} \\[1ex] c_1 & 0 \end{bmatrix}, \qquad m_V = \begin{bmatrix} -\hlf | c_2 |^2 & -\overline{c_2} \\[1ex] c_2 & 0 \end{bmatrix} \qquad \text{and} \qquad m_Z = \begin{bmatrix} c_1 \overline{c_2} - \overline{c_1} c_2 & 0 \\[1ex] 0 & 0 \end{bmatrix}.
\] Hence \[ \phi_n( U ) = U \otimes m_U^{\otimes n}, \qquad \phi_n( V ) = V \otimes m_V^{\otimes n} \qquad \text{and} \qquad \phi_n( Z ) = Z \otimes m_Z^{\otimes n} \] for all $n \in \Z_+$, so $U$, $V$, $Z \in \alg_\phi$ and $\alg_\phi = \alg_0$, by Corollary~\ref{cor:bound}. \end{proof} The following theorem is an algebraic version of the result presented by Hudson and Robinson in \cite[Section~4]{HuR88}. \begin{theorem} Let $\alg$ be as in Definition~\ref{dfn:ura} and $\phi$ as in Lemma~\ref{lem:ura}. There exists an adapted family $j$ of unital $*$-homomorphisms from $\alg$ to $\bop{\ini \ctimes \fock}{}$ such that \[ \langle u \evec{f}, j_t( x ) v \evec{g} \rangle = \langle u \evec{f}, ( x v ) \evec{g} \rangle + \int_0^t \langle u \evec{f}, j_s\bigl( \phi^{\wh{f}( s )}_{\wh{g}( s )}( x ) \bigr) v \evec{g} \rangle \std s \] for all $u$,~$v \in \ini$, $f$,~$g \in \elltwo$, $x \in \alg_0$ and $t \ge 0$. \hfill$\Box$ \end{theorem} \section*{Acknowledgements} ACRB thanks Professors Kalyan Sinha and Tirthankar Bhattacharyya for hospitality at the Indian Institute of Science, Bangalore, and in Munnar, Kerala; part of this work was completed during a visit to India supported by the UKIERI research network \emph{Quantum Probability, Noncommutative Geometry and Quantum Information}. Thanks are also due to Professor Martin Lindsay for helpful discussions. Funding from Lancaster University's Research Support Office and Faculty of Science and Technology is gratefully acknowledged. SJW thanks Professor Rolando Rebolledo for a very pleasant visit to Santiago in 2006 where thoughts about the quantum exclusion process were first encouraged. Both authors are indebted to the two anonymous referees and the associate editor for their constructive comments on an earlier draft of this paper.
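Lemma~\ref{lem:ura} admits a quick numerical spot-check: $\tau$ and $\delta$ both act diagonally on the monomial basis $U^m V^n Z^p$, and since $V^n U^{m'} = Z^{-n m'} U^{m'} V^n$ the product of two monomials is again a monomial, so the identity $\tau(xy) - \tau(x)y - x\tau(y) = \delta^\dagger(x)\delta(y)$ reduces to a scalar equation in the exponents. The sketch below (the encoding of monomials as integer triples, and all names, are ours) verifies that scalar equation for random coefficients:

```python
import random

random.seed(0)

def tau_scalar(m, n, p, c1, c2):
    # Scalar by which the Bellissard map tau multiplies the monomial U^m V^n Z^p.
    return -(0.5 * abs(c1)**2 * m**2 + 0.5 * abs(c2)**2 * n**2
             + c1.conjugate() * c2 * m * n
             + (c1.conjugate() * c2 - c1 * c2.conjugate()) * p)

def delta_scalar(m, n, c1, c2):
    # Scalar for delta = c1*delta_1 + c2*delta_2 on U^m V^n Z^p.
    return c1 * m + c2 * n

for _ in range(500):
    c1 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    c2 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    m, n, p, m2, n2, p2 = (random.randint(-4, 4) for _ in range(6))
    # Product rule: (U^m V^n Z^p)(U^m2 V^n2 Z^p2) = U^(m+m2) V^(n+n2) Z^(p+p2-n*m2),
    # because V^n U^m2 = Z^(-n*m2) U^m2 V^n.
    M, N, P = m + m2, n + n2, p + p2 - n * m2
    lhs = (tau_scalar(M, N, P, c1, c2) - tau_scalar(m, n, p, c1, c2)
           - tau_scalar(m2, n2, p2, c1, c2))
    # delta_1, delta_2 are skew-adjoint, so delta^dagger multiplies a monomial
    # by -(conj(c1)*m + conj(c2)*n).
    rhs = -delta_scalar(m, n, c1, c2).conjugate() * delta_scalar(m2, n2, c1, c2)
    assert abs(lhs - rhs) < 1e-9
```

With the exponent bookkeeping in place, the check is a one-line complex-arithmetic identity; it passes for every random sample.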
{"config": "arxiv", "file": "1209.3639.tex"}
TITLE: Annihilator and Projective Dimension QUESTION [0 upvotes]: I was reading the book A Course in Ring Theory by Passman, and in it is the following lemma; after this lemma there's an example which I don't quite understand. The main thing that I don't understand is the structure of the ring $R$. What is meant by $K[x]$, and why is $R=K\dot{+}K\,\overline{x}$? I hope somebody can clarify these for me. REPLY [0 votes]: $K[x]$ means the polynomial ring over $K$. You can prove $K[x]/(x^2)$ is isomorphic to the direct sum $K\dot{+}K\bar{x}$, where $\bar{x}^2=0$, by thinking about the homomorphism $$\phi:K[x]\to K\dot{+}K\bar{x}$$ sending $p(x)$ to $p(\bar{x})$, which is also just the remainder of $p(x)$ when divided by $x^2$ (except with $x$ replaced by $\bar{x}$). The kernel of this map is the ideal $(x^2)$, and $\phi$ is surjective, so $K[x]/(x^2)$ is isomorphic to $K\dot{+}K\bar{x}$ by the First Isomorphism Theorem.
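To make the answer concrete, one can model $K\dot{+}K\bar{x}$ computationally as pairs $(a,b)\leftrightarrow a+b\bar{x}$ with the multiplication forced by $\bar{x}^2=0$, and check that the truncation-mod-$x^2$ map $\phi$ is multiplicative. A small sketch (integer coefficients stand in for a general field $K$; all names are ours):

```python
def mul(u, v):
    # (a + b*xbar)(c + d*xbar) = ac + (ad + bc)*xbar, since xbar^2 = 0.
    (a, b), (c, d) = u, v
    return (a * c, a * d + b * c)

def polymul(p, q):
    # Ordinary polynomial multiplication; p[i] is the coefficient of x^i.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def phi(coeffs):
    # Quotient map K[x] -> K + K*xbar: keep the remainder of p(x) mod x^2.
    a = coeffs[0] if len(coeffs) > 0 else 0
    b = coeffs[1] if len(coeffs) > 1 else 0
    return (a, b)

xbar = (0, 1)
assert mul(xbar, xbar) == (0, 0)                  # xbar^2 = 0

p, q = [1, 2, 3], [4, 5, 6]                       # 1 + 2x + 3x^2, etc.
assert phi(polymul(p, q)) == mul(phi(p), phi(q))  # phi is multiplicative
assert phi(polymul([0, 0, 1], q)) == (0, 0)       # x^2 * q(x) lies in ker(phi)
```

The last two assertions are exactly the facts used in the answer: $\phi$ is a ring homomorphism and its kernel contains $(x^2)$.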
{"set_name": "stack_exchange", "score": 0, "question_id": 716336}
\begin{document} \title[{\scriptsize On a new class of series identities}]{On a new class of series identities} \author{Arjun K. Rathie } \address[Arjun K. Rathie]{Department of Mathematics, Vedant College of Engineering \& Technology (Rajasthan Technical University), Village: Tulsi, Post: Jakhamund, Dist. Bundi, Rajasthan State, India,} \email{arjunkumarrathie@gmail.com} \maketitle \begin{abstract}The aim of this paper is to provide a new class of series identities in the form of four general results. The results are established with the help of generalizations of the classical Kummer's summation theorem obtained earlier by Rakha and Rathie. Results obtained earlier by Srivastava, Bailey and Rathie et al. follow as special cases of our main findings. \\ \textbf{2010 Mathematics Subject Classifications :} Primary : 33B20, 33C20, ~~ Secondary : 33B15, 33C05 \vskip 1pt \noindent \textbf{Keywords:} Generalized hypergeometric function, Kummer's summation theorem, product formulas, generalization, Double series \\ \end{abstract} \section{Introduction and Results Required} We start with the following two very interesting results involving products of generalized hypergeometric series due to Bailey[1] viz.
\begin{equation} \begin{aligned} {}_0F_1& \left[\begin{array}{c} - \\ \rho \end{array}; x\right] \times {}_0F_1\left[\begin{array}{c} - \\ \rho \end{array}; -x\right]\\ &= {}_0F_3\left[\begin{array}{c} - \\ \rho, \frac{1}{2}\rho, \frac{1}{2}\rho +\frac{1}{2} \end{array}; -\frac{x^2}{4}\right] \end{aligned} \end{equation} and \begin{equation} \begin{aligned} {}_0F_1& \left[\begin{array}{c} - \\ \rho \end{array}; x\right] \times {}_0F_1\left[\begin{array}{c} - \\ 2-\rho \end{array}; -x\right]\\ &= {}_0F_3\left[\begin{array}{c} - \\ \frac{1}{2}, \frac{1}{2}\rho+\frac{1}{2}, \frac{3}{2}-\frac{1}{2}\rho \end{array}; -\frac{x^2}{4}\right]\\ & + \frac{2(1-\rho)x}{\rho(2-\rho)} ~{}_0F_3\left[\begin{array}{c} - \\ \frac{3}{2}, \frac{1}{2}\rho+ 1, 2-\frac{1}{2}\rho \end{array}; -\frac{x^2}{4}\right] \end{aligned} \end{equation} Bailey[1] established these results with the help of the following classical Kummer's summation theorem[2] viz. \begin{equation} {}_2F_1\left[\begin{array}{c}a, ~b \\ 1+a-b \end{array}; -1\right] = \frac{\Gamma\left(1+\frac{1}{2}a\right)~\Gamma\left(1+a-b\right)}{\Gamma\left(1+a\right)~\Gamma\left(1+\frac{1}{2}a - b\right)} \end{equation} Very recently, Rathie et al.[6] have obtained explicit expressions of\\ (i)$\displaystyle ~{}_0F_1 \left[\begin{array}{c} - \\ \rho \end{array}; x\right] \times {}_0F_1\left[\begin{array}{c} - \\ \rho+i \end{array}; -x\right]$\\ (ii)~ $\displaystyle {}_0F_1 \left[\begin{array}{c} - \\ \rho \end{array}; x\right] \times {}_0F_1\left[\begin{array}{c} - \\ \rho-i \end{array}; -x\right]$\\ (iii) ~$\displaystyle {}_0F_1 \left[\begin{array}{c} - \\ \rho \end{array}; x\right] \times {}_0F_1\left[\begin{array}{c} - \\ 2-\rho+i \end{array}; -x\right]$\\ (iv) ~$\displaystyle {}_0F_1 \left[\begin{array}{c} - \\ \rho \end{array}; x\right] \times {}_0F_1\left[\begin{array}{c} - \\ 2-\rho-i \end{array}; -x\right]$\\ in the most general form for any $i \in \mathbb{Z}_0$ and provided the natural generalizations of the results (1.1) 
and (1.2). The aim of this paper is to obtain explicit expressions of \\ (a) ~$\displaystyle \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(\rho+i)_n ~m!~n!}$\\ (b) ~$\displaystyle \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(\rho-i)_n ~m!~n!}$\\ (c) ~$\displaystyle \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(2-\rho+i)_n ~m!~n!}$\\ (d) ~$\displaystyle \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(2-\rho-i)_n ~m!~n!}$\\ in the most general form for any $i \in \mathbb{Z}_0$. Here $\{\Delta_m\}$ is a sequence of arbitrary complex numbers. The results are derived with the help of the following generalizations of Kummer's summation theorem obtained earlier by Rakha and Rathie[4] for $i \in \mathbb{Z}_0$ viz. \begin{align}\label{kst1} {}_{2}F_{1} \left[ \begin{matrix} a&b\\1+a-b+i\end{matrix};-1\right] &=\frac{2^{-a}\Gamma\left(\frac{1}{2}\right)\Gamma(b-i)\Gamma(1+a-b+i)}{\Gamma(b)\Gamma\left(\frac{1}{2}a-b+\frac{1}{2}i+\frac{1}{2}\right)\Gamma\left(\frac{1}{2}a-b+\frac{1}{2}i+1\right)}\\ &\times \sum_{r=0}^{i}{i\choose r} (-1)^{r} \frac{\Gamma\left(\frac{1}{2}a-b+\frac{1}{2}i+\frac{1}{2}r+\frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2}a-\frac{1}{2}i+\frac{1}{2}r+\frac{1}{2}\right)}\notag \end{align} and \begin{align}\label{kst2} {}_{2}F_{1} \left[ \begin{matrix} a&b\\1+a-b-i\end{matrix};-1\right] &=\frac{2^{-a}\Gamma\left(\frac{1}{2}\right)\Gamma(1+a-b-i)}{\Gamma\left(\frac{1}{2}a-b-\frac{1}{2}i+\frac{1}{2}\right)\Gamma\left(\frac{1}{2}a-b-\frac{1}{2}i+1\right)}\\ &\times \sum_{r=0}^{i}{i\choose r}\frac{\Gamma\left(\frac{1}{2}a-b-\frac{1}{2}i+\frac{1}{2}r+\frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2}a-\frac{1}{2}i+\frac{1}{2}r+\frac{1}{2}\right)}\notag \end{align} Results obtained earlier by Srivastava[8], Bailey[1] and Rathie et al.[6] follow as special cases of our main findings.
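As a quick numerical sanity check, Bailey's product formulas (1.1) and (1.2) can be verified by truncating the series; the sketch below (all function and variable names are ours) does this in pure Python:

```python
import math

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1).
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def hyp0f1(b, x, terms=40):
    # Truncated series for 0F1(; b; x).
    return sum(x**k / (poch(b, k) * math.factorial(k)) for k in range(terms))

def hyp0f3(b1, b2, b3, x, terms=40):
    # Truncated series for 0F3(; b1, b2, b3; x).
    return sum(x**k / (poch(b1, k) * poch(b2, k) * poch(b3, k) * math.factorial(k))
               for k in range(terms))

rho, x = 1.7, 0.9

# (1.1): 0F1(rho; x) * 0F1(rho; -x) = 0F3(rho, rho/2, rho/2 + 1/2; -x^2/4)
lhs1 = hyp0f1(rho, x) * hyp0f1(rho, -x)
rhs1 = hyp0f3(rho, rho / 2, rho / 2 + 0.5, -x**2 / 4)
assert abs(lhs1 - rhs1) < 1e-10

# (1.2): 0F1(rho; x) * 0F1(2 - rho; -x), expressed via two 0F3 series
lhs2 = hyp0f1(rho, x) * hyp0f1(2 - rho, -x)
rhs2 = (hyp0f3(0.5, rho / 2 + 0.5, 1.5 - rho / 2, -x**2 / 4)
        + 2 * (1 - rho) * x / (rho * (2 - rho))
        * hyp0f3(1.5, rho / 2 + 1, 2 - rho / 2, -x**2 / 4))
assert abs(lhs2 - rhs2) < 1e-10
```

Both sides agree to floating-point accuracy for these (arbitrarily chosen) parameter values.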
\section{Main Results} The results to be established in this paper are given in the following theorem. \begin{theorem} Let $\{\Delta_n\}$ be a bounded sequence of complex numbers. Then for $i \in \mathbb{Z}_0$, the following general results hold true: \begin{equation} \begin{aligned} \sum_{m=0}^{\infty}& \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(\rho+i)_n ~m!~n!}\\ &= \sum_{m=0}^{\infty} \frac{\Delta_m ~x^m}{(\rho)_m ~ m!} ~\frac{2^m~\Gamma\left(\h\right)\Gamma(\rho+i)\Gamma(1-\rho-m-i)}{\Gamma(1-\rho-m)~\Gamma\left( \rho+\h i+\h m - \h\right) \Gamma\left(\rho+\h i + \h m\right)}\\ & \times\left( \sum_{r=0}^{i} (-1)^r \binom{i}{r} \frac{\Gamma\left(\rho+\h m + \h i + \h r-\h\right)}{\Gamma\left( \h r- \h i - \h m + \h \right)}\right)\\ \end{aligned} \end{equation} \begin{equation} \begin{aligned} \sum_{m=0}^{\infty}& \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(\rho-i)_n ~m!~n!}\\ &= \sum_{m=0}^{\infty} \frac{\Delta_m ~x^m}{(\rho)_m ~ m!} ~\frac{2^m~\Gamma\left(\h\right)\Gamma(\rho-i)}{\Gamma\left( \rho-\h i+\h m - \h\right) \Gamma\left(\rho-\h i + \h m\right)}\\ & \times\left( \sum_{r=0}^{i} \binom{i}{r} \frac{\Gamma\left(\rho+\h m - \h i + \h r-\h\right)}{\Gamma\left( \h r- \h i - \h m + \h \right)}\right) \end{aligned} \end{equation} \begin{equation} \begin{aligned} \sum_{m=0}^{\infty}& \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(2-\rho+i)_n ~m!~n!}\\ &= \sum_{m=0}^{\infty} \frac{\Delta_m ~x^m}{(\rho)_m ~ m!} ~\frac{2^{\rho-1+m}~\Gamma\left(\h\right)\Gamma(2-\rho+i)\Gamma(-m-i)}{\Gamma(-m)~\Gamma\left( \h m-\h \rho +\h i+1\right) \Gamma\left(\h m - \h \rho+ \h i +\frac{3}{2}\right)}\\ & \times\left( \sum_{r=0}^{i} (-1)^r \binom{i}{r} \frac{\Gamma\left(\h m -\h \rho+ \h i + \h r+1\right)}{\Gamma\left( \h r -\h \rho - \h m -\h i+ 1 \right)}\right)\\ \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \sum_{m=0}^{\infty}& \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~
x^{m+n}}{(\rho)_m~(2-\rho-i)_n ~m!~n!}\\ &= \sum_{m=0}^{\infty} \frac{\Delta_m ~x^m}{(\rho)_m ~ m!} ~\frac{2^{\rho-1+m}~\Gamma\left(\h\right)\Gamma(2-\rho-i)}{\Gamma\left( \h m-\h \rho -\h i+1\right) \Gamma\left(\h m - \h \rho- \h i+\frac{3}{2} \right)}\\ & \times\left( \sum_{r=0}^{i} \binom{i}{r} \frac{\Gamma\left(\h m -\h \rho- \h i + \h r+1\right)}{\Gamma\left( \h r -\h \rho - \h m -\h i+ 1 \right)}\right)\\ \end{aligned} \end{equation} \end{theorem} \subsection*{Derivations :} In order to establish the first result (2.1) asserted in the theorem, we proceed as follows. Denoting the left hand side of (2.1) by $S$, we have $$S=\sum_{m=0}^{\infty} \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(\rho+i)_n ~m!~n!}$$ Replacing $m$ by $m-n$ and using the result[3, Equ.1, p.56] viz. $$\sum_{n=0}^{\infty} \sum_{k=0}^{\infty} A(k, n) = \sum_{n=0}^{\infty} \sum_{k=0}^{n}A(k, n-k)$$ we have $$S=\sum_{m=0}^{\infty} \sum_{n=0}^{m} (-1)^n \frac{\Delta_{m}~ x^{m}}{(\rho)_{m-n}~(\rho+i)_n ~(m-n)!~n!}$$ Using elementary identities[3, p.58] $$(\alpha)_{m-n}=\frac{(-1)^n~(\alpha)_m}{(1-\alpha-m)_n}$$ and $$(m-n)!=\frac{(-1)^n~m!}{(-m)_n}$$ we have, after some algebra $$ S=\sum_{m=0}^{\infty}\frac{\Delta_{m}~ x^{m}}{(\rho)_{m}~m!}~\sum_{n=0}^{m}(-1)^n ~\frac{(-m)_n(1-\rho-m)_n}{(\rho+i)_n n!}$$ Summing up the inner series, we have $$ S= \sum_{m=0}^{\infty} \frac{\Delta_{m}~ x^{m}}{(\rho)_{m}~m!} ~{}_2F_1\left[ \begin{array}{c}-m, 1-\rho-m\\ \rho+i \end{array}; -1\right]$$ We now observe that the series ${}_2F_1$ can be evaluated with the help of the known result (1.4) and we easily arrive at the right hand side of (2.1). This completes the proof of the first result (2.1) asserted in the theorem. In exactly the same manner, the results (2.2) to (2.4) can be established. So we prefer to omit the details. \section{Corollaries} In this section, we shall mention some of the interesting known as well as new results of our main findings. 
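As a sanity check on the manipulations used in the derivation above, the finite form of the double-series rearrangement and the two Pochhammer identities can be verified directly for small indices (a sketch; all names are ours):

```python
import math

def poch(a, k):
    # Pochhammer symbol (a)_k.
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

# Finite form of sum_{n,k>=0} A(k, n) = sum_n sum_{k<=n} A(k, n-k):
# both sides enumerate the triangle {(k, n') : k + n' <= N} exactly once.
A = lambda k, n: (k + 1) ** 2 * (n + 3)          # an arbitrary test summand
N = 12
direct   = sum(A(k, n) for k in range(N + 1) for n in range(N + 1) if k + n <= N)
diagonal = sum(A(k, n - k) for n in range(N + 1) for k in range(n + 1))
assert direct == diagonal

# (alpha)_{m-n} = (-1)^n (alpha)_m / (1 - alpha - m)_n   for 0 <= n <= m
alpha = 2.3
for m in range(8):
    for n in range(m + 1):
        lhs = poch(alpha, m - n)
        rhs = (-1)**n * poch(alpha, m) / poch(1 - alpha - m, n)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))

# (m-n)! = (-1)^n m! / (-m)_n   for 0 <= n <= m
for m in range(8):
    for n in range(m + 1):
        rhs = (-1)**n * math.factorial(m) / poch(-m, n)
        assert abs(math.factorial(m - n) - rhs) < 1e-9
```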
\\ (a) In the result (2.1) or (2.2), if we take $i=0$, we have \begin{equation} \begin{aligned} \sum_{m=0}^{\infty}& \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(\rho)_n ~m!~n!}\\ &= \sum_{m=0}^{\infty} \frac{\Delta_{2m}~ (-x^2)^{m}}{(\rho)_m~(\h \rho)_m (\h \rho + \h)_m ~2^{2m}~m!} \end{aligned} \end{equation} This is a known result due to Srivastava[8]. Further setting $\Delta_m=1 ~(m \in \mathbb{N}_0)$, we at once get the result (1.1) due to Bailey. \\ (b) In the result (2.3) or (2.4), if we take $i=0$, we have \begin{equation} \begin{aligned} \sum_{m=0}^{\infty}& \sum_{n=0}^{\infty} (-1)^n \frac{\Delta_{m+n}~ x^{m+n}}{(\rho)_m~(2-\rho)_n ~m!~n!}\\ &= \sum_{m=0}^{\infty} \frac{\Delta_{2m}~ (-x^2)^{m}}{\left(\h\right)_m \left(\h \rho + \h\right)_m~\left(\frac{3}{2}-\h \rho\right)_m ~2^{2m}~m!}\\ & + \frac{2(1-\rho)x}{\rho(2-\rho)}~\sum_{m=0}^{\infty} \frac{\Delta_{2m+1}~ (-x^2)^{m}}{\left(\frac{3}{2}\right)_m \left(\h \rho + 1\right)_m~\left(2-\h \rho\right)_m ~2^{2m}~m!} \end{aligned} \end{equation} which appears to be a new result. Further setting $\Delta_m=1 ~(m \in \mathbb{N}_0)$, we at once get another result (1.2) due to Bailey. \\ (c) In (2.1), if we take $i=0, 1, \cdots, 9$, we get known results recorded in [7].\\ (d) In (2.2), if we take $i=0,1, \cdots, 9$, we get known results recorded in [7].\\ (e) In (2.1) to (2.4), if we set $\Delta_m=1 ~(m \in \mathbb{N}_0)$, we get known results obtained very recently by Rathie et al.[6]. We conclude the paper by remarking that the details about the results presented in this paper, together with a large number of special cases (known and new), are given in [5].
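The identity (3.1) is also easy to test numerically for a non-constant bounded sequence; the sketch below (the particular choice $\Delta_m = 1/(m+1)$, and all names, are ours) compares truncations of the two sides:

```python
import math

def poch(a, k):
    # Pochhammer symbol (a)_k.
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

rho, x = 1.4, 0.8
delta = lambda m: 1.0 / (m + 1)   # an arbitrary bounded sequence Delta_m

N = 30
# Left-hand side of (3.1): the double series, truncated.
double_sum = sum((-1)**n * delta(m + n) * x**(m + n)
                 / (poch(rho, m) * poch(rho, n)
                    * math.factorial(m) * math.factorial(n))
                 for m in range(N) for n in range(N))
# Right-hand side of (3.1): the single series, truncated.
single_sum = sum(delta(2 * m) * (-(x**2))**m
                 / (poch(rho, m) * poch(rho / 2, m) * poch(rho / 2 + 0.5, m)
                    * 4**m * math.factorial(m))
                 for m in range(N))
assert abs(double_sum - single_sum) < 1e-10
```

The agreement, with a $\Delta_m$ that is genuinely non-constant, illustrates that the reduction to the even-indexed terms $\Delta_{2m}$ is a property of the inner coefficient sum, not of any special choice of sequence.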
{"config": "arxiv", "file": "2101.08864.tex"}
TITLE: Mathematical Induction for $4 + 10 + 16 +…+ (6n−2) = n(3n +1)$ QUESTION [0 upvotes]: Use mathematical induction to prove: $$4 + 10 + 16 +…+ (6n−2) = n(3n +1)$$ I'm having a hard time understanding the induction process. Can someone please explain this to me? REPLY [2 votes]: Since you are asking about the inductive process in general, let's use a classic example. Suppose I asked you to sum the numbers from $1$ to $100$ and then someone standing next to you immediately said "$5050$". If you didn't know there was a simple formula to do this you may be impressed. If I told you the formula was $$s=\frac{n(n+1)}{2}$$ where $s$ is the sum, how would you verify that the formula worked? You would try it for different numbers. You would start with $1$, then $2$ then maybe you would try $10$ or $20$ and others until you were confident that the formula worked. But how would you prove that it works for every number? You cannot try an infinite set of numbers because you don't have an infinite amount of time. This is where induction comes in. The idea is this. I can establish that the formula works for a single number. That is what we call the base case. We usually use $0$ or $1$. That's usually pretty easy. Now, for the ingenious part. We can't try every number so we say, suppose it is true for some number. We call that $k$. We then prove that if the formula works for $k$, it must work for $k+1$. That's the trick. We assume nothing about $k$ except that it is a number. By proving that if the formula works for $k$ it works for $k+1$ we establish that whenever we've demonstrated the truth of the formula for say $10$, it must be true for $11$. Now we use that result with our base case of $1$. Since the formula works for $1$, it works for $2$. Since it works for $2$ it must work for $3$ and so on. In that way we have proved that the formula is true for all numbers. Let's do it for this formula. 
The sum of all numbers from $1$ to $1$ is $1$ and $\frac{1(1+1)}{2}=\frac{2}{2}=1$ so the formula works for $n=1$. That's the base case. For the inductive part, we assume the formula is true for $k$. That means we are assuming that $1+2+...+k=\frac{k(k+1)}{2}$. Therefore, $1+2+...+k+(k+1)= \frac{k(k+1)}{2}+k+1=\frac{k(k+1)}{2}+\frac{2(k+1)}{2}=\frac{(k+1)(k+2)}{2}$ and $$\frac{(k+1)(k+2)}{2}= \frac{(k+1)((k+1)+1)}{2}$$ which if you look at carefully you will see is exactly what we would have gotten by plugging $k+1$ into our formula. Therefore, we conclude that the formula works for all numbers. REPLY [1 votes]: There is a formal proof that induction works but it is easier to just use your intuition. The first step is to check if the statement is true for a starting value. Nothing hard so far. Next we prove that if the statement is true for $k$ then it is true for $k+1$. What this says is that if you have a value for which the statement is true, then the next value after it is also true. Since we checked at the beginning that the statement is true for a starting value, from what we just showed it is true for the next value. Using the same logic it is true for the value after that too, and so on. Let's use your exercise as an example. We check if the statement is true for $n=1$. $(6*1-2)=1(3*1+1)$ $4=4$ So the statement is true for $n=1$. Now the next step is to assume that the statement is true for $n=k$: $4 + 10 + 16 +…+ (6k−2) = k(3k +1)$ Now using this assumption we have to prove that the statement is true for $n=k+1$. To prove that we have to simply add the term $(6(k+1)-2)$ to both sides of the assumption, since we know it is true. $4 + 10 + 16 +…+ (6k−2)+(6(k+1)-2) = k(3k +1)+(6(k+1)-2)$ Let's look at the right side now: $k(3k +1)+(6(k+1)-2)=3k^2+7k+4=(k+1)(3k+4)=(k+1)(3(k+1)+1)$ Since this is the form we wanted to acquire, we have proven the statement is true for all natural numbers.
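Both answers can be backed by a brute-force check: verify the closed form directly for many values of $n$, and verify the algebraic identity behind the inductive step. A quick sketch:

```python
def lhs(n):
    # The sum 4 + 10 + 16 + ... + (6n - 2), term by term.
    return sum(6 * k - 2 for k in range(1, n + 1))

def rhs(n):
    # The claimed closed form n(3n + 1).
    return n * (3 * n + 1)

# Base case, plus many more values.
assert lhs(1) == rhs(1) == 4
for n in range(1, 201):
    assert lhs(n) == rhs(n)

# The inductive step as an identity:
# k(3k + 1) + (6(k + 1) - 2) = (k + 1)(3(k + 1) + 1).
for k in range(0, 200):
    assert rhs(k) + 6 * (k + 1) - 2 == rhs(k + 1)
```

Of course this is evidence, not proof; only the induction argument covers all natural numbers at once.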
{"set_name": "stack_exchange", "score": 0, "question_id": 1494808}
TITLE: A problem about compact order topological space QUESTION [3 upvotes]: This problem is from terrytao.wordpress.com/books/analysis-ii, Errata to the second edition (hardcover), p. 390, Exercise 13.5.8. I have difficulty proving this; I'd appreciate any help! Show that there exists an uncountable well-ordered set $\omega_1+1$ that has a maximal element $\infty$, and such that the initial segments $\{ x \in \omega_1+1: x < y \}$ are countable for all $y \in \omega_1+1 \backslash \{\infty\}$. (Hint: Well-order the real numbers, take the union of all the countable initial segments, and then adjoin a maximal element $\infty$.) If we give $\omega_1+1$ the order topology, show that $\omega_1+1$ is compact; however, show that not every sequence has a convergent subsequence. REPLY [3 votes]: The set $\omega_1+1$ constructed in the exercise is order-isomorphic to the ordinal of the same name. To show that $\omega_1+1$ is compact, you need to use the well-ordering principle. In any open cover of $\omega_1+1$, $\infty$ (usually called $\omega_1$) must be covered by some open set. This must contain some open interval covering $\infty$, with lower endpoint $\alpha$, say. Then move down to $\alpha$ and look at the open set covering it, and so on. By the well-ordering principle, this process must terminate in a finite number of steps. Collecting the open sets you came across then gives you a finite subcover of the original open cover. The other part of the exercise is incorrect: $\omega_1+1$ is in fact sequentially compact (meaning that any infinite sequence in $\omega_1+1$ has a convergent subsequence). To see this, take an infinite sequence in $\omega_1+1$. By the well-ordering principle, this sequence has a smallest member, $x_0$, say. The members of the sequence coming after $x_0$ then also have a smallest member, $x_1$, which must be at least as large as $x_0$. Similarly, the members of the sequence after $x_1$ have a smallest member $x_2\ge x_1$, and so on.
This gives you an infinite nondecreasing subsequence $x_0\le x_1\le x_2\le \cdots$. Now, let $S:=\{z\mid z\ge x_i \text{ for all } i\}$, which is a nonempty set as it contains $\infty$. Then the sequence $(x_i)$ converges to the minimum element of $S$. Although $\omega_1+1$ is not such a space, there are topological spaces which are compact but not sequentially compact. An example of such a space is $2^{2^\omega}$, where as usual $2=\{0,1\}$ and $\omega=\{0,1,2,3,\dots\}$, and the space is given the product topology of uncountably many copies of the discrete topology on $\{0,1\}$. This space is compact by Tychonoff's theorem but it's easy to prove that the evaluation sequence $$ e_0, e_1, e_2, \dots, \qquad \qquad e_i(q)=q(i) \ \ \text{for all } q\in 2^{\omega} \text{ and } i\in\omega, $$ has no convergent subsequence: given any subsequence $e_{i_0}, e_{i_1}, e_{i_2},\dots$, choose $q\in 2^{\omega}$ with $q(i_k)=k \bmod 2$; then $e_{i_k}(q)$ alternates between $0$ and $1$, so the subsequence cannot converge in the product (i.e. pointwise) topology.
{"set_name": "stack_exchange", "score": 3, "question_id": 330886}
\begin{document} \maketitle \begin{abstract} We define rigorously operators of the form $f(\partial_t)$, in which $f$ is an analytic function on a simply connected domain. Our formalism is based on the Borel transform on entire functions of exponential type. We study existence and regularity of real-valued solutions for the nonlocal-in-time equation \begin{equation*} f(\partial_t) \phi = J(t) \; \; , \quad t\in \mathbb{R}\; , \end{equation*} and we find its most general solution as a restriction to $\mathbb{R}$ of an entire function of exponential type. As an important special case, we solve explicitly the linear nonlocal zeta field equation \begin{equation*} \zeta(\partial_t^2+h)\phi = J(t)\; , \end{equation*} in which $h$ is a real parameter, $\zeta$ is the Riemann zeta function, and $J$ is an entire function of exponential type. We also analyse the case in which $J$ is a more general analytic function (subject to some weak technical assumptions). This case turns out to be rather delicate: we need to re-interpret the symbol $\zeta(\partial_t^2+h)$ and to leave the class of functions of exponential type. We prove that in this case the zeta-nonlocal equation above admits an analytic solution on a Runge domain determined by $J$. The linear zeta field equation is a linear version of a field model depending on the Riemann zeta function arising from $p$-adic string theory. \end{abstract} \section{Introduction}\label{sec:Introduction} \setcounter{equation}{0} In this paper a {\em nonlocal operator} is an expression of the form $f(\partial_t)$, in which $f$ is an analytic function, and a {\em nonlocal equation} is an equation in which a nonlocal operator appears. It is of course well-known how to define the action of $f(\partial_t)$ on a given class of functions if the ``symbol'' $f$ is a polynomial.
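Indeed, if $f(s)=\sum_{k=0}^{N}a_ks^k$ is a polynomial, one simply sets $$f(\partial_t)\phi:=\sum_{k=0}^{N}a_k\,\partial_t^k\phi\; ;$$ for instance, for $f(s)=s^2+m^2$ we recover the familiar local operator $f(\partial_t)\phi=\phi''+m^2\phi$.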
It is not obvious how to extend this definition to more general symbols $f$: for instance, $f$ may be beyond the reach of classical tools used in the study of pseudo-differential operators ({\em e.g.}, the derivatives of $f$ may not satisfy appropriate bounds, see \cite{H,DubinskyBook,U}). We provide a rigorous definition of $f(\partial_t)$ for a large class of analytic functions $f$ ---which includes the Riemann zeta function $\zeta$--- in the main body of this work. The theory presented herein includes symbols which cannot be considered via Laplace transform (as in our previous paper \cite{CPR_Laplace}). Our main example of such a symbol is $\zeta(\partial_t^2+h)$, see \cite[Section 6]{CPR_Laplace} and Section 2 below. \medskip We frame our discussion within classical analytic function theory. Interestingly, nonlocal operators appear naturally in this abstract mathematical framework, for example, in the study of the distribution of zeroes of entire functions, see \cite{CardonDA02,CardonGas05,LevinBook1} and references therein. We present one result as an illustration: The Laguerre-P\'olya class, denoted by $\mathcal{L}\mathcal{P}$, is defined as the collection of entire functions $f$ having only real zeros, and such that $f$ has the following factorization \cite[sections 2.6 and 2.7]{BoasBook}: $$ f(z)=cz^m e^{\alpha z- \beta z^2} \prod _{k}\left(1-\dfrac{z}{\alpha_k} \right)e^{z/\alpha_k}\; , $$ where $c,\alpha, \beta, \alpha_k $ are real numbers, $\beta \geq 0$, $\alpha_k \not = 0$, $m$ is a non-negative integer, and $\sum_{k=1}^{\infty}\alpha_k^{-2}< \infty$. Let $D$ be the differentiation operator and $\phi \in \mathcal{L}\mathcal{P}$; the following lemma presents one important instance in which the nonlocal operator $\phi(D)$ (formally defined via power series, see \cite{LevinBook1} and Subsection 3.2 below) is well defined, see \cite[Theorem 8, p. 360]{LevinBook1}.
\begin{lemma}\label{lemma0} Let $\phi, f \in \mathcal{L}\mathcal{P}$ be such that $$\phi(z)= e^{-\alpha z^2}\phi_1(z) \quad \mbox{and} \quad f(z)= e^{-\beta z^2}f_1(z)\; ,$$ where $\phi_1,f_1$ have genus $0$ or $1$ and $\alpha, \beta \geq 0$. If $\alpha \beta <1/4$, then $\phi(D) f \in \mathcal{L}\mathcal{P}$. \end{lemma} \noindent The notion of the genus of a function is explained in \cite[p. 22]{BoasBook}. We also have the following result; see \cite[Theorem 1]{CardonGas05}. \begin{theorem} \label{ef} Let $\phi, f \in \mathcal{L}\mathcal{P}$ be such that $$\phi(z)= e^{-\alpha z^2}\phi_1(z) \quad \mbox{and} \quad f(z)= e^{-\beta z^2}f_1(z) \; ,$$ where $\phi_1,f_1$ have genus $0$ or $1$ and $\alpha, \beta \geq 0$. If $\alpha \beta <1/4$ and $\phi$ has infinitely many zeros, then $\phi(D) f$ has only simple and real zeros. \end{theorem} We will not use this result explicitly in this paper, but zeroes of entire functions {\em are} crucial in our theory, see for instance Theorem 3.17 and Example 4.1 below, and so we expect that results such as Theorem \ref{ef} will play a role in future developments. Now, as we stated in \cite{CPR_Laplace}, our main motivation for the study of nonlocal equations comes from Physics. Nonlocal operators and equations appear in contemporary physical theories such as string theory, cosmology and non-local theories of gravity. We refer the reader to \cite{BK1,BK2,BiswasTir15,GianlucaBook1,LeiFen17} for information on the last two topics. With respect to string theory, let us mention two important equations: \begin{equation} \label{padic-1} p^{a\, \partial^2_t}\phi = \phi^p \; , \; \; \; \; \; a > 0 \; , \end{equation} and \begin{equation} \label{padic-101} e^{\, \partial^2_t/4}\phi = \phi^{p^2}\; .
\end{equation} Equation (\ref{padic-1}) describes the dynamics of the open $p$-adic string for the scalar tachyon field, while equation (\ref{padic-101}) describes the dynamics of the closed $p$-adic string for the scalar tachyon field, see {\it e.g.} \cite{AV,BK2,D,Moeller,V,VV,Vla3}. These equations have been studied formally via integral equations and a ``heat equation'' method, see {\em e.g.} \cite{AV,Moeller,MN,V,VV,Vla4}. In the intriguing paper \cite{D}, B. Dragovich constructs a field theory starting from (\ref{padic-1}) whose equation of motion (in $1+0$ dimensions) is \begin{equation} \label{zeta} \zeta \left(- \frac{1}{m^2} \, \partial_t^2 + h \right) \phi = U(\phi) \; . \end{equation} Here $\zeta$ is the Riemann zeta function as we noted before; $m,h$ are real parameters and $U$ is some nonlinear function. We understand the Riemann zeta function as the analytic extension of the infinite series $$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s},\; \; Re(s)>1\; ,$$ to the whole complex plane, except $s=1$, where it has a pole of order $1$. The main aim of this work is the development of a general theory (see section 3) to solve linear nonlocal-in-time equations of the form \begin{equation}\label{Gen_Eq} f(\partial_t)\phi(t)=g(t)\; , \; \; \; \; \; t \in \mathbb{R}\; , \end{equation} in which the {\em symbol} $f$ is an arbitrary analytic function on a simply connected domain. We have presented a rigorous setting for (a restricted class of) equations (\ref{Gen_Eq}) in \cite{CPR_Laplace}, using the Laplace transform. However, in \cite{CPR_Laplace} we also remark that the important equation \begin{equation} \label{zeta1} \zeta \left(- \frac{1}{m^2} \, \partial_t^2 + h \right) \phi = J(t) \; , \end{equation} naturally motivated by (\ref{zeta}), is beyond the reach of our Laplace transform method. One way to move forward is to use the Borel transform, as in \cite{CPR}.
However, the analysis of \cite{CPR} is restricted to symbols $f$ (see Equation (\ref{Gen_Eq})) which are entire functions, and to right-hand side terms $J$ which are functions of exponential type. Equation (\ref{zeta1}) motivates us to remove these two restrictions. This is precisely what we do in this work. \smallskip Let $\Omega \subseteq \mathbb{C}$ be a simply connected domain and denote by $Exp(\Omega)$ the space of entire functions of finite exponential type whose Borel transform has its singularities in $\Omega$ (see section 3). Let $f$ be a holomorphic function in $\Omega$; we use the Borel transform to define rigorously the operator $f(\partial_t)$ on the space $Exp(\Omega)$, in a way that evokes the definition of classical pseudo-differential operators via Fourier transform. Then, using this theory, we find the most general solution to Equation (\ref{Gen_Eq}) as a restriction to $\mathbb{R}$ of a function in $Exp(\Omega)$, and we apply our theory to the following linear zeta-nonlocal field equation (a normalized version of (\ref{zeta1}), with our signature conventions): \begin{equation}\label{eqzeta2} \zeta(\partial_t^2+h)\phi = J\; . \end{equation} We now describe the organization of this work. In Section 2 we give some preliminary comments which motivate the present work. In Section 3 we consider a general analytic symbol $f$ and we define the action of the operator $f(\partial_t)$ on the space of entire functions of exponential type using the Borel transform. We also solve explicitly Equation (\ref{Gen_Eq}). In Section 4 we apply the theory developed in Section 3 to the linear zeta-nonlocal scalar field equation (\ref{eqzeta2}): the zeroes of the Riemann zeta function play an important role in representing its solution.
Also in this section, we introduce the space $\mathcal{L}_{>}(\mathbb{R}_+)$ of all real functions $g$ with domain $[0,+\infty)$ such that their Laplace transform $\mathcal{L}(g)$ exists and has an analytic extension to an angular contour, and we study and solve equation (\ref{eqzeta2}) for right-hand sides in $\mathcal{L}_{>}(\mathbb{R}_+)$. This study involves some delicate limit procedures which take us outside the class of functions of exponential type. Finally, in an appendix we formally derive some equations of motion of interest from a mathematical point of view, including the zeta-nonlocal scalar field proposed by B. Dragovich. \section{A preliminary discussion} As discussed in the work of Dragovich \cite{D,D1}, see also the appendix of this work, the following nonlocal equation \begin{equation}\label{Zeq_04} \zeta\left( \dfrac{\square}{2m^2}+h \right)\psi= g(\psi) \end{equation} in which $g(z)$ is an analytic non-linear function of $z$, appears naturally as an interesting mathematical modification of $p$-adic string theory \cite{D,D1,D2}. Motivated by Equation (\ref{Zeq_04}), we study linear equations in $1+0$ dimensions of the form \begin{equation} \label{Zeq_044} \zeta(\partial_t^2+h) \phi = J\; , \end{equation} in which we are using a signature so that $\square = \partial_t^2$, simply for comparison purposes with our previous articles. In our previous work \cite{CPR_Laplace}, we studied linear nonlocal equations (and their associated Cauchy problems) using an approach based on Laplace transforms and the Doetsch representation theorem, see \cite{Doetsch2}: if $L^p([0,+\infty))$ is the standard $L^p$-Lebesgue space and $H^q(\mathbb{C}_+)$ is the Hardy space, there exists a correspondence between these spaces determined by the Laplace transform $\mathcal{L}:L^p([0,+\infty))\to H^q(\mathbb{C}_+)$ for appropriate Lebesgue exponents $p,q$.
In this situation, we obtained exact formulas for the representation of the solution for equations such as (\ref{Gen_Eq}). (The approach of \cite{CPR_Laplace} supersedes previous work \cite{GPR_JMP,GPR_CQG}.) One of our results is the following theorem (the function $r$ appearing therein is a ``generalized initial condition''; see \cite[Section 3]{CPR_Laplace}): \begin{theorem} \label{main_thm_1} Let us fix a function $f$ which is analytic in a region $D$ which contains the half-plane $\{s \in \mathbb{C} : Re(s) > 0\}$. We also fix $p$ and $p'$ such that $1<p\leq 2$ and $\frac{1}{p}+\frac{1}{p'}=1$, and we consider a function $J \in L^{p'}(\mathbb{R}_+)$ such that $\mathcal{L}(J) \in H^p(\mathbb{C}_+)$. We assume that the function $(\mathcal{L}( J ) + r)/f$ is in the space $H^p(\mathbb{C}_+)$. Then, the linear equation \begin{equation} \label{lin_gen_1} f(\partial_t)\phi = J \end{equation} can be uniquely solved on $L^{p'}(0,\infty)$. Moreover, the solution is given by the explicit formula \begin{equation}\label{lin_gen_1111} \phi = \mathcal{L}^{-1} \left( \frac{\mathcal{L}( J ) + r}{f} \, \right) \; . \end{equation} \end{theorem} \noindent Moreover, under some technical assumptions (see \cite[Corollary 2.10]{CPR_Laplace}), the representation formula (\ref{lin_gen_1111}) for the solution can be reduced to \begin{equation}\label{sol_lor_11} \phi = \mathcal{L}^{-1} \left( \frac{\mathcal{L}( J )}{f}\right) + \mathcal{L}^{-1} \left(\frac{r}{f} \, \right) \; . \end{equation} The theory can be applied to various field models; in particular, it can be applied to zeta-nonlocal field models of the form \begin{equation*} \label{Zeq_0444} \zeta(\partial_t+h) \phi = J\; , \end{equation*} for appropriate functions $J$, see \cite[Section 4]{CPR_Laplace}. Now, let us denote by $\mathcal{A}(\mathbb{C}_+)$ the class of functions which are analytic in a region $D$ which contains the half-plane $\{s \in \mathbb{C} : Re(s) > 0\}$.
We can see that for $h>1$ the symbol $\zeta(s+h)$ is in the class $\mathcal{A}(\mathbb{C}_+)$ while, if $p(s):=s^2$, the symbol $\zeta_h\circ p(s):=\zeta(s^2+h)$ is not, as we explain presently. It follows from this observation that for some basic forces $J$ (e.g. piecewise smooth functions with exponential decay) we have $ \frac{\mathcal{L}(J)}{\zeta_h\circ p} \not \in H^p(\mathbb{C}_+)\,$, and therefore the representation formula (\ref{sol_lor_11}) of the solution breaks down. We conclude that the study of Equation (\ref{Zeq_044}) requires a generalization of the theory developed in \cite{CPR_Laplace}. First of all, we observe that the properties of the Riemann zeta function (see for instance \cite[Section 4]{CPR_Laplace}) imply that the symbol \begin{equation} \label{nszf} \zeta(s^2+h)=\sum_{n=1}^{\infty}\dfrac{1}{n^{s^2+h}} \end{equation} is analytic in the region $\Gamma:= \{ s \in \mathbb{C} : Re(s)^2-Im(s)^2>1-h \}$, which is not a half-plane; its analytic extension $\zeta \circ p$ has poles at the vertices of the hyperbolas $Re(s)^2-Im(s)^2=1-h $, and its critical region is the set $\{ s \in \mathbb{C} : -h< Re(s)^2-Im(s)^2<1-h \}$. In fact, we recall from \cite[Section 6]{CPR_Laplace} that according to the value of $h$ we have: \begin{itemize} \item [i)]For $h>1$, $\Gamma$ is the region limited by the interior of the dark hyperbola $Re(s)^2-Im(s)^2=1-h $ containing the real axis: \begin{center} \includegraphics[width=4cm, height=4cm]{111} \end{center} {\footnotesize The poles of $\zeta(s^2+h)$ are the vertices of the dark hyperbola, indicated by two thick dots.
The trivial zeroes of $\zeta(s^2+h)$ are indicated by thin dots on the imaginary axis; and the non-trivial zeroes are located on the darker painted region (critical region).} \item [ii)]For $h<1$, $\Gamma$ is the interior of the dark hyperbola $Re(s)^2-Im(s)^2=1-h $ containing the imaginary axis: \begin{center} \includegraphics[width=4cm, height=4cm]{222} \end{center} {\footnotesize The poles of $\zeta(s^2+h)$ are the vertices of the dark hyperbola, indicated by two thick dots. The trivial zeroes of $\zeta(s^2+h)$ are indicated by thin dots on the real axis; the non-trivial zeroes are located on the darker painted region (critical region).} \item [iii)]For $h=1$, $\Gamma$ is the interior of the cones limited by the curves $y=|x|, y=-|x|$. \begin{center} \includegraphics[width=4cm, height=4cm]{333} \end{center} {\footnotesize The pole of $\zeta(s^2+1)$ is the origin (vertex of the dark curves $y=|x|, y=-|x|$). The trivial zeroes of $\zeta(s^2+h)$ are indicated by thin dots on the imaginary axis; the non-trivial zeroes are located on the darker painted region (critical region).} \end{itemize} On the other hand, since the Riemann zeta function has an infinite number of nontrivial zeroes in the critical strip (as famously proven by Hadamard and Hardy, see \cite{KaVo} for original references), the function $\zeta_{h}\circ p(\cdot)$ also has an infinite number of nontrivial zeroes on its critical region. We denote by $\mathcal{Z}$ the set of all such zeroes. By i), ii) and iii) we have that $\mathcal{Z}$ is contained in the corresponding dark dotted region. Moreover, $$ \sup_{z\in \mathcal{Z}}|Re(z)|=+\infty.$$ This analysis implies that the function $ \mathcal{L}(J)/(\zeta_h\circ p)$ does not necessarily belong to $H^p(\mathbb{C}_+)\,$, and therefore the expression $\mathcal{L}^{-1}\left( \frac{\mathcal{L}(J)}{\zeta_h\circ p} \right)$ in the representation of the solution (\ref{sol_lor_11}) does not always make sense.
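Let us briefly justify the claim that $\sup_{z\in \mathcal{Z}}|Re(z)|=+\infty$, using the standard fact that the imaginary parts of the nontrivial zeroes of $\zeta$ are unbounded. If $\rho=\beta+i\gamma$ is a nontrivial zero of $\zeta$ with $\gamma>0$ large, then any solution $z$ of $z^2+h=\rho$ is a zero of $\zeta_h\circ p$, and since $\rho-h\sim i\gamma$ we obtain $$z=\sqrt{\rho-h}\sim \sqrt{\gamma}\, e^{i\pi/4}\; , \qquad \mbox{so that} \qquad |Re(z)|\sim \sqrt{\gamma/2}\; ;$$ letting $\gamma\to\infty$ shows that $|Re(z)|$ is unbounded on $\mathcal{Z}$.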
Thus, a new approach for the study of Equation (\ref{Zeq_04}) is necessary. As stated in Section 1, the method that we use is based on the Borel transform, see \cite{CPR,U,BoasBook,DubinskyBook} and references therein. \section{The general theory for nonlocal equations} \subsection{Entire functions of exponential type} \begin{defi} An entire function $\phi:\mathbb{C}\to \mathbb{C}$ is said to be of finite exponential type $\tau_{\phi}$ and finite order $\rho_{\phi}$ if $\tau_{\phi}$ and $\rho_{\phi}$ are the infima of the positive numbers $\tau, \rho$ such that the following inequality holds: $$|\phi(z)|\leq Ce^{\tau|z|^{\rho}}, \quad \forall z\in \mathbb{C}\; , \; \mbox{ for some } C>0.$$ \end{defi} \noindent When $\rho_{\phi}=1$, the function ${\phi}$ is said to be of {\bf exponential type}, or of {\bf exponential type $\tau_{\phi}$}, if we need to specify its type. If we know the representation of an entire function $\phi$ as a power series, then a standard way to calculate its order, see \cite[Theorem 2.2.2]{BoasBook}, is by using the formula \begin{equation}\label{Order} \rho = \left( 1-\limsup_{n\to \infty}\dfrac{\ln{|\phi^{(n)}(0)|}}{n\ln(n)} \right)^{-1}\; , \end{equation} while its type is calculated as follows (see formula 2.2.12, page 11 in \cite{BoasBook}): \begin{equation}\label{Type} \sigma=\limsup_{n\to \infty} |\phi^{(n)}(0)|^{1/n}\; . \end{equation} The space of functions of exponential type will be denoted by $Exp(\mathbb{C})$. \begin{defi} Let $\phi$ be an entire function of exponential type $\tau_{\phi}$. If $\phi(z)=\sum_{n=0}^{\infty}a_nz^n$, then the Borel transform of $\phi$ is defined by $$B(\phi)(z):=\sum_{n=0}^{\infty}\dfrac{a_nn!}{z^{n+1}}\; .$$ \end{defi} It can be checked that $B(\phi)(z)$ converges uniformly for $|z|>\tau_{\phi}$, see \cite[p. 106]{U}, and therefore it defines an analytic function on $\{z\in \mathbb{C} : |z|>\tau_{\phi}\}$.
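As a simple illustration of these definitions, consider the exponential $\phi_{\lambda}(z)=e^{\lambda z}=\sum_{n=0}^{\infty}\frac{\lambda^n}{n!}z^n$ for $\lambda\in\mathbb{C}\setminus\{0\}$, which is of exponential type $\tau_{\phi_{\lambda}}=|\lambda|$. Its Borel transform is the geometric series $$B(\phi_{\lambda})(z)=\sum_{n=0}^{\infty}\dfrac{\lambda^n}{z^{n+1}}=\dfrac{1}{z-\lambda}\; , \qquad |z|>|\lambda|\; ,$$ so that $s(B(\phi_{\lambda}))=\{\lambda\}$.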
An alternative way to calculate the Borel transform of an entire function $\phi$ of exponential type $\tau_{\phi}$ is to use the complex Laplace transform, see \cite{BoasBook}: if $z = |z| \exp(i \theta)$ is such that $|z|=r>\tau_{\phi}$, then \begin{equation}\label{EqCBT} B(\phi)(re^{i\theta})=e^{i\theta}\int_0^{\infty}\phi(te^{i\theta})e^{-rt}dt\; . \end{equation} \noindent In particular, if $z\in \mathbb{R}$ is such that $z>\tau_{\phi}$, then $B(\phi)$ can be obtained as the analytic continuation of its real Laplace transform: \begin{equation}\label{EqRBT} \mathcal{L}(\phi)(z)=\int_0^{\infty}\phi(t)e^{-zt}dt\; . \end{equation} For $\phi \in Exp(\mathbb{C})$, we let $s(B(\phi))$ denote the set of singularities of the Borel transform of $\phi$, and we also denote by $S(\phi)$ the {\em conjugate diagram} of $B(\phi)$, that is, the closed convex hull of the set of singularities $s(B(\phi))$. The set $S(\phi)$ is a convex compact subset of $\mathbb{C}$, and we can check that $B(\phi)$ is an analytic function in $\mathbb{S}\setminus S(\phi)$, where $\mathbb{S}$ is the extended complex plane $\mathbb{C}\cup \{\infty\}$ and we have set $B(\phi)(\infty)=0$. \begin{rem} Hereafter we will use the following notation: if $\Omega \subset \mathbb{C}$ is a domain, then $\Omega^c$ denotes the complement of $\Omega$ in the extended complex plane $\mathbb{S}$. \end{rem} \begin{defi} Let $\Omega$ be a simply connected domain; we define the space $Exp(\Omega)$ as the set of all entire functions $\phi$ of exponential type such that its Borel transform $B(\phi)$ has all its singularities in $\Omega$ and such that $B(\phi)$ admits an analytic continuation to $\Omega^c$. This continuation will be denoted by $\mathcal{B}(\phi)$. \end{defi} \begin{rem} Since $\Omega^c$ is closed, the fact that $\mathcal{B}(\phi)$ is analytic in $\Omega^c$ means that there exists an open set $U\subset \mathbb{S}$ such that $\mathcal{B}(\phi)$ is analytic in $U$ and $\Omega^c\subset U$.
Therefore, using the alternative definition of Borel transform {\rm (\ref{EqCBT})}, we understand $\mathcal{B}(\phi)$ as the analytic continuation of its real Laplace transform {\rm (\ref{EqRBT})}. \end{rem} \begin{rem} In what follows, {\bf the Borel transform of} $\phi \in Exp(\Omega)$ always refers to the complex function $\mathcal{B} (\phi)$ together with an open set $U$ in the extended plane $\mathbb{S}$ such that $\mathcal{B}(\phi)$ is analytic in $U$ and $\Omega^c\subset U$. \end{rem} \begin{defi} For a function $\phi \in Exp(\Omega)$, we define the set $H_1(\phi)$ to be the class of closed rectifiable and simple curves in $\mathbb{C}$ which are pairwise homologous and contain the set $s(B(\phi))$ in their bounded regions. \end{defi} The following theorem is a classical result about the representation of entire functions of exponential type. \begin{theorem}\label{Pol_Rep}(Polya's Representation Theorem). Let $\phi$ be a function of exponential type and let $\gamma\in H_1(\phi)$. Then, $$\phi(z)=\dfrac{1}{2\pi i}\int_{\gamma}e^{sz}\mathcal{B}(\phi)(s)ds.$$ In particular, if $\phi$ is of type $\tau$ and $R> \tau$, then $$\phi(z)=\dfrac{1}{2\pi i}\int_{|s|=R}e^{sz}B(\phi)(s)ds.$$ \end{theorem} This theorem is discussed for instance in \cite[p. 107]{U} and \cite{CPR}. A proof appears in \cite[Theorem 5.3.5]{BoasBook}. \begin{defi} If $d$ is a distribution with compact support in $\mathbb{C}$, we define the $\mathcal{P}$-transform of $d$ by: $$\mathcal{P}(d)(z):= \, \langle e^{sz},d \rangle\; , \quad z\in \mathbb{C}.$$ \end{defi} The $\mathcal{P}$-transform is called the Fourier-Laplace transform in \cite{U} and the Fourier-Borel transform in Martineau's classical paper \cite{Mar}.
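For example, if $d=\delta_{\lambda}$ is the Dirac measure concentrated at a point $\lambda\in\mathbb{C}$, then $$\mathcal{P}(\delta_{\lambda})(z)=\langle e^{sz},\delta_{\lambda}\rangle=e^{\lambda z}\; ;$$ this is consistent with Polya's representation theorem, since $\mathcal{B}(e^{\lambda z})(s)=1/(s-\lambda)$ and the residue theorem give $\frac{1}{2\pi i}\int_{\gamma}\frac{e^{sz}}{s-\lambda}\,ds=e^{\lambda z}$ for any curve $\gamma$ enclosing $\lambda$.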
For the particular case in which $d=\mu$ is a complex measure with compact support, the $\mathcal{P}$-transform is $$\mathcal{P}(\mu)(z) = \underset{\mathbb{C}}\int e^{sz}d\mu (s)\; , \quad z\in \mathbb{C}.$$ \begin{proposition}\label{ProChar} Let $\mathcal{O} \subset \mathbb{C}$ be a simply connected domain; if $\mu$ is a complex measure with compact support contained in $\mathcal{O}$, then $\mathcal{P}(\mu)\in Exp(\mathcal{O})$. Conversely, given any function $\phi\in Exp(\mathcal{O})$, there exists a complex measure $\mu_{\phi}$ with compact support in $\mathcal{O}$ and such that $\mathcal{P}(\mu_{\phi})(z)=\phi(z)$. The measure $\mu_{\phi}$ is not unique: it can be chosen to have support on any given curve $\gamma\in H_1(\phi)$. \end{proposition} \begin{proof} Let $K$ be the support of the complex measure $\mu$ (which is of finite variation). The $\mathcal{P}$-transform of $\mu$ is $$\mathcal{P}(\mu)(z)=\underset{\mathbb{C}}\int e^{sz}d\mu (s)\; ,$$ which is an entire function. Now, if $R=\sup_{s\in K}|s|$, we have $$|\mathcal{P}(\mu)(z)|\leq \underset{\mathbb{C}}\int e^{R|z|}|d\mu (s)|\leq e^{R|z|}||\mu||\; ,$$ that is, $\mathcal{P}(\mu)$ is an entire function of exponential type. It remains to show that $s(B(\mathcal{P}(\mu)))\subset \mathcal{O}$. To do that, we compute the Borel transform of $\mathcal{P}(\mu)$ as the analytic continuation of its real Laplace transform. Let $z$ be a real number such that $z> R$. Then, we have \begin{eqnarray*} B(\mathcal{P}(\mu))(z)&=&\int_0^{+\infty}e^{-zt}\mathcal{P}(\mu)(t)dt\\ &=&\int_0^{+\infty}e^{-zt}\underset{K}\int e^{st}d\mu (s)dt\\ &=&\underset{K}\int \int_0^{+\infty} e^{(s-z)t}dt d\mu (s)\\ &=&\underset{K}\int \dfrac{1}{z-s}d\mu (s)\; , \end{eqnarray*} in which we have used Fubini's theorem. From these computations we have that $\mathcal{P}(\mu) \in Exp(\mathcal{O})$. In fact, the last integral is the analytic continuation $\mathcal{B}(\mathcal{P}(\mu))$. To prove the converse implication, let $\gamma\in H_1(\phi)$.
Then, Polya's representation theorem (Theorem \ref{Pol_Rep}) means that $\phi$ can be represented as $\phi(z)=\mathcal{P}(\mu_{\phi})(z)$ for the complex measure $\mu_{\phi}$ defined by \begin{equation}\label{defMea} d\mu_{\phi}(s):=\mathcal{B}(\phi)(s)\dfrac{ds}{2\pi i}\; , \quad s\in \gamma\; . \end{equation} \end{proof} \begin{rem} The analogue of this proposition for general distributions can be found in {\rm \cite{CPR}}. We prefer our version with complex measures because in this work we do not use the machinery of distributions. \end{rem} \subsection{Functions of $\partial_t$ via Borel transform} Let $\Omega$ be a simply connected domain (equivalently, let $\Omega$ be a Runge domain, see \cite[Prop. 17.2]{TrevesBook}). In what follows we denote by $Hol(\Omega)$ the set of holomorphic functions on $\Omega$. \begin{defi}\label{Def_Oper} Let $f \in Hol(\Omega)$, $\phi \in Exp(\Omega)$, and assume that $\mu_{\phi}$ is the complex measure defined in {\rm (\ref{defMea})} with compact support on a curve $\gamma \in H_1(\phi)$ so that $\mathcal{P}(\mu_{\phi})=\phi$. We define the operator $f(\partial_t)\phi$ as $$f(\partial_t)\phi:=\mathcal{P}(f \mu_{\phi})\; .$$ \end{defi} In Definition \ref{Def_Oper} we assume that the curve $\gamma\in H_1(\phi)$, which defines the measure $\mu_\phi$, is contained in the region $\Omega$. By Cauchy's Theorem, the operator $f(\partial_t)$ is independent of such a $\gamma$ and therefore is well defined. In this way, using the new measure $f\mu_{\phi}$ and the definition of the $\mathcal{P}$-transform, we see that Equation (\ref{Gen_Eq}) is understood as the following integral equation \begin{equation}\label{EqInt} \int_{\gamma}e^{st}f(s)\mathcal{B}(\phi)(s)\frac{ds}{2\pi i}=g(t),\; \; \gamma \in H_1(\phi). \end{equation} We may wonder whether it is necessary to restrict ourselves to Runge domains. Indeed, the counterexample below is proposed in \cite[p.
27]{DubinskyBook} in order to show that $f(\partial_t)\phi$ can be multivalued if $\Omega$ is not a Runge domain. We reproduce it here for the sake of completeness. \begin{example} \label{runge} Set $\Omega = \mathbb{C}\setminus \{0\}$. We consider the symbol $f(s)=\dfrac{1}{s}$, which is holomorphic in $\Omega$, and the function $\phi_{\lambda}(z)=e^{\lambda z}, \lambda \in \Omega$. Then $s(\mathcal{B}(\phi_{\lambda}))=\{\lambda\}$ and we have, for a closed curve $\gamma$ in $\Omega$ containing $\lambda$, two possible values for $f(\partial_t)\phi_{\lambda}(z)$: either $$f(\partial_t)\phi_{\lambda}(z)=\dfrac{1}{\lambda}(e^{\lambda z}-1) \; ,$$ or $$f(\partial_t)\phi_{\lambda}(z)=\dfrac{1}{\lambda}e^{\lambda z} \; ,$$ depending on whether $\gamma$ encloses the point $\{0\}$ or not. \end{example} \smallskip The following proposition justifies some of the formal computations appearing in physical papers (see \cite{BK1,BK2} and references therein). It says that the integral operator $f(\partial_t)$ is {\em locally} ({\em i.e.}, whenever $f$ can be expanded as a power series in an adequate ball contained in $\Omega$) a differential operator of infinite order. \begin{proposition}\label{IODEq} Let $R>0$ and assume that $B_R(0)\subset \Omega$. Suppose that $\phi\in Exp(\Omega)$ is such that $s(\mathcal{B}(\phi))\subset B_R(0)$ and take $f \in Hol(\Omega)$ with $f(z)=\sum_{k=0}^{\infty}a_kz^k, \, \, |z| < R$. Then, there exists a measure $\mu_{\phi}$ supported on a curve $\gamma \in H_1(\phi)$ contained in $B_R(0)$ such that $\phi=\mathcal{P}(\mu_{\phi})$, and moreover $$f(\partial_t)\phi(t)=\mathcal{P}(f \mu_{\phi})(t)=\lim_{l\to \infty}\sum_{k=0}^{l}a_k(\partial_t^k\phi)(t),$$ uniformly on compact sets. \end{proposition} \begin{proof} We note that, since $s(\mathcal{B}(\phi))$ is a compact subset of $B_R(0)$, there exists a real number $\delta > 0$ such that $dist(s(\mathcal{B}(\phi)),\{s: |s|=R\})> \delta$.
From \cite[Theorem 5.3.12]{BoasBook} we have that $\tau_\phi= \sup_{\omega \in s(\mathcal{B}(\phi)) } |\omega|$ is the type of $\phi$; hence there exists a curve $\gamma \subset (B_{\tau_\phi}(0))^{c}\cap B_R(0)$ such that $\gamma \in H_1(\phi)$. Let $\mu_\phi$ be the measure described by Proposition \ref{ProChar} supported on $\gamma$; then $\phi=\mathcal{P}(\mu_{\phi})$. Moreover, using the measure $\mu_\phi$, we compute: \begin{eqnarray*} \dfrac{d}{dz}\phi(z)&=&\dfrac{d}{dz}\mathcal{P}(\mu_{\phi})(z) = \dfrac{d}{dz} \underset{\gamma}\int e^{sz}d\mu_{\phi}(s) =\underset{\gamma}\int se^{sz}d\mu_{\phi}(s) =\mathcal{P}(s\mu_{\phi}). \end{eqnarray*} From this, we obtain $$\sum_{k=0}^{l}a_k\dfrac{d^k}{dz^k}\phi(z)-\mathcal{P}(f \mu_{\phi})(z)= \mathcal{P}\left(\left\{\sum_{k=0}^{l}a_ks^k-f(s)\right\}\mu_{\phi}\right)(z)\; ,$$ and therefore \begin{eqnarray*} \left| \sum_{k=0}^{l}a_k\dfrac{d^k}{dz^k}\phi(z)-\mathcal{P}(f \mu_{\phi})(z)\right| & = & \left| \underset{\gamma}\int e^{sz}\{\sum_{k=0}^{l}a_ks^k-f(s)\}d\mu_{\phi}(s) \right| \\ &\leq& \underset{\gamma}\int e^{|z||s|}\, \left|\sum_{k=0}^{l}a_ks^k-f(s)\right| \, |d \mu_{\phi}|(s) \; . \end{eqnarray*} Now we take limits as $l\to \infty$. The result follows from the Lebesgue dominated convergence theorem. We note that the convergence is uniform over compact subsets of $\mathbb{C}$. \end{proof} Thus, under the hypothesis of this proposition, we have seen that Equation (\ref{Gen_Eq}) becomes ``locally'' the following infinite order differential equation: \begin{equation}\label{EqIOD} \sum_{k=0}^{\infty}a_k\dfrac{d^k}{dt^k}\phi(t)=g(t). \end{equation} Interestingly, Proposition \ref{IODEq} also shows that $f(\partial_t)$ is linear on the space of functions $\phi$ satisfying the hypothesis appearing therein. We now show that linearity is true in general: \begin{lemma} The operator $f(\partial_t): Exp(\Omega) \to Exp(\Omega)$ is linear.
\end{lemma} \begin{proof} For $\phi,\psi\in Exp(\Omega)$ we have that $s(\mathcal{B}(\phi+\psi))\subseteq s(\mathcal{B}(\phi))\cup s(\mathcal{B}(\psi))\subset \Omega$. Let $\gamma\in H_1(\phi+\psi)$ be such that $s(\mathcal{B}(\phi))\cup s(\mathcal{B}(\psi))$ is enclosed by $\gamma$. This implies that $\gamma\in H_1(\phi)$ and $\gamma \in H_1(\psi)$; then \begin{eqnarray*} f(\partial_t)(\phi+\psi)(t)&=&\mathcal{P}(f \mu_{\phi+\psi})(t)=\int_{\gamma}e^{st}f(s)B(\phi+\psi)(s)\dfrac{ds}{2\pi i} \\ &=&\int_{\gamma}e^{st}f(s)B(\phi)(s)\dfrac{ds}{2\pi i}+\int_{\gamma}e^{st}f(s)B(\psi)(s)\dfrac{ds}{2\pi i}\\ &=&f(\partial_t)(\phi)(t)+f(\partial_t)(\psi)(t). \end{eqnarray*} \end{proof} \begin{rem} We remark that, if $\phi,\psi\in Exp(\Omega)$ then the inclusions $H_1(\phi+\psi)\subseteq H_1(\phi)$ and $H_1(\phi+\psi)\subseteq H_1(\psi)$ do not necessarily hold. This is why we had to choose an appropriate curve $\gamma$ to carry out the above proof. For example, let $R>2$ and set $\lambda \in \mathbb{C}$ with $|\lambda|=3/2$; also consider two functions $g_1,g_2 \in Exp_{1/2}(\mathbb{C})$; then, the functions $f_1(z)=e^{\lambda z}+g_1(z)$ and $f_2(z)=-e^{\lambda z}+g_2(z)$ are elements in $Exp(B_R(0))$. Furthermore, if $\gamma \in H_1(f_1+f_2)$ with $\gamma \subset B_{1/2}(0)^{c}\cap B_1(0)$, then $\gamma \not \in H_1(f_1)$ and also $\gamma \not \in H_1(f_2)$. \end{rem} \begin{rem}\label{remOp} Let us assume that $Exp(\Omega)$ is endowed with the topology of uniform convergence on compact sets. With this topology, the operator $f(\partial_t)$ is not bounded: it is enough to take $\Omega=\mathbb{C}$ and as symbol $f$ the identity map. The following specific example shows, less trivially, that the linear operator $f(\partial_t)$ is not necessarily continuous: let $\Omega=\mathbb{C}\setminus \mathbb{R}^{+}_0$ (a Runge domain) and consider the symbol $f(s)=\dfrac{1}{s}$. Then $f\in Hol(\Omega)$; we consider the sequence $\phi_n(z)=e^{\frac{i}{n}z}-e^{\frac{-i}{n}z}$.
We have $$s(\mathcal{B}(\phi_n))=\left\{-\frac{i}{n},\frac{i}{n} \right\}\subset \Omega\; .$$ We can see that $\phi_n\to 0$ in the topology of uniform convergence on compact sets. On the other hand we have (see Example \ref{runge}) $$f(\partial_t)(\phi_n)(z)=\frac{n}{i}\left( e^{\frac{i}{n}z}+e^{\frac{-i}{n}z} \right)\; ,$$ and considering the compact ball $\overline{B_k(0)}$ with centre the origin and radius $k$, we have $$\sup_{z\in \overline{B_k(0)}}|f(\partial_t)(\phi_n)(z)|\geq |f(\partial_t)(\phi_n)(0)|=2n\; ,$$ which goes to infinity as $n\to \infty$. \end{rem} The following lemma says that nonlocal equations involving $f(\partial_t)$ can be solved in $Exp(\Omega)$: \begin{lemma}\label{SurjLem} The operator $f(\partial_t): Exp(\Omega) \to Exp(\Omega)$ is surjective. \end{lemma} \begin{proof} The surjectivity of the operator comes from the solvability of the following equation \begin{equation} f(\partial_t)\phi=g, \,\, \quad g\in Exp(\Omega) \; . \end{equation} Since the set of zeros of $f$, $\mathcal{Z}(f)$ say, is a set of isolated points and $g\in Exp(\Omega)$, there is a curve $\gamma\in H_1(g)$ such that $\mathcal{Z}(f)\cap \gamma= \emptyset$. Also, there is a measure $\mu_g$ supported on $\gamma$ such that $g=\mathcal{P}(\mu_g)$. Set $\phi=\mathcal{P} (\dfrac{\mu_g}{f})$; i.e. $$\phi(z)=\dfrac{1}{2\pi i}\int_{\gamma} \dfrac{ e^{z\eta}\mathcal{B}(g)(\eta)}{f(\eta)}d\eta\; .$$ It is evident that $\phi\in Exp(\mathbb{C})$; now we want to see that $s(\mathcal{B}(\phi))\subset \Omega$. For that, let us calculate the Borel transform of $\phi$ as the analytic continuation of its real Laplace transform.
Let $z\in \mathbb{R}$ be sufficiently large so that $|Re(\eta)|<z\; $ for all $\eta \in \gamma$; we have \begin{eqnarray*} \int_0^{+\infty}e^{-zt}\phi(t)dt &=& \int_0^{+\infty}e^{-zt}\dfrac{1}{2\pi i}\int_{\gamma} \dfrac{ e^{t\eta}\mathcal{B}(g)(\eta)}{f(\eta)}d\eta\, dt\\ &=&\dfrac{1}{2\pi i}\int_{\gamma} \int_0^{+\infty}\dfrac{ e^{t(\eta-z)}\mathcal{B}(g)(\eta)}{f(\eta)}dt\, d\eta\\ &=&\dfrac{1}{2\pi i}\int_{\gamma}\dfrac{\mathcal{B}(g)(\eta)}{f(\eta)(z-\eta)}d\eta\; , \end{eqnarray*} in which we have used Fubini's theorem. Therefore, $$\mathcal{B}(\phi)(z)=\dfrac{1}{2\pi i}\int_{\gamma}\dfrac{\mathcal{B}(g)(\eta)}{f(\eta)(z-\eta)}d\eta\; ,$$ and using Morera's theorem, we can see that $\mathcal{B}(\phi)$ is analytic in $\Omega^{c}$; thus $s(\mathcal{B}(\phi))\subset \Omega$. On the other hand, it is not difficult to see that $\phi$ satisfies $f(\partial_t)\phi=g$. \end{proof} \subsection{A representation formula for solutions to $f(\partial_t)\phi = g$} \begin{proposition}\label{FDim} Let $f \in Hol(\Omega)$ and denote by $\mathcal{Z}(f)$ the set of its zeros.
A function $\phi \in Exp(\Omega)$ of exponential type $\tau_{\phi}$ is a solution of the homogeneous equation $f(\partial_t)\phi=0$ if and only if there exist polynomials $p_k$ of degree less than the multiplicity of the root $s_k \in \mathcal{Z}(f) \cap B_{\tau_{\phi}}(0)$, such that $$\phi(t)=\sum_{\substack{s_k\in \mathcal{Z}(f) \\ |s_k|<\tau_{\phi}}} p_k(t) e^{t s_k}\; .$$ \end{proposition} \begin{proof} ({\sl Sufficiency}) In order to prove that the function $$\phi(t)=\sum_{\substack{s_k\in \mathcal{Z}(f)\\ |s_k|<\tau_{\phi}}} p_k(t) e^{t s_k}\; ,$$ is a solution of the homogeneous equation $f(\partial_t)\phi=0$, it is enough to see that for a given $k$ and $s_k\in \mathcal{Z}(f)$ with $|s_k|<\tau_{\phi}$, the following holds: $$f(\partial_t)(p_k(t)e^{ts_k})=0.$$ Indeed, we first note that for a natural number $d$ and a complex number $a_d$, the Borel transform of $a_d z^de^{\lambda z}$ is \begin{eqnarray}\label{NewIdent} \mathcal{B}(a_d z^de^{\lambda z})(s)= a_d\frac{d!}{(s-\lambda)^{d+1}}\; . \end{eqnarray} Now, let $s_k$ be a zero of $f$ of order $d_k+1$, $p_k$ a polynomial of degree $deg(p_k)\leq d_k$ and suppose that $\gamma_k \in H_1(p_k(z)e^{zs_k})$; then, using linearity of the Borel transform and Cauchy's theorem we have $$f(\partial_t)\left(p_k(t)e^{ts_k} \right)=\dfrac{1}{2\pi i}\int_{\gamma_k}e^{t\eta}f(\eta)\mathcal{B}(p_k(z)e^{zs_k})(\eta) d\eta =0\; .$$ From these computations, we deduce that $$f(\partial_t)\left( \sum_{s_k\in \mathcal{Z}(f): |s_k|<\tau_{\phi}} p_k(t)e^{ts_k} \right)=0.$$ On the other hand, it is evident that $$\phi(t)=\sum_{\substack{s_k\in \mathcal{Z}(f) \\ |s_k|<\tau_{\phi}}} p_k(t)e^{t s_k}\; ,$$ has exponential type $\tau_{\phi}$, and from (\ref{NewIdent}) we conclude that $\phi \in Exp(\Omega)$. Before proving {\sl necessity}, we must ensure finite dimensionality of a vector space to be defined below.
We write this fact as a separate result, because it is interesting in its own right; after that we will finish the proof of this proposition. \end{proof} We use the following notation. Let $\mathcal{R}$ be the closure of a bounded and simply connected region which does not contain any singularity of $f$, and let $\gamma$ denote its boundary. We also denote by $A(\mathcal{R})$ the set of continuous functions on $\mathcal{R}$ that are analytic in the interior of $\mathcal{R}$, endowed with the supremum norm, and, for $z \in \mathbb{C}$, we let $E_z:\mathcal{R}\to\mathbb{C}$ be the complex exponential function $E_z(\xi)=e^{z\xi}$. Finally, we let $$\mathcal{M}_{f,\gamma}:=cl(span\{E_z\cdot f: z\in \mathbb{C}\})\; ,$$ where $cl$ denotes closure in $A(\mathcal{R})$. \begin{lemma} Let $\{s_k\}_{k=1}^K$ be an enumeration of all the zeros of $f$ in $\mathcal{R}$ and let $m_k$ denote their corresponding multiplicities. Then \begin{equation} \label{zeroes} \mathcal{M}_{f,\gamma}= \{\psi\in A(\mathcal{R}): \psi \, \, is \, \, zero \, \, at\, \, s_k\, \, with \, \, multiplicity \, \, \geq m_k, \, \, 1\leq k\leq K\}\; . \end{equation} \end{lemma} \begin{proof} It is not difficult to see that $\mathcal{M}_{f,\gamma}$ is a subset of the set appearing in the right hand side of (\ref{zeroes}). Let us prove the other inclusion. If $\psi$ belongs to the right hand side of (\ref{zeroes}), then $\dfrac{\psi}{f}\in A(\mathcal{R})$. Since $\mathcal{R}$ is compact with connected complement, by Mergelyan's Theorem (see \cite[Theorem 20.5]{WRudin} and \cite{SMerge1,SMerge2}) we know that the set of polynomials $Pol$ is dense in $A(\mathcal{R})$. Therefore, given $\epsilon >0$ there is a polynomial $p\in \, Pol$ such that $||\dfrac{\psi}{f}-p||_{A(\mathcal{R})}<\epsilon$. It follows that $\psi \in cl(f \cdot Pol)$.
Now we note that $$Pol\subset cl(span\{E_z:z\in \mathbb{C}\}).$$ Indeed, it is sufficient to note that the right hand side is a closed algebra which contains the constant function $1$ and the identity function $\xi \mapsto \xi$, since $\xi=\lim\limits_{n\to\infty}\dfrac{e^{\xi/n}-1}{1/n}$ uniformly on $\mathcal{R}$. Therefore $$\psi\in cl(f \cdot Pol)\subset cl(span\{f \cdot E_z: z\in \mathbb{C}\})\; .$$ \end{proof} A special case of this lemma appears in \cite[Lemma 5.4]{CPR}. \begin{lemma}\label{Dimension} Under the conditions of the previous lemma, the space $A(\mathcal{R})/\mathcal{M}_{f,\gamma}$ has dimension $M=m_1+m_2+\cdots +m_K$. \end{lemma} \begin{proof} We can note that $\mathcal{M}_{f,\gamma}=(\prod_{k=1}^K (z-s_k)^{m_k})$ is a closed ideal of $A(\mathcal{R})$. First of all, given a complex number $\omega \in \mathcal{R}$ and an integer $m>0$ we claim that the quotient space $A(\mathcal{R})/((z-\omega)^{m})$ has dimension $m$ and that a basis is given by the set $\beta_1:=\{\overline{1},\overline{z-\omega},\overline{(z-\omega)^2},\cdots,\overline{(z-\omega)^{m-1}}\}$ (here an overline indicates an equivalence class). In fact, let $\alpha_0,\alpha_1,\cdots ,\alpha_{m-1}$ be complex numbers; then $$\overline{\sum_{l=0}^{m-1}\alpha_l(z-\omega)^{l}}=\overline{0}\, \, \quad \mbox{ in } \quad \, \, A(\mathcal{R})/((z-\omega)^{m})$$ if and only if $\sum_{l=0}^{m-1}\alpha_l(z-\omega)^{l} \in ((z-\omega)^{m})$. Thus, there exists a function $\psi \in ((z-\omega)^{m})$ such that $\sum_{l=0}^{m-1}\alpha_l(z-\omega)^{l}=\psi(z)$, and therefore $\alpha_l=\dfrac{1}{l!}\dfrac{d^l}{dz^l}\psi(z)|_{z=\omega}=0,$ for $\, \, 0\leq l \leq m-1$. It follows that $\beta_1$ is a linearly independent set.
Now let $\overline{h}\in A(\mathcal{R})/((z-\omega)^{m})$ and consider the complex numbers $\alpha_l=\dfrac{1}{l!}\dfrac{d^l}{dz^l}h(z)|_{z=\omega}, \, \, l=0,1,\cdots, m-1$; then $$\overline{h}=\overline{\sum_{l=0}^{m-1}\alpha_l(z-\omega)^{l}}\, \, \quad \mbox{ in } \quad \, \, A(\mathcal{R})/((z-\omega)^{m})\; .$$ \noindent As a second step, we show that for any $k\in \{1,2,\cdots,K\}$ the following equality holds $$A(\mathcal{R})/\mathcal{M}_{f,\gamma}=A(\mathcal{R})/((z-s_k)^{\sum_{j=1}^K m_j})\; .$$ In fact, fix any $k\in\{1,2,\cdots,K\}$ and let $h$ represent a class in $A(\mathcal{R})/\mathcal{M}_{f,\gamma}$; then $$h(z)=r(z)+\prod_{i=1}^K (z-s_i)^{m_i}\psi_0(z)\; .$$ Also, we have $$(z-s_i)^{m_i}=((z-s_k)+(s_k-s_i))^{m_i}=\sum_{n=0}^{m_i} a_n(z-s_k)^{m_i-n}(s_k-s_i)^{n}=(z-s_k)^{m_i} + p_i(z)\; ,$$ where the polynomial $p_i$ has degree $m_i-1$; therefore we obtain $$h(z)=r(z)+\prod_{i=1}^K (z-s_i)^{m_i}\psi_0(z)= r(z)+r_1(z)+(z-s_k)^{\sum_{j=1}^K m_j}\psi_0(z),$$ which implies that $h$ represents a class in $A(\mathcal{R})/((z-s_k)^{\sum_{j=1}^K m_j})$.\\ \noindent Conversely, let $h$ represent a class in $A(\mathcal{R})/((z-s_k)^{\sum_{j=1}^K m_j})$. Then $$h(z)= r(z)+(z-s_k)^{\sum_{j=1}^K m_j}\psi_0(z)=r(z)+\prod_{i=1}^K (z-s_k)^{m_i}\psi_0(z)\; .$$ Now, as in the previous step, we have $$(z-s_k)^{m_i}=((z-s_i)+(s_i-s_k))^{m_i}=\sum_{n=0}^{m_i} a_n(z-s_i)^{m_i-n}(s_i-s_k)^{n}=(z-s_i)^{m_i} + p_i(z),$$ where the polynomial $p_i$ has degree $m_i-1$. It follows that $$h(z)=r(z)+r_1(z)+\prod_{i=1}^K (z-s_i)^{m_i}\psi_0(z)\; ,$$ which implies that $h$ represents a class in $A(\mathcal{R})/\mathcal{M}_{f,\gamma}\,$. Therefore, using the first step, we conclude that $m_1+m_2+\cdots+m_K$ is the dimension of the quotient space $A(\mathcal{R})/\mathcal{M}_{f,\gamma}\,$, as claimed. \end{proof} \noindent As an immediate consequence of this lemma we have: \begin{cor} Let $\Omega \subset \mathbb{C}$ be an unbounded domain and assume that $f\in Hol(\Omega)$ has an infinite number of zeros $\{s_k\}$ with multiplicity $m_k$ for $k \in \{1,2,3, \cdots\}$.
Then the space $A(\Omega)/\mathcal{M}_{f}$ has infinite dimension, where \begin{equation*} \mathcal{M}_{f}= \{\psi\in Hol(\Omega): \psi \, \, is \, \, zero \, \, at\, \, s_k\, \, with \, \, multiplicity \, \, \geq m_k, \, \, 1\leq k < \infty\}\; . \end{equation*} \end{cor} \noindent {\sl Now we proceed to finish the proof of Proposition \ref{FDim}.} \begin{proof} Let $\phi \in Exp(\Omega)$ be given. From \cite[Theorem 5.3.12]{BoasBook} we have that its type is $\tau_{\phi}=\max_{\omega \in s(\mathcal{B}(\phi))}|\omega|$. Since $s(\mathcal{B}(\phi)) \subset \Omega$ is a discrete set, we can find a curve $\gamma$ in $\Omega$ whose enclosed region $\mathcal{R}$ contains the set $s(\mathcal{B}(\phi))$ (i.e., $\gamma \in H_1(\phi)$) and such that it also contains all zeros $s_i \in \Omega$ of the symbol $f$ with $ |s_i|<\tau_{\phi}\,$. Let $\{s_i\}_{i=1}^k$ be an enumeration of the zeros of $f$ in $\mathcal{R}\cap B_{\tau_{\phi}}(0)$ and let $m_i$ denote their corresponding multiplicities. We also note that, by Proposition \ref{ProChar}, there exists a measure $\mu$ supported on $\gamma \in H_1(\phi)$ such that $\phi=\mathcal{P}(\mu)$. Now, we note that an element $h\in A(\mathcal{R})/\mathcal{M}_{f,\gamma}$ is completely determined by the following set \begin{equation}\label{Elem} \left\{\dfrac{d^j}{dz^j}h(z)|_{z=s_i}: 0\leq j\leq m_i-1;\, \, 1\leq i\leq k\right\}\; . \end{equation} From Lemma \ref{Dimension}, we have that $A(\mathcal{R})/\mathcal{M}_{f,\gamma}$ has dimension $m_1+m_2+\cdots +m_k$; therefore its dual space has the same dimension.
Moreover, it is not difficult to see that the following collection of linear functionals $$\left\{d_{i,j}=\dfrac{d^j}{dz^j}|_{z=s_i}: 0\leq j\leq m_i-1;\, \, 1\leq i\leq k \right\}\; ,$$ consists of $m_1+m_2+\cdots +m_k$ elements of the space $ (A(\mathcal{R}))^*$ which annihilate $\mathcal{M}_{f,\gamma}$; therefore they induce the following $m_1+m_2+\cdots +m_k$ linearly independent elements in the dual space of $ A(\mathcal{R})/\mathcal{M}_{f,\gamma}$ $$ \{\widetilde{d_{i,j}}: 0\leq j\leq m_i-1;\, \, 1\leq i\leq k\}\; ,$$ where $\widetilde{d_{i,j}}(\overline{\phi})=d_{i,j}(\phi)$ for $\overline{\phi}\in A(\mathcal{R})/\mathcal{M}_{f,\gamma}\,$. Consequently, every element $\varrho \in (A(\mathcal{R})/\mathcal{M}_{f,\gamma})^*$ can be written in the form $$\varrho =\sum_{i=1}^k \sum_{j=0}^{m_i-1}a_{i,j}\widetilde{d_{i,j}} $$ for some $a_{i,j}\in \mathbb{C}$. Now, given an element $\phi \in A(\mathcal{R})$, there exists a unique $r\in A(\mathcal{R})$ such that $\overline{\phi}= \overline{r}$ in $A(\mathcal{R})/\mathcal{M}_{f,\gamma}$, and using the characterization given in (\ref{Elem}) we have $$\varrho(\overline{\phi})=\varrho(\overline{r})=\sum_{i=1}^k \sum_{j=0}^{m_i-1}a_{i,j}d_{i,j}(r)= \sum_{i=1}^k \sum_{j=0}^{m_i-1}a_{i,j}\dfrac{d^j}{dz^j}(\phi)|_{z=s_i}\; .$$ On the other hand, from the equation $\mathcal{P}(f\cdot\mu)=0$ we have that the measure $\mu$ defines a functional on $A(\mathcal{R})$ which annihilates $\mathcal{M}_{f,\gamma}$ and it induces a functional $\widetilde{\mu}$ on $A(\mathcal{R})/\mathcal{M}_{f,\gamma}\,$. Therefore, there exist complex numbers $b_{i,j}$ such that $$\widetilde{\mu}=\sum_{i=1}^k \sum_{j=0}^{m_i-1}b_{i,j}\widetilde{d_{i,j}}\; .$$ Then, we have \begin{eqnarray*} \phi(t)&=&\mathcal{P}(\mu)(t)= \int_{\gamma}e^{tz}d\mu(z)=\widetilde{\mu}(\overline{e^{tz}})=\sum_{i=1}^k \sum_{j=0}^{m_i-1}b_{i,j}\dfrac{d^j}{dz^j}(e^{tz})|_{z=s_i}\\ &=&\sum_{i=1}^k \left( \sum_{j=0}^{m_i-1}b_{i,j}t^j \right) e^{ts_i}\\ &=&\sum_{i=1}^k p_i(t)e^{ts_i}\; . \end{eqnarray*} The proof of Proposition \ref{FDim} is finished.
\end{proof} \begin{cor}\label{corcor} Let $R>0$ and $f\in Hol(B_R(0))$. Then, a function $\phi \in Exp(B_R(0))$ of exponential type $\tau_{\phi}$ is a solution of the homogeneous equation $f(\partial_t)\phi=0$ if and only if there exist polynomials $p_k$ of degree less than the multiplicity of the root $s_k \in \mathcal{Z}(f)\cap B_{\tau_{\phi}}(0)$, such that $$\phi(t)=\sum_{\substack{s_k\in \mathcal{Z}(f) \\ |s_k|<\tau_{\phi}}} p_k(t)e^{ts_k}\; .$$ \end{cor} In particular, if the symbol $f$ is an entire function, we deduce Theorem 5.1 in \cite{CPR} from Corollary \ref{corcor}. The following theorem is an easy application of the previous results; it generalizes Proposition \ref{FDim}. \begin{theorem}\label{TheSol} Let $f \in Hol(\Omega)$ and $g\in Exp(\Omega)$. Then a function $\phi \in Exp(\Omega)$ of exponential type $\tau_{\phi}$ is a solution of the non-homogeneous equation $f(\partial_t)\phi=g$ if and only if there exist polynomials $p_k$ of degree less than the multiplicity of the root $s_k \in \mathcal{Z}(f)\cap B_{\tau_{\phi}}(0)$, such that $$\phi(t)=\mathcal{P}\left( \dfrac{\mu_{g}}{f} \right)(t)+\sum_{\substack{s_k\in \mathcal{Z}(f) \\ |s_k|<\tau_{\phi}}} p_k(t)e^{ts_k}\; .$$ \end{theorem} \section{Linear zeta-nonlocal field equations} Now we apply the theory developed in the previous section to find explicit solutions of the following linear zeta-nonlocal field equation: \begin{equation}\label{EquEnt} \zeta(\partial^2_t+h)\phi=g\; , \end{equation} in which $h$ is a real parameter. Our solution depends crucially on the properties of $g$. We show that if $g$ is of exponential type, then so is $\phi$, and solving (\ref{EquEnt}) explicitly is rather straightforward. However, if the data $g$ is not of exponential type, the analysis becomes much more delicate. We consider this problem in Subsection 4.2, in which we assume that the Laplace transform $\mathcal{L}(g)$ exists and has an analytic extension up to an appropriate angular contour.
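To make the structure of the solutions in Theorem \ref{TheSol} concrete, consider the simplest possible symbol $f(s)=s-1$ on $\Omega=\mathbb{C}$, so that $f(\partial_t)\phi=\phi'-\phi$. For the source $g(t)=e^{2t}$ (of exponential type $2$), the measure $\mu_g$ is a point mass at $s=2$, the particular solution is $\mathcal{P}(\mu_g/f)(t)=e^{2t}/f(2)=e^{2t}$, and the homogeneous part is $p\,e^{t}$ with $p$ an arbitrary constant, since $s=1$ is the only (simple) zero of $f$. The following Python sketch (our illustration, not part of the original text; the function names are ours) checks this prediction numerically with a finite-difference derivative:

```python
import math

def f_of_ddt(phi, t, h=1e-6):
    # numerically evaluate (phi' - phi)(t), i.e. f(d/dt)phi for the symbol
    # f(s) = s - 1, using a central finite difference for the derivative
    return (phi(t + h) - phi(t - h)) / (2 * h) - phi(t)

g = lambda t: math.exp(2 * t)                      # source of exponential type 2
p = 3.7                                            # arbitrary homogeneous coefficient
phi = lambda t: math.exp(2 * t) + p * math.exp(t)  # predicted solution from Theorem TheSol

# phi' - phi = e^{2t} for every t, regardless of the value of p
for t in [0.0, 0.5, 1.0]:
    assert abs(f_of_ddt(phi, t) - g(t)) < 1e-4
```

The check passes for any choice of $p$, mirroring the fact that the homogeneous term $p\,e^{t}$ is annihilated by the operator.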
In this section we use the notation introduced in Section 2. \subsection{Zeta-nonlocal field equation with source function in $Exp(\Omega)$} Equation (\ref{EquEnt}) can be solved completely in the space of entire functions of exponential type. Since Equation (\ref{EquEnt}) depends on the values of $h$, we study it in three different cases: \subsubsection{ {\bf Case $h>1$.}} In this case the symbol $\zeta_h\circ p(s)=\zeta(s^2+h)$ has poles $i\sqrt{h-1}$ and $-i\sqrt{h-1}$. As we have already pointed out, the behavior of $\zeta_h\circ p(s)$ can be represented in the following picture: \begin{center} \includegraphics[width=4cm, height=4cm]{111} \end{center} {\footnotesize The poles of $\zeta(s^2+h)$ are the vertices of the dark hyperbola, indicated by two thick dots. The trivial zeros of $\zeta(s^2+h)$ are indicated by thin dots on the imaginary axis, and the non-trivial zeros are located in the darker painted region (critical region).} \\ Now, let us consider the simply connected domain $$\Omega := \mathbb{C} \setminus \left\{ s\in \mathbb{C}: Re(s)\geq 0, \; \; |Im(s)|=\sqrt{h-1} \right\}\; .$$ We see that the symbol $\zeta_h\circ p$ is holomorphic in $\Omega$, and therefore for a source function $g \in Exp(\Omega)$, Equation (\ref{EquEnt}) becomes the following integral equation for the measure $\mu_{\phi}$: \begin{equation}\label{ZIntEq} \mathcal{P}((\zeta_h\circ p)\cdot \mu_{\phi})(t)=g(t)\; . \end{equation} \begin{theorem}\label{TeoIEZ} Let $g\in Exp(\Omega)$.
Then a function $\phi \in Exp(\Omega)$ of exponential type $\tau_{\phi}$ is a solution of the integral equation $(\ref{ZIntEq})$ if and only if there exist polynomials $p_k$ of degree less than the multiplicity of the root $s_k \in \mathcal{Z}(\zeta_h\circ p)\cap B_{\tau_{\phi}}(0)$, such that $$\phi(t)=\int_{\gamma}\dfrac{e^{ts}}{\zeta(s^2+h)} d\mu_g(s)+ \sum_{\substack{s_k\in \mathcal{Z}(\zeta_h\circ p) \\ |s_k|<\tau_{\phi}}} p_k(t)e^{ts_k}\; ,$$ where $\gamma \in H_1(g)$ encloses the roots $s_k \in \mathcal{Z}(\zeta_h\circ p)\cap B_{\tau_{\phi}}(0)$. \end{theorem} \begin{rem} In this theorem (and also in the results that follow) we find that the solution $\phi(t)$ depends on polynomials $p_k(t)$. These polynomials are calculated using the zeros (and their orders) of the function $\zeta_h\circ p$, see Proposition 3.6 and Theorem 3.11. We comment further on this in Subsection 4.2. \end{rem} On the other hand, we can note that for a given $R<\sqrt{h-1}$, the domain $\Omega$ contains the ball $B_R(0)$, and since the symbol $\zeta_h\circ p(s)$ is analytic in this ball, it can be expressed there in its Taylor series representation, say $$\zeta(s^2+h)= \sum_{k=0}^{\infty}a_k(h)s^k\; , \quad |s|<R\; .$$ Therefore, using Proposition \ref{IODEq}, in the space $Exp(B_R(0))$ we have that Equation (\ref{EquEnt}) can be viewed as the following infinite order differential equation \begin{equation}\label{ZIODEq} \sum_{k=0}^{\infty}a_k(h)\dfrac{d^k}{dt^k}\phi(t)=g(t)\; . \end{equation} In this situation, we have the following result: \begin{theorem}\label{TeoIOZ} Let $R<\sqrt{h-1}$ and $g\in Exp(B_R(0))$.
Then, a function $\phi \in Exp(B_R(0))$ of exponential type $\tau_{\phi}$ is a solution of the infinite order zeta-nonlocal field equation $(\ref{ZIODEq})$ if and only if there exist polynomials $p_k$ of degree less than the multiplicity of the root $s_k \in \mathcal{Z}(\zeta_h\circ p) \cap B_R(0)$, such that $$\phi(t)=\int_{|s|=R}\dfrac{e^{ts}}{\zeta(s^2+h)} d\mu_g(s) +\sum_{\substack{s_k\in \mathcal{Z}(\zeta_h\circ p) \\ |s_k|<\tau_{\phi}}} p_k(t)e^{ts_k}\; . $$ \end{theorem} \subsubsection{ {\bf Case $h<1$.}} In this case we have that $\sqrt{1-h}$ and $-\sqrt{1-h}$ are the poles of the symbol $\zeta_h\circ p$. The behavior of $\zeta_h\circ p$ is represented in the following picture \begin{center} \includegraphics[width=4cm, height=4cm]{222} \end{center} {\footnotesize The poles of $\zeta(s^2+h)$ are the vertices of the dark hyperbola, indicated by two thick dots. The trivial zeros of $\zeta(s^2+h)$ are indicated by thin dots on the real axis; the non-trivial zeros are located in the darker painted region (critical region).} Therefore, choosing as our basic region $\Omega$ the following domain: $$\Omega := \mathbb{C} \setminus \left\{ s\in \mathbb{C}: Im(s)\geq 0, \; \; |Re(s)|=\sqrt{1-h} \right\}\; ,$$ we can obtain theorems for the equation $\zeta(\partial_t^2+h)\phi=g$ which are similar to Theorem \ref{TeoIEZ} and Theorem \ref{TeoIOZ}. \subsubsection{ {\bf Case $h=1$.}} In this case there is a pole at $s=0$. We saw that the behavior of $\zeta_1\circ p$ is represented in the following picture \begin{center} \includegraphics[width=4cm, height=4cm]{333} \end{center} {\footnotesize The pole of $\zeta(s^2+1)$ is the origin (the vertex of the dark curves $y=|x|$ and $y=-|x|$).
The trivial zeros of $\zeta(s^2+1)$ are indicated by thin dots on the imaginary axis; the non-trivial zeros are located in the darker painted region (critical region).} \\ Let $\Omega$ be the region $$\Omega := \mathbb{C} \setminus \{s\in \mathbb{C}: Re(s)\geq 0, \; \; |Im(s)|=0 \} \; .$$ Since we cannot construct a ball around the origin in which $\zeta_1 \circ p$ is analytic, we cannot obtain a result analogous to Theorem \ref{TeoIOZ} for the equation $$\zeta(\partial_t^2+1)\phi=g\; , $$ that is, we {\em do not have} an ``infinite order equation'' but a genuine nonlocal equation. On the other hand, it is possible to state a result analogous to Theorem \ref{TeoIEZ}. We omit the details. \subsection{Zeta-nonlocal field equation with source function in $\mathcal{L}_{>}(\mathbb{R}_+)$} Now we consider the case in which the source function $g(t)$, $t\geq 0$, is an analytic function not necessarily of exponential type. We assume that it possesses a Laplace transform, and therefore that there exists a real number $a$ such that the integral $$\mathcal{L}(g)(z)=\int_0^{\infty}e^{-tz}g(t)dt \; ,$$ converges absolutely and uniformly on the half-plane $\{z\in \mathbb{C} : Re(z)>a\}$, on which the function $z\to \mathcal{L}(g)(z)$ is analytic. We also assume that $\mathcal{L}(g)$ has an analytic extension to the left of $Re(s)=a$ up to a singularity $a_0$, and that this new region of analyticity has an angular contour $\kappa_{\infty}$ as its boundary. Hereafter we denote by $\mathcal{L}_{>}(\mathbb{R}_+)$ the space of analytic functions that possess the properties described above. The problem of interest in this situation is to solve the following equation \begin{equation}\label{ExEq} \zeta(\partial_t^2+h)f=g, \end{equation} for a given $g\in \mathcal{L}_{>}(\mathbb{R}_+)$, where the operator $\zeta(\partial_t^2+h)$ needs to be properly defined in order to obtain a well-posed problem.
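As a simple illustration of the class $\mathcal{L}_{>}(\mathbb{R}_+)$ (a hedged numerical sketch of ours, not taken from the original text): for $g(t)=\sqrt{t}$ one has $\mathcal{L}(g)(z)=\Gamma(3/2)\,z^{-3/2}$, which is analytic for $Re(z)>0$ and extends analytically to the complex plane minus a cut issuing from the branch point $a_0=0$, so the boundary of the extended region can indeed be taken to be an angular contour. The transform can be checked at a real point with composite Simpson quadrature (the helper name `laplace_numeric` is ours):

```python
import math

def laplace_numeric(g, z, T=60.0, n=60000):
    # composite Simpson approximation of the truncated Laplace integral
    #   int_0^T exp(-z t) g(t) dt   (the tail beyond T is negligible here)
    h = T / n
    s = g(0.0) + math.exp(-z * T) * g(T)
    for k in range(1, n):
        w = 4 if k % 2 else 2
        s += w * math.exp(-z * k * h) * g(k * h)
    return s * h / 3

g = lambda t: math.sqrt(t)
z = 2.0
exact = math.gamma(1.5) / z ** 1.5   # L(sqrt(t))(z) = Gamma(3/2) * z^(-3/2)
assert abs(laplace_numeric(g, z) - exact) < 1e-3
```

The singularity of $z^{-3/2}$ at $z=0$ plays the role of $a_0$ in the truncated-equation construction below.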
The solution of Equation (\ref{ExEq}), if it exists, will not necessarily be an entire function of exponential type. Let $g\in \mathcal{L}_{>}(\mathbb{R}_+)$ and let the first singularity of the analytic extension of $\mathcal{L}(g)$ up to an angular contour $\kappa_{\infty}$ be $a_0=0$. Now consider an angle $\frac{\pi}{2} < \psi \leq \pi$, a positive real number $r>0$, and let $\kappa_r$ be a finite angular contour contained in $\kappa_{\infty}$. Concretely, $\kappa_r$ is composed of a circular sector of radius $\delta$ centered at the origin and the respective rays of opening $\pm \psi$, as in the following picture: \begin{center} \includegraphics[width=6cm, height=5cm]{0000} \end{center} \noindent Now, let us pick the complex measure $$d\mu_r(s):=\mathcal{X}_{\kappa_r}(s)\mathcal{L}(g)(s)\frac{ds}{2\pi i}\; ,$$ where $\mathcal{X}_{\kappa_r}$ denotes the characteristic function of the contour $\kappa_r$. This measure allows us to define the following function $g_r:\mathbb{C}\to \mathbb{C}$ using the $\mathcal{P}$-transform: $$g_r(z):=\mathcal{P}(\mu_r)(z)=\int_{\kappa_r}e^{zs}\mathcal{L}(g)(s)\frac{ds}{2\pi i}\; .$$ \begin{lemma}\label{LemFin1} We have: \begin{enumerate} \item The function $g_r$ is an entire function of order $1$ and exponential type $r$. \item For each $r>0$, the analytic continuation of the Borel transform of $g_r$ is $\mathcal{B}(g_r)(z)=K*\mu_r(z)$, where $K(z)= 1/z$, and its conjugate diagram is the convex hull of the contour $\kappa_r$. In particular, if we consider the measure $$d\mu_{g_r}(s)=K*\mu_r(s)\frac{ds}{2\pi i}\; ,$$ then $g_r=\mathcal{P}(\mu_r)=\mathcal{P}(\mu_{g_r}).$ \end{enumerate} \end{lemma} \begin{proof} \noindent We prove item {\sl 1}.
Let $r>0$ be fixed; first, we note that for $n\geq 0$ $$g^{(n)}_r(0)=\int_{\kappa_r}s^n\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}\; .$$ Now, defining $$M_r:=\dfrac{1}{2\pi}\int_{\kappa_r}|\mathcal{L}(g)(s)ds|\; ,$$ we obtain $$\dfrac{\ln{|g^{(n)}_r(0)|}}{n\ln n}\leq \dfrac{\ln{r^nM_r}}{n\ln n} = \dfrac{n\ln{r}+\ln{M_r}}{n\ln n}\; ,$$ which approaches zero as $n\to \infty$. Therefore, using formula (\ref{Order}) we obtain the order of $g_r$ as $$\rho = \left(1-\limsup_{n\to \infty}\dfrac{\ln{|g_{r}^{(n)}(0)|}}{n\ln(n)}\right)^{-1}=1\; .$$ With this information, we compute the type of $g_r$ using formula (\ref{Type}): $$\sigma=\limsup_{n\to \infty} |g^{(n)}_r(0)|^{1/n}.$$ It is not difficult to see that $\sigma \leq r$; we will conclude that $\sigma =r$ by considering the region of analyticity of the Borel transform of $g_r$ using item {\sl 2}.\\ \noindent {\sl 2}. Since $\kappa_r$ is compact we have $$g_r(z)=\int_{\kappa_r}\sum_{n=0}^{\infty}\frac{(sz)^n}{n!}\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}= \sum_{n=0}^{\infty}\frac{z^n}{n!}\int_{\kappa_r}s^n\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}= \sum_{n=0}^{\infty}\frac{a_n}{n!}z^n\; ,$$ where $$a_n:=\int_{\kappa_r}s^n\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}\; ,$$ and we have used uniform convergence. Now, for $|z|>r$ we have $$B(g_r)(z)= \sum_{n=0}^{\infty}\frac{a_n}{z^{n+1}}= \sum_{n=0}^{\infty}\int_{\kappa_r}\dfrac{1}{z}\left( \dfrac{s}{z}\right)^n\mathcal{L}(g)(s)\dfrac{ds}{2\pi i} = \int_{\kappa_r}\dfrac{1}{z-s}\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}\; .$$ This calculation means that the analytic continuation of the Borel transform of $g_r$ is $$\mathcal{B}(g_r)(z)= \int_{\kappa_r}\dfrac{1}{z-s}\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}= \int_{\mathbb{C}}K(z-s)d\mu_r(s)=K*\mu_r(z)\; ,$$ which is an analytic function for every $z\in \mathbb{C} \setminus \kappa_r$. As a by-product we have that the conjugate diagram of $\mathcal{B}(g_r)$ is the convex hull of the contour $\kappa_r$.
Moreover, this means that the type of the function $g_r$ must be $\tau_{g_r}\geq r$, so that by using the computation in item {\sl 1} we conclude that $\tau_{g_r} = r$. This completes the proof of item {\sl 1}. Finally, we note that the description of the Borel transform of $g_r$ implies that $g_r$ is recovered via the $\mathcal{P}$-transform from the measure $$d\mu_{g_r}(s)=K*\mu_r(s)\frac{ds}{2\pi i} \; .$$ \end{proof} \subsubsection{The truncated equation.} In this subsection we consider the following ``truncated'' equation \begin{equation} \label{Treq} \zeta(\partial^2_t+h)f_r=g_r\; , \quad \; \; h>1\; , \end{equation} for each $r>0$. We note that in the case $h>1$, the poles of the function $\zeta(s^2+h)$ are $i\sqrt{h-1}$ and $-i\sqrt{h-1}$, and therefore we analyse Equation (\ref{Treq}) in the domain $$\Omega := \mathbb{C} \setminus \{s\in \mathbb{C}: Re(s)\geq 0, \; \; |Im(s)|=\sqrt{h-1} \}\; ,$$ which was used in Subsection 4.1. The following theorem shows that Equation (\ref{Treq}) is well posed in the space $Exp(\Omega)$. \begin{theorem}\label{TeoSolZ} A general solution to Equation $(\ref{Treq})$ in the space $Exp(\Omega)$ is provided by the function \begin{equation}\label{SolZeta} \phi_r(z):=\int_{\gamma'}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}= \int_{\kappa_r}e^{sz}\dfrac{\mathcal{L}(g)(s)}{\zeta(s^2+h)}\dfrac{ds}{2\pi i} +\sum_{j=1}^{N_r}p_j(z)e^{\tau_jz}\; , \end{equation} where $\gamma' \in H_{1}(g_r)$ is such that it encloses the zeros $\{\tau_j, j=1,2, \cdots,N_r\}$ of the function $\zeta(s^2+h)$ which lie in the closed ball $\overline{B}_r(0)$, and $p_j(z)$ are polynomials of degree $ord(\tau_j)-1$.
\end{theorem} \begin{proof} By Theorem \ref{TheSol} we know that a solution of Equation (\ref{Treq}) is $$\int_{\gamma'}e^{sz}\dfrac{\mathcal{B}(g_r)(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}= \int_{\gamma'}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i} \; ,$$ where $\gamma'$ is the curve in the following picture \begin{center} \includegraphics[width=6cm, height=5cm]{555} \end{center} Since the conjugate diagram $S$ of $\mathcal{B}(g_r)(z)$ is the convex hull of the contour $\kappa_r$, we can decompose the circle $\{z:|z|=r\}$ into three pieces $\gamma_1,\gamma_2,\gamma_3$ in which $\gamma_1$ is in the region of analyticity of $\zeta(s^2+h)$ and contains the set $S$ in its interior, while the other two closed paths contain the zeros of $\zeta(s^2+h)$ in their interiors, as in the following picture \begin{center} \includegraphics[width=6cm, height=5cm]{666} \end{center} Therefore, \begin{eqnarray}\label{3Int} \phi_r(z)=\int_{|s|=r}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}&=&\int_{\gamma_1}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}+\int_{\gamma_2}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}+\nonumber \\ &+&\int_{\gamma_3}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}\; . \end{eqnarray} \noindent We compute the first integral. Using Fubini's theorem and the Cauchy integral formula, we obtain \begin{eqnarray*} \int_{\gamma_1}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}&=&\int_{\gamma_1}\dfrac{e^{sz}}{\zeta(s^2+h)}\int_{\kappa_r}\dfrac{1}{s-\omega}\mathcal{L}(g)(\omega)\dfrac{d\omega}{2\pi i}\frac{ds}{2\pi i}\\ &=&\int_{\kappa_r}\mathcal{L}(g)(\omega)\int_{\gamma_1}\dfrac{e^{sz}}{\zeta(s^2+h)}\dfrac{1}{s-\omega}\frac{ds}{2\pi i}\dfrac{d\omega}{2\pi i}\\ &=&\int_{\kappa_r}\dfrac{e^{\omega z}}{\zeta(\omega ^2+h)}\mathcal{L}(g)(\omega)\dfrac{d\omega}{2\pi i}\; . \end{eqnarray*} \noindent Now we compute the second integral.
Using Fubini's theorem again we have \begin{eqnarray*} \int_{\gamma_2}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}&=&\int_{\kappa_r}\mathcal{L}(g)(\omega)\int_{\gamma_2}\dfrac{e^{sz}}{\zeta(s^2+h)}\dfrac{1}{(s-\omega)}\frac{ds}{2\pi i}\dfrac{d\omega}{2\pi i}\; , \end{eqnarray*} but now we cannot apply Cauchy's integral formula as before, since the zeros of $\zeta(s^2+h)$ are now poles of the function \begin{equation}\label{PolFunct} F(s)=\dfrac{e^{sz}}{\zeta(s^2+h)}\; \dfrac{1}{(s-\omega)}\; ; \end{equation} however, we can use the residue theorem. Let $\tau_j$ be a zero of the function $\zeta(s^2+h)$ lying inside the region enclosed by the curve $\gamma_2$. We have $$Res (F,\tau_j)=\sum_{l=0}^{ord(\tau_j)-1}h_l(\omega,\tau_j)z^le^{\tau_j z}\; ,$$ for some functions $h_l$. Now we let $N_{2,r}$ be the number of zeros of the function $\zeta(s^2+h)$ inside the region enclosed by the curve $\gamma_2$. We conclude that the second integral becomes \begin{eqnarray*} \int_{\gamma_2}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}&=&\int_{\kappa_r}\mathcal{L}(g)(\omega)\sum_{j=1}^{N_{2,r}} \left( \sum_{l=0}^{ord(\tau_j)-1}h_l(\omega,\tau_j)z^le^{\tau_j z}\right) \dfrac{d\omega}{2\pi i}\\ &=&\sum_{j=1}^{N_{2,r}} \sum_{l=0}^{ord(\tau_j)-1} z^le^{\tau_j z} \int_{\kappa_r}\mathcal{L}(g)(\omega)h_l(\omega,\tau_j)\dfrac{d\omega}{2\pi i}\\ &=& \sum_{j=1}^{N_{2,r}} \sum_{l=0}^{ord(\tau_j)-1} A_l(\tau_j) z^le^{\tau_j z}\\ &=&\sum_{j=1}^{N_{2,r}}a_j(z)e^{\tau_j z}\; , \end{eqnarray*} where we have defined the polynomials $$a_j(z):=\sum_{l=0}^{ord(\tau_j)-1} A_l(\tau_j) z^l\; .$$ \noindent Finally, let $N_{3,r}$ be the number of zeros of the function $\zeta(s^2+h)$ inside the region enclosed by the curve $\gamma_3$.
We use the same strategy as above for the third integral in (\ref{3Int}) and we obtain \begin{eqnarray*} \int_{\gamma_3}e^{sz}\dfrac{K*\mu_r(s)}{\zeta(s^2+h)}\frac{ds}{2\pi i}=\sum_{j=1}^{N_{3,r}}b_j(z)e^{\tau_j z}\; . \end{eqnarray*} Putting $N_r=N_{2,r}+N_{3,r}$ as the number of zeros inside the closed ball $\overline{B}_r(0)$, and setting $p_j= a_j$ for $j=1,2, \cdots , N_{2,r}$ and $p_{N_{2,r}+j}= b_j$ for $j=1,2, \cdots , N_{3,r}$, we obtain equality (\ref{SolZeta}) and the theorem is proved. \end{proof} In what follows we consider only the particular solution \begin{equation}\label{ForParSol} \phi_r(z)=\int_{\kappa_r}e^{sz}\dfrac{\mathcal{L}(g)(s)}{\zeta(s^2+h)}\dfrac{ds}{2\pi i}\; \end{equation} \noindent to Equation (\ref{Treq}). This solution is obtained from Theorem \ref{TeoSolZ} by using a curve $\gamma_1$ as in the following picture \begin{center} \includegraphics[width=6cm, height=5cm]{777} \end{center} One reason for considering only this expression is that the contribution of the second summand in (\ref{SolZeta}) ``can be ignored'', since it corresponds to a solution of the homogeneous equation $\zeta(\partial_t^2+h)f_r=0$. Also, we note that it is still an open problem whether the zeros of the Riemann Zeta function are simple or not (see for example \cite{Anderson83,Bui13,Cheer93}); consequently, we do not even know a precise upper bound for the degree of the polynomials $p_j$ appearing in Theorem \ref{TeoSolZ}! Such information could be used, for example, for the study of the uniform convergence of the sequence of partial sums determined by the second summand in (\ref{SolZeta}) for each $r$. \begin{rem} On the other hand, from {\rm \cite{Brend79,Brend82,VdeLune83,VdeLune86}}, we know that the first zeros of the Riemann zeta function are simple; therefore the first zeros of $\zeta(s^2+h)$ are also simple. 
Let $r>0$ and suppose that the curve $\gamma' \in H_1(g_r)$ encloses the first known simple zeros of $\zeta(s^2+h)$; then, in this situation the full representation formula for the solution given in Theorem \ref{TeoSolZ} is more concrete. This situation is treated in the example that follows. \end{rem} \begin{example} From the work \cite{VdeLune86} (and references therein) we know that at least the first $1{,}500{,}000{,}001$ zeros of the Riemann Zeta function are simple and lie on the critical line; therefore the first zeros of $\zeta(s^2+h)$ are also simple. This implies that the first terms of the sequence of sums in the representation formula (\ref{SolZeta}) are easy to calculate. In fact, let $r>0$ be such that the curve $\gamma' \in H_1(g_r)$ encloses the first $3{,}000{,}000{,}002$ simple zeros of the shifted Riemann Zeta function $\zeta(s^2+h)$. \\ Let $j\in \{1,2,3,\cdots ,3{,}000{,}000{,}002\}$ and let $\tau_j$ be the corresponding simple zero. If we define $$\zeta_j=\lim_{s\to \tau_j}\dfrac{\zeta(s^2+h)}{s-\tau_j}\; ,$$ then, applying the residue theorem to the function $F$ defined in Equation (\ref{PolFunct}) we obtain $$Res(F,\tau_j)=\dfrac{e^{\tau_jz}}{\zeta_j (\tau_j-\omega)}\; .$$ Therefore, from the proof of Theorem \ref{TeoSolZ} we have that the representation formula of the solution is reduced to $$\phi_r(z)=\int_{\kappa_r}e^{sz}\dfrac{\mathcal{L}(g)(s)}{\zeta(s^2+h)}\dfrac{ds}{2\pi i} +\sum_{j=1}^{N_r}c_je^{\tau_jz}\; ,$$ where $c_j$ are the following complex numbers: $$c_j:=\int_{\kappa_r}\dfrac{\mathcal{L}(g)(\omega)}{\zeta_j(\tau_j-\omega)}\dfrac{d\omega}{2\pi i}\; .$$ \end{example} \subsubsection{A particular solution} The proof of the following lemma can be found in \cite[Theorem 36.1]{Doetsch}. \begin{lemma}\label{Lemmm} Let $g \in \mathcal{L}_{>}(\mathbb{R}_+)$ and let $\kappa$ be the angular contour of the domain of the analytic extension of $\mathcal{L}(g)$ with centre $a_0=0$ and half-angle of opening $\psi$, where 
$\frac{\pi}{2}<\psi\leq \pi$. Then, the function $$g_{\infty}(z):=\int_{\kappa_{\infty}}e^{zs}\mathcal{L}(g)(s)\frac{ds}{2\pi i}$$ is analytic in an angular region with horizontal bisector and half-angle of opening $\psi-\frac{\pi}{2}$. \end{lemma} \noindent Let us denote by $D_{\psi}$ the angular region with horizontal bisector and half-angle of opening $\psi-\frac{\pi}{2}$ arising in the previous lemma, see \cite[figure 32, p. 243]{Doetsch}. We note that $g_{\infty}$ is analytic in $D_{\psi}$. We can estimate $\psi$: the real functions $y=|x|$ and $y=-|x|$ are asymptotes to the region which contains the zeros of $\zeta(s^2+h)$. Therefore, the angle $\psi$ satisfies $\frac{3\pi}{4}< \psi \leq \pi$. This gives us a natural fixed angular region $D_{\frac{3\pi}{4}}$ on which the function $g_{\infty}$ is analytic, since $D_{\frac{3\pi}{4}}\subset D_{\psi}$ for all $\frac{3\pi}{4}< \psi \leq \pi$. \begin{proposition} \label{lim} Let $a_0=0$ be the first singularity of the analytic extension of $\mathcal{L}(g)$ and also let $\frac{3\pi}{4}< \psi \leq \pi$ be the angle described in Lemma {\rm \ref{Lemmm}}. Then, on compact subsets of $D_{\psi} \subset \mathbb{C}$ we have: \begin{enumerate} \item The sequence $\{g_r\}_{r>0}$ converges uniformly to $$g_{\infty}(z)=\int_{\kappa_{\infty}}e^{zs}\mathcal{L}(g)(s)\frac{ds}{2\pi i}\; .$$ \item The sequence $\{f_r\}_{r>0}$ converges uniformly to $$f_{\infty}(z):=\int_{\kappa_{\infty}}e^{sz}\dfrac{\mathcal{L}(g)(s)}{\zeta(s^2+h)}\dfrac{ds}{2\pi i}\; .$$ \end{enumerate} In particular both conclusions hold on $D_{\frac{3\pi}{4}}$. \end{proposition} \begin{proof} We prove Item {\sl 1}. Let $K$ be a compact subset of $D_{\psi}$; since it is closed and bounded, there is a positive number $\delta$ such that the distance between $K$ and $\partial D_{\psi}$ (the topological boundary of $D_{\psi}$) is at least $\delta$. 
Also, there exist a positive number $A$ and angles $\theta_1,\; \theta_2$ satisfying $\frac{\pi}{2}-\psi<\theta_1<\theta_2<\psi-\frac{\pi}{2}$, such that for all $z\in K$ \begin{enumerate} \item[a.] $|z|\geq A$, and \item[b.] $\theta_1<\theta_z<\theta_2$, where $\theta_z$ denotes the angle of $z$ with the real line, $z = |z| \exp(i \theta_z)$. \end{enumerate} Therefore, for $z\in K$ we have \begin{eqnarray*} |g_r(z)-g_{\infty}(z)|&=&\left|\int_{\kappa_{\infty}-\kappa_r}e^{sz}\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}\right|\\ &\leq &\left|\int_r^{\infty}e^{te^{i\psi}z}\mathcal{L}(g)(te^{i\psi})e^{i\psi}\dfrac{dt}{2\pi}\right|+\left|\int_r^{\infty}e^{te^{-i\psi}z}\mathcal{L}(g)(te^{-i\psi})e^{-i\psi}\dfrac{dt}{2\pi}\right|. \end{eqnarray*} Let $l_z=|z|$; for the first integral, we have \begin{eqnarray*} \left|\int_r^{\infty}e^{te^{i\psi}z}\mathcal{L}(g)(te^{i\psi})e^{i\psi}\dfrac{dt}{2\pi}\right|&=&\left|\int_r^{\infty}e^{tl_ze^{i(\psi+\theta_z)}}\mathcal{L}(g)(te^{i\psi})e^{i\psi}\dfrac{dt}{2\pi}\right|\\ &\leq&\int_r^{\infty}e^{tl_z\cos(\psi+\theta_z)}\left|\mathcal{L}(g)(te^{i\psi})\right|\dfrac{dt}{2\pi}\; . \end{eqnarray*} By the Riemann-Lebesgue Lemma we have that $\mathcal{L}(g)$ is bounded, and therefore $|\mathcal{L}(g)(te^{i\psi})|\leq M_{\mathcal{L}(g)}$ for $t\geq r$. Also, $\frac{\pi}{2}<\theta_1+\psi<\psi+\theta_z<\theta_2+\psi<\frac{3\pi}{2}$, which implies that $\cos(\psi+\theta_z)<-B<0$, for some $B>0$. Therefore, we have \begin{eqnarray*} \int_r^{\infty}e^{tl_z\cos(\psi+\theta_z)}\left|\mathcal{L}(g)(te^{i\psi})\right|\dfrac{dt}{2\pi} &\leq& M_{\mathcal{L}(g)}\int_r^{\infty}e^{tl_z\cos(\psi+\theta_z)}\dfrac{dt}{2\pi}\\ &\leq& M_{\mathcal{L}(g)}\int_r^{\infty}e^{-tl_zB}\dfrac{dt}{2\pi}=\dfrac{M_{\mathcal{L}(g)}}{2\pi l_zB}e^{-rl_zB}\\ &\leq&\dfrac{M_{\mathcal{L}(g)}}{2\pi AB}e^{-rAB}\; . \end{eqnarray*} For the second integral, we have $-\frac{3\pi}{2}<\theta_1-\psi<\theta_z-\psi<\theta_2-\psi<-\frac{\pi}{2}$, and therefore there is a constant $C>0$ such that $\cos(\theta_z-\psi)<-C<0$. 
Then \begin{eqnarray*} \left|\int_r^{\infty}e^{te^{-i\psi}z}\mathcal{L}(g)(te^{-i\psi})e^{-i\psi}\dfrac{dt}{2\pi} \right| & \leq & \int_r^{\infty}e^{tl_z\cos(\theta_z-\psi)}\left|\mathcal{L}(g)(te^{-i\psi})\right|\dfrac{dt}{2\pi}\\ &\leq& M_{\mathcal{L}(g)}\int_r^{\infty}e^{tl_z\cos(\theta_z-\psi)}\dfrac{dt}{2\pi}\\ &\leq& M_{\mathcal{L}(g)}\int_r^{\infty}e^{-tl_zC}\dfrac{dt}{2\pi}=\dfrac{M_{\mathcal{L}(g)}}{2\pi l_zC}e^{-rl_zC}\\ &\leq&\dfrac{M_{\mathcal{L}(g)}}{2\pi AC}e^{-rAC}\; . \end{eqnarray*} These computations allow us to conclude that given $\epsilon>0$ there is $r_0>0$ such that for every $r>r_0$ and for every $z\in K$ we have the estimate: \begin{eqnarray*} |g_r(z)-g_{\infty}(z)|&\leq& \dfrac{M_{\mathcal{L}(g)}}{2\pi AB}e^{-rAB}+\dfrac{M_{\mathcal{L}(g)}}{2\pi AC}e^{-rAC}<\epsilon\; . \end{eqnarray*} Item 2 follows from the fact that the function $\dfrac{1}{\zeta(s^2+h)}$ is bounded on the angular contour $\kappa_{\infty}$ as $|s|\to \infty$. \end{proof} Let us now denote the angle $\psi$ described in Lemma \ref{Lemmm} by $\psi(g)$. From the result in Proposition \ref{lim} we have the following remark. \begin{rem} \label{EndRem} We have \begin{enumerate} \item Proposition {\rm \ref{lim}} implies that the function $g_{\infty}$ is an analytic function which extends $g$; that is, $g_{\infty}(t)=g(t)$ for all $t \in \mathbb{R}_+$. \item The sequences $\{f_r\}_{r>0}$ and $\{g_r\}_{r>0}$ are sequences of entire functions of increasing exponential type $r$. On the other hand, the functions $f_{\infty} $ and $g_{\infty}$ from Proposition {\rm \ref{lim}} are, generally speaking, neither entire nor of finite exponential type. \item In principle the functions $g_{\infty}$ and $f_{\infty}$ depend on $\psi$, with $\frac{3\pi}{4}<\psi \leq \psi(g)$: for each angle $\psi$ in $ ]\frac{3\pi}{4},\psi(g)]$ and each $r>0$, we obtain the finite angular contour $\kappa^{\psi}_r$ (which is part of the infinite angular contour $\kappa^{\psi}$), the sequences of functions $\{f^{\psi}_r\}_{r>0}$ and $\{g^{\psi}_r\}_{r>0}$, and the limit functions $ g^{\psi}_{\infty}$ and $ f^{\psi}_{\infty}$. 
We also note that for $\psi_1\leq\psi_2$ in $]\frac{3\pi}{4},\psi(g)]$ the functions $g^{\psi_2}_{\infty}$ and $f^{\psi_2}_{\infty}$ are analytic extensions of $g^{\psi_1}_{\infty}$ and $f^{\psi_1}_{\infty}$ respectively. \end{enumerate} \end{rem} Motivated by this remark and Proposition \ref{lim}, we define the following nonempty set: \begin{eqnarray*} \mathcal{W}(g):=\bigg \{ f^{\psi}_{\infty} \; : \; \psi \in \; ] \frac{3\pi}{4},\psi(g)]\bigg \} \; . \end{eqnarray*} Also, we denote by $\Omega_{\frac{3\pi}{4}}$ the reflection of $D_{\frac{3\pi}{4}}$ with respect to the imaginary axis. We define the operator $\widetilde{\zeta}(\partial_t^2+h)$ on $\mathcal{W}(g)$ as follows: \begin{defi}\label{DefFin} Let $f^{\psi}_{\infty} \in \mathcal{W}(g)$ and let $f^{\psi}_r \in Exp(\Omega_{\frac{3\pi}{4}})$ be a family which satisfies Equation $(\ref{Treq})$ and such that $f^{\psi}_r \to f^{\psi}_{\infty}$ in the topology of uniform convergence on compact subsets of $Dom(f^{\psi}_{\infty})$. Then, \begin{equation}\label{EqEq} \widetilde{\zeta}(\partial_t^2+h)f^{\psi}_{\infty} := \lim_{r\to\infty}\zeta(\partial_t^2+h)f^{\psi}_r \; , \end{equation} where the limit is also taken in the topology of uniform convergence on compact subsets of $Dom(f^{\psi}_{\infty})$. \end{defi} Because of \cite[Theorem 25.1]{Doetsch} the limit appearing in the right-hand side of Equation (\ref{EqEq}) {\em does not depend} on the choice of the angle $\psi$. For the same reason the function $f^{\psi}_{\infty}$ {\em does not depend} on $\psi$. 
Thus we can use Definition \ref{DefFin} to interpret Equation (\ref{ExEq}) in the case in which the data $g\in \mathcal{L}_{>}(\mathbb{R}_+)$: we look, for a fixed $\psi$, for a solution $f^{\psi}_{\infty}$ in the set $ \mathcal{W}(g)$ to the following equation: \begin{equation}\label{FiEqn} \widetilde{\zeta}(\partial_t^2+h)f^{\psi}_{\infty}=g\; , \end{equation} and we understand Equation (\ref{FiEqn}) in the following limit sense: $$\lim_{r\to\infty}\zeta(\partial_t^2+h)f^{\psi}_r =\lim_{r\to\infty}g^{\psi}_r = g^{\psi}_{\infty}\; ,$$ where the sequences $\{f^{\psi}_r\}_{r>0}$ and $\{g^{\psi}_r\}_{r>0}$ are in $Exp(\Omega_{\frac{3\pi}{4}})$ and they are related as in Proposition \ref{lim}. We recall once more that the limit is taken in the topology of uniform convergence on compact subsets of $Dom(f^{\psi}_{\infty})$, and that $g^{\psi}_{\infty}$ does not depend on the angle $\psi$ (again because of \cite[Theorem 25.1]{Doetsch}). \begin{proposition} Let us consider the particular angle $\psi=\psi(g)$ defined after Proposition {\rm \ref{lim}}. The solution to Equation {\rm (\ref{FiEqn})} is the function $f^{\psi(g)}_{\infty}\in \mathcal{W}(g)$ given in Proposition {\rm \ref{lim}}. \end{proposition} \begin{proof} From Proposition \ref{lim}, we recall that $$f^{\psi(g)}_{\infty}(z)= \int_{\kappa^{\psi(g)}}e^{sz}\dfrac{\mathcal{L}(g)(s)}{\zeta(s^2+h)}\dfrac{ds}{2\pi i}\; ,$$ and that there exists a function $g^{\psi(g)}_{\infty}$ given by $$g^{\psi(g)}_{\infty}(z)=\int_{\kappa^{\psi(g)}}e^{zs}\mathcal{L}(g)(s)\frac{ds}{2\pi i}$$ on the domain $ Dom(f^{\psi(g)}_{\infty})$. The analytic function $g^{\psi(g)}_{\infty}$ extends the function $g$ defined in principle on $\mathbb{R}_+$. 
Furthermore, there exist explicit sequences $\{f^{\psi(g)}_r\}_{r>0}$ and $\{g^{\psi(g)}_r\}_{r>0}$ in $Exp(\Omega_{\frac{3\pi}{4}})$ given by: $$f_{r}^{\psi(g)}(z)=\int_{\kappa^{\psi(g)}_r}e^{sz}\dfrac{\mathcal{L}(g)(s)}{\zeta(s^2+h)}\dfrac{ds}{2\pi i}\; ,$$ and $$g_r^{\psi(g)}(z)=\int_{\kappa^{\psi(g)}_r}e^{sz}\mathcal{L}(g)(s)\dfrac{ds}{2\pi i}\; .$$ These sequences, for each $r>0$, satisfy the following truncated equations on $Exp(\Omega_{\frac{3\pi}{4}})$ \begin{equation}\label{FInFIn} \zeta(\partial_t^2+h)f^{\psi(g)}_r=g^{\psi(g)}_r\; . \end{equation} Furthermore, in Proposition \ref{lim} we proved that on compact subsets of $ Dom(f^{\psi(g)}_{\infty})$, the following two uniform limits hold \begin{enumerate} \item [a).] $$\lim_{r\to \infty}g^{\psi(g)}_r(z)=g^{\psi(g)}_{\infty}(z)\; ,$$ \item [b).] $$\lim_{r\to \infty}f^{\psi(g)}_r(z)= f^{\psi(g)}_{\infty}(z)\; .$$ \end{enumerate} Therefore, taking limits in Equation (\ref{FInFIn}) and using items a) and b), the following equality holds (on $Dom(f^{\psi(g)}_{\infty})$) $$\lim_{r\to \infty} \zeta(\partial_t^2+h)f^{\psi(g)}_r(z)=\lim_{r\to \infty}g^{\psi(g)}_r(z)=g^{\psi(g)}_{\infty}(z)\; .$$ That is, on $Dom(f^{\psi(g)}_{\infty})$ we have $$\widetilde{\zeta}(\partial_t^2+h)f^{\psi(g)}_{\infty}=g^{\psi(g)}_{\infty}\; . $$ In particular $$\widetilde{\zeta}(\partial_t^2+h)f^{\psi(g)}_{\infty}(t)=g(t) \quad \text{in} \; \; \mathbb{R}_+ \; .$$ \end{proof} \section*{Appendix: Some Zeta-nonlocal scalar fields} \subsection{Equations of motion} Following Dragovich's work \cite{D}, we show how to deduce several mathematically interesting nonlocal scalar field equations whose dynamics depend on the Riemann zeta function, the Hurwitz zeta function and also on a Dirichlet-Taylor series. 
Recall that, given a prime number $p$, the Lagrangian formulation of the open $p$-adic string tachyon is \begin{equation}\label{Zeq_03} L_p=\dfrac{m_p^D}{g_p^2}\dfrac{p^2}{p-1}\big(-\dfrac{1}{2}\phi p^{-\square/(2m_p^2)}\phi+\dfrac{1}{p+1}\phi^{p+1}\big)\; , \end{equation} where $\square$ is the D'Alembert operator defined by $\, \square:=-\partial_t^2+ \triangle_x$, in which $\triangle_x$ is the Laplace operator and we are using metric signature $(-,+,\cdots,+)$, following \cite{D}. This Lagrangian is defined only formally; as we have shown here, the terms appearing therein are well-defined in the $1+0$ case, see also \cite{CPR_Laplace,CPR,GPR_CQG}. The equation of motion for (\ref{Zeq_03}) is $$p^{-\square/(2m_p^2)}\phi=\phi^p.$$ Dragovich has considered the model $$L=\sum_{n=1}^{\infty}C_nL_n= \sum_{n=1}^{\infty}C_n\dfrac{m_n^D}{g_n^2}\dfrac{n^2}{n-1}\big(-\dfrac{1}{2}\phi n^{-\square/(2m_n^2)}\phi + \dfrac{1}{n+1}\phi^{n+1}\big)\; ,$$ in which all Lagrangians $L_n$ given by (\ref{Zeq_03}) are taken into account. Explicit Lagrangians $L$ depend on the choices of the coefficients $C_n$. Some particular cases are considered below. \subsubsection{The Riemann zeta function as a symbol} This is the case in \cite{D} and one of our main motivations. We recall once again that the Riemann zeta function is defined by (see for instance \cite{KaVo}) $$\zeta(s):=\sum_{n=1}^{\infty}\dfrac{1}{n^s}\; , \; \; \quad Re(s)>1\; .$$ It is analytic on its domain of definition and it has an analytic extension to the whole complex plane with the exception of the point $s=1$, at which it has a simple pole with residue $1$. 
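As a side note (not part of the paper's argument), the convergence of this Dirichlet series for real $s>1$ is easy to illustrate numerically; the sketch below, using only the Python standard library, checks the classical value $\zeta(2)=\pi^2/6$ against a partial sum.

```python
import math

def zeta_partial(s: float, n_terms: int) -> float:
    """Partial sum of the Dirichlet series for zeta(s), valid for s > 1."""
    return sum(1.0 / n ** s for n in range(1, n_terms + 1))

# zeta(2) = pi^2 / 6 (Basel problem); the tail of the partial sum is O(1/N),
# so with 10^5 terms the two values agree to roughly 5 decimal places.
approx = zeta_partial(2.0, 100_000)
exact = math.pi ** 2 / 6
print(approx, exact)
```

The slow $O(1/N)$ tail is one reason the analytic continuation, rather than the series itself, is what enters the operator $\zeta(\partial_t^2+h)$.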
If we consider the explicit coefficient $$C_n= \dfrac{n-1}{n^{2+h}}\, ,$$ in which $h$ is a real number, Dragovich's Lagrangian becomes $$ L_h=\dfrac{m^D}{g^2}\left(-\dfrac{1}{2}\phi \sum_{n=1}^{\infty}n^{-\square/(2m^2)-h}\phi + \sum_{n=1}^{\infty}\dfrac{n^{-h}}{n+1}\phi^{n+1}\right)\, .$$ We write $L_h$ in terms of the zeta function and, in order to avoid convergence issues, we replace the nonlinear term by an adequate analytic function $G(\phi)$. The Lagrangian $L_h$ becomes: $$L_h= \dfrac{m^D}{g^2}\left( -\dfrac{1}{2}\phi \zeta(\dfrac{\square}{2m^2}+h)\phi + G(\phi) \right)\; .$$ The equation of motion is $$\zeta(\dfrac{\square}{2m^2}+h)\phi=g(\phi)\; ,$$ in which $g = G'$. \subsubsection{Dirichlet zeta function as symbol} Let us consider a Dirichlet character $\chi$ modulo $m$ and let us define $$C_n =\dfrac{\chi(n)(n-1)}{n^{2+h}}\; .$$ We recall that a Dirichlet $L$-series is of the following form: $$L(s,\chi)=\sum_{n=1}^{\infty}\dfrac{\chi(n)}{n^s}\; .$$ Following Dragovich's approach, we can consider the Lagrangian $$L_h= \dfrac{m^D}{g^2}\left( -\dfrac{1}{2}\phi L(\dfrac{\square}{2m^2}+h,\chi)\phi + F(\phi) \right) $$ and the corresponding equation of motion \begin{equation} L(\dfrac{\square}{2m^2}+h,\chi)\phi=f(\phi)\; , \end{equation} in which $f = F'$. \subsubsection{Almost periodic Dirichlet series as symbol} Let $\{a_n\}$ be a sequence of complex numbers. A Dirichlet series is a series of the form $$F(s):=\sum_{n=1}^{\infty}\dfrac{a_n}{n^s}\; .$$ Then, for a given sequence $\{a_n\}$, if we consider the coefficients $$C_n=\dfrac{a_n(n-1)}{n^{2+h}},$$ we arrive at the following Lagrangian and equation of motion: $$L_h= \dfrac{m^D}{g^2} \left( -\dfrac{1}{2}\phi F(\dfrac{\square}{2m^2}+h)\phi + D(\phi) \right)\; ,$$ \begin{equation}\label{eqDiSe} F(\dfrac{\square}{2m^2}+h)\phi=d(\phi)\; , \end{equation} in which $d = D'$. 
A particular case of this equation is the equation with dynamics depending on Dirichlet series with {\sl almost periodic coefficients}: following \cite{OKnill}, we consider a piecewise continuous, $1$-periodic $L^2$ function $f: \mathbb{R}\to \mathbb{C}$ with Fourier expansion $f(x)= \sum_{k=-\infty}^{\infty}b_ke^{2\pi i kx}$; the particular symbol of interest for equation (\ref{eqDiSe}) is the following {\sl almost periodic Dirichlet series}: $$F_{\alpha}(s):= \sum_{n=1}^{\infty}\dfrac{f(n\alpha)}{n^s}\; .$$ \paragraph{Acknowledgments} A.C. has been supported by PRONABEC (Ministerio de Educaci\'on, Per\'u) and FONDECYT through grant \# 1161691; H.P. and E.G.R. have been partially supported by the FONDECYT operating grants \# 1170571 and \# 1161691 respectively.
TITLE: analytical solution for linear 1st order PDE using Laplace and separation of variables QUESTION [1 upvotes]: I am looking for the solution of the following pde: $\frac{\partial y(x,t)}{\partial t} = a* \frac{\partial y(x,t)}{\partial x} + b* y(x,t) + c$ and need help with the boundary and initial conditions: $y(x=0,t)=0 $ or $y(x=0,t)=const. $ and $y(x,t=0)=f(x)$ a, b are negative constants and c is a positive constant. I used the Laplace Transform to derive a solution as follows: $L(\frac{\partial y(x,t)}{\partial t}) = L(a \frac{\partial y(x,t)}{\partial x} + b y(x,t) + c) $ $s*Y(x,s) - y(x,0)=a* \frac{\partial Y(x,s)}{\partial x} +b* Y(x,s) + \frac{c}{s} $ $ \frac{\partial Y(x,s)}{\partial x} + \frac{b-s}{a} * Y(x,s)= -\frac{1}{a} * (y(x,0)+\frac{c}{s})$ which results in $\int d(exp(\frac{b-s}{a}*x)*Y(x,s)) =\int -\frac{1}{a} * (y(x,0)+\frac{c}{s}) dx$ with $F(x) = \int f(x) dx= \int y(x,0)dx$ and $y(x=0,s)=0$ this results in $Y(x,s)=- \frac{1}{a}*exp(\frac{s-b}{a}*x)*(F(x)-F(0)+\frac{c}{s}*x)$ Transforming this result back into the time domain gives $y(x,t) = -\frac{1}{a}*exp(\frac{-b}{a}*x)*((F(x)-F(0))* \delta (t+\frac{x}{a}) +c*x* H(t+\frac{x}{a})) $ with $\delta$ being the Dirac delta function and $H$ being the Heaviside step function. This analytical equation cannot reproduce my numeric solution, in my opinion because the initial conditions are only multiplied with a Dirac impulse, so is the transformation correct? I also looked at the solution separating the variables as shown in Analytical Solution of a PDE $y(x,t)=C*exp(kt) * exp (\frac{k-b}{a}*x) - \frac{c}{b} $, however when I use the boundary condition, this results in a time-independent solution, since the only way for the solution to be $0$ at $x=0$ is $y(x,t)=\frac{c}{b}*(exp(\frac{k-b}{a}*x)-1)$ Am I overlooking something? 
Thanks for your help REPLY [0 votes]: The solution is $$ y(x,t) = \left( e^{-\frac{b}{a}x}y_l + e^{-\frac{b}{a}x}\frac{c}{b}\left( 1 - e^{b(t+\frac{x}{a})} \right)\right)\sigma\left(t + \frac{x}{a}\right) - \frac{c}{b}\left(1-e^{bt}\right) - \mathcal{L}^{-1}\{ F(x,s) \}, $$ where $$F(x,s) = \frac{1}{as}e^{\frac{s-b}{a}x} \int\limits_0^x e^{\frac{b-s}{a}\xi}f(\xi) \text{d}\xi, $$ with $y_l := y(0,t)$ and $f(x) := y(x,0)$ the boundary and initial condition, respectively. To verify this for $f(x)\equiv 0$ note that $y(x,t)$ satisfies the boundary condition $$y(0,t) = \left( y_l + \frac{c}{b}\left( 1 - e^{bt} \right) \right) \sigma(t) - \frac{c}{b}\left( 1 - e^{bt} \right) \sigma(t) = y_l \sigma(t),$$ and the initial condition $$ y(x,0) = \left( e^{-\frac{b}{a}x}y_l + e^{-\frac{b}{a}x}\frac{c}{b}\left( 1-e^{\frac{b}{a}x} \right) \right)\sigma\left(\frac{x}{a}\right) - \frac{c}{b}\left( 1 - e^{b\cdot 0}\right) = 0 \equiv f(x).$$ Since $\frac{x}{a} < 0$, we have $\sigma\left(\frac{x}{a}\right) = 0$. 
To show that $y(x,t)$ satisfies the pde, we differentiate with respect to $t$ and $x$ $$ \begin{align} y_t(x,t) = & \left( e^{-\frac{b}{a}x}y_l + e^{-\frac{b}{a}x}\frac{c}{b}\left( 1 - e^{b\left( t + \frac{x}{a} \right)} \right) \right) \delta\left( t + \frac{x}{a} \right) \\ & - e^{-\frac{b}{a}x} \frac{c}{b} e^{b\left( t + \frac{x}{a} \right)}b \sigma\left( t + \frac{x}{a} \right) + ce^{bt}, \\ y_x(x,t) = & \left( -\frac{b}{a}e^{-\frac{b}{a}x}y_l + \frac{c}{b}\left( -\frac{b}{a}e^{-\frac{b}{a}x} \left( 1 - e^{b\left( t + \frac{x}{a} \right)} \right) + e^{-\frac{b}{a}x} \left( -\frac{b}{a}e^{b\left( t + \frac{x}{a} \right)} \right) \right) \right) \sigma\left( t + \frac{x}{a} \right) \\ & + \left( e^{-\frac{b}{a}x} y_l + e^{-\frac{b}{a}x}\frac{c}{b} \left( 1 - e^{b\left( t + \frac{x}{a} \right)} \right) \right) \delta\left( t + \frac{x}{a} \right) \frac{1}{a}, \end{align} $$ such that $y_t - ay_x - by - c = 0$ for $t < \left|\frac{x}{a}\right|$, $t > \left|\frac{x}{a}\right|$ and $t = \left|\frac{x}{a}\right|$. How to get there: The Laplace transform of the pde yields $$ sY(x,s) - Y(x,0) = aY_x(x,s) + bY(x,s) + \frac{c}{s}, $$ or, equivalently, $$ Y_x(x,s) = Y(x,s) \frac{s-b}{a} - \frac{1}{a} \left( Y(x,0) + \frac{c}{s} \right) $$ The general solution to this linear, inhomogeneous 1$^\text{st}$-order ode in $x$ is $$ Y(x,s) = e^{\frac{s-b}{a}x}Y(0,s) - \frac{1}{a}\int\limits_0^x e^{\frac{s-b}{a}(x-\xi)}\left( Y(\xi,0) + \frac{c}{s} \right) \text{d}\xi. 
$$ With $Y(0,s) = \mathcal{L}\{ y(0,t) \} = \frac{y_l}{s}$ and $Y(x,0) = \mathcal{L}\{ y(x,0) \} = \frac{f(x)}{s}$, we arrive at the solution $Y(x,s)$ in the frequency domain: $$ Y(x,s) = e^{s\frac{x}{a}} \left( \frac{1}{s}e^{-\frac{b}{a}x}y_l - \frac{c}{s(s-b)}e^{-\frac{b}{a}x} \right) + \frac{c}{s(s-b)} - F(x,s), $$ which can then be back-transformed by using the following correspondences: $$ \frac{a}{s(s+a)} = \mathcal{L}\{(1-e^{-at})\} $$ $$ e^{-as}F(s) = \mathcal{L}\{ f(t-a) \}$$ $$ \frac{1}{s} = \mathcal{L}\{ \sigma(t) \} $$ $$ 1 = \mathcal{L}\{ \delta(t) \}. $$
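A quick numerical cross-check of the formula (not part of the answer above): along a characteristic $x(t)=x_0-at$ of $y_t = a y_x + by + c$, the PDE reduces to the ODE $y' = by + c$, and with $f\equiv 0$, $y_l=0$ the closed form in the region $t<\left|\frac{x}{a}\right|$ is $y=-\frac{c}{b}(1-e^{bt})$. The sketch below integrates that ODE with classical RK4 and compares; the constants are illustrative, not taken from the question.

```python
import math

# Illustrative constants: a, b negative, c positive, as in the question.
a, b, c = -1.0, -0.5, 2.0

def rk4_ode(t_end: float, steps: int) -> float:
    """Integrate dy/dt = b*y + c from y(0) = 0 with classical RK4."""
    h = t_end / steps
    f = lambda y: b * y + c
    y = 0.0
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

t = 1.5
closed_form = -(c / b) * (1.0 - math.exp(b * t))
# The two values agree to at least 6 decimal places (global RK4 error is O(h^4)).
print(rk4_ode(t, 200), closed_form)
```

This only probes the region where the heaviside factor vanishes; the transport of the boundary value $y_l$ along $t+\frac{x}{a}>0$ would need a separate check.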
\begin{document} \title{Abstract Fractals} \author[]{Marat Akhmet\thanks{Corresponding Author Tel.: +90 312 210 5355, Fax: +90 312 210 2972, E-mail: marat@metu.edu.tr} } \author[]{Ejaily Milad Alejaily} \affil[]{Department of Mathematics, Middle East Technical University, 06800 Ankara, Turkey} \date{} \maketitle We develop a new definition of fractals which can be considered as an abstraction of the fractals determined through self-similarity. The definition is formulated by imposing conditions which govern the relation between subsets of a metric space so as to build a porous self-similar structure. Examples are provided to confirm that the definition is satisfied by a large class of self-similar fractals. The new concepts create new frontiers for fractal and chaos investigations. \section{Introduction} Fractals are a class of complex geometric shapes with certain properties. One of the main features of these objects is self-similarity, which can be defined as the property whereby parts hold similarity to the whole at any level of magnification \cite{Addison}. Fractional dimension was suggested by Mandelbrot as a property of fractals when he defined a fractal as a set whose Hausdorff dimension is strictly larger than its topological dimension \cite{Mandelbrot1}. Roots of the idea of self-similarity date back to the 17th century, when Leibniz introduced the notion of recursive self-similarity \cite{Zmeskal}. The first mathematical definition of a self-similar shape was introduced in 1872 by Karl Weierstrass during his study of functions that were continuous but not differentiable. The most famous examples of fractals that display exact self-similarity are the Cantor set, the Koch curve and the Sierpinski gasket and carpet, which were discovered by Georg Cantor in 1883, Helge von Koch in 1904 and Waclaw Sierpinski in 1916 respectively. 
Julia sets, discovered by Gaston Julia and Pierre Fatou in 1917-19, gained significance in being generated using the dynamics of an iterative function. In 1979, Mandelbrot visualized Julia sets, including the most popular fractal, the Mandelbrot set. In this paper we introduce a new mathematical concept and call it an abstract fractal. This concept is an attempt to establish a pure foundation for fractals by abstracting the idea of self-similarity. We define the abstract fractal as a collection of points in a metric space. The points are represented through an iterative construction algorithm with specific conditions. The conditions are introduced to govern the relationship between the sets at each iteration. Our approach to construction is based on the concept of porosity rather than the roughness notion introduced by Mandelbrot. Porosity is an intrinsic property of materials and it is usually defined as the ratio of void volume to total volume \cite{Anovitz}. The concept of porosity plays an important role in several fields of research such as geology, soil mechanics, material science, civil engineering, etc. \cite{Anovitz,Ganji}. Fractal geometry has been widely used to study properties of porous materials. However, the concept of porosity was not utilized as a criterion for fractal structures, and the relevant studies have investigated the relationship between porosity and fractalness \cite{Davis,Yu,Huang,Guyon,Cai}. For instance, several studies such as \cite{Puzenko,Tang,Xia} determined the fractal dimension of some pore-structures using their pore properties. The simplicity and importance of the porosity concept insistently invite us to develop a new definition of fractals through porosity. In other words, the property should be involved in fractal theory as a feature equivalent to self-similarity and fractional dimension. This requires specifying the concept of porosity for surfaces and lines. 
In the present paper, we do not pay attention to the equivalence between the definition of fractals in terms of porosity and those through self-similarity and dimension; rather, we introduce an abstract definition which, we hope, will be useful in application domains. \section{The Definition} \label{AbsFractals} In this paper we shall consider the metric measure space defined by the triple $ (X, d, \mu) $, where $ (X, d) $ is a compact metric space, $ d $ is a metric on $ X $ and $ \mu $ is a measure on $ X $. To construct an abstract fractal, let us consider the initial set $ F \subset X $ and fix two natural numbers $ m $ and $ M $ such that $ 1 < m < M $. We assume that there exist $ M $ nonempty disjoint subsets, $ F_{i}, \; i=1, 2, ... M $, such that $ F = \cup_{i=1}^{M} F_i $. For each $ i=1, 2, ... m $, again, there exist $ M $ nonempty disjoint subsets $ F_{ij}, \; j=1, 2, ... M $ such that $ F_i = \cup_{j=1}^{M} F_{ij} $. Generally, for each $ i_1, i_2, ..., i_n, \; i_k=1, 2, ... m $, there exist $ M $ nonempty disjoint sets $ F_{i_1 i_2 ... i_nj}, \; j=1, 2, ... M $, such that $ F_{i_1 i_2 ... i_n} = \cup_{j=1}^{M} F_{i_1 i_2 ... i_n j} $, for each natural number $ n $. The following conditions are needed: \noindent There exist two positive numbers, $ r $ and $ R $, such that for each natural number $ n $ we have \begin{equation} \label{RatioCond} r \leq \frac{ \sum_{j=1}^{m} \mu \big( F_{i_1 i_2 ... i_{n-1} j} \big)}{ \sum_{j=m+1}^{M} \mu \big( F_{i_1 i_2 ... i_{n-1} j} \big)} \leq R. \end{equation} where $ i_k = 1, 2, ... , m, \; k=1, 2, ... n-1 $. We call the relation (\ref{RatioCond}) the \textit{ratio condition}. The numbers $ r $ and $ R $ in (\ref{RatioCond}) are characteristics of porosity. Another condition is the \textit{adjacent condition} and it is formulated as follows: \noindent For each $ i_1 i_2 ... i_n, \; i_k=1, 2, ... , m $ there exists $ j, \; j=m+1, m+2, ... 
, M $, such that \begin{equation} \label{AdjtCond} d(F_{i_1 i_2 ... i_n}, F_{i_1 i_2 ... i_{n-1} j}) = 0. \end{equation} We call $ F_{i_1 i_2 ... i_{n-1} i_n} $ a complement set of order $ n $ if $ i_k=1, 2, ... , m, \; k=1, 2, ... n-1 $ and $ i_n = m+1, m+2, ... , M $. An accumulation point of any couple of complement sets does not belong to any of them. We dub this stipulation the \textit{accumulation condition}. Let us define the diameter of a bounded subset $ A $ in $ X $ by $ \mathrm{diam}(A) = \sup \{ d(\textbf{x}, \textbf{y}) : \textbf{x}, \textbf{y} \in A \} $. Considering the above construction, we assume that the \textit{diameter condition} holds for the sets $ F_{i_1 i_2 ... i_n} $, i.e., \begin{equation} \label{DiamCond} \max_{i_k=1,2, ... M} \mathrm{diam}(F_{i_1 i_2 ... i_n}) \to 0 \;\; \text{as} \;\; n \to \infty. \end{equation} Fix an infinite sequence $ i_1 i_2 ... i_n ... \, $. The diameter condition as well as the compactness of $ X $ imply that there exists a sequence $ (p_n) $, such that $ p_0 \in F $, $ p_1 \in F_{i_1} $, $ p_2 \in F_{i_1 i_2} $, ... , $ p_n \in F_{i_1 i_2 ... i_n}, \; n=1, 2, ...\, $, which converges to a point in $ X $; this limit point is denoted by $ F_{i_1 i_2 ... i_n ...} $. We define the \textit{abstract fractal} $ \mathcal{F} $ as the collection of the points $ F_{i_1 i_2 ... i_n ...} $ such that $ i_k=1,2, ... m $, that is \begin{equation} \label{AbsFracSet} \mathcal{F} = \big\{F_{i_1 i_2 ... i_n ... } \; | \; i_k=1,2, ... m \big\}, \end{equation} provided that the above four conditions hold. The subsets of $ \mathcal{F} $ can be represented by \begin{equation} \label{AbsFracSubSet} \mathcal{F}_{i_1 i_2 ... i_n} = \big\{ F_{i_1 i_2 ... i_n i_{n+1} i_{n+2} ... } \; | \; i_k=1, 2, ... , m \big\}, \end{equation} where $ i_1 i_2 ... i_n $ are fixed numbers. We call such subsets \textit{subfractals} of order $ n $. 
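The four conditions can be made concrete on the simplest example mentioned in the introduction, the middle-thirds Cantor set, where $ m=2 $ pieces of each subdivision are kept and $ M=3 $. The following Python sketch (not from the paper) uses exact rational arithmetic to verify the ratio condition with $ r=R=2 $ at every level and the diameter condition $ \mathrm{diam} = 3^{-n} \to 0 $.

```python
from fractions import Fraction

def subdivide(interval):
    """Split [lo, hi] into two kept outer thirds and one removed middle third."""
    lo, hi = interval
    third = (hi - lo) / 3
    kept = [(lo, lo + third), (hi - third, hi)]
    removed = [(lo + third, hi - third)]
    return kept, removed

def level(n):
    """Kept intervals after n subdivisions of [0, 1]."""
    kept = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        next_kept = []
        for iv in kept:
            k, r = subdivide(iv)
            # ratio condition inside each parent: kept measure = 2 * removed measure
            assert sum(b - a for a, b in k) == 2 * sum(b - a for a, b in r)
            next_kept.extend(k)
        kept = next_kept
    return kept

intervals = level(3)
# 8 kept intervals, each of diameter 1/27 (prints: 8 1/27)
print(len(intervals), max(b - a for a, b in intervals))
```

The adjacent and accumulation conditions hold as well: each kept interval shares an endpoint with a removed middle third of its parent, and the limit points of the nested intervals avoid the open complement sets.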
\section{Abstract structure of geometrical fractals} In this section we find the pattern of the abstract fractal in some well-known geometrical fractals: the Sierpinski carpet, the Pascal triangle and the Koch curve. \subsection{The Sierpinski Carpet} To construct an abstract fractal corresponding to the Sierpinski carpet, let us consider a square as an initial set $ F $. Firstly, we divide $ F $ into nine $ (M=9) $ equal squares and denote them by $ F_{i}, \; i=1, 2, ... 9 $ (see Fig. \ref{SCConst} (a)). In the second step, each square $ F_{i}, \; i=1, 2, ... 8 $ is again divided into nine equal squares denoted as $ F_{i j}, \; j=1, 2, ... 9 $. Figure \ref{SCConst} (b) illustrates the sub-squares of $ F_1 $. We continue in this way such that at the $ n^{th} $ step, each set $ F_{i_1 i_2 ... i_{n-1}}, \; i_k=1, 2, ... 8 $, is divided into nine subsets $ F_{i_1 i_2 ... i_{n-1}j}, \; j=1, 2, ... 9 $. For the Sierpinski carpet the number $ m $ is $ 8 $, and the measure ratio (\ref{RatioCond}) can be evaluated as follows. If we consider the first order sets $ F_{i_1}, \; i_1=1, 2, ... 9 $, then \[ \frac{ \sum_{j=1}^{8} \mu \big( F_{j} \big)}{ \mu \big( F_{9} \big)} = 8. \] Thus, the ratio condition holds. From the construction, we can see that each $ F_{i_1 i_2 ... i_n}, \; i_k=1, 2, ... , 8 $ has a common boundary with $ F_{i_1 i_2 ... i_{n-1} j}, \; j= 9 $. Therefore, the adjacent condition holds. Since the construction consists of division into smaller parts, the diameter condition is also valid. Moreover, it is clear that the accumulation condition holds as well. As a result, the points of the desired abstract fractal $ \mathcal{F} $ can be represented as $ F_{i_1 i_2 ... i_n ... } $ and the abstract Sierpinski carpet is defined by \[ \mathcal{F} = \big\{F_{i_1 i_2 ... i_n ... } | i_k=1,2, ... 8 \big\}. \] Figure \ref{SCConst} (c) shows the set $ \mathcal{F} $ and illustrates its $ 1^{st} $ order subfractals. 
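The $ 8:1 $ measure ratio can also be checked computationally. The sketch below (an illustration, not part of the paper) uses the standard base-3 characterization of the carpet: a cell of the $ 3^n \times 3^n $ grid survives $ n $ construction steps iff no base-3 digit pair of its coordinates equals $ (1,1) $, the centre position.

```python
def in_carpet(i: int, j: int, n: int) -> bool:
    """Cell (i, j) of the 3^n x 3^n grid survives n Sierpinski-carpet steps
    iff at no scale both base-3 digits of (i, j) equal 1 (the centre cell)."""
    for _ in range(n):
        if i % 3 == 1 and j % 3 == 1:
            return False
        i, j = i // 3, j // 3
    return True

n = 3
side = 3 ** n
kept = sum(in_carpet(i, j, n) for i in range(side) for j in range(side))
# 8 of every 9 cells survive each subdivision, so 8^n cells remain (prints: 512 512)
print(kept, 8 ** n)
```

Within every parent square the count is 8 kept cells against 1 removed, which is exactly the ratio condition with $ r=R=8 $.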
\begin{figure}[H] \centering \subfigure[]{\includegraphics[width = 1.6in]{SC1}} \hspace{1cm} \subfigure[]{\includegraphics[width = 1.2in]{SC2}} \hspace{1cm} \subfigure[]{\includegraphics[width = 1.5in]{SC}} \caption{Sierpinski carpet} \label{SCConst} \end{figure} \subsection{The Pascal Triangle} The Pascal triangle is a mathematical structure consisting of a triangular array of numbers. Triangular fractals can be obtained if these numbers are plotted using specific moduli. The Sierpinski gasket, for instance, is the Pascal triangle modulo 2. Let us build an abstract fractal on the basis of a fractal associated with the Pascal triangle modulo 3. Consider an equilateral triangle as an initial set $ F $. In the first step, we divide $ F $ into nine smaller equilateral triangles and denote them as $ F_i, \; i=1, 2, ... , 9 $ as shown in Fig. \ref{PTConst} (a). Next, each triangle $ F_i, \; i=1, 2, ... , 6 $ is again divided into nine equilateral triangles named as $ F_{ij}, \; j=1, 2, ... , 9 $. Figure \ref{PTConst} (b) illustrates the second step for the set $ F_1 $. Similarly, the subsequent steps are performed such that at the $ n^{th} $ step, each set $ F_{i_1 i_2 ... i_{n-1}}, \; i_k=1, 2, ... 6 $, is divided into nine subsets $ F_{i_1 i_2 ... i_{n-1}j}, \; j=1, 2, ... 9 $. In this case we have $ m=6 $ and $ M=9 $. Therefore, \[ \frac{ \sum_{j=1}^{6} \mu \big( F_{j} \big)}{ \sum_{j=7}^{9} \mu \big( F_{j} \big)} = 2, \] and the ratio condition holds. One can also verify that the adjacent, the accumulation, and the diameter conditions are also valid. Based on this, the points of the fractal can be defined by $ F_{i_1 i_2 ... i_n ... } $, and thus, the abstract Pascal triangle is defined by \[ \mathcal{F} = \big\{F_{i_1 i_2 ... i_n ... } | i_k=1,2, ... 6 \big\}, \] and the $ n^{th} $ order subfractals can be written as \[ \mathcal{F}_{i_1 i_2 ... i_n} = \big\{ F_{i_1 i_2 ... i_n i_{n+1} ... } \; | \; i_k=1, 2, ... , 6 \big\}, \] where $ i_1 i_2 ... i_n $ are fixed numbers.
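The kept-to-removed ratio $ 2 $ goes hand in hand with the cell counts: by Lucas' theorem, the number of entries $ \binom{n}{k} \not\equiv 0 \pmod 3 $ among the first $ 3^m $ rows of the Pascal triangle is $ 6^m $, mirroring the six kept triangles per step. A quick check (our sketch, not part of the paper):

```python
from math import comb

# Count the nonzero entries of Pascal's triangle modulo 3 within the
# first 3^m rows. Lucas' theorem gives 6^m, matching the six kept
# triangles (m = 6) produced by each subdivision step.

def nonzero_mod3(rows):
    return sum(1 for n in range(rows) for k in range(n + 1)
               if comb(n, k) % 3 != 0)

for m in (1, 2, 3):
    print(nonzero_mod3(3 ** m))   # 6, 36, 216
```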
\begin{figure}[H] \centering \subfigure[]{\includegraphics[width = 2.0in]{PT1}} \hspace{1cm} \subfigure[]{\includegraphics[width = 1.6in]{PT2}} \hspace{1cm} \subfigure[]{\includegraphics[width = 2.0in]{PT}} \caption{Pascal triangle modulo 3} \label{PTConst} \end{figure} \subsection{The Koch Curve} In this subsection, we shall show how to build an abstract fractal $ \mathcal{F} $ corresponding to the Koch curve. For this purpose, we consider the following construction of the Koch curve. Start with an isosceles triangle $ F $ with base angles of $ 30^\circ $. The first step of the construction consists in dividing $ F $ into three equal-area triangles $ F_1, F_2 $ and $ F_3 $ (see Fig. \ref{AKCConst} (b)). The triangles $ F_1 $ and $ F_2 $ are isosceles with base angles of $ 30^\circ $, whereas the central triangle $ F_3 $ is an equilateral one. In the second step, each $ F_i, \, i=1, 2 $ is similarly divided into three triangles, two isosceles, $ F_{i1} $ and $ F_{i2} $, and one equilateral, $ F_{i3} $. Figure \ref{AKCConst} (c) illustrates this step. In each subsequent step, the same procedure is repeated for each isosceles triangle resulting from the preceding step. That is, in the $ n^{th} $ step, each $ F_{i_1 i_2 ... i_{n-1}}, \; i_k=1, 2 $, is divided into three parts, two isosceles triangles $ F_{i_1 i_2 ... i_{n-1}j}, \; j=1, 2 $, with base angles of $ 30^\circ $, and one equilateral triangle $ F_{i_1 i_2 ... i_{n-1}3} $. In this construction, we have $ m=2 $ and $ M=3 $, thus, the measure ratio is \[ \frac{\mu (F_{1}) + \mu (F_{2})}{\mu (F_{3})} = 2, \] and the ratio condition holds. From the construction, it is clear that the adjacent, the accumulation, and the diameter conditions are also valid. Based on this, the points in $ \mathcal{F} $ can be represented as $ F_{i_1 i_2 ... i_n ...} $, and thus, the abstract Koch curve is defined by \begin{equation*} \label{AbsKochFract} \mathcal{F} = \big\{F_{i_1 i_2 ... i_n ... } \; | \; i_k=1,2 \big\}.
\end{equation*} \begin{figure}[H] \vspace{0.3cm} \centering \subfigure[]{\includegraphics[width = 2.6in]{KC0}} \hspace{0.8cm} \subfigure[]{\includegraphics[width = 2.6in]{KC1}} \hspace{0.8cm} \subfigure[]{\includegraphics[width = 2.6in]{KC2}} \caption{Abstract Koch curve construction} \label{AKCConst} \vspace{0.3cm} \end{figure} The $ n^{th} $ order subfractals of $ \mathcal{F} $ are represented by \begin{equation} \label{AbsKochSubFract} \mathcal{F}_{i_1 i_2 ... i_n} = \big\{ F_{i_1 i_2 ... i_n i_{n+1} i_{n+2} ... } \; | \; i_k=1, 2 \big\}, \end{equation} where $ i_1 i_2 ... i_n $ are fixed numbers. Figure \ref{KochSubFract} illustrates examples of $ 1^{st}, 2^{nd}, 3^{rd} $ and $ 4^{th} $ order subfractals of the abstract Koch curve. \begin{figure}[H] \centering \includegraphics[width=0.50\linewidth]{KC} \caption{Subfractals of the abstract Koch curve} \label{KochSubFract} \end{figure} \section{Abstract self-similarity and Chaos} In the paper \cite{AkhmetSimilarity}, we introduced the notion of abstract self-similarity and defined a self-similar set by \begin{equation} \label{AbstSelfSimiSet} \mathcal{F} = \big\{\mathcal{F}_{i_1 i_2 ... i_n ... } : i_k=1,2, ..., m, \; k=1, 2, ... \big\}, \end{equation} where $ \mathcal{F}_{i_1 i_2 ... i_n ... }, \; i_k=1,2, ..., m $ represent the points of the set. For fixed indexes $ i_1, i_2, ..., i_n $, the subsets are expressed as \begin{equation} \label{AbstSelfSimiSubSet} \mathcal{F}_{i_1 i_2 ... i_n} = \bigcup_{j_k=1,2, ..., m } \mathcal{F}_{i_1 i_2 ... i_n j_1 j_2 ... }, \end{equation} such that $ \mathcal{F}_{i_1 i_2 ... i_n} = \cup_{j=1}^{m} \mathcal{F}_{i_1 i_2 ... i_n j} $, for each natural number $ n $, where all sets $ \mathcal{F}_{i_1 i_2 ... i_n j}, \; j=1, 2, ..., m $, are nonempty, disjoint and satisfy the diameter condition. Based on the definition of the abstract self-similar set, we see that every abstract fractal is an abstract self-similar set, but the reverse is not necessarily valid.
A similarity map $ \varphi $ for the abstract fractal $ \mathcal{F} $ can be defined by \[ \varphi(\mathcal{F}_{i_1 i_2 i_3 ...}) = \mathcal{F}_{i_2 i_3 i_4 ...}. \] Let us assume that the separation condition holds, that is, there exist a positive number $ \varepsilon_0 $ and a natural number $ n $ such that for arbitrary $ i_1 i_2 ... i_n $ one can find $ j_1 j_2 ... j_n $ so that \[ d \big( \mathcal{F}_{i_1 i_2 ... i_n} \, , \, \mathcal{F}_{j_1 j_2 ... j_n} \big) \geq \varepsilon_0, \] where $ \varepsilon_0 $ is the separation constant. Considering the results on chaos for self-similar sets provided in \cite{AkhmetSimilarity}, it can be proven that the similarity map $ \varphi $ possesses the three ingredients of Devaney chaos, namely density of periodic points, transitivity and sensitivity. Moreover, $ \varphi $ possesses Poincar\'{e} chaos, which is characterized by an unpredictable point and an unpredictable function \cite{AkhmetUnpredictable,AkhmetPoincare}. In addition to the Devaney and Poincar\'{e} chaos, it can be shown that Li-Yorke chaos also takes place in the dynamics of the map. These results are summarized in the next theorem, which can be proven in a similar way to that explained in \cite{AkhmetSimilarity}. \begin{theorem} \label{Thm1} If the separation condition holds, then the similarity map possesses chaos in the sense of Poincar\'{e}, Li-Yorke and Devaney. \end{theorem} That is, the triple $ (\mathcal{F}, d, \varphi) $ is a self-similar space and $ \varphi $ is chaotic in the sense of Poincar\'{e}, Li-Yorke and Devaney. \section{Abstract Fractals and Iterated Function System} An iterated function system (IFS) is a powerful tool for the construction of fractal sets. It is defined by a family of contraction mappings $ w_n, n=1, 2, ... \, N $ on a complete metric space $ (X, d) $ \cite{Hutchinson,Barnsley}.
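As a concrete instance of such a system (our sketch, not part of the paper), take the two contractions $ w_1(x) = x/3 $ and $ w_2(x) = (x+2)/3 $ on $ [0, 1] $, whose attractor is the middle-thirds Cantor set:

```python
# A two-map IFS on [0, 1]: w1(x) = x/3, w2(x) = (x + 2)/3. Applying
# W(A) = w1(A) U w2(A) to a finite sample of an initial set drives it
# toward the attractor, the middle-thirds Cantor set.

w1 = lambda x: x / 3.0
w2 = lambda x: (x + 2.0) / 3.0

def W(points):
    """One step of the Hutchinson map on a finite sample."""
    return [w(p) for p in points for w in (w1, w2)]

A = [0.5]                 # crude one-point sample of an initial set A_0
for _ in range(8):
    A = W(A)

print(len(A))             # 2^8 = 256 sample points
print(min(A), max(A))     # min near 0, max near 1, as for the Cantor set
```

After $ k $ steps every sample point lies within $ 3^{-k} $ of the attractor, since each map contracts distances by the factor $ 1/3 $.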
The procedure starts with choosing an initial set $ A_0 \in \mathcal{B}(X) $, where $ \mathcal{B}(X) $ is the space of the non-empty compact subsets of $ X $, then iteratively applying the map $ W=\{ w_n, n=1, 2, ... \, N \} $ such that $ A_{k+1}= W(A_k) = \bigcup_{n=1}^N A_k^n $, where $ A_k^n = w_n(A_k) $. The fixed point of this map, $ A = W(A) = \lim_{k \to \infty} W^k(A_0) \in \mathcal{B}(X) $, is called the attractor of the IFS, and it represents the intended fractal. The idea of the structure of the abstract fractal can be realized using an IFS. The fractal constructed by an IFS is an invariant set. Therefore, the subsets at each step of the construction can be determined using the maps $ w_n, n=1, 2, ... \, m $ as illustrated in Fig. \ref{IFS} (a). Similarly, the maps transform each subfractal into subfractals of the subsequent order. Figure \ref{IFS} (b) demonstrates the action of $ w_n $'s on the abstract fractal. The difference between this case and the above IFS fractal construction is that the sets $ \mathcal{F}_{i_1 i_2 ... i_n} $ are fractals in themselves, whereas the sets $ A_k^n $ are not. Utilizing this idea, moreover, each subfractal can be expressed in terms of the iterated images of the whole fractal $ \mathcal{F} $, that is \[ \mathcal{F}_i= w_i (\mathcal{F}), \; \mathcal{F}_{ij}= w_j (w_i (\mathcal{F})), \; \mathcal{F}_{ijk}= w_k (w_j (w_i (\mathcal{F}))), ... \, , \] thus, in general we have \[ \mathcal{F}_{i_1 i_2 ... i_n}= w_{i_n} (w_{i_{n-1}} ( ... (w_{i_1} (\mathcal{F}))) ... ), \] from which we can define a point belonging to the fractal as the limit of the iterated images of $ \mathcal{F} $, \[ \mathcal{F}_{i_1 i_2 ...}= \lim_{n \to \infty} w_{i_n} (w_{i_{n-1}} ( ... (w_{i_1} (\mathcal{F}))) ... ). \] The existence of a separation constant $ \varepsilon_0 $ can be expressed in terms of $ w_n $: the condition is satisfied if \[ \min_n \inf_{\substack{i_n, j_n= \\ 1, 2 , ... \, m}} d \big( w_{i_n} ( ... (w_{i_1} (\mathcal{F})), w_{j_n} ( ...
(w_{j_1} (\mathcal{F})) \big) \geq \varepsilon_0. \] \begin{figure}[H] \centering \subfigure[]{\includegraphics[width = 2.8in]{IFS1}} \hspace{1.5cm} \subfigure[]{\includegraphics[width = 2.8in]{IFS2}} \caption{IFS} \label{IFS} \end{figure} In addition to the construction of fractals, the IFS is used to prove chaos for the so-called totally disconnected IFS corresponding to certain classes of self-similar fractals like the Cantor set \cite{BarnsleyB}. The proof consists of the construction of a dynamical system $ \{A; S\} $, where $ S: A \to A $ is the shift transformation defined by $ S(a) = w_n^{-1}(a) $ for $ a \in w_n(A) $. The system is called the shift dynamical system associated with the IFS, and it is then shown to be topologically conjugate to the shift map on the $ N $-symbols code space. We see that this approach follows the usual construction of chaos, which begins with defining a map with certain properties, where the conjugacy to a well-known chaotic map is the major key in discovering the chaotic nature of fractals. Again we emphasize that this approach is only applicable to totally disconnected fractals, namely the well-known Cantor sets. Differently, our approach is characterized by the similarity map $ \varphi: \mathcal{F} \to \mathcal{F} $ which, with regard to the IFS approach, can be seen as an abstraction of the geometric essence of the transformation $ S $. Using the idea of indexing the domain elements allows us to define the abstract map $ \varphi $. This shortens the path to proving chaos by eliminating the need for topological conjugacy. Moreover, it becomes possible to investigate the chaotic nature of several classes of fractals such as the Sierpinski fractals and the Koch curve. \section{Discussion} The fractal concept is axiomatically linked with the notion of self-similarity. This is why it is considered to be one of the two acceptable definitions of fractals.
That is, a fractal can be defined as a set that displays self-similarity at all scales. Mandelbrot defines a fractal as a set whose Hausdorff dimension is strictly larger than its topological dimension. In the present research, we introduce the conception of the abstract fractal, which can be considered as another criterion of fractalness. Indeed, the idea of the abstract fractal centers around the self-similarity property, and many self-similar fractals like the Cantor sets and the Sierpinski fractals are shown to be fractals in the sense of the abstract fractal. These fractals also satisfy the Mandelbrot definition. Moreover, in our previous paper \cite{AkhmetSimilarity}, we have also shown that the set of symbolic strings satisfies the definition of abstract self-similarity. Because of these facts, we believe that the notion of abstract fractal deserves to be the third definition of fractals and we hope it will be accepted by the mathematical community. Considering the abstract fractal as a new definition of fractal may open new opportunities for more theoretical investigations in this field as well as new possible applications in science and engineering. For example, we may start with the equivalency between these definitions. It is known that the fractals that display exact self-similarity at all scales satisfy the Mandelbrot definition of fractal. The proposed definition satisfies the self-similarity since it is the main pivot of the concept of the abstract fractal. But the interesting question is: Does the abstract fractal agree with the Mandelbrot definition? The notion of Hausdorff dimension for the abstract fractal is not yet developed enough to provide an answer to the question. However, the determination of the fractal dimension can possibly be performed based on two important properties. The first one is the self-similarity of the abstract fractal, which may provide a self-similar dimension that can be assumed to be equivalent to the Hausdorff dimension.
The second one is the accumulation condition, combined perhaps with the diameter condition. These properties are essential for describing the geometry of fractals; therefore, the fractal dimension can be characterized in terms of them. This is why the definition in our paper can give opportunities to compare abstract fractals with fractals defined through dimension. Furthermore, the suggested fractal definition can be elaborated through developments in chaotic dynamics, topological spaces, physics, chemistry, and neural network theories.
\begin{document} \renewcommand{\evenhead}{H~Gargoubi} \renewcommand{\oddhead}{Algebra ${\rm gl}(\lambda)$ Inside the Algebra of Differential Operators on the Real Line} \thispagestyle{empty} \FirstPageHead{9}{3}{2002}{\pageref{Gargoubi-firstpage}--\pageref{Gargoubi-lastpage}}{Letter} \copyrightnote{2002}{H~Gargoubi} \Name{Algebra $\boldsymbol{{\mathrm{gl}}(\lambda)}$ Inside the Algebra \\ of Differential Operators on the Real Line}\label{Gargoubi-firstpage} \Author{H~GARGOUBI} \Address{I.P.E.I.M., route de Kairouan, 5019 Monastir, Tunisia\\ E-mail: hichem.gargoubi@ipeim.rnu.tn} \Date{Received November 22, 2001; Revised February 26, 2002; Accepted March 12, 2002} \begin{abstract} \noindent The Lie algebra ${\rm gl}(\lambda)$ with $\lambda \in {\mathbb C}$, introduced by B~L~Feigin, can be embedded into the Lie algebra of differential operators on the real line (see~\cite{fe}). We give an explicit formula of the embedding of ${\rm gl}(\lambda)$ into the algebra ${\cal D}_{\lambda}$ of differential operators on the space of tensor densities of degree $\lambda$ on ${\mathbb R}$. Our main tool is the notion of projectively equivariant symbol of a differential operator. \end{abstract} \section{Introduction} The Lie algebra ${\rm gl}(\lambda)$ ($\lambda \in {\mathbb C}$) was introduced by B~L~Feigin in~\cite{fe} for the calculation of the cohomology of the Lie algebra of differential operators on the real line. The algebra ${\rm gl}(\lambda)$ is defined as the quotient of the universal enveloping algebra $\mathrm{U}(\mathrm{sl}_2)$ of $\mathrm{sl}_2$ with respect to the ideal generated by the element $\Delta - \lambda (\lambda - 1)$, where $\Delta$ is the Casimir element of $\mathrm{U}(\mathrm{sl}_2)$. ${\rm gl}(\lambda)$ is turned into a Lie algebra by the standard method of setting $[a,b] = ab -ba$.
According to Feigin, ${\rm gl}(\lambda)$ can be considered as an analogue of ${\rm gl}(n)$ for $n = \lambda \in {{\mathbb N}}$; it is also called the algebra of matrices of complex size, see also \cite{km,ls,sh,gl}. We consider the space ${\cal D}_{\lambda}$ of all linear differential operators acting on tensor densities of degree $\lambda$ on ${{\mathbb R}}$. One of the main results of \cite{fe} is the construction of an embedding ${\rm gl}(\lambda) \to {\cal D}_{\lambda}$. The purpose of this paper is to give an explicit formula of this embedding. We also show that this embedding realizes the isomorphism of Lie algebras ${\rm gl}(\lambda) \cong {\cal D}^{\rm pol}_{\lambda}$ constructed in \cite{bb1,bb2}, where ${\cal D}^{\rm pol}_{\lambda} \subset {\cal D}_{\lambda}$ is the subalgebra of differential operators with polynomial coefficients. The main idea of this paper is to use the {\it projectively equivariant symbol} of a differential operator, that is, an $\mathrm{sl}_2$-equivariant way to associate a polynomial function on $T^*{\mathbb R}$ to a~differential operator. The notion of projectively equivariant symbol was defined in \cite{cmz,lo} and used in \cite{ga,go,go1} for the study of modules of differential operators. \section{Basic definitions} {\bf 2.1 The Lie algebra $\boldsymbol{{\rm gl}(\lambda)}$.} Let $\mathrm{Vect}({\mathbb R})$ be the Lie algebra of smooth vector fields on~${\mathbb R}$ with complex coefficients: $X=X(x)\partial$, where $X(x)$ is a smooth complex function of one real variable; $X(x) \in C^{\infty}({\mathbb R},{\mathbb C})$, and where $\partial=\frac{d}{dx}$. Consider the Lie algebra $\mathrm{sl}_2\subset\mathrm{Vect}({\mathbb R})$ generated by the vector fields \begin{equation} \left\{\partial,x\partial,x^2\partial\right\}. \label{gensl} \end{equation} Denote $ e_i := x^i\partial$, $i=0,1,2$. The Casimir element \[ \Delta := e^2_1 - \frac{1}{2} (e_0 e_2+ e_2 e_0) \] generates the center of $\mathrm{U}(\mathrm{sl}_2)$.
The quotient \[ {\rm gl}(\lambda) := \mathrm{U}(\mathrm{sl}_2) / (\Delta - \lambda (\lambda - 1)), \qquad \lambda \in {{\mathbb C}} \] is naturally a Lie algebra containing $\mathrm{sl}_2$. {\bf 2.2 Modules of differential operators on $\boldsymbol{{\mathbb R}}$.} Denote ${\cal D}$ the Lie algebra of linear differential operators on ${{\mathbb R}}$ with complex coefficients: \begin{equation} A = a_n(x){\partial}^n+a_{n-1}(x){\partial}^{n-1}+\cdots+a_0(x), \label{ope1} \end{equation} with $a_i(x) \in C^{\infty}({\mathbb R},{\mathbb C})$. For $\lambda \in {\mathbb C}$, $\mathrm{Vect}({\mathbb R})$ is embedded into the Lie algebra ${\cal D}$ by: \begin{equation} X \mapsto L_X^{\lambda}:=X(x)\partial+\lambda X^{\prime}(x) . \label{lieder} \end{equation} Denote ${\cal D}_{\lambda}$ the $\mathrm{Vect}({\mathbb R})$-module structure with respect to the adjoint action of $\mathrm{Vect}({\mathbb R})$ on~${\cal D}$. The module ${\cal D}_{\lambda}$ has a natural filtration: ${\cal D}^0_{\lambda}\subset{\cal D}^1_{\lambda}\subset \cdots\subset{\cal D}^n_{\lambda}\subset\cdots$, where ${\cal D}^n_{\lambda}$ is the module of $n$-th order differential operators~(\ref{ope1}). Geometrically speaking, differential operators act on tensor densities, namely: $A:{\cal F}_\lambda \to {\cal F}_\lambda$, where ${\cal F}_\lambda$ is the space of tensor densities of degree $\lambda$ on ${\mathbb R}$ (i.e., of sections of the line bundle $(T^{\ast}{\mathbb R})^{\otimes\,\lambda}, \lambda\in {\mathbb C}$), that is: $\phi =\phi (x)(dx)^{\lambda }$, where $\phi (x) \in C^{\infty}({\mathbb R},{\mathbb C})$. It is evident that ${\cal F}_\lambda \cong C^{\infty}({\mathbb R},{\mathbb C})$ as linear spaces (but not as modules) for any $\lambda$. We use this identification throughout this paper. The Lie algebra structures of differential operators acting on the space of tensor densities and on the space of functions are also identified (see~\cite{ga}).
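A short check, spelled out here for the reader (it is classical and not written out in the text), that formula (\ref{lieder}) respects the Lie brackets: for $X = X(x)\partial$ and $Y = Y(x)\partial$ one computes, acting on a density $\phi$,

```latex
[L_X^{\lambda}, L_Y^{\lambda}]\,\phi
  = (XY' - X'Y)\,\phi' + \lambda\,(XY'' - X''Y)\,\phi
  = L_{[X,Y]}^{\lambda}\,\phi,
```

since $[X,Y] = (XY' - X'Y)\partial$ and $(XY' - X'Y)' = XY'' - X''Y$ (the $X'Y'$ terms cancel). Hence $X \mapsto L_X^{\lambda}$ is indeed an embedding of Lie algebras.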
The $\mathrm{Vect}({\mathbb R})$-modules ${\cal D}_{\lambda}$ were considered in classical works (see \cite{car,wil}) and, recently, stu\-died in a series of papers \cite{do,go,ga,go1,lmt}. {\bf 2.3 Principal symbol.} Let $\mathrm{Pol}(T^*{{\mathbb R}})$ be the space of functions on $T^*{{\mathbb R}}$ polynomial in the fibers. This space is usually considered as the space of symbols associated to the space of differential operators on ${{\mathbb R}}$. Recall that the {\it principal symbol} of a differential operator is the linear map $\sigma :{{\cal D} }\to \mathrm{Pol}(T^*{{\mathbb R}})$ defined by: \[ \sigma (A) = a_n(x) \xi^n , \] where $A$ is a differential operator (\ref{ope1}) and $\xi$ is the coordinate on the fiber. One can also speak about the principal symbol of an element of $\mathrm{U}(\mathrm{sl}_2)$. Indeed, $\mathrm{U}(\mathrm{sl}_2)$ is canonically identified with the symmetric algebra $S(\mathrm{sl}_2)$ as $\mathrm{sl}_2$-modules (see, e.g., \cite[p.82]{dix}). Using the realization (\ref{gensl}), the algebra $S(\mathrm{sl}_2)$ can be projected to $\mathrm{Pol}(T^*{{\mathbb R}})$. Therefore, one can define in a natural way the principal symbol on $S(\mathrm{sl}_2)$. Our goal is to construct an $\mathrm{sl}_2$-equivariant linear map $T_{\lambda}:\mathrm{U}(\mathrm{sl}_2) \to {\cal D}_{\lambda}$ which preserves the principal symbol, i.e., such that the following diagram commutes: \[ \begin{CD} \mathrm{U}(\mathrm{sl}_2) @>T_{\lambda} >> {\cal D}_{\lambda} \strut\\ @V{\sigma}VV @VV{\sigma}V \strut\\ \mathrm{Pol}(T^*{{\mathbb R}}) @>id >> \mathrm{Pol}(T^*{{\mathbb R}})\strut \end{CD} \] {\bf 2.4 Projectively equivariant symbol.} Viewed as a $\mathrm{Vect}({\mathbb R})$-module, the space of symbols corresponding to ${\cal D}_{\lambda}$ has the form: \begin{equation} \label{symb} \mathrm{Pol}(T^*{{\mathbb R}}) \cong {\cal F}_0\oplus{\cal F}_1\oplus\cdots\oplus {\cal F}_n\oplus\cdots.
\end{equation} The space of polynomials of degree $\leq n$ is a submodule of $\mathrm{Pol}(T^*{{\mathbb R}})$ which we denote $\mathrm{Pol}_n(T^*{{\mathbb R}})$. The following result of \cite{ga} allows one to identify, for arbitrary $\lambda$, ${\cal D}^n_{\lambda}$ with $\mathrm{Pol}_n(T^*{{\mathbb R}})$ as $\mathrm{sl}_2$-modules: (i) There exists a unique $\mathrm{sl}(2,{\mathbb R})$-isomorphism $\sigma_\lambda:{\cal D}^n_{\lambda} \to \mathrm{Pol}_n(T^*{{\mathbb R}})$ preserving the principal symbol. (ii) $\sigma_\lambda$ associates to each differential operator $A$ the polynomial $\sigma_{\lambda}(A)=\sum\limits_{p=0}^{n} \bar a_p(x) \xi^p $, defined by: \begin{equation} \bar a_p(x) = \sum_{j=p}^n\alpha_p^{j}a_j^{(j-p)}, \label{apbar} \end{equation} where the constants $\alpha_p^{j}$ are given by: \[ \alpha_p^{j} = \frac{{\binom{j}{p}{\binom{2\lambda-p}{j-p}}}}{{\binom{j+p+1}{2p+1}}} \] (the binomial coefficient $\binom{\lambda}{j} = \lambda (\lambda-1) \cdots (\lambda-j+1)/j!$ is a polynomial in $\lambda$). The isomorphism $\sigma_\lambda$ is called the \textit{projectively equivariant symbol map}. Its explicit formula was first found in~\cite{cmz,lo} in the general case of pseudo-differential operators on a~one-dimensional manifold (see also~\cite{lo} for the multi-dimensional case). \section{Main result} In this section, we give the main result of this paper. We adopt the following notations: \[ {[L_{X_1}^{\lambda}L_{X_2}^{\lambda}\cdots L_{X_n}^{\lambda}]}_+ := \sum_{\tau \in S_n} L^{\lambda}_{X_{\tau(1)}}\circ L^{\lambda}_{X_{\tau(2)}} \circ \cdots \circ L^{\lambda}_{X_{\tau(n)}} \] for a symmetric $n$-linear map from $\mathrm{Vect}({\mathbb R})$ to $\cal D$ and \[ {(X_1 X_2 \cdots X_n)}_{+} := \sum_{\tau \in S_n} X_{\tau(1)} X_{\tau(2)} \cdots X_{\tau(n)} \] for a symmetric $n$-linear map from $\mathrm{sl}_2$ to $\mathrm{U}(\mathrm{sl}_2)$, where $S_n$ is the group of permutations of $n$ elements and $X_i \in \mathrm{sl}_2$. 
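Before the main theorem, a numerical sanity check (our sketch, not part of the paper; the test value $\lambda = 2.5$ and all helper names are ours): in the realization (\ref{lieder}) applied to the generators $e_i = x^i\partial$, the Casimir element acts on densities as the scalar $\lambda(\lambda - 1)$, consistent with the ideal $(\Delta - \lambda(\lambda - 1))$ being annihilated in ${\cal D}_{\lambda}$.

```python
# Polynomials are coefficient lists [a0, a1, a2, ...]. We realize the
# generators as density Lie derivatives (formula (lieder) with X = x^i):
# e_i p = x^i p' + lam * i * x^(i-1) p, and check that the Casimir
# Delta = e_1^2 - (e_0 e_2 + e_2 e_0)/2 multiplies any polynomial
# by the scalar lam * (lam - 1).

LAM = 2.5                              # arbitrary numeric test value of lambda

def deriv(p):
    return [k * c for k, c in enumerate(p)][1:] or [0.0]

def shift(p, i):                       # multiply by x^i
    return [0.0] * i + list(p)

def add(p, q):
    n = max(len(p), len(q))
    p = list(p) + [0.0] * (n - len(p))
    q = list(q) + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def e(i, p):                           # e_i acting on the density p
    out = shift(deriv(p), i)
    if i > 0:
        out = add(out, [LAM * i * c for c in shift(p, i - 1)])
    return out

def casimir(p):
    half = [-0.5 * c for c in add(e(0, e(2, p)), e(2, e(0, p)))]
    return add(e(1, e(1, p)), half)

phi = [1.0, -3.0, 0.0, 2.0]            # phi(x) = 1 - 3x + 2x^3
print(casimir(phi))                    # 3.75 * phi, and 3.75 = LAM * (LAM - 1)
```

Repeating the computation with symbolic $\lambda$ gives the scalar $\lambda(\lambda - 1)$ identically, for any test density.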
\resetfootnoterule \begin{theorem} \label{main} (i) For arbitrary $\lambda \in {{\mathbb C}}$, there exists a unique $\mathrm{sl}_2$-equivariant linear map preserving the principal symbol: \[ T_{\lambda}:\mathrm{U}(\mathrm{sl}_2) \to {\cal D}_{\lambda} \] defined by \begin{equation} T_{\lambda}({(X_1 X_2 \cdots X_n)}_{+}) = {[L_{X_1}^{\lambda}L_{X_2}^{\lambda}\cdots L_{X_n}^{\lambda}]}_+ , \label{tlform} \end{equation} where $X_i \in \{e_0,e_1,e_2 \}$, $L_{X_i}^{\lambda}$ is given by (\ref{lieder}) and $n = 1,2,\dots$. (ii) The operator $T_{\lambda}$ is given in terms of the $\mathrm{sl}_2$-equivariant symbol (\ref{apbar}) by: \begin{equation} \sigma_\lambda([L_{X_1}^{\lambda}L_{X_2}^{\lambda}\cdots L_{X_n}^{\lambda}]_+) = \sum_{\mbox{\scriptsize$\begin{matrix} 0\leq k \leq n \\ k \; even\end{matrix}$}} P^n_k (\lambda) {\cal A}_k (X_1,\dots,X_n) \xi^{n-k} , \label{symbT} \end{equation} where \begin{gather} {\cal A}_k (X_1,\dots,X_n) \nonumber\\ \qquad {}= \sum_{2p+m=k} {\textstyle \binom{k/2}{p}} {(-2)}^p {(X_1'' \dots X_p'' X_{p+1}' \cdots X_{p+m}'{X}_{p+m+1} \cdots X_n)}_+ \label{opsy} \end{gather} and \begin{equation} P^n_k (\lambda) = \sum^n_{p=0} \sum^n_{l=n-k} (l-n+k)! \; \frac{{{\binom{l}{{n-k}}^2}{\binom{2\lambda-n+k}{l-n+k}}}}{{\binom{n-k+l+1}{2n-2k+1}}} {\textstyle \binom{n}{p}} \mbox{\scriptsize$\left\{\begin{matrix} p \\ l\end{matrix}\right\}$} {\lambda}^{n-p} , \label{Polsy} \end{equation} where $\mbox{\scriptsize$\left\{\begin{matrix} p \\ l\end{matrix}\right\}$}$ is the Stirling number of the second kind\footnote{We refer to \cite{gkp} as a nice elementary introduction to the combinatorics of the Stirling numbers.}. \end{theorem} It is worth noticing that the linear map $T_{\lambda}$ does not depend on the choice of the PBW-base in $\mathrm{U}(\mathrm{sl}_2)$. \section{Proof of Theorem \ref{main}} By construction, the linear map $T_{\lambda}$ is $\mathrm{sl}_2$-equivariant.
{\bf 4.1 $\boldsymbol{\mathrm{sl}_2}$-invariant symmetric differential operators.} To prove part (ii) of Theorem~\ref{main} one needs the following \begin{proposition} \label{resPro} For arbitrary $\mu \in {{\mathbb C}}$ and $n =1,2,\dots$, there exists at most one, up to proportionality, $\mathrm{sl}_2$-equivariant symmetric operator ${\otimes}^n \mathrm{sl}_2 \to {{\cal F}}_\mu $ which is differential with respect to the vector fields $X_i \in \mathrm{sl}_2$. This operator exists if and only if $\mu = k-n$, where~$k$ is an even positive integer. It is denoted: $ {\cal A}_k : {\otimes}^n \mathrm{sl}_2 \to {{\cal F}}_{k-n}$, and defined by the expression~(\ref{opsy}). \end{proposition} \begin{proof} Each $k$-th order differential operator ${\cal A}:{\otimes}^n \mathrm{sl}_2 \to {{\cal F}}_\mu$ is of the form: \[ {\cal A} (X_1,\dots,X_n) = \sum_{2p+m=k} \beta_p(x) {(X_1'' \cdots X_p'' X_{p+1}' \cdots X_{p+m}'{X}_{p+m+1} \cdots X_n)}_+, \] where $\beta_p(x)$ are some functions. The condition of $\mathrm{sl}_2$-equivariance for $\cal A$ reads as follows: \[ X [{\cal A} (X_1,\dots,X_n)]' + \mu X' {\cal A} (X_1,\dots,X_n) = \sum^n_{i=1} {\cal A} (X_1,\dots,L_{X}^{-1}(X_i),\dots,X_n) , \] where $X \in \mathrm{sl}_2$. Substitute $X=\partial$ to check that the coefficients $\beta_p(x)$ do not depend on $x$. Substitute $X= x\partial$ to obtain the condition $\mu = k-n$. At last, substitute $X= x^2\partial$ and put $\beta_0 =1$ to obtain, for even $k$, the coefficients from (\ref{opsy}). If $k$ is odd, one obtains $\beta_p = 0$ for all $p$. Proposition \ref{resPro} is proven.\end{proof} The general form (\ref{symbT}) is a consequence of Proposition \ref{resPro} and decomposition (\ref{symb}). {\bf 4.2 Polynomials $\boldsymbol{P^n_k(\lambda)}$.} To compute the polynomials $P^n_k$, put $X_1=\cdots=X_n=x\partial$. One readily gets, from (\ref{symbT}), \begin{equation} \sigma_\lambda ( T_\lambda (X_1,\dots,X_n))|_{x=1}= n! 
\sum_{\mbox{\scriptsize $\begin{matrix} 0\leq k \leq n \\ k \; even\end{matrix}$}} P^n_k (\lambda) \; \xi^{n-k}. \label{eq1} \end{equation} Furthermore, using the well-known expression ${(x\partial)}^n = \sum\limits_{l=0}^n \mbox{\scriptsize$\left\{\begin{matrix} n\\ l\end{matrix}\right\}$} x^l {\partial}^l$, one has: \begin{gather*} T_\lambda (X_1,\dots,X_n) = n! \; (x\partial + \lambda)^n \\ \phantom{T_\lambda (X_1,\dots,X_n) } {} = n! \sum_{p=0}^n {\textstyle \binom{n}{p}} {(x\partial)}^p \lambda^{n-p} = n! \sum_{p=0}^n \sum^n_{l=0} {\textstyle\binom{n}{p}} \mbox{\scriptsize$\left\{\begin{matrix} p\\ l\end{matrix}\right\}$} x^l {\partial}^l \lambda^{n-p} . \end{gather*} A straightforward computation gives the projectively equivariant symbol (\ref{apbar}) of this differential operator: \begin{gather*} \sigma_\lambda ( T_\lambda (X_1,\dots,X_n))|_{x=1}\\ \qquad{}= n! \sum_{\mbox{\scriptsize$\begin{matrix} 0\leq k \leq n \\ k \; even\end{matrix}$}} \sum^n_{p=0} \sum^n_{l=n-k} (l-n+k)! \frac{{{\binom{l}{{n-k}}^2}{\binom{2\lambda-n+k}{l-n+k}}}}{{\binom{n-k+l+1}{2n-2k+1}}} {\textstyle\binom{n}{p}} \mbox{\scriptsize$\left\{\begin{matrix} p\\ l\end{matrix}\right\}$} {\lambda}^{n-p} \xi^{n-k} . \end{gather*} Compare with the equality (\ref{eq1}) to obtain formula (\ref{Polsy}). Theorem \ref{main} (ii) is proven. {\bf 4.3 Uniqueness.} Let $T$ be an $\mathrm{sl}_2$-equivariant linear map $\mathrm{U}(\mathrm{sl}_2) \to {\cal D}_{\lambda}$ for a certain $\lambda \in {{\mathbb C}}$. In view of the decomposition (\ref{symb}), it follows from Proposition \ref{resPro} that $\sigma_\lambda \circ T |_{{\cal F}_k}=c_k(\lambda) {\cal A}_k$, where $c_k(\lambda)$ is a constant depending on $\lambda$. Recall that $\mathrm{Pol}_n(T^*{{\mathbb R}})$ is a {\it rigid} $\mathrm{sl}_2$-module, i.e., every $\mathrm{sl}_2$-equivariant linear map on $\mathrm{Pol}_n(T^*{{\mathbb R}})$ is proportional to the identity (see, e.g., \cite{lo}).
Assuming, now, that $T$ preserves the principal symbol, the rigidity of $\mathrm{Pol}_n(T^*{{\mathbb R}})$ fixes the constants $c_k(\lambda)$ in a unique way. Hence the uniqueness of $T_\lambda$. Theorem \ref{main} is proven. \section{The embedding $\boldsymbol{{\rm gl(\lambda)} \to {\cal D}_{\lambda}}$} A corollary of the uniqueness of the operator $T_\lambda$ and results of \cite{bb1,bb2,fe,sh} is that the embedding ${\rm gl(\lambda)} \to {\cal D}_{\lambda}$ constructed in \cite{fe} coincides with $T_\lambda$. More precisely, according to results of \cite{bb1,bb2,sh}, there exists a homomorphism of Lie algebras $p_\lambda : \mathrm{U}(\mathrm{sl}_2) \to {\cal D}_{\lambda}$ preserving the principal symbol. The homomorphism $p_\lambda$ is, in particular, $\mathrm{sl}_2$-equivariant. By uniqueness of $T_\lambda$, one has $T_\lambda=p_\lambda$. It is also proven that the kernel of $p_\lambda$ is a two-sided ideal of $\mathrm{U}(\mathrm{sl}_2)$ generated by $\Delta - \lambda(\lambda-1)$ (see \cite{bb1,bb2}). Taking the quotient, one then has an embedding ${\tilde T_\lambda}:{\rm gl(\lambda)} \to {\cal D}_{\lambda}$. Since the embedding from~\cite{fe} preserves the principal symbol, it is equal to ${\tilde T_\lambda}$. Finally, it is obvious that the image of $T_\lambda$ is the subalgebra ${\cal D}^{\rm pol}_{\lambda} \subset {\cal D}_{\lambda} $ of differential operators with polynomial coefficients. Therefore, ${\tilde T_\lambda}:{\rm gl(\lambda)} \to {\cal D}^{\rm pol}_{\lambda}$ is a Lie algebra isomorphism. \section{Examples} As an illustration of Theorem \ref{main}, let us give the expressions of the general formulae~(\ref{tlform}) and (\ref{symbT}) for the orders $n=1,2,3,4,5$. Let $X_1,X_2,X_3,X_4$ and $X_5$ be arbitrary vector fields in $\mathrm{sl}_2$. 1) The $\mathrm{sl}_2$-equivariant symbol, defined by (\ref{apbar}), of a first order operator, the Lie derivative $L_{X_1}^{\lambda}$, is \[ \sigma_\lambda (L_{X_1}^{\lambda}) = X_1 (x) \xi.
\] 2) The ``anti-commutator'' ${[L_{X_1}^{\lambda}L_{X_2}^{\lambda}]}_+$ has the following projectively equivariant symbol: \[ \sigma_\lambda ({[L_{X_1}^{\lambda}L_{X_2}^{\lambda}]}_+)= {({X_1}{X_2})}_+ \xi^2 +\frac{1}{3}\lambda(\lambda-1)({({X_1'}{X_2'})}_+ -2 {(X_1'' X_2)}_+) \] which also follows from (\ref{apbar}). 3) The projectively equivariant symbol of a third order expression ${[L_{X_1}^{\lambda}L_{X_2}^{\lambda}L_{X_3}^{\lambda}]}_+$ can also be easily calculated from (\ref{apbar}). The result is: \begin{gather*} \sigma_\lambda({[L_{X_1}^{\lambda}L_{X_2}^{\lambda}L_{X_3}^{\lambda}]}_+) ={({X_1}{X_2}{X_3})}_+ \xi^3 \\ \qquad{}+\frac{1}{5}(3\lambda^2-3\lambda-1) ({(X_1' X_2' X_3)}_+ -2{(X_1'' X_2 X_3)}_+) \xi . \end{gather*} 4) Direct calculation from (\ref{apbar}) gives the projectively equivariant symbol of a fourth order expression ${[L_{X_1}^{\lambda}L_{X_2}^{\lambda}L_{X_3}^{\lambda}L_{X_4}^{\lambda}]}_+$, that is: \begin{gather*} \sigma_\lambda({[L_{X_1}^{\lambda}L_{X_2}^{\lambda}L_{X_3}^{\lambda}L_{X_4}^{\lambda}]}_+) ={({X_1}{X_2}{X_3}{X_4})}_+ \xi^4 \\ \qquad{}+\frac{1}{7}(6\lambda^2-6\lambda-5) ({(X_1' X_2' X_3 X_4)}_+ -2{(X_1'' X_2 X_3 X_4)}_+) \xi^2 \\ \qquad {}+\frac{1}{15}\lambda(\lambda-1)(3\lambda^2-3\lambda-1) ({(X_1' X_2' X_3' X_4')}_+ -4{(X_1'' X_2' X_3' X_4)}_+ \\ \qquad{}+4{(X_1'' X_2'' X_3 X_4)}_+) .
\end{gather*} 5) In the same manner, one can easily check that the $\mathrm{sl}_2$-equivariant symbol of a fifth order expression ${[L_{X_1}^{\lambda} L_{X_2}^{\lambda}L_{X_3}^{\lambda}L_{X_4}^{\lambda}L_{X_5}^{\lambda}]}_+$ is: \begin{gather*} \sigma_\lambda({[L_{X_1}^{\lambda}L_{X_2}^{\lambda}L_{X_3}^{\lambda} L_{X_4}^{\lambda}L_{X_5}^{\lambda}]}_+) ={({X_1}{X_2}{X_3}{X_4}{X_5})}_+ \xi^5 \\ \qquad{}+\frac{5}{9}(2\lambda^2-2\lambda-3) ({(X_1' X_2' X_3 X_4 X_5)}_+ -2{(X_1'' X_2 X_3 X_4 X_5)}_+) \xi^3\\ \qquad {}+\frac{1}{7}(3\lambda^4-6\lambda^3+3\lambda+1) ({(X_1' X_2' X_3' X_4' X_5)}_+ -4{(X_1'' X_2' X_3' X_4 X_5)}_+\\ \qquad{}+4{(X_1'' X_2'' X_3 X_4 X_5)}_+)\xi . \end{gather*} \subsection*{Acknowledgments} I would like to thank V~Ovsienko for the statement of the problem. I am also grateful to Ch~Duval and A~El~Gradechi for enlightening discussions.
{"config": "arxiv", "file": "math0302027.tex"}
TITLE: Associativity and commutativity of $\lvert x + y\rvert$ QUESTION [0 upvotes]: I'm trying to understand why the operation $\lvert x + y\rvert$ (on the set of all real numbers) isn't associative, but is commutative. I know addition is commutative, $x+y = y + x$, but the absolute value operation is throwing me off a little. My thinking so far as to why $\lvert x + y\rvert$ is not associative: $\lvert x + y\rvert \ne \lvert x\rvert+\lvert y\rvert $ in general, e.g. for $x=-4, y=6$: $\lvert -4 + 6\rvert = 2 $, whereas $\lvert -4\rvert+\lvert 6\rvert =10$. And for why it is commutative, the best I can come up with is this, which doesn't seem right: $\lvert x + y\rvert = \lvert y+x\rvert $ $\lvert x\rvert+\lvert y\rvert = \lvert y\rvert+\lvert x\rvert $ REPLY [0 votes]: Let $P$ be a property a real number can have. Then to disprove $\forall x \in \mathbb R : P(x)$, you have to prove $\exists x \in \mathbb R : \neg P(x)$. That is, you have to find a counter-example. Note that the associativity of the operator $(x,y)\mapsto|x+y|$ you have to disprove is $$ \big||a+b|+c\big| = \big|a+|b+c|\big|. $$ That is, find values of $a$, $b$ and $c$ for which the above fails. Hint: use $a=0$ and $b$ such that $b<0$. For commutativity, you've got it right. By taking the absolute value on both sides of the equation, $a+b = b+a$ converts to $|a+b| = |b+a|$.
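To make the reply's counterexample concrete, here is a quick numerical check (the small `op` helper is a throwaway name, not part of the original question):

```python
def op(x, y):
    """The binary operation (x, y) -> |x + y|."""
    return abs(x + y)

# Commutativity holds for any sample pair: |x + y| = |y + x|.
assert op(-4, 6) == op(6, -4) == 2

# Associativity fails with a = 0, b = -1, c = 1:
# op(op(a, b), c) = ||0 + (-1)| + 1| = 2, but
# op(a, op(b, c)) = |0 + |(-1) + 1|| = 0.
a, b, c = 0, -1, 1
assert op(op(a, b), c) == 2
assert op(a, op(b, c)) == 0
```

One counterexample is enough to disprove associativity, while commutativity follows for all pairs from commutativity of addition.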
{"set_name": "stack_exchange", "score": 0, "question_id": 2703119}
TITLE: Prove that $d(x,y)=\sum_{i=1}^\infty \frac{|x_i - y_i|}{2^i}$ converges QUESTION [1 upvotes]: Question: Let $S$ be the set of sequences of $0$s and $1$s. For $x = (x_1, x_2, x_3, ...)$ and $y = (y_1, y_2, y_3, ...)$, define $d(x,y)=\sum_{i=1}^\infty \dfrac{|x_i - y_i|}{2^i}$. Prove that the infinite sum in the definition of $d(x,y)$ converges for all $x$ and $y$. Incomplete answer: Since the maximum of $d(x,y)$ occurs when all entries of one of $x$ and $y$ are $1$ and those of the other are $0$, and the minimum of $d(x,y)$ occurs when $x_n=y_n$ for all indices $n$, we have $0=\sum_{i=1}^\infty \dfrac{0}{2^i}\leq d(x,y)=\sum_{i=1}^\infty \dfrac{|x_i - y_i|}{2^i}\leq \sum_{i=1}^\infty \dfrac{1}{2^i}=1 $. But how does one prove that $d(x,y)$ converges when $x$ and $y$ have fixed arbitrary entries? Thank you. REPLY [1 votes]: The sequence $a_n = \sum_{k=1}^n \frac{|x_k-y_k|}{2^k}$ is monotone and bounded from above, thus converges. $$a_n \le \sum_{k=1}^n \frac{1}{2^k} \le \sum_{k=1}^\infty \frac{1}{2^k} = 1$$
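As a numerical sanity check of the reply's argument, the partial sums $a_n$ are monotone and bounded by $1$; the `d_partial` helper below is illustrative only, with sequences truncated to lists:

```python
def d_partial(x, y, n):
    """n-th partial sum of sum_{i>=1} |x_i - y_i| / 2^i (sequences as lists)."""
    return sum(abs(x[i] - y[i]) / 2 ** (i + 1) for i in range(n))

x = [1] * 20          # the extreme pair: all ones vs. all zeros
y = [0] * 20
sums = [d_partial(x, y, n) for n in range(1, 21)]

# Monotone nondecreasing and bounded above by 1, hence convergent.
assert all(s1 <= s2 for s1, s2 in zip(sums, sums[1:]))
assert sums[-1] <= 1
assert abs(sums[-1] - (1 - 2 ** -20)) < 1e-12   # geometric partial sum
```

The monotone convergence theorem for bounded real sequences then gives convergence for every fixed pair $x,y$.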
{"set_name": "stack_exchange", "score": 1, "question_id": 1128623}
TITLE: Schur decomposition of matrix $X - \alpha \, u_1 u_1^T$ QUESTION [0 upvotes]: Consider the Schur decomposition $X = URU^T$ of a real matrix $X$, where $U$ is orthogonal and $R$ is upper triangular. Is there a nice way to compute the Schur decomposition of the matrix $X - \alpha \, u_1 u_1^T$ for any $\alpha \in \mathbb{R}$, where $u_1u_1^T$ is the outer product of the first column of $U$ with itself? I'm not sure how to use the decomposition of $X$ here. REPLY [2 votes]: The same orthogonal matrix $U$ will give the Schur decomposition: $$U^\top(X-\alpha u_1u_1^\top)U=U^\top XU-\alpha U^\top u_1u_1^\top U=R-\alpha e_1e_1^\top=\tilde{R}$$ $$\therefore\ X-\alpha u_1u_1^\top=U\tilde{R}U^\top$$ where $\tilde{R}$ is the same as matrix $R$ except $\tilde{R}_{11}=R_{11}-\alpha$. Note that $U^\top u_1=U^{-1}u_1=e_1$.
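The identity in the reply is easy to verify numerically. The sketch below manufactures a Schur pair $(U,R)$ directly (orthogonal $U$ from a QR factorization, upper triangular $R$) rather than calling a Schur routine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 5, 2.7

# Build X = U R U^T with U orthogonal and R upper triangular.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
R = np.triu(rng.standard_normal((n, n)))
X = U @ R @ U.T

# Rank-one update along the first Schur vector u1; since U^T u1 = e1,
# only the (1,1) entry of R changes.
u1 = U[:, 0]
R_tilde = R.copy()
R_tilde[0, 0] -= alpha

lhs = X - alpha * np.outer(u1, u1)
rhs = U @ R_tilde @ U.T
assert np.allclose(lhs, rhs)
```

The check mirrors the algebra in the answer: $U^\top u_1 = e_1$, so the update passes through the similarity transform untouched except for $\tilde R_{11}=R_{11}-\alpha$.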
{"set_name": "stack_exchange", "score": 0, "question_id": 3944104}
TITLE: proof: $r \in \langle S\setminus \{r\}\rangle \leftrightarrow \langle S\rangle = \langle S\setminus \{r\} \rangle$ QUESTION [1 upvotes]: Theorem: Let $V$ be a $K$-vector space, and $ r \in S \subseteq V$: $$r \in \langle S\setminus \{r\}\rangle \leftrightarrow \langle S\rangle = \langle S\setminus \{r\} \rangle$$ Proof:$$ "\to"$$ $"\subseteq"$ -- I prove that $S \subseteq \langle S\setminus \{r\} \rangle$: $$ \begin{align} x \in S &\to x \in \{r\} \cup S \setminus \{r\} \\ &\to x = r \vee x \in S \setminus \{r\} \\ &\to x \in \langle S\setminus \{r\}\rangle \text{ by hypothesis } \vee \\ &\vee x \in \langle S\setminus \{r\}\rangle \text{ because }S\setminus \{r\} \subseteq \langle S\setminus \{r\}\rangle \end{align}$$ Now it is simple, in fact $$S \subseteq \langle S\setminus \{r\}\rangle \to \langle S\rangle \subseteq \langle \langle S\setminus \{r\}\rangle \rangle= \langle S\setminus \{r\}\rangle \to \langle S \rangle \subseteq \langle S\setminus \{r\}\rangle$$ $"\supseteq"$-- Naturally $S\setminus \{r\}\subseteq S$ with $r \in S$, therefore: $$ \langle S\setminus \{r\}\rangle \subseteq \langle S \rangle$$ $$"\leftarrow"$$ We have by hypothesis that $r \in S$, therefore: $$r \in S \subseteq \langle S\rangle=\langle S\setminus \{r\}\rangle \to r \in \langle S\setminus \{r\}\rangle$$ Is it correct? REPLY [3 votes]: The part of the proof showing that $S \subseteq \langle S \setminus \{r\}\rangle$ can be written slightly more elegantly as follows: Clearly one has $S \setminus \{ r\} \subseteq \langle S \setminus \{r\}\rangle$ Furthermore, $\{r\} \subseteq \langle S \setminus \{r\}\rangle$ by assumption. Hence $$S = (S \setminus \{r\}) \cup \{r\} \subseteq \langle S \setminus \{r\}\rangle\,.$$ Otherwise the proof looks good.
{"set_name": "stack_exchange", "score": 1, "question_id": 2136918}
TITLE: Evaluating $\lim\limits_{h\to 0^+} \frac1h(\int_{0}^{\pi}\sin^{h}x~{\rm d}x-\pi)$ QUESTION [5 upvotes]: The initial question is to find $$\lim_{n\rightarrow \infty}n^3\left(\tan\left(\int_{0}^{\pi}\sqrt[n] {\sin x}\,{\rm d}x\right)+\sin\left(\int_{0}^{\pi}\sqrt[n] {\sin x}\,{\rm d}x\right)\right),$$ and I simplify it to be $$\lim_{h\rightarrow0} \frac{1}{h}\left(\int_{0}^{\pi}\sin^{h}x\,{\rm d}x-\pi\right)=?$$ Am I wrong, or does anyone have an idea? $$\lim_{n\rightarrow \infty}n^3\left(\tan\left(\int_{0}^{\pi}\sqrt[n] {\sin x}{\rm d}x\right)+\sin\left(\int_{0}^{\pi}\sqrt[n] {\sin x}{\rm d}x\right)\right)\\= \lim_{h\rightarrow 0^+}\frac{1}{h^3}{\left(\tan\left(\int_{0}^{\pi} {\sin^h x}{\rm d}x-\pi\right)-\sin\left(\int_{0}^{\pi} {\sin^h x}{\rm d}x-\pi\right)\right)}\\=\frac{1}{2}\lim_{h\rightarrow 0^+}\frac{1}{h^3}\left({\int_{0}^{\pi}\sin^{h}x~{\rm d}x-\pi}\right)^3$$ REPLY [1 votes]: For $h>-1,$ let $I(h) = \int_0^{\pi}[(\sin x)^h - 1]\, dx.$ ($h>-1$ ensures a convergent integral). I'll show $$\tag 1 I(h) = h\int_0^{\pi}\ln (\sin x)\, dx + O(h^2).$$ The Lagrange form of the remainder in Taylor's theorem shows $$e^u=1+u +r(u), \,\,\text {where } |r(u)|\le e^{|u|}u^2.$$ Thus for $x\in (0,\pi),$ $$(\sin x)^h = e^{h\ln (\sin x)} = 1 + h\ln (\sin x) + r_h(x),$$ where $|r_h(x)|\le (\exp{|h\ln (\sin x)|})h^2\ln^2 (\sin x).$ We'll be done if we show $\int_0^{\pi}|r_h(x)|\,dx = O(h^2).$ Assume $|h|<1/2.$ Then $$\exp{|h\ln (\sin x)|} = \exp{\{|h|\ln (1/\sin x)\}} = \exp{\ln (1/\sin x)^{|h|}} = \frac{1}{(\sin x)^{|h|}}\le \frac{1}{(\sin x)^{1/2}}.$$ For such $h$ this implies $$|r_h(x)|\le h^2\frac{\ln^2 (\sin x)}{(\sin x)^{1/2}}.$$ The function of $x$ on the right is integrable over $(0,\pi),$ since the singularity at $0$ is on the order of $(\ln x)^2/x^{1/2}$ at $0;$ similarly for the singularity at $\pi.$ Putting everything together proves $(1),$ and yields the desired answers to both questions.
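Numerically, the limit in question equals $\int_0^\pi \ln(\sin x)\,dx = -\pi\ln 2 \approx -2.1776$, which a crude midpoint-rule quadrature (step count chosen ad hoc) confirms:

```python
import math

def integral_sin_pow(h, n=200_000):
    """Midpoint-rule approximation of the integral of sin(x)**h over (0, pi)."""
    dx = math.pi / n
    return sum(math.sin((k + 0.5) * dx) ** h for k in range(n)) * dx

h = 1e-3
val = (integral_sin_pow(h) - math.pi) / h

# Expected limit: integral of ln(sin x) over (0, pi) = -pi * ln 2.
assert abs(val - (-math.pi * math.log(2))) < 1e-2
```

The midpoint rule sidesteps the (integrable) endpoint behavior at $0$ and $\pi$, since midpoints never hit the endpoints.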
{"set_name": "stack_exchange", "score": 5, "question_id": 2648460}
TITLE: Linear Partial Differential Equation QUESTION [1 upvotes]: I was trying to solve $$\frac{{\partial}^4\phi}{\partial{z^2}\partial\bar{z}^2}=0$$ using the Wirtinger derivatives $$\frac{\partial}{\partial z} = \frac{1}{2}(\frac{\partial }{\partial x} - i\frac{\partial }{\partial y})$$ $$\frac{\partial}{\partial \bar{z}} = \frac{1}{2}(\frac{\partial }{\partial x} + i\frac{\partial }{\partial y})$$ Squaring each Wirtinger derivative gives $$\frac{\partial^2}{\partial {z}^2} = \frac{1}{4}(\frac{\partial^2 }{\partial x^2} - 2i\frac{\partial^2 }{\partial y \partial x}-\frac{\partial^2 }{\partial y^2})$$ $$\frac{\partial^2}{\partial \bar{z}^2} = \frac{1}{4}(\frac{\partial^2 }{\partial x^2} + 2i\frac{\partial^2 }{\partial y \partial x}-\frac{\partial^2 }{\partial y^2})$$ Substituting these into the equation I wanted to solve, I obtained two linear partial differential equations by separating the real and imaginary parts. I'm having trouble solving them simultaneously. Please help. REPLY [1 votes]: Well, the whole point of using Wirtinger derivatives is to make this kind of problem easy. One can "trivially" see that $\phi = (\overline{z}+a)p(z) + (z+b)q(\overline{z})$ is a solution, for arbitrary functions $p$ and $q$. Why is this "trivial"? Because $z$ and $\overline{z}$ behave as if they were (linearly) independent variables. Posing the question you did is very similar to (the same as?) asking for solutions to $$\frac{\partial^4\phi}{\partial s^2\partial t^2}=0$$ for two linearly independent variables $s$ and $t$. (And oh, by the way, you'd get a mess if you tried to solve the above using $s=u+v$ and $t=u-v$ - the same mess that you are currently getting).
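Because $z$ and $\bar z$ can be treated as independent symbols, the proposed solution can be checked mechanically with a computer algebra system; in the sketch below `p` and `q` are arbitrary undetermined functions:

```python
import sympy as sp

z, zb, a, b = sp.symbols('z zbar a b')
p = sp.Function('p')
q = sp.Function('q')

# Candidate solution phi = (zbar + a) p(z) + (z + b) q(zbar),
# with z and zbar treated as independent variables.
phi = (zb + a) * p(z) + (z + b) * q(zb)

# Apply d^4 / (dz^2 dzbar^2); the mixed derivative should vanish identically.
result = sp.diff(phi, z, 2, zb, 2)
assert sp.simplify(result) == 0
```

Each summand dies after two derivatives: $(\bar z + a)p(z)$ is linear in $\bar z$, and $(z+b)q(\bar z)$ is linear in $z$.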
{"set_name": "stack_exchange", "score": 1, "question_id": 2644204}
TITLE: Probability of a specific sequence QUESTION [3 upvotes]: I was preparing for my probability exam and I came across this problem. A fair die is rolled until either a $6$ comes up or two $5$s come up in a row. For example, some of the possible outcomes are $$52546, 6, 36, 3251541355, 55, 255 \,\, \text {etc} $$ Find the probability that the last number to come up is a $6$. I have attempted to solve this problem this way. Let $X$ be the number of times the die was rolled to obtain two $5$s in a row and $Y$ be the number of times the die was rolled to obtain a $6$. Intuitively I thought that there could be a $c$ such that $c(P(Y=n)) = P(X=n)$. Then $$1=\sum_{n=1}^\infty P(X=n)+P(Y=n)=\sum_{n=1}^\infty cP(Y=n)+P(Y=n)=(1+c)\sum_{n=1}^\infty P(Y=n)$$ $$\implies \sum_{n=1}^\infty P(Y=n)=\frac {1} {1+c}$$ However I am unable to find such a constant $c$. Any help / insight is appreciated. Thanks REPLY [1 votes]: The absorbing chain matrix is $$\left( \begin{array}{ccccc} \text{} & o & 5 & 55 & 6 \\ o & \frac{2}{3} & \frac{1}{6} & 0 & \frac{1}{6} \\ 5 & \frac{2}{3} & 0 & \frac{1}{6} & \frac{1}{6} \\ 55 & 0 & 0 & 1 & 0 \\ 6 & 0 & 0 & 0 & 1 \\ \end{array} \right)$$ The states have been labeled $o$, which represents a throw of $(1,2,3,4)$; $5$, which means the last throw was a single $5$; $55$, which means two $5$s in a row; and $6$, which means a $6$ was thrown. $$ \text{Q=}\left( \begin{array}{cc} \frac{2}{3} & \frac{1}{6} \\ \frac{2}{3} & 0 \\ \end{array} \right)$$ $$\text{R=}\left( \begin{array}{cc} 0 & \frac{1}{6} \\ \frac{1}{6} & \frac{1}{6} \\ \end{array} \right)$$ $$\text{0=}\left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ \end{array} \right)$$ Now $B=(I-Q)^{-1}R$ gives the probability of ending in each absorbing state (columns) starting from each transient state (rows). $$\text{B=}\left( \begin{array}{ccc} \text{} & 55 & 6 \\ o & \frac{1}{8} & \frac{7}{8} \\ 5 & \frac{1}{4} & \frac{3}{4} \\ \end{array} \right)$$ When we check the entry $(o,6)$ we see the answer $ \frac{7}{8} $
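The reply's matrix $B$ is the standard absorbing-chain quantity $B=(I-Q)^{-1}R$; a short exact-arithmetic sketch reproduces it:

```python
from fractions import Fraction as F

# Transient states: o (fresh start / last roll in 1-4), 5 (a single 5 just rolled).
Q = [[F(2, 3), F(1, 6)],
     [F(2, 3), F(0)]]
# Absorbing states: 55 (two fives in a row), 6.
R = [[F(0),    F(1, 6)],
     [F(1, 6), F(1, 6)]]

# N = (I - Q)^{-1}, inverted by the 2x2 cofactor formula to stay exact.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]

# B = N R: absorption probabilities (rows: o, 5; columns: 55, 6).
B = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

assert B[0][1] == F(7, 8)   # from a fresh start, the last number is 6 w.p. 7/8
assert B[1][1] == F(3, 4)   # after a single 5, the probability drops to 3/4
```

Exact fractions avoid any floating-point doubt about the final $7/8$.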
{"set_name": "stack_exchange", "score": 3, "question_id": 1195090}
\begin{document} \begin{abstract} We study gradient estimates of $q$-harmonic functions $u$ of the fractional Schr{\"o}dinger operator $\Delta^{\alpha/2} + q$, $\alpha \in (0,1]$, in bounded domains $D \subset \R^d$. For nonnegative $u$ we show that if $q$ is H{\"o}lder continuous of order $\eta > 1 - \alpha$ then $\nabla u(x)$ exists for any $x \in D$ and $|\nabla u(x)| \le c u(x)/ (\dist(x,\partial D) \wedge 1)$. The exponent $1 - \alpha$ is critical, i.e. when $q$ is only $1 - \alpha$ H{\"o}lder continuous, $\nabla u(x)$ may not exist. The above gradient estimates are well known for $\alpha \in (1,2]$ under the assumption that $q$ belongs to the Kato class $\calJ^{\alpha - 1}$. The case $\alpha \in (0,1]$ is different. To obtain results for $\alpha \in (0,1]$ we use probabilistic methods. As a corollary, we obtain for $\alpha \in (0,1)$ that a weak solution of $\Delta^{\alpha/2}u + q u = 0$ is in fact a strong solution. \end{abstract} \maketitle \section{Introduction} Let $\alpha \in (0,2)$, $d \in \N$ and $q$ belong to the Kato class $\calJ^{\alpha}$. We say that a Borel function $u$ on $\R^d$ is {\it{$q$-harmonic}} in an open set $D \subset \R^d$ iff \begin{equation} \label{probab} u(x) = E^x\left[\exp \left( \int_0^{\tau_W} q(X_s) \, ds\right) u(X_{\tau_W})\right], \quad x \in W, \end{equation} for every open bounded set $W$ with $\overline{W} \subset D$. Here $X_t$ is the symmetric $\alpha$-stable process in $\R^d$, $\tau_W$ the first exit time of $X_t$ from $W$, and we understand that the expectation in (\ref{probab}) is absolutely convergent. It is possible to express the above probabilistic definition in analytic terms. Namely, it is known \cite[Theorem 5.5]{BB1999} that if $u$ is $q$-harmonic in an open set $D \subset \R^d$ then $u$ is a weak solution of \begin{equation} \label{weak} \Delta^{\alpha/2}u + q u = 0, \quad \text{on} \quad D. \end{equation} Here $\Delta^{\alpha/2} := -(-\Delta)^{\alpha/2}$ is the fractional Laplacian.
On the other hand, if $D \subset \R^d$ is an open bounded set and $(D,q)$ is gaugeable then a weak solution of (\ref{weak}) is a $q$-harmonic function on $D$ after a modification on a set of Lebesgue measure zero (for more details see Preliminaries). It is known \cite{BB1999} that if $u$ is $q$-harmonic in $D$ then it is continuous in $D$. The purpose of this paper is to derive further regularity results for $q$-harmonic functions. The main result is the following. \begin{theorem} \label{mainthm} Let $\alpha \in (0,1]$, $d \in \N$ and $D \subset \R^d$ be an open bounded set. Assume that $q: D \to \R$ is H{\"o}lder continuous with H{\"o}lder exponent $\eta > 1 - \alpha$. Let $u$ be $q$-harmonic in $D$. If $u$ is nonnegative in $\R^d$ then $\nabla u(x)$ exists for any $x \in D$ and we have \begin{equation} \label{nonnegative} |\nabla u(x)| \le c \frac{u(x)}{\delta_D(x) \wedge 1}, \quad x \in D, \end{equation} where $\delta_D(x) = \dist(x,\partial D)$ and $c = c(\alpha,d,\eta,q)$. If $u$ is not nonnegative in $\R^d$ but $\|u\|_{\infty} < \infty$ then $\nabla u(x)$ exists for any $x \in D$ and we have \begin{equation} \label{norm} |\nabla u(x)| \le c \frac{\|u\|_{\infty}}{\delta_D(x) \wedge 1}, \quad x \in D, \end{equation} where $c = c(\alpha,d,\eta,q)$. \end{theorem} The existence of $\nabla u(x)$ and similar gradient estimates are well known in the classical case for $\alpha =2$, see e.g. \cite{CZ1990}, and for $\alpha \in (1,2)$, see \cite{BKN2002}. These results for $\alpha \in (1,2]$ were shown under the assumption that $q \in \calJ^{\alpha - 1}$. The biggest difference between the cases $\alpha \in (0,1]$ and $\alpha \in (1,2]$ is the fact that for $\alpha \in (0,1]$ the function $y \to |\nabla_x G_D(x,y)|$ is not integrable while for $\alpha \in (1,2]$ it is integrable. Here $G_D(x,y)$ is the Green function for $\Delta^{\alpha/2}$ with Dirichlet condition on $D^c$.
The fact that $y \to |\nabla_x G_D(x,y)|$ is integrable was widely used in \cite{BKN2002} for $\alpha \in (1,2)$, see e.g. \cite[Lemma 5.2]{BKN2002}. For $\alpha \in (0,1]$ a more complicated method must be used. Key ingredients of the method for $\alpha \in (0,1]$ may be briefly described as the combination of some estimates of the Green function and some self-improving estimates used in the proof of Theorem \ref{mainthm}. The proof of the estimates of the Green function is mainly probabilistic. It is based on the representation of symmetric $\alpha$-stable processes as subordinated Brownian motions and the reflection principle for the Brownian motion. This probabilistic idea is similar to the one used in the paper by B. B{\"o}ttcher, R. Schilling, J. Wang, where they study couplings of subordinated Brownian motions, see Section 2 in \cite{BSW2011}. More remarks about these probabilistic methods are at the end of Section 3. From an analytic point of view Theorem \ref{mainthm} gives some regularity results for weak solutions of (\ref{weak}). It is worth noticing that regularity results for weak solutions of equations involving the fractional Laplacian have attracted a lot of attention recently, see e.g. \cite{KNV2007}, \cite{S2011}. One may ask whether it is possible to weaken the assumption in Theorem \ref{mainthm} that $q$ is H{\"o}lder continuous with H{\"o}lder exponent $\eta > 1 - \alpha$. It turns out that the exponent $\eta = 1 - \alpha$ is critical in the following sense. \begin{proposition} \label{counterexample} For any $\alpha \in (0,1]$, $d \in \N$ and any open bounded set $D \subset \R^d$ there exists $q: D \to [0,\infty)$ which is $1- \alpha$ H{\"o}lder continuous, a function $u: \R^d \to [0,\infty)$ which is $q$-harmonic in $D$ and a point $z \in D$ such that $\nabla u(z)$ does not exist.
\end{proposition} The proof of this proposition is based on the estimates of the Green function of the killed Brownian motion subordinated by the $\alpha/2$-stable subordinator. These estimates were obtained by R. Song in \cite{S2004}. When a $q$-harmonic function $u$ vanishes continuously near some part of the boundary of $D$ and $D \subset \R^d$ is a bounded Lipschitz domain, the estimates obtained in Theorem \ref{mainthm} are sharp near that part of the boundary. \begin{theorem} \label{sharpthm} Let $\alpha \in (0,1]$, $d \in \N$, $D \subset \R^d$ be a bounded Lipschitz domain and $q: D \to \R$ be H{\"o}lder continuous with H{\"o}lder exponent $\eta > 1 - \alpha$. Let $V \subset \R^d$ be open and let $K$ be a compact subset of $V$. Then there exist constants $c = c(D,V,K,\alpha,q,\eta)$ and $\eps = \eps(D,V,K,\alpha,q,\eta)$ such that for every function $u: \R^d \to [0,\infty)$ which is bounded on $V$, $q$-harmonic in $D \cap V$ and vanishes in $D^c \cap V$ we have \begin{equation*} c^{-1} \frac{u(x)}{\delta_D(x)} \le |\nabla u(x)| \le c \frac{u(x)}{\delta_D(x)}, \quad x \in K \cap D, \quad \delta_D(x) < \eps. \end{equation*} \end{theorem} A similar result was obtained for $\alpha = 2$ in \cite{BP1999} and for $\alpha \in (1,2)$ in \cite{BKN2002}, see Theorem 5.1. As an application of our main result we obtain gradient estimates of eigenfunctions of the eigenvalue problem of the fractional Schr{\"o}dinger operator with Dirichlet boundary conditions. These estimates are formulated and proved in Section 6. As another application of our main result we show for $\alpha \in (0,1)$ that under some assumptions on $q$ a weak solution of $\Delta^{\alpha/2} u + q u = 0$ is in fact a strong solution. Note that in the following corollary we do not have to assume that $(D,q)$ is gaugeable. \begin{corollary} \label{weakstrong} Let $\alpha \in (0,1)$, $d \in \N$ and $D \subset \R^d$ be an open bounded set.
Assume that $q: D \to \R$ is H{\"o}lder continuous with H{\"o}lder exponent $\eta > 1 - \alpha$ and either $u$ is nonnegative on $\R^d$ or $\|u\|_{\infty} < \infty$. If $u$ is a weak solution of (\ref{weak}) then (after a modification on a set of Lebesgue measure zero) $u$ is continuous on $D$ and it is a strong solution of (\ref{weak}). \end{corollary} The paper is organized as follows. Section 2 is preliminary; we collect here basic facts concerning the fractional Laplacian, the fractional Schr{\"o}dinger operator and $q$-harmonic functions. In Section 3, using probabilistic methods, we obtain estimates of the Green function, which will be essential in the rest of the paper. In Section 4 the main result of the paper is proved. Section 5 contains proofs of Proposition \ref{counterexample} and Theorem \ref{sharpthm}. Section 6 concerns applications of the main result. \section{Preliminaries} Most of the terminology and facts presented here are taken from \cite{BB1999} and \cite{BB2000}. The notation $c(a,b,\ldots)$ means that $c$ is a constant depending only on $a,b,\ldots$. Constants are always positive and finite. We adopt the convention that constants may change their value from one use to another. As usual we write $x \wedge y = \min(x,y)$, $x \vee y = \max(x,y)$ for $x,y \in \R$, $\|u\|_{\infty} = \sup_{x \in \R^d} |u(x)|$ for any function $u: \R^d \to \R$, $B(x,r) = \{y \in \R^d: \, |x - y| < r\}$ for $x \in \R^d$, $r > 0$. By $e_i$, $i=1,\ldots,d$ we denote the standard basis in $\R^d$. We denote by $(X_t,P^x)$ the standard rotation invariant (``symmetric'') $\alpha$-stable process in $\R^d$, $\alpha \in (0,2]$, with the characteristic function $E^0 \exp(i \xi X_t) = \exp(- t |\xi|^{\alpha})$, $\xi \in \R^d$, $t \ge 0$. $E^x$ denotes the expectation with respect to the distribution $P^x$ of the process starting from $x \in \R^d$. We have $P^x(X_t \in A) = \int_A p(t,x,y) \, dy$, where $p(t,x,y) = p_t(y - x)$ is the transition density of $X_t$.
For $\alpha < d$ the process $X_t$ is transient and the potential kernel of $X_t$ is given by \begin{equation} \label{Riesz} K_{\alpha}(y - x) = \int_0^{\infty} p(t,x,y) \, dt = \frac{\calA(d,\alpha)}{|y - x|^{d - \alpha}}, \quad x,y \in \R^d, \end{equation} where $\calA(d,\gamma) = \Gamma((d - \gamma)/2)/(2^{\gamma}\pi^{d/2}|\Gamma(\gamma/2)|)$ \cite{BG1968}. When $\alpha \ge d$ the process is recurrent and it is appropriate to consider the so-called compensated kernels. Namely for $\alpha \ge d$ we put $$ K_{\alpha}(y - x) = \int_0^{\infty} (p(t,x,y) - p(t,0,x_0)) \, dt, $$ where $x_0 = 0$ for $\alpha > d = 1$, $x_0 = 1$ for $\alpha = d = 1$ and $x_0 = (0,1)$ for $\alpha = d = 2$. For $\alpha = d = 1$ we have $$ K_{\alpha}(x) = \frac{1}{\pi} \log\left(\frac{1}{|x|}\right). $$ For any open set $D \subset \R^d$ we put $\tau_D = \inf\{t \ge 0: \, X_t \notin D\}$ the first exit time of $X_t$ from $D$ and we denote by $p_D(t,x,y)$ the transition density of the process $X_t$ killed on exiting $D$. The transition density is given by the formula $$ p_D(t,x,y) = p(t,x,y) - E^x(p(t - \tau_D, X(\tau_D),y), \tau_D < t), \quad \quad x,y \in D, \quad t > 0. $$ We put $p_D(t,x,y) =0$ if $x \in D^c$ or $y \in D^c$. It is known that for each fixed $t > 0$ the function $p_D(t,\cdot,\cdot)$ is bounded and continuous on $D \times D$. When $d > \alpha$ and $D \subset \R^d$ is an open set or $d = 1 \le \alpha$ and $D \subset \R^d$ is an open bounded set we put $$ G_D(x,y) = \int_0^{\infty} p_D(t,x,y) \, dt, \quad x,y \in D, $$ $G_D(x,y) = 0$ if $x \in D^c$ or $y \in D^c$. We call $G_D(x,y)$ {\it{the Green function}} for $D$. It is known that $G_D(x,\cdot)$ is continuous on $D \setminus \{x\}$. For any open bounded set $D \subset \R^d$ we define {\it{the Green operator}} $G_D$ for $D$ by $$ G_D f(x) = \int G_D(x,y) f(y) \, dy. $$ We assume here that $f$ is a bounded Borel function $f: D \to \R$. We have $$ G_D f(x) = E^x \int_0^{\tau_D} f(X_s) \, ds. 
$$ Now we briefly present basic definitions and facts concerning the fractional Laplacian and the fractional Schr{\"o}dinger operator. We follow the approach from \cite{BB1999}. We denote by $\calL$ the space of all Borel functions $f$ on $\R^d$ satisfying $$ \int_{\R^d} \frac{|f(x)|}{(1 + |x|)^{d + \alpha}} \, dx < \infty. $$ For $f \in \calL$ and $x \in \R^d$ we define $$ \Delta^{\alpha/2} f(x) = \calA(d,-\alpha) \lim_{\eps \downarrow 0} \int_{|y - x| > \eps} \frac{f(y) - f(x)}{|y - x|^{d + \alpha}} \, dy, $$ whenever the limit exists. We say that a Borel function $q: \R^d \to \R$ belongs to {\it{the Kato class}} $\calJ^{\alpha}$ iff $q$ satisfies $$ \lim_{r \downarrow 0} \sup_{x \in \R^d} \int_{|y - x| \le r} |q(y) K_{\alpha}(y - x)| \, dy = 0. $$ For any $\alpha \in (0,2)$, $q \in \calJ^{\alpha}$ we call $\Delta^{\alpha/2} + q$ {\it{the fractional Schr{\"o}dinger}} operator. Let $\alpha \in (0,2)$, $q \in \calJ^{\alpha}$ and $D \subset \R^d$ be an open set. For $u \in \calL$ such that $uq \in L_{\text{loc}}^1(D)$ we define the distribution $(\Delta^{\alpha/2} + q) u$ in $D$ by the formula $$ ((\Delta^{\alpha/2} + q)u,\varphi) = (u,\Delta^{\alpha/2}\varphi + q\varphi), \quad \varphi \in C_c^{\infty}(D), $$ (cf. Definition 3.14 in \cite{BB1999}). We will say that $u$ is {\it{a weak solution}} of \begin{equation} \label{solution} (\Delta^{\alpha/2} + q) u = 0 \end{equation} on $D$ iff $u \in \calL$, $uq \in L_{\text{loc}}^1(D)$ and (\ref{solution}) holds in the sense of distributions in $D$. We will say that $u$ is {\it{a strong solution}} of (\ref{solution}) on $D$ iff $u \in \calL$, $uq \in L_{\text{loc}}^1(D)$ and (\ref{solution}) holds for any $x \in D$. For $\alpha \in (0,2)$, $q \in \calJ^{\alpha}$ the multiplicative functional $e_q(t)$ is defined by $e_q(t) = \exp\left(\int_0^t q(X_s) \, ds \right)$, $t \ge 0$. 
For any open bounded set $D \subset \R^d$ the function $$ u_D(x) = E^x(e_q(\tau_D)) $$ is called the {\it{gauge}} function for $(D,q)$; when it is bounded in $D$ we say that $(D,q)$ is {\it{gaugeable}}. There are several other equivalent conditions for gaugeability, in particular there is a condition in terms of the first Dirichlet eigenvalue of $\Delta^{\alpha/2} + q$ on $D$, see below. Let $u$ be a Borel function on $\R^d$ and let $q \in \calJ^{\alpha}$. We say that $u$ is {\it{$q$-harmonic}} in an open set $D \subset \R^d$ iff \begin{equation} \label{qharm1} u(x) = E^x\left[e_q(\tau_W) u(X_{\tau_W})\right], \quad x \in W, \end{equation} for every bounded open set $W$ with $\overline{W} \subset D$. $u$ is called {\it{regular $q$-harmonic}} in $D$ iff \begin{equation} \label{qharmr} u(x) = E^x\left[e_q(\tau_D) u(X_{\tau_D}); \tau_D < \infty\right], \quad x \in D. \end{equation} We understand that the expectation in (\ref{qharm1}) and (\ref{qharmr}) is absolutely convergent. By the strong Markov property any regular $q$-harmonic function in $D$ is a $q$-harmonic function in $D$. By \cite[Theorem 4.1]{BB2000} any $q$-harmonic function in $D$ is continuous in $D$. By \cite[(4.7)]{BB2000} any $q$-harmonic function in $D$ belongs to $\calL$ (when $D \ne \emptyset$). It follows that if $u$ is a $q$-harmonic function in $D$ then $uq \in L_{\text{loc}}^1(D)$. Let $\alpha \in (0,2)$, $q \in \calJ^{\alpha}$. If $u$ is a $q$-harmonic function in an open set $D \subset \R^d$ then it is a weak solution of $(\Delta^{\alpha/2} + q) u = 0$ on $D$. Conversely, assume that $D \subset \R^d$ is an open bounded set and $(D,q)$ is gaugeable. If a function $u$ is a weak solution of $(\Delta^{\alpha/2} + q) u = 0$ on $D$ then, after a modification on a set of Lebesgue measure zero, $u$ is $q$-harmonic in $D$ (see \cite[Theorem 5.5]{BB1999}). It is known that if $u$ is $q$-harmonic in an open set $D \subset \R^d$ then, unless $u = 0$ on $D$ and $u = 0$ a.e.
on $D^c$, $(W,q)$ is gaugeable for any open bounded set $W$ such that $\overline{W} \subset D$ (see \cite[Lemma 4.3]{BB2000}). We will often use the following representation of $q$-harmonic functions. If $u$ is $q$-harmonic in an open set $D \subset \R^d$ then for every open bounded $W$ with the exterior cone property such that $\overline{W} \subset D$ we have \begin{equation} \label{representation} u(x) = E^x u(X_{\tau_W}) + G_W(qu)(x), \quad x \in D. \end{equation} This follows from \cite[Proposition 6.1]{BB2000} and continuity of $q$-harmonic functions. By saying that $q: D \to \R$ is H{\"o}lder continuous with H{\"o}lder exponent $\eta > 0$ we understand that there exists a constant $c$ such that for all $x,y \in D$ we have $|q(x) - q(y)| \le c |x - y|^{\eta}$. We finish this section with some basic information about the spectral problem for $\Delta^{\alpha/2} + q$. Assume that $D \subset \R^d$ is an open bounded set, $\alpha \in (0,2)$, $q \in \calJ^{\alpha}$. Let us consider the eigenvalue problem for the fractional Schr{\"o}dinger operator on $D$ with zero exterior condition \begin{eqnarray} \label{spectral1} \Delta^{\alpha/2} \varphi + q \varphi &=& -\lambda \varphi \quad \quad \quad \text{on} \,\,\, D, \\ \label{spectral2} \varphi &=& 0 \quad \quad \quad \, \, \, \, \text{on} \,\,\, D^c. \end{eqnarray} It is well known that for the problem (\ref{spectral1}-\ref{spectral2}) there exists a sequence of eigenvalues $\{\lambda_n\}_{n = 1}^{\infty}$ satisfying $$ \lambda_1 < \lambda_2 \le \lambda_3 \le \ldots, \quad \quad \quad \lim_{n \to \infty} \lambda_n = \infty, $$ and a sequence of corresponding eigenfunctions $\{\varphi_n\}_{n = 1}^{\infty}$, which can be chosen so that they form an orthonormal basis in $L^2(D)$. All $\varphi_n$ are bounded and continuous on $D$ and $\varphi_1$ is strictly positive on $D$. It is also well known that gaugeability of $(D,q)$ is equivalent to $\lambda_1 > 0$ see \cite[Theorem 3.11]{CS1997}, cf. \cite[Theorem 4.19]{CZ1995}. 
We understand that (\ref{spectral2}) holds for all $x \in D^c$ and (\ref{spectral1}) holds for almost all $x \in D$. The eigenvalue problem (\ref{spectral1}-\ref{spectral2}) was studied in e.g. \cite{CS1997}, \cite{K1998} and very recently in \cite{K2011}. For more systematic presentation of the potential theory of fractional Schr{\"o}dinger operators we refer the reader to \cite{BB1999} or to \cite{BBKRSV2009}. \section{Estimates of the Green function} In this section we fix $i \in \{1,\ldots,d\}$ and use the following notation $$ H = \{(y_1,\ldots,y_d) \in \R^d: \, y_i > 0\}, $$ $$ H_0 = \{(y_1,\ldots,y_d) \in \R^d: \, y_i = 0\}. $$ Let $R: \R^d \to \R^d$ be the reflection with respect to $H_0$. For any $x \in \R^d$ we put $$ \hat{x} = R(x). $$ We have $\hat{x} = x - 2 x_i e_i$, where $(e_1,\ldots,e_d)$ is the standard basis in $\R^d$ and $x = (x_1,\ldots,x_d)$. We say that a set $D \subset \R^d$ is {\it{symmetric with respect to $H_0$}} iff $R(D) = D$. For any set $D \subset \R^d$, which is symmetric with respect to $H_0$ we put \begin{equation} \label{plusminus} D_+ = \{(y_1,\ldots,y_d) \in D: \, y_i > 0\}, \quad \quad D_- = \{(y_1,\ldots,y_d) \in D: \, y_i < 0\}. \end{equation} Let $B_t$ be the $d$-dimensional Brownian motion starting from $x \in \R^d$ (with the transition density $(4 \pi t)^{-d/2} e^{-|x - y|^2/(4t)}$) and $\eta_t$ be the $\alpha/2$-stable subordinator starting from zero, $\alpha \in (0,2)$, independent of $B_t$ ($E^{-s\eta_t} = e^{-t s^{\alpha/2}}$). It is well known that the $d$-dimensional symmetric $\alpha$-stable process $X_t$, $\alpha \in (0,2)$, starting from $x \in \R^d$ has the following representation $$ X_t = B_{\eta_t}. $$ Let $$ T = \inf\{s \ge 0: \, B_s \in H_0\}. $$ Assume that the Brownian motion $B_t$ starts from $x \in H$. We define \begin{equation*} \hat{B}_t = \begin{cases} \displaystyle R(B_t) \quad \text{for} \quad t \le T \\[\medskipamount] \displaystyle B_t \quad \quad \, \, \, \text{for} \quad t > T. 
\end{cases} \end{equation*} That is $\hat{B}_t$ is the mirror reflection of $B_t$ with respect to $H_0$ before $T$ and coincides with $B_t$ afterwards. It is well known that $\hat{B}_t$ is the Brownian motion starting from $\hat{x}$. Now set $$ \hat{X}_t = \hat{B}_{\eta_t}. $$ $\hat{X}_t$ is the symmetric $\alpha$-stable process starting from $\hat{x}$. The above construction is taken from Section 2 in \cite{BSW2011}. When discussing probabilities of $B_{t}$, $\hat{B}_{t}$, $\eta_t$, $X_t$, $\hat{X}_t$ we will use $P_B^x$, $P_{B}^{\hat{x}}$, $P_{\eta}$, $P^x$, $P^{\hat{x}}$ respectively. Now we need to consider another process, which is a subordinated killed Brownian motion. We define it as follows $$ \tilde{X}_t = (B^{H})_{\eta_t}, $$ where $B^{H}_t$ is the Brownian motion $B_t$ (starting from $x \in H$) killed on exiting $H$ and $\eta_t$ is the $\alpha/2$-stable subordinator starting from zero, independent of $B_t$. When discussing probabilities of $B_{t}^H$, $\tilde{X}_t$ we will use $P_{B^H}^{x}$, $\tilde{P}^x$ respectively. The general theory of subordinated killed Brownian motions was studied in \cite{SV2003}. For any open set $D \subset \R^d$, which is symmetric with respect to $H_0$ we put $$ \tilde{\tau}_{D_+} = \inf\{t \ge 0: \, \tilde{X}_t \notin D_+\}, $$ where $D_+$ is given by (\ref{plusminus}). By $\tilde{p}_{D_+}(t,x,y)$ we denote the transition density of the process $\tilde{X}_t$ killed on exiting $D_+$. The idea of considering $\tilde{p}_{D_+}(t,x,y)$ comes from \cite[Section 4]{BK2004}. Recall that $p_D(t,x,y)$ is the transition density of the symmetric $\alpha$-stable process killed on exiting $D$. \begin{lemma} \label{killed2} Let $D \subset \R^d$ be an open set which is symmetric with respect to $H_0$. Then we have $$ \tilde{p}_{D_+}(t,x,y) = p_D(t,x,y) - p_D(t,\hat{x},y), \quad \quad x,y \in D_+, \quad t > 0. $$ \end{lemma} \begin{proof} The proof is based on the reflection principle for the Brownian motion. 
Put $$ \hat{\tau}_{D} = \inf\{s \ge 0: \, (\hat{B})_{\eta_s} \notin D\}. $$ Note that \begin{eqnarray*} \tau_{D} &=& \inf\{s \ge 0: \, B_{\eta_s} \notin D\},\\ \tilde{\tau}_{D_+} &=& \inf\{s \ge 0: \, (B^H)_{\eta_s} \notin D_+\}. \end{eqnarray*} Fix $x \in D_+$, $t > 0$ and a Borel set $A \subset D_+$. We have \begin{eqnarray} \nonumber && \tilde{P}^x\left(\tilde{X}_t \in A, \tilde{\tau}_{D_+} > t\right) \\ \nonumber && =E_{\eta} P_{B^H}^x \left( (B^{H})_{\eta_t} \in A, \tilde{\tau}_{D_+} > t \right) \\ \nonumber && =E_{\eta} P_B^x \left( B_{\eta_t} \in A, \eta_t < T, \tau_{D} > t \right) \\ \label{sum1} && =E_{\eta} P_B^x \left( B_{\eta_t} \in A, \tau_{D} > t \right) - E_{\eta} P_B^x \left( B_{\eta_t} \in A, \eta_t > T, \tau_{D} > t \right). \end{eqnarray} Note that $$ \left\{ B_{\eta_t} \in A, \eta_t > T, \tau_{D} > t \right\} = \left\{ \hat{B}_{\eta_t} \in A, \hat{\tau}_{D} > t \right\}. $$ Hence (\ref{sum1}) equals \begin{eqnarray*} && E_{\eta} P_B^x \left( B_{\eta_t} \in A, \tau_{D} > t \right) - E_{\eta} P_B^{\hat{x}} \left( \hat{B}_{\eta_t} \in A, \hat{\tau}_{D} > t \right) \\ && = P^x \left( X_t \in A, \tau_{D} > t \right) - P^{\hat{x}} \left( \hat{X}_t \in A, \hat{\tau}_{D} > t \right) \\ && = \int_A \left( p_D(t,x,y) - p_D(t,\hat{x},y) \right) dy. \end{eqnarray*} \end{proof} Let $D \subset \R^d$ be an open set which is symmetric with respect to $H_0$. If $d = 1 \le \alpha$ we assume additionally that $D$ is bounded. We define the Green function for $D_+$ for the process $\tilde{X}_t$ by $$ \tilde{G}_{D_+}(x,y) = \int_{0}^{\infty} \tilde{p}_{D_+}(t,x,y) \, dt, \quad \quad \quad x,y \in D_+, $$ and $\tilde{G}_{D_+}(x,y) = 0$ if $x \in (D_+)^c$ or $y \in (D_+)^c$. For an open bounded set $D \subset \R^d$ which is symmetric with respect to $H_0$ we define the corresponding Green operator for $D_+$ by $$ \tilde{G}_{D_+} f(x) = \int_{D_+} \tilde{G}_{D_+}(x,y) f(y) \, dy. $$ We assume here that $f$ is a bounded Borel function $f: D_+ \to \R$. 
Clearly we have $$ \tilde{G}_{D_+} f(x) = E^x \int_0^{\tilde{\tau}_{D_+}} f(\tilde{X}_s) \, ds. $$ By Lemma \ref{killed2} we obtain the following corollary. \begin{corollary} \label{corollaryGreenformula} Let $D \subset \R^d$ be an open set which is symmetric with respect to $H_0$. If $d = 1 \le \alpha$ we assume additionally that $D$ is bounded. Then we have \begin{equation} \label{Greenformula} \tilde{G}_{D_+}(x,y) = G_D(x,y) - G_D(\hat{x},y), \quad \quad \quad x,y \in D_+. \end{equation} \end{corollary} \begin{lemma} \label{coupling} Let $B = B(0,r)$, $r > 0$. Assume that $f: B \to \R$ is Borel and bounded. Then we have $$ G_B f(x) - G_B f(\hat{x}) = \int_{B_+} \tilde{G}_{B_+}(x,y) (f(y) - f(\hat{y})) \, dy. $$ \end{lemma} \begin{proof} Note that $G_B(\hat{x},\hat{y}) = G_B(x,y)$ and $G_B(\hat{x},y) = G_B(x,\hat{y})$ for any $x,y \in B_+$. We have \begin{eqnarray*} G_B f(\hat{x}) &=& \int_{B_+} G_B(\hat{x},y) f(y) \, dy + \int_{B_-} G_B(\hat{x},y) f(y) \, dy \\ &=& \int_{B_+} G_B(\hat{x},y) f(y) \, dy + \int_{B_+} G_B(\hat{x},\hat{y}) f(\hat{y}) \, dy \\ &=& \int_{B_+} G_B(\hat{x},y) f(y) \, dy + \int_{B_+} G_B(x,y) f(\hat{y}) \, dy. \end{eqnarray*} Similarly, we have \begin{eqnarray*} G_B f(x) &=& \int_{B_+} G_B(x,y) f(y) \, dy + \int_{B_+} G_B(x,\hat{y}) f(\hat{y}) \, dy \\ &=& \int_{B_+} G_B(x,y) f(y) \, dy + \int_{B_+} G_B(\hat{x},y) f(\hat{y}) \, dy. \end{eqnarray*} Using the above equalities and (\ref{Greenformula}) we obtain the assertion of the lemma. \end{proof} \begin{lemma} \label{Greenmonotonicity} Let $d > \alpha$ and $V \subset W \subset \R^d$ be open sets symmetric with respect to $H_0$. Then we have $$ \tilde{G}_{V_+}(x,y) \le \tilde{G}_{W_+}(x,y), \quad \quad \quad x,y \in V_+. $$ \end{lemma} \begin{proof} Let $A \subset V_+$ be a Borel bounded set. 
For any $x \in V_+$ we have $$ \int_A \tilde{G}_{V_+}(x,y) \, dy = E^x \int_0^{\tilde{\tau}_{V_+}} 1_A(\tilde{X}_s) \, ds \le E^x \int_0^{\tilde{\tau}_{W_+}} 1_A(\tilde{X}_s) \, ds = \int_A \tilde{G}_{W_+}(x,y) \, dy. $$ Now the lemma follows from (\ref{Greenformula}) and continuity of $G_D(x,\cdot)$ on $D \setminus \{x\}$ (for $D = V, W$). \end{proof} \begin{lemma} \label{Greenupper1} Let $d > \alpha \in (0,1]$. Fix $r > 0$ and put $B = B(0,r)$, $B_+ = B_+(0,r)$. Then we have $$ 0 \le G_B(x,y) - G_B(\hat{x},y) \le c \frac{|x - \hat{x}|}{|x - y|^{d - \alpha} |\hat{x} - y|}, \quad \quad x,y \in B_+, $$ where $c = c(d,\alpha)$. \end{lemma} \begin{proof} Using Corollary \ref{corollaryGreenformula}, Lemma \ref{Greenmonotonicity} and (\ref{Riesz}) we obtain \begin{eqnarray*} G_B(x,y) - G_B(\hat{x},y) &=& \tilde{G}_{B_+}(x,y) \\ &\le& \tilde{G}_{\R^d_+}(x,y) \\ &=& G_{\R^d}(x,y) - G_{\R^d}(\hat{x},y) \\ &=& \calA(d,\alpha) \frac{|\hat{x} - y|^{d - \alpha} - |x - y|^{d - \alpha}}{|x - y|^{d - \alpha} |\hat{x} - y|^{d - \alpha}}. \end{eqnarray*} One can show that for any $p \ge q \ge 0$ and $\beta > 0$ $$ p^{\beta} - q^{\beta} \le (2 \vee 2 \beta) p^{\beta - 1} (p - q) $$ (for $\beta \ge 1$ this follows from the mean value theorem; for $\beta \in (0,1)$ it suffices to consider separately the cases $q \ge p/2$, where the mean value theorem applies again, and $q < p/2$, where $p^{\beta} \le 2 p^{\beta - 1} (p - q)$). Using this one obtains for any $x,y \in B_+$ $$ |\hat{x} - y|^{d - \alpha} - |x - y|^{d - \alpha} \le (2 \vee (2d - 2 \alpha)) |x - \hat{x}| |\hat{x} - y|^{d - \alpha - 1}, $$ which implies the assertion of the lemma. \end{proof} Now we prove similar lower bound estimates of $G_B(x,y) - G_B(\hat{x},y)$. These lower bound estimates will be needed in the proof of Proposition \ref{counterexample}. We prove these estimates only for $x \in B_+(0,r/4)$ and $y$ belonging to some truncated cone lying inside $B_+(0,r)$. This will be enough for our purposes. The lower bound estimates are based on the results of R. Song \cite{S2004}. \begin{lemma} \label{lowerboundGreen} Let $\alpha \in (0,1]$, $d > \alpha$. 
Fix $r > 0$ and for any $x \in B_+(0,r/4)$ put $$ K(r,x) = \{y = (y_1,\ldots,y_d) \in \R^d: \, r/2 > y_i > 2 |x|, |y| < \sqrt{2} y_i\}. $$ For any $x \in B_+(0,r/4)$ and $y \in K(r,x)$ we have $$ G_{B(0,r)}(x,y) - G_{B(0,r)}(\hat{x},y) \ge c \frac{|x - \hat{x}|}{|x - y|^{d - \alpha} |\hat{x} - y|}, $$ where $c = c(d,\alpha)$. \end{lemma} \begin{proof} Let $B_t^D$ be the $d$-dimensional Brownian motion killed on exiting a connected bounded open set $D \subset \R^d$, let $\eta_t$ be an $\alpha/2$-stable subordinator starting from zero, independent of $B_t^D$, and put $Z_t^D = (B^D)_{\eta_t}$. $Z_t^D$ is a Markov process with the generator $-(-\Delta |_D)^{\alpha/2}$, where $\Delta |_D$ is the Dirichlet Laplacian in $D$. The process $Z_t^D$ has been intensively studied, see e.g. \cite{S2004}, \cite{SV2003}. Let $G_{D}^Z(x,y)$ be the Green function of the set $D$ for the process $Z_t^D$. Note that $\tilde{X}_t = (B^H)_{\eta_t} = Z_t^H$. It follows that $$ \tilde{G}_{B_+(0,r)}(x,y) \ge G_{B_+(0,r)}^Z(x,y), \quad \quad x,y \in B_+(0,r). $$ Let us first consider the case $r = 1$. Let us fix an auxiliary set $U \subset \R^d$ such that $U$ is an open, bounded, connected set with $C^{1,1}$ boundary satisfying $B_+(0,9/10) \subset U \subset B_+(0,1)$. We need to introduce the auxiliary set $U$ because $B_+(0,1)$ is not a $C^{1,1}$ domain for $d \ge 2$ and the results from \cite{S2004} which we use are stated for $C^{1,1}$ domains. By \cite[Theorem 4.1]{S2004} we have for any $x, y \in U$ \begin{eqnarray} \label{lowerGreenZ1} G_{B(0,1)}(x,y) - G_{B(0,1)}(\hat{x},y) &=& \tilde{G}_{B_+(0,1)}(x,y) \\ \label{lowerGreenZ2} &\ge& G_{B_+(0,1)}^Z(x,y) \\ \label{lowerGreenZ2a} &\ge& G_{U}^Z(x,y) \\ \label{lowerGreenZ3} &\ge& \left(\frac{\delta_{U}(x) \delta_{U}(y)}{|x - y|^2} \wedge 1\right) \frac{c}{|x - y|^{d - \alpha}}, \end{eqnarray} where $c = c(d,\alpha)$. Assume that $x \in B_+(0,1/4)$ and $y \in K(1,x)$. 
We have $$ 1 \ge \frac{|x - \hat{x}|}{2 |\hat{x} - y|}, $$ $$ \delta_{U}(x) = \delta_{B_+(0,1)}(x) = x_i = \frac{1}{2}|x - \hat{x}|, $$ $$ |\hat{x} - y| \ge |x - y|, $$ $$ \delta_{U}(y) \ge \frac{y_i}{4} \ge \frac{1}{16} (|x| + |y|) \ge \frac{1}{16} |x - y|. $$ It follows that $$ \frac{\delta_{U}(x) \delta_{U}(y)}{|x - y|^2} \wedge 1 \ge \frac{|x - \hat{x}|}{32 |\hat{x} - y|}. $$ Using this and (\ref{lowerGreenZ1} - \ref{lowerGreenZ3}) we obtain for $x \in B_+(0,1/4)$, $y \in K(1,x)$ \begin{equation} \label{lowerr1} G_{B(0,1)}(x,y) - G_{B(0,1)}(\hat{x},y) \ge c \frac{|x - \hat{x}|}{|x - y|^{d - \alpha} |\hat{x} - y|}, \end{equation} where $c = c(d,\alpha)$. Now let $r > 0$ be arbitrary. Assume that $x \in B_+(0,r/4)$ and $y \in K(r,x)$. Note that $x/r \in B_+(0,1/4)$ and $y/r \in K(1,x/r)$. By scaling and (\ref{lowerr1}) we get \begin{eqnarray*} G_{B(0,r)}(x,y) - G_{B(0,r)}(\hat{x},y) &=& r^{\alpha - d} \left(G_{B(0,1)}\left(\frac{x}{r},\frac{y}{r}\right) - G_{B(0,1)}\left(\frac{\hat{x}}{r},\frac{y}{r}\right)\right) \\ &\ge& c r^{\alpha - d} \frac{\left|\frac{x}{r} - \frac{\hat{x}}{r}\right|}{\left|\frac{x}{r} - \frac{y}{r}\right|^{d - \alpha} \left|\frac{\hat{x}}{r} - \frac{y}{r}\right|} \\ &=& c \frac{|x - \hat{x}|}{|x - y|^{d - \alpha} |\hat{x} - y|}. \end{eqnarray*} \end{proof} To obtain estimates of $\tilde{G}_{B_+}(x,y)$ for $d = \alpha = 1$ we do not use probabilistic methods but we use the explicit formula for the Green function for an interval. \begin{lemma} \label{Greenupper2} Let $d = \alpha = 1$. Fix $r > 0$. For $x \in \R$ let $\hat{x} = -x$. Put $B = (-r,r)$ and $B_+ = (0,r)$. Then for any $x,y \in B_+$ we have \begin{equation} \label{11} 0 \le G_B(x,y) - G_B(\hat{x},y) \le \frac{1}{\pi} \min\left(\frac{4 |x|}{|x - y|}, \log\left(\frac{2 |x + y|}{|x - y|}\right)\right). 
\end{equation} For any $x,y \in (0,r/2)$ we have \begin{equation*} G_B(x,y) - G_B(\hat{x},y) \ge \frac{1}{\pi} \min\left(\frac{2 |x|}{15 |x - y|}, \log\left(\frac{|x + y|}{4 |x - y|}\right)\right). \end{equation*} For any $x \in (0,r/4)$ and $y \in (2 x,r/2)$ we have \begin{equation*} G_B(x,y) - G_B(\hat{x},y) \ge \frac{2}{15 \pi} \frac{|x|}{|x - y|}. \end{equation*} \end{lemma} \begin{proof} By scaling we have $$ G_{(-r,r)}(x,y) = G_{(-1,1)}\left(\frac{x}{r},\frac{y}{r}\right) $$ so we may assume that $r = 1$. We have \cite{BGR1961} $$ G_B(x,y) = \frac{1}{\pi} \log\left(\sqrt{w(x,y)} + \sqrt{1 + w(x,y)}\right), $$ where $$ w(x,y) = \frac{(1 - |x|^2) (1 - |y|^2)}{|x - y|^2}. $$ Let $x,y \in B_+ = (0,1)$. Put $t_2 = w(x,y)$, $t_1 = w(\hat{x},y)$. Note that $t_2 > t_1$. It follows that \begin{eqnarray} \nonumber 0 \le G_B(x,y) - G_B(\hat{x},y) &=& \frac{1}{\pi} \log\left(\frac{\sqrt{t_2} + \sqrt{1 + t_2}}{\sqrt{t_1} + \sqrt{1 + t_1}}\right) \\ \label{t1t2a} &=& \frac{1}{\pi} \log\left(1 + \frac{\sqrt{t_2} + \sqrt{1 + t_2} - \sqrt{t_1} - \sqrt{1 + t_1}}{\sqrt{t_1} + \sqrt{1 + t_1}}\right). \end{eqnarray} It is elementary to show that $\sqrt{1 + t_2} - \sqrt{1 + t_1} \le \sqrt{t_2} - \sqrt{t_1}$. Hence (\ref{t1t2a}) is bounded from above by \begin{eqnarray} \label{t1t2b} \frac{1}{\pi} \frac{\sqrt{t_2} + \sqrt{1 + t_2} - \sqrt{t_1} - \sqrt{1 + t_1}}{\sqrt{t_1} + \sqrt{1 + t_1}} &\le& \frac{2}{\pi \sqrt{t_1}} (\sqrt{t_2} - \sqrt{t_1})\\ \label{t1t2c} &=& \frac{2 |x + y|}{\pi} \left(\frac{1}{|x - y|} - \frac{1}{|x + y|}\right)\\ \label{t1t2d} &\le& \frac{4|x|}{\pi |x - y|}. \end{eqnarray} Now assume that $x \in (0,1/4)$, $y \in (2x,1/2)$. We will show that $G_B(x,y) - G_B(\hat{x},y) \ge 2 |x|/(15 \pi |x - y|)$. Note that $4 |x|/|x - y| \le 4$. It is elementary to show that for $0 \le z \le 4$ we have $\log(1+z) \ge z/5$. 
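For completeness, let us sketch a justification of the last elementary inequality: the function $z \mapsto \log(1+z)$ is concave, so on the interval $[0,4]$ its graph lies above the chord joining $(0,0)$ and $(4, \log 5)$, whence $$ \log(1+z) \ge \frac{z \log 5}{4} \ge \frac{z}{5}, \quad \quad 0 \le z \le 4, $$ since $\log 5 > 4/5$.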
Using this and (\ref{t1t2b} - \ref{t1t2d}) we obtain that (\ref{t1t2a}) is bounded from below by \begin{equation} \label{t1t2e} \frac{1}{5 \pi} \frac{\sqrt{t_2} + \sqrt{1 + t_2} - \sqrt{t_1} - \sqrt{1 + t_1}}{\sqrt{t_1} + \sqrt{1 + t_1}}. \end{equation} Note that $(1 - |\hat{x}|^2) (1 - |y|^2) \ge 1/2 \ge |\hat{x} - y|^2/2$, so $2t_1 \ge 1$, which implies $2\sqrt{t_1} \ge \sqrt{1 + t_1}$. Hence (\ref{t1t2e}) is bounded from below by $$ \frac{1}{5 \pi} \frac{\sqrt{t_2} - \sqrt{t_1}}{3 \sqrt{t_1}} = \frac{|x + y|}{15 \pi} \left(\frac{1}{|x - y|} - \frac{1}{|x + y|}\right) = \frac{2 |x|}{15 \pi |x - y|}. $$ Now again let $x, y \in B_+ = (0,1)$. We have \begin{eqnarray} \label{t1t2f} &&G_B(x,y) - G_B(\hat{x},y) = \frac{1}{\pi} \log\left(\frac{\sqrt{t_2} + \sqrt{1 + t_2}}{\sqrt{t_1} + \sqrt{1 + t_1}}\right) \\ \nonumber &\le& \frac{1}{\pi} \log\left(\frac{2 \sqrt{1 + t_2}}{\sqrt{1 + t_1}}\right) \\ \label{t1t2h} &=& \frac{1}{\pi} \log\left(2 \sqrt{\frac{|x+y|^2}{|x-y|^2} \left(\frac{|x-y|^2 + (1 - |x|^2)(1 - |y|^2)}{|x+y|^2 + (1 - |x|^2)(1 - |y|^2)}\right)}\right). \end{eqnarray} One can easily show that $|x+y|^2 + (1 - |x|^2)(1 - |y|^2) \ge 1$ and $|x-y|^2 + (1 - |x|^2)(1 - |y|^2) \le 1$. Hence (\ref{t1t2h}) is bounded from above by $$ \frac{1}{\pi} \log\left(\frac{2 |x + y|}{|x - y|}\right). $$ Now let $x, y \in (0,1/2)$. By (\ref{t1t2f}) we obtain \begin{eqnarray} \nonumber &&G_B(x,y) - G_B(\hat{x},y) \ge \frac{1}{\pi} \log\left(\frac{\sqrt{1 + t_2}}{2 \sqrt{1 + t_1}}\right)\\ \label{t1t2g} &=& \frac{1}{\pi} \log\left(\frac{1}{2} \sqrt{\frac{|x+y|^2}{|x-y|^2} \left(\frac{|x-y|^2 + (1 - |x|^2)(1 - |y|^2)}{|x+y|^2 + (1 - |x|^2)(1 - |y|^2)}\right)}\right). \end{eqnarray} One can easily show that $|x+y|^2 + (1 - |x|^2)(1 - |y|^2) \le 2$ and $|x-y|^2 + (1 - |x|^2)(1 - |y|^2) \ge 1/2$. Hence (\ref{t1t2g}) is bounded from below by $$ \frac{1}{\pi} \log\left(\frac{|x + y|}{4|x - y|}\right). 
$$ \end{proof} The estimates of the Green function obtained in this section are crucial in proving the main result of this paper. To get these estimates in the transient case we used probabilistic methods. There is an alternative way of obtaining these estimates. Namely, one can use explicit formulas for the Green function of a ball for symmetric $\alpha$-stable processes (in fact such a formula was used in the case $d = \alpha = 1$). We decided to use probabilistic methods instead of explicit formulas for two reasons. First, the probabilistic methods are much simpler. Secondly, it seems that they can be generalized to some other processes, which are subordinated Brownian motions. Especially interesting in this context is the relativistic process, whose generator is $-(\sqrt{-\Delta + m^2} -m)$, see e.g. \cite{R2002}, \cite{KS2006}, \cite{CKS2012}. This operator is called the relativistic Hamiltonian and is used in some models of relativistic quantum mechanics, see e.g. \cite{LS2010}. For the relativistic process the explicit formula for the Green function of a ball is not known, but it seems that the probabilistic methods from this paper could be used to study Schr{\"o}dinger equations based on the relativistic Hamiltonian $-(\sqrt{-\Delta + m^2} -m)$. \section{Proof of the main result} We will need the following technical lemma. \begin{lemma} \label{betau} Fix $r \in (0,1]$, $i \in \{1,\ldots,d\}$ and $z = (z_1,\ldots,z_d) \in \R^d$. For any $x = (x_1,\ldots,x_d) \in \R^d$ denote $\hat{x} = x - 2 e_i(x_i - z_i)$. Put $B = B(z,r)$. Assume that the function $f: B \to \R$ is Borel and bounded on $B$ and satisfies \begin{equation} \label{halfbeta} |f(x) - f(\hat{x})| \le A |x - z|^{\beta}, \quad \quad \quad x \in B(z,r/2), \end{equation} for some constants $A \ge 1$ and $\beta \ge 0$. 
If $\alpha \in (0,1)$ and $\beta \in [0,1 - \alpha)$ then there exists $c = c(d,\alpha,\beta)$ such that for any $x \in B$ we have \begin{equation} \label{<1-alpha} |G_Bf(x) - G_Bf(\hat{x})| \le c A |x - z|^{\beta + \alpha} + c \frac{\sup_{y \in B} |f(y)|}{r} |x - z|^{\beta + \alpha}. \end{equation} If $\alpha \in (0,1]$ and $\beta > 1 - \alpha$ then there exists $c = c(d,\alpha,\beta)$ such that for any $x \in B$ we have \begin{equation} \label{>1-alpha} |G_Bf(x) - G_Bf(\hat{x})| \le c A |x - z| + c \frac{\sup_{y \in B} |f(y)|}{r} |x - z|. \end{equation} If $\alpha = 1$ and $\beta = 0$ then there exists $c = c(d)$ such that for any $x \in B$ we have \begin{equation} \label{beta0} |G_Bf(x) - G_Bf(\hat{x})| \le c A |x - z|^{1/2} + c \frac{\sup_{y \in B} |f(y)|}{r} |x - z|^{1/2}. \end{equation} \end{lemma} \begin{proof} Put $B_+ = \{y = (y_1,\ldots,y_d) \in B: \, y_i > 0\}$. We may assume that $z = 0$ and $x = (x_1,\ldots,x_d) \in B_+$. By Lemma \ref{coupling} we have $$ |G_B f(x) - G_B f(\hat{x})| \le \int_{B_+} \tilde{G}_{B_+}(x,y) |f(y) - f(\hat{y})| \, dy, $$ where $\tilde{G}_{B_+}(x,y) = G_B(x,y) - G_B(\hat{x},y)$. We will consider two cases: case 1: $d > \alpha \in (0,1]$, case 2: $d = \alpha = 1$. We will often use the fact that $r \in (0,1]$ and $|x| < r \le 1$. {\bf{Case 1:}} $d > \alpha \in (0,1]$. Note that \begin{equation} \label{xtildex} |x - \hat{x}| \le 2 |x|. \end{equation} For any $y \in B_+$ we have \begin{equation} \label{xtildey} |\hat{x} - y| \ge |x - y|, \quad \quad \quad |\hat{x} - y| \ge |x - \hat{x}|/2. \end{equation} Put $U_1 = B(x,|x|) \cap \{y \in B_+: \, |y| \le r/2\}$, $U_2 = B^c(x,|x|) \cap \{y \in B_+: \, |y| \le r/2\}$, $U_3 = \{y \in B_+: \, |y| \ge r/2\}$. 
By Lemma \ref{Greenupper1}, (\ref{halfbeta}), (\ref{xtildex}), (\ref{xtildey}) we obtain \begin{eqnarray*} && \int_{B_+} \tilde{G}_{B_+}(x,y) |f(y) - f(\hat{y})| \, dy \le c |x - \hat{x}| \int_{B_+} \frac{|f(y) - f(\hat{y})|}{|x - y|^{d - \alpha} |\hat{x} - y|} \, dy \\ &\le& c A \int_{U_1} \frac{|y|^{\beta} \, dy}{|x - y|^{d - \alpha}} + c A |x| \int_{U_2} \frac{|y|^{\beta} \, dy}{|x - y|^{d - \alpha + 1}} + c \sup_{y \in B} |f(y)| \int_{U_3} \frac{|x - \hat{x}| \, dy}{|x - y|^{d - \alpha} |\hat{x} - y|} \\ &=& \text{I} + \text{II} + \text{III}, \end{eqnarray*} where $c = c(d,\alpha)$. For $y \in U_1$ we have $|y| \le |y - x| + |x| \le 2 |x|$. Hence \begin{equation} \label{estI} \text{I} \le c A |x|^{\beta} \int_{U_1} \frac{dy}{|x - y|^{d - \alpha}} \le c A |x|^{\alpha + \beta}, \end{equation} where $c = c(d,\alpha,\beta)$. When $|x| > r/4$ we get by (\ref{xtildey}) $$ \text{III} \le c \sup_{y \in B} |f(y)| \int_{B(x,2r)} \frac{dy}{|x - y|^{d - \alpha}} = c \sup_{y \in B} |f(y)| r^{\alpha} \le c \sup_{y \in B} |f(y)| |x| r^{\alpha - 1}, $$ where $c = c(d,\alpha)$. If $|x| < r/4$ then $U_3 \subset B^c(x,r/4) \cap B(x,2r)$ and by (\ref{xtildex}), (\ref{xtildey}) we get $$ \text{III} \le c \sup_{y \in B} |f(y)| |x| \int_{B^c(x,r/4) \cap B(x,2r)} \frac{dy}{|x - y|^{d - \alpha + 1}} \le c \sup_{y \in B} |f(y)| |x| r^{\alpha - 1}, $$ where $c = c(d,\alpha)$. Recall that $r \le 1$. It follows that for $x \in B_+$ we have \begin{equation} \label{estIII} \text{III} \le \frac{c}{r} \sup_{y \in B} |f(y)| |x|, \end{equation} where $c = c(d,\alpha)$. For $y \in U_2$ we have $|y| \le |y - x| + |x| \le 2 |y - x|$. Hence $$ \text{II} \le c A |x| \int_{U_2} \frac{dy}{|x - y|^{d - \alpha - \beta +1}}, $$ where $c = c(d,\alpha,\beta)$. If $\alpha \in (0,1)$, $\beta \in [0,1 - \alpha)$ then $\text{II} \le c A |x|^{\alpha + \beta}$, where $c = c(d,\alpha,\beta)$. This, (\ref{estI}), (\ref{estIII}) imply (\ref{<1-alpha}). 
If $\alpha \in (0,1]$, $\beta > 1 - \alpha$ then $\text{II} \le c A |x| r^{\alpha + \beta - 1} \le c A |x|$, where $c = c(d,\alpha,\beta)$. This, (\ref{estI}), (\ref{estIII}) imply (\ref{>1-alpha}). If $\alpha = 1$, $\beta = 0$ we have $$ \text{II} \le c A |x| \int_{U_2} \frac{dy}{|x - y|^{d}} \le c A |x| \int_{|x|}^{2r} \rho^{-1} \, d\rho \le c A |x| (|\log(2r)|+|\log|x||) \le c A |x|^{1/2}, $$ where $c = c(d)$. This, (\ref{estI}), (\ref{estIII}) imply (\ref{beta0}). {\bf{Case 2:}} $d = \alpha = 1$. {\bf{Subcase 2a:}} $x \in (0,r/4)$. We have \begin{eqnarray*} && \int_{B_+} \tilde{G}_{B_+}(x,y) |f(y) -f(\hat{y})| \, dy \\ &\le& A (2x)^{\beta} \int_0^{2x} \tilde{G}_{B_+}(x,y) \, dy + A \int_{2x}^{r/2} y^{\beta} \tilde{G}_{B_+}(x,y) \, dy + 2(\sup_{y \in B} |f(y)|) \int_{r/2}^{r} \tilde{G}_{B_+}(x,y) \, dy \\ &=& \text{I} + \text{II} + \text{III}. \end{eqnarray*} By (\ref{11}) we have $$ \text{I} \le c A x^{\beta} \int_0^{2x} \log\left(\frac{6 x}{|x - y|}\right) \, dy \le c A x^{\beta} (x + x |\log x|), $$ where $c = c(\beta)$. By (\ref{11}) we also have $$ \text{II} \le c A x \int_{2x}^{r/2} \frac{y^{\beta} \, dy}{|x - y|} \le c A x \int_{2x}^{r/2} y^{\beta - 1} \, dy, $$ where $c = c(\beta)$. Since $x \in (0,r/4)$ by (\ref{11}) we also get $$ \text{III} \le c (\sup_{y \in B} |f(y)|) x \int_{r/2}^r \frac{dy}{|x - y|} \le c (\sup_{y \in B} |f(y)|) x, $$ where $c$ is an absolute constant. Now, if $\beta > 0$ then $$ \text{I} + \text{II} + \text{III} \le c A x + c \frac{x}{r} \sup_{y \in B} |f(y)|, $$ for some $c = c(\beta)$. If $\beta = 0$ then $$ \text{I} + \text{II} + \text{III} \le c A x^{1/2} + c \frac{x^{1/2}}{r} \sup_{y \in B} |f(y)|, $$ where $c$ is an absolute constant. {\bf{Subcase 2b:}} $x \in (r/4,r)$. 
We have \begin{eqnarray*} && \int_{B_+} \tilde{G}_{B_+}(x,y) |f(y) -f(\hat{y})| \, dy \\ &\le& c A x^{\beta} \int_0^{r/2} \tilde{G}_{B_+}(x,y) \, dy + c(\sup_{y \in B} |f(y)|) \int_{r/2}^{r} \tilde{G}_{B_+}(x,y) \, dy \\ &=& \text{I} + \text{II}, \end{eqnarray*} where $c = c(\beta)$. By (\ref{11}) for any $x,y \in B_+$ we have \begin{equation} \label{11b} \tilde{G}_{B_+}(x,y) \le c - \log |x - y|, \end{equation} where $c$ is an absolute constant. By (\ref{11b}) we obtain \begin{eqnarray*} \text{I} &\le& c A x^{\beta} \left(cr - \int_0^{r/2} \log|x - y| \, dy\right) \\ &\le& c A x^{\beta} \left(cr - 2 \int_0^{r/2} \log y \, dy\right) \\ &=& c A x^{\beta} (cr + r - r \log(r/2)) \\ &\le& c A r^{\beta} (r + r |\log r|), \end{eqnarray*} where $c = c(\beta)$. Similarly, by (\ref{11b}) we obtain $$ \text{II} \le c (\sup_{y \in B} |f(y)|) \left(cr - \int_{r/2}^{r} \log|x - y| \, dy\right) \le c (\sup_{y \in B} |f(y)|) (r + r |\log r|), $$ where $c = c(\beta)$. If $\beta > 0$ then $$ \text{I} + \text{II} \le c A r + c \sup_{y \in B} |f(y)| \le c A x + c \frac{x}{r} \sup_{y \in B} |f(y)|, $$ for some $c = c(\beta)$. If $\beta = 0$ then $$ \text{I} + \text{II} \le c A (r + r |\log r|)+ c \sup_{y \in B} |f(y)| \le c A x^{1/2} + c \frac{x^{1/2}}{r} \sup_{y \in B} |f(y)|, $$ where $c$ is an absolute constant. \end{proof} \begin{lemma} \label{gradientGreen1} Fix $r \in (0,1]$, $i \in \{1,\ldots,d\}$ and $z = (z_1,\ldots,z_d) \in \R^d$. For any $x = (x_1,\ldots,x_d) \in \R^d$ denote $\hat{x} = x - 2 e_i (x_i - z_i)$. Put $B = B(z,r)$. Assume that a Borel function $f$ satisfies $$ |f(x) - f(y)| \le A |x - y|^{\eta}, \quad x,y \in B, $$ for some $A > 0$ and $\eta \in (1 - \alpha,1]$. If $d > \alpha \in (0,1]$ then for any $\eps \in (0,r]$ we have $$ \int_{B(z,\eps)} \left|\frac{\partial}{\partial z_i}G_B(z,y)\right| |f(y) - f(\hat{y})| \, dy < c A \eps^{\eta + \alpha -1}, $$ for some $c = c(d,\alpha,\eta)$. 
If $d = \alpha = 1$ then for any $\eps \in (0,r]$ we have $$ \int_{B(z,\eps)} \left|\frac{\partial}{\partial z_i}G_B(z,y)\right| |f(y) - f(\hat{y})| \, dy < c A \eps^{\eta} ( 1 + |\log \eps |), $$ for some $c = c(\eta)$. \end{lemma} \begin{proof} By \cite[Corollary 3.3]{BKN2002} we have $$ \left|\frac{\partial}{\partial z_i}G_B(z,y)\right| \le d \frac{G_B(z,y)}{|z - y| \wedge r} = d \frac{G_B(z,y)}{|z - y|}, \quad \quad y \in B, \quad y \ne z. $$ By the assumption on $f$ we have for $y \in B$ $$ |f(y) - f(\hat{y})| \le A |y - \hat{y}|^{\eta} = 2^{\eta} A |y_i - z_i|^{\eta}. $$ If $d > \alpha \in (0,1]$ for any $y \in B$ we have $$ \left|\frac{\partial}{\partial z_i}G_B(z,y)\right| |f(y) - f(\hat{y})| \le c A \frac{G_B(z,y)}{|z - y|^{1 - \eta}} \le c A |z - y|^{\alpha + \eta - 1 - d}, $$ for some $c = c(d,\alpha,\eta)$. If $d = \alpha = 1$ we obtain from \cite[Corollary 3.2]{BB2000} that for any $y \in B$ we have $$ \left|\frac{\partial}{\partial z_i}G_B(z,y)\right| |f(y) - f(\hat{y})| \le c A \frac{G_B(z,y)}{|z - y|^{1 - \eta}} \le c A |z - y|^{\eta - 1} \log(1 + |z - y|^{-1}), $$ for some $c = c(\eta)$. The above estimates imply the assertion of the lemma. \end{proof} \begin{lemma} \label{gradientGreen} Let $\alpha \in (0,1]$. Fix $r \in (0,1]$, $z = (z_1,\ldots,z_d)\in \R^d$ and $i \in \{1,\ldots,d\}$. Put $B = B(z,r)$. Assume that $f$ is bounded and H{\"o}lder continuous in $B$ with H{\"o}lder exponent $\eta \in (1-\alpha,1]$, that is $$ |f(x) - f(y)| \le A |x - y|^{\eta}, \quad \quad x,y \in B. $$ Then $\nabla G_Bf(z)$ exists and we have \begin{equation} \label{GBfformula} \frac{\partial}{\partial z_i} G_B f(z) = \int_{B_+} \frac{\partial}{\partial z_i} G_B(z,y) (f(y) - f(\hat{y})) \, dy, \end{equation} where $B_+ = \{(y_1,\ldots,y_d) \in B: \, y_i - z_i > 0\}$, and $\hat{y} = y - 2(y_i - z_i) e_i$ for $y = (y_1,\ldots,y_d)$. 
We also have \begin{equation} \label{nablaGBf} |\nabla G_Bf(z)| \le c A r^{\eta + \alpha -1} (1 + |\log{r}|), \end{equation} where $c = c(d,\alpha,\eta)$. \end{lemma} \begin{proof} Let $g(y) = f(y) - f(z)$. By our assumption on $f$ we obtain \begin{equation} \label{gestimate} |g(y)| \le A |y - z|^\eta, \quad y \in B(z,r). \end{equation} Let $h \in (-r/8,r/8)$. We have \begin{eqnarray*} G_B f(z + e_i h) - G_B f(z) &=& (G_B 1_B(z + e_i h) - G_B 1_B(z)) f(z) \\ && + G_B g(z + e_i h) - G_B g(z). \end{eqnarray*} By a well-known explicit formula for $G_B 1_B(x)$ (see \cite{G1961}) we get \begin{equation*} \lim_{h \to 0} \frac{1}{h} (G_B 1_B(z + e_i h) - G_B 1_B(z)) f(z) = f(z) \frac{\partial}{\partial z_i} G_B 1_B(z) = 0. \end{equation*} We also have \begin{eqnarray*} \frac{1}{h} ( G_B g(z + e_i h) - G_B g(z)) &=& \frac{1}{h} \int_{B(z,2 |h|)}( G_B(z + e_i h,y) - G_B(z,y)) g(y) \, dy \\ &+& \frac{1}{h} \int_{B(z,r) \setminus B(z,2|h|)}( G_B(z + e_i h,y) - G_B(z,y)) g(y) \, dy \\ &=& \text{I} + \text{II}. \end{eqnarray*} We will consider two cases: case 1: $d > \alpha$, case 2: $d = \alpha = 1$. {\bf{Case 1:}} $d > \alpha$. By (\ref{gestimate}) and the standard estimate $G_B(x,y) \le K_{\alpha}(x-y)$ we obtain \begin{eqnarray*} |\text{I}| &\le& A 2^{\eta} |h|^{\eta - 1} \int_{B(z,2 |h|)} G_B(z + e_i h,y) + G_B(z,y) \, dy \\ &\le& c A |h|^{\eta - 1} \int_{B(z,2 |h|)} |z + e_i h - y|^{\alpha - d} + |z - y|^{\alpha - d} \, dy \\ &\le& c A |h|^{\eta + \alpha - 1}, \end{eqnarray*} where $c = c(d,\alpha,\eta)$. By our assumption on $\eta$ it follows that $\lim_{h \to 0} \text{I} = 0$. We also have \begin{equation} \label{IIest} \text{II} = \int_{B(z,r) \setminus B(z,2|h|)} \frac{\partial G_B}{\partial z_i}(z + e_i h \theta,y) g(y) \, dy, \end{equation} where $\theta = \theta(y,z,h,i,\alpha,d,r) \in (0,1)$. Note that for $y \in B(z,r) \setminus B(z,2|h|)$ we have $|y - (z + e_i h \theta)| \ge |y - z|/2$. 
Using this, (\ref{gestimate}) and \cite[Corollary 3.3]{BKN2002} we obtain for $y \in B(z,r) \setminus B(z,2|h|)$ and $\theta$ as in (\ref{IIest}) $$ \left|\frac{\partial G_B}{\partial z_i}(z + e_i h \theta,y) g(y)\right| \le c A |y - z|^{\alpha + \eta - d - 1}, $$ where $c = c(\alpha,d,\eta)$. Note that by our assumption on $\eta$ the function $y \to |y - z|^{\alpha + \eta - d - 1}$ is integrable on $B = B(z,r)$. By the bounded convergence theorem we get $$ \lim_{h \to 0} \text{II} = \int_B \frac{\partial}{\partial z_i}G_B(z,y) g(y) \, dy. $$ It follows that \begin{equation} \label{derg} \frac{\partial}{\partial z_i}G_B f(z) = \int_B \frac{\partial}{\partial z_i}G_B(z,y) g(y) \, dy. \end{equation} Note that $\frac{\partial}{\partial z_i}G_B(z,\hat{y}) = - \frac{\partial}{\partial z_i}G_B(z,y)$, $y \in B$. This and (\ref{derg}) imply (\ref{GBfformula}). {\bf{Case 2:}} $d = \alpha = 1$. Recall that $h \in (-r/8,r/8)$ and $r \in (0,1]$. By \cite[Corollary 3.2]{BB2000} we have \begin{equation} \label{logGreen} G_B(x,y) \le c (1 + |\log|x - y||), \quad x,y \in B, x \ne y, \end{equation} where $c$ is an absolute constant. By (\ref{gestimate}) we obtain \begin{eqnarray*} |\text{I}| &\le& A 2^{\eta} |h|^{\eta - 1} \int_{B(z,2 |h|)} G_B(z + h,y) + G_B(z,y) \, dy \\ &\le& c A |h|^{\eta - 1} \int_{B(z,2 |h|)} 1 + |\log|z + h - y|| + |\log|z - y|| \, dy \\ &\le& c A |h|^{\eta} (1 + |\log|h||), \end{eqnarray*} where $c = c(\eta)$. By our assumption on $\eta$ it follows that $\lim_{h \to 0} \text{I} = 0$. We also have \begin{equation} \label{IIest2} \text{II} = \int_{B(z,r) \setminus B(z,2|h|)} \frac{d G_B}{dz}(z + h \theta,y) g(y) \, dy, \end{equation} where $\theta = \theta(y,z,h,r) \in (0,1)$. Note that for $y \in B(z,r) \setminus B(z,2|h|)$ we have $|y - (z + h \theta)| \ge |y - z|/2$. 
Using this, (\ref{gestimate}), (\ref{logGreen}) and \cite[Corollary 3.3]{BKN2002} we obtain for $y \in B(z,r) \setminus B(z,2|h|)$ and $\theta$ as in (\ref{IIest2}) $$ \left|\frac{d G_B}{dz}(z + h \theta,y) g(y)\right| \le c A |y - z|^{\eta - 1} (1 + |\log|y - z||), $$ where $c$ is an absolute constant. Note that by our assumption on $\eta$ the function $y \to |y - z|^{\eta - 1} (1 + |\log|y - z||)$ is integrable on $B = B(z,r)$. By the bounded convergence theorem we get $$ \lim_{h \to 0} \text{II} = \int_B \frac{d}{dz}G_B(z,y) g(y) \, dy. $$ It follows that \begin{equation} \label{derg2} \frac{d}{dz}G_B f(z) = \int_B \frac{d}{dz}G_B(z,y) g(y) \, dy. \end{equation} Note that $\frac{d}{dz}G_B(z,\hat{y}) = - \frac{d}{dz}G_B(z,y)$, $y \in B$. This and (\ref{derg2}) imply (\ref{GBfformula}). This finishes the justification of (\ref{GBfformula}) in both cases. Inequality (\ref{nablaGBf}) follows from (\ref{GBfformula}) and Lemma \ref{gradientGreen1}. \end{proof} \begin{lemma} \label{alphaharmonic} Let $\alpha \in (0,2)$ and $D$ be an open set in $\R^d$. For every function $f$ which is $\alpha$-harmonic in $D$ we have $$ |\nabla f(x)| \le d \frac{\|f\|_{\infty}}{\delta_D(x)}, \quad \quad \quad x \in D. $$ \end{lemma} The proof of Lemma \ref{alphaharmonic} is almost the same as the proof of Lemma 3.2 in \cite{BKN2002} and is omitted. \begin{proof}[Proof of Theorem \ref{mainthm}] Fix an arbitrary $z = (z_1,\ldots,z_d) \in D$ and $i \in \{1,\ldots,d\}$. Similarly as in Lemma \ref{betau}, for any $x = (x_1,\ldots,x_d) \in \R^d$ put $\hat{x} = x - 2 e_i (x_i - z_i)$. Using \cite[Lemma 3.5]{BB2000} let us choose $r_0 = r_0(d,\alpha,\|q\|_{\infty}) \in (0,1]$ such that for any $r \in (0,r_0]$ and any ball of radius $r$ contained in $D$ the conditional gauge function for that ball is bounded from below by $1/2$ and from above by $2$. Let $r = (\delta_D(z) \wedge r_0)/2$ and $B = B(z,r)$. 
By (\ref{representation}) we get \begin{equation} \label{decomposition} u(x) = f(x) + G_B(qu)(x), \quad x \in B, \end{equation} where $f(x) = E^x u(X_{\tau_B})$. The function $f$ is $\alpha$-harmonic on $B$. When $u$ is nonnegative on $\R^d$ by our choice of $r_0$ and by (2.15) in \cite{BB2000} we obtain $f(x) \le 2 u(x)$, $x \in B$. By \cite[Lemma 3.2]{BKN2002} it follows that $$ |\nabla f(x)| \le d \frac{f(x)}{\delta_B(x)} \le 4d \frac{u(x)}{\delta_B(z)} \le c \frac{u(x)}{\delta_D(z) \wedge 1}, \quad \quad x \in B(z,r/2), $$ where $c = c(d,\alpha,\|q\|_{\infty})$. If $u$ is not nonnegative on $\R^d$ but $\|u\|_{\infty} < \infty$ by Lemma \ref{alphaharmonic} we get $$ |\nabla f(x)| \le d \frac{\|f\|_{\infty}}{\delta_B(x)} \le c \frac{\|u\|_{\infty}}{\delta_D(z) \wedge 1}, \quad \quad x \in B(z,r/2), $$ where $c = c(d,\alpha,\|q\|_{\infty})$. Let $K = B$ when $u$ is nonnegative and $K = \R^d$ when $u$ is not nonnegative in $\R^d$ and $\|u\|_{\infty} < \infty$. It follows that for any $x \in B(z,r/2)$ we have \begin{equation} \label{harmonic} |E^x u(X_{\tau_B}) - E^{\hat{x}} u(X_{\tau_B})| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - \hat{x}| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|, \end{equation} where $c = c(d,\alpha,\|q\|_{\infty})$. Let us consider the following inequality \begin{equation} \label{main} |u(x) - u(\hat{x})| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|^{\beta} \quad \quad x \in B(z,r/2), \end{equation} for some $\beta \in [0,1]$ and $c = c(d,\alpha,\beta,q,\eta)$. Note that $|x - \hat{x}| \le 2 |x - z|$. Recall that $r \le 1/2$ so $|x - z| \le 1/2$ for $x \in B(z,r)$. 
If (\ref{main}) holds then for any $x \in B(z,r/2)$ we have \begin{eqnarray} \nonumber |q(x) u(x) - q(\hat{x}) u(\hat{x})| &\le& |u(x)| |q(x) - q(\hat{x})| + |q(\hat{x})| |u(x) - u(\hat{x})| \\ \nonumber &\le& c \sup_{y \in K} |u(y)| |x - \hat{x}|^{\eta} + c \sup_{y \in D} |q(y)| \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|^{\beta} \\ \label{festimate} &\le& c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|^{\beta \wedge \eta}, \end{eqnarray} where $c = c(d,\alpha,\beta,q,\eta)$. Note that if $\beta < 1 - \alpha$ then $\beta \wedge \eta = \beta$ and if $\beta > 1 - \alpha$ then $\beta \wedge \eta > 1 - \alpha$. Assume now that (\ref{main}) holds for some ($\alpha \in (0,1)$, $\beta \in [0,1 - \alpha)$) or ($\alpha \in (0,1]$, $\beta \in (1 - \alpha,1)$) and $c = c(d,\alpha,\beta,q,\eta)$. If $\alpha \in (0,1)$, $\beta \in [0,1 - \alpha)$ then by (\ref{festimate}) and Lemma \ref{betau} we obtain for $x \in B$ $$ |G_B(qu)(x) - G_B(qu)(\hat{x})| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|^{\beta + \alpha}, $$ where $c = c(d,\alpha,\beta,q,\eta)$. If $\alpha \in (0,1]$, $\beta \in (1 - \alpha,1)$ then by (\ref{festimate}) and Lemma \ref{betau} we obtain for $x \in B$ $$ |G_B(qu)(x) - G_B(qu)(\hat{x})| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|, $$ where $c = c(d,\alpha,\beta,q,\eta)$. Joining this with (\ref{harmonic}) we obtain in view of (\ref{decomposition}) that if (\ref{main}) holds for some ($\alpha \in (0,1)$, $\beta \in [0,1 - \alpha)$) or ($\alpha \in (0,1]$, $\beta \in (1 - \alpha,1)$) and $c = c(d,\alpha,\beta,q,\eta)$ then \begin{equation} \label{main1} |u(x) - u(\hat{x})| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} |x - z|^{(\beta + \alpha) \wedge 1} \quad \quad x \in B(z,r/2), \end{equation} for $c = c(d,\alpha,\beta,q,\eta)$. Note that (\ref{main}) holds trivially for $\beta = 0$. Assume first that $\alpha \in (0,1)$ and $k \alpha \ne 1 - \alpha$ for any $k \in \N$. 
Then repeating the above procedure we obtain that (\ref{main}) holds for $\beta = 0, \alpha, 2 \alpha, \ldots$ and finally for $\beta = 1$. Assume now that $\alpha \in (0,1)$ and $k_0 \alpha = 1 - \alpha$ for some $k_0 \in \N$. Then we obtain that (\ref{main}) holds for $\beta = 0, \alpha, 2 \alpha, \ldots, k_0 \alpha$. Then (\ref{main}) holds for any $\beta \in [0,k_0 \alpha]$. In particular, it holds for $\beta = k_0 \alpha - \alpha/2$. By (\ref{main1}) we obtain that (\ref{main}) holds for $\beta = k_0 \alpha + \alpha/2 \in (1 - \alpha,1)$. Then, again by (\ref{main1}) we obtain that (\ref{main}) holds for $\beta = 1$. Finally assume that $\alpha = 1$. (\ref{beta0}) gives that (\ref{main}) holds for $\beta = 1/2$. Then by (\ref{main1}) we obtain that (\ref{main}) holds for $\beta = 1$. Now let us fix arbitrary $w \in D$ and put $s = (\delta_D(w) \wedge r_0)/8$. We will show that $\nabla u(w)$ exists. Let us take $x,y \in B(w,s)$. Since $z \in D$ was arbitrary one can take $z = (x + y)/2$ and choose the Cartesian coordinate system and $i$ so that $y = \hat{x} = x - 2 e_i (x_i - z_i)$. We put $r = (\delta_D(z) \wedge r_0)/2$ as before. Note that $$ \delta_D(z) \ge \delta_D(w) - \frac{\delta_D(w)}{8} = \frac{7 \delta_D(w)}{8} \ge 7 s, \quad \quad s \le \frac{r_0}{8}. $$ We also have $$ |x - z| = \frac{|x - y|}{2} \le s \le \left(\frac{\delta_D(z)}{7} \wedge \frac{r_0}{8}\right) < \frac{r}{2}, $$ so $x \in B(z,r/2)$. On the other hand we have $\delta_D(z) \le s + \delta_D(w)$ so $$ r = \frac{\delta_D(z) \wedge r_0}{2} \le \frac{(s + \delta_D(w)) \wedge r_0}{2} \le \frac{s}{2} + \frac{\delta_D(w) \wedge r_0}{2} = \frac{9s}{2}, \quad \text{and} \quad |w - z| \le s. $$ Hence $B(z,r) \subset B(w,11s/2)$ which gives $\sup_{p \in B(z,r)}|u(p)| \le \sup_{p \in B(w,11s/2)}|u(p)| $. 
By (\ref{main}) for $\beta = 1$ we obtain $$ |u(x) - u(y)| \le c \frac{\sup_{p \in K'}|u(p)|}{\delta_D(z) \wedge 1} |x - z| \le c \frac{\sup_{p \in K'}|u(p)|}{\delta_D(w) \wedge 1} |x - y|, $$ where $c = c(d,\alpha,q,\eta)$ and $K' = B(w,11s/2)$ when $u$ is nonnegative in $\R^d$ and $K' = \R^d$ when $u$ is not nonnegative in $\R^d$ and $\|u\|_{\infty} < \infty$. Since $x, y \in B(w,s)$ were arbitrary we obtain that $q u$ is H{\"o}lder continuous with H{\"o}lder exponent $\eta \wedge 1$ in $B(w,s)$. Using (\ref{representation}) for $W = B(w,s)$ and Lemma \ref{gradientGreen} for $B(w,s)$ we obtain that $\nabla u(w)$ exists. Since $w \in D$ was arbitrary this implies that $\nabla u$ is well defined on $D$. Now again let us fix arbitrary $z \in D$, $i \in \{1,\ldots,d\}$ and put $r = (\delta_D(z) \wedge r_0)/2$, $B = B(z,r)$, $K = B$ when $u$ is nonnegative and $K = \R^d$ when $u$ is not nonnegative and $\|u\|_{\infty} < \infty$. When $u$ is nonnegative, by the Harnack principle (see \cite[Theorem 4.1]{BB2000}) we have \begin{equation} \label{Harnack} \sup_{y \in K} |u(y)| = \sup_{y \in B} u(y) \le c u(z). \end{equation} By the proof of \cite[Theorem 4.1]{BB2000} it follows that $c = c(d,\alpha,\|q\|_{\infty})$. Put $x = z + h e_i$, $h \in (0,r/2)$. By (\ref{main}) for $\beta = 1$ we get $$ |u(z + h e_i) - u(z - h e_i)| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1} h, $$ where $c = c(d,\alpha,q,\eta)$. Since $i \in \{1,\ldots,d\}$ is arbitrary it follows that \begin{equation} \label{firstgrad} |\nabla u(z)| \le c \frac{\sup_{y \in K} |u(y)|}{\delta_D(z) \wedge 1}. \end{equation} Of course this gives (\ref{norm}). When $u$ is nonnegative (\ref{firstgrad}) and (\ref{Harnack}) imply (\ref{nonnegative}). \end{proof} \section{Proof of Proposition \ref{counterexample} and Theorem \ref{sharpthm}} First we prove Proposition \ref{counterexample}. 
By saying that $\nabla u(x)$ exists we understand that for each $i \in \{1,\ldots,d\}$ the limit $\lim_{h \to 0} (u(x + h e_i) - u(x))/h$ exists and is finite. We say that a function is $0$ H{\"o}lder continuous if it is bounded and measurable. \begin{proof}[Proof of Proposition \ref{counterexample}] Let us choose an arbitrary point $w \in D$ and $r \in (0,\delta_D(w)/3)$. Put \begin{equation} \label{qdef} q(x) = 1_{B(w,r)}(x) (r^2 - |x - w|^2)^{1 - \alpha}, \quad \quad x \in \R^d. \end{equation} It may be easily shown that $q(x)$ is $(1 - \alpha)$ H{\"o}lder continuous. We may assume that $r$ is sufficiently small so that $(D,q)$ is gaugeable. (The fact that $(D,q)$ is gaugeable for small $r$ follows by Khasminski's lemma, see page 57 in \cite{BB1999}.) Put $u(x) = E^x(e_q(\tau_D))$, $x \in \R^d$. $u(x)$ is the gauge function for $(D,q)$. By Theorem 4.1 in \cite{BB2000} $u(x)$ is regular $q$-harmonic in $D$. Note that $u$ is continuous and bounded on $D$. Fix $z \in \partial B(w,r)$. We may assume that the Cartesian coordinate system $(x_1,\ldots,x_d)$ is chosen so that $z = (0,\ldots,0)$ and $w = (r,0,\ldots,0)$. Let $B = B(0,r)$. We will show that $\nabla u(0)$ does not exist. On the contrary, assume that $\nabla u(0)$ exists. By (\ref{representation}) we have \begin{equation} \label{sum} u(x) = E^x(u(X_{\tau_B})) + G_B(qu)(x), \quad \quad x \in B. \end{equation} Of course $\nabla E^x(u(X_{\tau_B}))$ exists for $x \in B$ (see (10) and Lemma 3.2 in \cite{BKN2002}). Put $f_0(y) = u(0) q(y)$ and $f_1(y) = (u(y) - u(0)) q(y)$. We have \begin{equation} \label{uq} u(y) q(y) = f_0(y) + f_1(y). \end{equation} For any $s > 0$ put $B_+(0,s) = \{(y_1,\ldots,y_d) \in B(0,s): \, y_1 > 0\}$, $B_+ = B_+(0,r)$ and $\hat{y} = y - 2 e_1 y_1$ for $y = (y_1,\ldots,y_d) \in \R^d$. Recall that we have assumed that $\nabla u(0)$ exists. By this and the boundedness of $u$ we get $|u(y) - u(0)| \le c |y|$ for some $c = c(w,z,r,D,d,\alpha,q)$ and any $y \in \R^d$. 
It follows that for $y \in B_+(0,r/2)$ $$ |f_1(y) - f_1(\hat{y})| = |f_1(y)| = |u(y) - u(0)| |q(y)| \le c \|q\|_{\infty} |y|, $$ where $c = c(w,z,r,D,d,\alpha,q)$. By (\ref{>1-alpha}) for any $x \in B(0,r/2)$ we have \begin{equation} \label{diff1a} |G_B f_1(x) - G_B f_1(\hat{x})| \le c |x| + c \frac{\sup_{y \in B} |f_1(y)|}{r} |x| \le c |x|, \end{equation} for some $c = c(w,z,r,D,d,\alpha,q)$. Now let us consider the case $\alpha \in (0,1]$, $d > \alpha$. By Lemmas \ref{coupling} and \ref{lowerboundGreen} we get for $x \in B_+(0,r/4)$ \begin{eqnarray} \nonumber G_B f_0(x) - G_B f_0(\hat{x}) &=& \int_{B_+} (G_{B}(x,y) - G_{B}(\hat{x},y)) (f_0(y) - f_0(\hat{y})) \, dy \\ \nonumber &=& u(0) \int_{B_+} (G_{B}(x,y) - G_{B}(\hat{x},y)) q(y) \, dy \\ \label{ux0} &\ge& c u(0) \int_{K(r,x)} \frac{|x - \hat{x}|}{|x - y|^{d - \alpha} |\hat{x} -y|} q(y) \, dy, \end{eqnarray} where $c = c(w,z,r,D,d,\alpha,q)$ and $K(r,x)$ is defined in Lemma \ref{lowerboundGreen} for $i = 1$. Since $u$ is positive on $D$ we have $u(0) > 0$. Note that for $x \in B_+(0,r/4)$ and $y \in K(r,x)$ we have $|x - y| \le (3/2) |y|$, $|\hat{x} - y| \le (3/2) |y|$. Hence (\ref{ux0}) is bounded from below by $$ c u(0) |x - \hat{x}| \int_{K(r,x)} |y|^{\alpha - d - 1} q(y) \, dy, $$ where $c = c(w,z,r,D,d,\alpha,q)$. One can easily show that for any $x \in B_+(0,r/4)$ and $y \in K(r,x)$ we have $q(y) \ge c |y|^{1 - \alpha}$, where $c = c(d,\alpha,r)$. Let $x = (x_1,0,\ldots,0) \in B_+(0,r/4)$. It follows that \begin{equation} \label{GBquotient} \frac{G_B f_0(x) - G_B f_0(\hat{x})}{|x - \hat{x}|} \ge c u(0) \int_{K(r,x)} |y|^{- d} \, dy, \end{equation} where $c = c(w,z,r,D,d,\alpha,q)$. It is clear that if $x = (x_1,0,\ldots,0)$ tends to $0$ then the right-hand side of (\ref{GBquotient}) tends to $\infty$. This, (\ref{sum}), the fact that $\nabla E^x(u(X_{\tau_B}))$ exists for $x \in B$, (\ref{uq}) and (\ref{diff1a}) give a contradiction with the assumption that $\nabla u(0)$ exists. 
Now we will consider the case $d = \alpha = 1$. Recall that in this case $q(y) = 1_{B(w,r)}(y) = 1_{(0,2r)}(y)$. By Lemmas \ref{coupling} and \ref{Greenupper2} we get for $x \in (0,r/4)$ \begin{eqnarray} \nonumber \frac{G_B f_0(x) - G_B f_0(\hat{x})}{|x - \hat{x}|} &=& \frac{u(0)}{2 |x|} \int_{B_+} (G_{B}(x,y) - G_{B}(\hat{x},y)) q(y) \, dy \\ \label{dyy} &\ge& \frac{u(0)}{15 \pi} \int_{2x}^{r/2} \frac{dy}{y}. \end{eqnarray} It is clear that if $x \to 0$ then (\ref{dyy}) tends to $\infty$. This, (\ref{sum}), the fact that $\frac{d}{dx} E^x(u(X_{\tau_B}))$ exists for $x \in B$, (\ref{uq}) and (\ref{diff1a}) give a contradiction with the assumption that $u'(0)$ exists. \end{proof} Now we will prove lower bound gradient estimates. The idea of the proof is to some extent similar to the proof of lower bound gradient estimates in \cite{BKN2002}. The main difference is the use of Lemma \ref{uppereps} below instead of \cite[Lemma 5.4]{BKN2002}. There are essential differences in the proofs of Lemma \ref{uppereps} and \cite[Lemma 5.4]{BKN2002}. The key arguments in the proof of Lemma \ref{uppereps} are based on Lemma \ref{gradientGreen}. We will use the notation as in \cite{BKN2002}. For $x = (x_1,\ldots,x_d) \in \R^d$ we write $x = (\tilde{x},x_d)$, where $\tilde{x} = (x_1,\ldots,x_{d-1})$. In order to include the case $d = 1$ in the considerations below we make the convention that for $x \in \R$, $\tilde{x} = 0$ and we set $\R^0 = \{0\}$. We fix a Lipschitz function $\Gamma: \R^{d-1} \to \R$ with a Lipschitz constant $\lambda$, so that $|\Gamma(\tilde{x}) - \Gamma(\tilde{y})| \le \lambda |\tilde{x} - \tilde{y}|$ for $\tilde{x}, \tilde{y} \in \R^{d - 1}$. We put $\rho(x) = x_d - \Gamma(\tilde{x})$. $D$ denotes the special Lipschitz domain defined by $D = \{x \in \R^d: \, \rho(x) > 0\}$. The function $\rho(x)$ serves as the vertical distance from $x \in D$ to $\partial{D}$. 
We define the ``box'' $$ \Delta(x,a,r) = \{y \in \R^d: \, 0 < \rho(y) < a, \, |\tilde{x} - \tilde{y}| < r\}, $$ where $x \in \R^d$ and $a,r > 0$. We note that $\Delta(x,a,r)$ is a Lipschitz domain. We also define the ``inverted box'' $$ \nabla(x,a,r) = \{y \in \R^d: \, -a < \rho(y) \le 0, \, |\tilde{x} - \tilde{y}| < r\}. $$ The same symbol $\nabla$ is used for the gradient but the meaning will be clear from the context. For $r > 0$ and $Q \in \partial D$ we set $\Delta_r = \Delta(Q,r,r)$ and $G_r = G_{\Delta_r}$. For a nonnegative function $u$ we put $$ u^{\Delta_r}(x) = E^x u(X_{\tau_{\Delta_r}}), \quad x \in \R^d. $$ Fix $Q \in \partial D$ and assume that a Borel function $q$ satisfies \begin{equation} \label{qbox} |q(x) - q(y)| \le A |x - y|^{\eta}, \end{equation} for some $A > 0$, $\eta \in (1-\alpha,1]$ and all $x,y \in \Delta_{s_0}$ for some $s_0 \in (0,1]$. Now we will repeat the assertion of Lemma 5.3 in \cite{BKN2002}. Note that the assertion of Lemma 5.3 in \cite{BKN2002} holds for all $\alpha \in (0,2)$ under the condition that $q 1_{\Delta_{s_0}} \in \calJ^{\alpha}$ for some $s_0 \in (0,1]$. This condition follows from (\ref{qbox}). For every $\eps > 0$ there exists a constant $r_0 = r_0(d,\lambda,\alpha,\eta, q,s_0,\eps) \le s_0 \le 1$ such that if $r \in (0,r_0]$ and $u: \R^d \to [0,\infty)$ is $q$-harmonic and bounded in $\Delta_r$ then \begin{equation} \label{eps1} (1 - \eps) u^{\Delta_r}(x) \le u(x) \le (1 + \eps) u^{\Delta_r}(x), \quad \quad x \in \R^d, \end{equation} and \begin{equation} \label{eps2} G_r(|q| u)(x) \le \eps u^{\Delta_r}(x), \quad \quad x \in \R^d. \end{equation} \begin{lemma} \label{uppereps} Let $\alpha \in (0,1]$ and $\eps \in (0,1/2]$. 
There exist constants $c = c(d,\alpha,\eta,q)$ and $\kappa = \kappa(d,\lambda,\alpha,\eta,q,r_0,s_0,\eps) \le r_0$ such that if $0 < r \le \kappa$, $u: \R^d \to [0,\infty)$ is $q$-harmonic and bounded in $\Delta_r$ then $$ |\nabla G_r(qu)(x)| \le \eps c \frac{u(x)}{\delta_{\Delta_r}(x)}, \quad \quad x \in \Delta_r. $$ \end{lemma} \begin{proof} Let us choose $$ \kappa = \max\{s \in (0,r_0]: \, \sup_{0 < a \le s} a^{\eta + \alpha - 1} (1 + |\log a|) \le \eps\}. $$ Fix $r \in (0,\kappa]$ and $x_0 \in \Delta_r$. Note that $\delta_{\Delta_r}(x_0) \le r \le 1$. Let $B = B(x_0,\delta_{\Delta_r}(x_0)/2)$. We have \begin{equation} \label{Grsplit} G_r(qu)(x_0) = G_B(qu)(x_0) + E^{x_0} G_r(qu)(X_{\tau_B}). \end{equation} We will estimate the gradients of the two terms on the right-hand side of (\ref{Grsplit}) separately. Let $P_B(x,z)$, $x \in B$, $z \in \text{int}(B^c)$ be the Poisson kernel for $B$ (that is, the density of the $P^x$ distribution of $X(\tau_B)$; see \cite{BGR1961}). By Lemma 3.1 in \cite{BKN2002} we have \begin{eqnarray} \nonumber \left|\nabla \left(E^{x_0} G_r(qu)(X_{\tau_B})\right)\right| &=& \left|\nabla_x \int_{B^c} P_B(x_0,z) G_r(qu)(z) \, dz \right| \\ \nonumber &\le& \int_{B^c} |\nabla_x P_B(x_0,z)| G_r(|q|u)(z) \, dz \\ \nonumber &\le& \frac{c}{\delta_{\Delta_r}(x_0)} \int_{B^c} P_B(x_0,z) G_r(|q|u)(z) \, dz \\ \label{normGrqu} &=& \frac{c}{\delta_{\Delta_r}(x_0)} E^{x_0} G_r(|q|u)(X_{\tau_B}), \end{eqnarray} where $c = c(d,\alpha)$. By (\ref{Grsplit}) for $|q|$ instead of $q$ we obtain that $E^{x_0} G_r(|q|u)(X_{\tau_B}) \le G_r(|q|u)(x_0)$. Using this and (\ref{eps1}), (\ref{eps2}) we obtain that (\ref{normGrqu}) is bounded from above by $c \eps u(x_0)/\delta_{\Delta_r}(x_0)$, where $c = c(d,\alpha)$. By Theorem \ref{mainthm} for any $x,y \in B$ we have $$ |u(x) - u(y)| \le \frac{c}{\delta_{\Delta_r}(x_0)} \left(\sup_{z \in B} u(z) \right) |x - y|, $$ for some $c = c(d,\alpha,\eta,q)$. 
Using this and (\ref{qbox}) for any $x,y \in B$ we get \begin{eqnarray*} |q(x) u(x) - q(y) u(y)| &\le& |q(x)| |u(x) - u(y)| + |q(x) -q(y)| |u(y)| \\ &\le& \frac{c (A + 1)}{\delta_{\Delta_r}(x_0)} \left(\sup_{z \in B} u(z) \right) |x - y|^{\eta}, \end{eqnarray*} for some $c = c(d,\alpha,\eta,q)$. Hence by Lemma \ref{gradientGreen} we obtain \begin{equation} \label{GBqu} |\nabla G_B(qu)(x_0)| \le \frac{c (A + 1)}{\delta_{\Delta_r}(x_0)} \left(\sup_{z \in B} u(z) \right) (\delta_{\Delta_r}(x_0))^{\eta + \alpha - 1} (1 + |\log(\delta_{\Delta_r}(x_0))|), \end{equation} where $c = c(d,\alpha,\eta,q)$. Note that $\delta_{\Delta_r}(x_0) \le r \le \kappa$. By our choice of $\kappa$ we have $$ (\delta_{\Delta_r}(x_0))^{\eta + \alpha - 1} (1 + |\log(\delta_{\Delta_r}(x_0))|) \le \eps. $$ Using this and the Harnack inequality (see (\ref{Harnack}) with $y$ changed to $z$ and $z$ changed to $x_0$) we obtain that the right-hand side of (\ref{GBqu}) is bounded from above by $c \eps u(x_0)/\delta_{\Delta_r}(x_0)$, where $c = c(d,\alpha,\eta,q)$. \end{proof} The next lemma is similar to Lemma 5.6 in \cite{BKN2002}. \begin{lemma} \label{lowerbox} Let $\alpha \in (0,1]$. There are constants $c = c(d,\alpha,\lambda)$, $h = h(d,\alpha,\lambda)$ and $r_1 = r_1(d,\alpha, \lambda, \eta, q, s_0)$ such that if $0 < r \le r_1$ and $u$ is nonnegative in $\R^d$, $q$-harmonic and bounded in $\Delta_r$ and vanishes in $\nabla(Q,r,r)$ then $$ |\nabla u(x)| \ge c \frac{u(x)}{\delta_{\Delta_r}(x)}, \quad \quad x \in \Delta(Q,r h,r/2). $$ \end{lemma} \begin{proof} The function $u$ satisfies (\ref{representation}) with $W = \Delta_r$. Using \cite[Lemma 4.5]{BKN2002} and scaling, (\ref{eps1}) and Lemma \ref{uppereps} we obtain the result by an appropriate choice of $\eps$ in Lemma \ref{uppereps}. \end{proof} \begin{proof}[Proof of Theorem \ref{sharpthm}] The upper bound follows from Theorem \ref{mainthm}. The lower bound follows from Lemma \ref{lowerbox} and compactness of $\partial D \cap K$. 
\end{proof} \section{Applications} As an application of the main results of this paper we obtain gradient estimates of eigenfunctions of the fractional Schr{\"o}dinger operator. \begin{corollary} \label{Schrodinger} Assume that $\alpha \in (0,2)$, $D \subset \R^d$ is an open bounded set, $q \in \calJ^{\alpha -1}$ when $\alpha \in (1,2)$, or $q$ is H{\"o}lder continuous on $D$ with H{\"o}lder exponent $\eta > 1 - \alpha$ when $\alpha \in (0,1]$. Let $\{\varphi_n\}_{n =1}^{\infty}$ be the eigenfunctions of the eigenvalue problem (\ref{spectral1})-(\ref{spectral2}) for the fractional Schr{\"o}dinger operator on $D$ with zero exterior condition. Then $\nabla \varphi_n(x)$ exists for any $n \in \N$, $x \in D$ and we have \begin{equation} \label{Schr1} |\nabla \varphi_1(x)| \le c \frac{\varphi_1(x)}{\delta_D(x) \wedge 1}, \quad \quad x \in D, \end{equation} where $c = c(D,q,\alpha,\eta)$ and \begin{equation} \label{Schr2} |\nabla \varphi_n(x)| \le \frac{c_n}{\delta_D(x) \wedge 1}, \quad \quad x \in D, \end{equation} where $c_n = c_n(D,q,\alpha,\eta)$. Furthermore, if additionally $D \subset \R^d$ is a bounded Lipschitz domain then there exists $\eps = \eps(D,q,\alpha,\eta)$ such that \begin{equation} \label{Schr3} |\nabla \varphi_1(x)| \ge c \frac{\varphi_1(x)}{\delta_D(x)}, \quad \quad x \in D, \quad \delta_D(x) \le \eps, \end{equation} where $c = c(D,q,\alpha,\eta)$. \end{corollary} The result is new even for $q \equiv 0$. In that case this is the eigenvalue problem for the fractional Laplacian with zero exterior condition. This eigenvalue problem has recently been studied very intensively; see e.g. \cite{BK2004}, \cite{CS2005}, \cite{BKS2009}, \cite{KL2011}, \cite{FG2011}, \cite{BKM2006}. For $\alpha = 2$, under the additional assumptions that $d \ge 3$, $D$ is connected and Lipschitz, inequalities (\ref{Schr1}) and (\ref{Schr2}) follow from \cite[Theorem 1]{CZ1990} and inequality (\ref{Schr3}) follows from \cite[Theorem 1]{BP1999}. 
Before we come to the proof of Corollary \ref{Schrodinger} we will need the following easy addendum to the results obtained in \cite{BKN2002}. \begin{lemma} \label{qharmonic12} Let $\alpha \in (1,2)$, $q \in \calJ^{\alpha - 1}$ and $D \subset \R^d$ be an open set. Assume that the function $u$ is $q$-harmonic in $D$ and $\|u\|_{\infty} < \infty$. Then $\nabla u(x)$ exists for any $x \in D$ and we have $$ |\nabla u(x)| \le c \frac{\|u\|_{\infty}}{\delta_{D}(x) \wedge 1}, \quad \quad x \in D, $$ where $c = c(d,\alpha,q)$. \end{lemma} \begin{proof} The proof of this lemma follows from the arguments used in \cite{BKN2002}. First note that the assertion of Lemma 5.4 in \cite{BKN2002} remains true if we replace the assumption that $u$ is nonnegative in $\R^d$ by the assumption that $\|u\|_{\infty} < \infty$ and when we replace $u(x)$ by $\|u\|_{\infty}$ on the right-hand side of the estimate of $|\nabla G_r(qu)(x)|$. Then the proof of Lemma \ref{qharmonic12} is almost the same as the proof of Lemma 5.5 in \cite{BKN2002}. \end{proof} \begin{proof}[Proof of Corollary \ref{Schrodinger}] It is clear that $\varphi_n$ is not $(q + \lambda_n)$-harmonic on the whole $D$ because $(D,q+\lambda_n)$ is not gaugeable. However, by the definition of the Kato class and standard arguments (see e.g. page 299 in \cite{BB2000}) for any $n = 1,2,\ldots$ there exist $r \in (0,1]$ and a finite number of balls $B(x_1,r),\ldots,B(x_M,r)$ such that $x_1,\ldots,x_M \in D$, $$ D \subset \bigcup_{m = 1}^{M} B(x_m,r) $$ and each $(B(x_m,2r) \cap D,q+\lambda_n)$ is gaugeable. This means that $\varphi_n$ is $(q + \lambda_n)$-harmonic on each $B(x_m,2r) \cap D$. Note that for any $x \in B(x_m,r) \cap D$ we have $$ \delta_{B(x_m,2r) \cap D}(x) \wedge 1 \ge \delta_D(x) \wedge r \wedge 1 \ge r (\delta_D(x) \wedge 1). $$ Now, (\ref{Schr1}), (\ref{Schr2}) follow from Theorem \ref{mainthm} for $\alpha \in (0,1]$ and from \cite[Lemma 5.5]{BKN2002}, Lemma \ref{qharmonic12} for $\alpha \in (1,2)$. 
Inequality (\ref{Schr3}) follows from similar arguments and Lemma \ref{lowerbox} for $\alpha \in (0,1]$ and \cite[Lemma 5.6]{BKN2002} for $\alpha \in (1,2)$. \end{proof} As another application of our main result we show that under some assumptions on $q$ a weak solution of $\Delta^{\alpha/2} u + q u = 0$ is in fact a strong solution. First we need the following easy lemma. \begin{lemma} \label{existence} Let $\alpha \in (0,1)$. Choose $x_0 \in \R^d$ and $r > 0$. Assume that a Borel function $u: \R^d \to \R$ satisfies $$ \int_{\R^d} \frac{|u(y)|}{(1 + |y|)^{d + \alpha}} \, dy < \infty, $$ $\nabla u(x)$ exists and $|\nabla u(x)| \le A$ for all $x \in B(x_0,r)$ and some constant $A$. Then $\Delta^{\alpha/2} u(x)$ is well defined and continuous on $B(x_0,r/2)$. \end{lemma} \begin{proof} Let $x \in B(x_0,r/2)$. Choose $\eps \in (0,r/2)$. We have \begin{eqnarray} \label{eps1a} \int_{|x - y| < \eps} \frac{|u(y) - u(x)|}{|y - x|^{d + \alpha}} \, dy &\le& \int_{|x - y| < \eps} \frac{A |y - x|}{|y - x|^{d + \alpha}} \, dy \\ \label{eps2a} &=& c A \int_0^{\eps} \rho^{-\alpha} \, d \rho \to 0, \quad \quad \text{when} \quad \eps \to 0, \end{eqnarray} where $c = c(d)$. By the definition of $\Delta^{\alpha/2}$ (see Preliminaries) we obtain that $\Delta^{\alpha/2} u(x)$ is well defined. One can easily show that for any fixed $\eps \in (0,r/2)$ the function $$ f_{\eps}(x) = \int_{|x - y| > \eps} \frac{|u(y) - u(x)|}{|y - x|^{d + \alpha}} \, dy $$ is continuous on $B(x_0,r/2)$. This and (\ref{eps1a})--(\ref{eps2a}) imply that $\Delta^{\alpha/2} u(x)$ is continuous on $B(x_0,r/2)$. \end{proof} \begin{proof}[Proof of Corollary \ref{weakstrong}] Choose an arbitrary $x_0 \in D$. It is clear that there exists $r > 0$ such that $B(x_0,2r) \subset \subset D$ and $(B(x_0,2r),q)$ is gaugeable. This can be done by Khasminski's lemma (see page 299 in \cite{BB2000}). Put $B = B(x_0,r)$. 
By \cite[Theorem 5.5]{BB1999} we may assume that $u$ is a $q$-harmonic function on $B(x_0,2r)$ (after a modification on a set of Lebesgue measure zero). By (\ref{representation}) we get \begin{equation} \label{repr1} u(x) = E^x u(X_{\tau_B}) + G_B(qu)(x), \quad \quad x \in \R^d. \end{equation} By Theorem \ref{mainthm} and Lemma \ref{existence} $\Delta^{\alpha/2} u(x)$ is well defined and continuous on $B(x_0,r/2)$. The function $v(x) = E^x u(X_{\tau_B})$ is an $\alpha$-harmonic function on $B(x_0,r/2)$, so $\Delta^{\alpha/2} v(x) = 0$ on $B(x_0,r/2)$. Hence by (\ref{repr1}) we obtain $$ \Delta^{\alpha/2} u(x) = \Delta^{\alpha/2}(G_B(qu))(x), \quad \quad x \in B(x_0,r/2). $$ By Lemma 5.3 in \cite{BB2000} we have $$ \Delta^{\alpha/2}(G_B(qu))(x) = -q(x) u(x), $$ for almost all $x \in B(x_0,r/2)$. But both sides of this equality are continuous, so in fact this equality holds for all $x \in B(x_0,r/2)$. \end{proof} {\bf{Acknowledgements.}} I am grateful for the hospitality of the Institute of Mathematics, Polish Academy of Sciences, the branch in Wroc{\l}aw, where part of this paper was written.
{"config": "arxiv", "file": "1209.5904.tex"}
TITLE: How to formulate a data fitting problem as a least squares problem QUESTION [2 upvotes]: Formulate the data fitting problem as a least squares problem $\frac {1}{2} \Vert Ax-b \Vert_2^2 $ I thought I was supposed to write it like this: $ \frac {1}{2} x^THx + g^Tx + \gamma$ but actually that's an unconstrained quadratic program; any help? REPLY [1 votes]: In data fitting, we are interested in solving: $$\boldsymbol{\theta}^* = \underset{{\boldsymbol{\theta}} \in \mathbb{R}^{D+1}}{\operatorname{arg\,min}}\ J(\boldsymbol{\theta})$$ The error function $J\colon \mathbb{R}^{D+1}\to\mathbb{R}$ is given by $$J(\boldsymbol{\theta}) = \frac{1}{2N} \sum\limits_{n=1}^N \{h_{\boldsymbol{\theta}}(\boldsymbol{\phi}^{(n)})-t^{(n)}\}^2$$ where $\boldsymbol{\phi}\colon\mathbb{R}^D \rightarrow \mathbb{R}^{\mathcal{H}}$ is a map/transformation function. The hypothesis $h_{\boldsymbol{\theta}}(\boldsymbol{\phi}^{(n)})$ we want to fit is given by: $$h_{\boldsymbol{\theta}}(\boldsymbol{\phi}^{(n)}) = h(\boldsymbol{\phi}^{(n)},\boldsymbol{\theta}) = \theta_0 + \theta_1 \phi^{(n)}_1 + \theta_2 \phi^{(n)}_2 + \dots + \theta_D \phi^{(n)}_D = \sum\limits_{d=0}^D \theta_d \phi^{(n)}_d, \quad \phi^{(n)}_0 = 1$$ If we define the parameter vector $\boldsymbol{\theta} = [\theta_0, \theta_1, \dots, \theta_D]^T \in \mathbb{R}^{D+1}$ the vectorized form of the hypothesis and the error functions respectively is $$h_{\boldsymbol{\theta}}(\boldsymbol{\phi}^{(n)}) = \boldsymbol{\theta}^T \boldsymbol{\phi}^{(n)}$$ and $$J(\boldsymbol{\theta}) = \frac{1}{2N} (\boldsymbol{\Phi}\boldsymbol{\theta}- \mathbf{t})^T(\boldsymbol{\Phi}\boldsymbol{\theta}- \mathbf{t}) = \frac{1}{2N}||\boldsymbol{\Phi}\boldsymbol{\theta}- \mathbf{t}||^2$$ with $$\boldsymbol{\Phi} = \begin{bmatrix} (\boldsymbol{\phi}^{(1)})^T \\[0.3em] (\boldsymbol{\phi}^{(2)})^T \\[0.3em] \vdots \\[0.3em] (\boldsymbol{\phi}^{(N)})^T \end{bmatrix}= \begin{bmatrix} \phi^{(1)}_1 & \phi^{(1)}_2 & \dots & \phi^{(1)}_D \\[0.3em] \phi^{(2)}_1 & \phi^{(2)}_2 & \dots & 
\phi^{(2)}_D \\[0.3em] \vdots \\[0.3em] \phi^{(N)}_1 & \phi^{(N)}_2 & \dots & \phi^{(N)}_D \\[0.3em] \end{bmatrix}$$ Finally, the quadratic form of the error function is: $$J(\boldsymbol{\theta}) = \frac{1}{2N} \Bigg\{ \boldsymbol{\theta}^T \boldsymbol{\Phi}^T\boldsymbol{\Phi}\boldsymbol{\theta} -2 \boldsymbol{t}^T \boldsymbol{\Phi} \boldsymbol{\theta} + \boldsymbol{t}^T\boldsymbol{t} \Bigg\}$$ PS: This methodology is also called multivariable linear regression.
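To make the equivalence concrete, here is a small numerical sketch (NumPy, with a made-up random design matrix $\boldsymbol{\Phi}$ and targets $\mathbf{t}$; all names and values are illustrative, not from the question). Setting the gradient of the quadratic form to zero yields the normal equations $\boldsymbol{\Phi}^T\boldsymbol{\Phi}\,\boldsymbol{\theta} = \boldsymbol{\Phi}^T\mathbf{t}$, and their solution agrees with a generic least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix: N samples, D features plus an intercept column of ones
N, D = 50, 3
Phi = np.hstack([np.ones((N, 1)), rng.normal(size=(N, D))])
theta_true = np.array([2.0, -1.0, 0.5, 3.0])
t = Phi @ theta_true + 0.01 * rng.normal(size=N)   # targets with small noise

# Least-squares solution via a generic solver
theta_lstsq, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# Same solution from the quadratic form:
# grad J = (1/N) (Phi^T Phi theta - Phi^T t) = 0  =>  normal equations
theta_normal = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)

assert np.allclose(theta_lstsq, theta_normal)
assert np.allclose(theta_lstsq, theta_true, atol=0.05)
```

The factor $\frac{1}{2N}$ only rescales $J$ and drops out of the minimizer, which is why both routes give the same $\boldsymbol{\theta}$.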
{"set_name": "stack_exchange", "score": 2, "question_id": 1568031}
TITLE: Why is Kirchhoff's second law commonly used for circuits involving inductance? QUESTION [3 upvotes]: I have often seen in introductory literature, e.g. for the switch-on process of a circuit with a power source which delivers the voltage $U$, current $I$, resistance $R$ and an inductance $L$, that Kirchhoff's second law is frequently used for deducing e.g. $I(t)$ of the process. I cite (translate) from the script of my professor last term: "We determine the direction of the current and add all the voltages positively and subtract all the voltage drops, i.e. $U(t)-RI(t)+U_{ind}(t)=0$. Before turning the circuit on, $U=I=0$, afterwards $U(t)=U_0$, which leads to the differential equation $L\frac{dI(t)}{dt}+RI(t)=U_0$ for $I(t)$." which is basically Kirchhoff's second law. I'm perfectly fine with the calculations, but why may we use Kirchhoff's second law? It follows from the fact that the electric field is conservative. But when we have an inductance involved in a circuit, we face, by the law of induction, $$\oint \vec{E} \cdot d\vec{r} = - \frac{d\Phi}{dt} \neq 0$$ explicitly that $\vec{E}$ is NOT conservative, which actually forbids us to use Kirchhoff's second law, doesn't it? REPLY [4 votes]: In circuits with nontrivial inductance, Kirchhoff's law of voltages is precisely an expression of Faraday's law, $$\oint \vec{E} \cdot d\vec{r} +\frac{d\Phi}{dt} =0.$$ The first part comprises the resistive and capacitive voltages as would be measured in an electrostatic setting, as well as the EMF of any sources in the circuit. The second term, ${d\Phi}/{dt}$, gives the inductive EMF from all the inductors in the circuit. This you can then model appropriately, but its origin is directly from the Faraday law. Thus, for example, you might neglect the inductance of the circuit's overall loop (which is usually very small), and then you can split up the contribution to the flux derivative from the various inductive elements in the circuit. 
Generally, these flux derivatives are given by a self-inductance term, $$\frac{d\Phi}{dt}=L\frac{dI}{dt},$$ as well as a sum of mutual inductance terms with (in principle all) other circuits in the world, from which you usually neglect all except the important ones. This gives you a simplified expression suitable for use as Kirchhoff's voltage law, which still emanates from the Faraday law.
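As a numerical sanity check of the switch-on equation from the question (a sketch with illustrative component values $U_0$, $R$, $L$ chosen by me, not from the question): the solution of $L\,\frac{dI}{dt}+RI=U_0$ with $I(0)=0$ is $I(t)=\frac{U_0}{R}\left(1-e^{-Rt/L}\right)$, and the ODE residual is numerically zero:

```python
import numpy as np

U0, R, L = 10.0, 2.0, 0.5   # volts, ohms, henries (illustrative values)

def I(t):
    # Closed-form solution of L*dI/dt + R*I = U0 with I(0) = 0
    return (U0 / R) * (1.0 - np.exp(-R * t / L))

t = np.linspace(0.0, 2.0, 1001)
h = t[1] - t[0]
dI = np.gradient(I(t), h)          # numerical derivative of I

residual = L * dI + R * I(t) - U0  # should vanish along the whole transient
assert abs(I(0.0)) < 1e-12
assert np.max(np.abs(residual[1:-1])) < 1e-3   # interior points (central differences)
```

Note that $I(t)\to U_0/R$ as $t\to\infty$: once the transient dies out, the inductive term drops away and only the resistive drop remains.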
{"set_name": "stack_exchange", "score": 3, "question_id": 203704}
TITLE: An example where the maximum of $f(x)$ plus the maximum of $g(x)$ does not equal the maximum of $(f+g)(x)$ QUESTION [1 upvotes]: Let $f(x)$ and $g(x)$ be continuous functions on $[0,1]$. Give an example where the maximum of $f(x)$ plus the maximum of $g(x)$ does not equal the maximum of $(f+g)(x)$ on $[0,1]$. Now I have tried to find an example, but each time the maximum of $f(x)$ plus the maximum of $g(x)$ would equal the maximum of $(f+g)(x)$. For example, I would take $f(x) = x^2$ and $g(x) = x+2$. The maximum of $f(x)$ would be $1$ and the maximum of $g(x)$ would be $3$, so $1+3 = 4$. However, for $(f+g)(x) = x^2 + x + 2$, the maximum of $(f+g)(x)$ is $4$. This happened every time I used continuous functions for both $f(x)$ and $g(x)$. Any help would be greatly appreciated. REPLY [1 votes]: As others have said, look for functions with maxima in different places. Try looking at trigonometric functions. Particularly, $\sin$ and $\cos$.
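Following the hint, a quick numerical check (Python; the grid resolution is an arbitrary choice of mine) with $f=\sin$ and $g=\cos$ on $[0,1]$: the maxima occur at different points ($x=1$ for $\sin$, $x=0$ for $\cos$), so $\max f + \max g = \sin(1)+1 \approx 1.84$ while $\max(f+g) = \sqrt{2} \approx 1.41$, attained at $x=\pi/4$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
f, g = np.sin(x), np.cos(x)

max_f = f.max()          # sin(1), attained at x = 1
max_g = g.max()          # cos(0) = 1, attained at x = 0
max_sum = (f + g).max()  # sqrt(2)*sin(x + pi/4) peaks at x = pi/4 in [0,1]

assert max_f + max_g - max_sum > 0.4       # the two quantities genuinely differ
assert abs(max_sum - np.sqrt(2)) < 1e-6    # max(f+g) = sqrt(2)
```

The general fact behind this is $\max(f+g) \le \max f + \max g$, with equality only when $f$ and $g$ attain their maxima at a common point.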
{"set_name": "stack_exchange", "score": 1, "question_id": 212141}
TITLE: map from 6-vertex model to domino tiling QUESTION [7 upvotes]: I am trying to find a correspondence between the 6-vertex model and Aztec diamond tilings. Here are the building blocks of the 8-vertex model: There seems to be more than one correspondence. I found at least two: Ferrari and Spohn, "Domino tilings and the six-vertex model at its free fermion point"; Zinn-Justin, "Six-Vertex Model with Domain Wall Boundary Conditions and One-Matrix Model". And there are likely even more. My goal had been to try to understand if there was an equivalence between domino tilings and alternating sign matrices, with the 6-vertex model as an intermediate object. I wound up just totally confusing myself. (Related notions: Ising model, alternating sign matrix, 6-vertex model, domino tiling.) Now ASMs and domino tilings in the same space have different counts. ELKP shows how to obtain two different alternating sign matrices from the domino tilings. Every paper seems to give a different answer. My main question is whether the combinatorics models (domino tilings, alternating sign matrices) are the same as the statistical mechanics models. Here I list a few candidates: Ising, 6-vertex, XXZ, etc. REPLY [7 votes]: In short: unlike the mapping between alternating-sign matrices and the configurations of the six-vertex model with domain-wall boundary conditions, the mapping between the six-vertex model and domino tilings is not one-to-one. This accounts for the fact that the relevant special case of the six-vertex model (with $a=b=c/\sqrt{2}$) corresponds to the 2-enumeration of alternating-sign matrices (ASMs). On the statistical-mechanical side we're dealing with the special case $a=b<c$, known as the F-model, of the six-vertex model. (Historical aside: today I finally found out that the name "F-model" was chosen by Rys in honour of his thesis advisor, Fierz, who apparently came up with the model.) 
Recall that $c$ is the weight of the two 'saddle-like' arrow configurations on the edges surrounding a vertex: these are vertices 5 and 6 in the OP's figure. The ice rule implies that along any row (and any column) the two vertex configurations of weight $c$ must occur alternatingly, with any number of intermediate vertices of weight $a$ and $b$. The domain-wall boundary conditions further imply that one of the two vertices of weight $c$ occurs less than the other: it must occur precisely one time less in every row and in every column. (Whether this is vertex 5 or 6 in the OP depends on which of the two possible domain-wall boundaries one chooses; the two are related by reversing all arrows.) Of course these facts are precisely what allows one to relate configurations of the six-vertex model with domain walls to ASMs, cf. Kuperberg [arXiv:math/9712207]. On the combinatorial side recall that an $x$-enumeration counts ASMs with weight $x^k$ when the ASM contains $k$ entries equal to $-1$. Now the latter ASM entries precisely correspond to the vertex configuration of weight $c$ that occurs less (cf above). Thus, if a row contains $l$ of these vertices, it must contain precisely $2l+1$ vertices of weight $c$. But this is true for every row of the lattice. The upshot is that if the lattice has size $L\times L$ then the domain-wall partition function of the F-model accounts for the various $x$-enumerations of ASMs: $$c^L \ Z_L(a=b,c) \quad\longleftrightarrow\quad \text{$c^2$-enumeration of $L\times L$ ASMs} \ . $$ (Since common rescalings of $a,b,c$ only yield a physically unimportant normalization factor for $Z_L$ we can set $a=b=1$ to remove the overall factor in the above correspondence.) In particular, at the combinatorial or ice point ($a=b=c$) the domain-wall partition function just counts the number of ASMs up to an overall normalization. 
By assigning two domino tiles for every $-1$ in the ASM (as in Fig 1 of Zinn-Justin cited in the OP) we thus get a combinatorial interpretation for the 2-enumeration of ASMs (counted by the domain-wall partition function at $a=b=c/\sqrt{2}$) in terms of the number of domino tilings of the so-called Aztec diamond. PS. Just to mention some more related terminology consider the combination $$\Delta=\frac{a^2+b^2-c^2}{2\,a\,b}$$ of vertex weights. For the F-model we have $\Delta = 1-(c/a)^2/2$. The ice-model corresponds to $\Delta =1/2$, the 2-enumeration of ASMs to $\Delta = 0$ and their 3-enumeration to $\Delta=-1/2$. The value $\Delta = 0$ is known as the free-fermion point. That this value is quite special is clear from the viewpoint of the XXZ spin chain related to the six-vertex model, whose Hamiltonian is of the form $$H_{XXZ} = \sum_{j\in\mathbb{Z}/L\mathbb{Z}} (S_j^x \, S_{j+1}^x + S_j^y \, S_{j+1}^y + \Delta \, S_j^z \, S_{j+1}^z) \ \in \ \text{End}((\mathbb{C}^2)^{\otimes L}) \ ,$$ where $\Delta$ now sets the (partial) anisotropy, breaking the $SU(2)$-symmetry of Heisenberg's isotropic XXX spin chain ($\Delta=1$) to the subgroup $U(1)\subseteq SU(2)$ of rotations around the $z$-axis.
TITLE: Isomorphism between homotopy groups of CW-complexes QUESTION [1 upvotes]: Let $(Y, y_0)$ and $(Y', y_0)$ be pointed CW-complexes, with $Y'$ obtained from $Y$ by attaching $(n+1)$-cells. Why is it true that $i_{*}: \pi_{q}(Y, y_0) \to \pi_{q}(Y', y_0)$ is an isomorphism for $q < n$ and an epimorphism for $q=n$? I am aware of the fact that this is true if $Y$ is the $n$-skeleton of $Y'$, but here I only know that the $n$-skeleton of $Y'$ is contained in $Y$. By the way, this has already been asked here, but I couldn't quite understand the answer. As a related question: if $e_{\alpha}^{n+1}$ is an $(n+1)$-cell in $Y'$, why is it true that $i_{*}(\alpha) =0$? Note I am using notation as in Switzer, Lemma 9.8 in the chapter dedicated to Brown's theorem. REPLY [1 votes]: By cellular approximation, up to homotopy, one may assume that $Y'$ is obtained by adding $(n+1)$-cells to the $n$-skeleton of $Y$, so that $Y^{(n)}= Y'^{(n)}$. Let $f:S^q\to Y'$ be a pointed map, $q\leq n$. Then by cellular approximation, up to pointed homotopy, $f$ lands in the $n$-skeleton of $Y'$. But the $n$-skeleton of $Y'$ is the same as that of $Y$, so $f$ lands in $Y$ up to homotopy. This shows surjectivity for $q\leq n$. Similarly, if $q<n$, then $q+1\leq n$, so again by cellular approximation any homotopy between two such maps can be homotoped to live in $Y$; this shows injectivity for $q<n$. For the second question, if you've added a cell along $\alpha: S^n\to Y$, then that means, by definition, that there is a commutative diagram: $\require{AMScd}\begin{CD} S^n @>>> Y\\ @VVV @VVV \\ D^{n+1} @>>> Y'\end{CD}$ It follows that $S^n\to Y\to Y'$ extends to $D^{n+1}$, i.e. it is $0$ in homotopy.
TITLE: Some examples of local and nonlocal properties QUESTION [4 upvotes]: Today I learned that continuity at a point is a local property. Concretely, if $f: \mathbb R \to \mathbb R$ is continuous on $[-K,K]$ for all $K \in \mathbb R$ then $f$ is continuous on $ \mathbb R$. Uniform convergence on the other hand is not a local property: if $g_n \to g$ uniformly on $[-K,K]$ for all $K \in \mathbb R$ then it does not follow that $g_n \to g$ uniformly on $\mathbb R$. (It is not clear to me though whether this is only because $ \mathbb R$ is not compact and whether it would hold if $\mathbb R$ were compact.) Since I still don't fully grasp what a local property is and what a non-local property is, I would like to kindly request you to post some examples of both to help me get a feel for it. Added For example, is differentiability a local property like continuity? Does it hold that if $f: \mathbb R \to \mathbb R$ is differentiable on $[-K,K]$ for all $K \in \mathbb R$ then $f$ is differentiable on $ \mathbb R$? REPLY [5 votes]: Uniform continuity is also not local. A continuous function is uniformly continuous on compact sets. For example, the function $f(x)=x^2$ is uniformly continuous on any finite interval $[-K,K]$ but not on the whole real line. We want to check the definition of uniform continuity, i.e. for each $\varepsilon>0$ there exists $\delta>0$ such that for all $|x-y|<\delta$, you have $|f(x)-f(y)|<\varepsilon$. Equivalently, if $x_n,y_n$ are two sequences with $|x_n-y_n|\to 0$, then $|f(x_n)-f(y_n)|\to 0$. If you are inside a compact set the sequences $x_n,y_n$ can't do much, but if you are on the whole real line, you can take sequences $x_n,y_n\to \infty$ such that $|x_n-y_n|\to 0$. For example take $x_n= n+1/n$, $y_n=n-1/n$.
Then we have $$|f(x_n)-f(y_n)|= \left(n^2+2+\tfrac{1}{n^2}\right)-\left(n^2-2+\tfrac{1}{n^2}\right)=4 \nrightarrow 0$$ Local properties depend only on local data, but if you look at the definition of uniform continuity you must be able to check it for all $x,y\in \mathbb R$ with $|x-y|<\delta$, and the example shows how this can fail.
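As a quick numerical illustration of these sequences (my addition, not part of the original answer; plain Python):

```python
def f(x):
    return x * x

# x_n = n + 1/n and y_n = n - 1/n get arbitrarily close, yet their images
# stay 4 apart, so no single delta can work for epsilon < 4 on all of R.
for n in (10, 100, 1000):
    x, y = n + 1 / n, n - 1 / n
    print(n, abs(x - y), abs(f(x) - f(y)))  # |x-y| -> 0 while |f(x)-f(y)| = 4
```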
TITLE: Min-max linear algebra problem QUESTION [0 upvotes]: Let $A$ be a $3\times 3$ Hermitian matrix with eigenvalues $\lambda_1\leq\lambda_2\leq\lambda_3$. Let $B$ be the $2\times 2$ matrix which is the upper left-hand corner of $A$. Let $\mu_1\leq\mu_2$ be the eigenvalues of $B$. Prove that $\lambda_1\leq\mu_1\leq\lambda_2\leq\mu_2\leq\lambda_3$. I know $B$ is Hermitian since $A$ is, and I'm almost positive you want to use the min-max theorem for this, but I'm having difficulty making progress. Does anyone have any suggestions how you might show at least the first inequality $\lambda_1\leq\mu_1$? REPLY [1 votes]: Sure, here's a demonstration of that first inequality. We have $$ \lambda_1 = \min \{x^*Ax : x = (x_1,x_2,x_3)^T \in \Bbb C^3, \|x\| = 1\}\\ \leq \min\left\{ x^*Ax : x = (x_1,x_2,x_3)^T \in \Bbb C^3, \|x\| = 1, x_3 = 0\right\}\\ = \min\left\{ \pmatrix{\bar x_1 & \bar x_2 & 0}A\pmatrix{x_1\\x_2\\0} : x \in \Bbb C^3,|x_1|^2 + |x_2|^2 = 1 \right\}\\ = \min\left\{ x^*Bx : x \in \Bbb C^2,\|x\| = 1 \right\} = \mu_1 $$ The last eigenvalue has the same trick but with a max. For the middle eigenvalue, we have to do this with an actual min-max, as in the theorem.
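As a numerical sanity check of the full interlacing chain (my addition, using NumPy; `numpy.linalg.eigvalsh` returns the eigenvalues of a Hermitian matrix in ascending order):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 3x3 Hermitian matrix A and its upper-left 2x2 corner B.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2
B = A[:2, :2]

lam = np.linalg.eigvalsh(A)   # lambda_1 <= lambda_2 <= lambda_3
mu = np.linalg.eigvalsh(B)    # mu_1 <= mu_2

# Cauchy interlacing: lambda_1 <= mu_1 <= lambda_2 <= mu_2 <= lambda_3
print(bool(lam[0] <= mu[0] <= lam[1] <= mu[1] <= lam[2]))  # True
```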
TITLE: Why are the interarrival times independent and exponentially distributed? - The "independent increments" QUESTION [0 upvotes]: Definitions: A Poisson process is a counting process that satisfies $N(t)\sim\mathcal{P}oi(\lambda t)$ and has the independent and stationary increment properties. The independent increment property is that the numbers of arrivals of a phenomenon in disjoint intervals (i.e. increments) are independent. That is, if $(t_1, t_2] $ and $ (t_3, t_4 ] $ are disjoint, then $N(t_2) - N(t_1) $ and $N(t_4) - N(t_3)$ are independent. I've seen this reasoning many, many times: (Source: An Introduction to Probability Models by Sheldon Ross) Question: I don't understand why the highlighted condition disappears because of "independent increments." The condition is written as the first arrival time. How can you write this as an equivalent event with increment(s) disjoint from $(s, s+t]$? I tried $\{ T_1 = s \} = \{ N(s) = 1 \cap N(t) < 1 \quad \forall t \in (0,s)\} $, but I can't see how I can fit this into the definition of independent increments. If someone can formally show this, I would really appreciate it. REPLY [1 votes]: The relevant event is the conjunction $$\{T_2{>}t~,T_1{=}s\} = \{N(s{+}t){-}N(s){=}0,\forall \tau~(\tau{<}s\leftrightarrow N(\tau){=}0)\}$$ Now, while the LHS is not, the RHS is an intersection of independent events. Since the count of arrivals in $(s..s{+}t]$ is independent of the count of arrivals in $(0..s]$, the count of arrivals in $(s..s{+}t]$ is independent of the first arrival occurring exactly at $s$ (a point within $(0..s]$).
$$\begin{align}\mathsf P(T_2>t\mid T_1=s)&=\mathsf P(N(s{+}t){-}N(s){=}0\mid T_1=s)\\[1ex]&=\mathsf P(N(s{+}t){-}N(s){=}0)\end{align}$$ Alternatively, if we use the notation $N_\mathcal I$ to represent the count of arrivals within the interval $\mathcal I$, then $$\{T_2{>}t~,T_1{=}s\} = {\{N_{(s..s+t]}{=}0, N_{\{s\}}=1, N_{(0..s)}=0\}}$$ So $$\begin{align}\mathsf P(T_2{>}t\mid T_1{=}s)&=\mathsf P(N_{(s..s+t]}{=}0 \mid N_{\{s\}}=1, N_{(0..s)}=0)\\[1ex]&=\mathsf P(N_{(s..s+t]}{=}0)\end{align}$$
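The conclusion, that conditioning on $T_1=s$ does not change the distribution of the waiting time for the next arrival, is exactly the memoryless property of the exponential distribution. A quick simulation of that property (my addition, standard library only; the parameter values are arbitrary):

```python
import math
import random

random.seed(0)
lam = 1.5
samples = [random.expovariate(lam) for _ in range(200_000)]

s, t = 0.4, 0.7
# Memorylessness: P(X > s + t | X > s) = P(X > t) = exp(-lam * t).
beyond_s = [x for x in samples if x > s]
conditional = sum(x > s + t for x in beyond_s) / len(beyond_s)
print(conditional, math.exp(-lam * t))  # both approximately 0.35
```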
TITLE: Fresnel equations & emission QUESTION [0 upvotes]: As part of a project I had to derive the absorption of a thin-film stack. It was a semiconductor between two oxides on a metallic back reflector. I used the Fresnel equations and optimized the absorption by adjusting the thicknesses. After fabricating the device, it looked quite black, which was exactly the goal. Therefore my calculations seem to have been successful and to match the problem. I am now putting everything in my thesis, but there is one open question for me. As a body that absorbs also emits, I would expect the semiconductor material and back reflector to emit as well (as they have complex refractive indices). My question is how the Fresnel equations could possibly already account for this (I don't think that they do). Emission, in my opinion, can occur at any depth of the material, so if I were to consider it, I would get all kinds of different phase shifts, suggesting a very messy calculation. Nevertheless it seems that emission is only a minor problem, because otherwise my structure would be far from being a good absorber, thus far from being black. Any ideas why that is the case? Thanks for any help! REPLY [0 votes]: The Fresnel equations describe the response to an external field. Your structure will also emit black body radiation, depending on temperature, and its intensity is modified by the emissivity. For example, at a frequency for which a material is transparent the emissivity is zero and it will not emit any black body radiation.
TITLE: Prove determinant of a matrix with trigonometry functions. QUESTION [0 upvotes]: I am a student. I am 15. Actually, I am a new learner of matrices & determinants. I have to prove an equation. I tried my best to solve this, but I failed. The equation is: $$ \begin{vmatrix} 1 & \cos2\alpha & \sin\alpha \\ 1 & \cos2\beta& \sin\beta \\ 1 & \cos2\gamma & \sin\gamma \\ \end{vmatrix} = 2(\sin\alpha - \sin\beta)(\sin\beta - \sin\gamma)(\sin\gamma - \sin\alpha) $$ Thanks- REPLY [2 votes]: First idea: Make $0$'s in the first column. For example: $Row_2-Row_1$ and $Row_3-Row_1$ $$\text{Your det}=\begin{vmatrix} 1&\cos2\alpha&\sin\alpha\\ 0&\cos2\beta-\cos2\alpha&\sin\beta-\sin\alpha\\ 0&\cos2\gamma-\cos2\alpha&\sin\gamma-\sin\alpha\end{vmatrix} = \begin{vmatrix} \cos2\beta-\cos2\alpha&\sin\beta-\sin\alpha\\\cos2\gamma-\cos2\alpha&\sin\gamma-\sin\alpha\end{vmatrix}$$ Now, as @Ethan said in the comments, try to use the double angle formula to continue. REPLY [2 votes]: By the rule of Sarrus we get $$\cos(2\beta)\sin(\gamma)+\cos(2\alpha)\sin(\beta)+\cos(2\gamma)\sin(\alpha)-\cos(2\alpha)\sin(\gamma)-\cos(2\gamma)\sin(\beta)-\cos(2\beta)\sin(\alpha)$$
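Before attempting the proof, it can be reassuring to test the identity numerically at random angles (my addition, not part of either answer; plain Python):

```python
import math
import random

random.seed(1)

def det3(m):
    # Direct 3x3 determinant expansion (rule of Sarrus).
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1] - m[0][2]*m[1][1]*m[2][0]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

errs = []
for _ in range(5):
    a, b, g = (random.uniform(0, 2 * math.pi) for _ in range(3))
    lhs = det3([[1, math.cos(2 * x), math.sin(x)] for x in (a, b, g)])
    rhs = 2 * ((math.sin(a) - math.sin(b)) * (math.sin(b) - math.sin(g))
               * (math.sin(g) - math.sin(a)))
    errs.append(abs(lhs - rhs))

print(max(errs) < 1e-12)  # True: both sides agree at random angles
```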
\section{Introduction} Much of the recent empirical success of machine learning has been enabled by distributed computation. In order to contend with the size and scale of modern data and models, many production-scale machine learning solutions employ distributed training methods. Ideally, distributed implementations of learning algorithms lead to speedup gains that scale linearly with the number of compute nodes. Unfortunately, in practice these gains fall short of what is theoretically possible even with a small number of compute nodes. Several studies, starting with \cite{dean2012large} and extending to more recent ones \cite{qi17paleo,grubic2018synchronous}, have consistently reported a tremendous gap between ideal and realizable speedup gains. \begin{figure}[t] \centering \includegraphics[width=0.45\linewidth]{figs/straggler.pdf} \vspace{-0.3cm} \caption{Probability of job completion time per worker. Setup: 148 worker nodes on AWS EC2 \texttt{t2.small} instances, running distributed SGD with batch size 1024 on CIFAR-10, and a simple variation of the cuda-convnet model. Observe that 100\% completion time corresponds to $8\times$ the median.} \label{fig:RunningExample} \vspace{-0.5cm} \end{figure} There are many causes of this phenomenon. A notable and well-studied one is the presence of communication bottlenecks~\cite{dean2012large, seide20141, strom2015scalable, qi17paleo,alistarh2017qsgd,grubic2018synchronous,wang2018atomo}, while another is the presence of {\it straggler} nodes. These are compute nodes whose runtime is significantly slower than that of the average node in the system. This {\it straggler} effect can be observed in many real-world distributed machine learning applications; Figure \ref{fig:RunningExample} illustrates one running example. In this work, we focus on the latter. Stragglers become a bottleneck when running {\it synchronous} distributed algorithms that require explicit synchronization between tasks.
This is the case for commonly used synchronous optimization methods, including mini-batch stochastic gradient descent (SGD) and gradient descent (GD), where the overall runtime is determined by the slowest response time of the distributed tasks. Straggler mitigation for general workloads has received significant attention from the systems community, with techniques ranging from job replication approaches to predictive job allocation, straggler detection, and straggler dropping mechanisms \cite{ananthanarayanan2013effective, zaharia2008improving, chen2016revisiting}. In the context of model training, one could use asynchronous techniques such as \textsc{Hogwild!} \cite{recht2011hogwild}, but these may suffer from reproducibility issues due to system randomness, making them potentially less desirable in some production environments \cite{chen2016revisiting}. Recently, coding theory has provided a popular tool set for mitigating the effects of stragglers. Codes have recently been used in the context of machine learning, and applied to problems such as data shuffling \cite{li2015coded,lee2016speeding}, distributed matrix-vector and matrix-matrix multiplication \cite{lee2016speeding,dutta2016short}, as well as distributed training \cite{li2018near, yu2017polynomial, karakus2017encoded, fahim2017optimal, park2018hierarchical, chen2018draco, yu2018straggler}. In the context of distributed, gradient-based algorithms, Tandon et al. \cite{tandon2017gradient} introduced {\it gradient coding} as a means to mitigate straggler delays. They show that with $n$ compute nodes, one can assign $c$ gradients to each compute node such that the sum of the $n$ gradients can be recovered from any $n-c+1$ compute nodes. Unfortunately, for $c$ small this may require waiting for almost all of the compute nodes to finish, while for larger $c$ the compute time per worker may outweigh any benefits of straggler mitigation.
A reasonable alternative to these exact gradient codes is {\it approximate} gradient codes, where we only recover an approximate sum of gradients. This seems to be a sensible compromise, as first order methods are known to be robust to small amounts of noise \cite{mania2017perturbed}. Approximate gradient codes (AGCs) were first analyzed in \cite{raviv2017gradient} and \cite{charles2017approximate}. While these studies showed that AGCs can provide significantly higher delay tolerance when recovering the sum of gradients within a small error, neither work provided a rigorous performance analysis or a practical implementation of AGCs on any training algorithm. \subsection{Our Contributions} In this work, we present \erasegd{}, an end-to-end distributed training method that implements the approximate gradient codes studied in \cite{charles2017approximate}. The theoretical novelty of this work is that we rigorously analyze the convergence and delay properties of GD in the presence of stragglers when using approximate gradient codes. The practical novelty is that \erasegd{} allows for ``erasures'' of up to a constant fraction of all compute nodes, while introducing small noise in the recovered gradient, which leads to a significantly reduced end-to-end run time compared to ``vanilla'' and gradient coded GD. \paragraph{Theoretical results.} We analyze the performance of \erasegd{} by analyzing the convergence rate of distributed gradient descent when using AGCs to compute the first-order updates. We focus on functions satisfying the Polyak-\L{}ojasiewicz (PL) condition, which generalizes the notion of strong convexity. While full-batch gradient descent achieves linear convergence rates on such functions, it is well-known that SGD only achieves a convergence rate of $O(1/T)$. Despite the fact that \erasegd{} uses a stochastic gradient type of update, we show that it achieves a linear convergence rate up to a relatively small noise floor.
Specifically, suppose $f$ is $\mu$-PL, $\beta$-smooth and we use \erasegd{} to minimize $f$, where we initialize at $x_0$ and at each iteration update $x_t$ using the output of \erasegd{}. If we assign each compute node $c$ gradients and wait for a $\delta$ fraction of the nodes to finish, then we have the following convergence rate: $$\Delta_T \leq \left(1-\frac{(1-e^{-c\delta})\mu}{\beta}\right)^T\Delta_0 + \tilde{O}\left(\frac{e^{-c\delta}}{n}\right)$$ where $\Delta_T = \EE[f(x_T)-f^*]$, $f^* = \inf_{x}f(x)$, and $\tilde{O}$ hides problem dependent variables related to $\mu$ and $\beta$. This result is stated formally in Theorem \ref{thm:conv_rate_2}. This result does not capture the end-to-end runtime, as it does not model stragglers or the fact that for larger $c$, each worker requires more compute time. In order to analyze the total runtime of \erasegd, we use a probabilistic model for stragglers proposed in \cite{lee2016speeding}. We show that if we apply distributed GD, gradient coded GD, and \erasegd{} on $n$ compute nodes, they have total runtimes that are approximately $O(T\cdot \log(n)/n)$, $ O(T\cdot\log(n/c)/n)$, and $\tilde{O}(T/n)$, where $T$ is the total iterations to reach to an accuracy $\Delta_T = \tilde{O}(e^{-c}/n)$. Thus, \erasegd{} can lead to almost a $\log(n)$ speedup over vanilla and gradient coded GD. This is made formal in Theorem \ref{thm:tot_time}. \paragraph{Experimental results.} Finally, we provide an extensive empirical analysis of \erasegd. We compare this to exact gradient coded gradient descent and uncoded gradient descent. Our results generally show that approximate gradient codes lead to up to $6\times$ and $3\times$ faster distributed training over vanilla and coded GD respectively. Moreover, we see these speedups consistently across multiple datasets and classification tasks. Our implementation is publicly available for reproducibility \footnote{https://github.com/hwang595/ErasureHead}. 
\subsection{Related Work} Many different works over the past few years have employed coding-theoretic ideas to improve the performance of various distributed algorithms. Prior work proposes different methods for reducing the effect of stragglers, including replicating jobs across nodes \cite{shah2016redundant} and dropping straggler nodes \cite{ananthanarayanan2013effective}. More recently, coding theory has become popular for mitigating the straggler effect in distributed machine learning. In particular, \cite{lee2016speeding} proposed the use of erasure codes for speeding up the computation of linear functions in distributed learning systems. Since then, many other works have analyzed the use of coding theory for distributed tasks with linear structure \cite{fahim2017optimal, park2018hierarchical, leecoded, dutta2016short, yu2017polynomial}. \cite{tandon2017gradient} proposed the use of coding theory for nonlinear machine learning tasks. They analyzed so-called gradient codes for distributed training, and showed that their gradient codes achieve the optimal trade-off between computation load and straggler tolerance. Other gradient codes that achieve this optimal trade-off have since been proposed \cite{raviv2017gradient,halbawi2018improving}. \cite{li2018near} used the Batched Coupon Collection problem to improve distributed gradient descent under a probabilistic straggler model. More sophisticated gradient codes that focus on the trade-off between computation and communication were introduced and analyzed in \cite{ye2018communication}. The aforementioned gradient codes generally focus on recovering the exact gradient in each iteration of gradient descent. However, \cite{raviv2017gradient, charles2017approximate, karakus2017encoded, charles2018gradient} utilize the fact that for distributed learning problems, approximations to the gradient may be acceptable.
While \cite{karakus2017encoded} focuses on loss related to least-squares problems, \cite{raviv2017gradient, charles2017approximate, charles2018gradient} construct and analyze gradient codes that can be used in non-linear settings. \cite{raviv2017gradient} uses expander graphs to construct approximate gradient codes with small error in the worst-case straggler setting, while \cite{charles2017approximate} focuses on the setting where the stragglers are chosen randomly. While the aforementioned works show that these codes are effective in terms of their $\ell_2$ error, they lack careful convergence rate analyses and an extensive experimental evaluation.
TITLE: Are my calculations correct? QUESTION [0 upvotes]: Solve the following system using Gaussian elimination: $ x + 4y + z = 0\\ 4x + 13y + 7z = 0\\ 7x + 22y + 13z = 1$ This is what I have done. Augmented matrix: $$\left[\begin{array}{ccc|c} 1 & 4 & 1 & 0 \\ 4 & 13 & 7 & 0 \\ 7 & 22 & 13 & 1 \end{array}\right]$$ $R_2 - 4R_1 \to R_2$ and $R_3 - 7R_1 \to R_3$: $$\left[\begin{array}{ccc|c} 1 & 4 & 1 & 0 \\ 0 & -3 & 3 & 0 \\ 0 & -6 & 6 & 1 \end{array}\right]$$ $-\frac{1}{3}R_2 \to R_2$: $$\left[\begin{array}{ccc|c} 1 & 4 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -6 & 6 & 1 \end{array}\right]$$ $R_3 + 6R_2 \to R_3$: $$\left[\begin{array}{ccc|c} 1 & 4 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]$$ By what I understand, the last row means $0x + 0y + 0z = 1$. This is impossible, so the system is inconsistent. Have I made a mistake somewhere with my calculations, or am I misunderstanding something? Or is the system really inconsistent, and does that mean that it has no valid solution? REPLY [0 votes]: Yes, the system is indeed inconsistent. The rank of the original (non-augmented) matrix is $2$, which means that the system either has infinitely many solutions or it has zero solutions. Your manipulation shows that it has zero solutions.
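As a cross-check (my addition, not part of the original answer), the rank criterion can be tested directly with NumPy: the system is consistent if and only if the coefficient matrix and the augmented matrix have the same rank.

```python
import numpy as np

A = np.array([[1, 4, 1],
              [4, 13, 7],
              [7, 22, 13]], dtype=float)
b = np.array([0.0, 0.0, 1.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
# Consistent iff rank(A) == rank([A|b]); here the ranks differ,
# confirming the elimination above.
print(rank_A, rank_Ab)  # 2 3
```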
TITLE: If $A$ is idempotent then $(I-A)$ is also idempotent QUESTION [1 upvotes]: I've been under the impression that matrices cannot be treated like normal algebra. This is a fundamental concept, of course, and I've known it for some time. Generally such a notion would imply $(A-B)^2 \neq A^2 - 2AB + B^2$, right? However a question in a university course has caused me some confusion: Prove that if $A$ is idempotent, then so is $(I-A)$. The worked solution is as follows: "$A$ is idempotent if $A^2 = A$. Consider $(I-A)^2$. We have $$(I-A)^2 = (I-A)(I-A) = I^2 - 2AI + A^2 = I - 2A + A = I - A$$ Therefore $(I-A)^2 = (I-A)$, thus $(I-A)$ is idempotent if $A$ is idempotent." Perhaps there is some 'loophole', if you will, when it comes to the identity matrix that allows us to perform these distributions in certain matrix situations. If so, would someone please be able to provide a simple proof - just something to give me some understanding behind the origins of the concept? Thank you. REPLY [0 votes]: More generally, since matrix multiplication is distributive, you have for any matrices of the same dimensions $$(A-B)^2=(A-B)(A-B) = A(A-B)-B(A-B)=A^2-AB-BA+B^2.\tag{1}$$ If the matrices $A$ and $B$ commute, i.e., if they are such matrices that $AB=BA$, then you have $$(A-B)^2=A^2-2AB+B^2.\tag{2}$$ One example is if one of the two matrices is the identity matrix $I$. It would also work, for example, if $B=A^2$. There are many other examples. But, as you correctly said, you have to keep in mind that $(2)$ is not true for arbitrary matrices, only for some of them. (As opposed to $(1)$, which holds for any matrices.)
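A quick numerical illustration (my addition, not part of the original answer): orthogonal projection matrices are a standard family of idempotents, so they make a convenient test case.

```python
import numpy as np

# Projection onto the line spanned by v: A = v v^T / (v^T v) satisfies A @ A = A.
v = np.array([[1.0], [2.0], [2.0]])
A = (v @ v.T) / (v.T @ v)
I = np.eye(3)

print(np.allclose(A @ A, A))                  # True: A is idempotent
print(np.allclose((I - A) @ (I - A), I - A))  # True: so is I - A
```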
\begin{document} \title{ Fundamental Limits of Caching: Improved Bounds For Small Buffer Users} \author{Zhi~Chen \IEEEmembership{Member,~IEEE} Pingyi~Fan~\IEEEmembership{Senior Member,~IEEE} and~Khaled~Ben~Letaief~\IEEEmembership{Fellow,~IEEE} \thanks{Z. Chen is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada, N2L3G1, (email: z335chen@uwaterloo.ca). P. Fan are with the Department of Electrical Engineering, Tsinghua University, Beijing, China, 100084 (email: fpy@tsinghua.edu.cn). K. B. Letaief is with Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong (e-mail: eekhaled@ece.ust.hk). }} \maketitle \baselineskip 24pt \begin{abstract}\\ \baselineskip=18pt In this work, the peak rate of the caching problem is investigated, under the scenario that the users have small buffer sizes and the number of users is no less than the number of files in the server. A novel coded caching strategy is proposed for such a scenario, leading to a lower peak rate compared to recent results in the literature. Furthermore, it is verified that our peak rate coincides with the cut-set bound analytically in an information-theoretic view. \end{abstract} \begin{keywords} Caching, coded caching, content distribution, network coding \end{keywords} \IEEEpeerreviewmaketitle \section{Introduction} Caching, a technique playing a crucial role in combating peak-hour network traffic congestion, has received increasing attention recently. A natural way to reduce peak-hour traffic is to duplicate some contents at the end users. In the literature, several works focus on how to duplicate fractions of files at end users so that the peak rate is minimized and network congestion is reduced. Usually, caching works in two phases. One is the placement phase, which is performed during off-peak times.
The other is the delivery phase, performed during rush hours when network resources are scarce. The general caching model was discussed in \cite{dowdy1982file}-\cite{kol2011demand}, where no coding strategy was applied and the gain comes only from local duplication. However, if each user is equipped with a cache of small size compared with the amount of content in the server, this gain is readily observed to be negligible. In \cite{kol2011index}, the index coding strategy was discussed. In \cite{niesen2014fundamental}, a new coded caching strategy from an information-theoretic perspective was proposed to achieve a new achievable rate region for general scenarios, where some finite rate-cache pairs were first derived and then the lower convex envelope of these points was shown to be achievable by memory sharing. This strategy was shown to enjoy both the local gain from duplication and the global gain from coding. This fundamental idea was then extended in \cite{niesen2014decentralized}, where a decentralized coded caching algorithm was presented, and in \cite{niesen2014nonuniform}, where the non-uniform demand scenario was investigated. In \cite{clancy2014secure}, the security of coded caching was investigated. In this work, however, we investigate the fundamental achievable rate for the special case where all users are equipped with a cache of small size. In this case, appropriate coded duplication of contents is essential to reduce the delivery rates. To this end, we introduce a new coded caching strategy and show that its rate coincides with the lower cut-set bound when the cache size is rather small. With memory sharing, it is shown that our strategy outperforms the strategy proposed in \cite{niesen2014fundamental} in terms of achievable delivery rates when the cache size is relatively small. \section{Problem Setting} A system consisting of one server and $K$ users is considered.
An error-free link is assumed to be shared by all users connecting to the server, where $N$ files are stored for fetching. We also assume that each user is equipped with a cache of size $Z_k$ ($k=1, \ldots, K $) and requests only one full file. The aim is to design a novel coded strategy that achieves a lower peak rate while guaranteeing that each user obtains the requested file, compared with the recent results on caching problems in \cite{niesen2014fundamental}. In this work, we turn our interest to the special case in which all users have small buffer sizes ($Z_k \le 1/K $) and $K \ge N$, i.e., the number of users is no smaller than the number of files in the server. For clarity, we denote the smallest peak rate achieved by our strategy by $R(M)$, i.e., the cache-rate pair ($M$,$R(M)$) is on the boundary of the achievable region, where $M$ denotes the cache size of all users. For comparison, we denote the minimum peak rate achieved in \cite{niesen2014fundamental} by $R_c(M)$ and the lower cut-set bound by $R^*(M)$. \section{Main Results} \begin{thm} \label{thm:1} For $N \in \mathbb{N}$ files and $K$ ($K = N$) users each with cache of size $M=1/N$, the cache-rate pair ($1/N,N-1$) is achievable. Furthermore, if $M \in [0,1/N]$, \begin{align} R(M) \le N(1-M) \end{align} is achievable. \end{thm} \begin{thm} \label{thm:2} For $N \in \mathbb{N}$ files and $K$ ($K \in \mathbb{N}$ and $K>N$) users each with cache of size $M=1/K$, the cache-rate pair ($1/K,N-N/K$) is achievable. Furthermore, if $M \in [0,1/K]$, \begin{align} R(M) \le N(1-M) \end{align} is achievable. \end{thm} \begin{thm} \label{thm:3} For $N \in \mathbb{N}$ files and $K$ ($K=N$) users each with cache of size $M \le 1/N$, the achievable rate coincides with the associated cut-set bound.
\end{thm} \begin{thm} \label{thm:4} For $N \in \mathbb{N}$ files and $K$ ($K \in \mathbb{N}$ and $K>N$) users each with cache of size $M \le 1/K$, the achievable rate coincides with the associated cut-set bound. \end{thm} Note that in \cite{niesen2014fundamental}, the achievable rate with $M=1/K$ is on the line connecting the two cache-rate pair points ($0,N$) and the first non-trivial point ($N/K, \min\left((K-1)/2,N(K-1)/K \right)$)\footnote{Note that in \cite{niesen2014fundamental} only rates of a number of points with cache size of $tN/K$ ($t=0,\ldots,N$) are directly derived and then the achievable cache-rate region is determined by the lower convex envelope of these points. It is readily observed that the non-trivial directly derived achievable point with the smallest cache size is hence the point with cache size $N/K$.} and is hence given by \begin{align} R_c(\frac{1}{K})= &\frac{\min \left(\frac{K(K-1)}{2},N(K-1) \right)-KN}{N} \cdot \frac{1}{K}+N \label{eq:proof:compare_0} \\ &=N-1 + \min(\frac{K-1}{2N},1-\frac{1}{K}) \label{eq:proof:compare_1}\\ &\geq N-1 + \min(\frac{N-1}{2N},1-\frac{1}{N}) \label{eq:proof:compare_2}\\ &= N - \max \left( \frac{N+1}{2N}, 1/N \right) \\ &\ge N-\frac{N}{K} =R(\frac{1}{K}). \label{eq:proof:compare_3} \end{align} where the inequalities in (\ref{eq:proof:compare_2}) and (\ref{eq:proof:compare_3}) follow from the setting that $K \ge N \ge 1$. Note also that the inequality in (\ref{eq:proof:compare_3}) strictly holds as long as $N>1$, which demonstrates the gain achieved by our coding strategy over the strategy designed in \cite{niesen2014fundamental} for the small cache size scenario. Furthermore, with our coding strategy, we have \begin{align} R(1/K)&=N(1-1/K) \\ &= \min\left(\frac{K-1}{2},\frac{N(K-1)}{K} \right), \quad \mbox{if $K \ge 2N$}\nonumber\\ &=R_c(N/K), \quad \mbox{if $K \ge 2N$.} \nonumber \end{align} This is an encouraging result.
In other words, with a smaller cache size $M=1/K$, the designed coding strategy achieves a rate no larger than the one achieved in \cite{niesen2014fundamental} with the cache size $M=N/K$, provided $K \ge 2N$. Therefore, compared with \cite{niesen2014fundamental}, the rate for cache sizes $M<N/K$ is improved by our results through memory sharing, where the exact expression of the achievable rate for $0 \leq M \leq N/K$ is given at the top of the next page. \section{Examples} {\bf Example 1.} In this example, we set $N=K=3$, i.e., the system consists of three files in the server and three intended users. Let $W_1=A$, $W_2=B$ and $W_3=C$. We would like to show that the $(M,R)$ pair ($1/3, 2$) is achievable. With cache size $M=1/3$, we split each file into three subfiles of equal size, i.e., $A=(A_1,A_2,A_3)$, $B=(B_1,B_2,B_3)$ and $C=(C_1,C_2,C_3)$. In the placement phase, the cache content of user $k$ is designed to be $Z_k=(A_k \oplus B_k \oplus C_k)$, an XOR of three subfiles from different files in the server. In the delivery phase, consider the case in which user 1 requests $A$, user $2$ requests $B$ and user $3$ requests $C$. To serve user 1, we transmit $B_1$ and $C_1$, so that $A_1$ can be recovered from the XORed subfile $Z_1$, as well as $A_2$ and $A_3$ for the missing parts of $A$. In a similar manner, for user 2 requesting file $B$, the server needs to transmit $B_3$ for the missing part of $B$ ($B_1$ has already been obtained from the shared link while satisfying user 1). In addition, the server transmits $C_2$ so that user 2 can recover $B_2$ (as $A_2$ has already been transmitted and received by user 2). Note that the server has then also satisfied user 3, since the missing subfiles $C_1$ and $C_2$ have already been received by it. Moreover, with the received $A_3$ and $B_3$ from the shared link, user $3$ can obtain $C_3$ from the cached $A_3 \oplus B_3 \oplus C_3$.
Therefore, the server has to transmit ($B_1$, $C_1$, $A_2$, $A_3$, $B_3$, $C_2$) to satisfy the requests of all users in this example. In a similar manner, all other requests can be satisfied. Since each subfile has rate $1/3$, the total rate $2$ is achievable. On the other hand, the cut-set bound derived in \cite{niesen2014fundamental} indicates that the minimum rate is $R^*(1/3)=3-3/3=2$, which is identical to the achievable rate. By memory sharing, we conclude that the achievable rate coincides with the cut-set bound if $0 \leq M \leq 1/N$. {\bf Example 2.} In this example, we consider a system with a server of $4$ files and $4$ users, i.e., $N=K=4$. The four files are termed $W_1=A$, $W_2=B$, $W_3=C$ and $W_4=D$. Consider the case with the cache size $M=1/4$. In this example, we split each file into four parts of equal size, i.e., $A=(A_1,A_2,A_3,A_4)$, $B=(B_1,B_2,B_3,B_4)$, $C=(C_1,C_2,C_3,C_4)$ and $D=(D_1,D_2,D_3,D_4)$. In the placement phase, we let user $k$ cache the XORed subfile $Z_k=(A_k \oplus B_k \oplus C_k \oplus D_k)$. In the delivery phase, for instance, consider that user $i$ requires $W_i$, i.e., user 1 requests $A$, user 2 requests $B$, user 3 requests $C$ and user 4 requests $D$. We can satisfy the requests of all users by sending ($A_2$, $A_3$, $A_4$, $B_1$, $B_3$, $B_4$, $C_1$, $C_2$, $C_4$, $D_1$, $D_2$, $D_3$). It is observed that with this transmitted subfile list, all missing subfiles can be received by the intended users. In addition, it is readily verified that the subfile XORed into the cache of each user can be recovered by XORing the cache content with the three received subfiles of the other files. For example, user 1 receives $B_1$, $C_1$ and $D_1$, hence $A_1$ is fetched by $(A_1 \oplus B_1 \oplus C_1 \oplus D_1) \oplus B_1 \oplus C_1 \oplus D_1$. In a similar manner, user 2, user 3 and user 4 can also obtain $B_2$, $C_3$ and $D_4$, respectively.
Therefore, by sending these subfiles, all user requests are satisfied with rate $3$, as the rate of each subfile is $1/4$. Similarly, we can serve any possible requests with rate $3$ with the cache size $M=1/4$. Hence, the cache-rate pair ($1/4$, $3$) is achievable and can be verified to coincide with the cut-set bound, which is $R^*(1/4)=4-4 \cdot 1/4=3$. Therefore the cut-set bound is achievable if $0\leq M \leq 1/4$. Akin to \textbf{Examples 1 and 2}, the cache-rate pair ($1/N$,$N-1$) is achievable for an arbitrary number of files $N$ in the server with the same number of users as that of the files, i.e., $K=N$. The proof for this general case is left to the next section. {\bf Example 3.} Consider a system with $N=3$ files and $K=4$ users. We term each file as $W_1=A$, $W_2=B$ and $W_3=C$. Consider the case with cache size $M=1/4$. We split each file into 12 parts of equal size, i.e., $A=(A_1,\cdots,A_{12})$, $B=(B_1,\cdots,B_{12})$ and $C=(C_1,\cdots,C_{12})$. Each cache can therefore store three subfiles. In the placement phase, we let user $i$ cache the three XORed subfiles $$Z_i=(A_{3(i-1)+j} \oplus B_{3(i-1)+j} \oplus C_{3(i-1)+j}), \quad j=1,2,3.$$ Hence each user caches 9 exclusive subfiles in XORed form, and every subfile partitioned in the server can be found in the cache of one and only one user. In the transmission phase, let us assume that user $1$ needs $A$, user $2$ needs $B$, user $3$ needs $C$ and user $4$ needs $A$. To fully exploit the coded caching strategy, we then deliver the subfiles ($B_1$, $C_1$, $B_2$, $C_2$, $B_3$, $C_3$) for user 1 to obtain $A_1$, $A_2$ and $A_3$ via XORing. By delivering these subfiles, $B_1$, $B_2$ and $B_3$ are received by user $2$, and $C_1$, $C_2$ and $C_3$ are received by user $3$. Similarly, we deliver ($A_4$, $C_4$, $A_5$, $C_5$, $A_6$, $C_6$) for user 2 to obtain $B_4$, $B_5$ and $B_6$, and ($A_7$, $B_7$, $A_8$, $B_8$, $A_9$, $B_9$) for user 3 to obtain $C_7$, $C_8$ and $C_9$.
We further deliver ($B_{10}$, $C_{10}$, $B_{11}$, $C_{11}$, $B_{12}$, $C_{12}$) for user 4 to obtain $A_{10}$, $A_{11}$ and $A_{12}$. Hence, by delivering these $24$ subfiles, user $2$ receives the complete file $B$ and user $3$ receives the entire file $C$. However, user 1 still lacks the subfiles ($A_{10}$, $A_{11}$, $A_{12}$) and user 4 is in need of the subfiles ($A_1$, $A_2$, $A_3$). To exploit the side information at the caches, we hence deliver ($A_1 \oplus A_{10}$, $A_2 \oplus A_{11}$ and $A_3 \oplus A_{12}$). By doing so, we can fulfil the requests of all users with the delivery of 27 subfiles, i.e., the rate $R(1/4)=27/12=9/4$ is achievable for this case. Similarly, it can be readily shown that this rate is achievable for any other possible requests. It is worth pointing out that the cut-set bound at the point $M=1/4$ is $R^*(1/4)=3-3/4=9/4$, identical to the achievable rate $R(1/4)$. Thanks to memory sharing, the cut-set bound is therefore achievable in the interval $M \in [0,1/4]$ in this example. {\bf Example 4.} Consider the case of a server with $3$ files and 5 users. We term each file as $W_1=A$, $W_2=B$ and $W_3=C$. Consider the case with cache size $M=1/5$. We split each file into $3 \times 5=15$ parts of equal size, i.e., $A=(A_1,\cdots,A_{15})$, $B=(B_1,\cdots,B_{15})$ and $C=(C_1,\cdots,C_{15})$, and each cache can store three subfiles. In the placement phase, we let user $i$ cache the three XORed subfiles $$Z_i=(A_{3(i-1)+j} \oplus B_{3(i-1)+j} \oplus C_{3(i-1)+j}), \quad j=1,2,3.$$ Each user then stores $9$ exclusive subfiles in XORed form and each subfile can be found in the cache of one and only one user. In the transmission phase, let us assume that user $1$ needs $A$, user $2$ needs $B$, user $3$ needs $C$, user $4$ needs $A$ and user $5$ requests $B$.
Similar to {\bf Example 3}, we deliver the subfile list ($B_1$, $C_1$, $B_2$, $C_2$, $B_3$, $C_3$) for user 1 to obtain $A_1$, $A_2$ and $A_3$ via XORing. Therefore, $B_1$, $B_2$ and $B_3$ are received by users $2$ and $5$, while $C_1$, $C_2$ and $C_3$ are received by user 3. Similarly, we deliver ($A_4$, $C_4$, $A_5$, $C_5$, $A_6$, $C_6$) for user 2 to obtain $B_4$, $B_5$ and $B_6$; ($A_7$, $B_7$, $A_8$, $B_8$, $A_9$, $B_9$) for user 3 to obtain $C_7$, $C_8$ and $C_9$; ($B_{10}$, $C_{10}$, $B_{11}$, $C_{11}$, $B_{12}$, $C_{12}$) for user 4 to obtain $A_{10}$, $A_{11}$ and $A_{12}$; and ($A_{13}$, $C_{13}$, $A_{14}$, $C_{14}$, $A_{15}$, $C_{15}$) for user $5$ to obtain $B_{13}$, $B_{14}$ and $B_{15}$. Hence, by delivering these $30$ subfiles, user $3$ receives the entire file $C$. However, user $1$ still lacks the subfiles ($A_{10}$, $A_{11}$, $A_{12}$), user $2$ lacks ($B_{10}$, $B_{11}$, $B_{12}$), user $4$ lacks ($A_1$, $A_2$, $A_3$) and user $5$ lacks ($B_{4}$, $B_{5}$, $B_{6}$). To exploit the side information at the caches, we can deliver the XORed versions of these subfiles, i.e., ($A_1 \oplus A_{10}$, $A_2 \oplus A_{11}$, $A_3 \oplus A_{12}$, $B_{10} \oplus B_{4}$, $B_{11} \oplus B_{5}$ and $B_{12} \oplus B_{6}$). With this coded transmission, all intended users can completely obtain the subfiles requested. We therefore fulfil the requests of all users by delivering only 36 subfiles, i.e., the rate $R(1/5)=36/15=12/5$ is achievable for this case. In a similar manner, it can be readily shown that this rate is achievable for any possible requests. It is worth pointing out that the cut-set bound at the point $M=1/5$ is $R^*(1/5)=3-3/5=12/5$, which equals the achievable rate $R(1/5)$. By memory sharing, the cut-set bound is therefore achievable in the interval $M \in [0,1/5]$ in this example. \section{Proof of Theorems} We now present the achievable scheme for an arbitrary number of users with $K \ge N$.
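As a sanity check of the $N=K$ construction used in the proof of Theorem 1 below, the placement and delivery can be simulated directly. The script is our own sketch, not part of the paper; subfiles are random integers and $\oplus$ is bitwise XOR.

```python
import random

# Sketch (ours): simulate the N = K scheme of Theorem 1.  File i is split into
# N subfiles W[i][j]; user j caches Z_j = W[0][j] xor ... xor W[N-1][j]; for the
# demand "user i wants file i" the server broadcasts every subfile except the
# diagonal ones W[i][i], and user i recovers W[i][i] from its cache.
random.seed(1)
N = 5
W = [[random.randrange(256) for _ in range(N)] for _ in range(N)]

# Placement.
Z = [0] * N
for j in range(N):
    for i in range(N):
        Z[j] ^= W[i][j]

# Delivery: all off-diagonal subfiles, N(N-1) in total (rate N - 1).
sent = {(i, j): W[i][j] for i in range(N) for j in range(N) if i != j}
assert len(sent) == N * (N - 1)

# Decoding: user i has every W[i][j] (j != i) directly and computes
# W[i][i] = Z_i xor (xor of the received W[k][i], k != i).
for i in range(N):
    diag = Z[i]
    for k in range(N):
        if k != i:
            diag ^= sent[(k, i)]
    assert diag == W[i][i]
print("all", N, "users decode; rate =", len(sent) / N)
```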
We shall show that with a cache size of $M \leq 1/\max(N,K)$, the delivery rates presented in Theorems \ref{thm:1} and \ref{thm:2} are achievable and the cut-set bound is met. \subsection{Proof of Theorem 1} Here we prove Theorem 1 for the case with an equal number of files and users, i.e., $N=K$. We prove it in two steps. First, we verify that the point ($1/N$,$N-1$) is achievable by a constructed coded caching scheme. Second, we show that any point with $M<1/N$ can achieve a rate of $N-NM$ by memory sharing. Let us define the files as $W_i$ ($i=1,\ldots,N$) and split each file into $N$ subfiles, i.e., $W_i=(W_{i1}, \ldots, W_{iN})$. In the placement phase, the cache of user $j$ is designed to be $Z_j = W_{1j} \oplus \ldots \oplus W_{Nj}$, an XORed version of subfiles which contains one and only one subfile from each file. With this coded placement scheme, each user caches some exclusive part of all files. In the delivery phase, if the users request $L \leq N-1$ files, we can simply transmit these requested files and the delivery rate is $L$ files. We then move to the case where the users request $N$ files, i.e., each user requests a different file. Due to symmetry, we only need to study the case where user $i$ requests file $W_i$. The transmission algorithm is presented as follows. \begin{itemize} \item For the first file, we transmit the subfiles $W_{12}$, $\ldots$, $W_{1N}$. \item For the $i$th ($1<i<N$) file, we transmit the subfiles $W_{i,1}$, $\ldots$, $W_{i,i-1}$, $W_{i,i+1}$, $\ldots$, $W_{i,N}$. \item For the $N$th file, we transmit the subfiles $W_{N1}$, $\ldots$, $W_{N,N-1}$. \end{itemize} Since a fraction $(N-1)/N$ of each file is delivered, we deliver $N-1$ files in total. With this transmission, we argue that each user can obtain the file requested. For instance, the $i$th user requesting $W_i$ can obtain all subfiles except $W_{ii}$ from the delivery of $W_i$ directly.
In addition, user $i$ receives all subfiles $W_{ki}$ ($k \neq i$) from the files $W_k$. Hence it can obtain the subfile $W_{ii}$ from its cache by \begin{align} W_{ii}=&\,Z_i \oplus W_{1i} \oplus \ldots \oplus W_{i-1,i} \oplus W_{i+1,i} \oplus \ldots \oplus W_{Ni} \nonumber\\ =&\,(W_{1i} \oplus \ldots \oplus W_{Ni}) \oplus W_{1i} \oplus \ldots \oplus W_{i-1,i} \nonumber \\ &\oplus W_{i+1,i} \oplus \ldots \oplus W_{Ni}. \end{align} Therefore, user $i$ can obtain all subfiles of $W_i$ and construct the complete file $W_i$. In a similar manner, all users can obtain the complete files requested and the cache-rate pair $(1/N,N-1)$ is hence achievable for this special case. Moreover, due to symmetry, we can conclude that the cache-rate pair $(1/N,N-1)$ is achievable for all possible requests. On the other hand, with the two achievable points ($0,N$) and ($1/N$,$N-1$) taken into account, we can achieve a rate of $R(M)=N(1-M)$ for the cache size $0 \le M \leq 1/N $ by memory sharing. Theorem 1 is hence proved. \subsection{Proof of Theorem 2} Here we prove Theorem 2 for the case with $N<K$. The files are denoted by $W_i$ ($i=1,\ldots,N$) and we split each file into $NK$ subfiles, i.e., $W_i=(W_{i,1}, \ldots, W_{i,NK})$. In the placement phase, the cache of user $i$ is designed to store $N$ XORed versions of subfiles, namely $$Z_i=W_{1,N(i-1)+j} \oplus \cdots \oplus W_{N,N(i-1)+j}, \quad j=1, \ldots, N.$$ With this coded placement scheme, each user caches some exclusive part of all files and the union of the caches comprises all $N$ files in the server. In the delivery phase, if all users request $L$ ($L \leq N-1$) distinct files in total, we can simply transmit these requested files one by one; the total amount of files delivered is $L$ and the associated rate is less than $N-N/K$. We then move to the case where all $N$ files are requested. Suppose user $i$ requests the file $W_{d_i}$ and, correspondingly, the file $W_{i}$ is requested by $k_{i}$ users in total. By definition, we hence have $\sum_{i=1}^N k_{i}=K$.
The transmission procedure can be divided into two steps as follows. \begin{enumerate} \item In the first step, for the $i$th user requesting $W_{d_i}$, we transmit $W_{k,N(i-1)+j}$ ($k \neq d_i$ and $j=1,\ldots,N$), i.e., $(N-1)N$ subfiles in total are delivered, so that $W_{d_i, N(i-1)+j}$ ($j=1,\ldots,N$) can be obtained via a coded operation. \item In the second step, for the remaining subfiles requested by the users, we apply the following algorithm, first grouping the users requesting the same file and then applying a coding strategy to reduce transmissions. The details are presented as follows. \begin{enumerate} \item If $W_{d_i}$ ($i=1,\ldots, K$) is solely requested by the $i$th user, all subfiles of $W_{d_i}$ are completely received in Step 1). Hence the amount of remaining requests for $W_{d_i}$ is $0$. \item For any $W_{i}$ requested by $k_i$ users ($k_i>1$), where each associated user still requests the remaining $(k_i-1)N$ subfiles, we do the following. \begin{enumerate} \item Initialization: list the users requesting $W_i$ in ascending order with respect to their indices. For simplicity, their indices are correspondingly denoted by $K_l$ ($l=1, \ldots, k_i$). Observe that the exclusive subfiles obtained by user $K_l$ are $W_{i,N(K_l-1)+j}$ ($j=1,\ldots,N$) and they are requested by the other users in the same group. Set the initial value of the counter as $u=1$. \item If $u=1$, deliver the $N$ coded subfiles $W_{i,N(K_1-1)+j} \oplus W_{i,N(K_2-1)+j}$ ($j=1,\ldots,N$) and set $u \leftarrow u+1$. \item If $u=m$ ($m \le k_i-1$), deliver the $N$ coded subfiles $W_{i,N(K_m-1)+j} \oplus W_{i,N(K_{m+1}-1)+j}$ ($j=1,\ldots,N$) to all users requesting $W_i$, set $u \leftarrow u+1$ and go to Step iv). \item If $u \le k_i-1$ go to Step iii), otherwise terminate the delivery of subfiles of $W_i$. \end{enumerate} \end{enumerate} \end{enumerate} Note that in Step 2), a) follows from two facts. The first is that the $i$th user obtains $W_{d_i, N(i-1)+j}$ ($j=1,\ldots,N$) via coded delivery.
The second is that it directly receives $W_{d_i, N(k-1)+j}$ ($k \neq i$ and $j=1,\ldots,N$) in the first step, because these subfiles are delivered for the other users to XOR. Therefore, the $i$th user can reconstruct the full file $W_{d_i}$ directly after Step 1). Similarly, for the case where $W_i$ is requested by more than one user ($k_i>1$) in b) of Step 2), the fact that each user requesting $W_i$ still needs $(k_i-1)N$ subfiles also follows from two facts. The first is that it receives $N$ subfiles via coded delivery in Step 1). The second is that it directly receives $N(K-k_i)$ subfiles delivered for the users requesting other files in Step 1). Therefore, only $NK-N-N(K-k_i)=N(k_i-1)$ subfiles are still requested by each of the users requesting $W_i$. In the following, we shall show that the sub-algorithm in b) of Step 2) can help all users requesting $W_i$ receive all the remaining subfiles. Note that user $K_m$ requesting $W_i$ receives the subfile list ($W_{i,N(K_m-1)+j} \oplus W_{i,N(K_{m+1}-1)+j}$) ($m=1, \ldots, k_i-1$, $j=1,\ldots,N$). It can first obtain $W_{i,N(K_{m-1}-1)+j}$ and $W_{i,N(K_{m+1}-1)+j}$ ($j=1,\ldots,N$) from the $(m-1)$th and the $m$th deliveries of subfiles via XORing. It can then recursively obtain $W_{i,N(K_{m-k}-1)+j}$ ($k=2,\ldots,m-1$) and $W_{i,N(K_{m+k}-1)+j}$ ($k=2,\ldots,k_i-m$). Hence, user $K_m$ can obtain the complete file $W_i$. In a similar manner, we can verify that any other user in the same group requesting $W_i$ can receive the complete file $W_i$. As $W_i$ is an arbitrary file in the server, we conclude that all users can obtain the requested files by our algorithm, and in the following we shall derive the achievable rate for $M=1/K$ by applying the algorithm above. We first denote by $C_i$ the amount of subfiles delivered in Step i) and by $n_{k_i}$ the amount of XORed versions of subfiles delivered for $W_i$ in Step 2). In Step 1), it is observed that the total amount of subfiles delivered is given by \begin{align} C_1=(N-1)NK.
\end{align} As designed in Step 2) for file $W_i$, the total amount of the remaining transmissions is \begin{align} n_{k_i}=(k_i-1)N. \label{eq:step_2_k_i} \end{align} Therefore, the total amount of subfiles delivered in the second step is \begin{align} C_2=&\sum_{i=1}^N n_{k_i}= \sum_{i=1}^N (k_i-1)N \label{eq:step_2_all_k_i_1}\\ = & \sum_{i=1}^N k_iN - N^2 = (K-N)N. \label{eq:step_2_all_k_i_2} \end{align} The total amount of subfile deliveries in these two steps is given by \begin{align} C_1+C_2=(N-1)NK+(K-N)N=(K-1)N^2. \end{align} The associated delivery rate therefore is \begin{align} R(1/K)=(K-1)N^2/NK=N-N/K>N-1 \end{align} and we can claim that $(1/K,R(1/K))=(1/K,N(1-1/K))$ is an achievable cache-rate pair. In addition, together with the trivial cache-rate pair $(0,N)$, the rate pair $(M, N(1-M))$ is achievable for any $M \leq 1/K$ by memory sharing. Theorem 2 is hence proved. \subsection{Proof of Theorem 3 and Theorem 4} Here we show that the achievable rate given in Theorem \ref{thm:3} and Theorem \ref{thm:4} for the scenario with $N \leq K$ and $M \leq 1/K$ coincides with the cut-set lower bound. From \cite{niesen2014fundamental}, the cut-set lower bound is given by \begin{align} R^*(M) \ge \max_{s\in\{ 1, \ldots, \min(N,K) \}}(s-\frac{s}{ \lfloor N/s \rfloor}M). \end{align} Therefore, with $M \le 1/K$, we obtain \begin{align} R^*(M) &\ge \max(1-\frac{M}{N}, \ldots, N-NM), \quad 0\leq M \leq \frac{1}{K} \label{eq:proof:T4_1}\\ & \ge N(1-M) \label{eq:proof:T4_2}\\ & = R(M) \label{eq:proof:T4_3} \end{align} where (\ref{eq:proof:T4_1}) follows directly from the cut-set bound, (\ref{eq:proof:T4_2}) follows from the fact that $N-NM$ is one of the elements over which the maximum is taken, and (\ref{eq:proof:T4_3}) follows directly from Theorem \ref{thm:1} and Theorem \ref{thm:2}. From the above derivation, it is hence concluded that for the scenario $N \le K$ and $M \le 1/K$, the cut-set lower bound is achievable.
Theorem \ref{thm:3} and Theorem \ref{thm:4} are therefore verified. \section{Conclusion} In this work, we studied the caching problem when all users have a small buffer size and the number of users is no less than the number of files in the server. A novel coded caching scheme was proposed to achieve the cut-set bound rate for this scenario.
TITLE: Showing that $1$ is the only unit in a ring with identity $R$ such that $a^2 = a$ for all a in $R$ QUESTION [1 upvotes]: So I've been given a ring $R$ with identity (no further information on what the identity is or what kinds of elements are in $R$ or operation definitions) such that $a^2 = a$ for all $a \in R$. I need to show that the only unit in $R$ is $1$. I've kind of assumed a proof by contradiction approach. I'm assuming there are other units in $R$ which I've called $u$, called its inverse $x$ and called the identity $I$. So I've done the following $ux=1$ $(ux)I=I$ $ux=I$ $u^2=u$ $(u^2)I=uI$ $u^2=u(ux)$ $u^2=(u^2)x$ $1=x$ $ux = 1$ $u(1) = 1$ $u = 1$ Contradiction, $1$ is the only unit in $R$. I know I've got the right answer but I'm a little apprehensive about it. Do I need to do the left-side multiplications too, since there's nothing about commutativity in the original question? REPLY [0 votes]: Suppose that $a$ is both an idempotent and a unit. Then, $a=aaa^{-1}=aa^{-1}=1$. In particular, the only unit in a Boolean ring is $1$.
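For concreteness (an illustration of ours, not part of the original answer), one can verify this in a small Boolean ring: the subsets of a finite set, where addition is symmetric difference, multiplication is intersection, and the multiplicative identity $1$ is the full set. Every element is idempotent, and the only unit is the identity.

```python
from itertools import combinations

# The Boolean ring of subsets of {0, 1, 2}: multiplication is intersection,
# so a*a = a for every a (idempotency), and the identity is the full set.
S = frozenset({0, 1, 2})
elements = [frozenset(c) for r in range(4) for c in combinations(S, r)]
one = S  # multiplicative identity

# Idempotency: a*a = a for every a.
assert all((a & a) == a for a in elements)

# Units: a is a unit iff a*b = 1 for some b; intersection can only be the
# full set when a itself is the full set, so the only unit is 1.
units = [a for a in elements if any((a & b) == one for b in elements)]
assert units == [one]
print("only unit:", sorted(units[0]))
```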
TITLE: Two Questions about Sobolev inequalities and Lipschitz smooth functions QUESTION [1 upvotes]: Question 1. I want to find a manifold such that the Sobolev inequality on M of the form $\lVert f \rVert_{n/(n-1)} \leq C\lVert \bigtriangledown f \rVert_1$, where $C=C(n)$, implies that $vol(B(r)) \geq cr^n$ for some constant c. (That is: at-least-Euclidean volume growth.) I can't find such a manifold; does anyone have an example? Question 2. Let $M^{(n)}$ be a compact Riemannian manifold and $f:M\longrightarrow \mathbb{R}$ a smooth function. Does $f$ being Lipschitz imply that $f \in L_1^p(M)$ for all $p \geq 1$? I appreciate your answers. Thank you very much! REPLY [1 votes]: The answer to Question 2 is yes. Since $f$ is smooth and has bounded derivative, it belongs to $L_1^p$ for all $p\ge 1$. I'm not sure I understand Question 1. Does $M=\mathbb R^n$ work?
\begin{document} \maketitle \begin{abstract} After works by Michael and Simon \cite{MS}, Hoffman and Spruck \cite{HS}, and White \cite{W}, the celebrated Sobolev inequality could be extended to submanifolds in a huge class of Riemannian manifolds. The universal constant obtained depends only on the dimension of the submanifold. A variety of applications to submanifold theory and geometric analysis have been obtained from that inequality. It is worthwhile to point out that, by a Nash Theorem, every Riemannian manifold can be seen as a submanifold in some Euclidean space. In the same spirit, Carron obtained a Hardy inequality for submanifolds in Euclidean spaces. In this paper, we will prove the Hardy, weighted Sobolev and Caffarelli-Kohn-Nirenberg inequalities, as well as some of their derivatives, such as the Gagliardo-Nirenberg and Heisenberg-Pauli-Weyl inequalities, for submanifolds in a class of manifolds that includes the Cartan-Hadamard ones. \end{abstract} \tableofcontents \section{Introduction} Over the years, geometers have been interested in understanding how integral inequalities imply geometric or topological obstructions on Riemannian manifolds. With this purpose, some integral inequalities lead us to study positive solutions to critical singular quasilinear elliptic problems, sharp constants, existence, non-existence and symmetry results for extremal functions on subsets of Euclidean space. About these subjects, one can read, for instance, \cite{BT}, \cite{C}, \cite{CC}, \cite{CKN}, \cite{CW}, \cite{HS}, \cite{MS}, \cite{KO} and references therein. In the literature, some of the best known integral inequalities are the Hardy inequality, the Gagliardo-Nirenberg inequality, and, more generally, the Caffarelli-Kohn-Nirenberg inequality. These inequalities imply comparison for the volume growth, estimates of the essential spectrum for Schr\"{o}dinger operators, and parabolicity, among other properties (see, for instance, \cite{L, dCX, X}).
In this paper, we propose to study the Caffarelli-Kohn-Nirenberg (CKN) inequality for submanifolds in a class of Riemannian manifolds that includes, for instance, the Cartan-Hadamard manifolds, using an elementary and very efficient approach. We recall that a Cartan-Hadamard manifold is a complete simply-connected Riemannian manifold with nonpositive sectional curvature. Euclidean and hyperbolic spaces are the simplest examples of Cartan-Hadamard manifolds. \section{Preliminaries} In this section, let us start by recalling some concepts, notation and basic properties of submanifolds. First, let $M=M^k$ be a $k$-dimensional Riemannian manifold with (possibly nonempty) smooth boundary $\p M$. Assume $M$ is isometrically immersed in a complete Riemannian manifold $\bar M$. Henceforth, we will denote by $f:M\to \bar M$ the isometric immersion. In this paper, no restriction on the codimension of $f$ is required. By abuse of notation, sometimes we will identify $f(x)=x$, for all $x\in M$. Let $\lan\cdot,\cdot\ran$ denote the Riemannian metric on $\bar M$ and consider the same notation for the metric induced on $M$. Associated to these metrics, consider the Levi-Civita connections $D$ and $\na$ on $\bar M$ and $M$, respectively. It is easy to see that $\nabla_Y Z = (D_YZ)^\top,$ where $\top$ means the orthogonal projection onto the tangent bundle $TM$. The Gauss equation says $$D_Y Z = \nabla_YZ + \II(Y,Z),$$ where $\II$ is a quadratic form called the {\it second fundamental form}. The mean curvature vector is defined by $H=\tr_M \,\II$. Let $\m K:[0,\infty)\to [0,\infty)$ be a nonnegative continuous function and $h\in C^2([0,+\infty))$ the solution of the Cauchy problem: \begin{equation}\label{cauchy-h} \begin{array}{l} h''+ \m K h = 0, \\ h(0)=0, \ h'(0)=1. \end{array} \end{equation} Let $0<\bar r_0=\bar r_0(\m K)\le +\infty$ be the supremum value for which the restriction $h|_{[0,\bar r_0)}$ is increasing and let $[0,\bar s_0)=h([0,\bar r_0))$.
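As a quick numerical illustration (ours, not part of the paper): for the constant bound $\m K=b^2$ the Cauchy problem (\ref{cauchy-h}) is solved in closed form by $h(t)=\frac{1}{b}\sin(bt)$ when $b>0$ and by $h(t)=t$ when $b=0$, which can be checked by integrating the ODE directly.

```python
import math

# Sketch (ours): integrate h'' + b^2 h = 0, h(0) = 0, h'(0) = 1 with a
# semi-implicit Euler step and compare against the closed-form solutions
# h(t) = sin(bt)/b (b > 0) and h(t) = t (b = 0).
def integrate(b, t_end, n=200000):
    dt = t_end / n
    h, hp = 0.0, 1.0          # initial conditions h(0) = 0, h'(0) = 1
    for _ in range(n):
        h = h + hp * dt       # update position first ...
        hp = hp - (b * b) * h * dt  # ... then velocity (symplectic Euler)
    return h

b, t = 2.0, 0.5
assert abs(integrate(b, t) - math.sin(b * t) / b) < 1e-3
assert abs(integrate(0.0, t) - t) < 1e-9
print("ODE check passed")
```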
Notice that $h'$ is non-increasing since $h''=-\m K h\le 0$. \begin{example}\label{mK-constant} If $\m K=b^2$, with $b\ge 0$, then \begin{enumerate}[(i)] \item\label{ambient-b=0} if $b=0$, it holds $h(t)=t$ and $\bar r_0=\bar s_0=+\infty$; \item\label{ambient-b>0} if $b>0$, it holds $h(t)=\frac{1}{b}\sin(bt)$ and $\bar r_0=\frac{\pi}{2b}$ and $\bar s_0=h(\bar r_0)=\frac{1}{b}$. \end{enumerate} \end{example} For $\xi\in \bar M$, let $r_{\xi}=d_{\bar M}(\cdot\,,\xi)$ be the distance function on $\bar M$ from $\xi\in \bar M$. In this paper, we will deal with complete ambient spaces $\bar M$ whose radial sectional curvature satisfies $(\bar K_\rad)_{\xi_0}\le \m K(r_{\xi_0})$, for some fixed $\xi_0\in \bar M$. Let us recall the definition of radial sectional curvature. Let $x\in \bar M$ and, since $\bar M$ is complete, let $\ga:[0,t_0=r_{\xi}(x)]\to \bar M$ be a minimizing geodesic in $\bar M$ from $\xi$ to $x$. For every orthonormal pair of vectors $Y,Z\in T_x\bar M$ we define $(\bar K_\rad)_{\xi}(Y,Z)=\lan \bar R(Y,\ga'(t_0))\ga'(t_0), Z\ran$. \begin{example}\label{example-warped} Let $(P,d\si^2_P)$ be a complete manifold. Consider the manifold $\bar M=[0,r_0)\times P/\sim$, where $(0,y_1) \sim (0,y_2)$, for all $y_1$, and $y_2\in P$, with the following metric: \begin{equation}\label{warped-metric} \lan \cdot\,,\cdot\ran_{\bar M} = dr^2 + h(r)^2 d\si^2_P. \end{equation} Since $h>0$ in $(0,r_0)$, $h(0)=0$ and $h'(0)=1$, it follows that $\bar M$ defines a Riemannian manifold. If $P=\Sp^{n-1}$ endowed with the round metric, $\lan\,,\ran_{\bar M}$ is called a rotationally invariant metric. We fix the point $\xi_0=(0,y)\in \bar M$. The distance $d_{\bar M}((r,y), \xi_0)=r$, for all $(r,y)\in \bar M$. The curvature tensor $\bar R$ of $\bar M$ satisfies \begin{equation} \bar R(Y,\p_r)\p_r = \left\{ \begin{array}{l} -\frac{h''(r)}{h(r)} Y, \mbox{ if } Y \mbox{ is tangent to } P;\\ 0, \mbox{ if } Y=\p_r. \end{array} \right.
\end{equation} Hence, the radial sectional curvature $(\bar K_{\rad})_{\xi_0}(\cdot\,,\cdot)=\lan \bar R(\cdot\,,\p_r)\p_r,\cdot\ran$, with base point $\xi_0$, satisfies $(\bar K_{\rad})_{\xi_0}=\m K(r)$. Many classical metrics are rotationally symmetric: (i) The Euclidean metric: $\lan\,,\ran_{\real^n}=dr^2+r^2 d\si^2_{\Sp^{n-1}}$, in $[0,\infty)\times \Sp^{n-1}$. (ii) The spherical metric: $\lan\,,\ran_{\Sp^n}=dr^2 + \sin^2(r) d\si^2_{\Sp^{n-1}}$, in $[0,\pi]\times \Sp^{n-1}$. (iii) The hyperbolic metric: $\lan\,,\ran_{\hy^n}=dr^2+\sinh^{2}(r) d\si^2_{\Sp^{n-1}}$, in $[0,\infty)\times \Sp^{n-1}$. (iv) Some classical examples in general relativity: the Schwarzschild metric, the De Sitter-Schwarzschild metric, the Kottler-Schwarzschild metric, among others. \end{example} Assume the radial sectional curvatures of $\bar M$ satisfy $(\bar K_{\rad})_{\xi_0}\le \m K(r)$, where $r=r_{\xi_0}=d_{\bar M}(\cdot\,,\xi_0)$. We fix $0<r_0 < \min\{\bar r_0(\m K),Inj_{\bar M}(\xi_0)\}$ and consider the geodesic ball $\m B=\m B_{r_0}(\xi_0)=\{x\in \bar M \mid d_{\bar M}(x,\xi_0)<r_0\}$. It follows that $r$ is differentiable at all points in $\m B^*=\m B\setminus \{\xi_0\}$ and, by the Hessian comparison theorem (see Theorem 2.3, page 29 of \cite{PRS}), we have \begin{equation}\label{hes-comp} \hs_{\,r}(v,v) \ge \frac{h'(r)}{h(r)}(1-\lan \bar\na r,v\ran^2), \end{equation} for all points in $\m B^*$ and vector fields $v:\m B^*\to T\bar M$ with $|v|=1$. For a vector field $Y:M\to T\bar M$, the divergence of $Y$ on $M$ is given by $$\Div Y = \sum_{i=1}^k \langle D_{e_i}Y, e_i\rangle,$$ where $\{e_1,\cdots, e_k\}$ denotes a local orthonormal frame on $M$. By simple computations, one has \begin{lemma}\label{prop} Let $Y:M\to T\bar M$ be a vector field and $\psi \in C^1(M)$. The following items hold \begin{enumerate}[(a)] \item\label{prop-a} $\Div Y = \Div Y^\top - \langle \vec{H}, Y\rangle;$ \medskip \item\label{prop-b} $\Div (\psi Y) = \psi~\Div Y + \lan \na^M\psi, Y\rangle$.
\end{enumerate} \end{lemma} From now on, we will consider the radial vector field $X=X_{\xi_0}=h(r)\bar\na r$, defined in $\m B^*$. Notice that $|X|=h(r)>0$ everywhere in $\m B^*$. \begin{lemma}\label{formula} For all $\al\in (-\infty, +\infty)$, it holds \begin{equation*} \dv_M(\frac{X}{|X|^\al}) \ge h'(r)[\frac{k-\al}{{h(r)}^\al} + \al~\frac{|\bar\na r ^\perp|^2}{{h(r)}^\al}], \end{equation*} in $M\cap \m B^*$. Here, $(\cdot)^\perp$ denotes the orthogonal projection on the normal bundle $TM^\perp$ of $M.$ \end{lemma} \begin{proof} By Lemma \ref{prop} item \ref{prop-b}, $\dv_M (\frac{X}{{h(r)}^\al}) = \frac{1}{{h(r)}^\al}\dv_M X +\lan \na^M (\frac{1}{{h(r)}^\al}),~X \ran$. Since $1=|\bar\na r|^2=|\bar\na r^\top|^2+|\bar\na r^\perp|^2$, and $\na^M (\frac{1}{|X|^\al}) = -\al~\frac{h'(r) \bar\na r ^\top}{{h(r)}^{\al + 1}}$, one has \begin{eqnarray*} \Div (\dfrac{X}{|X|^\al}) &=& \frac{1}{{h(r)}^\al}\dv_M X -\frac{\al h'(r)}{{h(r)}^{\al+1}}\lan \bar\na r^\top, h(r)\bar\na r\ran \\&=& \frac{1}{{h(r)}^\al}\dv_M X -\frac{\al h'(r)}{{h(r)}^\al} + \frac{\al h'(r)|\bar\na r^\perp|^2}{{h(r)}^{\al}}. \end{eqnarray*} On the other hand, let $\{e_1,\cdots, e_k\}$ denote an orthonormal frame on $M$. By (\ref{hes-comp}), we have \begin{eqnarray*} \dv_M X &=& \sum_{i=1}^k\lan D_{e_i}X, e_i\ran =\sum_{i=1}^k [h'(r)\lan \bar\na r,e_i\ran^2 + h(r)\hs_{r}(e_i,e_i)] \\&\ge& h'(r)|(\bar\na r)^\top|^2 + h'(r)(k- |(\bar\na r)^\top|^2) = k h'(r) \end{eqnarray*} Lemma \ref{formula} follows. \end{proof} \section{The Hardy inequality for submanifolds} Carron \cite{C} proved the following Hardy inequality. \begin{theoremA}[Carron]\label{carron-teo} Let $\Si^k$ be a complete noncompact Riemannian manifold isometrically immersed in a Euclidean space $\real^n$. Fix $v\in \real^n$ and let $r(x)=|x-v|$, for all $x\in \Si$.
Then, for every smooth function $\psi\in C_c^\infty(\Si)$ compactly supported in $\Si$, the following Hardy inequality holds: \begin{equation*} \frac{(k-2)^2}{4} \int_\Si \frac{\psi^2}{r^2} \le \int_\Si [|\na^\Si \psi|^2 + \frac{k-2}{2}\frac{|H|\psi^2}{r}]. \end{equation*} \end{theoremA} To compare Theorem A with Corollary \ref{hardy-hadamard-cor} below: given $\psi\in C_c^\infty(\Si)$, let $M$ be a compact subset of $\Si$ with compact smooth boundary $\p M$ satisfying $\spp(\psi)\subset M\subset \Si$. We will see that Corollary \ref{hardy-hadamard-cor} does not generalize Theorem A, unless $\Si$ is a minimal submanifold. \\ The result below will be fundamental to obtain our Hardy inequality (see Theorem \ref{teo-hardy}). \begin{proposition} \label{teo-hardy-norm} Fix $\xi_0\in \bar M$ and assume $(\bar K_{\rad})_{\xi_0}\le \m K(r)$, where $r=r_{\xi_0}=d_{\bar M}(\cdot\,,\xi_0)$. Assume further that $M$ is contained in a ball $\m B=\m B_{r_0}(\xi_0)$, for some $0<r_0<\min\{\bar r_0(\m K),\, Inj_{\bar M}(\xi_0)\}$. Let $1< p < \infty$ and $-\infty<\ga<k$. Then, for all $\psi\in C^1(M)$, with $\psi\ge 0$, it holds \begin{eqnarray*} \frac{(k-\ga)^p h'(r_0)^{p-1}}{p^p} \int_{M} \frac{\psi^p h'(r)}{h(r)^{\ga}} + \frac{\ga[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}}\int_M \frac{\psi^ph'(r)\,|(\bar\na r)^\perp|^2}{h(r)^\ga} \\&& \hspace{-13.0cm}\le \int_M \frac{1}{h(r)^{\ga-p}}[|\na^M \psi|^2 + \frac{\psi^2|H|^2}{p^2}]^{{p}/{2}} + \frac{[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}} \int_{\p M} \frac{\psi^p}{h(r)^{\ga-1}}\lan \bar\na r,\nu\ran, \end{eqnarray*} provided that $\int_{\p M} \frac{\psi^p}{h(r)^{\ga-1}}\lan \bar\na r,\nu\ran$ exists. Here, $\nu$ denotes the outward conormal vector to $\p M$. \end{proposition} \begin{proof} First, we assume $\xi_0\notin M$. Let $X=h(r_{\xi_0})\bar\na r_{\xi_0}$ and write $\ga=\al+\be+1$ with $\al,\be\in \real$. Let $\psi \in C^1(M)$.
By Lemmas \ref{prop} and \ref{formula}, \begin{eqnarray}\label{dv-hardy} \dv_M(\frac{\psi^p X^\top}{|X|^\ga}) &=& \psi^p \dv_M(\frac{X^\top}{|X|^\ga}) + \lan \na^M \psi^p,\frac{X}{|X|^\ga}\ran \nonumber\\&=& \psi^p \dv_M(\frac{X}{|X|^\ga}) + \psi^p \lan \frac{X}{|X|^\ga}, H\ran +p\,\psi^{p-1} \lan \na^M \psi, \frac{X}{|X|^\ga}\ran \nonumber\\&\ge& \psi^p h'(r)[\frac{k-\ga}{{h(r)}^\ga}+\frac{\ga |\bar\na r^\perp|^2}{{h(r)}^\ga}] + \lan p \na^M \psi + \psi {H}, \frac{\psi^{p-1} X}{|X|^\ga}\ran. \end{eqnarray} By the divergence theorem, \begin{eqnarray}\label{0notin} \int_{M} \psi^p h'(r)[\frac{k-\ga}{{h(r)}^{\ga}} + \frac{\ga|\bar\na r^\perp|^2}{{h(r)}^{\ga}}] &\le& - \int_{M} \lan p \na^M \psi + \psi {H}, \frac{\psi^{p-1} X}{|X|^\ga}\ran \nonumber\\&& + \int_{\p M} \frac{\psi^p}{|X|^\ga}\lan X,\nu \ran, \end{eqnarray} where $\nu$ denotes the outward conormal vector to the boundary $\p M$ in $M$. Let $r^*:=\max_{x\in \spp(\psi)} d_{\bar M}(f(x),\xi_0)$. Since $r< r_0$ in $M$ and $h''=-\m K h\le 0$, it holds that $h'(r)\ge h'(r_0)> 0$ in $M$.
By the Young inequality with $\ep>0$ (to be chosen later), it holds \begin{eqnarray} \int_{M} \psi^p h'(r)[\frac{k-\ga}{{h(r)}^{\ga}} + \frac{\ga |\bar\na r^\perp|^2}{{h(r)}^\ga}] &\le& -\int_{M} \lan \frac{p\,\na^M\psi}{|X|^\al}+ \frac{\psi{H}}{|X|^\al}, \frac{\psi^{p-1} X}{|X|^{\be+1}}\ran \nonumber\\&& + \int_{\p M} \frac{\psi^p}{|X|^\ga}\lan X,\nu \ran \nonumber\\ &&\hspace{-6.5cm} \le \frac{1}{p\ep^p} \int_M |\frac{p\na^M\psi}{{h(r)}^\al}+\frac{\psi{H}}{{h(r)}^\al}|^p + \frac{\ep^q}{q} \int_M \frac{|\psi|^{(p-1)q}}{{h(r)}^{\be q}} + \int_{\p M} \frac{\psi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran \nonumber\\&&\hspace{-6.5cm} \le \frac{1}{p\ep^p} \int_M |\frac{p^2|\na^M\psi|^2}{{h(r)}^{2\al}}+\frac{\psi^2|{H}|^2}{{h(r)}^{2\al}}|^{p/2} + \frac{\ep^q}{q} \int_M \frac{|\psi|^{(p-1)q}}{{h(r)}^{\be q}}\frac{h'(r)}{h'(r_0)} \nonumber\\&&+ \int_{\p M} \frac{\psi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran, \nonumber \end{eqnarray} where $q=\frac{p}{p-1}$. Now, consider $\be=(p-1)(\al+1)$. We have $\ga=q\be=p(\al+1)$. Thus, \begin{eqnarray}\label{h-ep} \vp(\ep) \int_{M} \frac{\psi^p h'(r)}{{h(r)}^{\ga}} + p\ga\ep^p \int_{M} \frac{\psi^p h'(r)|\bar\na r^\perp|^2}{{h(r)}^{\ga}} &\le& \ \hspace{-0.3cm} \int_M [\frac{p^2|\na^M\psi|^2}{{h(r)}^{2\al}}+\frac{\psi^2|{H}|^2}{{h(r)}^{2\al}}]^{p/2}\nonumber \\&& + \ p\ep^p \int_{\p M} \frac{\psi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran \end{eqnarray} where $\vp(\ep)=p\ep^p[(k-\ga)-\frac{\ep^q}{q h'(r_0)}] = \frac{p\ep^p}{h'(r_0)}[(k-\ga)h'(r_0)-\frac{\ep^q}{q}]$. Now, notice that \begin{eqnarray*} \frac{h'(r_0)}{p}\vp'(\ep) &=& p\ep^{p-1}[(k-\ga)h'(r_0)-\frac{\ep^q}{q}] - \ep^p \ep^{q-1} \\&=& p\ep^{p-1}[(k-\ga)h'(r_0)-\frac{\ep^q}{q} - \frac{\ep}{p}\ep^{q-1}] \\&=& p\ep^{p-1}[(k-\ga)h'(r_0)-\ep^q]. \end{eqnarray*} And, $\frac{h'(r_0)}{p^2}\vp''(\ep)=(p-1)\ep^{p-2}[(k-\ga)h'(r_0)-\ep^q]-q\ep^{p-1}\ep^{q-1}$. Thus, $\vp'(\ep)=0$ if and only if $\ep^q=(k-\ga)h'(r_0)>0$. At this point, $\vp''(\ep)=-\frac{p^2q\ep^{p+q-2}}{h'(r_0)}< 0$.
Hence, $\vp(\ep)$ attains its maximum at $\ep_0= [(k-\ga)h'(r_0)]^{\frac{p-1}{p}}$, with $\vp(\ep_0)=\frac{p}{h'(r_0)}[(k-\ga)h'(r_0)]^{p-1}(1-\frac{1}{q})(k-\ga)h'(r_0)=(k-\ga)^p h'(r_0)^{p-1}$. Since $p\al=\ga-p$, by (\ref{h-ep}), and multiplying both sides by $\frac{1}{p^p}$, it holds \begin{eqnarray*} \frac{(k-\ga)^p h'(r_0)^{p-1}}{p^p} \int_{M} \frac{h'(r)\psi^p}{{h(r)}^{\ga}} + \frac{\ga[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}}\int_{M} \frac{\psi^ph'(r)|\bar\na r^\perp|^2}{{h(r)}^\ga} \nonumber\\ && \hspace{-12.5cm} \le \int_M \frac{1}{{h(r)}^{\ga-p}}[|\na^M \psi|^2 + \frac{\psi^2|H|^2}{p^2}]^{p/2} + \frac{[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}}\int_{\p M} \frac{\psi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran. \end{eqnarray*} Now, we assume $\xi_0\in M$. Let $\m Z_0=\{x\in M \mid f(x)=\xi_0\}$. Since every immersion is locally an embedding, it follows that $\m Z_0$ is discrete, hence finite, since $M$ is compact. We write $\m Z_0=\{p_1,\ldots,p_l\}$ and let $\rho=r\circ f=d_{\bar M}(f\,,\xi_0)$. By Nash's embedding theorem, there is an isometric embedding of $\bar M$ into a Euclidean space $\real^N$. The composition of such an embedding with $f$ induces an isometric immersion $\bar f:M\to \real^N$. By the compactness of $M$, finiteness of $\m Z_0$, and the local form of an immersion, one can choose a small $\ep>0$, such that $ [\rho<2\ep]:=\rho^{-1}[0,2\ep) = U_1 \sqcup \ldots \sqcup U_l$ (disjoint union), where each $U_i$ is a neighborhood of $p_i$ in $M$ such that the restriction $\bar f|_{U_i}:U_i\to \real^N$ is the graph of a smooth function, say $u_i:U_i\to \real^{N-k}$. Thus, considering the set $[\rho<\de]=[r<\de]\cap M$ (identifying $x\in M$ with $f(x)$), with $0<\de<\ep$, again by the finiteness of $\m Z_0$, we have that the volume $\vol_M([\rho<\de])=O(\de^k)$, as $\de\to 0$. Similarly, one also obtains that $\vol_{\p M}(\p M \cap [\rho<\de])=O(\de^{k-1})$.
Now, for each $0<\de<\ep$, consider the cut-off function $\eta=\eta_\de\in C^\infty(M)$ satisfying: \begin{eqnarray}\label{cut-off} && 0\le \eta \le 1, \mbox{ in } M; \nonumber\\ &&\eta = 0, \mbox{ in } [\,\rho<\de], \ \mbox{ and } \eta = 1, \mbox{ in } [\,\rho>2\de];\\ && |\na^M \eta|\le L/\de, \nonumber \end{eqnarray} for some constant $L>1$, that does not depend on $\de$ and $\eta$. Consider $\phi=\eta\psi$. Since $\phi\in C^1(M)$ and $\xi_0\notin M':=\spp(\phi)$, it holds \begin{eqnarray*}\label{hardy-0-phi} \frac{(k-\ga)^p h'(r_0)^{p-1}}{p^p} \int_{M} \frac{\phi^ph'(r) }{{h(r)}^{\ga}} + \frac{\ga[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1} }\int_{M} \frac{\phi^p h'(r) |\bar\na r^\perp|^2}{{h(r)}^{\ga}} \nonumber\\ && \hspace{-12.5cm} \le \ \int_M \frac{h'(r_0)^{1-p}}{{h(r)}^{\ga-p}}[|\na^M \phi|^2 + \frac{\phi^2| H|^2}{p^2}]^{p/2} + \frac{[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}} \int_{\p M} \frac{\phi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran. \end{eqnarray*} The integral $\int_{\p M} \frac{\phi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran$ exists, since $\int_{\p M} \frac{\psi^p}{{h(r)}^{\ga-1}}|\lan \bar\na r,\nu\ran|$ exists and $0\le \phi\le \psi$. 
Furthermore, \begin{eqnarray}\label{est-grad-0-phi-1} \int_M \frac{1}{{h(r)}^{\ga-p}}[|\na^M\phi|^2+\frac{\phi^2|{H}|^2}{p^2}]^{p/2} &=& \int_{[\rho>2\de]} \frac{1}{{h(r)}^{\ga-p}}[|\na^M\psi|^2+\frac{\psi^2|{H}|^2}{p^2}]^{p/2} \nonumber \\&& \hspace{-0.6cm}+ \int_{[\de<\rho<2\de]} \frac{1}{{h(r)}^{\ga-p}}[|\na^M\phi|^2+\frac{\phi^2|{H}|^2}{p^2}]^{p/2} \end{eqnarray} and \begin{eqnarray*} \int_{[\de<\rho<2\de]} \frac{1}{{h(r)}^{\ga-p}}[|\na^M\phi|^2+\frac{\phi^2|{H}|^2}{p^2}]^{p/2} && \\&& \hspace{-5cm} \le \ \int_{[\de<\rho<2\de]} \frac{O(1)}{{h(r)}^{\ga-p}}[\eta^p|\na^M\psi|^p +|\psi|^p |\na^M\eta|^p + |\psi|^p {|H|^p}] \\&& \hspace{-5cm} = \ \int_{[\de<\rho<2\de]} O(\frac{1}{h(\de)^{\ga-p}}) (O(1)+ O(\frac{1}{\de^p})) \\&& \hspace{-5cm} = \ O(\frac{1}{\de^{\ga-p}}) (O(1)+ O(\frac{1}{\de^p}))O(\de^k) = O(\de^{k-\ga}), \end{eqnarray*} as $\de\to 0$. Therefore, it holds \begin{eqnarray*} \frac{(k-\ga)^ph'(r_0)^{p-1}}{p^p } \int_{[r>2\de]\cap M} \frac{\psi^ph'(r)}{{h(r)}^{\ga}} \\&& \hspace{-3cm}+ \frac{\ga[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1} }\int_{[r>2\de]\cap M} \frac{\psi^ph'(r)|\bar\na r^\perp|^2}{{h(r)}^{\ga}} \nonumber\\ && \hspace{-6cm} \le \ \int_M \frac{1}{{h(r)}^{\ga-p}}[|\na^M \psi|^2 + \frac{\psi^2|H|^2}{p^2}]^{p/2} \\&&\hspace{-5cm} + \ \frac{[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}} \int_{\p M \cap [\rho>2\de]} \frac{\psi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran + O(\de^{k-\ga}). \end{eqnarray*} Proposition \ref{teo-hardy-norm} follows, since $k-\ga>0$ and $\int_{\p M} \frac{\psi^p}{{h(r)}^{\ga-1}}\lan \bar\na r,\nu \ran$ exists. \end{proof} It is simple to see that, for all numbers $a\ge 0$ and $b\ge 0$, it holds \begin{equation}\label{num-property} \min\{1,2^{\frac{p-2}{2}}\}(a^p+b^p) \le (a^2+b^2)^{p/2} \le \max\{1,2^{\frac{p-2}{2}}\}(a^p+b^p). \end{equation} In fact, to show this, without loss of generality, we can suppose $a^2+b^2=1$. We write $a=\cos\te$ and $b=\sin\te$, for some $\te\in [0,\pi/2]$. If $p=2$, there is nothing to do. Assume $p\neq 2$.
Consider $f(\te)= a^p+b^p=\cos^p(\te)+\sin^p(\te)$. The derivative of $f$ is given by $f'(\te)=-p\cos^{p-1}(\te)\sin(\te)+p\sin^{p-1}(\te)\cos(\te)$. Thus, $f'(\te)=0$ iff $\cos^{p-1}(\te)\sin(\te)=\sin^{p-1}(\te)\cos(\te)$, that is, iff either $\cos(\te)=0$, $\sin(\te)=0$, or $\cos^{p-2}(\te)=\sin^{p-2}(\te)$. Thus, $f'(\te)=0$ iff $\te=0$, $\te=\frac{\pi}{2}$, or $\te=\frac{\pi}{4}$. So, the critical values are $f(0)=f(\frac{\pi}{2})=1$ and $f(\frac{\pi}{4})=2 (\frac{1}{\sqrt{2}})^{p}=2^{1-\frac{p}{2}}$. Thus, $\min\{1,2^{1-\frac{p}{2}}\} \le f(\te)=a^p+b^p\le \max\{1,2^{1-\frac{p}{2}}\}$. Hence, (\ref{num-property}) follows. \\ As a consequence of (\ref{num-property}) and Proposition \ref{teo-hardy-norm}, we obtain the following Hardy inequality. \begin{theorem}\label{teo-hardy} Fixed $\xi_0\in \bar M$, we assume $(\bar K_{\rad})_{\xi_0}\le \m K(r)$, where $r=r_{\xi_0}=d_{\bar M}(\cdot\,,\xi_0)$. Assume that $M$ is contained in a ball $\m B=\m B_{r_0}(\xi_0)$, for some $0<r_0<\min\{\bar r_0(\m K),\, Inj_{\bar M}(\xi_0)\}$. Let $1\le p<\infty$ and $-\infty<\ga<k$. Then, for all $\psi\in C^1(M)$, it holds \begin{eqnarray*} \frac{(k-\ga)^p h'(r_0)^{p-1}}{p^p} \int_{M} \frac{|\psi|^p h'(r)}{h(r)^{\ga}} + \frac{\ga[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}}\int_M \frac{|\psi|^p h'(r)|\bar\na r^\perp|^2}{h(r)^{\ga}} \\&& \hspace{-11.5cm} \le \ \m A_p\int_M [\frac{|\na^M \psi|^p}{h(r)^{\ga-p}} + \frac{|\psi|^p|H|^p}{p^p h(r)^{\ga-p}}] +\frac{[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}} \int_{\p M} \frac{|\psi|^p}{h(r)^{\ga-1}}, \end{eqnarray*} where $\m A_p=\max\{1, 2^{\frac{p-2}{2}}\}$. Moreover, if $M$ is minimal, we can take $\m A_p=1$. \end{theorem} \begin{proof} We may assume $h'(r_0)>0$, otherwise, there is nothing to do. First, we fix $p>1$ and let $\psi\in C^1(M)$. Take $\ep>0$ and consider the function $\psi_{\ep}=(\psi^2+\ep^2)^{1/2}$. Note that $\psi_\ep\ge |\psi|\ge 0$ and $|\na \psi_\ep| = \frac{|\psi|}{(\psi^2+\ep^2)^{{1}/{2}}}|\na \psi|\le |\na\psi|$.
Thus, by Proposition \ref{teo-hardy-norm}, \begin{eqnarray*} \frac{(k-\ga)^p }{p^p} \int_{M} \frac{\psi_\ep^p h'(r)}{{h(r)}^{\ga}} + \frac{\ga(k-\ga)^{p-1}}{p^{p-1}}\int_M \frac{\psi_\ep^p h'(r)|\bar\na r^\perp|^2}{{h(r)}^{\ga}} \\&& \hspace{-8cm}\le \int_M \frac{h'(r_0)^{1-p}}{{h(r)}^{\ga-p}}[|\na^M \psi|^2 + \frac{\psi_\ep^2|H|^2}{p^2}]^{{p}/{2}} + \frac{(k-\ga)^{p-1}}{p^{p-1}} \int_{\p M} \frac{\psi_\ep^p}{{h(r)}^{\ga-1}}. \end{eqnarray*} Since $\psi_{\ep_1}\le \psi_{\ep_2}$, if $\ep_1<\ep_2$, and $|\psi|\le \psi_\ep \le |\psi|+\ep$, by taking $\ep\to 0$, we have \begin{eqnarray}\label{hardy-ineq-norm-p>1} \frac{(k-\ga)^p}{p^p} \int_{M} \frac{|\psi|^p h'(r)}{{h(r)}^{\ga}} + \frac{\ga(k-\ga)^{p-1}}{p^{p-1}}\int_M \frac{|\psi|^ph'(r)|\bar\na r^\perp|^2}{{h(r)}^{\ga}} \nonumber\\&& \hspace{-8.5cm}\le \int_M \frac{h'(r_0)^{1-p}}{{h(r)}^{\ga-p}}[|\na^M \psi|^2 + \frac{\psi^2|H|^2}{p^2}]^{{p}/{2}} + \frac{(k-\ga)^{p-1}}{p^{p-1}} \int_{\p M} \frac{|\psi|^p}{{h(r)}^{\ga-1}}. \end{eqnarray} Now, taking $p\to 1$, and applying the dominated convergence theorem, we obtain that (\ref{hardy-ineq-norm-p>1}) also holds for $p=1$. Applying (\ref{num-property}) in inequality (\ref{hardy-ineq-norm-p>1}), Theorem \ref{teo-hardy} follows. \end{proof} As a consequence of Theorem \ref{teo-hardy}, we obtain a Hardy type inequality for submanifolds in ambient spaces having a pole with nonpositive radial sectional curvature. Namely, the following result holds \begin{corollary}\label{hardy-hadamard-cor} Let $\bar M$ be a complete simply-connected manifold with radial sectional curvature $(\bar K_{\rad})_{\xi_0}\le 0$, for some $\xi_0\in \bar M$. Let $r=r_{\xi_0}=d_{\bar M}(\cdot\,,\xi_0)$ and let $1\le p <k$ and $-\infty<\ga<k$. 
Then, for all $\psi\in C^1(M)$, it holds \begin{eqnarray*} \frac{(k-\ga)^p}{p^p} \int_{M} \frac{|\psi|^p}{{r}^{\ga}} + \frac{\ga(k-\ga)^{p-1}}{p^{p-1}}\int_M \frac{|\psi|^p|\bar\na r^\perp|^2}{{r}^{\ga}} \\&& \hspace{-5.5cm} \le \ \m A_p\int_M [\frac{|\na^M \psi|^p}{{r}^{\ga-p}} + \frac{|\psi|^p|H|^p}{p^p {r}^{\ga-p}}] +\frac{(k-\ga)^{p-1}}{p^{p-1}} \int_{\p M} \frac{|\psi|^p}{{r}^{\ga-1}}, \end{eqnarray*} where $\m A_p=\max\{1, 2^{\frac{p-2}{2}}\}$. Moreover, if $M$ is minimal, we can take $\m A_p=1$. \end{corollary} \section{The weighted Hoffman-Spruck inequality for submanifolds} Another consequence of Theorem \ref{teo-hardy} is the Hoffman-Spruck inequality. Namely, fixed $\xi_0\in \bar M$, we assume $(\bar K)_{\rad}\le \m K(r)$ in $\bar M$, where $r=r_{\xi_0}=d_{\bar M}(\cdot\,,\xi_0)$. Let $\m B$ be the geodesic ball in $\bar M$ centered at $\xi_0$ and radius $\bar r_0=\min\{\bar r_0(\m K),Inj_{\bar M}(\xi_0)\}$. Since $M$ is compact and contained in $\m B$, it follows that $r^*=\max_{x\in M} r(x)<\bar r_0= \min\{\bar r_0(\m K), Inj_{\bar M}(\xi_0)\}$ and $h$ is increasing in $[0,\bar r_0)$. Hence, we may assume $M$ is contained in a ball $\m B_{r_0}(\xi_0)$, for some $0<r_0<\min\{\bar r_0(\m K),Inj_{\bar M}(\xi_0)\}$, arbitrarily close to $\bar r_0$, satisfying $h'(r_0)>0$. In particular, $h'(r)>0$, for all $0\le r\le r_0$, since $h''=-\m K h\le 0$. Applying Theorem \ref{teo-hardy}, we conclude that $M$ cannot be a closed minimal submanifold. On the other hand, notice that $\eta=-\bar \na r$ is the unit normal vector to the boundary $\p \m B_{r_0}(\xi_0)$ pointing into $\m B_{r_0}(\xi_0)$. By the Hessian comparison theorem (see (\ref{hes-comp})), the shape operator $A=-\bar\na\eta$ satisfies $A(v,v)=\hs_{r}(v,v)\ge \frac{h'(r_0)}{h(r_0)}> 0$, for every unit vector $v$ tangent to $\p \m B_{r_0}(\xi_0)$. Hence, the boundary $\p \m B_{r_0}(\xi_0)$ is convex.
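To see why Theorem \ref{teo-hardy} rules out closed minimal submanifolds of $\m B$, one can argue as follows; this is only a sketch, and the particular choices $p=2$, $\ga=0$ and $\psi\equiv 1$ are ours, not part of the original argument.

```latex
% Sketch: suppose M is closed ($\p M=\emptyset$) and minimal ($H\equiv 0$).
% Apply Theorem \ref{teo-hardy} with $p=2$, $\ga=0$ and the test function
% $\psi\equiv 1$. The boundary term vanishes, the $\ga$-term vanishes, and
% $\na^M\psi=0$, so the inequality reduces to
\begin{equation*}
  \frac{k^2\, h'(r_0)}{4}\int_M h'(r) \;\le\; 0,
\end{equation*}
% which is impossible, because $h'>0$ on $[0,r_0]$ and $\vol(M)>0$.
```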
Since $\bar M$ does not admit any closed minimal submanifold inside $\m B$, by Section 6 of \cite{W}, there exists a constant $c>0$ depending only on $k$, satisfying \begin{equation*} \vol(M)^{\frac{k-1}{k}}\le c \big(\vol(\p M)+\int_M |H|\big), \end{equation*} provided $k<7$, or $\vol(M)<\m D$, where $\m D$ depends only on $\m B$. Thus, by a straightforward computation, one obtains \begin{equation} \label{HS-White-version} [\int_M |\psi|^{p^*}]^\frac{p}{p^*} \le S \int_M (|\na^M \psi|^p + \frac{|\psi|^p|{H}|^p}{p^p}), \end{equation} for all $1\le p<k$ and $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, where $S$ depends only on $k$ and $p$. By Hoffman and Spruck \cite{HS}, one sees that $\m D$ depends only on $Inj_{\bar M}(M)$ and $\bar r_0(\m K)$. Namely, Hoffman and Spruck proved the following. \begin{theorem}\label{hoffman-spruck-teo} Assume the sectional curvatures of $\bar M$ satisfy $\bar K \le b^2$, for some constant $b\ge 0$. Then, there exists a constant $S>0$ satisfying \begin{equation}\label{desig-hoff-sprk} [\int_M |\psi|^{p^*}]^\frac{p}{p^*} \le S \int_M (|\na^M \psi|^p + \frac{|\psi|^p|{H}|^p}{p^p}), \end{equation} for all $1\le p<k$ and $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, provided there exists $z\in (0,1)$ satisfying \begin{eqnarray} \label{bar J} && \bar J_z:= [\frac{\om_k^{-1}}{1-z}\vol_M(\spp(\psi))]^{\frac{1}{k}}< \frac{1}{b}, \mbox{ if } b>0; \mbox{ and } \\ \label{h-bar J} && 2h^{-1}_b(\bar J_z)\le Inj_{\bar M}\,(\spp(\psi)), \end{eqnarray} where $h_b(t)=t$, with $t\in (0,\infty)$, if $b=0$, and $h_b(t)=\frac{1}{b}\sin(bt)$, with $t\in (0,\frac{\pi}{2b})$, if $b>0$ (in this case, $h^{-1}_b(t)=\frac{1}{b}\sin^{-1}(tb)$, with $t\in (0,\frac{1}{b})$). Here, $\om_k$ is the volume of the standard unit ball $B_1(0)$ in $\real^k$, and $Inj_{\bar M}\,(\spp(\psi))$ is the infimum of the injectivity radius of $\bar M$ restricted to the points of $\spp(\psi)$.
Furthermore, the constant $S=S_{k,p,z}$ is given by \begin{equation}\label{hoff-sprk-S} S_{k,p,z}=\frac{\pi}{2}\, \frac{2^{k} k}{z(k-1)}\big(\frac{\om_k^{-1}}{1-z}\big)^{\frac{1}{k}}\,2^{p-1}[\frac{p(k-1)}{k-p}]^p. \end{equation} Moreover, if $b=0$, $S_{k,p,z}$ can be improved by taking $1$ instead of $\frac{\pi}{2}$. \end{theorem} \begin{remark} The Hoffman-Spruck theorem above can be generalized to ambient spaces $\bar M$ satisfying $(\bar K_{\rad})_\xi \le \m K(r_\xi)$, for all $\xi\in \bar M$. The details and proof for this case can be found, for instance, in \cite{BM}. \end{remark} The constant $S_{k,p,z}$ as in (\ref{hoff-sprk-S}) reaches its minimum at $z=\frac{k}{k+1}$, hence we can take \begin{eqnarray}\label{constant-S} S &=& S_{k,p}=\min_{z\in (0,1)} S_{k,p,z} = \frac{\pi}{2} \, \frac{2^{k} k}{\frac{k}{k+1}(k-1)}\big(\om_k^{-1}(k+1)\big)^{\frac{1}{k}} \,2^{p-1}[\frac{p(k-1)}{k-p}]^p \nonumber\\&=& \frac{\pi}{2} \, \frac{2^{k}(k+1)^\frac{k+1}{k}}{k-1} \om_k^{-\frac{1}{k}}\,2^{p-1}[\frac{p(k-1)}{k-p}]^p, \end{eqnarray} provided $\bar J= [\frac{k+1}{\om_k}\vol_M(\spp(\psi))]^\frac{1}{k}\le s_b$ and $2h^{-1}_b(\bar J)\le Inj_{\bar M}(\spp(\psi))$. Thus, as a corollary of Theorem \ref{hoffman-spruck-teo} and (\ref{HS-White-version}), one has \begin{proposition}\label{HS-compact} Fixed $\xi_0\in \bar M$, assume $(\bar K_{\rad})_{\xi_0}\le \m K(r_{\xi_0})$. Assume $M$ is contained in $\m B=\m B_{r_0}(\xi_0)$, with $r_0=\min\{\bar r_0(\m K), Inj_{\bar M}(\xi_0)\}$. Then, for all $1\le p<k$, there exists $S>0$, depending only on $k$ and $p$, such that, for all $\psi\in C^1(M)$ with $\psi=0$ on $\p M$, it holds \begin{equation*} [\int_M |\psi|^{p^*}]^\frac{p}{p^*} \le S \int_M (|\na^M \psi|^p + \frac{|\psi|^p|{H}|^p}{p^p}), \end{equation*} provided either $k<7$, or $\vol_M(\spp(\psi))<\m D$, where $0<\m D\le +\infty$ depends only on $Inj_{\bar M}(M)$ and $r_0$.
\end{proposition} \begin{example} \label{cartan-hadamard-example} Assume $\bar M$ is a Cartan-Hadamard manifold. Then, it holds $\bar r_0(\m K)=Inj_{\bar M}(\xi_0)=Inj_{\bar M}\,(M)=\infty$. Hence, we can take $\m D=+\infty$ in Proposition \ref{HS-compact}. \end{example} Now, we use Theorem \ref{teo-hardy} together with Theorem \ref{hoffman-spruck-teo}, in order to obtain a weighted Hoffman-Spruck inequality for submanifolds. \begin{theorem}\label{teo-sob} Fixed $\xi_0\in \bar M$, assume $(\bar K_{\rad})_{\xi_0}\le \m K(r)$, where $r=d_{\bar M}(\cdot\,,\xi_0)$. Assume $M$ is contained in $\m B=\m B_{r_0}(\xi_0)$, with $r_0=\min\{\bar r_0(\m K), Inj_{\bar M}(\xi_0)\}$. Let $1\le p<k$ and $-\infty<\al<\frac{k-p}{p}$. Then, for all $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, it holds \begin{eqnarray*} \frac{1}{S}[\int_M\frac{|\psi|^{p^*}}{{h(r)}^{p^*\al}}]^{p/p^*} + \Phi_{k,p,\al}\int_M \frac{|\psi|^p{h'(r)}|\bar\na r^\perp|^2}{{h(r)}^{p(\al+1)}} + \De_{k,p,\al} \int_M \frac{|\psi|^p{h'(r)}|\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}} \\&& \hspace{-6.2cm} \le \Ga_{k,p,\al} \int_M[\frac{|\na^M\psi|^p}{{h(r)}^{p\al}}+ \frac{|\psi|^p|H|^p}{p^p{h(r)}^{p\al}}], \end{eqnarray*} provided either $k<7$, or $\vol(M)<\m D$, where $0<\m D\le +\infty$ depends only on $Inj_{\bar M}(M)$ and $r_0$.
Here, $p^*=\frac{kp}{k-p}$, $S>0$ depends only on $k$ and $p$, and \begin{eqnarray*} \Ga_{k,p,\al}&=& \m A_p \big[1+|\al|^{\frac{2p}{2+p}}2^\frac{|p-2|}{(p+2)}{h'(r_0)}^{\frac{2(1-p)}{2+p}}(\frac{p}{k-\ga})^\frac{2p}{p+2}\,\big]^{\frac{p+2}{2}}\\ &=& h'(r_0)^{1-p}\m A_p \big[h'(r_0)^\frac{2(p-1)}{p+2}+|\al|^{\frac{2p}{2+p}}2^\frac{|p-2|}{(p+2)}(\frac{p}{k-\ga})^\frac{2p}{p+2}\,\big]^{\frac{p+2}{2}}\\ \Phi_{k,p,\al}&=& 2^{\frac{|p-2|}{2}}\frac{\ga p}{k-\ga} (|\al|^{\frac{2p}{2+p}}+2^{\frac{-|p-2|}{p+2}}h'(r_0)^{\frac{2(p-1)}{p+2}}(\frac{p}{k-\ga})^{\frac{-2p}{p+2}})^{\frac{p}{2}}|\al|^{\frac{2p}{2+p}}\\ \De_{k,p,\al}&=& \m A_p(|\al|^{\frac{2p}{2+p}}+2^{\frac{-|p-2|}{p+2}}h'(r_0)^{\frac{2(p-1)}{p+2}}(\frac{p}{k-\ga})^{\frac{-2p}{p+2}})^{\frac{p}{2}}|\al|^{\frac{2p}{2+p}}, \end{eqnarray*} where $\ga=p(\al+1)$ and $\m A_p=\max\{1,2^{\frac{p-2}{2}}\}$. \end{theorem} \begin{proof} First, we assume $\xi_0\notin M$. Then, $r=d_{\bar M}(\cdot\,,\xi_0)>0$ on $M$, hence $\frac{\psi}{{h(r)}^\al}$ is a $C^1$ function on $M$ vanishing on $\p M$. By Proposition \ref{HS-compact}, there is a constant $S>0$, depending only on $k$ and $p$, such that, for all $\psi\in C^1(M)$ with $\psi=0$ on $\p M$, the following inequality holds \begin{equation}\label{Michael-Simon-psi} [\int_M \frac{|\psi|^{p^*}}{{h(r)}^{p^*\al}}]^{\frac{p}{p^*}}\le S\int_M [|\na^M(\frac{\psi}{{h(r)}^\al})|^p + \frac{|\psi|^p|H|^p}{p^p{h(r)}^{p\al}}], \end{equation} provided $k<7$ or $\vol(M)<\m D$, where $\m D$ depends only on $r_0$ and $Inj_{\bar M}(M)$.
Using that $\na^M (\frac{\psi}{{h(r)}^\al}) = \frac{\na^M\psi}{{h(r)}^\al} - \frac{\al\psi h'(r)\bar\na r^\top}{{h(r)}^{\al+1}}$, by the Young inequality, \begin{eqnarray} |\na^M(\frac{\psi}{{h(r)}^\al})|^2 &=& \frac{\al^2\psi^2 h'(r)^2|\bar\na r^\top|^2}{{h(r)}^{2\al+2}} + (\frac{-\al}{{h(r)}^{2\al}}) ~ 2\lan\na^M\psi, \frac{\psi h'(r)\bar\na r^\top}{{h(r)}}\ran \nonumber\\&& + \ \frac{|\na^M\psi|^2}{{h(r)}^{2\al}} \nonumber\\ &\le& (\al^2+|\al|\ep^2)\frac{\psi^2 h'(r)^2|\bar\na r^\top|^2}{{h(r)}^{2\al+2}} + (1+\frac{|\al|}{\ep^2})\frac{|\na^M\psi|^2}{{h(r)}^{2\al}} \nonumber\\ &=& (|\al|+\ep^2)\big[\frac{|\al|\psi^2 h'(r)^2|\bar\na r^\top|^2}{{h(r)}^{2(\al+1)}}+ \frac{|\na^M\psi|^2}{\ep^2 {h(r)}^{2\al}}\big], \end{eqnarray} for all $\ep>0$. Hence, using (\ref{num-property}), \begin{eqnarray} |\na^M(\frac{\psi}{{h(r)}^\al})|^p &=& (|\na^M(\frac{\psi}{{h(r)}^\al})|^2)^{\frac{p}{2}} \nonumber\\ &\le& \m A_p (|\al|+\ep^2)^\frac{p}{2}[\frac{|\al|^\frac{p}{2} \psi^p {h'(r)}|\bar\na r^\top|^p}{{h(r)}^{p(\al+1)}} + \frac{|\na^M\psi|^p}{\ep^p {h(r)}^{p\al}}] \label{h'-without-p}\\&\le& \m A_p (|\al|+\ep^2)^\frac{p}{2}[\frac{|\al|^\frac{p}{2} \psi^p {h'(r)}}{{h(r)}^{p(\al+1)}}(\m B_p - |\bar\na r^\perp|^p) + \frac{|\na^M\psi|^p}{\ep^p {h(r)}^{p\al}}] \label{a0}\\&& \hspace{-2cm}= \m A_p (|\al|+\ep^2)^\frac{p}{2}[\frac{\m B_p |\al|^\frac{p}{2} \psi^p {h'(r)}}{{h(r)}^{p(\al+1)}} - \frac{|\al|^\frac{p}{2} \psi^p {h'(r)} |\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}} + \frac{|\na^M\psi|^p}{\ep^p {h(r)}^{p\al}}], \nonumber \end{eqnarray} where $\m A_p = \max\{1,2^\frac{p-2}{2}\}$ and $\m B_p=\max\{1,2^{\frac{2-p}{2}}\}$. Inequality (\ref{h'-without-p}) holds since $h''\le 0$, hence $h'(r)^p\le h'(r)\le h'(0)=1$, and Inequality (\ref{a0}) holds since, by (\ref{num-property}), one has $|\bar\na r^\top|^p+|\bar\na r^\perp|^p\le \max\{1,2^{\frac{2-p}{2}}\}$.
Thus, using $(\ref{Michael-Simon-psi})$ and $(\ref{a0})$, we obtain \begin{eqnarray*} \frac{1}{S}[\int_M\frac{\psi^{p^*}}{{h(r)}^{p^*\al}}]^{p/p^*} &\le & \int_M \frac{\psi^p|H|^p}{p^p{h(r)}^{p\al}} \ + \ \\&& \hspace{-4.2cm} \m A_p(|\al|+\ep^2)^{\frac{p}{2}} \big[\,|\al|^{\frac{p}{2}}\m B_p \int_M \frac{\psi^p {h'(r)}}{{h(r)}^{p(\al+1)}} + \frac{1}{\ep^p} \int_M \frac{|\na^M\psi|^p}{{h(r)}^{p\al}} - |\al|^{\frac{p}{2}} \int_M \frac{\psi^p {h'(r)} |\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}}\big]. \end{eqnarray*} On the other hand, by using Theorem \ref{teo-hardy}, \begin{equation*}\label{a1} \int_M~\frac{\psi^p {h'(r)}}{{h(r)}^{p(\al+1)}} ~ \le A_{k,p,\al} \int_M [\frac{|\na^M \psi|^p}{{h(r)}^{p\al}}+\frac{\psi^p|H|^p}{p^p{h(r)}^{p\al}}] - B_{k,p,\al} \int_M \frac{\psi^p {h'(r)}|\bar\na r^\perp|^2}{{h(r)}^{p(\al+1)}}, \end{equation*} where $A_{k,p,\al} = \frac{\m A_p}{h'(r_0)^{p-1}}\frac{p^p}{(k-\ga)^p}$ and $B_{k,p,\al} = \frac{p^p h'(r_0)^{1-p}}{(k-\ga)^p}\frac{\ga[(k-\ga)h'(r_0)]^{p-1}}{p^{p-1}}=\frac{\ga p}{k-\ga}$. Thus, it holds \begin{eqnarray*} \frac{1}{S}[\int_M\frac{\psi^{p^*}}{{h(r)}^{p^*\al}}]^{p/p^*} &\le& \ C_{k,\al,p,\ep} \int_M \frac{|\na^M \psi|^p}{{h(r)}^{p\al}} + D_{k,p,\al,\ep} \int_M \frac{\psi^p|H|^p}{p^p{h(r)}^{p\al}}\\ &&\ - \ E_{k,p,\al,\ep} \int_M \frac{\psi^p|\bar\na r^\perp|^2}{{h(r)}^{p(\al+1)}} - F_{k,p,\al,\ep} \int_M \frac{\psi^p|\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}} \nonumber, \end{eqnarray*} where \begin{eqnarray*} C_{k,p,\al,\ep} &=& \m A_p(|\al|+\ep^2)^{\frac{p}{2}}(\,|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} + \ep^{-p}) \\&=& \m A_p[(|\al|+\ep^2)^{\frac{p}{2}}\,|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} + (1+ |\al|\ep^{-2})^{\frac{p}{2}}] \\ D_{k,p,\al,\ep} &=& \m A_p(|\al|+\ep^2)^{\frac{p}{2}}|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} + 1 \le C_{k,p,\al,\ep} \\ E_{k,p,\al,\ep} &=& \m A_p(|\al|+\ep^2)^{\frac{p}{2}}|\al|^{\frac{p}{2}} \m B_p B_{k,p,\al}. \\ F_{k,p,\al,\ep} &=& \m A_p(|\al|+\ep^2)^{\frac{p}{2}}|\al|^{\frac{p}{2}}. 
\end{eqnarray*} Consider the function $g(\ep)= C_{k,p,\al,\ep}$. We have \begin{eqnarray*} \m A_p^{-1}g'(\ep) &=& \frac{p}{2}(|\al|+\ep^2)^{\frac{p}{2}-1}\, 2\ep \,|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} + \frac{p}{2}(1+|\al|\ep^{-2})^{\frac{p}{2}-1}(-2|\al|\ep^{-3}) \\&=& p(|\al|+\ep^2)^{\frac{p}{2}-1}\, [\ep \,|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} - \ep^{2-p}|\al|\ep^{-3}] \\&=& p(|\al|+\ep^2)^{\frac{p}{2}-1}\ep\,|\al| [\,|\al|^{\frac{p-2}{2}}\m B_p A_{k,p,\al} - \ep^{-2-p}]. \end{eqnarray*} Thus, $g'(\ep)=0$ iff $\ep^{-2-p} = |\al|^{\frac{p-2}{2}}\m B_p A_{k,p,\al}$, i.e., $\ep=[|\al|^{\frac{p-2}{2}}\m B_p A_{k,p,\al}]^{\frac{-1}{p+2}}$. Hence, it is simple to see that $g(\ep)$ reaches its minimum at $\ep_0= [|\al|^{\frac{p-2}{2}}\m B_p A_{k,p,\al}]^{\frac{-1}{p+2}}$. We obtain \begin{eqnarray*} \Ga_{k,p,\al}&:=& C_{k,p,\al,\ep_0} = \m A_p(|\al|+\ep_0^2)^{\frac{p}{2}}(\,|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} + \ep_0^{-p}) \\&=& \m A_p(|\al|+[|\al|^\frac{p-2}{2}\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}}\,)^{\frac{p}{2}}(\,|\al|^{\frac{p}{2}}\m B_p A_{k,p,\al} + [|\al|^{\frac{p-2}{2}}\m B_p A_{k,p,\al}]^{\frac{p}{p+2}}) \\&=& \m A_p|\al|^{\frac{p}{2}}(1+|\al|^{\frac{-2p}{2+p}}[\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}}\,)^{\frac{p}{2}}(\,1 + |\al|^{\frac{-2p}{p+2}}[\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}}) |\al|^{\frac{p}{2}}\m B_pA_{k,p,\al} \\&=& \m A_p|\al|^{\frac{p}{2}}(1+|\al|^{\frac{-2p}{2+p}}[\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}}\,)^{\frac{p+2}{2}}|\al|^{\frac{p}{2}}\m B_pA_{k,p,\al} \\&=& \m A_p|\al|^{\frac{p}{2}} |\al|^{-p}[\m B_pA_{k,p,\al}]^{-1} (1+|\al|^{\frac{2p}{2+p}}[\m B_p A_{k,p,\al}]^{\frac{2}{p+2}}\,)^{\frac{p+2}{2}}|\al|^{\frac{p}{2}}\m B_pA_{k,p,\al} \\&=& \m A_p (1+|\al|^{\frac{2p}{2+p}}[\m B_p A_{k,p,\al}]^{\frac{2}{p+2}}\,)^{\frac{p+2}{2}} \\&=& \m A_p \big[1+|\al|^{\frac{2p}{2+p}}2^\frac{|p-2|}{(p+2)}{h'(r_0)}^{\frac{2(1-p)}{2+p}}(\frac{p}{k-\ga})^\frac{2p}{p+2}\,\big]^{\frac{p+2}{2}}.
\end{eqnarray*} The last equality holds since $\m A_p \m B_p = \max\{1,2^{\frac{p-2}{2}}\} \max\{1,2^{\frac{2-p}{2}}\}=2^{\frac{|p-2|}{2}}$. We also have \begin{eqnarray*} \Phi_{k,p,\al} &:=& E_{k,p,\al,\ep_0}=\m A_p(|\al|+\ep_0^2)^{\frac{p}{2}}|\al|^{\frac{p}{2}} \m B_p B_{k,p,\al} \\&=& \m A_p(|\al|+[|\al|^{\frac{p-2}{2}}\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}})^{\frac{p}{2}}|\al|^{\frac{p}{2}} \m B_p B_{k,p,\al} \\&=& \m A_p(|\al|^2+|\al|^{\frac{4}{2+p}}[\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}})^{\frac{p}{2}}\m B_p B_{k,p,\al} \\&=& \m A_p (|\al|^{\frac{2p}{2+p}}+[\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}})^{\frac{p}{2}}|\al|^{\frac{2p}{2+p}}\m B_p B_{k,p,\al} \\&=& 2^{\frac{|p-2|}{2}}\frac{\ga p}{k-\ga} (|\al|^{\frac{2p}{2+p}}+2^{\frac{-|p-2|}{p+2}}h'(r_0)^{\frac{2(p-1)}{p+2}}(\frac{p}{k-\ga})^{\frac{-2p}{p+2}})^{\frac{p}{2}}|\al|^{\frac{2p}{2+p}} \end{eqnarray*} and \begin{eqnarray*} \De_{k,p,\al} &:=& F_{k,p,\al,\ep_0} = \m A_p(|\al|+\ep_0^2)^{\frac{p}{2}}|\al|^{\frac{p}{2}} \\&=& \m A_p (|\al|^{\frac{2p}{2+p}}+[\m B_p A_{k,p,\al}]^{\frac{-2}{p+2}})^{\frac{p}{2}}|\al|^{\frac{2p}{2+p}} \\&=& \max\{1,2^{\frac{p-2}{2}}\}(|\al|^{\frac{2p}{2+p}}+2^{\frac{-|p-2|}{p+2}}h'(r_0)^{\frac{2(p-1)}{p+2}}(\frac{p}{k-\ga})^{\frac{-2p}{p+2}})^{\frac{p}{2}}|\al|^{\frac{2p}{2+p}}. \end{eqnarray*} Thus, it follows that \begin{eqnarray*} \frac{1}{S}[\int_M\frac{\psi^{p^*}}{{h(r)}^{p^*\al}}]^{p/p^*} &\le& \Ga_{k,p,\al} \int_M [\frac{|\na^M \psi|^p}{{h(r)}^{p\al}}+\frac{\psi^p|H|^p}{p^p{h(r)}^{p\al}}]\\ && \hspace{-1cm} - \ \Phi_{k,p,\al} \int_M \frac{\psi^p{h'(r)}|\bar\na r^\perp|^2}{{h(r)}^{p(\al+1)}} - \De_{k,p,\al}\int_M \frac{\psi^p{h'(r)}|\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}}. \end{eqnarray*} Now, assume $\xi_0\in M$. As we have observed in the proof of Proposition \ref{teo-hardy-norm}, it holds $\vol_M ([r<2\de]\cap M)=O(\de^k)$ and $\vol_{\p M} ({\p M\cap [r<2\de]})=O(\de^{k-1})$, as $\de>0$ goes to $0$.
Consider the cut-off function $\eta=\eta_\de\in C^\infty(M)$ satisfying: \begin{eqnarray}\label{cut-off-2} && 0\le \eta \le 1, \mbox{ in } M; \nonumber\\ &&\eta = 0, \mbox{ in } [\,r<\de]\cap M, \ \mbox{ and } \eta = 1, \mbox{ in } [\,r>2\de]\cap M;\\ && |\na^M \eta|\le L/\de, \nonumber \end{eqnarray} for some constant $L>1$ that does not depend on $\de$. Let $\phi=\eta\psi$. Since $\phi\in C^1(M)$ and $\xi_0\notin M'=\spp(\phi)$, it holds \begin{eqnarray*} \frac{1}{S}[\int_M\frac{\phi^{p^*}}{{h(r)}^{p^*\al}}]^{p/p^*} + \Phi_{k,p,\al}\int_M \frac{\phi^p{h'(r)}|\bar\na r^\perp|^2}{{h(r)}^{p(\al+1)}} + \De_{k,p,\al}\int_M \frac{\phi^p{h'(r)}|\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}} \\&& \hspace{-6.0cm} \le \Ga_{k,p,\al} \int_M[\frac{|\na^M\phi|^p}{{h(r)}^{p\al}}+ \frac{\phi^p|H|^p}{p^p{h(r)}^{p\al}}]. \end{eqnarray*} Notice that, \begin{eqnarray}\label{est-grad-0-phi-sob-1} \int_M \frac{|\na^M\phi|^p}{{h(r)}^{\al p}} &=& \int_M \frac{|\eta \na^M\psi + \psi \na^M \eta|^p}{{h(r)}^{\al p}} \nonumber\\&=& \int_{[r>2\de]\cap M} \frac{|\na^M\psi|^p}{{h(r)}^{\al p}} + \int_{[\de<r<2\de]\cap M} \frac{|\eta \na^M\psi + \psi \na^M \eta|^p}{{h(r)}^{\al p}}, \end{eqnarray} and, since $h(\de)=O(\de)$, as $\de\to 0$, \begin{eqnarray}\label{est-grad-0-phi-sob-2} \int_{[\de<r<2\de]\cap M} \frac{|\eta \na^M\psi + \psi \na^M \eta|^p}{{h(r)}^{\al p}} &\le& \int_{[\de<r<2\de]\cap M} (|\na^M\psi|^p+ \psi^p O(\frac{1}{\de^p}))O(\frac{1}{\de^{\al p}}) \nonumber\\&=& \int_{[\de<r<2\de]\cap M}(O(\frac{1}{\de^{\al p}}) + O(\frac{\de^{-p}}{\de^{\al p}})) \nonumber\\&=& (O(\frac{1}{\de^{\al p}}) + O(\frac{\de^{-p}}{\de^{\al p}}))O(\de^{k}) \nonumber\\&=& O(\de^{k-\al p}) + O(\de^{k-p(\al+1)}) = O(\de^{k-\ga}), \end{eqnarray} as $\de\to 0$, since $k-\ga>0$.
Hence, \begin{eqnarray*} \frac{1}{S}[\int_{[r>2\de]\cap M} \frac{\psi^{p^*}}{{h(r)}^{p^*\al}}]^{p/p^*} &&\\&& \hspace{-3cm} + \ \Phi_{k,p,\al}\int_{[r>2\de]\cap M} \frac{\psi^p{h'(r)}|\bar\na r^\perp|^2}{{h(r)}^{p(\al+1)}} + \De_{k,p,\al}\int_{[r>2\de]\cap M} \frac{\psi^p {h'(r)}|\bar\na r^\perp|^p}{{h(r)}^{p(\al+1)}} \\ && \hspace{-3cm}\le \ \Ga_{k,p,\al} \int_{M}[\frac{|\na^M\psi|^p}{{h(r)}^{p\al}}+ \frac{\psi^p|H|^p}{p^p{h(r)}^{p\al}}] + O(\de^{k-\ga}). \end{eqnarray*} Since $k-\ga>0$, taking $\de\to 0$, Theorem \ref{teo-sob} follows. \end{proof} As a corollary, we have the weighted Hoffman-Spruck type inequality for submanifolds in Cartan-Hadamard manifolds. \begin{corollary} Assume $\bar M$ is a Cartan-Hadamard manifold. We fix any $\xi_0\in \bar M$ and let $r=r_{\xi_0}=d_{\bar M}(\cdot\,,\xi_0)$. Let $1\le p<k$ and $-\infty<\al<\frac{k-p}{p}$. Then, for all $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, it holds \begin{eqnarray*} \frac{1}{S}[\int_M\frac{|\psi|^{p^*}}{{r}^{p^*\al}}]^{p/p^*} + \Phi_{k,p,\al}\int_M \frac{|\psi|^p|\bar\na r^\perp|^2}{{r}^{p(\al+1)}} + \De_{k,p,\al} \int_M \frac{|\psi|^p|\bar\na r^\perp|^p}{{r}^{p(\al+1)}} \\&& \hspace{-4.5cm} \le \Ga_{k,p,\al} \int_M[\frac{|\na^M\psi|^p}{{r}^{p\al}}+ \frac{|\psi|^p|H|^p}{p^p{r}^{p\al}}]. \end{eqnarray*} Here, $p^*=\frac{kp}{k-p}$, $S=S_{k,p}>0$ depends only on $k$ and $p$, and $\Ga_{k,p,\al}$, $\Phi_{k,p,\al}$ and $\De_{k,p,\al}$ are defined as in Theorem \ref{teo-sob}, with $h'(r_0)=1$. \end{corollary} \section{The Caffarelli-Kohn-Nirenberg inequality for submanifolds} Inspired by an argument in Bazan and Neves \cite{BN}, we will obtain the Caffarelli-Kohn-Nirenberg type inequality for submanifolds (see Theorem \ref{CKN-ineq-teo} below) by interpolating Theorem \ref{teo-hardy} and Theorem \ref{teo-sob}. In order to do that, first, we will test the interpolation argument to prove a particular case of our Caffarelli-Kohn-Nirenberg type inequality (compare with Theorem \ref{teo-sob} above).
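It may help to record the endpoint behaviour of the balance condition that drives the interpolation; this is a sketch of the exponent arithmetic only, with $s$, $c$ and $\ga$ as in the next theorem.

```latex
% Endpoint check for the balance condition
%   1/s = 1/p - ((\al+1)-\ga)/k = 1/p^* + (\ga-\al)/k,  with  p^* = kp/(k-p).
% If $\ga=\al+1$ (pure Hardy weight), then $1/s=1/p$, so $s=p$ and $c=0$,
% and the interpolated inequality reduces to the Hardy inequality of
% Theorem \ref{teo-hardy}. If $\ga=\al$ (pure Sobolev weight), then
\begin{equation*}
  \frac{1}{s} \;=\; \frac{1}{p}-\frac{1}{k} \;=\; \frac{k-p}{kp} \;=\; \frac{1}{p^*},
\end{equation*}
% so $s=p^*$ and $c=1$, recovering the weighted Hoffman-Spruck inequality of
% Theorem \ref{teo-sob}.
```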
We prove the following. \begin{theorem}\label{sob-weighted teo} Fixed $\xi_0\in \bar M$, assume $(\bar K_{\rad})_{\xi_0}\le \m K(r)$, where $r=d_{\bar M}(\cdot\,,\xi_0)$. Assume $M$ is contained in $\m B=\m B_{r_0}(\xi_0)$, with $r_0=\min\{\bar r_0(\m K), Inj_{\bar M}(\xi_0)\}$. Let $1\le p<k$ and $-\infty<\al<\frac{k-p}{p}$. Let $s>0$ and $\al\le \ga\le \al+1$ satisfy the balance condition: $$\frac{1}{s}=\frac{1}{p}-\frac{(\al+1)-\ga}{k}=\frac{1}{p^*}+\frac{\ga-\al}{k}.$$ We write $s=(1-c)p + c p^*$, for some $c\in [0,1]$. Then, for all $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, it holds \begin{equation*} [\int_M \frac{|\psi|^s}{{h(r)}^{s\ga}}]^{\frac{p}{s}}\le (\frac{\La}{h'(r_0)})^{\frac{p(1-c)}{s}}(S\,\Ga)^{\frac{p^*c}{s}}\int_M [\frac{|\na^M\psi|^p}{{h(r)}^{p\al}}+\frac{|\psi|^p|H|^p}{p^p{h(r)}^{p\al}}], \end{equation*} provided either $k<7$ or $\vol(M)<\m D$, where $0<\m D\le +\infty$ is a constant depending only on $r_0$ and $Inj_{\bar M}(M)$. Here, $S>0$ is a constant depending only on $k$ and $p$, and \begin{eqnarray*} \La&=& \max\{1,2^{\frac{p-2}{2}}\} \frac{p^p h'(r_0)^{-p}}{[k-p(\al+1)]^p},\\ \Ga&=& \max\{1,2^{\frac{p-2}{2}}\} h'(r_0)^{1-p}\big[h'(r_0)^\frac{2(p-1)}{p+2}+|\al|^{\frac{2p}{2+p}}2^\frac{|p-2|}{(p+2)}(\frac{p}{k-p(\al+1)})^\frac{2p}{p+2}\,\big]^{\frac{p+2}{2}}. \end{eqnarray*} \end{theorem} \begin{proof} We write $\ga=(1-\te)(\al+1)+\te \al$, for some $\te\in [0,1]$. Since $s=(1-c)p+cp^*$, with $c\in [0,1]$, by the balance condition, it holds $$c=\frac{\te p}{\te p+ (1-\te)p^*}=\frac{\te(k-p)}{k-\te p},$$ which implies $\te(1-c)p=(1-\te)c p^*$. Hence, after a straightforward computation, one has $s\ga = p(1-c)(\al+1) + p^* c\al$.
By the H\"{o}lder inequality, \begin{eqnarray} \int_M\frac{\psi^s}{{h(r)}^{s\ga}}&=&\int_M\frac{\psi^{p(1-c) + p^*c}}{{h(r)}^{p(1-c)(\al+1) + p^*c\al}} = \int_M\frac{\psi^{p(1-c)}}{{h(r)}^{p(1-c)(\al+1)}}\frac{\psi^{ p^*c}}{{h(r)}^{p^*{c\al}}} \nonumber\\&\leq& [\int_M(\frac{\psi^{p(1-c)}}{{h(r)}^{p(1-c)(\al+1)}})^{\frac{1}{1-c}}]^{1-c}[\int_M(\frac{\psi^{ p^*c}}{{h(r)}^{p^*c\al}})^\frac{1}{c}]^c \nonumber\\&\le& [\frac{1}{{h'(r_0)}} \int_M\frac{\psi^{p}h'(r)}{{h(r)}^{p(\al+1)}}]^{1-c}[\int_M\frac{|\psi|^{ p^*}}{{h(r)}^{p^* \al}}]^c.\nonumber \end{eqnarray} The last inequality holds since $1=h'(0)\ge h'(r)\ge h'(r_0)$. Thus, using Theorem \ref{teo-hardy} and Theorem \ref{teo-sob}, \begin{eqnarray*} [\int_M\frac{\psi^s}{{h(r)}^{s\ga}}]^{\frac{p}{s}} &\leq& [\La\int_M (\frac{|\na^M \psi|^p}{{h(r)}^{p\al}}+\frac{\psi^p|H|^p}{p^p{h(r)}^{p\al}})]^{\frac{p(1-c)}{s}} \times \nonumber\\&& \hspace{2cm}\times \, [S\,\Ga\int_M (\frac{|\na^M \psi|^p}{{h(r)}^{p\al}}+\frac{\psi^p|H|^p}{{h(r)}^{p\al}})]^{\frac{p^*c}{s}} \nonumber \\ &\leq& \La^{\frac{p(1-c)}{s}}(S\,\Ga)^{\frac{p^*c}{s}}\int_M(\frac{|\na^M \psi|^p}{{h(r)}^{p\al}}+\frac{\psi^p|H|^p}{{h(r)}^{p\al}}),\nonumber \end{eqnarray*} where $\La=\La_{k,p,\al}=\max\{1,2^{\frac{p-2}{2}}\} \frac{p^p h'(r_0)^{-p}}{[k-p(\al+1)]^p}$ and $\Ga=\Ga_{k,p,\al}$ is given as in Theorem \ref{teo-sob}. \end{proof} Now, we will state our Caffarelli-Kohn-Nirenberg type inequality for submanifolds. \begin{theorem}\label{CKN-ineq-teo} Fixed $\xi_0\in \bar M$, assume $(\bar K_{\rad})_{\xi_0}\le \m K(r)$, where $r=d_{\bar M}(\cdot\,,\xi_0)$. Assume $M$ is contained in $\m B=\m B_{r_0}(\xi_0)$, with $r_0=\min\{\bar r_0(\m K), Inj_{\bar M}(\xi_0)\}$. Let $1\le p<k$ and $-\infty<\al<\frac{k-p}{p}$.
Furthermore, let $q>0,t>0$ and $\be,\ga,\si$ satisfying \begin{enumerate}[(i)] \item\label{convex} $\ga$ is a convex combination, $\ga=a\si + (1-a)\be$, for some $a\in [0,1]$ and $\al\le \si\le \al+1$; \item\label{balance1} Balance condition: $\frac{1}{t}-\frac{\ga}{k}=a(\frac{1}{p}-\frac{\al+1}{k})+(1-a)(\frac{1}{q}-\frac{\be}{k})$. \end{enumerate} Then, for all $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, it holds \begin{equation}\label{CKN-inequality} [\int_M \frac{|\psi|^t}{{h(r)}^{\ga t}}]^\frac{1}{t} \le C\big[\int_M (\frac{|\na^M \psi|^p}{{h(r)}^{\al p}} + \frac{|\psi|^p|H|^p}{{h(r)}^{\al p}})\big]^{\frac{a}{p}}[\int_M \frac{|\psi|^q}{{h(r)}^{\be q}}]^{\frac{1-a}{q}}, \end{equation} provided either $k<7$ or $\vol(M)<\m D$, being $0<\m D\le +\infty$ a constant depending only on $r_0$ and $Inj_{\bar M}(M)$. Here, $$C=(\frac{\La}{h'(r_0)})^{\frac{p(1-c)}{s}}(S\,\Ga)^{\frac{p^*c}{s}},$$ where $c\in [0,1]$ and $s\in [p,p^*]$ depend only on the parameters $p,k,\al$ and $\si$, $S$ depends only on $k$ and $p$, and $\La$ and $\Ga$ are defined as in Theorem \ref{sob-weighted teo}. \end{theorem} \begin{proof} If $a=1$ then $\al\le \ga=\si \le \al+1$ and $\frac{1}{t}=\frac{1}{p}-\frac{(\al+1)-\ga}{k}=\frac{1}{p^*}+\frac{\ga-\al}{k}$, in particular, $p\le t\le p^*$. Thus, Theorem \ref{CKN-ineq-teo} follows from Theorem \ref{sob-weighted teo}. If $a=0$ then $\ga=\be$ and $q=t$, hence there is nothing to do. From now on, we will assume $0<a<1$. By \ref{convex} and \ref{balance1}, we obtain \begin{eqnarray}\label{eq-1/r} \frac{1}{t} &=& \frac{\ga}{k} + a(\frac{1}{p}-\frac{\al+1}{k})+(1-a)(\frac{1}{q}-\frac{\be}{k}) \nonumber\\&=& \frac{a\si + (1-a)\be}{k} + a(\frac{1}{p}-\frac{\al+1}{k})+(1-a)(\frac{1}{q}-\frac{\be}{k}) \nonumber\\&=& a(\frac{1}{p}-\frac{(\al+1)-\si}{k}) + \frac{1-a}{q} \nonumber\\&=& \frac{a}{s} + \frac{1-a}{q}, \end{eqnarray} where $\frac{1}{s}=\frac{1}{p}-\frac{(\al+1)-\si}{k}=\frac{1}{p^*} + \frac{\si-\al}{k}$. 
Hence, $s=\frac{kp}{k-p[(\al+1)-\si]}\in [p,p^*]$. We write \begin{equation}\label{exp-r} t=(1-b)q+bs. \end{equation} If $s=q$, we take $b=a$. If $s\neq q$ then, by (\ref{eq-1/r}), $t=q+b(s-q) = (\frac{a}{s} + \frac{1-a}{q})^{-1}=\frac{sq}{aq+(1-a)s}$. Hence, \begin{eqnarray}\label{exp-b} b&=& \frac{t-q}{s-q} = (\frac{sq}{aq+(1-a)s}-q)\frac{1}{s-q} = (\frac{sq -q(aq+(1-a)s)}{aq+(1-a)s})\frac{1}{s-q} \nonumber\\&=& (\frac{sq - aq^2 - qs + asq}{aq+(1-a)s})\frac{1}{s-q} = (\frac{ s-q}{aq+(1-a)s})\frac{aq}{s-q} \nonumber\\&=& \frac{aq}{aq+(1-a)s}. \end{eqnarray} Thus, $b=\frac{aq}{aq+(1-a)s}\in [0,1]$, independently of whether $s=q$ or not. In particular, $(1-b)=\frac{(1-a)s}{aq+(1-a)s}$, hence $(1-b)aq + (1-b)(1-a)s = (1-a)s$, which implies, \begin{equation}\label{1-b-eq} (1-b)aq=(1-a)bs. \end{equation} Thus, it holds \begin{eqnarray}\label{exp-ga-b} \ga t &=& (a\si+(1-a)\be)((1-b)q+bs) \nonumber\\&=& [(1-b)aq]\si + abs\si + (1-a)(1-b)q\be + [(1-a)b s] \be \nonumber\\&=& [(1-a)bs]\si + abs\si + (1-a)(1-b)q\be + [(1-b)aq]\be \nonumber\\&=& bs\si + (1-b)q\be. \end{eqnarray} By (\ref{exp-r}) and (\ref{exp-ga-b}), \begin{eqnarray}\label{CKN-est-sg} [\int_M \frac{|\psi|^t}{{h(r)}^{\ga t}}]^\frac{1}{t} &=& [\int_M \frac{|\psi|^{(1-b)q+bs}}{{h(r)}^{(1-b)q\be+b s\si}}]^\frac{1}{t} = [\int_M \frac{|\psi|^{bs}}{{h(r)}^{b s\si}}\frac{|\psi|^{(1-b)q}}{{h(r)}^{(1-b)q\be}}]^\frac{1}{t} \nonumber\\&\le& [\int_M (\frac{|\psi|^{bs}}{{h(r)}^{b s\si}})^\frac{1}{b}]^\frac{b}{t}[\int_M(\frac{|\psi|^{(1-b)q}}{{h(r)}^{(1-b)q\be}})^\frac{1}{1-b}]^\frac{1-b}{t} \nonumber\\&& \hspace{-1.2cm}= \ [\int_M \frac{|\psi|^{s}}{{h(r)}^{s\si}}]^\frac{b}{t}[\int_M\frac{|\psi|^{q}}{{h(r)}^{q\be}}]^\frac{1-b}{t} = [\int_M \frac{|\psi|^{s}}{{h(r)}^{s\si}}]^\frac{a}{s}[\int_M\frac{|\psi|^{q}}{{h(r)}^{q\be}}]^\frac{1-a}{q}.
\end{eqnarray} The last equality holds since, by (\ref{eq-1/r}) and (\ref{exp-b}), we obtain $\frac{b}{t}=(\frac{aq}{aq+(1-a)s})(\frac{a}{s} + \frac{1-a}{q})=\frac{a}{s}$ and $\frac{1-b}{t}=\frac{1-a}{q}$. Now, since $p\le s\le p^*$ satisfies $\frac{1}{s}=\frac{1}{p}-\frac{(\al+1)-\si}{k}$, the balance condition holds: \begin{equation*} \frac{1}{s}-\frac{\si}{k}=\frac{1}{p}-\frac{\al+1}{k}. \end{equation*} Write $s=(1-c)p+cp^*$, with $c\in [0,1]$. By Theorem \ref{sob-weighted teo}, \begin{equation*} [\int_M \frac{|\psi|^{s}}{{h(r)}^{s\si}}]^\frac{1}{s} \le C \big[\int_M (\frac{|\na^M \psi|^p}{{h(r)}^{\al p}} + \frac{|\psi|^p|H|^p}{{h(r)}^{\al p}})\big]^{\frac{1}{p}}, \end{equation*} where $C$ is given as in Theorem \ref{sob-weighted teo}. Theorem \ref{CKN-ineq-teo} is proved. \end{proof} As a corollary, we have the Caffarelli-Kohn-Nirenberg type inequality for submanifolds in Cartan-Hadamard manifolds. \begin{corollary}\label{CKN-CH-cor} Assume $\bar M$ is a Cartan-Hadamard manifold. We fix any $\xi_0\in \bar M$ and let $r=d_{\bar M}(\cdot\,,\xi_0)$. Let $1\le p<k$ and $-\infty<\al<\frac{k-p}{p}$. Furthermore, let $q>0,t>0$ and $\be,\ga,\si$ satisfying \begin{enumerate}[(i)] \item\label{convex} $\ga$ is a convex combination, $\ga=a\si + (1-a)\be$, for some $a\in [0,1]$ and $\al\le \si\le \al+1$; \item\label{balance1} Balance condition: $\frac{1}{t}-\frac{\ga}{k}=a(\frac{1}{p}-\frac{\al+1}{k})+(1-a)(\frac{1}{q}-\frac{\be}{k})$. \end{enumerate} Then, for all $\psi\in C^1(M)$, with $\psi=0$ on $\p M$, it holds \begin{equation}\label{CKN-inequality} [\int_M \frac{|\psi|^t}{{r}^{\ga t}}]^\frac{1}{t} \le C\big[\int_M (\frac{|\na^M \psi|^p}{{r}^{\al p}} + \frac{|\psi|^p|H|^p}{{r}^{\al p}})\big]^{\frac{a}{p}}[\int_M \frac{|\psi|^q}{{r}^{\be q}}]^{\frac{1-a}{q}}.
\end{equation} Here, $$C=\La^{\frac{p(1-c)}{s}}(S\,\Ga)^{\frac{p^*c}{s}},$$ where $c\in [0,1]$ and $s\in [p,p^*]$ depend only on the parameters $p,k,\al$ and $\si$, $S$ depends only on $k$ and $p$, and \begin{eqnarray*} \La&=& \max\{1,2^{\frac{p-2}{2}}\} \frac{p^p}{[k-p(\al+1)]^p},\\ \Ga&=& \max\{1,2^{\frac{p-2}{2}}\}\big[1+|\al|^{\frac{2p}{2+p}}2^\frac{|p-2|}{(p+2)}(\frac{p}{k-p(\al+1)})^\frac{2p}{p+2}\,\big]^{\frac{p+2}{2}}. \end{eqnarray*} \end{corollary} \begin{example} There are some inequalities that derive from Theorem \ref{CKN-ineq-teo}. For the sake of simplicity, we assume $\bar M$ is a Cartan-Hadamard manifold. Fix any $\xi_0\in \bar M$ and let $r=d_{\bar M}(\cdot\,,\xi_0)$. By Corollary \ref{CKN-CH-cor}, there exists a constant $C$, depending only on the parameters $k,p,q,t,\ga,\al$ and $\be$, such that, for all $\psi\in C^1(M)$ with $\psi=0$ on $\p M$, the following inequalities hold. \\ \\ 1 - The weighted Michael-Simon-Sobolev inequality (compare with Theorem \ref{teo-sob} and Theorem \ref{sob-weighted teo}) is obtained from Theorem \ref{CKN-ineq-teo} by taking $a=1$ (hence $\ga=\si$). In particular, if $a=1$ and $\al=0$ then, for all $\ga\in [0,1]$ and $t>0$ satisfying $\frac{1}{t}-\frac{\ga}{k}=\frac{1}{p^*}$, it holds \begin{equation*} [\int_M \frac{|\psi|^t}{r^{\ga t}}]^\frac{p}{t} \le C \int_{M} (|\na^M\psi|^p + |\psi|^p |H|^p). \end{equation*} \\ 2 - Hardy type inequality for submanifolds (compare with Theorem \ref{teo-hardy}). We take $a=1$ and $\ga=\al+1$. Hence, $\ga=\si$ and, by the balance condition, $t=p$. Thus, it holds \begin{equation*} \int_M \frac{|\psi|^p}{{r}^{(\al+1)p}} \le C \int_{M} (\frac{|\na^M\psi|^p}{{r}^{p\al}} + \frac{|\psi|^p|H|^p}{{r}^{p\al}}). \end{equation*} \\ 3 - Gagliardo-Nirenberg type inequality for submanifolds. We take $\al=\be=\si=0$.
We obtain $\ga=0$ and, for all $t>0$ satisfying $\frac{1}{t}=\frac{a}{p^*}+\frac{1-a}{q}$, with $a\in [0,1]$, it holds \begin{equation*} [\int_M |\psi|^t]^\frac{1}{t} \le C\big[\int_M (|\na^M \psi|^p + |\psi|^p|H|^p)\big]^{\frac{a}{p}}[\int_M |\psi|^q]^{\frac{1-a}{q}}. \end{equation*} In particular, if we take $k\ge 3$, $p=2$, $q=1$, and $a={2}/(2+\frac{4}{k})$, then $\frac{1}{t}=\frac{2}{2+\frac{4}{k}} \frac{k-2}{2k} + \frac{4}{k}\frac{1}{2+\frac{4}{k}} = \frac{k-2}{2(k+2)}+\frac{2}{k+2} = \frac{1}{2}$, and the following Nash type inequality for submanifolds holds \begin{equation*} [\int_M |\psi|^2]^\frac{1}{2} \le C\big[\int_M (|\na^M \psi|^2 + |\psi|^2|H|^2)\big]^{\frac{k}{2k+4}}[\int_M |\psi|]^{\frac{2}{k+2}}. \end{equation*} 4 - Heisenberg-Pauli-Weyl type inequality for submanifolds. We consider $k\ge 3$ and take $t=2$, $p=q=2$, $\ga=\al=0$, $\be=-1$ and $a=\frac{1}{2}$. The parameter conditions in Theorem \ref{CKN-ineq-teo} are satisfied: indeed, $\ga=\frac{1}{2}\si+\frac{1}{2}\be$ with $\si=1\in[\al,\al+1]$, and $\frac{1}{t}-\frac{\ga}{k}=\frac{1}{2}=\frac{1}{2}(\frac{1}{2}-\frac{1}{k})+\frac{1}{2}(\frac{1}{2}+\frac{1}{k})$. So, we obtain \begin{equation*} [\int_M |\psi|^2]^{\frac{1}{2}} \le C [\int_M (|\na^M \psi|^2 + |\psi|^2|H|^2)]^{\frac{1}{4}}[\int_M r^2|\psi|^2]^{\frac{1}{4}}. \end{equation*} \end{example} \section*{Acknowledgements} The second author thanks his friends Wladimir Neves and Aldo Bazan for their suggestions and comments.
TITLE: Schrodinger basis kets with Time-dependent Hamiltonian QUESTION [5 upvotes]: I was reading through the proof of the Adiabatic Theorem (in Sakurai) and I realised I'm not quite sure how Schrodinger basis kets behave when we have a time-dependent Hamiltonian. I know that with a time-independent Hamiltonian the basis kets don't change in the Schrodinger Picture. So if $|n;t\rangle$ are the energy eigenkets of $H(t)$ at time $t$ and $|\alpha;t\rangle$ is an arbitrary state at time $t$, is the following at all true? \begin{align*} |\alpha;t\rangle = \sum_n c_n(t)|n;t\rangle = \sum_n c_n(t) e^{i\theta_n(t)}|n;t_0\rangle \end{align*} where $\theta_n(t) = -\frac{1}{\hbar}\int_{t_0}^t H(t')\,dt'$ and $e^{i\theta_n(t)}$ is a time-evolution operator. Wikipedia and Sakurai both have (each in different notation): \begin{align*} |\alpha;t\rangle = \sum_n c_n(t) e^{i\theta_n(t)}|n;t\rangle \end{align*} I feel like I'm not understanding this properly at all. REPLY [4 votes]: The basis of the Hilbert space in Schrödinger's picture is assumed to be time-independent regardless of any properties of the Hamiltonian. The Hamiltonian is just another operator. If the Hamiltonian is time-dependent, its eigenstates and eigenvalues are obviously time-dependent, too. Both equations you write down only express the fact that the basis of eigenstates of $H(t)$ is still a basis, so a general ket vector, including the actual state vector of the system, may be expanded as a linear superposition of these basis vectors with some general complex coefficients $c_n(t)$. The two expansions only differ by the phase one includes into the coefficients $c_n(t)$ or into the basis vectors $|n;t\rangle$. One convention includes the phase $\exp(i\theta_n(t))$, another one doesn't, and so on. Obviously, there is no "universally mandatory" rule that would dictate the right phase of these vectors so there's some freedom about the notation. Note that a phase factor times an eigenstate is still an eigenstate.
Whatever your convention for the phases is, if you carefully follow the maths and remember what the symbols mean – the defining equations – you will be able to derive the invariant claims about the adiabatic theorem. The Wikipedia-Sakurai conventions treat the phases wisely and naturally, to speed up the derivations.
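Concretely, in Sakurai's notation the dynamical phase is $\theta_n(t) = -\frac{1}{\hbar}\int_{t_0}^t E_n(t')\,dt'$, with $E_n(t)$ the instantaneous eigenvalue of $H(t)$ (so $\theta_n(t)$ is a number, not an operator). The two conventions are then related by \begin{align*} |\alpha;t\rangle = \sum_n \tilde c_n(t)\,|n;t\rangle = \sum_n c_n(t)\,e^{i\theta_n(t)}\,|n;t\rangle, \qquad \tilde c_n(t) := c_n(t)\,e^{i\theta_n(t)}, \end{align*} and no physical prediction depends on which factor the phase is absorbed into.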
\subsection{Construction - 2: General case}\label{sec:TGIIconstruction_general} We describe the generalizations of the basic algorithms that offer potential optimization and flexibility. \paragraph{Composing non-coprime degree encodings using ``ladders''.} In the basic setting of the composable encodings, the group operation is feasible only for encodings with relatively prime degrees. To support the composition of encodings whose degrees share a prime factor, say $\ell$, we can include in the public parameter the encodings of $\set{x^i}_{i\in[k]}$, where $x = [(\ell, b, \cdot)]$ and $k$ is a polynomial. This then supports the composition of two encodings whose sum of exponents on the degree $\ell$ is $\leq k$. Sampling ladders has the benefit of supporting a bounded number of shared-degree compositions. In some applications (e.g. in broadcast encryption, where the number of users is bounded a priori), ladders enable stateless encoding sampling algorithms. \begin{algorithm}[Sampling a ladder]\label{alg:ladder} Given a polynomially large prime $\ell$, a polynomial $k$, and an element $x = [(\ell, b, \cdot)]\in\CL(D)$, sample a ladder for degree $\ell$ of length $k$ as follows: \begin{enumerate} \item For $i = 1$ to $k$: compute $j_{i} := \act( \tau, j_{i-1}, x )$. (Recall $j_0$ is the identity from the public parameter.) \item Let $\lad(\ell) := ( j_1, ..., j_k )$. Include $\lad(\ell)$ in the public parameter. \end{enumerate} \end{algorithm} For the convenience of the description of the group operation with ladders, let $\iota(\ell,L)$ take as input an integer $\ell$ and a list $L$, and output the index of $\ell$ in $L$. \begin{algorithm}[Composition with ladders] The algorithm $\comp(\PP, \enc(x), \enc(y))$ parses $\enc(x) = (L_x; T_{x,1}, ..., T_{x,w_x})$, $\enc(y) = (L_y; T_{y,1}, ..., T_{y,w_y})$, and produces the composable encoding of $z = x\circ y$ as follows: \begin{enumerate} \item Let $L_z = L_x\cup L_y$.
\item For all $\ell \in L_x \setminus (L_{x}\cap L_{y})$, let $T_{z,\iota(\ell,L_z)} = T_{x,\iota(\ell,L_x)}$; for all $\ell \in L_y \setminus (L_{x}\cap L_{y})$, let $T_{z,\iota(\ell,L_z)} = T_{y,\iota(\ell,L_y)}$. \item For all $\ell\in L_{x}\cap L_{y}$: \begin{itemize} \item If $|\lad(\ell)|\geq |T_{x,\iota(\ell,L_x)}|+|T_{y,\iota(\ell,L_y)}|$, then let $T_{z,\iota(\ell,L_z)}$ be the list of the first $|T_{x,\iota(\ell,L_x)}|+|T_{y,\iota(\ell,L_y)}|$ elements in $\lad(\ell)$. \item If $|\lad(\ell)| < |T_{x,\iota(\ell,L_x)}|+|T_{y,\iota(\ell,L_y)}|$, then the composition is infeasible. Return ``failure". \end{itemize} \item Output the composable encoding of $z$ as $\enc(z) = ( L_z; T_{z,1}, ..., T_{z,|L_z|} )$. \end{enumerate} \end{algorithm} \paragraph{Sample the composable encoding of a random element.} In the applications we are often required to sample (using the trapdoor) the composable encoding of a random element in $\G$ under the generation set $S = \set{ C_i := [(\ell_i, b_i, \cdot)] }_{i\in[w]}$. Let us call such an algorithm $\Random\Sam(\tau, S)$. An obvious instantiation of $\Random\Sam(\tau, S)$ is to simply choose a random element $x$ from $\G$ first, then run $\Trap\Sam(\PP, \tau, x)$ in Algorithm~\ref{alg:represent_enc}. Alternatively, we can pick a random exponent vector $\ary{e}\in [-B,B]^w$, and let $x = \prod_{i\in[w]}{C_i}^{e_i}$. For proper choices of $B$, $w$, and the set of ideals, $x$ heuristically has high min-entropy, but figuring out the exact distribution of $x$ is difficult in general. If $B$, $w$, and the set of ideals are chosen according to Lemma~\ref{lemma:expander}, then under GRH, $x$ is statistically close to uniform over $\CL(\OOO)$. \paragraph{Compressing the composable encoding using the partial extraction algorithm.} In the basic setting, the composition of composable encodings keeps growing in size. In some applications, it is tempting to compress the composable encodings as much as possible.
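To illustrate the bookkeeping in the composition algorithm above, consider a toy example with hypothetical parameters. Suppose $L_x = (3,5)$ with $|T_{x,1}|=2$, $|T_{x,2}|=1$, and $L_y = (5,7)$ with $|T_{y,1}|=2$, $|T_{y,2}|=1$. Then $L_z = (3,5,7)$; the lists for the unshared degrees $3$ and $7$ are copied from $\enc(x)$ and $\enc(y)$, respectively, and for the shared degree $5$ the composition is feasible if and only if $|\lad(5)|\geq |T_{x,2}|+|T_{y,1}| = 3$, in which case \[ T_{z,\iota(3,L_z)} = T_{x,1}, \qquad T_{z,\iota(7,L_z)} = T_{y,2}, \qquad T_{z,\iota(5,L_z)} = (j_1, j_2, j_3), \] where $(j_1, j_2, j_3)$ are the first three entries of $\lad(5)$.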
We provide a partial extraction algorithm, which takes two composable encodings $\enc(x), \enc(y)$ and the public parameter as inputs. It outputs a $j$-invariant $j_x$ as the canonical encoding of $x$, and a sequence of isogenies $\phi_y = (\phi_1, ..., \phi_n)$ such that $\phi_n\circ ...\circ\phi_1(j_x) = j_{x\circ y}$ (we abuse the notation by using the $j$-invariant to represent an elliptic curve with such a $j$-invariant), where $j_{x\circ y}$ is the canonical encoding of ${x\circ y}$, and $n$ is the number of the $j$-invariants in $\enc(y)$. In other words, $\phi_y$ represents the ideal class $C_y\in\G$ defined by $\enc(y)$. The one-sided partial extraction algorithm is useful in the situation where the encoding $\enc(x)$ is relatively long and needs to be compressed, while the encoding $\enc(y)$ is relatively short and does not need to be compressed. Now we present the partial extraction algorithm. The algorithm first runs the basic extraction algorithm $\convert(\PP, \cdot)$ on $\enc(x)$ (cf. Algorithm~\ref{alg:convert}), then produces $\phi_y$ using $\enc(y)$ and the intermediate information from $\convert(\PP, \enc(x))$. \begin{algorithm}\label{alg:partialconvert} $\Partial.\convert(\PP, \enc(x), \enc(y))$ maintains a pair of lists $(U, V)$, where $U$ stores a list of $j$-invariants $(j_1, ..., j_{|U|})$, and $V$ stores a list of degrees where the $i^{th}$ entry of $V$ is the degree of the isogeny between $j_{i}$ and $j_{i-1}$ (when $i=1$, $j_{i-1}$ is the $j_0$ in the public parameter). The lengths of $U$ and $V$ are always equal during the execution of the algorithm. The algorithm parses $\enc(x) = (L_x; T_{x,1}, ..., T_{x,w_x})$, $\enc(y) = (L_y; T_{y,1}, ..., T_{y,w_y})$, and proceeds as follows: \begin{enumerate} \item Initialization: Let $U := T_{x,1}$, $V := ( L_{x,1}, ..., L_{x,1} )$ of length $|T_{x, 1}|$. \item For $i = 2$ to $w_x$: \begin{enumerate} \item Set $u_\temp:= | U |$.
\item For $k = 1$ to $|T_{x,i}|$: \begin{enumerate} \item Let $t_{x,i,k,0}$ be the $k^{th}$ $j$-invariant in $T_{x,i}$, i.e. $j_{x,i,k}$; \item For $h = 1$ to $u_\temp$: \begin{itemize} \item If $k=1$, compute $t_{x,i,k,h}: = \gcdop(\PP, L_{x,i}, V_h; t_{x,i,k,h-1}, U_h)$; \item If $k>1$, compute $t_{x,i,k,h}: = \gcdop(\PP, L_{x,i}, V_h; t_{x,i,k,h-1}, t_{x,i, k-1,h})$; \end{itemize} \item Append $t_{x,i, k,u_\temp}$ to the list $U$, append $L_{x,i}$ to the list $V$. \end{enumerate} \end{enumerate} \item Let $j_x$ be the last entry of $U$. \item Initialize a counter $z := 1$. \item For $i = 1$ to $w_y$: \begin{enumerate} \item Set $u_\temp:= | U |$. \item For $k = 1$ to $|T_{y,i}|$: \begin{enumerate} \item Let $t_{y,i,k,0}$ be the $k^{th}$ $j$-invariant in $T_{y,i}$, i.e. $j_{y,i,k}$; \item For $h = 1$ to $u_\temp$: \begin{itemize} \item If $k=1$, compute $t_{y,i,k,h}: = \gcdop(\PP, L_{y,i}, V_h; t_{y,i, k,h-1}, U_h)$; \item If $k>1$, compute $t_{y,i,k,h}: = \gcdop(\PP, L_{y,i}, V_h; t_{y,i, k,h-1}, t_{y,i, k-1,h})$; \end{itemize} \item Let $j_\temp$ be the last entry in $U$. \item $(*)$ Produce an isogeny $\phi_z$ of degree $L_{y,i}$ such that $\phi_z(j_\temp) = t_{y,i, k,u_\temp}$. \item Increase the counter $z := z+1$. \item Append $t_{y,i, k,u_\temp}$ to the list $U$, append $L_{y,i}$ to the list $V$. \end{enumerate} \end{enumerate} \item Let $\phi_y := (\phi_1, ..., \phi_{n})$, where $n = z-1$. \item Output $j_x$ and $\phi_y$. \end{enumerate} \end{algorithm} Note that the operation $(*)$ can be performed efficiently due to Proposition~\ref{claim:kernelpoly}. The partial extraction algorithm is used in the directed transitive signature scheme to compress a composed signature. In fact, in the DTS scheme, we need to efficiently verify that $\phi_y$ does represent the ideal class $C_y\in\G$ defined by $\enc(y)$. We don't know how to verify that solely from $\enc(y)$ and the rational polynomial of $\phi_y$ over $\ZNZ$. 
(As mentioned in Remark~\ref{remark:isogenyoverZNZ}, it is not clear how to recover the explicit ideal class $C$ given an isogeny $\phi$ over $\ZNZ$. Of course we can recover the norm of the ideal from the degree of $\phi$, but normally there are two ideals of the same prime norm.) Instead we provide a succinct proof on the correctness of the execution of Algorithm~\ref{alg:partialconvert}. Let us recall the definition of a succinct non-interactive argument (SNARG). \begin{definition} Let $\CCC = \set{ C_\secp: \zo^{g(\secp)}\times \zo^{h(\secp)}\to \zo }_{\secp\in\N}$ be a family of Boolean circuits. A SNARG for the instance-witness relation defined by $\CCC$ is a tuple of efficient algorithms $(\Gen, \Prove, \Verify)$ defined as: \begin{itemize} \item $\Gen(1^\secp)$ take the security parameter $\secp$, outputs a common reference string $\CRS$. \item $\Prove(\CRS,x,w)$ takes the $\CRS$, the instance $x\in \zo^{g(\secp)}$, and a witness $w\in\zo^{h(\secp)}$, outputs a proof $\pi$. \item $\Verify(\CRS,x,\pi)$ takes as input an instance $x$, and a proof $\pi$, outputs 1 if it accepts the proof, and 0 otherwise. \end{itemize} It satisfies the following properties: \begin{itemize} \item Completeness: For all $x\in\zo^{g(\secp)}, w\in\zo^{h(\secp)}$ such that $C(x,w)=1$: \[ \Pr[ \Verify(\CRS,x,\Prove(\CRS,x,w)) = 1 ]=1 . \] \item Soundness: For all $x\in\zo^{g(\secp)}$ such that $C(x,w)=0$ for all $w\in\zo^{h(\secp)}$, for all polynomially bounded cheating prover $P^*$: \[ \Pr[ \Verify(\CRS,x, P^*(\CRS,x)) = 1 ]\leq \negl(\secp) . \] \item Succinctness: There exists a universal polynomial $Q$ (independent of $\CCC$) such that $\Gen$ runs in time $Q(\secp + \log |\CCC_\secp|)$, the length of the proof output by $\Prove$ is bounded by $Q(\secp + \log |\CCC_\secp|)$, and $\Verify$ runs in time $Q(\secp + g(\secp) + \log |\CCC_\secp|)$. \end{itemize} \end{definition} A construction of SNARG in the random oracle model is given by \cite{DBLP:journals/siamcomp/Micali00}. 
With a SNARG in hand, we can add in Algorithm~\ref{alg:partialconvert} a proof $\pi$ for the instance $(\enc(y), \phi_y)$ and statement ``there exists an encoding $\enc(x)$ such that $\phi_y$ is computed from running a circuit that instantiates Algorithm~\ref{alg:partialconvert} on inputs $\enc(x), \enc(y)$''. Here the public parameter $\PP$ is hardcoded in the circuit. The encoding $\enc(x)$ is the witness in the relation. The soundness of SNARG guarantees that $\phi_y$ is an isogeny that corresponds to an $\OOO$-ideal represented by $\enc(y)$. The succinctness of SNARG guarantees that the length of the proof $\pi$ is $\poly\log(|\enc(x)|)$ and the time to verify the proof is $\poly(|\enc(y)|, \log(|\enc(x)|))$. \paragraph{Alternative choices for the class group. } Ideally we would like to efficiently sample an imaginary quadratic order $\OOO$ of large discriminant $D$, together with the class number $h(D)$, a generation set $\set{ C_i }$ and a short basis $\mat{B}$ for $\Lattice_\OOO$, and two large primes $p$, $q$ as well as curves $E_{0,\F_p}, E_{0,\F_q}$ such that $\Endo(E_{0,\F_p})\simeq\Endo(E_{0,\F_q})\simeq \OOO$. In Section~\ref{sec:cryptanalysis} we will explain that if $|D|$ is polynomial then computing the group inversion takes polynomial time, so we are forced to choose a super-polynomially large $|D|$. On the other hand, there is no polynomial time solution for the task of choosing a square-free discriminant $D$ with a super-polynomially large $h(D)$ (see, for instance, \cite{DBLP:conf/asiacrypt/HamdyM00}). In the basic setting of the parameter, we have described a solution where the resulting $D$ is not square-free, of size $\approx\secp^{O(\log\secp)}$, and polynomially smooth. Here we provide the background if one would like to work with a square-free discriminant $D$. In this case, $\OOO$, the ring of integers of $\Q(\sqrt{D})$, is the maximal order of an imaginary quadratic field $K$. 
According to the Cohen-Lenstra heuristics \cite{cohenheuristics}, about $97.7575\%$ of the imaginary quadratic fields $K$ have the odd part of $\CL(\OOO_K)$ cyclic. If we choose $D$ such that $|D|\equiv 3\bmod 4$ and is a prime, then $h(D)$ is odd (by genus theory). So we might as well assume that $\CL(\OOO_K)$ is cyclic with odd order. For a fixed discriminant $D$, heuristically about half of the primes $\ell$ satisfies $\kron{D}{\ell} = 1$, so there are polynomially many ideals of polynomially large norm that can be used in the generation set. However, it is not clear how to efficiently choose $p$, $q$ and curves $E_{0,\F_p}$, $E_{0,\F_q}$ such that the endomorphism rings of $E_{0,\F_p}$ and $E_{0,\F_q}$ have the given discriminant $D$ of super-polynomial size. The classical CM method (cf. \cite{lay1994constructing} and more) requires computing the Hilbert class polynomial $H_D$, whose cost grows proportional to $|D|$. Let us remark that the CM method might be an overkill, since we do not need to specify the number of points $\#(E_{0,\F_p}(\F_p))$ and $\#(E_{0,\F_q}(\F_q))$. However, we do not know any other better methods. \iffalse \begin{constru} A maximal order $\OOO$ of an imaginary quadratic field is chosen as follows: \begin{itemize} \item Choose a fundamental discriminant $D<0$ such that $|D| \approx \secp^{O(\log^{1-\epsilon}(\secp))}$, and let $\OOO$ ... \item Pick a set of ideal classes $S = \set{ C_i = [(\ell_i, b_i, \cdot)] }_{i\in[m]}$ that generate $\CL(\OOO)$, where $m = O(\log(\secp))$. \item Compute $h(D)$, and then a basis $\mat{B}$ of $\Lattice_\OOO = \set{ \ary{e} \mid \ary{e}\in\Z^m, \prod_{i\in[m]} C_i^{e_i} = 1_\G }$ by solving discrete-log over $\CL(\OOO)$. Run LLL on $\mat{B}$ to obtain a short basis. \end{itemize} \end{choice} Suppose that the lattice $\Lattice_\OOO$ satisfies the Gaussian heuristic. 
That is, for all $1\leq i\leq m$, the $i^{th}$ successive minimum of $\Lattice_\OOO$ $ \lambda_i$ satisfies $\lambda_i \approx \sqrt{m}\cdot h(\OOO_K)^{1/m} $. Since we choose $m = O(\log(\secp))$, $|D| \approx \secp^{O(\log^{1-\epsilon}(\secp))}$. Then $h(D) \approx O(\sqrt{|D|}) = \secp^{O(\log^{1-\epsilon}(\secp))}$, and the discrete-log problem over $\CL(D)$ can be solved in time $e^{O( \sqrt{ \log^{2-\epsilon}(\secp) \log( \log^2(\secp) ) } )}\in \poly(\secp)$, which means we can efficiently generate a (possibly large) basis of $\Lattice_\OOO$. The short basis $\mat{B}$ of $\Lattice_\OOO$, produced by the LLL algorithm, satisfies $\| \mat{B} \|\leq 2^\frac{m}{2}\cdot \lambda_m \in \poly(\secp)$. \anote{What is $\lambda_m$?} So, under all the heuristics, the system parameter does imply polynomial time algorithms for sampling and composing encodings. \anote{We can choose $-D$ to be prime $=1 \bmod 4$ to make the class group odd (by genus theory). We should then be hitting a cyclic class group quite often. [Passing by, the paper \cite{S.-Holmin:2017aa} seems to conjecture precise numbers for the structure of the class groups of imaginary quadratic fields (Conjecture 1.7 and Theorem 1.9).] Then, to get a cyclic class group (or a subgroup of it) we can try: \begin{enumerate} \item Choose a $D<0$ such that $-D$ is a prime $=1 \bmod 4$ such that $\ell = \mathfrak{l}\bar{\mathfrak{l}}$. \item Consider the subgroup $\langle \mathfrak{l}\rangle$. \end{enumerate} } The record for computing Hilbert class polynomial is $|D|~10^{15}$ from \cite{enge2010class}. Given a relation table, we can find a short table by LLL. HHS paper says when $|D| = 10^{40}$, with $n = 40$, LLL gives short basis whose largest entry is 17. According to \cite[Page~254]{cohen1995course}, in many cases $6 \log^2 |D|$ is bigger than $ L(|D|)$ when $D$ has less than 103 digits. When $D = -10^{50}$, $B = 79500$, $n = 3900$. 
The exact sequence \cite[Eqn.~(4.1)]{kohel1996endomorphism} \[ 1 \longrightarrow \frac{ (\OOO_K / f\OOO_K)^* }{ \bar{\OOO_K^*}(\Z/f\Z)^* } \longrightarrow \CL(\OOO) \longrightarrow \CL(\OOO_K) \longrightarrow 1 \] provides more information about the group structure. Here the group $(\OOO_K / f\OOO_K)^* \simeq \prod_{i\in [k]} (\OOO_K / f_i\OOO_K)^*$ \cite[Page~142]{cox2011primes}. For each $i\in[k]$, \[ |(\OOO_K / f_i\OOO_K)^*| = f_i^2 (1 - \frac{1}{f_i})(1 - \kron{D_0}{f_i}\cdot \frac{1}{f_i}). \] Using the generalization of the Pohlig-Hellman algorithm for possibly non-cyclic groups \cite{teske1999pohlig}, we can still solve the discrete-log problem over $\CL(\OOO)$ efficiently. The group structure of $\CL(\OOO_K)$ can be efficiently computed when $|D_0|$ is polynomial. Let $S_K$ be the generation set of $\CL(\OOO_K)$ according to Lemma~\ref{lemma:classgroupstructure}. From \cite[Page~142]{cox2011primes}, the group $(\OOO_K / f\OOO_K)^* \simeq \prod_{i\in [k]} (\OOO_K / f_i\OOO_K)^*$. For each $i\in[k]$, \[ |(\OOO_K / f_i\OOO_K)^*| = f_i^2 (1 - \frac{1}{f_i})(1 - \kron{D_0}{f_i}\cdot \frac{1}{f_i}). \] Let $S_f$ be the set of generators for $(\OOO_K / f\OOO_K)^*$. Then $S_f \cup S_K$ generates $\CL(\OOO)$. So if we choose $k = O(\log(\secp))$, $|D_0|, f_i \in\poly(\secp)$. With probability $97.7575\%$ $S_K = 1$ assuming Cohen-Lenstra heuristic, $|S_f| = O(\log(\secp))$, $h(D) \approx \secp^{O(\log(\secp))}$ and is polynomially smooth. \ynote{Question: If $\CL(D_0)$ is cyclic, is $\CL(D)$ also cyclic? } \anote{Answer: Not always. But we can choose a $D$ to make it cyclic as follows: First let us choose a $D_0$ such that $\mathcal{C}(D_0)$ is cyclic. Assuming Cohen-Lenstra, the odd part of the class group will be cyclic with high probability (Note to self:set the actual probability! Update: Yilei put it up.). 
If needed, we can also arrange the $2$-part so that the whole group is cyclic (Note to self: Can formalize with genus theory, but probably would not be needed, Update: Done in the part above.). Then, choose a prime $p\in\mathbb{Z}$ such that $\gcd(p-\left(\frac{D_0}{p}\right),D_0)=1$ and $p-\left(\frac{D_0}{p}\right)$ is square-free. Then since $h(D)$ is given by Eqn.~\eqref{eqn:classnumbernonmaximal} we necessarily have $\mathcal{D}$ cyclic.} \ynote{Oh, I guess you are using the fact that a finite group is cyclic iff it has precisely one subgroup of each divisor of its order. Then with $\gcd( p-\left(\frac{D_0}{p}\right),D_0)=1$ (btw I don't know where the extra factor $p$ is from so I drop it from the expression) and $p-\left(\frac{D_0}{p}\right)$ being square-free, at least we can show that the odd part of $\CL(D)$ is cyclic.} \anote{Yes, and yes; there shouldn't be a $p$, it is just $p-\left(\frac{D_0}{p}\right)$.} \paragraph{Concrete parameters.} The record for computing class group structure is $|D|~10^{100}$ from \cite{sutherland2007order}. The record for computing Hilbert class polynomial is $|D|~10^{15}$ from \cite{enge2010class}. Given a relation table, we can find a short table by LLL. HHS paper says when $|D| = 10^{40}$, with $n = 40$, LLL gives short basis whose largest entry is 17. According to \cite[Page~254]{cohen1995course}, in many cases $6 \log^2 |D|$ is bigger than $ L(|D|)$ when $D$ has less than 103 digits. When $D = -10^{50}$, $B = 79500$, $n = 3900$. \fi \iffalse \begin{choice}[Choice II]\label{choice:II} A non-maximal order $\OOO$ of an imaginary quadratic field is chosen as follows: \begin{itemize} \item Select a polynomially large negative square-free integer $D_0\equiv 1 \bmod 4$ such that $h(D_0)$ is a prime. 
\item Choose an integer $n = O(\log(\secp))$, and choose a set of polynomially large prime numbers $\set{ p_i }_{i\in[n]}$ such that the odd-part of $\left(p_i - \kron{D_0}{p_i}\right) $ is square-free and nor divisible by $h(D_0)$ for all $i\in[n]$. Let $f = \prod_{i\in[n]} p_i$. \item Set $D = f^2 D_0$. Recall from Eqn.~\eqref{eqn:classnumbernonmaximal} that \begin{equation} h(D) = 2\cdot \frac{h(D_0)}{w(D_0)}\prod_{i\in [n]}\left( p_i - \kron{D_0}{p_i} \right) \end{equation} Let $\CL(\OOO)_{\text{odd}}$ be the odd part of $\CL(\OOO)$, and correspondingly $h(D)_{\text{odd}}$ be the odd part of $h(D)$. \item Pick a set of ideal classes $S = \set{ C_i = [(\ell_i, b_i, \cdot)] }_{i\in[m]}$ that generates $\CL(\OOO)_{\text{odd}}$, and $\ell_i\in\poly(\secp)$. \item Compute a basis $\mat{B}$ of $\Lattice_\OOO = \set{ \ary{e} \mid \ary{e}\in\Z^m, \prod_{i\in[m]} C_i^{e_i} = 1_\G }$ by solving discrete-log over $\CL(\OOO)_{\text{odd}}$. Run LLL on the basis $\mat{B}$ to obtain a short basis. \end{itemize} \end{choice} Remark that Choice II gives $\CL(\OOO)_{\text{odd}}$ of cardinality $h(D)_{\text{odd}} \approx \secp^{O(\log(\secp))}$. Moreover, by construction, $\CL(\OOO)_{\text{odd}}$ is cyclic since its order is square-free, and the class number $h(D)_{\text{odd}}$ is polynomially smooth so that the discrete-log problem over $\CL(\OOO)_{\text{odd}}$ can be solved using the Pohlig-Hellman algorithm. Same as Choice I, the short basis of $\Lattice_\OOO$ produced by LLL algorithm satisfies $\| \mat{B} \|\leq 2^\frac{m}{2}\cdot \lambda_m \in \poly(\secp)$. Here we explain a bit more on the plaintext representation of a group element of $\CL(\OOO)$ for a non-maximal order $\OOO$. The main references are \cite{cox2011primes,huhnlein1998cryptosystem}. 
From \cite[Proposition~7.22]{cox2011primes}, for an order $\OOO$ of conductor $f$ in an imaginary quadratic field $K$, there are isomorphisms \[ \CL(\OOO) = I(\OOO)/P(\OOO) \simeq I(\OOO, f)/P(\OOO, f) \simeq {I_K(f)} / P_{K, \Z}(f) \] where \begin{itemize} \item $I(\OOO, f)$ denotes the subgroup of $I(\OOO)$ generated by the $\OOO$ ideals prime to $f$; \item $P(\OOO, f)$ denotes the subgroup generated by the principal ideals $\alpha\OOO$ where $\alpha\in\OOO$ has norm $N(\alpha)$ prime to $f$; \item $I_K(f)$ denotes the subgroup of $I_K$ (the group of fractional $\OOO_K$ ideals) generated by the $\OOO_K$ ideals prime to $f$ (an $\OOO_K$ ideal $\mfk{a}$ is prime to $f$ if $\gcd( N(\mfk{a}), f )=1$); \item $P_{K, \Z}(f) $ denotes the subgroup of $I_K(f)$ generated by the principal ideals of the form $\alpha\OOO_K$, where $\alpha\in\OOO_K$ satisfies $\alpha\equiv a \mod f\OOO_K$ for an integer $a$ relatively prime to $f$. \end{itemize} Therefore a group element of $\CL(\OOO)$ can be represented by an ideal in $I(\OOO, f)$ or $I_K(f)$. Let $\G := \CL(\OOO)$. Define the relation lattice w.r.t. the set $S$ as \begin{equation}\label{eqn:Lattice_OOO} \Lattice_\OOO := \set{ \ary{e} \mid \ary{e}\in\Z^m, \prod_{i\in[m]} C_i^{e_i} = 1_\G }. \end{equation} Let $\mat{B}$ be a short basis of $\Lattice_\OOO$. \fi
TITLE: If $a,b$ are irrational numbers, is $K=[a,b] \cap \mathbb Q$ closed in $\mathbb Q$? QUESTION [3 upvotes]: Suppose $a,b \in \mathbb R\setminus\mathbb Q$, $a<b$. Consider $K=[a,b] \cap \mathbb Q$. Now, $[a,b]$ is closed in the metric superspace of $\mathbb Q$, i.e., in $\mathbb R$. Thus, $K=[a,b] \cap \mathbb Q$ is closed in $\mathbb Q$. But $K=[a,b] \cap \mathbb Q=(a,b) \cap \mathbb Q$ doesn't contain its limit points, namely $a$ and $b$. Suppose $a=\sqrt 2, b = \sqrt 5$. If we define the set $A= \{x \in K~|~x^2 <5 \}$, then there is a sequence in $K$ which converges to $\sqrt 5$ but $\sqrt 5 \notin K$. Then how can $K$ be closed in $\mathbb Q$? Thanks a lot for the help! REPLY [1 votes]: Since $\Bbb{Q}$ inherits the topology of $\Bbb{R}$, $K$ is closed in $\Bbb{Q}$ if and only if we can write it as an intersection of a closed set in $\Bbb{R}$ with $\Bbb{Q}$. But this is true: $$K = \underbrace{[a,b]}_{\text{closed in }\Bbb{R}} \cap \Bbb{Q}.$$
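To spell out how this answers the sequence example in the question, here is the closure computed directly (a short derivation, using $a=\sqrt 2$, $b=\sqrt 5$ and the standard fact that the closure in a subspace is the closure in the ambient space intersected with the subspace):

```latex
% Closure of K computed inside the subspace Q of R:  cl_Q(K) = cl_R(K) \cap Q.
K = [\sqrt{2},\sqrt{5}] \cap \mathbb{Q}
  = (\sqrt{2},\sqrt{5}) \cap \mathbb{Q}
  \quad\text{(since $\sqrt{2},\sqrt{5}\notin\mathbb{Q}$)},
\qquad
\mathrm{cl}_{\mathbb{Q}}(K)
  = \mathrm{cl}_{\mathbb{R}}(K)\cap\mathbb{Q}
  = [\sqrt{2},\sqrt{5}]\cap\mathbb{Q}
  = K.
```

So the sequence converging to $\sqrt 5$ is not a counterexample: it has no limit in the space $\Bbb{Q}$ at all, and closedness of $K$ in $\Bbb{Q}$ only requires $K$ to contain those of its limit points that lie in $\Bbb{Q}$.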
\begin{document} \begin{frontmatter} \title{Gaussian Whittle--Mat\'ern fields on metric graphs} \runtitle{Gaussian Whittle--Mat\'ern fields on metric graphs} \begin{aug} \author[A]{\fnms{David} \snm{Bolin}\ead[label=e1,mark]{david.bolin@kaust.edu.sa}} \and \author[A]{\fnms{Alexandre B.} \snm{Simas}\ead[label=e2]{alexandre.simas@kaust.edu.sa}} \and \author[B]{\fnms{Jonas} \snm{Wallin}\ead[label=e3]{jonas.wallin@stat.lu.se}} \runauthor{David Bolin, Alexandre Simas and Jonas Wallin } \address[A]{Statistics Program, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, \printead{e1}, \printead{e2}} \address[B]{Department of Statistics, Lund University, \printead{e3}} \end{aug} \begin{abstract} We define a new class of Gaussian processes on compact metric graphs such as street or river networks. The proposed models, the Whittle--Mat\'ern fields, are defined via a fractional stochastic partial differential equation on the compact metric graph and are a natural extension of Gaussian fields with Mat\'ern covariance functions on Euclidean domains to the non-Euclidean metric graph setting. Existence of the processes, as well as their sample path regularity properties are derived. The model class in particular contains differentiable Gaussian processes. To the best of our knowledge, this is the first construction of a valid differentiable Gaussian field on general compact metric graphs. We then focus on a model subclass which we show contains processes with Markov properties. For this case, we show how to evaluate finite dimensional distributions of the process exactly and computationally efficiently. This facilitates using the proposed models for statistical inference without the need for any approximations. 
Finally, we derive some of the main statistical properties of the model class, such as consistency of maximum likelihood estimators of model parameters and asymptotic optimality properties of linear prediction based on the model with misspecified parameters. \end{abstract} \begin{keyword}[class=MSC] \kwd[Primary ]{60G60} \kwd[; secondary ]{} \kwd{62M99} \kwd{62M30} \end{keyword} \begin{keyword} \kwd{Networks, Gaussian processes, stochastic partial differential equations, Gaussian Markov random fields, quantum graphs} \end{keyword} \end{frontmatter} \section{Introduction} In many areas of application, statistical models need to be defined on networks such as connected rivers or street networks \citep{okabe2012spatial,baddeley2017stationary,cronie2020}. In this case, one wants to define a model using a metric on the network rather than the Euclidean distance between points. However, formulating Gaussian fields over linear networks, or more generally on metric graphs, is difficult. The reason is that it is hard to find flexible classes of functions that are positive definite when some non-Euclidean metric on the graph is used. Often the shortest-path distance between two points, i.e., the geodesic metric, is considered. However, it has been argued that this is an unrealistic metric for many processes observed in real life \citep{baddeley2017stationary}. A common alternative is the electrical resistance distance \citep{okabe2012spatial}, which was used recently by \citet{anderes2020isotropic} to create isotropic covariance functions on a subclass of metric graphs with Euclidean edges.
\citet{anderes2020isotropic} in particular showed that for graphs with Euclidean edges, one can define a valid Gaussian field by taking the covariance function to be of Mat\'ern type \citep{matern60}: \begin{equation}\label{eq:matern_cov} r(s,t) = \frac{\Gamma(\nu)}{\tau^2\Gamma(\nu + \nicefrac{1}{2})\sqrt{4\pi}\kappa^{2\nu}}(\kappa d(s,t))^{\nu}K_\nu(\kappa d(s,t)), \end{equation} where $d(\cdot,\cdot)$ is the resistance metric, $\tau,\kappa>0$ are parameters controlling the variance and practical correlation range, and $0<\nu\leq \nicefrac1{2}$ is a parameter controlling the sample path regularity. The fact that we obtain the restriction $\nu\leq \nicefrac1{2}$ means that we cannot use this approach to create differentiable Gaussian processes on metric graphs, even if they have Euclidean edges. Because of this, and because of the many other difficulties in creating Gaussian fields via covariance functions on non-Euclidean spaces, we take a different approach in this work and focus on creating a Gaussian random field $u$ on a compact metric graph $\Gamma$ as a solution to a stochastic partial differential equation (SPDE) \begin{equation}\label{eq:Matern_spde} (\kappa^2 - \Delta)^{\alpha/2} (\tau u) = \mathcal{W}, \qquad \text{on $\Gamma$}, \end{equation} where $\alpha = \nu + \nicefrac1{2}$, $\Delta$ is the Laplacian equipped with suitable ``boundary conditions'' in the vertices, and $\mathcal{W}$ is Gaussian white noise. The advantage with this approach is that, if the solution exists, it automatically has a valid covariance function. The reason for considering this particular SPDE is that when \eqref{eq:Matern_spde} is considered on $\mathbb{R}^d$, it has Gaussian random fields with the covariance function \eqref{eq:matern_cov} as stationary solutions \citep{whittle63}. 
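As a quick numerical illustration of \eqref{eq:matern_cov} (a sketch with arbitrary parameter values, not part of the model development): for $\nu=\nicefrac{1}{2}$ the Bessel function has the closed form $K_{1/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}$, so the covariance collapses to an exponential covariance, consistent with the exponential-type behavior of the case $\alpha = \nu + \nicefrac{1}{2} = 1$ discussed below.

```python
# Evaluate the Matern covariance of eq. (eq:matern_cov) at nu = 1/2, using
# K_{1/2}(x) = sqrt(pi/(2x)) * exp(-x), and compare with the resulting
# exponential form r(d) = sqrt(pi/2) / (2 * tau^2 * kappa) * exp(-kappa * d).
import math

def matern_nu_half(d, kappa, tau):
    nu = 0.5
    x = kappa * d
    k_half = math.sqrt(math.pi / (2.0 * x)) * math.exp(-x)  # K_{1/2}(x)
    const = math.gamma(nu) / (
        tau**2 * math.gamma(nu + 0.5) * math.sqrt(4.0 * math.pi) * kappa**(2 * nu)
    )
    return const * x**nu * k_half

def exponential_form(d, kappa, tau):
    return math.sqrt(math.pi / 2.0) / (2.0 * tau**2 * kappa) * math.exp(-kappa * d)

kappa, tau = 3.0, 1.5  # arbitrary illustrative values
vals = [(matern_nu_half(d, kappa, tau), exponential_form(d, kappa, tau))
        for d in (0.1, 0.5, 2.0)]
```

The two expressions agree to floating-point precision, and the covariance decays monotonically in the distance $d$, as expected.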
The method of generalizing the Mat\'ern fields to Riemannian manifolds by \emph{defining} the Whittle--Mat\'ern fields as solutions to \eqref{eq:Matern_spde} specified on the manifold was proposed by \citet{lindgren11}, and has since then been extended to a number of scenarios \citep[see][for a recent review]{lindgren2022spde}, including non-stationary \citep{bakka2019, Hildeman2020} and non-Gaussian \citep{bolin14,bw20} models. The difficulty in extending the SPDE approach to metric graphs is that it is not clear how to define the differential operator in this case, and it is also not clear what type of covariance functions one would obtain. We will in this work use quantum graph theory \citep{Berkolaiko2013} to define the operator and show that \eqref{eq:Matern_spde} then has a unique solution, for which we can derive sample path regularity properties. We will furthermore show that this solution has Markov properties when $\alpha\in\mathbb{N}$, and that we in these cases can derive finite dimensional distributions of the process analytically. For $\alpha=1$ we obtain a process with a covariance function that is similar to the exponential covariance function, i.e., the case $\nu=\nicefrac1{2}$ in \eqref{eq:matern_cov}, which was shown to be a valid covariance for metric graphs with Euclidean edges by \citet{anderes2020isotropic}. However, our construction has two major advantages. First, it has Markov properties, which means that the precision matrices (inverse covariance matrices) of the finite dimensional distributions of the process will be sparse. This greatly simplifies the use of the model for applications involving big data sets. Secondly, the model is well-defined for \emph{any} compact metric graph, and not only for the subclass with Euclidean edges. An example of this covariance for a simple graph can be seen in Figure~\ref{fig:cov_example}, and an example for a graph from a real street network can be seen in Figure~\ref{fig:street_graph}.
Furthermore, we derive an explicit density for the finite dimensional distributions of the processes for higher values of $\alpha\in\mathbb{N}$. In this case one cannot use the corresponding Mat\'ern covariance function to construct a valid Gaussian process in general, even for graphs with Euclidean edges \citep{anderes2020isotropic}. Thus, this construction provides, as far as we know, for the first time a covariance function for differentiable random fields on compact metric graphs. See Figure~\ref{fig:cov_example} for an example. \begin{figure}[t] \includegraphics[width=0.45\linewidth]{figs/fig1exp} \includegraphics[width=0.45\linewidth]{figs/fig1beta1} \includegraphics[width=0.45\linewidth]{figs/fig1exp_u} \includegraphics[width=0.45\linewidth]{figs/fig1beta1_u} \caption{Examples of covariance functions (top) and corresponding realizations of a Gaussian process (bottom) on a graph with $\alpha=1$ (left) and $\alpha=2$ (right). In both cases, $\kappa=3$ and the covariance $\Cov(u(s_0), u(s))$, for $s_0=(1,0.5)$, is shown. }\label{fig:cov_example} \end{figure} Defining Gaussian random fields on graphs has recently received some attention. In particular, various methods based on the graph Laplacian have been proposed \citep{Alonso2021,dunson2020graph,borovitskiy2021matern}. However, these approaches do not define the Gaussian random field on the metric graph, but only at the vertices of the graph. Since it is unclear if methods based on the graph Laplacian can be used to define a Gaussian process on the entire metric graph, we will not consider these approaches in this work. However, we will later show that the construction by \citet{borovitskiy2021matern} can be viewed as an approximation of the exact Whittle--Mat\'ern fields on metric graphs. The outline of this work is as follows.
In Section~\ref{sec:graphs}, we introduce the notion of compact metric graphs in detail and provide some of the key results from quantum graph theory which we will need in later sections. In Section~\ref{sec:spde} we introduce the Whittle--Mat\'ern fields via \eqref{eq:Matern_spde} and prove that the SPDE has a unique solution. Section~\ref{sec:properties} provides results on the regularity of the covariance function, and sample path regularity of the corresponding Whittle--Mat\'ern fields. In Section~\ref{sec:markov}, we consider the Markov subclass $\alpha\in\mathbb{N}$ and show that we in this case can derive closed-form expressions for the covariance function of the model. Section~\ref{sec:statistical} contains results on the statistical properties of the model, where we in particular prove consistency results regarding maximum likelihood estimation of the parameters and consider asymptotic optimality of kriging prediction based on the model with misspecified parameters. Section~\ref{sec:inference} shows that we can use the Markov properties of the model for computationally efficient inference. Section~\ref{sec:graph_laplacian} contains the comparison with the method based on the graph Laplacian, and the article concludes with a discussion in Section~\ref{sec:discussion}. Proofs and further details are given in six appendices of the manuscript. \section{Quantum graphs and notation}\label{sec:graphs} In this section, we introduce the notation that will be used throughout the article as well as some of the key concepts from quantum graph theory that we will need in later sections. We refer to \citet{Berkolaiko2013} for a complete introduction to the field. \subsection{Quantum graphs} A finite undirected metric graph $\Gamma$ consists of a finite set of vertices $\mathcal{V}=\{v_i\}$ and a set $\mathcal{E}=\{e_j\}$ of edges connecting the vertices. We write $u\sim v$ or $(u,v)\in\mathcal{E}$ if $u$ and $v$ are connected with an edge. 
Each edge $e$ is defined by a pair of vertices $(\bar{e},\underbar{e}) = (v_i,v_k)$ and a length $l_e \in (0,\infty)$. For every $v\in\mathcal{V}$, we denote the set of edges incident to the vertex $v$ by $\mathcal{E}_v$, and define the degree of $v$ by $\deg(v) = \#\mathcal{E}_v$. We assume that the graph is connected, so that there is a path between all vertices, and we assume that the degree of each vertex (i.e., the number of edges connected to it) is finite. Since the lengths $l_e$ are assumed to be finite, this means that the graph is compact. A point $s\in \Gamma$ is a position on an edge, i.e., $s=(e,t)$ where $t\in[0,l_e]$. A natural choice of metric for the graph is the shortest path distance, which for any two points in $\Gamma$ is defined as the length of the shortest path in $\Gamma$ connecting the two. We denote this metric by $d(\cdot,\cdot)$ from now on. \begin{example} A typical example of an edge is a simple (i.e., without self intersection) piecewise $C^1$ curve $\gamma:[0,l_e]\to \mathbb{R}^d$, $0<l_e<\infty$, for some $d\in\mathbb{N}$, which is regular (i.e., for all $t\in [0,l_e]$ such that $\gamma$ is differentiable at $t$, we have $\gamma'(t)\neq 0$) and is parametrized by arc-length. It is a basic result in differential geometry that every regular piecewise $C^1$ curve admits a parametrization given by arc-length. Therefore, if we take the edge $e$ to be induced by the curve $\gamma$, then given $t,s\in [0,l_e]$, with $s<t$, the distance between the points $x = (e,s)$ and $y=(e,t)$ is given by the length of the curve $\gamma([s,t])$, which is given by $L(\gamma|_{[s,t]}) = t-s$. This shows that this is a suitable choice for edges in compact metric graphs, as the curve, under these assumptions, is isometric to a closed interval. \end{example} A metric graph coupled with a differential operator on that graph is referred to as a quantum graph. The most important differential operator in this context is the Laplacian. 
However, there is no unique way of defining the Laplacian on a metric graph. The operator is naturally defined as the second derivative on each edge, but at the vertices there are several options of ``boundary conditions'' or \emph{vertex conditions} which may be used. One of the most popular choices is the Kirchhoff conditions $$ \{\mbox{$f $ is continuous on $\Gamma$ and $\forall v\in\mathcal{V}: \sum_{e \in\mathcal{E}_v} \partial_e f(v) = 0$} \}, $$ where $\partial_e$ denotes the directional derivative away from the vertex. The Laplacian with these vertex conditions is often denoted as the Kirchhoff-Laplacian, which we here simply denote by $\Delta_{\Gamma}$, and which by \citep[][Theorem 1.4.4]{Berkolaiko2013} is self-adjoint. It turns out that this particular Laplacian is the most natural for defining Whittle--Mat\'ern fields on metric graphs, and we from now on only consider this choice. We let $\{\hat{\lambda}_i\}_{i\in\mathbb{N}}$ denote the eigenvalues of $\Delta_{\Gamma}$, sorted in non-decreasing order, and let $\{\eig_j\}_{j\in\mathbb{N}}$ denote the corresponding eigenfunctions. By Weyl's law \citep{Odzak2019Weyl} we have that $\hat{\lambda}_i \sim i^2$ as $i\rightarrow\infty$, so there exist constants $c_\lambda, C_\lambda$ such that $0<c_\lambda< C_\lambda<\infty$ and \begin{equation}\label{eq:weyl} \quad c_\lambda i^2 \leq \lambda_i \leq C_\lambda i^2 \quad \forall i\in\mathbb{N}. \end{equation} \subsection{Function spaces and additional notation} The space $L_2(\Gamma)$ is defined as the direct sum of the $L_2(e)$ spaces on the edges of $\Gamma$. That is, $f = \{f_e\}_{e\in \mathcal{E}} \in L_2(\Gamma)$ if $f_e \in L_2(e)$ for each $e\in \mathcal{E}$, $$ L_2(\Gamma) = \bigoplus_{e \in \mathcal{E}} L_2(e), \qquad \|f\|_{L_2(\Gamma)}^2 = \sum_{e\in\mathcal{E}}\|f\|_{L_2(e)}^2.
$$ We let $C(\Gamma) = \{f\in L_2(\Gamma): f\hbox{ is continuous}\}$ denote the space of continuous functions on $\Gamma$ and let $\|\phi\|_{C(\Gamma)} = \sup\{|\phi(x)|: x\in\Gamma\}$ denote the supremum norm. For $0<\gamma<1$, we introduce the $\gamma$-Hölder seminorm $$ [\phi]_{C^{0,\gamma}(\Gamma)} = \sup\left\{\frac{|\phi(x)-\phi(y)|}{d(x,y)^{\gamma}}: x,y\in \Gamma, x\neq y \right\}, $$ and the $\gamma$-Hölder norm $\|\phi\|_{C^{0,\gamma}(\Gamma)} = \|\phi\|_{C(\Gamma)} + [\phi]_{C^{0,\gamma}(\Gamma)}.$ We let $C^{0,\gamma}(\Gamma)$ denote the space of $\gamma$-Hölder continuous functions, that is, the set of functions $\phi\in C(\Gamma)$ with $\|\phi\|_{C^{0,\gamma}(\Gamma)}<\infty$. We define the Sobolev space $H^1(\Gamma)$ as the space of all continuous functions on $\Gamma$ with $$ \|f\|_{H^1(\Gamma)}^2 = \sum_{e\in\mathcal{E}} \|f\|_{H^1(e)}^2 < \infty, $$ where $H^1(e)$ is the Sobolev space of order 1 on the edge $e$. The continuity assumption guarantees that $f$ is uniquely defined at the vertices. Dropping the continuity requirement, one can construct decoupled Sobolev spaces of any order $k$ as the direct sum $ \widetilde{H}^k(\Gamma) = \bigoplus_{e\in \mathcal{E}} H^k(e), $ endowed with the Hilbertian norm $ \|f\|_{\widetilde{H}^k(\Gamma)}^2 = \sum_{e\in\mathcal{E}} \|f\|_{H^k(e)}^2, $ where $H^k(e)$ is the Sobolev space of order $k$ on the edge $e$. We refer the reader to Appendix~\ref{app:sob} for details about these Sobolev spaces. Finally, for $0<\alpha<1$, we define the fractional Sobolev space of order $\alpha$ by the real interpolation between the spaces $L_2(\Gamma)$ and $H^1(\Gamma)$, $$H^\alpha(\Gamma) = (L_2(\Gamma), H^1(\Gamma))_\alpha.$$ See Appendix \ref{app:interpolation} for the basic definitions of real interpolation of Hilbert spaces.
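The eigenvalue asymptotics \eqref{eq:weyl} can be made concrete on the simplest compact metric graph (a toy sketch, not used in the sequel): a single edge of length $\ell$ whose two vertices have degree one. There the Kirchhoff conditions reduce to Neumann conditions, and the eigenvalues of $\Delta_\Gamma$ are known in closed form, $\hat{\lambda}_j = (j\pi/\ell)^2$ for $j \geq 0$, so the quadratic growth can be checked directly.

```python
# Eigenvalues of the Kirchhoff Laplacian on a single edge of length ell
# (Neumann conditions at the two degree-one vertices): (j*pi/ell)^2, j >= 0.
import math

ell = 2.5  # arbitrary edge length
eigenvalues = [(j * math.pi / ell) ** 2 for j in range(200)]  # non-decreasing

# Weyl's law: hat_lambda_j grows like j^2. Here hat_lambda_j / j^2 equals
# (pi/ell)^2 exactly for every j >= 1, so any constants
# 0 < c < (pi/ell)^2 < C verify the two-sided bound of eq. (eq:weyl).
growth_ratios = [eigenvalues[j] / j**2 for j in range(1, 200)]
```

On a general compact metric graph the spectrum is not explicit, but Weyl's law guarantees the same $i^2$ growth of the sorted eigenvalues.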
Throughout the article, we let $(\Omega, \mathcal{F},\mathbb{P})$ be a complete probability space, and for a real-valued random variable $Z$, we let $\pE(Z) = \int_\Omega Z(\omega) d\mathbb{P}$ denote its expectation. We let $L_2(\Omega)$ denote the Hilbert space of all (equivalence classes) of real-valued random variables with finite second moment, $\pE(Z^2) < \infty$. For a Hilbert space $(E, \|\cdot \|_E)$, we let $L_2(\Omega; E)$ denote the space of $E$-valued Bochner measurable random variables with finite second moment. This space is equipped with the norm $\|u\|_{L_2(\Omega; E)} = \left(\pE[\|u(\omega)\|_E^2]\right)^{1/2}$. \section{Whittle--Mat\'ern fields on compact metric graphs}\label{sec:spde} Let $\kappa^2>0$ and define the differential operator \begin{equation}\label{eq:Loperator} Lu = (\kappa^2 - \Delta_\Gamma) u, \quad u\in \mathcal{D}(L) \subset L_2(\Gamma). \end{equation} The operator induces a symmetric and continuous bilinear form \begin{equation}\label{eq:bilinearform} a_L : H^1(\Gamma) \times H^1(\Gamma) \rightarrow \mathbb{R}, \quad a_L(\phi,\psi) = (\kappa^2 \phi,\psi)_{L_2(\Gamma)} + \sum_{e\in\mathcal{E}}(\phi',\psi')_{L_2(e)}, \end{equation} which is coercive with coercivity constant $\min(1,\kappa^2)$ \citep{Arioli2017FEM}. Furthermore, it is clear that the operator is densely defined and positive definite, and since the metric graph is compact, $L$ has a discrete spectrum, where each eigenvalue has finite multiplicity \citep[][Chapter 3]{Berkolaiko2013}. Clearly, the operator $L$ diagonalizes with respect to the eigenfunctions of $\Delta_{\Gamma}$, and it has eigenvalues $\{ \lambda_i\}_{i\in\mathbb{N}} = \{\kappa^2 + \hat{\lambda}_i\}_{i\in\mathbb{N}}$. In order to define the Whittle--Mat\'ern fields, we introduce the fractional operator $L^\beta$ in the spectral sense as follows \citep[see also][]{BKK2020}.
Define the Hilbert space $$ \dot{H}^{2\beta} = \mathcal{D}(L^\beta) := \{ \phi \in L_2(\Gamma) : \|\phi\|_{2\beta} < \infty\}, $$ with inner product $(\phi,\psi)_{2\beta} := (L^\beta\phi, L^\beta \psi)_{L_2(\Gamma)}$ and induced norm $\|\phi\|_{2\beta} = \|L^\beta \phi\|_{L_2(\Gamma)}$. Here, the action of $L^\beta : \mathcal{D}(L^\beta) \rightarrow L_2(\Gamma)$ is defined by $$ L^\beta \phi := \sum_{j\in\mathbb{N}} \lambda_j^\beta (\phi,\eig_j)\eig_j, $$ and thus $\|\phi\|_{2\beta}^2 = \sum_{j\in\mathbb{N}} \lambda_j^{2\beta} (\phi,\eig_j)^2$. We denote the dual space of $\dot{H}^{2\beta}$ by $\dot{H}^{-2\beta}$, which has norm $\|\phi\|_{-2\beta}^2 = \sum_{j\in\mathbb{N}} \lambda_j^{-2\beta} \langle\phi,\eig_j\rangle^2$, where $\langle\phi,\eig_j\rangle$ denotes the duality pairing between $\dot{H}^{-2\beta}$ and $\dot{H}^{2\beta}$. Let $\mathcal{W}$ denote Gaussian white noise on $L_2(\Gamma)$, which we may, formally, represent through the series expansion $\mathcal{W} = \sum_{j\in\mathbb{N}} \xi_j \eig_j$ $\mathbb{P}$-a.s., where $\{\xi_j\}_{j\in\mathbb{N}}$ are independent standard Gaussian random variables on $(\Omega, \mathcal{F},\mathbb{P})$. By \citep[][Proposition 2.3]{BKK2020} this series converges in $L_2(\Omega; \dot{H}^{-s})$ for any $s>1/2$, and thus, $\mathcal{W} \in \dot{H}^{-\frac1{2}-\epsilon}$ holds $\mathbb{P}$-a.s.\ for any $\epsilon>0$. Alternatively, we can represent Gaussian white noise as a family of centered Gaussian variables $\{\mathcal{W}(h) : h\in L_2(\Gamma)\}$ which satisfy \begin{equation}\label{isometryW} \pE[\mathcal{W}(h)\mathcal{W}(g)] = (h,g)_{L_2(\Gamma)} \qquad \forall h,g\in L_2(\Gamma). \end{equation} We define the Whittle--Mat\'ern fields through the fractional-order equation \begin{equation}\label{eq:spde} L^{\alpha/2} (\tau u) = \mathcal{W}, \end{equation} where $\tau>0$ is a constant that controls the variance of $u$. 
By the facts above in combination with \citep[][Lemma 2.1]{BKK2020}, the SPDE \eqref{eq:spde} has a unique solution $u\in L_2(\Gamma)$ ($\mathbb{P}$-a.s.) given that $\alpha>1/2$. This solution is a Gaussian random field satisfying \begin{equation}\label{eq:solspde} (u,\psi)_{L_2(\Gamma)} = \mathcal{W}(\tau^{-1}L^{-\alpha/2}\psi) \quad \mathbb{P}\text{-a.s.} \quad \forall \psi\in L_2(\Gamma), \end{equation} with covariance operator $\mathcal{C} = \tau^{-2}L^{-\alpha}$ satisfying $$ (\mathcal{C}\phi,\psi)_{L_2(\Gamma)} = \pE[(u,\phi)_{L_2(\Gamma)}(u,\psi)_{L_2(\Gamma)}] \quad \forall \phi,\psi \in L_2(\Gamma). $$ We note that $u$ is centered, and we let $\varrho$ denote the covariance function corresponding to $\mathcal{C}$, defined as \begin{equation}\label{eq:covfunc} \varrho(s,s') = \pE(u(s)u(s')) \quad \text{a.e. in $\Gamma\times \Gamma$}. \end{equation} \begin{Remark} This covariance function $\varrho$ is in general not stationary. For example, the marginal variances at the vertices will depend on the degrees, due to the Kirchhoff vertex conditions. In particular, compared to vertices of degree 2, the field will have a larger variance at vertices of degree 1 whereas it will have a smaller variance at vertices with higher degrees. An example of this can be seen in Figure~\ref{fig:variances}. We discuss this further in Section~\ref{sec:boundary}. \end{Remark} \begin{figure}[t] \includegraphics[width=0.45\linewidth]{figs/fig21} \includegraphics[width=0.45\linewidth]{figs/fig22} \caption{Marginal variances of the Whittle--Mat\'ern field with $\alpha=1$ and $\kappa=5$ on two different graphs. } \label{fig:variances} \end{figure} \section{Sample path regularity and spectral representations}\label{sec:properties} The goal of this section is to characterize the regularity of solutions $u$ to the SPDE \eqref{eq:spde}. We will derive two main results, as well as a few important consequences of these. 
The first main result, Theorem \ref{regularity1}, provides sufficient conditions for $u$ to have, $\bbP$-a.s., continuous sample paths, as well as sufficient conditions for existence of the weak derivatives. In the second regularity result, Theorem \ref{thm:weakregularity}, we fine-tune the conditions, which allows us to ensure continuity of the sample paths even when $u$ does not belong to $\dot{H}^1$. Furthermore, we are able to obtain H\"older continuity, with an order depending on the fractional exponent $\alpha$ of \eqref{eq:spde}. To simplify the exposition, we postpone the proofs of all statements in this section to Appendix~\ref{app:proofsregularitysection}. However, one of the key ideas of the proofs is to use interpolation theory (see Appendix~\ref{app:interpolation}) to characterize the spaces $\dot{H}^\alpha$. Another main idea is to exploit a novel Sobolev embedding for compact metric graphs, namely Theorem \ref{thm:sobembedding}, which we introduce and prove in this paper, as well as a suitable version of the Kolmogorov-Chentsov theorem. We begin with the statement of our first regularity result: \begin{Theorem}\label{regularity1} Let $u$ be the solution of \eqref{eq:spde}. If $\alpha > \nicefrac{3}{2}$, then $u\in H^1(\Gamma)$. In particular, $u'$ exists $\bbP$-a.s. and $u$ has, $\bbP$-a.s., continuous sample paths. Furthermore, let \begin{equation*} \alpha^\ast = \begin{cases} \left\lfloor\alpha-\nicefrac{1}{2} \right\rfloor & \alpha-\nicefrac{1}{2} \notin \bbN \\ \alpha-\nicefrac{3}{2} & \alpha-\nicefrac{1}{2} \in \bbN, \end{cases} \quad \text{and} \quad \widetilde{\alpha} = \begin{cases} \left\lfloor\nicefrac{\alpha}{2} - \nicefrac{3}{4}\right\rfloor & \nicefrac{\alpha}{2} - \nicefrac{3}{4} \notin \bbN\\ \nicefrac{\alpha}{2} - \nicefrac{7}{4} & \nicefrac{\alpha}{2} - \nicefrac{3}{4} \in \bbN. \end{cases} \end{equation*} Then, for $j=1,\ldots, \alpha^\ast$, $D^j u$ exists $\bbP$-a.s.
Also, for $m=0,\ldots, \widetilde{\alpha}$, $D^{2m}u$ exists and has continuous sample paths $\bbP$-a.s. Moreover, for $\alpha\geq 2$ and every odd integer $k$ with $1\leq k \leq \alpha^\ast-1$, we have that, $\bbP$-a.s., $\sum_{e\in\mathcal{E}_v} \partial_e^k u(v) =0$, for every $v\in\mathcal{V}$, where $\partial_e^k u$ is the directional derivative of $D^{k-1}u$. Finally, if $1/2 < \alpha \leq 3/2$, then $u$ does not belong to $H^1(\Gamma)$ $\mathbb{P}$-a.s., so in particular, $u$ does not have a weak derivative. \end{Theorem} Our goal now is to obtain more refined regularity results for the field $u$. More precisely, we want to show that for all $\alpha> \nicefrac{1}{2}$, $u$ has, $\bbP$-a.s., continuous sample paths. Actually, we will show that if $\alpha\geq 1$ and $0<\gamma<\nicefrac{1}{2}$, then $u$ has $\gamma$-Hölder continuous sample paths and if $\nicefrac{1}{2} < \alpha < 1$, then for every $0<\widetilde{\gamma}<\alpha -\nicefrac{1}{2}$, $u$ has $\widetilde{\gamma}$-Hölder continuous sample paths. We will also derive Hölder regularity properties of the derivatives for higher values of $\alpha$. To that end, we first note that the derivatives we will consider are well-defined. \begin{Remark} For $\alpha\geq 1$ and $j=0,\ldots, \left\lfloor\frac{\alpha-1}{2}\right\rfloor$, we have that, for any $\epsilon>0$, the realizations of $u$ belong to $\dot{H}^{\alpha-\frac{1}{2}-\epsilon}$. Let $j\leq \nicefrac{\alpha}{2} - \nicefrac{1}{2}$, which implies that $2j+1\leq \alpha$. Take $0 <\epsilon < \nicefrac{1}{2}$ so that $2j \leq 2j + \nicefrac{1}{2} - \epsilon \leq \alpha - \nicefrac{1}{2} -\epsilon$. Then, by Proposition \ref{prp:hdotk} in Appendix~\ref{app:proofsregularitysection}, $\dot{H}^{\alpha-\frac{1}{2}-\epsilon}\subset \dot{H}^{2j} \subset \widetilde{H}^{2j}(\Gamma)$ and, therefore, $D^{2j}u$ is well-defined for $j=0,\ldots,\left\lfloor\frac{\alpha-1}{2}\right\rfloor$.
By an analogous argument, we have that $D^{\lfloor\alpha\rfloor}u$ is well-defined if $\lfloor\alpha\rfloor$ is even and $\alpha - \lfloor\alpha\rfloor > \nicefrac{1}{2}$. \end{Remark} Recall that, for two Hilbert spaces $E$ and $F$, we have the continuous embedding $E\hookrightarrow F$ if the inclusion map from $E$ to $F$ is continuous, i.e., there is a constant $C>0$ such that for every $f\in E$, $\|f\|_F \leq C \|f\|_E$. A crucial result that is needed in order to derive our second regularity result is the following Sobolev-type embedding for metric graphs, whose proof is given in Appendix \ref{app:proofsregularitysection}. \begin{Theorem}[Sobolev embedding for compact metric graphs]\label{thm:sobembedding} Let $\nicefrac{1}{2} < \alpha \leq 1$ and $\Gamma$ be a compact metric graph. We have the continuous embedding $H^\alpha(\Gamma) \hookrightarrow C^{0,\alpha-\frac{1}{2}}(\Gamma).$ \end{Theorem} As a direct consequence we also have the embedding: \begin{Corollary}\label{cor:sobembeddingHdot} Let $\alpha>\nicefrac{1}{2}$ and define $\widetilde{\alpha} = \alpha - \nicefrac{1}{2}$ if $\alpha\leq 1$ and $\widetilde{\alpha}=\nicefrac{1}{2}$ if $\alpha>1$. Then $\dot{H}^\alpha \hookrightarrow C^{0,\widetilde{\alpha}}(\Gamma).$ \end{Corollary} By combining the Sobolev embedding with a suitable version of the Kolmogorov-Chentsov theorem, we arrive at the following regularity result, which shows that the smoothness parameter $\alpha$ provides precise control over $\gamma$-H\"older continuity of the sample paths. \begin{Theorem}\label{thm:weakregularity} Fix $\alpha > \nicefrac{1}{2}$ and let $\widetilde{\alpha} = \alpha-\nicefrac{1}{2}$ if $\alpha\leq 1$ and $\widetilde{\alpha}=\nicefrac{1}{2}$ if $\alpha>1$. Then, for $0<\gamma<\widetilde{\alpha}$, the solution $u$ of \eqref{eq:spde} has a modification with $\gamma$-Hölder continuous sample paths. 
Furthermore, if $\alpha\geq 1$, then for $j=0,\ldots,\left\lfloor\frac{\alpha - 1}{2}\right\rfloor$, the derivative $D^{2j}u$ has $\gamma$-Hölder continuous sample paths. Finally, if $\lfloor\alpha\rfloor$ is even and $\alpha - \lfloor\alpha\rfloor > \nicefrac{1}{2}$, then for any $\tilde{\gamma}$ such that $0<\tilde{\gamma} < \alpha - \lfloor\alpha\rfloor - \nicefrac{1}{2}$ we have that $D^{\lfloor\alpha\rfloor}u$ has $\tilde{\gamma}$-Hölder continuous sample paths. \end{Theorem} \begin{Remark} It is noteworthy that in both Theorems \ref{regularity1} and \ref{thm:weakregularity}, we can only establish continuity of the derivatives of even order. The reason is that the odd order derivatives satisfy the boundary condition $\sum_{e\in\mathcal{E}_v} \partial_e^{2k+1} f(v) = 0$, where $\partial_e^{2k+1}f$ is the directional derivative of $\partial^{2k}f$. This condition does not ensure continuity of $\partial^{2k+1} f$. Another problem is that in order to define continuity of $\partial^{2k+1}f$, one needs to fix, beforehand, a direction for each edge, as these derivatives are direction-dependent, which shows that there is no natural way of defining continuity of such functions. On the other hand, the even order derivatives are direction-free, so the continuity of them is well-defined. \end{Remark} As a consequence of the previous results, we have continuity of the covariance function: \begin{Corollary}\label{cor:covfunccont} Fix $\alpha > \nicefrac{1}{2}$, then the covariance function $\varrho(\cdot,\cdot)$ in \eqref{eq:covfunc} is continuous. \end{Corollary} We also have the following series expansion of the covariance function. 
\begin{Proposition}\label{cor:mercercov} Let $\alpha > \nicefrac{1}{2}$. Then the covariance function $\varrho$ admits the following series representation in terms of the eigenvalues and eigenfunctions of $L$: $$ \varrho(s,s') = \sum_{j=1}^{\infty} \frac1{(\kappa^2 + \hat{\lambda}_j)^{\alpha}} \eig_j(s)\eig_j(s'), $$ where the convergence of the series is absolute and uniform. Further, the series also converges in $L_2(\Gamma\times\Gamma)$. \end{Proposition} The continuity of the covariance function implies that we can represent $u$ through the Karhunen-Lo\`eve expansion, as explained in the following proposition. \begin{Proposition}\label{prp:KLexp} Let $\alpha > \nicefrac{1}{2}$, and let $u$ be the solution of \eqref{eq:spde}. Then, $$ u(s) = \sum_{j=1}^{\infty} \xi_j (\kappa^2 + \hat{\lambda}_j)^{-\alpha/2} \eig_j(s), $$ where $\xi_j$ are independent standard Gaussian variables. Further, the truncated series \begin{equation}\label{eq:un} u_n(s) = \sum_{j=1}^n \xi_j (\kappa^2 + \hat{\lambda}_j)^{-\alpha/2} \eig_j(s), \end{equation} converges in $L_2(\mathbb{P})$ uniformly in $s$, that is, $\lim_{n\to\infty} \sup_{s\in \Gamma} \pE\left(\left| u(s) - u_n(s)\right|^2 \right) =0.$ \end{Proposition} On a graph where the eigenfunctions of $\Delta_{\Gamma}$ are known, the truncated series expansion \eqref{eq:un} can thus be used as a simulation method. The rate of convergence of this approximation is clarified in the following proposition. \begin{Proposition}\label{prp:rateKL} Let $\alpha > \nicefrac{1}{2}$. Then $u_n$ converges to $u$ in $L_2(\Omega, L_2(\Gamma))$, $$\lim_{n\to\infty} \left\| u - u_n\right\|_{L_2(\Omega,L_2(\Gamma))} = 0,$$ and there exist $k\in\mathbb{N}$ and a constant $C>0$ such that for all $n>k$, $$ \|u-u_n\|_{L_2(\Omega, L_2(\Gamma))} \leq C n^{-(\alpha-1/2)}.
$$ \end{Proposition} Moreover, we can also use the Karhunen-Lo\`eve expansion to obtain the following result on the boundary conditions of the odd-order derivatives, which will be needed in Section \ref{sec:markov} when we apply Theorem \ref{thm:CondDens} to obtain the density of the Whittle--Mat\'ern Gaussian process. More precisely, we will use there that the directional derivatives of the solution $u$ of $L^\alpha u = \mathcal{W}$, for integer $\alpha>1$, satisfy the Kirchhoff boundary conditions. \begin{Proposition}\label{prp:oddorderderiv} Let $\alpha=2k$ for $k\in\bbN$, and let $u$ be a solution of $L^{\alpha/2} u = L^k u = \mathcal{W}$. Then, the odd-order directional derivatives of $u$ are continuous on each edge $e\in\mathcal{E}$ and satisfy the Kirchhoff boundary condition, $\sum_{e\in\mathcal{E}_v} \partial_e^{2m+1} u(v) = 0$, where $m=0,\ldots,k-1$. \end{Proposition} Finally, as a corollary we obtain that for $\alpha>3/2$, the solution $u$ of \eqref{eq:spde} is mean-square differentiable (i.e., it has $L_2(\Omega)$ derivatives) at every point, and the $L_2(\Omega)$-derivative agrees with the weak derivative. \begin{Corollary}\label{cor:L2diffweakdiff} Let $u$ be the solution of \eqref{eq:spde}. If $\alpha>3/2$, then for every edge $e\in\mathcal{E}$ and every point $s\in e$, $u$ is $L_2(\Omega)$-differentiable at $s$ (if $s$ is a boundary point of the edge, one must consider a one-sided derivative). Furthermore, at every point of every edge of the graph, the $L_2(\Omega)$-derivative coincides with the weak derivative of $u$. Moreover, the same result holds for higher-order derivatives when they exist. \end{Corollary} \section{The Markov subclass}\label{sec:markov} Even though it is important to be able to consider a general smoothness parameter in the Whittle--Mat\'ern fields, we believe that the most important cases are $\alpha\in\mathbb{N}$, which correspond to the case of a local precision operator $\mathcal{Q} = \mathcal{C}^{-1}$.
The reason is that, as we will show, this choice results in Gaussian random fields with Markov properties. In this section, we will show that one can compute finite dimensional distributions as well as the covariance function of the Whittle--Mat\'ern fields explicitly in the case $\alpha\in\mathbb{N}$. We will first present the general strategy for deriving the finite dimensional distributions and then provide details for the two most important special cases, $\alpha=1$ and $\alpha=2$. Before that, however, we state the result that the random field is indeed Markov in these cases. \subsection{Markov properties} Let us begin by recalling some basic definitions regarding random fields and their Markov properties. To this end, we follow \cite{Rozanov1982Markov}, adapted to our case of random fields on compact metric graphs. \begin{Definition}\label{def:SigmaAlgebraRandomField} Let $\Gamma$ be a compact metric graph and $\{u(s): s\in \Gamma\}$ be a random field, that is, for every $s\in \Gamma$, $u(s)$ is a random variable. Define, for each open set $S\subset\Gamma$, the $\sigma$-algebra $\mathcal{F}_u(S) = \sigma(u(s): s\in S).$ The family $\{\mathcal{F}_u(S): S\subset\Gamma, S\hbox{ is open}\}$ is called the random field $\sigma$-algebra induced by $u$. \end{Definition} In order to define a Markov random field, we need the notion of splitting $\sigma$-algebras: \begin{Definition}\label{def:splittingsigmaalg} Let $\mathcal{F}_1, \mathcal{F}_2$ and $\mathcal{G}$ be sub-$\sigma$-algebras of $\mathcal{F}$. We say that $\mathcal{G}$ splits $\mathcal{F}_1$ and $\mathcal{F}_2$ if $\mathcal{F}_1$ and $\mathcal{F}_2$ are conditionally independent given $\mathcal{G}$.
That is, $\mathcal{G}$ splits $\mathcal{F}_1$ and $\mathcal{F}_2$ if for any $A_1\in\mathcal{F}_1$ and any $A_2\in\mathcal{F}_2$, we have $\bbP(A_1\cap A_2 | \mathcal{G}) = \bbP(A_1|\mathcal{G}) \bbP(A_2|\mathcal{G}).$ \end{Definition} Given any set $S\subset \Gamma$, we denote the topological boundary of $S$ by $\partial S$, and we let $(\partial S)_\varepsilon = \{x\in \Gamma: d(x, \partial S)<\varepsilon\}$ be the $\varepsilon$-neighborhood of $\partial S$. We can now define what it means to be a Markov random field on a metric graph. \begin{Definition}\label{def:MarkovPropertyField} Let $u$ be a random field on a compact metric graph $\Gamma$. We say that $u$ is Markov if for every open set $S$ there exists $\widetilde{\varepsilon}>0$ such that for every $0<\varepsilon< \widetilde{\varepsilon}$, $\mathcal{F}_u((\partial S)_\varepsilon)$ splits $\mathcal{F}_u(S)$ and $\mathcal{F}_u(\Gamma \setminus\overline{S})$, where $\{\mathcal{F}_u(S): S\subset\Gamma, S\hbox{ is open}\}$ is the random field $\sigma$-algebra induced by $u$. \end{Definition} We can now state the Markov property for the Whittle--Mat\'ern random fields on compact metric graphs; the proof can be found in Appendix \ref{app:Markovprop}: \begin{Theorem}\label{thm:markovspde} Let $\Gamma$ be a compact metric graph, let $\kappa^2>0$, and let $\alpha \in \{1,2\}$. Then, the solution $u$ of \eqref{eq:spde} is a Markov random field in the sense of Definition \ref{def:MarkovPropertyField}. \end{Theorem} \subsection{Finite dimensional distributions} In order to obtain the finite dimensional distributions of the solution $u$ of the SPDE \eqref{eq:spde}, one must compute the covariance function $\varrho$ of $u$. However, $\varrho$ is given by the Green function of the precision operator $L^\alpha$, which is in general very difficult to compute explicitly.
Typically, expressions for $\varrho$ are given as series expansions, such as that in Proposition~\ref{cor:mercercov}, which is something we would like to avoid. Therefore, we will compute the covariance function in an indirect manner. To this end, we will need some key facts about the process. The first is that the Kirchhoff vertex conditions allow us to remove vertices of degree $2$ (merging the two corresponding edges) without affecting the distribution of $u$. We formulate this fact as a proposition. \begin{Proposition}\label{prop:join} Suppose that $\Gamma$ has a vertex $v_i$ of degree 2, connected to edges $e_k$ and $e_\ell$, and define $\widetilde{\Gamma}$ as the graph where $v_i$ has been removed and $e_k,e_\ell$ merged into one new edge $\widetilde{e}_k$. Let $u$ be the solution to \eqref{eq:spde} on $\Gamma$, and let $\widetilde{u}$ be the solution to \eqref{eq:spde} on $\widetilde{\Gamma}$. Then $u$ and $\widetilde{u}$ have the same covariance function. \end{Proposition} \begin{proof} Let $a_L : H^1(\Gamma) \times H^1(\Gamma) \rightarrow \mathbb{R}$ denote the bilinear form corresponding to $L$ on $\Gamma$, and let $\widetilde{a}_L$ denote the corresponding bilinear form on $\widetilde{\Gamma}$. Note that $H^1(\Gamma) = H^1(\widetilde{\Gamma})$ due to the continuity requirement, and since the bilinear forms coincide for all functions on ${H^1(\Gamma)= H^1(\widetilde{\Gamma})}$, the Kirchhoff-Laplacians $\Delta_\Gamma$ and $\Delta_{\widetilde{\Gamma}}$ have the same eigenfunctions and eigenvalues. Therefore, the random fields $u$ and $\widetilde{u}$, as defined in terms of their spectral expansions, coincide.
\end{proof} The second key fact that we need for deriving the conditional distributions is formulated in Remark \ref{rem:conditional}; it is a direct consequence of the Markov property of the process and of the fact that, for $\alpha\in\mathbb{N}$, the precision operator of the solution $u$ of the SPDE \eqref{eq:spde} is a local operator. In Remark \ref{rem:localprec}, we explain in more detail how we use the locality of the precision operator. \begin{Remark}\label{rem:localprec} For $\alpha\in\mathbb{N}$, the precision operator of $u$, $L^\alpha$, can be decomposed as the sum $L^\alpha f = \sum_{e\in \mathcal{E}} L_e^\alpha f|_e$, where $L_e$ is the restriction of the differential operator \eqref{eq:Loperator} to the edge $e$ and $f\in\dot{H}^{2\alpha}$. Therefore, the restriction of the Green function of $L^\alpha$ to an edge $e$ is given by the Green function of $L_e^\alpha$ with appropriate boundary conditions. Also, note that the covariance function of $u$ is the Green function of the precision operator, so the restriction of the covariance function of $u$ to the edge $e$ coincides with the covariance function of a Whittle--Mat\'ern random field on the edge $e$ with appropriate boundary conditions. When $\alpha$ is an even positive integer, we can say more: in this case, the restriction of the solution of $L^{\alpha/2}u=\mathcal{W}$ to the edge $e$ solves $L_e^{\alpha/2} u = \mathcal{W}_e$, where $\mathcal{W}_e$ is the restriction of $\mathcal{W}$ to the edge $e$, that is, it acts on functions whose support is contained in $e$. \end{Remark} \begin{Remark}\label{rem:conditional} The finite dimensional distributions of the Whittle--Mat\'ern field on an edge, conditionally on the values of the process at the vertices connected to that edge, are the same as those of a Gaussian process with a Mat\'ern covariance function conditionally on the vertex values.
\end{Remark} These two key facts, together with a symmetry assumption, define a unique covariance function of the Whittle--Mat\'ern field when evaluated on any edge. This is shown in the following theorem, which is formulated for a multivariate random field since we will need to consider the process together with its derivatives when deriving the finite dimensional distributions of the Whittle--Mat\'ern fields for $\alpha>1$. \begin{Theorem} \label{thm:CondDens} Let $(\Omega, \mathcal{F},\mathbb{P})$ be a complete probability space and let $\mv{u} : \mathbb{R}\times \Omega \rightarrow \mathbb{R}^d$ be a stationary Gaussian Markov process with strictly positive definite (matrix-valued) covariance function $\mv{r}\left(s, t\right) = \Cov \left[ \mv{u}\left(s\right), \mv{u}\left(t\right) \right]$. Then \begin{align} \label{eq:covmod} \tilde{\mv{r}}_T\left(s,t\right) = \mv{r}(s,t) + \begin{bmatrix}\mv{r}(s, 0) & \mv{r}(s,T)\end{bmatrix} \begin{bmatrix} \mv{r}(0,0) & -\mv{r}(0,T) \\ -\mv{r}(T,0) & \mv{r}(0,0) \end{bmatrix}^{-1} \begin{bmatrix}\mv{r}(t,0)\\ \mv{r}(t,T) \end{bmatrix} \end{align} is a strictly positive definite covariance function on the domain $[0,T]$.
Further, the family $\{\tilde{\mv{r}}_T \}_{T\in \mathbb{R}_+}$ is the unique family of covariance functions on $\{[0,T]\}_{T\in \mathbb{R}_+}$ satisfying the following conditions: \begin{itemize} \item[i)] If $\tilde{ \mv{u} }$ is a centered Gaussian process on $[0,T]$ with covariance $\tilde{\mv{r}}_T\left(s,t\right)$, then for any $m\in\mathbb{N}$, any $\mv{t}\in\mathbb{R}^m, \mv{t}=(t_1,\ldots,t_m)$, and any $\mv{u}_0, \mv{u}_T\in\mathbb{R}^d$, $$ \tilde{ \mv{u} }(\mv{t})| \{\tilde{ \mv{u}}(0)=\mv{u}_0, \tilde{ \mv{u}}(T) = \mv{u}_T \} \stackrel{d}{=} \mv{u}(\mv{t})| \{\mv{u}(0)=\mv{u}_0, \mv{u}(T) = \mv{u}_T\}, $$ where $\tilde{ \mv{u} }(\mv{t}) = (\tilde{\mv{u}}(t_1), \ldots,\tilde{\mv{u}}(t_m)), \mv{u}(\mv{t})= (\mv{u}(t_1), \ldots,\mv{u}(t_m)).$ \item[ii)] $\tilde{\mv{r}}_T\left(0,0\right) = \tilde{\mv{r}}_T\left(T,T\right)$. \item[iii)] Let $T_1+T_2=T$ and let $\tilde{\mv{u}}_{T_1},\tilde{\mv{u}}_{T_2}$ and $\tilde{\mv{u}}_{T}$ be three independent centered Gaussian processes with covariance functions $\tilde{\mv{r}}_{T_1}\left(s,t\right),\tilde{\mv{r}}_{T_2}\left(s,t\right)$ and $\tilde{\mv{r}}_{T}\left(s,t\right)$, respectively. Define \begin{equation}\label{eq:cond3thmconddens} \tilde{ \mv{u}}^*(t ) = \mathbb{I}\left(t \leq T_1\right)\tilde{ \mv{u}}_{T_1}(t) + \mathbb{I}\left(t \geq T_1\right)\tilde{ \mv{u}}_{T_2}(t-T_1) | \tilde{ \mv{u}}_{T_1}(T_1) = \tilde{ \mv{u}}_{T_2}(0), \quad t\in [0,T], \end{equation} as the process obtained by joining $\tilde{\mv{u}}_{T_1}(t)$ and $\tilde{\mv{u}}_{T_2}(t)$ on $[0,T]$. Then $ \tilde{ \mv{u}}^* \stackrel{d}{=} \tilde{ \mv{u}}_{T} $ \end{itemize} \end{Theorem} The proof is given in Appendix~\ref{app:proof_theorem}. 
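As a quick numerical sanity check of \eqref{eq:covmod} (purely illustrative, not part of the formal development), consider the scalar case $d=1$ with the exponential covariance $r(s,t) = (2\kappa\tau^2)^{-1}e^{-\kappa|s-t|}$, for which the modified covariance has the closed form $(2\kappa\tau^2\sinh(\kappa T))^{-1}\left(\cosh(\kappa(T-|s-t|)) + \cosh(\kappa(s+t-T))\right)$ derived in the next subsection. All parameter values in the sketch below are our own choices.

```python
import numpy as np

# Scalar (d = 1) check of the covariance modification in the theorem, using the
# exponential (Matern nu = 1/2) covariance. kappa, tau, ell are illustrative.
kappa, tau, ell = 1.3, 0.7, 2.0
c = 1.0 / (2.0 * kappa * tau**2)
r = lambda h: c * np.exp(-kappa * np.abs(h))

def r_tilde(s, t, T=ell):
    """Modified covariance: r(s-t) + [r(s,0), r(s,T)] M^{-1} [r(t,0); r(t,T)]."""
    M = np.array([[r(0.0), -r(T)], [-r(T), r(0.0)]])
    v_s = np.array([r(s), r(s - T)])
    v_t = np.array([r(t), r(t - T)])
    return r(s - t) + v_s @ np.linalg.solve(M, v_t)

def r_closed(s, t, T=ell):
    """Closed form of the modified covariance for the exponential case."""
    return (np.cosh(kappa * (T - abs(s - t))) + np.cosh(kappa * (s + t - T))) \
        / (2.0 * kappa * tau**2 * np.sinh(kappa * T))

pts = np.linspace(0.0, ell, 9)
max_err = max(abs(r_tilde(s, t) - r_closed(s, t)) for s in pts for t in pts)
stationary_ends = abs(r_tilde(0.0, 0.0) - r_tilde(ell, ell))  # condition ii)
```

The check also confirms condition $ii)$, i.e., $\tilde{r}_T(0,0)=\tilde{r}_T(T,T)$.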
The results above give us the following strategy for deriving densities of finite dimensional distributions of Gaussian Whittle--Mat\'ern fields on $\Gamma$: \begin{enumerate} \item Let $\alpha\in\mathbb{N}$, and for each edge $e_i$ define independent multivariate processes \begin{equation}\label{eq:edge_process} \mv{u}^{\alpha}_i(s) = [u_i(s),u_i'(s),u_i''(s),\ldots, u_i^{(\alpha-1)}(s)]^\top, \end{equation} with covariance function \eqref{eq:covmod}, where $\mv{r}(s,t) = [r_{ij}(s,t)]$, $r_{ij}(s,t) = \frac{\pd^{i-1}}{\pd t^{i-1}}\frac{\pd^{j-1}}{\pd s^{j-1}}r(s,t)$, and $r$ is the Mat\'ern covariance function \eqref{eq:matern_cov} with $\nu=\alpha - \nicefrac{1}{2}$ and $d(s,t) = |s-t|$. \item Let $\mv{u}_e^\alpha(s) = \sum_{e_i \in \Gamma} \mathbb{I}\left(s \in e_i\right) \mv{u}^{\alpha}_i(s)$, and let $\mathcal{A}$ denote the Kirchhoff vertex conditions. More precisely, by Theorem \ref{thm:weakregularity} and Proposition \ref{prp:oddorderderiv}, $\mathcal{A}$ encodes continuity of the functions $u^{(2k)}$, $k=0,\ldots,\lfloor\nicefrac{\alpha-1}{2}\rfloor$, at the vertices, and that $u^{(2k+1)}$, $k=0,\ldots,\lfloor\nicefrac{\alpha}{2}-1\rfloor$, satisfy the Kirchhoff boundary condition. Now, define \begin{equation}\label{eq:process} \mv{u}^\alpha(s) = [u(s),u'(s),u''(s),\ldots, u^{(\alpha-1)}(s)]^\top = \mv{u}_e^\alpha(s) |\mathcal{A}. \end{equation} \end{enumerate} Then, by construction, $\mv{u}^\alpha(s)$ satisfies the Kirchhoff vertex conditions, so by Proposition \ref{prop:join} and Remark \ref{rem:conditional}, $u(s)$ is a solution to \eqref{eq:spde}. We can now derive the density of $\mv{u}^\alpha(s)$ evaluated at the vertices, which, due to Proposition \ref{prop:join}, uniquely defines any finite dimensional distribution of $u$ on $\Gamma$.
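For concreteness, the derivative blocks $r_{ij}$ in step 1 can be written in closed form for $\alpha=2$ (i.e., $\nu=\nicefrac{3}{2}$). The following illustrative sketch (our own notation, with hypothetical parameter values) builds the $2\times 2$ cross-covariance blocks of $(u,u')$ from the Mat\'ern covariance and checks that the resulting joint covariance matrix is positive semi-definite.

```python
import numpy as np

# Matern nu = 3/2 covariance f and its first two derivatives; the sign pattern
# in the blocks follows from Cov(u(s), u'(t)) = d/dt f(s - t) = -f'(s - t).
# kappa, tau and the evaluation points are illustrative choices.
kappa, tau = 1.0, 1.0
c = 1.0 / (4 * kappa**3 * tau**2)
f   = lambda h: c * (1 + kappa * np.abs(h)) * np.exp(-kappa * np.abs(h))
fp  = lambda h: -c * kappa**2 * h * np.exp(-kappa * np.abs(h))                 # f'
fpp = lambda h: -c * kappa**2 * (1 - kappa * np.abs(h)) * np.exp(-kappa * np.abs(h))  # f''

def cross_cov(s, t):
    """2x2 block Cov([u(s), u'(s)], [u(t), u'(t)]) for the Matern nu=3/2 process."""
    h = s - t
    return np.array([[f(h), -fp(h)], [fp(h), -fpp(h)]])

pts = np.linspace(0.0, 3.0, 7)
n = len(pts)
C = np.zeros((2 * n, 2 * n))
for i, s in enumerate(pts):
    for j, t in enumerate(pts):
        C[2 * i:2 * i + 2, 2 * j:2 * j + 2] = cross_cov(s, t)

min_eig = np.linalg.eigvalsh(C).min()  # should be >= 0 up to rounding
```

Positive semi-definiteness holds because $(u,u')$ is the joint law of a mean-square differentiable process and its derivative.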
We introduce the vector \begin{align} \label{eq:def_ue} \mv{U}_e^\alpha= [\mv{u}_1^\alpha(\underbar{e}_1),\mv{u}_1^\alpha(\bar{e}_1),\mv{u}_2^\alpha(\underbar{e}_2),\mv{u}_2^\alpha(\bar{e}_2),\ldots, \mv{u}_m^\alpha(\underbar{e}_m),\mv{u}_m^\alpha(\bar{e}_m)], \end{align} whose distribution is given by \begin{align} \label{eq:dens_ue} \mv{U}_e^\alpha \sim \mathcal{N}\left(\mv{0},\mv{Q}^{-1}_e\right), \end{align} where $\mv{Q}_e= \operatorname{diag}(\{\mv{Q}_i\}^m_{i=1})$ and $\mv{Q}_i$ is the precision matrix of $\{\mv{u}_i^\alpha(\underbar{e}_i),\mv{u}_i^\alpha(\bar{e}_i)\}$, given by \eqref{eq:Qedge} in the following basic result about the covariance function in \eqref{eq:covmod}, whose proof is also given in Appendix \ref{app:proof_theorem}: \begin{Corollary}\label{cor:precfuncconddens} \label{Cor:prec} If $\tilde{ \mv{u} }$ is a Gaussian process on $[0,T]$ with covariance $\tilde{\mv{r}}_T\left(s,t\right)$ from \eqref{eq:covmod}, then the precision matrix of $\left[\tilde{\mv{u}}_T(0),\tilde{\mv{u}}_T(T)\right]$ is \begin{equation}\label{eq:Qedge} \tilde{\mv{Q}} = \mv{Q} - \frac{1}{2}\begin{bmatrix} \mv{r}(0,0)^{-1} & \mv{0} \\ \mv{0} & \mv{r}(0,0)^{-1} \end{bmatrix}, \end{equation} where $\mv{Q} $ is the precision matrix of $\left[{ \mv{u} }(0),{ \mv{u} }(T) \right]$. \end{Corollary} Since the Kirchhoff vertex conditions are linear in $\mv{U}_e^\alpha$, there exists a matrix $\mv{A}$ such that the conditions can be written as \begin{equation}\label{eq:Akirchoff} \mv{A}\mv{U}_e^\alpha = \mv{0}. \end{equation} Thus, the distribution of the vector \begin{align} \label{eq:def_u} \mv{U}^\alpha= [\mv{u}^\alpha(\underbar{e}_1),\mv{u}^\alpha(\bar{e}_1),\mv{u}^\alpha(\underbar{e}_2),\mv{u}^\alpha(\bar{e}_2),\ldots, \mv{u}^\alpha(\underbar{e}_m),\mv{u}^\alpha(\bar{e}_m)], \end{align} is given by $ \mv{U}^{\alpha} \sim \mathcal{N}\left(\mv{0},\mv{Q}^{-1}_e\right)| \mv{A} \mv{U}^{\alpha} = \mv{0}.
$ The following example illustrates this procedure for a simple graph with three edges. \begin{figure} \includegraphics[height=0.35\linewidth]{figs/graph_edges} \includegraphics[height=0.35\linewidth]{figs/graph_edges2} \caption{A graph $\Gamma$ (left) and the split used to define the independent edge processes (right).} \label{fig:split_graph} \end{figure} \begin{example} In the case of the graph in Figure~\ref{fig:split_graph}, we have that $\mathcal{A} = \{\mathcal{A}_1,\mathcal{A}_2\}$, where if $\alpha=1$ we simply have \begin{equation}\label{eq:cont} \mathcal{A}_1 = \{u_1(\underbar{e}_1) = u_2(\underbar{e}_2) = u_3(\underbar{e}_3) \}, \quad \mathcal{A}_2 = \{u_1(\bar{e}_1) = u_2(\bar{e}_2) = u_3(\bar{e}_3) \}, \end{equation} and $\mv{A}$ is a $4\times 6$ matrix, for example $$ \mv{A} = \begin{bmatrix} 1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 0 & -1 \end{bmatrix}. $$ If $\alpha=2$ we instead have \begin{equation}\label{eq:cont2} \begin{split} \mathcal{A}_1 &= \{u_1(\underbar{e}_1) = u_2(\underbar{e}_2) = u_3(\underbar{e}_3), u_1^{(1)}(\underbar{e}_1) + u_2^{(1)}(\underbar{e}_2) + u_3^{(1)}(\underbar{e}_3)=0 \}, \\ \mathcal{A}_2 &= \{u_1(\bar{e}_1) = u_2(\bar{e}_2) = u_3(\bar{e}_3),u_1^{(1)}(\bar{e}_1) + u_2^{(1)}(\bar{e}_2) + u_3^{(1)}(\bar{e}_3)=0 \}, \end{split} \end{equation} and $\mv{A}$ is a $6\times 12$ matrix, for example \setcounter{MaxMatrixCols}{20} $$ \mv{A} = \begin{bmatrix} 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & -1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{bmatrix}. $$ \end{example} We will now illustrate how to use this procedure for $\alpha=1$ and $\alpha=2$.
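The conditioning $\mv{U}_e^\alpha \mid \mv{A}\mv{U}_e^\alpha = \mv{0}$ can be carried out with standard Gaussian linear-constraint algebra. Below is a minimal numerical sketch for $\alpha=1$ on a three-edge graph with two shared vertices; the constraint matrix is our own signed encoding of the continuity conditions, and all parameter values are illustrative.

```python
import numpy as np

# Edge precision matrix for alpha = 1; kappa, tau, ell are illustrative.
kappa, tau, ell = 1.0, 1.0, 1.5
q1 = 0.5 + np.exp(-2 * kappa * ell) / (1 - np.exp(-2 * kappa * ell))  # diagonal entry
q2 = -np.exp(-kappa * ell) / (1 - np.exp(-2 * kappa * ell))           # off-diagonal entry
Q_edge = 2 * kappa * tau**2 * np.array([[q1, q2], [q2, q1]])

# Block-diagonal precision of three independent edge processes, with state
# ordering [u1(start), u1(end), u2(start), u2(end), u3(start), u3(end)].
Q_e = np.kron(np.eye(3), Q_edge)
Sigma = np.linalg.inv(Q_e)

# Our signed encoding of the continuity constraints A u = 0
A = np.array([
    [1, 0, -1, 0, 0, 0],   # u1 = u2 at the start vertex
    [0, 0, 1, 0, -1, 0],   # u2 = u3 at the start vertex
    [0, 1, 0, -1, 0, 0],   # u1 = u2 at the end vertex
    [0, 0, 0, 1, 0, -1],   # u2 = u3 at the end vertex
])

# Gaussian conditioning on the linear constraint A u = 0
S = A @ Sigma @ A.T
Sigma_cond = Sigma - Sigma @ A.T @ np.linalg.solve(S, A @ Sigma)

# After conditioning, every constrained combination has zero variance
resid = A @ Sigma_cond @ A.T
```

After conditioning, the three edge processes agree at the shared vertices, as the zero residual variances confirm.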
\subsection{The case $\alpha=1$} In this case, by Theorem~\ref{regularity1}, the process is not differentiable, so we do not need to consider any derivatives in the construction. It is then easy to see that the covariance in \eqref{eq:covmod} with $T=\ell_e$ is given by \begin{align} \tilde{r}(s,t) &= r(s-t) + \begin{bmatrix} r(s) & r(s-\ell_e) \end{bmatrix} \begin{bmatrix} r(0) & -r(\ell_e) \\ -r(\ell_e) & r(0) \end{bmatrix}^{-1} \begin{bmatrix} r(t) \\ r(t-\ell_e) \end{bmatrix}\label{eq:exp1}\\ &= \frac1{2\kappa \tau^2\sinh(\kappa \ell_e)}(\cosh(\kappa(\ell_e-|s-t|)) + \cosh(\kappa(s+t-\ell_e))),\label{eq:exp2} \end{align} where $r(h) = (2\kappa\tau^2)^{-1}\exp(-\kappa |h|)$. Here, \eqref{eq:exp2} follows from \eqref{eq:exp1} by standard hyperbolic identities. Further, the precision matrix in \eqref{eq:Qedge} is \begin{equation} \label{eq:precQexp} \tilde{\mv{Q}} = (2\kappa\tau^2) \begin{bmatrix} \frac{1}{2} + \frac{e^{-2\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} & -\frac{e^{-\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} \\ -\frac{e^{-\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} & \frac{1}{2} + \frac{e^{-2\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} \end{bmatrix}. \end{equation} Since the process is not differentiable, the conditioning step in \eqref{eq:process} is simple enough that we can obtain an explicit precision matrix for all vertices. We formulate this as a corollary. \begin{Corollary}\label{cor:exp_Q} For each edge $e_i\in\mathcal{E}$, define $u_i$ as a centered Gaussian process with covariance given by \eqref{eq:exp1}. Define the process $u(s)$ on each edge as $u(s)|_{e_i} = u_i(s) | \mathcal{A}$, where $\mathcal{A}$ encodes the continuity requirement at each vertex (see \eqref{eq:cont}). Then $u$ solves \eqref{eq:spde} on $\Gamma$ with $\alpha=1$. Further, let $\mv{U}$ be an $n_v$-dimensional Gaussian random variable with $U_j = u(v_j)$.
Then $\mv{U} \sim \mathcal{N} \left( \mv{0}, \mv{Q}^{-1}\right)$, where the precision matrix $\mv{Q}$ has elements \begin{equation} \label{eq:expprec} Q_{ij} = 2\kappa\tau^2\cdot \begin{cases} \sum\limits_{e \in \mathcal{E}_{v_i} } \left( \frac{1}{2} + \frac{e^{-2\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} \right) \mathbb{I}\left(\bar{e}\neq \underbar{e}\right)+ \tanh\left(\frac{\kappa \ell_e}{2}\right) \mathbb{I}\left(\bar{e}= \underbar{e}\right) & \text{if $i=j$},\\ \sum\limits_{e \in \mathcal{E}_{v_i} \cap \mathcal{E}_{v_j} } -\frac{e^{-\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} & \text{if $i\neq j$.} \end{cases} \end{equation} \end{Corollary} \begin{proof} Assume without loss of generality that $\tau^2=\frac{1}{2\kappa}$ and that $\Gamma$ has $m_0$ edges. For each circular edge, with $\bar{e} = \underbar{e}$, add a vertex $v^*$ at an arbitrary location on the edge, which we denote by $e^*$. Assume now that we have $m$ edges. Define $\mv{u}_e= \left[u_1(0),u_1(\ell_{e_1}),\ldots, u_m(0),u_m(\ell_{e_m}) \right]$, which by \eqref{eq:precQexp} has density \begin{align*} f_{\mv{u}_e}\left(\mv{u}_e\right) \propto \exp \left( - \frac{1}{2} \sum_{e\in\mathcal{E}} \left[ q_{e,1} u^2_e(0) + q_{e,1} u^2_e(\ell_e)- 2 q_{e,2} u_e(0)u_e(\ell_e) \right] \right), \end{align*} where $q_{e,1} = \frac{1}{2} + \frac{e^{-2\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} $ and $q_{e,2} = -\frac{e^{-\kappa \ell_e}}{1-e^{-2\kappa \ell_e}}$. Now define $\mv{u}_v = \left[u(v_1), \ldots, u(v_{n_v})\right]$ and let $ \mv{A}$ be the mapping that links each vertex, $v_i$, to its position on the edges. Then \begin{align*}f_{\mv{u}_v}\left(\mv{u}_v\right) = f_{\mv{u}_e}\left(\mv{A}\mv{u}_v\right) &\propto \exp \left( - \frac{1}{2} \sum_{e\in\mathcal{E}} \left[ q_{e,1} u^2(\underbar{e}) + q_{e,1} u^2(\bar{e})- 2 q_{e,2}u(\underbar{e})u(\bar{e})\right] \right) \\ &= \exp \left(- \frac{1}{2} \mv{u}_v ^\trsp \mv{Q} \mv{u}_v\right).
\end{align*} Now, for each vertex $v^*$ that was created to remove a circular edge, we have two edges $e^*_0$ and $e^*_1$ that split $e^*$; for the corresponding vertex $v_i$ we then have \begin{align*} f_{\mv{u}_v}\left(\mv{u}_v\right) \propto \exp \left( - \frac{1}{2} \left[ \left( q_{e^*_0,1} + q_{e^*_1,1} \right) \left( u_{v^*}^2 +u_{v_i}^2 \right)- 2 \left( q_{e^*_0,2} + q_{e^*_1,2} \right) u_{v^*} u_{v_i} \right] \right), \end{align*} and integrating out $u_{v^*}$ gives $\exp \left( - \frac{1}{2} \tanh\left(\frac{\kappa \ell_{e^*}}{2} \right) u_{v_i}^2 \right)$. \end{proof} Let us consider a few examples of this construction. The first is a graph with only one edge and two vertices, i.e., an interval. \begin{example}\label{ex:exp_line} Consider the SDE $(\kappa^2 - \Delta)^{1/2} \tau u = \mathcal{W}$ on an interval $[0,\ell_e]$, where $\Delta$ is the Neumann-Laplacian. Then the covariance function of $u$ is given by \eqref{eq:exp2}. \end{example} The second special case is the circle, where we only have one continuity requirement. \begin{example}\label{ex:exp_circle} Consider the SDE $(\kappa^2 - \Delta)^{1/2} \tau u = \mathcal{W}$ on a circle with perimeter $T$. Then the covariance function of $u$ is given by \begin{align*} \tilde{r}(t,s) &= \frac1{2\kappa\tau^2}e^{-\kappa |s-t|} + \frac{e^{-\kappa T}}{\kappa\tau^2(1-e^{-\kappa T})}\cosh(\kappa|t-s|) = \frac{\cosh(\kappa(|t-s|-T/2))}{2\kappa\tau^2\sinh(\kappa T/2)}. \end{align*} \end{example} There are close connections between the Whittle--Mat\'ern process and the process that \citet{anderes2020isotropic} used to define the resistance metric of a graph. \begin{Corollary} Assume that $\Gamma$ is a graph with Euclidean edges. The process $Z_{\mathcal{G}}$ in equation (6) of \cite{anderes2020isotropic}, without the origin node, is obtained by setting $\tau^2 = 2\kappa$ and letting $\kappa \rightarrow 0$ in \eqref{eq:Matern_spde}.
\end{Corollary} \begin{proof} Since $\Gamma$ is a graph with Euclidean edges, there are no circular edges. Hence, by the expression for $Q_{ij}$ in \eqref{eq:expprec} and L'H\^opital's rule, we have that $$ \lim_{\kappa\rightarrow 0} 2\kappa Q_{ij} = \begin{cases} \sum_{e \in \mathcal{E}_{v_i}} \frac{1}{\ell_e} & \mbox{if }i=j,\\ \sum_{e\in\mathcal{E}_{v_i}\cap\mathcal{E}_{v_j}} -\frac{1}{\ell_e} & \mbox{if }i\neq j. \end{cases} $$ Noting that $\ell_e$ corresponds to the distance between two vertices completes the proof. \end{proof} At this stage, it is interesting to compare with a Gaussian process with an exponential covariance function on $\Gamma$, which (given that $\Gamma$ is a graph with Euclidean edges) by \citet{anderes2020isotropic} is a valid Gaussian process. In the case of a graph with one edge and two vertices (i.e., a Gaussian process on an interval), this is a Markov process and thus has the same conditional behavior as the process in Example \ref{ex:exp_line}. However, in the case of a circle, it is easy to show that it is not a Markov process. For this reason, and despite the popularity of the exponential covariance function, we believe that the Whittle--Mat\'ern field is a more natural choice than a Gaussian process with an exponential covariance function for data on metric graphs, in particular since it has a natural SPDE representation. To conclude this section, we note that we can now write down an explicit formula for any finite dimensional distribution of the process. For this, we need the notion of an extended graph. Suppose that we have locations $s_1,\ldots,s_m \in \Gamma$. We then let $\bar{\Gamma}_{\mv{s}}$ denote the extended graph where $\{s_1,\ldots,s_m\}$ are added as vertices, the edges containing those locations are subdivided, and any duplicate vertices (which would occur if $s_i\in V$ for some $i$) are removed.
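The limit in the corollary above is easy to verify numerically. The sketch below assembles the vertex precision matrix via \eqref{eq:expprec} for a small toy triangle graph (our own choice), using the normalization $\tau^2=\nicefrac{1}{2\kappa}$ from the proof of Corollary \ref{cor:exp_Q}, and compares $2\kappa\mv{Q}$ with the weighted graph Laplacian with conductances $1/\ell_e$.

```python
import numpy as np

# Toy triangle graph: vertices 0,1,2 with edges (i, j, length). Illustrative only.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 0.5)]
kappa = 1e-4
tau2 = 1.0 / (2.0 * kappa)   # normalization used in the proof, so 2*kappa*tau2 = 1

n = 3
Q = np.zeros((n, n))
for i, j, ell in edges:      # assemble the vertex precision matrix entry by entry
    a = np.exp(-kappa * ell)
    q1 = 0.5 + a**2 / (1 - a**2)
    q2 = -a / (1 - a**2)
    pref = 2 * kappa * tau2
    Q[i, i] += pref * q1
    Q[j, j] += pref * q1
    Q[i, j] += pref * q2
    Q[j, i] += pref * q2

# Weighted graph Laplacian with conductances 1/ell_e
Lap = np.zeros((n, n))
for i, j, ell in edges:
    Lap[i, i] += 1 / ell
    Lap[j, j] += 1 / ell
    Lap[i, j] -= 1 / ell
    Lap[j, i] -= 1 / ell

err = np.max(np.abs(2 * kappa * Q - Lap))  # small for small kappa
```

For small $\kappa$, $2\kappa\mv{Q}$ approaches the graph Laplacian, consistent with the corollary.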
Let \begin{equation}\label{eq:Qbar} \bar{\mv{Q}}_{\mv{s}} = \begin{bmatrix} \mv{Q}_{\mv{vv}} & \mv{Q}_{\mv{vs}} \\ \mv{Q}_{\mv{sv}} & \mv{Q}_{\mv{ss}} \end{bmatrix} \end{equation} denote the corresponding vertex precision matrix from Corollary~\ref{cor:exp_Q}. Then we have: \begin{Corollary}\label{cor:exp_density} Suppose that $u(s)$ is the solution to \eqref{eq:spde} on a metric graph $\Gamma$ with $\alpha=1$, and let $\mv{U}$ denote the vector obtained by evaluating $u$ at the locations $s_1,\ldots,s_m \in \Gamma$. Then $\mv{U}\sim\mathcal{N}(\mv{0},\mv{Q}_\mv{s}^{-1})$, where $\mv{Q}_{\mv{s}} = \mv{Q}_{ss} - \mv{Q}_{sv} \mv{Q}_{vv}^{-1} \mv{Q}_{vs}$ and the blocks are given in \eqref{eq:Qbar}. \end{Corollary} \subsection{The case $\alpha=2$} We now consider the second fundamental case of the Markov subclass. When $\alpha=2$, the process is differentiable on the edges, and the construction of $u$ is slightly more involved since we need to consider $\mv{u}_2(s) = [u(s), u'(s)]^\top$. However, we can still construct the process by first considering independent processes on each edge and then combining them by conditioning on the vertex conditions. Specifically, we have the following representation of the covariance on each edge. \begin{Corollary}\label{prop:alpha2} For $\alpha=2$, the covariance of $u$ on the interval $[0,\ell_e]$ is given, as a marginal covariance of \eqref{eq:covmod}, by \begin{equation*} \tilde{r}(s,t) = r(s-t) - \begin{bmatrix} r(s) \\ -r'(s) \\ r(s-\ell_e)\\ -r'(s-\ell_e) \end{bmatrix}^\top \begin{bmatrix} -r(0) & 0 & r(\ell_e) & r'(\ell_e) \\ 0 & r''(0) & -r'(\ell_e) & -r''(\ell_e) \\ r(\ell_e) & -r'(\ell_e) & -r(0) & 0 \\ r'(\ell_e) & -r''(\ell_e) & 0 & r''(0) \\ \end{bmatrix}^{-1}\begin{bmatrix} r(t) \\ -r'(t) \\ r(t-\ell_e)\\ -r'(t-\ell_e) \end{bmatrix}, \end{equation*} where $r(h)=(4\kappa^3\tau^2)^{-1}\left(1 + \kappa |h|\right)\exp(-\kappa |h|)$.
Further, the precision matrix for the vector $[u(0),u'(0),u(\ell_e),u'(\ell_e)]$ is $$ \tilde{\mv{Q}}= \frac{2\kappa\tau^2}{\left(1-2 \kT^2\right)^2-2 e^{2 \kT} \left(2 \kT^2+1\right)+e^{4 \kT}}\begin{bmatrix} \kappa^2q_{1} & \kappa q_{2} & \kappa^2q_{3} & \kappa q_{4}\\ \kappa q_{2} & q_{5} & \kappa q_{4} & q_{6}\\ \kappa^2 q_{3} & \kappa q_{4} & \kappa^2q_{1} & \kappa q_{2} \\ \kappa q_{4} & q_{6} & \kappa q_{2} & q_{5} \end{bmatrix},$$ where $\kT = \kappa \ell_e$ and the $q$-coefficients are $q_{1} = e^{4 \kT} -\left(1-2 \kT^2\right)^2+4 \kT e^{2\kT}$, $q_{2} = 4 \kT e^{2 \kT},$ $q_{3} = 2e^{\kT} \left(2\kT^2(\kT-1) -\kT-e^{2 \kT} (\kT+1)+1\right), $ $q_{4} = 2 \kT e^{\kT} \left(2 \kT^2-e^{2 \kT}-1\right),$ $$ q_{5} = -\kappa ^2 \left(1-2 \kT^2\right)^2+e^{2 \kT} \left(2 \kappa ^2+4 \left(\kappa ^2-1\right) \kT^2-4 \kT -2\right)-\left(\kappa ^2-2\right) e^{4 \kT},$$ and $q_{6} = 2 e^{\kT} \left(-2 \kT^3-2 \kT^2+\kT+e^{2 \kT} (\kT-1)+1\right).$ \end{Corollary} In the case of a graph with a single edge and two vertices, i.e., an interval, we do not have any additional constraints and can simplify the covariance $\tilde{r}(s,t)$ as follows. \begin{example}\label{ref:lemNeumann2} Consider the SDE $(\kappa^2 - \Delta) \tau u = \mathcal{W}$ on an interval $[0,\ell_e]$, where $\Delta=\frac{d^2}{dt^2}$ is the Neumann-Laplacian. Then the covariance function of $u$ is given by \begin{align*} \tilde{r}(t,s) =& r(|h|) + \frac{r(h)+r(-h)+e^{2\kappa \ell_e}r(v)+ r(-v) }{2e^{\kappa \ell_e} \sinh(\kappa \ell_e) } + \frac{\ell_e\cosh(\kappa s)\cosh(\kappa t)}{2\kappa^2\tau^2\sinh(\kappa \ell_e)^2 }, \end{align*} where $h=s-t$, $v= s+t$, and $r(h)=(4\kappa^3\tau^2)^{-1}\left(1 + \kappa h\right)\exp(-\kappa h)$. \end{example} In the case of a circle, we instead get the following simplification. \begin{example}\label{ref:circle2} Consider the SDE $(\kappa^2 - \Delta) \tau u = \mathcal{W}$ on a circle with perimeter $\ell_e$. Then the covariance function of $u$ is given by \begin{align*} r(t,s) =& \frac1{4\kappa^3\tau^2\sinh(\kappa \ell_e/2)}\left(\left[1+\frac{\kappa \ell_e}{2}\coth\left(\frac{\kappa \ell_e}{2}\right)\right]\cosh(v) + v\sinh(v)\right), \end{align*} where $v = \kappa(|s-t|-\ell_e/2)$.
\end{example} We can derive the finite dimensional distributions for point observations of $u$ in the case $\alpha=2$ by first constructing the precision matrix for $u$ and $u'$ and then marginalizing out $u'$. This will, however, result in a dense precision matrix, and we discuss better options for inference in Section~\ref{sec:inference}. \subsection{Non-symmetric boundary conditions}\label{sec:boundary} \begin{figure}[t] \includegraphics[width=0.4\linewidth]{figs/fig2_adj1} \includegraphics[width=0.4\linewidth]{figs/fig2_adj2} \caption{Marginal variances of the Whittle--Mat\'ern field with $\alpha=1$ and $\kappa=5$ and adjusted boundary conditions on two different graphs. } \label{fig:variances2} \end{figure} The assumption $ii)$ in Theorem \ref{thm:CondDens} can be relaxed to allow for different boundary conditions. From a practical viewpoint, the increasing variance at boundary vertices seen in Figure \ref{fig:variances} will often not be ideal; instead, one can make the boundary conditions at each vertex depend on its degree. The easiest option is to assume that vertices of degree one have a stationary distribution, which means that one does not remove $\frac{1}{2}\mv{r}(0,0)^{-1}$ for one of the end points in \eqref{eq:Qedge}. Thus, for instance for $\alpha=1$, \eqref{eq:expprec} would be modified to \begin{equation*} Q_{ij} = 2\kappa\tau^2\cdot \begin{cases} \frac{1}{2}\mathbb{I}\left(\deg(v_i) =1\right)+ \hspace{-0.15cm}\sum\limits_{e \in \mathcal{E}_{v_i}} \hspace{-0.2cm}\left( \frac{1}{2} + \frac{e^{-2\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} \right) \mathbb{I}\left(\bar{e}\neq \underbar{e}\right)+ \tanh\left(\frac{\kappa \ell_e}{2}\right) \mathbb{I}\left(\bar{e}= \underbar{e}\right) &\hspace{-0.15cm} i=j,\\ \sum\limits_{e \in \mathcal{E}_{v_i} \cap \mathcal{E}_{v_j} } -\frac{e^{-\kappa \ell_e}}{1-e^{-2\kappa \ell_e}} &\hspace{-0.15cm} \text{$i\neq j$.} \end{cases} \end{equation*} This adjustment corresponds to changing the vertex conditions at vertices of degree 1.
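To make the assembly of this adjusted precision matrix concrete, the following sketch (our illustration, not the authors' implementation; the function and variable names are ours) builds $\mv{Q}$ for a toy graph given as a list of edges with lengths, following the displayed formula:

```python
import numpy as np

def adjusted_precision(n_vertices, edges, kappa, tau):
    """Assemble the alpha = 1 precision matrix with the degree-one boundary
    adjustment described in the text. `edges` is a list of (i, j, length)
    tuples; loop edges (i == j) use the tanh term."""
    Q = np.zeros((n_vertices, n_vertices))
    deg = np.zeros(n_vertices, dtype=int)
    for i, j, _ in edges:
        deg[i] += 1
        deg[j] += 1
    for i, j, ell in edges:
        if i == j:  # loop edge
            Q[i, i] += np.tanh(kappa * ell / 2.0)
            continue
        q = np.exp(-2.0 * kappa * ell) / (1.0 - np.exp(-2.0 * kappa * ell))
        Q[i, i] += 0.5 + q
        Q[j, j] += 0.5 + q
        off = np.exp(-kappa * ell) / (1.0 - np.exp(-2.0 * kappa * ell))
        Q[i, j] -= off
        Q[j, i] -= off
    Q[deg == 1, deg == 1] += 0.5  # stationary adjustment at degree-one vertices
    return 2.0 * kappa * tau**2 * Q

# Sanity check: on a single edge, both endpoints are adjusted, so the inverse
# of Q should recover the stationary exponential covariance
# r(h) = exp(-kappa*|h|)/(2*kappa*tau^2) at and between the two vertices.
kappa, tau, ell = 2.0, 1.0, 1.3
Q1 = adjusted_precision(2, [(0, 1, ell)], kappa, tau)
Sigma1 = np.linalg.inv(Q1)
```

On a single edge, the adjustment at both endpoints makes the field stationary, which provides a simple correctness check of the assembly.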
With this adjustment, we obtain the marginal variances shown in Figure~\ref{fig:variances2} for the examples in Figure~\ref{fig:variances}. We applied this boundary condition to the field generated in Figure \ref{fig:street_graph}, where the graph is generated on a connected component of the street map of New York \citep{openstreet}. The graph contains $21297$ edges and $13026$ vertices and the process has $\alpha=1$. It might be possible to remove the sink in the variance for vertices with degree greater than two, seen in Figure \ref{fig:variances}, by adjusting the precision matrix in a way similar to Corollary \ref{Cor:prec}. However, we leave investigations of such adjusted boundary conditions for future work. \section{Statistical properties}\label{sec:statistical} In this section, we consider the most important statistical properties of the Whittle--Mat\'ern fields. Let $\mu(\cdot; \kappa,\tau,\alpha) = \pN(0,\tau^{-2}L^{-\alpha})$ denote the Gaussian measure corresponding to $u$ on $L_2(\Gamma)$. That is, for every Borel set $B\in \mathcal{B}(L_2(\Gamma))$, we have $$ \mu(B; \kappa,\tau,\alpha) = \mathbb{P}(\{\omega\in\Omega : u(\cdot, \omega)\in B\}). $$ In the following subsections, we first investigate consistency of maximum-likelihood parameter estimation for the Whittle--Mat\'ern fields, and then consider asymptotic optimality of kriging prediction based on misspecified model parameters. \begin{figure} \centering \includegraphics[width=0.49\linewidth]{Street_graph} \includegraphics[width=0.49\linewidth]{Street_graph_cov} \caption{A realization of a Whittle--Mat\'ern field with $\alpha=1, \kappa=2$, and $\tau = \nicefrac{1}{24}$ over a New York street network (left) and example of the covariance of the field between one point in the network and all others (right).
The graph contains $21297$ edges and $13026$ vertices.} \label{fig:street_graph} \end{figure} \subsection{Parameter estimation} Recall that two probability measures $\mu,\widetilde{\mu}$ on $L_2(\Gamma)$ are said to be equivalent if for any Borel set $B$, $\mu(B)=0$ holds if and only if $\widetilde{\mu}(B)=0$. If, on the other hand, there exists a Borel set $B$ such that $\mu(B)=0$ and $\widetilde{\mu}(B)=1$, one says that the measures are orthogonal. Equivalence and orthogonality play a crucial role in the study of asymptotic properties of Gaussian random fields. Necessary and sufficient conditions for equivalence are given by the Feldman--H\'ajek theorem \citep[][Theorem 2.25]{daPrato2014}; however, the conditions of this theorem are given in terms of the corresponding covariance operators and are therefore difficult to verify in general. In our setting, the covariance operators of two Whittle--Mat\'ern fields diagonalize with respect to the eigenfunctions of the Kirchhoff-Laplacian. This allows us to derive the following result concerning equivalence of measures for the Whittle--Mat\'ern fields on metric graphs. \begin{Proposition}\label{prop_measure} Suppose that $\mu(\cdot; \kappa,\tau,\alpha)$ and $\mu(\cdot; \widetilde{\kappa},\widetilde{\tau},\widetilde{\alpha})$ are two Gaussian measures on $L_2(\Gamma)$ as defined above, with parameters $\kappa,\tau>0, \alpha>1/2$ and $ \widetilde{\kappa},\widetilde{\tau}>0, \widetilde{\alpha}>1/2$, respectively. Then $\mu(\cdot; \kappa,\tau,\alpha)$ and $\mu(\cdot; \widetilde{\kappa},\widetilde{\tau},\widetilde{\alpha})$ are equivalent if and only if $\alpha=\widetilde{\alpha}$ and $\tau=\widetilde{\tau}$.
\end{Proposition} \begin{proof} The two Gaussian measures can be written as $\mu(\cdot; \kappa,\tau,\alpha) = \pN(0,\tau^{-2} L^{-\alpha})$ and $\mu(\cdot; \widetilde{\kappa},\widetilde{\tau},\widetilde{\alpha}) = \pN(0,\widetilde{\tau}^{-2} \widetilde{L}^{-\widetilde{\alpha}})$, where $L = \kappa^2 + \Delta_\Gamma$ and $\widetilde{L} = \widetilde{\kappa}^2 + \Delta_\Gamma$. Recall that $\{\hat{\lambda}_{i}\}$ and $\{\eig_i\}$ denote the eigenvalues and corresponding eigenfunctions of the Kirchhoff-Laplacian, that $\{\eig_i\}$ forms an orthonormal basis of $L_2(\Gamma)$, and that this is also an eigenbasis for both $L$ and $\widetilde{L}$, with corresponding eigenvalues $\lambda_j = \kappa^2 + \hat{\lambda}_j$ and $\widetilde{\lambda}_j = \widetilde{\kappa}^2 + \hat{\lambda}_j$, respectively. Define $c_j := \widetilde{\tau}^{2/\alpha}\tau^{-2/\alpha}\widetilde{\lambda}_j^{\delta}\lambda_j^{-1}$ for some $\delta>0$. Then, the asymptotic behavior of the eigenvalues $\hat{\lambda}_j$ in \eqref{eq:weyl} shows that $0<c_{-} < c_j < c_{+} < \infty$ if and only if $\delta = \widetilde{\alpha}/\alpha = 1$. In this case we have that $\lim_{j\rightarrow\infty} c_j = (\widetilde{\tau}/\tau)^{2/\alpha}$. This means that $\sum_{j=1}^{\infty}(c_j-1)^2<\infty$ can only hold if $\tau=\widetilde{\tau}$ and $\alpha = \widetilde{\alpha}$. The result then follows from Corollary 3.1 of \citet{bk-measure}. \end{proof} Note that Gaussian measures are either equivalent or orthogonal \citep[][Theorem 2.7.2]{Bogachev1998}. Thus, whenever $\alpha\neq\widetilde{\alpha}$ or $\tau\neq\widetilde{\tau}$, the Gaussian measures are orthogonal. On the other hand, if only $\kappa\neq\widetilde{\kappa}$, the measures are equivalent. This has important consequences for parameter estimation.
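The eigenvalue argument in the proof can be illustrated numerically. The sketch below is our own illustration (not from the paper): it uses the interval Neumann eigenvalues $\hat{\lambda}_j = (\pi j)^2$ as a stand-in for the Weyl growth in \eqref{eq:weyl}. Misspecifying only $\kappa$ gives $c_j - 1 = O(j^{-2})$ and hence a convergent sum of squares, while misspecifying $\tau$ makes $c_j$ converge to a limit different from one, so the partial sums diverge:

```python
import numpy as np

# Stand-in spectrum: interval Neumann eigenvalues hat_lambda_j = (pi*j)^2,
# which share the quadratic Weyl growth of the Kirchhoff-Laplacian spectrum.
def c_seq(j, kappa, tau, kappa_t, tau_t, alpha=1.0):
    lam = kappa**2 + (np.pi * j) ** 2
    lam_t = kappa_t**2 + (np.pi * j) ** 2
    return (tau_t / tau) ** (2.0 / alpha) * lam_t / lam

j = np.arange(1.0, 2.0e5 + 1.0)

# Only kappa misspecified: c_j - 1 = O(j^{-2}), so sum (c_j - 1)^2 stays bounded.
s_kappa = np.sum((c_seq(j, 1.0, 1.0, 3.0, 1.0) - 1.0) ** 2)

# Only tau misspecified: c_j -> (tau_t/tau)^2 != 1, so the partial sums diverge.
s_tau = np.sum((c_seq(j, 1.0, 1.0, 1.0, 2.0) - 1.0) ** 2)
```

The first sum stabilizes quickly, while the second grows linearly in the number of terms, mirroring the dichotomy between equivalence and orthogonality in the proposition.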
It is well-known \citep{Zhang2004} that all parameters of Mat\'ern fields on bounded subsets of $\mathbb{R}^2$ cannot be estimated consistently under infill asymptotics, but that one can estimate the so-called micro-ergodic parameter, which in our case corresponds to $\tau$. The following proposition shows that $\tau$ in the Whittle--Mat\'ern fields on metric graphs can also be estimated consistently. \begin{Proposition} Suppose that $y_1,y_2,\ldots$ is a sequence of observations of $u$ at distinct locations $s_1,s_2,\ldots$ that accumulate at every point in $\Gamma$. Assume that $\alpha$ is known and fix $\kappa$ at some value $\kappa^*$. Let $\tau_n$ denote the value of $\tau$ that maximizes the likelihood $L(y_1,\ldots, y_n; \tau, \kappa^*)$ over $\tau\in\mathbb{R}^+$. Then $\tau_n\rightarrow \tau$, $\mu$-a.s. \end{Proposition} \begin{proof} Let $f_{n,\tau,\kappa}$ be the probability density function of the vector $(u(s_1),\ldots, u(s_n))^\top$ and define $\rho_n = \log f_{n,\tau,\kappa} - \log f_{n,\tau^*,\kappa^*}$. By Proposition~\ref{prop_measure}, the Gaussian measures $\mu(\cdot; \kappa,\tau,\alpha)$ and $\mu(\cdot; \kappa,\tau^*,\alpha)$ are orthogonal if $\tau^*\neq \tau$ and equivalent if $\tau^* = \tau$. Also, by continuity of sample paths (Theorem \ref{thm:weakregularity}), we can apply \cite[Theorem 1, p.100]{gikhmanskorohod}. Therefore, if $\tau^*\neq \tau$, we have that $\rho_n \rightarrow -\infty$. If, on the other hand, $\tau^* = \tau$, then $\rho_n \rightarrow \log C$, where $C$ is the Radon--Nikodym derivative $d \mu(u; \kappa,\tau,\alpha)/ d\mu(u; \kappa^*,\tau,\alpha)$. The result then follows by the same arguments as in the proof of \citet[Theorem 3]{Zhang2004}: to show that $\tau_n \rightarrow \tau$ $\mu$-a.s., it is sufficient to show that for any $\epsilon>0$, there exists an integer $N$ such that $\rho_n < \log C -1$ for all $n>N$ and all $\tau^*$ with $|\tau - \tau^*|>\epsilon$.
This fact follows immediately from the concavity of the log-likelihood function $L_n = \log f_{n,\tau}$. \end{proof} \subsection{Kriging prediction} In this section, we want to characterize the asymptotic properties of linear prediction for $u$ based on misspecified parameters. A sufficient criterion for asymptotic optimality is equivalence of the corresponding Gaussian measures \citep{stein99}, which by Proposition~\ref{prop_measure} means that we obtain asymptotically optimal linear prediction as soon as $\widetilde{\alpha} = \alpha$ and $\widetilde{\tau} = \tau$. However, by \citet{kb-kriging} we know that equivalence of measures is not necessary, and in this section we show that we in fact only need $\widetilde{\alpha} = \alpha$ to obtain asymptotic optimality. To clarify the setup, let $Z$ denote the vector space of all linear combinations of $u$, i.e., elements of the form ${\gamma_1 u(s_1) + \ldots + \gamma_K u(s_K)}$, where $K\in\mathbb{N}$ and $\gamma_j \in \mathbb{R}$, $s_j \in \Gamma$ for all $j\in\{1,\ldots,K\}$. To characterize optimal linear prediction for $u$, we introduce the Hilbert space $\mathcal{H}$ as the closure of $Z$ with respect to the norm $\|\,\cdot\,\|_{\mathcal{H}}$ induced by the inner product \[ \biggl( \sum\limits_{j=1}^{K} \gamma_j u(s_j), \, \sum\limits_{k=1}^{K'} \gamma_k' u(s_k') \biggr)_{\mathcal{H}} := \sum\limits_{j=1}^{K} \sum\limits_{k=1}^{K'} \gamma_j \gamma_k' \pE\bigl[ u(s_j) u(s_k') \bigr]. \] Suppose now that we want to predict $h\in\mathcal{H}$ based on a set of observations $\{y_{nj}\}_{j=1}^n$ of the process. Then, the best linear predictor, $h_n$, is the $\mathcal{H}$-orthogonal projection of $h$ onto the subspace $ \mathcal{H}_n := \operatorname{span}\bigl\{y_{nj} \bigr\}_{j=1}^n$. A relevant question is now what happens if we replace $h_n$ with another linear predictor $\widetilde{h}_n$, which is computed based on a Whittle--Mat\'ern field with misspecified parameters.
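A finite-sample glimpse of why $\tau$ and $\kappa$ play different roles in this question: rescaling the covariance by changing $\tau$ leaves all kriging weights unchanged, whereas changing $\kappa$ alters them. The sketch below is our own illustration (not the authors' code); it uses the $\alpha=1$ exponential covariance on a line segment and ignores the graph topology, and computes the weights of the best linear predictor at the hypothetical location $s_0 = 0.5$:

```python
import numpy as np

# Kriging weights for predicting u(s0) from observations at sites s under the
# exponential covariance r(h) = exp(-kappa*|h|)/(2*kappa*tau^2), i.e. the
# alpha = 1 Whittle--Matern covariance on the line (boundary and graph
# effects are ignored in this sketch).
def kriging_weights(s, s0, kappa, tau):
    S = np.exp(-kappa * np.abs(s[:, None] - s[None, :])) / (2 * kappa * tau**2)
    c = np.exp(-kappa * np.abs(s - s0)) / (2 * kappa * tau**2)
    return np.linalg.solve(S, c)

s = np.array([0.1, 0.4, 0.55, 0.9])
w = kriging_weights(s, 0.5, kappa=2.0, tau=1.0)
w_tau = kriging_weights(s, 0.5, kappa=2.0, tau=3.0)  # tau misspecified
w_kap = kriging_weights(s, 0.5, kappa=5.0, tau=1.0)  # kappa misspecified
```

The scale parameter cancels between the covariance matrix and the cross-covariance vector, so `w` and `w_tau` coincide exactly, while `w_kap` differs.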
To answer this, we assume that the set of observations $\bigl\{ \{ y_{nj} \}_{j=1}^n : n\in\mathbb{N} \bigr\}$ yields $\mu$-consistent kriging prediction, i.e., \begin{equation}\label{eq:ass:Hn-dense} \lim\limits_{n\to\infty} \pE\bigl[ (h_n - h)^2 \bigr] = \lim\limits_{n\to\infty} \| h_n - h \|_{\mathcal{H}}^2 = 0. \end{equation} Following \citep{kb-kriging}, we let $\mathcal{S}^\mu_{\mathrm{adm}}$ denote the set of all admissible sequences of observations which provide $\mu$-consistent kriging prediction. We then have the following result regarding the Whittle--Mat\'ern fields. \begin{Proposition}\label{prop:A-kriging} Let $h_n, \widetilde{h}_n$ denote the best linear predictors of $h\in\mathcal{H}$ based on~$\mathcal{H}_n$~and the measures $\mu(\cdot;\kappa,\tau,\alpha)$ and $\mu(\cdot;\widetilde{\kappa},\widetilde{\tau},\widetilde{\alpha})$. Set $\mathcal{H}_{-n}:=\bigl\{ h\in\mathcal{H}:\pE\bigl[ (h_n - h)^2 \bigr]> 0 \bigr\}$. Then, any of the following statements, \begin{align} \lim_{n\to\infty} \sup_{h\in \mathcal{H}_{-n}} \frac{ \pE\bigl[ ( \widetilde{h}_n - h)^2 \bigr] }{ \pE\bigl[ ( h_n - h)^2 \bigr] } = \lim_{n\to\infty} \sup_{h\in \mathcal{H}_{-n}} \frac{ \widetilde{\pE}\bigl[ ( h_n - h)^2 \bigr] }{ \widetilde{\pE}\bigl[ ( \widetilde{h}_n - h)^2 \bigr] } = 1, \label{eq:prop:A-kriging-1} \end{align} or, for some $c\in\mathbb{R}_+$, \begin{align} \lim_{n\to\infty} \sup_{h\in \mathcal{H}_{-n}} \left| \frac{ \widetilde{\pE}\bigl[ ( h_n - h)^2 \bigr] }{ \pE\bigl[ ( h_n - h)^2 \bigr] } - c \right| = \lim_{n\to\infty} \sup_{h\in \mathcal{H}_{-n}} \left| \frac{ \pE\bigl[ ( \widetilde{h}_n - h)^2 \bigr] }{ \widetilde{\pE}\bigl[ ( \widetilde{h}_n - h)^2 \bigr] } - \frac{1}{c} \right| = 0, \label{eq:prop:A-kriging-2} \end{align} holds for all $\{\mathcal{H}_n\}_{n\in\mathbb{N}} \in \mathcal{S}^\mu_{\mathrm{adm}}$ if and only if $\alpha=\widetilde{\alpha}$. In this case, the constant $c$ in \eqref{eq:prop:A-kriging-2} is $c = (\widetilde{\tau}/\tau)^{2/\alpha}$. 
\end{Proposition} \begin{proof} We use the same notation as in the proof of Proposition~\ref{prop_measure}, and thus have $\mu(\cdot; \kappa,\tau,\alpha) = \pN(0,\tau^{-2} L^{-\alpha})$ and $\mu(\cdot; \widetilde{\kappa},\widetilde{\tau},\widetilde{\alpha}) = \pN(0,\widetilde{\tau}^{-2} \widetilde{L}^{-\widetilde{\alpha}})$, where $L = \kappa^2 + \Delta_\Gamma$ and $\widetilde{L} = \widetilde{\kappa}^2 + \Delta_\Gamma$. Since both measures are centered, we have by \citet{kb-kriging} that necessary and sufficient conditions for any of the statements in \eqref{eq:prop:A-kriging-1} or \eqref{eq:prop:A-kriging-2} are: \begin{enumerate}[i] \item The operators $\tau^{-2} L^{-\alpha}$ and $\widetilde{\tau}^{-2} \widetilde{L}^{-\widetilde{\alpha}}$ have isomorphic Cameron--Martin spaces. \item There exists $c>0$ such that $L^{-\alpha/2}\widetilde{L}^{\widetilde{\alpha}}L^{-\alpha/2} -c^{-1} I$ is a compact operator on $L_2(\Gamma)$. \end{enumerate} As in Proposition~\ref{prop_measure}, define $c_j := \widetilde{\tau}^{2/\alpha}\tau^{-2/\alpha}\widetilde{\lambda}_j^{\delta}\lambda_j^{-1}$ for some $\delta>0$. Then, the asymptotic behavior of the eigenvalues of the Kirchhoff-Laplacian shows that $0<c_{-} < c_j < c_{+} < \infty$ if and only if $\delta = \widetilde{\alpha}/\alpha = 1$. In this case we have that $\lim_{j\rightarrow\infty} c_j = (\widetilde{\tau}/\tau)^{2/\alpha}$. The result then follows from Corollary 3.1 in \citet{bk-measure}. \end{proof} \section{Inference for the Markov subclass}\label{sec:inference} In this section we are interested in practical methods for estimating the parameters of the Whittle--Mat\'ern fields on a graph $\Gamma$ in the Markov case, as well as in performing prediction at unobserved locations. We will consider both the case of direct observations of the random field and the case of observations under Gaussian measurement noise.
That is, we assume that we have observations $\mv{y}= [y_1,\ldots, y_n]$ at locations $s_1,\ldots,s_n$, where $s_i \in \Gamma$, which in the first setting are obtained as $y_i = u(s_i)$ and in the second as $$ y_i |u(\cdot)\sim \mathcal{N}\left( u(s_i), \sigma^2\right). $$ When studying prediction we will consider a set of locations $\mv{s}^*\in\Gamma$ and predict $u(\mv{s}^*) | \mv{y}$ based on the corresponding conditional distribution. We will start by considering inference in the case $\alpha=1$, where we will provide two possible methods for computing the likelihood. The first method is the standard method for latent Gaussian processes, while the second utilizes the fact that one can integrate out certain variables with the help of the Markov property of the exponential covariance function. After this we will extend the method to processes with higher values of $\alpha\in\mathbb{N}$, which is much less straightforward than in the exponential case. \subsection{Inference for the case $\alpha=1$} The simplest approach to inference is to extend the graph $\Gamma$ to include the observation locations $s_1,\ldots,s_n$, which we denote by $\bar{\Gamma}_{\mv{s}}$, and consider the random variable $\bar{\mv{U}} = \left[u(v_1),\ldots, u(v_m) , u(s_1), \ldots, u(s_n)\right]$ obtained by evaluating the Gaussian process at the vertices of the extended graph. In the case of direct observations, the likelihood is then obtained by Corollary~\ref{cor:exp_density}, and in the case of observations under measurement noise we have \begin{align*} \bar{\mv{U}} &\sim \mathcal{N}\left(\mv{0}, \mv{Q}^{-1}\right), \\ \mv{y} | u(\cdot) &\sim \mathcal{N}\left( \bar{\mv{B}}\bar{\mv{U}} , \sigma^2 \mv{I}\right), \end{align*} where $\bar{\mv{B}} = \left[\mv{0}_{n\times m}, \mv{I}_{n \times n} \right]$ and $\mv{Q}$ is the precision matrix defined by \eqref{eq:expprec}.
The resulting log-likelihood is thus \begin{align*} l(\sigma,\tau,\kappa)= \log\left ( \frac{\left|\mv{Q}\right|^{1/2}\sigma^{-n}}{\left| \mv{Q}_p\right|^{1/2}} \right) - \frac{ 1}{2\sigma^2}\mv{y}^T\mv{y} + \frac12 \mv{\mu}^T \mv{Q}_p \mv{\mu}, \end{align*} where $\mv{Q}_{p} = \mv{Q}+ \frac{1}{\sigma^2} \bar{\mv{B}}^T\bar{\mv{B}}$ and $\mv{\mu} = \mv{Q}_{p}^{-1} \frac{\bar{\mv{B}}^T\mv{y} }{\sigma^2} .$ In either case, it is straightforward to perform prediction at unobserved locations by including the prediction locations in the graph, computing the joint density of the observations and the prediction locations, and finally computing the conditional distribution using the standard formulas for conditional densities of Gaussian Markov random fields \citep[see, e.g.,][]{rue2005gaussian}. An alternative method in the case of indirect observations is to utilize the fact that the Gaussian processes on the different edges are independent given $\mv{U} =\left[u(v_1),\ldots, u(v_m)\right] $, and that these processes have exponential covariance functions conditionally on the vertex values. Let $\mv{y}_{e_i}$ denote all observations on edge $e_i$, let $\mv{t}_i$ denote the positions of these observations on the edge, and set $\mv{s}_{i}=[0,\ell_{e_i}]$. Then the observation vectors $\{\mv{y}_{e_i}\}_{i=1}^l$ are independent given $\mv{U}$, with distributions given by $ \mv{y}_{e_i} |u(\cdot) \sim \mathcal{N}\left( \mv{B}_{i}\mv{U}, \mv{\Sigma}_i\right).
$ Here \begin{align*} \mv{B}_{i} &= \mv{r}_1(\mv{t}_{i}, \mv{s}_{i}) \mv{r}_1(\mv{s}_i, \mv{s}_i ) ^{-1} \mv{B}_{v(e_i)},\\ \mv{\Sigma}_i &= \mv{r}_1(\mv{t}_{i},\mv{t}_{i} ) - \mv{r}_1(\mv{t}_{i}, \mv{s}_{i}) \mv{r}_1(\mv{s}_i, \mv{s}_i )^{-1} \mv{r}_1(\mv{t}_{i}, \mv{s}_{i})^\trsp + \sigma^2\mathbf{I} , \end{align*} where $\mv{B}_{v(e_i)}$ is the matrix such that $\mv{B}_{v(e_i)}\mv{U}=\mv{u}(\mv{s}_{i})$ and $ \mv{r}_1(\mv{t}, \mv{s})$ is a matrix with elements $\mv{r}_1(\mv{t}, \mv{s})_{ij} = \Cov\left[u(t_i),u(s_j) \right] = r_1(t_i,s_j)$, where $r_1$ is given by \eqref{eq:matern_cov} with $\nu=1/2$. Note that the matrices do not depend on $\tilde{r}$ in Theorem~\ref{thm:CondDens} since $\tilde{r}$ satisfies Remark \ref{rem:conditional} and, hence, $\mv{u}\left(\mv{t}_{i}\right)|\mv{u}\left(\mv{s}_{i}\right)$ has a conditional exponential covariance structure. The resulting log-likelihood is thus given by \begin{align*} l(\sigma,\tau,\kappa)= \log\left( \frac{\left|\mv{Q}\right|^{1/2}}{\left| \mv{Q}_p\right|^{1/2}\prod_{i=1}^l \left| \mv{\Sigma}_i\right|^{1/2} } \right) + \frac12 \mv{\mu}^T \mv{Q}_p \mv{\mu} - \frac{ 1}{2}\sum_{i=1}^l\mv{y}_i^T\mv{\Sigma}_i^{-1}\mv{y}_i , \end{align*} where $\mv{Q}_p = \mv{Q}+ \sum_{i=1}^l \mv{B}_i^T \mv{\Sigma}_i^{-1} \mv{B}_i$ and $\mv{\mu} = \mv{Q}_p^{-1} \sum_{i=1}^l \mv{B}_i^T\mv{\Sigma}_i^{-1} \mv{y}_i$. \subsection{Inference for the case $\alpha \in \mathbb{N} + 1$} In order to derive the density of the observations in the case $\alpha \in \mathbb{N}+1$ we use the construction of the distribution from Section~\ref{sec:markov}. That is, we consider centered independent Gaussian processes $\mv{u}^{\alpha}_i$ on each edge $e_i$ with covariance given by \eqref{eq:covmod}, and then generate the Kirchhoff boundary condition by \eqref{eq:Akirchoff}.
As noted in Section~\ref{sec:markov} it suffices to derive the density of \begin{align*} \mv{U}^\alpha= [\mv{u}^\alpha(\underbar{e}_1),\mv{u}^\alpha(\bar{e}_1),\mv{u}^\alpha(\underbar{e}_2),\mv{u}^\alpha(\bar{e}_2),\ldots, \mv{u}^\alpha(\underbar{e}_m),\mv{u}^\alpha(\bar{e}_m)]. \end{align*} The classical method for deriving the density under the linear constraint $\mv{A}\mv{U}^\alpha=\mv{0}$ (see \cite{rue2005gaussian}) requires computing $\mv{A} \mv{Q}^{-1}\mv{A}^T$, where $\mv{Q}$ is the precision matrix of $\mv{U}^\alpha$. Using this method would mean that one cannot take advantage of the Markov properties of the process to obtain a computationally efficient method. However, as we will now see, one can use the ideas developed in \citet{bolin2021efficient} to reduce the computational costs significantly. To define the likelihood (the density of the observations given $ \mv{U}^\alpha$) we use the same notation as in the $\alpha=1$ case, and have the same formula $$ \mv{y}_{e_i} |\mv{u}^{\alpha}(\cdot)\sim \mathcal{N}\left( \mv{B}_{i}\mv{u}, \mv{\Sigma}_i\right). $$ Now, to formulate $\mv{B}_{i}$ and $\mv{\Sigma}_i$ we need the matrices \begin{align*} \mv{R}_{\mv{s},\mv{s}}= \Cov\left[ x\left(\mv{s}\right), x\left(\mv{s}\right) \right], \quad \mv{R}_{\mv{t},\mv{t}}= \Cov\left[ \mv{x}^{\alpha}\left(\mv{t}\right), \mv{x}^{\alpha}\left(\mv{t}\right)\right], \,\, \text{and}\,\, \mv{R}_{\mv{s},\mv{t}}= \Cov\left[ x\left(\mv{s}\right), \mv{x}^{\alpha}\left(\mv{t}\right)\right], \end{align*} where $x$ is a Gaussian process with Mat\'ern covariance function \eqref{eq:matern_cov}, where $\nu=\alpha - \nicefrac{1}{2}$ and $d(s,t) = |s-t|$.
Then $\mv{B}_{i} = \mv{R}_{\mv{s},\mv{t}} \mv{R}_{\mv{t},\mv{t}}^{-1} \mv{B}_{v(e_i)}$ and $\mv{\Sigma}_i = \mv{R}_{\mv{s},\mv{s}}- \mv{R}_{\mv{s},\mv{t}}\mv{R}_{\mv{t},\mv{t}}^{-1} \mv{R}_{\mv{s},\mv{t}}^{\trsp} + \sigma^2 \mv{I}.$ This allows us to write the likelihood as \begin{align} \label{piyx} \pi_{Y|\mv{U}^{\alpha}}(\mv{y} | \mv{u}) =& \mathcal{N}\left(\mv{y};\mv{B} \mv{u}, \mv{\Sigma} \right) \end{align} where $ \mv{B} = \operatorname{diag}\left( \{\mv{B}_i \}_{i=1}^m \right)$ and $\mv{\Sigma} = \operatorname{diag}\left( \{\mv{\Sigma}_i \}_{i=1}^m \right)$. Before conditioning on the Kirchhoff boundary condition we have $ \pi_{\mv{U}^{\alpha}} \left(\mv{u} \right) = \mathcal{N}\left(\mv{u};\mv{0}, \mv{Q}^{-1}\right) $ with $\mv{Q}= \operatorname{diag}\left( \{\mv{Q}_i \}_{i=1}^m \right)$, where $\mv{Q}_i$ is given by Corollary \ref{Cor:prec}. In \citet{bolin2021efficient}, a computationally efficient method was derived for computing the likelihood in the above situation when $ \{\mv{\Sigma}_i \}_{i=1}^m$ are diagonal matrices. We cannot directly apply these results since we do not have diagonal matrices, so we will now generalize the method to our situation. The idea is to change the basis of $\mv{u}$ in such a way that the constraints imposed via $\mv{A}$ become non-interacting, since one can then trivially write down the likelihood. This means that we transform $\mv{u}$ to $\mv{u}^* = \mv{T}\mv{u}$ such that the $k$ constraints of $\mv{A}$ act only on $\mv{u}^*_{\A}$, where $\A= \{1,\ldots,k\}$. We denote the remaining, unconstrained, nodes by $\mv{u}^*_{\Ac}$. We let $\mv{T}$ denote the change-of-basis matrix of $\mv{A}$, which can be computed efficiently through Algorithm 2 in \citet{bolin2021efficient}. It should be noted that the constraints in $\mv{A}$ are sparse and non-overlapping in the sense that each constraint acts on only one vertex. This means that the computational complexity of creating $\mv{T}$ through Algorithm 2 in \citet{bolin2021efficient} is linear in the number of vertices.
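On a small dense example one can check that conditioning in a transformed basis reproduces the classical constrained-distribution formulas of \citet{rue2005gaussian}. The sketch below is our own illustration (not the authors' implementation); for simplicity it uses an orthogonal $\mv{T}$ obtained from a QR factorization rather than the sparser matrix produced by Algorithm 2 of \citet{bolin2021efficient}, and conditions a Gaussian vector on a single hard constraint $\mv{A}\mv{u} = \mv{b}$ both ways:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
M = rng.standard_normal((m, m))
Q = M @ M.T + m * np.eye(m)            # SPD precision of u
mu = rng.standard_normal(m)
A = rng.standard_normal((1, m))        # single constraint A u = b
b = np.array([0.7])

# Orthogonal change of basis whose first row spans A (any basis with this
# span works for the check; Algorithm 2 of Bolin et al. yields a sparser T).
T = np.linalg.qr(np.hstack([A.T, rng.standard_normal((m, m - 1))]))[0].T
b_star = (T[0] @ A[0]) * b / (A[0] @ A[0])   # value of (T u)_1 under A u = b

mu_s = T @ mu
Q_s = T @ Q @ T.T                      # precision of u* = T u (T orthogonal)
# Condition the unconstrained block on (T u)_1 = b_star (standard GMRF formula)
mean_rest = mu_s[1:] - np.linalg.solve(Q_s[1:, 1:], Q_s[1:, :1] @ (b_star - mu_s[:1]))
mean_t = T.T @ np.concatenate([b_star, mean_rest])
cov_t = T[1:].T @ np.linalg.inv(Q_s[1:, 1:]) @ T[1:]

# Classical constrained-GMRF formulas for comparison
Sig = np.linalg.inv(Q)
G = Sig @ A.T @ np.linalg.inv(A @ Sig @ A.T)
mean_d = mu + G @ (b - A @ mu)
cov_d = Sig - G @ A @ Sig
```

The two routes give the same (degenerate) conditional mean and covariance; the advantage of the basis-change route is that, with a sparse $\mv{T}$, it never forms $\mv{A}\mv{Q}^{-1}\mv{A}^T$.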
We now show how one can use the change of basis to facilitate computationally efficient inference. The following two theorems provide the expressions needed. \begin{Theorem} \label{Them:piAXsoft} Let $\mv{U} \sim \mathcal{N}\left(\mv{\mu}, \mv{Q}^{-1}\right)$ be an $m$-dimensional Gaussian random variable, where $\mv{Q}$ is a symmetric and positive definite matrix. Let $\mv{Y}$ be an $n$-dimensional Gaussian random variable with $\mv{Y} | \mv{U} \sim \mathcal{N}\left( \mv{B}\mv{U} , \mv{\Sigma}\right),$ where $\mv{\Sigma}$ is a positive definite matrix. Further, let $\mv{A}$ be a $k \times m$ matrix of full rank, take $\mv{b}\in \mathbb{R}^k$, let $\mv{T}$ be the change-of-basis matrix of $\mv{A}$, and define $\mv{Q}^*= \mv{T}\mv{Q}\mv{T}^\trsp$ and $\mv{B}^*=\mv{B}\mv{T}^\top$. Then the density of $\mv{Y}|\mv{AU}=\mv{b}$ is \begin{align*} \pi_c(\mv{y}) = & \frac{|\mv{Q}^*_{\Ac\Ac}|^{\frac{1}{2}} |\mv{\Sigma}|^{-1/2}}{\left(2\pi \right)^{c_0} |\widehat{\mv{Q}}^*_{\Ac\Ac} |^{\frac{1}{2}}} \exp \left(- \frac{1}{2}\left[\mv{y}^{T}\mv{\Sigma}^{-1}\mv{y}+ \widetilde{\mv{\mu}}_{\Ac} ^{*\top} \mv{Q}^*_{\Ac\Ac}\widetilde{\mv{\mu}}^*_{\Ac}- \widehat{\mv{\mu}}_{\Ac}^{*\top}\widehat{\mv{Q}} ^*_{\Ac\Ac} \widehat{\mv{\mu}}^*_{\Ac}\right]\right), \end{align*} where $c_0>0$ is a constant independent of the parameters, $\widehat{\mv{Q}}^*_{\Ac\Ac} = \mv{Q}^*_{\Ac\Ac} + \left(\mv{B}^*_{\Ac}\right)^\top \mv{\Sigma}^{-1}\mv{B}^*_{\Ac}$ with $\left[\mv{b}^*, \mv{\mu}^*\right]=\mv{T}\left[\mv{b}\,\mv{\mu}\right]$, and finally $\widehat{\mv{\mu}}^*_{\Ac} =\left(\widehat{\mv{Q}}^{*}_{\Ac\Ac} \right)^{-1} \left( \mv{Q}^*_{\Ac\Ac} \widetilde{\mv{\mu}}^*_{\Ac} + \left(\mv{B}^*_{\Ac}\right)^\top \mv{\Sigma}^{-1}\mv{y} \right)$ with $$ \widetilde{\mv{\mu}}^* = \begin{bmatrix} \mv{b}^* \\ \mv{\mu}_{\Ac}^* - \left(\mv{Q}_{\Ac\Ac}^{*}\right)^{-1} \mv{Q}^*_{\Ac\A} \left( \mv{b}^* - \mv{\mu}^*_{\A} \right) \end{bmatrix}.
$$ \end{Theorem} \begin{Theorem} \label{Them:piXgby} Let $\mv{U} \sim \mathcal{N}\left(\mv{\mu}, \mv{Q}^{-1}\right)$ be an $m$-dimensional Gaussian random variable, where $\mv{Q}$ is a symmetric positive definite matrix, and suppose that $\mv{Y}$ is an $n$-dimensional Gaussian random variable satisfying $ \mv{Y} | \mv{U} \sim \mathcal{N}\left( \mv{B}\mv{U} , \mv{\Sigma}\right), $ where $\mv{\Sigma}$ is positive definite. Further, let $\mv{A}$ be a $k \times m$ matrix of full rank, $\mv{b}\in \mathbb{R}^k$, and let $\mv{T}$ be the change-of-basis matrix of $\mv{A}$. Then $$\mv{U}| \mv{AU}=\mv{b},\mv{Y}=\mv{y} \sim \mathcal{N}\left( \widehat{\mv{\mu}}, \widehat{\mv{Q}} \right), $$ where $\widehat{\mv{Q}} = \mv{T}^\top_{\Ac,} \widehat{\mv{Q}}^*_{\Ac\Ac} \mv{T}_{\Ac,}$ and $ \widehat{\mv{\mu}} = \mv{T}^\top \scalebox{0.75}{$\begin{bmatrix} \mv{b}^* \\ \widehat{\mv{\mu}}^*_{\Ac} \end{bmatrix}$} $. Here $\widehat{\mv{Q}}^*_{\Ac\Ac}$ and $\widehat{\mv{\mu}}^*_{\Ac}$ are given in Theorem~\ref{Them:piAXsoft}. \end{Theorem} These two theorems directly provide numerically efficient methods for sampling and likelihood inference for our Gaussian random fields on $\Gamma$, since standard sparse matrix methods can be used to evaluate the expressions. The proofs of the theorems are given in Appendix~\ref{app:proofpiX}. \section{A comparison to models based on the graph Laplacian}\label{sec:graph_laplacian} \citet{borovitskiy2021matern} suggested constructing Gaussian Mat\'ern fields on graphs based on the graph Laplacian. As previously mentioned, this method does not define a stochastic process on the metric graph, but only a multivariate distribution at the vertices. In this section we will investigate the connection between this approach and our Whittle--Mat\'ern fields. For simplicity, we assume that $\alpha=1$ so that the precision matrix at the vertices of the Whittle--Mat\'ern field is given by Corollary~\ref{cor:exp_Q}.
The model proposed by \citet{borovitskiy2021matern} for a graph $\Gamma = (V,\mathcal{E})$ is to define a Gaussian process $\mv{u}_V$ on $V$ via $$ (\hat{\kappa}^2\mv{I} + \mv{D} - \mv{W})^{\alpha/2}\mv{u}_V \sim \pN(\mv{0},\mv{I}), $$ where $\mv{W}$ is the adjacency matrix of the graph and $\mv{D}$ is a diagonal matrix with elements $D_{ii} = d_i = \sum_{j=1}^{n_V} W_{i,j}$. For $\alpha=1$ this simply means that $\mv{u}_V$ has a multivariate normal distribution with precision matrix $\hat{\mv{Q}}$ with elements \begin{equation} \hat{Q}_{i,j} = \begin{cases} \hat{\kappa}^2 + d_i & i=j \\ -1 & i \sim j\\ 0 & \text{otherwise}, \end{cases} \end{equation} where $i\sim j$ denotes that the vertices $i$ and $j$ are neighbors in the graph. Comparing this expression to that in Corollary~\ref{cor:exp_Q}, we see that this defines a different distribution than that of the Whittle--Mat\'ern field on $\Gamma$ evaluated on $V$. However, suppose that we take a graph such as that shown in Figure~\ref{fig:cov_example} and subdivide each edge into pieces of length $h$, so that we obtain a new graph $\Gamma_h$ with edges of length $h$. In this graph, most vertices now have degree $2$, and if we then define $\hat{c} = e^{-\kappa h}/(1-e^{-2\kappa h})$ and consider $\hat{c}\hat{\mv{Q}}$ for this graph, where we choose $\hat{\kappa}^2 = \hat{c}^{-1} + 2 e^{-\kappa h} - 2$, we obtain that \begin{equation} \hat{c}\hat{Q}_{i,j} = \begin{cases} Q_{i,i} & \text{$i=j$ and $d_i = 2$} \\ Q_{i,i} + 1 - \frac{d_i}{2} + \hat{c}(d_i-2) (e^{-\kappa h} -1) & \text{$i=j$ and $d_i \neq 2$} \\ Q_{i,j} & i \neq j. \end{cases} \end{equation} Thus, the precision matrix $\hat{c}\hat{\mv{Q}}$ agrees with the precision matrix of the Whittle--Mat\'ern field for all vertices except those with degree different from $2$.
Hence, the construction based on the graph Laplacian can be seen as an approximation of our exact construction, and the difference between the covariance matrices $\mv{\Sigma} = \mv{Q}^{-1}$ and $\hat{\mv{\Sigma}} = \hat{c}^{-1}\hat{\mv{Q}}^{-1}$ will be small if $h$ is small and most vertices have degree $2$. For example, suppose that the graph has only one vertex of degree $3$, whereas the other vertices have degree 2. Then $\hat{c}\hat{\mv{Q}} = \mv{Q} + \mv{v}\mv{v}^T$, where $\mv{v}$ is a vector with all zeros except for one element $$ v_i = \sqrt{1 - \frac{d_i}{2} + \hat{c}(d_i-2) (e^{-\kappa h} -1)}. $$ Then, via the Sherman--Morrison formula we obtain $$ \mv{\Sigma} - \hat{\mv{\Sigma}} = \frac{v_i^2}{1+v_i^2 \Sigma_{ii}} \mv{\Sigma}_i\mv{\Sigma}_i^\top, $$ where $\mv{\Sigma}_i$ is the $i$th column of $\mv{\Sigma}$. Since $v_i\rightarrow 0$ as $h\rightarrow 0$, we see that each element in the difference $\mv{\Sigma} - \hat{\mv{\Sigma}}$ converges to $0$ as $h\rightarrow 0$. Thus, the construction based on the graph Laplacian can in some sense be viewed as a finite difference approximation of the true Whittle--Mat\'ern field. Note, however, that for a non-subdivided graph, the difference between the construction based on the graph Laplacian and the exact Whittle--Mat\'ern field can be large. Also, subdividing the graph in order to get a good approximation would induce a high computational cost. Thus, we cannot think of any reason why one would use the construction based on the graph Laplacian as an approximation of the exact Whittle--Mat\'ern field in practical applications. \section{Discussion}\label{sec:discussion} We have introduced the Gaussian Whittle--Mat\'ern random fields on metric graphs and have provided a comprehensive characterization of their regularity and statistical properties. We argue that this class of models is a natural choice for applications where Gaussian random fields are needed to model data on metric graphs.
Of particular importance here are the Markov cases $\alpha\in\mathbb{N}$. We derived explicit densities for the finite dimensional distributions in the exponential case $\alpha=1$, where we can note that the model has a conditional autoregressive structure of the precision matrix \citep{besag1974spatial}. For the differentiable cases, such as $\alpha=2$, we derived a semi-explicit precision matrix formulated in terms of conditioning on vertex conditions. In both cases, we obtain sparse precision matrices that facilitate the use of the models for large datasets in real applications via computationally efficient implementations based on sparse matrices \citep{rue2005gaussian}. There are several possible extensions of this work. The most interesting in the applied direction is to use the models in log-Gaussian Cox processes to model count data on networks, where it is likely that most applications of this work can be found. For example, log-Gaussian Cox processes on linear networks can be suitable for modeling apartment sales or murders in cities, or accidents on road networks. Another interesting aspect is to consider generalized Whittle--Mat\'ern fields, where we would allow the parameters $\kappa$ and $\tau$ to be non-stationary over the graph. In the Markov case, a natural extension is to expand Theorem \ref{thm:CondDens} to allow for a non-stationary covariance, and also relax condition ii) to allow for a directional relation in the Gaussian random field. In the non-Markovian case, a natural extension is to use a finite element method combined with an approximation of the fractional operator, similarly to the methods by \citet{BKK2020} and \citet{BK2020rational}, to derive numerical approximations of the random fields. Another interesting extension is to consider Type-G extensions of the Gaussian Whittle--Mat\'ern fields, similarly to the Type-G random fields in \citet{bw20}.
An interesting property in such a construction is that the process on each edge could be represented as a subordinated Gaussian process. \bibliographystyle{chicago} \bibliography{graph} \newpage \begin{appendix} \section{Real interpolation of Hilbert spaces}\label{app:interpolation} Let $H_0$ and $H_1$ be two Hilbert spaces, with corresponding inner products $(\cdot,\cdot)_{H_0}$ and $(\cdot,\cdot)_{H_1}$, respectively. Let, also, $H_1\subset H_0$ and for every $u\in H_1$, $\|u\|_{H_1}\geq \| u\|_{H_0}$, where $\|u\|_{H_j}^2 = (u,u)_{H_j}$, $j=0,1$. We say that a Hilbert space $H$ is an intermediate space between $H_0$ and $H_1$ if $H_1\subset H \subset H_0$, with continuous embeddings. We say that $H$ is an interpolation space relative to $(H_0,H_1)$ if $H$ is an intermediate space and for every bounded linear operator $T:H_0\to H_0$ such that $T(H_1)\subset H_1$ and the restriction $T|_{H_1}:H_1\to H_1$ is bounded, we also have that $T(H)\subset H$ and $T|_H:H\to H$ is bounded. We will consider the so-called $K$ method of real interpolation. For further discussions on such methods we refer the reader to \cite{lunardi}, \cite{mclean} or \cite{chandlerwildeetal}. Define the $K$-functional for $t>0$ and $\phi\in H_0$ as \begin{equation}\label{eq:kfunctional} K(t,\phi; H_0,H_1) = \inf\{(\|\phi_0\|_{H_0}^2 + t^2\|\phi_1\|_{H_1}^2)^{1/2}: \phi_j\in H_j, j=0,1, \phi_0+\phi_1= \phi\}, \end{equation} and for $0<\alpha<1$, define the weighted $L_2$ norm by \begin{equation}\label{eq:fracnorm} \|f\|_\alpha = \left(\int_0^\infty |t^{-\alpha} f(t)|^2 \frac{dt}{t}\right)^{1/2}. 
\end{equation} Then we have the following interpolation space for $0<\alpha<1$, \begin{equation}\label{eq:interpolspace} (H_0,H_1)_{\alpha} = \{\phi\in H_0: \|K(\cdot, \phi; H_0,H_1)\|_\alpha <\infty\}, \end{equation} which is a Hilbert space with respect to the inner product $$(\phi,\psi)_{(H_0,H_1)_\alpha} = \int_0^\infty t^{-2\alpha}K(t,\phi;H_0,H_1)K(t,\psi;H_0,H_1) \frac{dt}{t}.$$ Let, now $\widetilde{H}_0$ and $\widetilde{H}_1$ be two additional Hilbert spaces, with Hilbertian norms $\|\cdot\|_{\widetilde{H}_0}$ and $\|\cdot\|_{\widetilde{H}_1 }$, respectively, such that $\widetilde{H}_1\subset\widetilde{H}_0$ and for every $u\in \widetilde{H}_1$, we have $\|u\|_{\widetilde{H}_1} \geq \|u\|_{\widetilde{H}_0}$. We say that $T:H_0\to\widetilde{H}_0$ is a couple map, and we write $T:(H_0,H_1)\to(\widetilde{H}_0,\widetilde{H}_1)$ if $T$ is a bounded linear operator, $T(H_1)\subset \widetilde{H}_1$, and $T|_{H_1}:H_1\to\widetilde{H}_1$ is also a bounded linear operator. Given Hilbert spaces $H$ and $\widetilde{H}$, we say that $(H,\widetilde{H})$ is a pair of interpolation spaces relative to $\left((H_0,H_1), (\widetilde{H}_0, \widetilde{H}_1) \right)$ if $H$ and $\widetilde{H}$ are intermediate spaces with respect to $(H_0,H_1)$ and $(\widetilde{H}_0,\widetilde{H}_1)$, respectively, and if, whenever $T:(H_0, H_1)\to (\widetilde{H}_0, \widetilde{H}_1)$ is a couple map, we have that $T(H) \subset \widetilde{H}$ and that $T|_H:H\to \widetilde{H}$ is a bounded linear operator. The following theorem, which is proved in \cite[Theorem 2.2]{chandlerwildeetal}, tells us that if $H$ and $\widetilde{H}$ are both obtained from the $K$ method, then the pair $(H,\widetilde{H})$ is an interpolation pair: \begin{Theorem}\label{thm:interpolationpairKmethod} Let $0 < \alpha < 1$. Then, $\left((H_0,H_1)_\alpha, (\widetilde{H}_0, \widetilde{H}_1)_\alpha\right)$ is a pair of interpolation spaces with respect to $\left((H_0,H_1), (\widetilde{H}_0, \widetilde{H}_1) \right)$. 
\end{Theorem} \section{The fractional Sobolev space $H^\alpha(I)$}\label{app:sob} In this section we will provide different definitions of the Sobolev space of arbitrary positive order on a bounded interval $I=(a,b)$, and show how they are connected. All the proofs and details of the results presented here can be found in \cite{mclean} and \cite{demengel}. The final result on interpolation of Fourier-based fractional Sobolev spaces can be found in \cite{chandlerwildeetal}. We begin with the definition of the Sobolev spaces of positive integer order. Let $C_c^\infty(I)$ be the space of infinitely differentiable functions on $I$ with compact support. We say that a function $u\in L_2(I)$ has a weak derivative of order $m$ if there exists a function $v\in L_2(I)$ such that for every $\phi \in C_c^\infty(I)$, $$\int_I u(t) \frac{d^m \phi(t)}{dt^m} dt = (-1)^m \int_I v(t)\phi(t)dt.$$ In this case we say that $v$ is the $m$th weak derivative of $u$ and we write $v = D^m u$. The Sobolev space $H^m(I)$ is defined as \begin{equation}\label{eq:sobinteger} H^m(I) = \{u\in L_2(I): D^j u \hbox{ exists for every } j=1,\ldots, m\}, \end{equation} and we have the following characterization of $H^1(I)$: \begin{equation}\label{eq:sobabscont} H^1(I) = \{u\in L_2(I): u \hbox{ is absolutely continuous and } u' \in L_2(I)\}. \end{equation} We will now define the fractional Sobolev-Slobodeckij space of order $0<\alpha<1$. To this end, first we consider the Gagliardo-Slobodeckij semi-norm and the corresponding bilinear form $$ [u]_{\alpha} = \left(\int_I \int_I \frac{|u(t) - u(s)|^2}{|t-s|^{2\alpha+1}} dtds \right)^{1/2}, \quad [u,v]_\alpha = \int_I \int_I \frac{(u(t) - u(s))(v(t)-v(s))}{|t-s|^{2\alpha+1}} dtds. $$ The Sobolev-Slobodeckij space $H^\alpha_S(I) = \{u\in L_2(I): [u]_\alpha <\infty\}$, for ${0<\alpha<1}$, is then a Hilbert space with respect to the inner product $(u,v)_{H_S^\alpha(I)} = (u,v)_{L_2(I)} + [u,v]_\alpha$.
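Since the $K$-functional \eqref{eq:kfunctional} drives all the interpolation arguments in this appendix, it may help to see it computed explicitly in a finite-dimensional analogue. The sketch below is illustrative only: the pair $(\mathbb{R}^n, \|\cdot\|)$ and $(\mathbb{R}^n, \|\Lambda^{1/2}\cdot\|)$, with $\Lambda$ diagonal and entries $\lambda_j \geq 1$, is a hypothetical stand-in for $(H_0, H_1)$, and all variable names are assumptions. Minimizing the splitting $\phi = \phi_0 + \phi_1$ componentwise gives the closed form $K(t,\phi)^2 = \sum_j t^2\lambda_j\phi_j^2/(1+t^2\lambda_j)$, which the code checks against perturbed splittings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite-dimensional analogue of the K-functional: H0 = R^n (Euclidean norm),
# H1 = R^n with norm |Lambda^{1/2} x|; lam >= 1 ensures the H1 norm dominates.
n, t = 5, 0.7
lam = rng.uniform(1.0, 10.0, n)
phi = rng.standard_normal(n)

def split_value(phi1):
    """Objective ||phi0||^2 + t^2 ||Lambda^{1/2} phi1||^2 with phi0 = phi - phi1."""
    phi0 = phi - phi1
    return np.sum(phi0**2) + t**2 * np.sum(lam * phi1**2)

# Componentwise minimization gives phi1_j = phi_j / (1 + t^2 lam_j) and
# K(t, phi)^2 = sum_j t^2 lam_j phi_j^2 / (1 + t^2 lam_j).
phi1_opt = phi / (1 + t**2 * lam)
K_sq = np.sum(t**2 * lam * phi**2 / (1 + t**2 * lam))
assert np.isclose(split_value(phi1_opt), K_sq)

# Every other admissible splitting gives a value at least as large.
for _ in range(200):
    phi1 = phi1_opt + 0.1 * rng.standard_normal(n)
    assert split_value(phi1) >= K_sq - 1e-12
```

Essentially the same componentwise computation, applied to the eigenvalues of an operator, underlies the spectral identification of interpolation spaces used later in the appendix.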
We have the following Sobolev embedding \cite[Theorem 4.57]{demengel}: \begin{Theorem}\label{thm:sobolevembeddingR} For $1/2 < \alpha \leq 1$, we have that $H_S^\alpha(I) \hookrightarrow C^{0,\alpha-1/2}(I)$, where we define $H_S^1(I):= H^1(I)$. \end{Theorem} We also have Fourier-based fractional Sobolev spaces. To define these, let $$ \mathcal{S}(\mathbb{R}) = \{u\in C^\infty(\mathbb{R}): \forall a,b\in\mathbb{N}, \sup_{x\in\mathbb{R}} |x^a D^bu(x)| <\infty \} $$ be the Schwartz space of rapidly decreasing functions, where $C^\infty(\mathbb{R})$ is the space of infinitely differentiable functions on $\mathbb{R}$. Let $\mathcal{S}'(\mathbb{R})$ be the dual of $\mathcal{S}(\mathbb{R})$, and embed $L_2(\mathbb{R})$ in $\mathcal{S}'(\mathbb{R})$ by the action $\<u,v\> = (u,v)_{L_2(\mathbb{R})},$ where $u\in L_2(\mathbb{R})$ and $v\in \mathcal{S}(\mathbb{R})$. Let, for $u\in\mathcal{S}(\mathbb{R})$, $\mathcal{F}(u)$ be the Fourier transform of $u$: $$\mathcal{F}(u)(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-ixt} u(t)dt.$$ We can then extend $\mathcal{F}$ to a map from $\mathcal{S}'(\mathbb{R})$ to $\mathcal{S}'(\mathbb{R})$, see, for instance, \cite{mclean}. For $\alpha\in\mathbb{R}$, we define $H^\alpha_F(\mathbb{R})\subset \mathcal{S}'(\mathbb{R})$ as $H^\alpha_F(\mathbb{R}) = \{ u\in \mathcal{S}'(\mathbb{R}): \|u\|_{H^\alpha_F(\mathbb{R})} <\infty\},$ where $$ \|u\|_{H^\alpha_F(\mathbb{R})} = \left(\int_{\mathbb{R}} (1+|x|^2)^\alpha |\mathcal{F}(u)(x)|^2 dx\right)^{1/2}.$$ Note that the above norm assumes two things: that $\mathcal{F}(u)$ can be identified with a measurable function, and that $(1+|x|^2)^{\alpha/2} |\mathcal{F}(u)(x)|$ belongs to $L_2(\mathbb{R})$. We can now define the Fourier-based fractional Sobolev spaces on a bounded interval $I$. To this end, let $\mathcal{D}(I)=C_c^\infty(I)$, and let $\mathcal{D}'(I)$ be its dual.
Then, we define the Fourier-based Sobolev space of order $\alpha$ as $H_F^\alpha(I) = \{u\in \mathcal{D}'(I): u = U|_I \hbox{ for some } U \in H_F^\alpha(\mathbb{R})\},$ where $U|_I$ is the restriction of the distribution $U$ to the set $I$, see \cite[p. 66]{mclean}. The space $H_F^\alpha(I)$ is a Hilbert space with respect to the norm $\|u\|_{H_F^\alpha(I)} = \inf\{\|U\|_{H_F^\alpha(\mathbb{R})}: U|_I = u\}.$ Recall that two Hilbert spaces $E$ and $F$ are isomorphic, which we denote by $E \cong F$, if $E \hookrightarrow F \hookrightarrow E$. The two definitions of fractional Sobolev spaces are in fact equivalent in this sense \cite[Theorems 3.18 and A.4]{mclean}: \begin{Theorem}\label{thm:equivfracsob} For any bounded interval $I\subset \mathbb{R}$ and any $0\leq \alpha\leq 1$, we have that $H_S^\alpha(I) \cong H_F^\alpha(I),$ where $H_S^1(I) = H^1(I)$ and $H_S^0(I) = L_2(I)$. \end{Theorem} The advantage of the Fourier-based definition of fractional Sobolev spaces is that it is suitable for interpolation. Indeed, we have by \cite[Lemma 4.2 and Theorem 4.6]{chandlerwildeetal}: \begin{Theorem}\label{thm:interpolFourierFrac} Let $I\subset \mathbb{R}$ be a bounded interval, $\alpha_0\leq \alpha_1$, and $\theta\in (0,1)$. Further, set $\alpha = \alpha_0 (1-\theta) + \alpha_1\theta$. Then, $H_F^\alpha(I) \cong (H_F^{\alpha_0}(I), H_F^{\alpha_1}(I))_{\theta}.$ \end{Theorem} Finally, for $0<\alpha<1$ we have the following identification of the interpolation-based Sobolev space $H^\alpha(I) = (L_2(I), H^1(I))_\alpha.$ \begin{Proposition}\label{prp:identificationFracSobint} For $0<\alpha < 1$ and any bounded interval $I$, we have $H^\alpha(I) \cong H_S^\alpha(I).$ \end{Proposition} \begin{proof} By Theorem \ref{thm:equivfracsob}, we have that $H_F^0(I) \cong L_2(I)$ and $H_F^1(I)\cong H^1(I)$.
Thus, it follows from the same arguments as in the proof of Corollary \ref{cor:identificationHalpha} that \begin{equation}\label{eq:interpsobspacfrac} H^\alpha(I) = (L_2(I), H^1(I))_\alpha \cong (H_F^0(I), H_F^1(I))_\alpha. \end{equation} Now, by Theorems \ref{thm:equivfracsob}--\ref{thm:interpolFourierFrac} and \eqref{eq:interpsobspacfrac}, we have that $H^\alpha(I) \cong H_F^\alpha(I) \cong H_S^\alpha(I).$ \end{proof} As an immediate consequence we obtain a Sobolev embedding for the interpolation-based fractional Sobolev space: \begin{Corollary}\label{cor:SobembedInterpFracSob} For $1/2 < \alpha \leq 1$, we have that $H^\alpha(I) \hookrightarrow C^{0,\alpha-1/2}(I).$ \end{Corollary} \begin{proof} Follows directly from Proposition \ref{prp:identificationFracSobint} and Theorem \ref{thm:sobolevembeddingR}. \end{proof} \section{Proofs of the results in Section \ref{sec:properties}}\label{app:proofsregularitysection} We start by introducing some additional notation that we need in this section. For two Hilbert spaces $E$ and $F$, recall that $E\cong F$ if $E \hookrightarrow F \hookrightarrow E$. The space of all bounded linear operators from $E$ to $F$ is denoted by $\mathcal{L}(E;F)$, which is a Banach space when equipped with the norm $\|T\|_{\mathcal{L}(E;F)} = \sup_{f\in E\setminus\{0\}}\frac{\|Tf\|_F}{\|f\|_E}$. We let $E^*$ denote the dual of $E$, which contains all continuous linear functionals $f : E \rightarrow \mathbb{R}$. Note that the eigenfunctions $\{\eig_i\}_{i\in\mathbb{N}}$ of $\Delta_\Gamma$ form an orthonormal basis of $L_2(\Gamma)$ and an eigenbasis for $L$ defined in \eqref{eq:Loperator}. Also note that $\eig_i$ satisfies $a_L(\eig_i,\phi) = \lambda_i (\eig_i,\phi)_{L_2(\Gamma)}$, where $a_L$ is defined in \eqref{eq:bilinearform}. In particular, $\eig_i\in H^1(\Gamma)$ for every $i\in\mathbb{N}$.
The above expression implies that for each $i\in\mathbb{N}$, \begin{equation}\label{intbypartseigenvec} (\eig_i', \phi')_{L_2(\Gamma)} = \sum_{e\in\mathcal{E}} (\eig_i',\phi')_{L_2(e)} = (\lambda_i-\kappa^2) (\eig_i,\phi)_{L_2(\Gamma)}, \end{equation} so if $\lambda_i = \kappa^2$ then $\|\eig_i'\|_{L_2(\Gamma)} = 0$ and $\eig_i$ is constant on each edge $e\in\mathcal{E}$. Since $\eig_i\in H^1(\Gamma)$, we have that $\eig_i$ is continuous and, therefore, $\eig_i$ is constant on $\Gamma$. Thus, we directly have that if $\eig_i$ is non-constant, which means that $\|\eig_i'\|_{L_2(\Gamma)}^2 > 0$, then $\lambda_i > \kappa^2$. This also tells us that the eigenspace associated to the eigenvalue $\lambda_i = \kappa^2$ has dimension 1. Throughout the proofs we will assume, without loss of generality, that $\tau=1$ to keep the notation simple. \begin{Proposition}\label{prp:hdot1} We have the identification $\dot{H}^1 \cong H^1(\Gamma)$. \end{Proposition} \begin{proof} By \cite[Section 2.3]{kostenkoetal} we have $\mathcal{D}(L) = \widetilde{H}^2(\Gamma) \cap C(\Gamma) \cap K(\Gamma),$ where $K(\Gamma) = \{f\in\widetilde{H}^2(\Gamma): \sum_{e\in\mathcal{E}_v} \partial_e f(v) = 0\}$. Therefore, $\mathcal{D}(L)\subset H^1(\Gamma)$. For $f\in \mathcal{D}(L)$, define the energetic norm $$\|f\|_E^2 = a_L(f,f) = (L^{1/2}f,L^{1/2}f)_{L_2(\Gamma)} = (\kappa^2 f,f)_{L_2(\Gamma)} + \sum_{e\in\mathcal{E}}(f',f')_{L_2(e)}.$$ Since $\kappa^2>0$, it is immediate that $\|\cdot\|_E$ is equivalent to $\|\cdot\|_{H^1(\Gamma)}$. Further, by \cite[Theorem 2.8 and Remark 3.1]{kostenkoetal}, $\mathcal{D}(L)$ is dense in $H^1(\Gamma)$, with respect to $\|\cdot\|_{H^1(\Gamma)}$, since $\Gamma$ is compact. In particular, $\mathcal{D}(L)$ is dense in $H^1(\Gamma)$ with respect to $\|\cdot\|_E$. Furthermore, $L$ is linear, coercive and self-adjoint. 
Since $\|\cdot\|_E$ and $\|\cdot\|_{H^1(\Gamma)}$ are equivalent, $\mathcal{D}(L)\subset H^1(\Gamma)$, $H^1(\Gamma)$ is complete, and $\mathcal{D}(L)$ is dense in $H^1(\Gamma)$ with respect to $\|\cdot\|_E$, it follows that $(H^1(\Gamma),\|\cdot\|_E)$ is the energetic space associated to $L$. We refer the reader to \cite[Section 5.3]{zeidler} for the definition of energetic spaces. Now, note that, since $\Gamma$ is a compact metric graph (thus, we have finitely many edges of finite length), it follows by the Rellich--Kondrachov theorem, see, for instance, \cite{evans2010partial}, that $\widetilde{H}^1(\Gamma)$ is compactly embedded in $L_2(\Gamma)$. Therefore, the embedding $H^1(\Gamma) \subset L_2(\Gamma)$ is compact. By the characterization of energetic spaces, see \cite[Example 4, p. 296]{zeidler}, it follows that $(\dot{H}^1,\|\cdot\|_1) = (\mathcal{D}(L^{1/2}), \|\cdot\|_1) = (H^1(\Gamma), \|\cdot\|_E).$ Since $\|\cdot\|_E$ and $\|\cdot\|_{H^1(\Gamma)}$ are equivalent, the result follows. \end{proof} \begin{Corollary}\label{cor:identificationHalpha} We have the identification $\dot{H}^\alpha \cong H^\alpha(\Gamma).$ \end{Corollary} \begin{proof} Begin by observing that, by \cite[Theorem 4.36]{lunardi}, \begin{equation}\label{eq:interpHdot} \dot{H}^\alpha \cong (\dot{H}^0, \dot{H}^1)_\alpha, \end{equation} where $(\dot{H}^0, \dot{H}^1)_\alpha$ is the real interpolation between $\dot{H}^0$ and $\dot{H}^1$, see Appendix \ref{app:interpolation}. In view of \eqref{eq:interpHdot} and the fact that $\dot{H}^0 = L_2(\Gamma)$, it is enough to show that $(L_2(\Gamma), \dot{H}^1)_\alpha \cong H^\alpha(\Gamma).$ By Proposition \ref{prp:hdot1}, $\dot{H}^1 \cong H^1(\Gamma)$.
In particular, there exists some $C>0$ such that for every $u\in H^1(\Gamma)$ we have that $ C^{-1} \|u\|_1 \leq \|u\|_{H^1(\Gamma)} \leq C\|u\|_1.$ Define $\widetilde{C}= 1+C$ and observe that we have $K(t,\phi; L_2(\Gamma), H^1(\Gamma)) \leq \widetilde{C} K(t,\phi; L_2(\Gamma), \dot{H}^1)$ and $K(t,\phi; L_2(\Gamma), \dot{H}^1) \leq \widetilde{C} K(t,\phi; L_2(\Gamma), H^1(\Gamma)),$ where $K(\cdot,\cdot;\cdot,\cdot)$ is the $K$-functional given by \eqref{eq:kfunctional}. The identification $\dot{H}^\alpha\cong H^\alpha(\Gamma)$ now follows directly by \eqref{eq:interpHdot}, \eqref{eq:interpolspace}, \eqref{eq:fracnorm} and by noting that $H^\alpha(\Gamma) = (L_2(\Gamma),H^1(\Gamma))_\alpha$. \end{proof} We will now obtain an analogous identification for higher-order spaces. \begin{Proposition}\label{prp:hdotk} We have the identifications $\dot{H}^2 = \widetilde{H}^2(\Gamma) \cap C(\Gamma) \cap K(\Gamma),$ where $K(\Gamma) = \{f\in\widetilde{H}^2(\Gamma): \sum_{e\in\mathcal{E}_v} \partial_e f(v) = 0\}$, and for $k>2$, $$\dot{H}^k = \left\{f\in \widetilde{H}^k(\Gamma): D^{2\left\lfloor \frac{k}{2}\right\rfloor} f \in C(\Gamma), \forall m=0,\ldots, \left\lfloor\frac{k-2}{2} \right\rfloor, D^{2m} f\in\dot{H}^2 \right\}.$$ Furthermore, the norms $\|\cdot\|_k$ and $\|\cdot\|_{\widetilde{H}^k(\Gamma)}$ are equivalent. \end{Proposition} \begin{proof} The identification of $\dot{H}^2$ follows from \cite[Section 2.3]{kostenkoetal}. We will now check that the embedding $\dot{H}^k\hookrightarrow \widetilde{H}^k(\Gamma)$ is continuous. First, observe that \eqref{intbypartseigenvec} tells us, from the definition of weak derivatives, that $D^2\eig_i = \eig_i''$ exists and is given by ${\eig_i'' = (\kappa^2 - \lambda_i)\eig_i}$. By using the above identity and induction, we have that for every $k,i\in\mathbb{N}$, $D^k\eig_i$ exists.
Furthermore, for $m\in\mathbb{N}$, \begin{equation}\label{diffeigengen} D^{2m+1} \eig_i = (\kappa^2 - \lambda_i)^m D\eig_i = (\kappa^2 - \lambda_i)^m \eig_i'\quad\hbox{and}\quad D^{2m} \eig_i = (\kappa^2 - \lambda_i)^m \eig_i. \end{equation} Now, we have by \eqref{diffeigengen} that $\|D^k \eig_j\|_{L_2(\Gamma)}^2 \leq \lambda_j^k.$ Indeed, if $k$ is odd, then $k=2m+1$ for some non-negative integer $m$, and by \eqref{intbypartseigenvec} $$\|D^k \eig_j\|_{L_2(\Gamma)}^2 = \|D^{2m+1}\eig_j\|_{L_2(\Gamma)}^2 = (\lambda_j - \kappa^2)^{2m} \|\eig_j'\|_{L_2(\Gamma)}^2 = (\lambda_j - \kappa^2)^{2m+1} \leq \lambda_j^k.$$ The case of even $k$ is analogous. Let $\phi\in\dot{H}^k$, so $\phi = \sum_{i\in\mathbb{N}}\alpha_i \eig_i$, where $\alpha_i = (\phi,\eig_i)_{L_2(\Gamma)}$. We have that, for every $N\in\mathbb{N}$, $\sum_{i=1}^N \alpha_i \eig_i\in \widetilde{H}^k(\Gamma)$. Furthermore, $$ \Bigl\|\sum_{i=1}^N \alpha_i \eig_i \Bigr\|_{\widetilde{H}^k(\Gamma)}^2 \leq \sum_{l=0}^k \sum_{i=1}^N \alpha_i^2 \lambda_i^l.$$ Since $\phi\in\dot{H}^k$, the series $\phi = \sum_{i\in\mathbb{N}}\alpha_i \eig_i$ converges in $\widetilde{H}^k(\Gamma)$, whence $\phi\in \widetilde{H}^k(\Gamma)$. Now, observe that, for $j=1,\ldots, k$, the $j$th order weak derivative, $D^j:\widetilde{H}^k(\Gamma)\to L_2(\Gamma)$, is a bounded linear operator. Thus, $D^j\phi = \sum_{i\in\mathbb{N}} \alpha_i D^j \eig_i.$ By the above expression, we have that \begin{align*} \|\phi\|_{\widetilde{H}^k(\Gamma)}^2 &\leq \sum_{l=0}^k \sum_{i\in\mathbb{N}} \lambda_i^l \alpha_i^2\leq \sum_{l=0}^{k-1} \sum_{i\in\mathbb{N}} \lambda_i^l \alpha_i^2 + \sum_{i\in\mathbb{N}} \lambda_i^k \alpha_i^2 \leq \sum_{l=0}^{k} \sum_{i\in\mathbb{N}} \left(\frac{1}{\lambda_1} \right)^{k-l} \lambda_i^k \alpha_i^2 \\ &\leq \left(\sum_{l=0}^k \left(\frac{1}{\lambda_1} \right)^{k-l} \right) \|\phi\|_k^2. \end{align*} This shows the continuity of the embedding. The converse inequality to show the equivalence of norms is simpler.
Let us now show the characterization of $\dot{H}^k$ for $k\geq 3$. Note that $f\in\dot{H}^3$ if and only if $f \in\dot{H}^2$ and $Lf \in \dot{H}^1.$ Now, observe that $Lf = \kappa^2 f - D^2f$ and $\dot{H}^2\subset\dot{H}^1$. Therefore, \begin{align*} f\in \dot{H}^3 &\Leftrightarrow f\in\dot{H}^2\hbox{ and }Lf \in \dot{H}^1 \Leftrightarrow f\in \dot{H}^2\hbox{ and }D^2 f\in \dot{H}^1\\ & \Leftrightarrow f\in\dot{H}^2, D^2f \in \widetilde{H}^1(\Gamma), D^2f\in C(\Gamma) \Leftrightarrow f\in \widetilde{H}^3(\Gamma), f\in\dot{H}^2, D^2f\in C(\Gamma). \end{align*} The characterization for $\dot{H}^k$, $k>3$, follows similarly. \end{proof} Observe that if $j = 2m$, we have \begin{equation}\label{evenorderdiff} D^{2m}\phi = \sum_{i\in\mathbb{N}} \alpha_i (\kappa^2 - \lambda_i)^{m} \eig_i. \end{equation} The above expression allows us to obtain additional regularity for the even-order derivatives of functions belonging to $\dot{H}^k$. \begin{Corollary}\label{cor:evendiffcont} Let $\phi \in \dot{H}^k$. Then, $D^{2m}\phi\in H^1(\Gamma)$ for $m=0,\ldots, \left\lfloor\frac{k-1}{2} \right\rfloor$ and $k\in\mathbb{N}$. In particular, $D^{2m}\phi$ is continuous. \end{Corollary} \begin{proof} Since, by Proposition \ref{prp:hdot1}, $\dot{H}^1\cong H^1(\Gamma)$, it is enough to show that ${D^{2m}\phi \in \dot{H}^1}$. Now, observe that as $\phi\in\dot{H}^k$, we have $\phi = \sum_{i\in\mathbb{N}} \alpha_i \eig_i$, with $\alpha_i = (\phi,\eig_i)_{L_2(\Gamma)}$, and $\sum_{i\in\mathbb{N}} \alpha_i^2\lambda_i^k<\infty$. By assumption, $m\leq (k-1)/2$, hence $2m + 1 \leq k$, which implies that $\sum_{i\in\mathbb{N}} \alpha_i^2 \lambda_i^{2m+1}<\infty$. On the other hand, by \eqref{evenorderdiff}, $D^{2m}\phi = \sum_{i\in\mathbb{N}} \alpha_i (\kappa^2 - \lambda_i)^{m} \eig_i.$ Since $(\kappa^2-\lambda_i)^{2m}\lambda_i \leq \lambda_i^{2m+1}$, we have $D^{2m}\phi \in \dot{H}^1$, which concludes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{regularity1}] First observe that by Remark 2.4 in \cite{BKK2020}, for any $\epsilon>0$, $u$ has sample paths in $\dot{H}^{\alpha-\frac{1}{2}-\epsilon}$. For $\alpha > \frac{3}{2}$, we have directly that for sufficiently small $\epsilon$, $\dot{H}^{\alpha - \frac{1}{2}-\epsilon}\subset \dot{H}^1$. By Proposition \ref{prp:hdot1}, we have $\dot{H}^1\subset H^1(\Gamma)$. So, $\mathbb{P}$-a.s., $u$ belongs to $H^1(\Gamma)$. Furthermore, by Proposition \ref{prp:hdotk}, $\dot{H}^{\alpha - \frac{1}{2}-\epsilon} \subset \dot{H}^{\alpha^\ast}\subset \widetilde{H}^{\alpha^\ast}(\Gamma)$. This shows that for $j=1,\ldots,\alpha^\ast$, $\mathbb{P}$-a.s., $D^ju$ exists. The statement regarding even order derivatives follows directly from Corollary \ref{cor:evendiffcont}, whereas the statement for odd order derivatives follows directly from Proposition \ref{prp:hdotk}. Finally, recall that $\mathcal{W} = \sum_{j\in\mathbb{N}} \xi_j \eig_j$ $\mathbb{P}$-a.s., where $\{\xi_j\}_{j\in\mathbb{N}}$ are independent standard Gaussian random variables on $(\Omega, \mathcal{F},\mathbb{P})$. Further, by \eqref{eq:solspde} and the previous representation of $\mathcal{W}$, we have that $(u,\eig_j)_{L_2(\Gamma)} = \lambda_j^{-\alpha/2} \xi_j$. Therefore, if $1/2 < \alpha \leq 3/2$, we have by \eqref{eq:weyl} that there exist constants $K_1,K_2>0$ that do not depend on $j$, such that $$\|u\|_{1}^2 = \sum_{j\in\mathbb{N}} \lambda_j^{-\alpha+1} \xi_j^2 \geq K_1 \sum_{j\in\mathbb{N}} \lambda_j^{-1/2} \xi_j^2 \geq K_2 \sum_{j\in\mathbb{N}} j^{-1} \xi_j^2.$$ Now, let $Y_j = j^{-1} \xi_j^2 \mathbb{I}\left(j^{-1}\xi_j^2 \leq 1\right) = j^{-1} \xi_j^2 \mathbb{I}\left(\xi_j^2 \leq j\right)$. Observe that $$\E(Y_j) \geq j^{-1} \E\left(\xi_j^2 \mathbb{I}\left(\xi_j^2 \leq 1\right)\right) = \frac{\delta}{j},$$ where $\delta>0$ does not depend on $j$, since $\{\xi_j\}_{j\in\mathbb{N}}$ are i.i.d. standard Gaussian variables.
Hence, $\sum_{j\in\mathbb{N}} \E(Y_j) \geq \sum_{j\in\mathbb{N}} \frac{\delta}{j} = \infty.$ Therefore, by Kolmogorov's three-series theorem \citep[Theorem 2.5.8]{durrettPTE} we have that $\sum_{j\in\mathbb{N}} j^{-1} \xi_j^2$ diverges with positive probability. By Kolmogorov's 0-1 law \citep[Theorem 2.5.3]{durrettPTE}, the series diverges $\mathbb{P}$-a.s. This implies that, $\mathbb{P}$-a.s., $\|u\|_1 = \infty$. Thus, $\mathbb{P}$-a.s., $u$ does not belong to $\dot{H}^1$. By Proposition~\ref{prp:hdot1}, $u$ does not belong to $H^1(\Gamma)$, $\mathbb{P}$-a.s. \end{proof} Due to the complicated geometry of $\Gamma$ we also need the following version of the Kolmogorov--Chentsov theorem. \begin{Theorem}\label{kolmchent} Let $u$ be a random process on a compact metric graph $\Gamma$. Let $M,p>0$ and $q>1$ be such that $$E(|u(x)-u(y)|^p) \leq M d(x,y)^q,\quad x,y\in\Gamma.$$ Then, for any $\beta \in (0,(q-1)/p)$, $u$ has a modification with $\beta$-Hölder continuous sample paths. \end{Theorem} \begin{proof} Our goal is to apply \cite[Theorem 1.1]{kratschmerurusov}. To this end, we first observe that since $\Gamma$ is compact, it is totally bounded. Now, let $N(\Gamma,\eta)$ be the minimal number of balls of radius $\eta>0$ needed to cover $\Gamma$. Let $g$ be the maximum degree of the vertices in $\Gamma$. It is easy to see that we have $$N(\Gamma,\eta) \leq \frac{2\hbox{diam}(\Gamma) g}{\eta},$$ where $\hbox{diam}(\Gamma) = \sup\{d(x,y): x,y\in\Gamma\}$ is the diameter of the graph $\Gamma$. This shows that we can apply \cite[Theorem 1.1]{kratschmerurusov} to any compact graph $\Gamma$, and that we can take $t=1$ in their statement. Therefore, if $u$ is a random field such that $$E(|u(x)-u(y)|^p) \leq M d(x,y)^q,\quad x,y\in\Gamma,$$ then, for any $\beta \in (0,(q-1)/p)$, $(u_x)_{x\in\Gamma}$ has a modification with $\beta$-Hölder continuous sample paths. \end{proof} Our goal now is to prove the Sobolev embedding for compact metric graphs.
To this end, let us begin by introducing some notation. Given $x,y\in\Gamma$, let $[x,y]\subset \Gamma$ be the path connecting $x$ and $y$ with shortest length and for a function $u:\Gamma\to\mathbb{R}$, we denote by $u|_{[x,y]}$ the restriction of $u$ to the path $[x,y]$. We begin by showing that restrictions of functions in the Sobolev space $H^1(\Gamma)$ to a path $[x,y]$ belong to the Sobolev space $H^1([x,y])$, which is isometrically isomorphic to $H^1(I_{x,y})$, where $I_{x,y} = (0,d(x,y))$. \begin{Proposition}\label{prp:restSobSpace} Let $\Gamma$ be a compact metric graph. Let, also, $x,y\in\Gamma$. If $u\in H^1(\Gamma)$, then $u|_{[x,y]}\in H^1([x,y])$. \end{Proposition} \begin{proof} Let $[x,y] = p_1\cup p_2\cup\cdots\cup p_N$, for some $N\in\mathbb{N}$, where $p_i\subset e_i$, with $e_i\in\mathcal{E}$ and $e_i\neq e_j$ if $i\neq j$. Furthermore, let the paths be ordered in such a way that $p_1=[x,v_1]$, $p_N = [v_{N-1},y]$, $p_i\cap p_{i+1} = \{v_i\}$ and $p_i\cap p_j=\emptyset$ if $|i-j|>1$. Let $v_0 = x$ and $v_{N} = y$. By the definition of $H^1(\Gamma)$, we have that $u|_{p_i} \in H^1(p_i)$ for every $i$. Thus, by \eqref{eq:sobabscont}, we have that for every $z\in p_i$, $u(z) = u(v_{i-1}) + \int_{[v_{i-1},z]} u'(t) dt.$ So, it follows from the continuity requirement of $u\in H^1(\Gamma)$ that for every $z\in p_i$, $$u(z) = u(v_{i-1}) + \int_{[v_{i-1},z]} u'(t) dt = u(x) + \sum_{j=0}^{i-2} \int_{[v_j,v_{j+1}]} u'(t)dt + \int_{[v_{i-1},z]} u'(t)dt.$$ It then follows that for every $z\in [x,y]$, $u(z) = u(x) + \int_{[x,z]} u'(t)dt.$ Therefore, by \eqref{eq:sobabscont} again, $u\in H^1([x,y])$. \end{proof} We will now show that the restriction map $u\mapsto u|_{[x,y]}$ is a bounded linear operator from $H^\alpha(\Gamma)$ to $H^\alpha([x,y]) = (L_2([x,y]), H^1([x,y]))_\alpha$: \begin{Proposition}\label{prp:restrboundedFrac} Let $\Gamma$ be a compact metric graph. Let, also, $x,y\in\Gamma$. Let $u\in L_2(\Gamma)$ and define $T^{x,y}(u) = u|_{[x,y]}$.
Then, for $0<\alpha \leq 1$, we have that $T^{x,y}|_{H^\alpha(\Gamma)}$ is a bounded linear operator from $H^\alpha(\Gamma)$ to $H^\alpha([x,y])$. \end{Proposition} \begin{proof} In view of Theorem \ref{thm:interpolationpairKmethod}, it is enough to show that $T^{x,y}$ is a couple map from $(L_2(\Gamma),H^1(\Gamma))$ to $(L_2([x,y]),H^1([x,y]))$. It is immediate that $T^{x,y}(L_2(\Gamma)) \subset L_2([x,y])$ and that for every $u\in L_2(\Gamma)$, $\|u\|_{L_2([x,y])}\leq \|u\|_{L_2(\Gamma)}$, so that $T^{x,y}:L_2(\Gamma)\to L_2([x,y])$ is bounded. Furthermore, by Proposition \ref{prp:restSobSpace}, $T^{x,y}(H^1(\Gamma))\subset H^1([x,y])$. It is clear that $\|u\|_{H^1([x,y])}\leq \|u\|_{H^1(\Gamma)}$ for every $u\in H^1(\Gamma)$, so that $T^{x,y}|_{H^1(\Gamma)}:H^1(\Gamma)\to H^1([x,y])$ is bounded. Therefore, $T^{x,y}$ is a couple map from $(L_2(\Gamma),H^1(\Gamma))$ to $(L_2([x,y]),H^1([x,y]))$. The result thus follows from Theorem \ref{thm:interpolationpairKmethod}. \end{proof} We are now in a position to prove Theorem \ref{thm:sobembedding}: \begin{proof}[Proof of Theorem \ref{thm:sobembedding}] Let $x,y\in\Gamma$ and $I_H^{x,y}: H^\alpha([x,y])\to C^{0,\alpha-\frac{1}{2}}([x,y])$ be the Sobolev embedding on $[x,y]$, which, by Corollary \ref{cor:SobembedInterpFracSob}, is a bounded linear operator. Let, now, $T^{x,y}$ be the restriction operator, $T^{x,y}(u) = u|_{[x,y]}$, where $u\in L_2(\Gamma)$. By Proposition \ref{prp:restrboundedFrac}, we have that the composition $I_H^{x,y} \circ T^{x,y}$ is a bounded linear map from $H^\alpha(\Gamma)$ to $C^{0,\alpha-\frac{1}{2}}([x,y])$. In particular, there exists $0< C_{x,y}<\infty$ such that for every $u\in H^\alpha(\Gamma)$, \begin{equation}\label{eq:holderboundpath} \sup_{t\in [x,y]} |u(t)| + \sup_{\substack{t,s\in [x,y]\\ t\neq s}} \frac{|u(t)-u(s)|}{d(t,s)^{\alpha-\frac{1}{2}}} \leq C_{x,y} \|u\|_{H^\alpha(\Gamma)}. \end{equation} Now, note that for every $x,y\in\Gamma$, $x\neq y$, there exist $v,\widetilde{v}\in\mathcal{V}$ such that $[x,y] \subset [v,\widetilde{v}]$. Therefore, by \eqref{eq:holderboundpath}, $$\frac{|u(x)-u(y)|}{d(x,y)^{\alpha-\frac{1}{2}}} \leq C_{v,\widetilde{v}} \|u\|_{H^\alpha(\Gamma)} \leq \sum_{w\neq\widetilde{w}\in\mathcal{V}} C_{w,\widetilde{w}} \|u\|_{H^\alpha(\Gamma)}.$$ Since $\Gamma$ is compact, the set $\mathcal{V}$ is finite, so that $C := \sum_{w\neq \widetilde{w}\in\mathcal{V}} C_{w,\widetilde{w}} < \infty.$ Therefore, $$[u]_{C^{0,\alpha-\frac{1}{2}}(\Gamma)} = \sup_{\substack{x,y\in\Gamma\\ x\neq y}} \frac{|u(x)-u(y)|}{d(x,y)^{\alpha-\frac{1}{2}}} \leq C \|u\|_{H^\alpha(\Gamma)}.$$ Similarly, $\|u\|_{C(\Gamma)} \leq C \|u\|_{H^\alpha(\Gamma)}$ and therefore, $\|u\|_{C^{0,\alpha-\frac{1}{2}}(\Gamma)} \leq 2C \|u\|_{H^\alpha(\Gamma)}.$ \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:sobembeddingHdot}] First note that if $\alpha>1$, then $\dot{H}^\alpha\hookrightarrow \dot{H}^1$, so it is enough to consider $1/2<\alpha\leq 1$. The result thus follows directly from Theorem \ref{thm:sobembedding}, Proposition \ref{prp:hdot1} and Corollary \ref{cor:identificationHalpha}. \end{proof} We are now in a position to start proving the more refined regularity statement. For this part, our ideas are inspired by the regularity results in \cite{coxkirchner}. We begin with the following key result, which allows us to obtain a continuous process which we will show solves \eqref{eq:spde}. \begin{Lemma}\label{kolmchentbounds} Fix $\alpha > 1/2$ and let $\widetilde{\alpha} = \alpha-1/2$ if $\alpha\leq 1$ and $\widetilde{\alpha}=1/2$ if $\alpha>1$. For $s\in\Gamma$, let $u_0(s) = \mathcal{W}\left((L^{-\alpha/2})^\ast(\delta_s)\right)$, where $\delta_s$ is the Dirac measure concentrated at $s\in\Gamma$.
Then, \begin{equation}\label{holdercontvar} E\left(|u_0(x) - u_0(y)|^2\right) \leq \|L^{-\alpha/2}\|_{\mathcal{L}(L_2(\Gamma),C^{0,\widetilde{\alpha}}(\Gamma))}^2 d(x,y)^{2\widetilde{\alpha}}. \end{equation} Furthermore, for any $0<\gamma<\widetilde{\alpha}$, $u_0$ has a $\gamma$-Hölder continuous modification. \end{Lemma} \begin{proof} We begin by observing that for $\alpha > 1/2$, we have by Theorem \ref{thm:sobembedding} that $$ L^{-\alpha/2}:L_2(\Gamma)\to \dot{H}^{\alpha} \cong H^\alpha(\Gamma) \hookrightarrow C^{0,\widetilde{\alpha}}(\Gamma), $$ where $\widetilde{\alpha} = \alpha - 1/2$ if $\alpha \leq 1$ and $\widetilde{\alpha}=1/2$ if $\alpha>1$. Therefore, ${L^{-\alpha/2}:L_2(\Gamma)\to C^{0,\widetilde{\alpha}}(\Gamma)}$ is a bounded linear operator. Let $(L^{-\alpha/2})^\ast: \left(C^{0,\widetilde{\alpha}}(\Gamma) \right)^\ast \to \left( L_2(\Gamma)\right)^\ast = L_2(\Gamma)$ be its adjoint. Define, for $f\in C^{0,\widetilde{\alpha}}(\Gamma)$, ${\<\delta_x, f\> = \int f d \delta_x = f(x)}$. Then $|\<\delta_x,f\>| = |f(x)| \leq \|f\|_{C^{0,\widetilde{\alpha}}(\Gamma)}$, and therefore $\delta_x \in \left(C^{0,\widetilde{\alpha}}(\Gamma) \right)^\ast$. Furthermore, $(L^{-\alpha/2})^\ast(\delta_x) \in L_2(\Gamma)$. This tells us that $\mathcal{W}\left((L^{-\alpha/2})^\ast(\delta_x)\right)$ is well-defined.
We now use linearity of $\mathcal{W}$, the isometry \eqref{isometryW}, and the considerations above to obtain \begin{align*} &\left(E\left( \left|u_0(x) - u_0(y)\right|^2 \right)\right)^{1/2} = \left(E\left(\left|\mathcal{W}\left((L^{-\alpha/2})^\ast(\delta_x-\delta_y)\right)\right|^2\right)\right)^{1/2}\\ &\quad= \|(L^{-\alpha/2})^\ast(\delta_x-\delta_y)\|_{L_2(\Gamma)} \leq \|(L^{-\alpha/2})^\ast\|_{\mathcal{L}\left(\left(C^{0,\widetilde{\alpha}}(\Gamma)\right)^\ast, L_2(\Gamma)\right)} \|\delta_x-\delta_y\|_{\left(C^{0,\widetilde{\alpha}}(\Gamma)\right)^\ast}\\ &\quad\leq \|L^{-\alpha/2}\|_{\mathcal{L}(L_2(\Gamma),C^{0,\widetilde{\alpha}}(\Gamma))} d(x,y)^{\widetilde{\alpha}}, \end{align*} where, in the last inequality, we used that, for $f\in C^{0,\widetilde{\alpha}}(\Gamma)$, $$|\<\delta_x-\delta_y,f\>| = |f(x)-f(y)| \leq \|f\|_{C^{0,\widetilde{\alpha}}(\Gamma)} d(x,y)^{\widetilde{\alpha}},$$ which implies that $\|\delta_x-\delta_y\|_{\left(C^{0,\widetilde{\alpha}}(\Gamma)\right)^\ast}\leq d(x,y)^{\widetilde{\alpha}}$. This gives us \eqref{holdercontvar}. Now, observe that $u_0(x)-u_0(y) = \mathcal{W}\left((L^{-\alpha/2})^\ast(\delta_x-\delta_y)\right)$ by the linearity of $\mathcal{W}$. Therefore, $u_0(x)-u_0(y)$ is a Gaussian random variable with variance bounded above by $\|L^{-\alpha/2}\|_{\mathcal{L}(L_2(\Gamma),C^{0,\widetilde{\alpha}}(\Gamma))}^2 d(x,y)^{2\widetilde{\alpha}}$. Thus, for any $p>2$, $$E\left(|u_0(x) - u_0(y)|^p\right) \leq E(|U|^p) \|L^{-\alpha/2}\|_{\mathcal{L}(L_2(\Gamma),C^{0,\widetilde{\alpha}}(\Gamma))}^p d(x,y)^{\widetilde{\alpha}p},$$ where $U$ follows a standard normal distribution. Since $p>2$ is arbitrary, the proof is completed by applying the above version of the Kolmogorov--Chentsov theorem (Theorem \ref{kolmchent}) to obtain the existence of a $\gamma$-Hölder continuous modification for any $0<\gamma<\widetilde{\alpha}$.
\end{proof} We will now show that for any $0<\gamma<\widetilde{\alpha}$, a $\gamma$-Hölder continuous version of $u_0$ solves \eqref{eq:spde}: \begin{Lemma}\label{weakregularity} Fix $\alpha > 1/2$ and let $\widetilde{\alpha} = \alpha-1/2$ if $\alpha\leq 1$ and $\widetilde{\alpha}=1/2$ if $\alpha>1$. Also fix $0<\gamma<\widetilde{\alpha}$ and let $u$ be any $\gamma$-Hölder continuous modification of $u_0$, where ${u_0(x) = \mathcal{W}\left((L^{-\alpha/2})^\ast \delta_x\right)}$. Then, for all $\phi\in L_2(\Gamma)$, $(u,\phi)_{L_2(\Gamma)} = \mathcal{W}(L^{-\alpha/2}\phi).$ \end{Lemma} \begin{proof} Recall that given $x,y\in\Gamma$, $[x,y]\subset \Gamma$ is the path connecting $x$ and $y$ with shortest length, and let $(x,y] = [x,y]\setminus\{x\}$. We will begin by considering $x,y\in e$, for some $e\in\mathcal{E}$, and showing that $\int_{(x,y]} u(t) dt = \mathcal{W}(L^{-\alpha/2}1_{(x,y]}).$ Since the set of half-open paths is a semi-ring that generates the Borel sets in $\Gamma$, the simple functions obtained by the above indicator functions are dense in $L_2(\Gamma)$. Now, also observe that \begin{equation} \int_{(x,y]} u(t) dt = \int_{[x,y]} u(t)dt. \end{equation} Therefore, we can take the closed path $[x,y]$. Since $u$ has continuous sample paths, the integral $ \int_{[x,y]} u(t)dt$ is the limit of Riemann sums. Specifically, let $x=x_0,\ldots,x_n=y$ be a partition of $[x,y]$, and $x_i^\ast$ be any point in $[x_i,x_{i+1}]$, $i=0,\ldots,n-1$, where $\max_i l([x_i,x_{i+1}])\to 0$ as $n\to\infty$. Then, by linearity of $\mathcal{W}$, we have \begin{align*} \int_{[x,y]} u(t)dt &= \lim_{n\to\infty} \sum_{i=0}^{n-1} u(x_i^\ast) l([x_i,x_{i+1}]) = \lim_{n\to\infty} \sum_{i=0}^{n-1} \mathcal{W}\left((L^{-\alpha/2})^\ast \delta_{x_i^\ast}\right) l([x_i,x_{i+1}])\\ &= \lim_{n\to\infty} \mathcal{W}\left((L^{-\alpha/2})^\ast\left(\sum_{i=0}^{n-1} \delta_{x_i^\ast} l([x_i,x_{i+1}]) \right) \right).
\end{align*} Now, observe that $C^{0,\widetilde{\alpha}}(\Gamma)\hookrightarrow C^{0,\gamma}(\Gamma)$, since $[f]_{C^{0,\gamma}(\Gamma)} \leq \hbox{diam}(\Gamma)^{\widetilde{\alpha}-\gamma}[f]_{C^{0,\widetilde{\alpha}}(\Gamma)}$. Let us then study the convergence of $\sum_{i=0}^{n-1} \delta_{x_i^\ast} l([x_i,x_{i+1}])$ in $\left(C^{0,\gamma}(\Gamma)\right)^\ast$. We have that for any $f\in C^{0,\gamma}(\Gamma)$ \begin{align*} \Bigl| \<1_{[x,y]},f\> - & \left\< \sum_{i=0}^{n-1} \delta_{x_i^\ast} l([x_i,x_{i+1}]) ,f\right\> \Bigr| \leq \sum_{i=0}^{n-1} \int_{[x_i,x_{i+1}]} |f(t) - f(x_i^\ast)| dt\\ &\leq \|f\|_{C^{0,\gamma}(\Gamma)} \sum_{i=0}^{n-1} \int_{[x_i,x_{i+1}]} d(x_i,x_{i+1})^\gamma dt \leq \|f\|_{C^{0,\gamma}(\Gamma)} d(x,y) \max_{i} d(x_i,x_{i+1})^\gamma. \end{align*} Therefore, $$ \Bigl\|1_{[x,y]} - \sum_{i=0}^{n-1} \delta_{x_i^\ast} l([x_i,x_{i+1}]) \Bigr\|_{\left(C^{0,\gamma}(\Gamma)\right)^\ast} \leq d(x,y) \max_{i} d(x_i,x_{i+1})^\gamma \to 0 $$ as $n\to\infty$, since $\max_i l([x_i,x_{i+1}])\to 0$ as $n\to\infty$. Since $(L^{-\alpha/2})^\ast: \left(C^{0,\gamma}(\Gamma)\right)^\ast\to L_2(\Gamma)$ is bounded, it follows that $$ (L^{-\alpha/2})^\ast \left( \sum_{i=0}^{n-1} \delta_{x_i^\ast} l([x_i,x_{i+1}])\right) \to (L^{-\alpha/2})^\ast 1_{[x,y]} $$ in $L_2(\Gamma)$. Finally, by \eqref{isometryW}, we have that $$ \lim_{n\to\infty} \mathcal{W}\left((L^{-\alpha/2})^\ast\left(\sum_{i=0}^{n-1} \delta_{x_i^\ast} l([x_i,x_{i+1}]) \right) \right) = \mathcal{W}((L^{-\alpha/2})^\ast 1_{[x,y]}) = \mathcal{W}(L^{-\alpha/2}1_{[x,y]}). $$ This proves that $\int_{[x,y]} u(t) dt = \mathcal{W}(L^{-\alpha/2}1_{[x,y]}).$ By linearity, the above identity holds for simple functions with indicator functions of the above type, and from our discussion in the beginning of this proof, the simple functions are dense in $L_2(\Gamma)$. Let $\phi\in L_2(\Gamma)$ be any function and $\phi_n$ be a sequence of simple functions such that $\phi_n\to \phi$ in $L_2(\Gamma)$.
Then, \begin{align*} E\left(\left| (u,\phi)_{L_2(\Gamma)} - \mathcal{W}(L^{-\alpha/2}\phi) \right|^2\right)^{1/2} &\leq E\left(\left|(u,\phi)_{L_2(\Gamma)} - (u,\phi_n)_{L_2(\Gamma)} \right|^2\right)^{1/2}\\ &+ E\left(\left|\mathcal{W}(L^{-\alpha/2}(\phi_n - \phi)) \right|^2\right)^{1/2}. \end{align*} Observe that $E\left(\left|(u,\phi)_{L_2(\Gamma)} - (u,\phi_n)_{L_2(\Gamma)} \right|^2\right)^{1/2}\to 0$ as $n\to\infty$ since $u$ is bounded ($u$ is continuous and $\Gamma$ is compact) and $\phi_n\to \phi$ in $L_2(\Gamma)$. Now, since $L^{-\alpha/2}:L_2(\Gamma) \to L_2(\Gamma)$ is a bounded operator, $L^{-\alpha/2}(\phi_n - \phi)\to 0$ in $L_2(\Gamma)$, and by \eqref{isometryW}, we obtain that $E\left(\left|\mathcal{W}(L^{-\alpha/2}(\phi_n - \phi)) \right|^2\right)^{1/2}\to 0$ as $n\to\infty$. Thus $(u,\phi)_{L_2(\Gamma)} = \mathcal{W}(L^{-\alpha/2}\phi)$. \end{proof} We now obtain the proof of our second regularity result as a consequence of Lemma \ref{weakregularity}: \begin{proof}[Proof of Theorem \ref{thm:weakregularity}] Observe that Lemma \ref{kolmchentbounds} and Lemma \ref{weakregularity} directly imply that $u$ is a solution of $L^{\alpha/2} u = \mathcal{W}$. Therefore, for any $0<\gamma<\frac{1}{2}$, $u$ has a modification with $\gamma$-Hölder continuous sample paths. Now, if $\alpha = 3$, then $L^{3/2} u = \mathcal{W}$ if and only if $v = Lu = L^{-1/2}\mathcal{W}.$ This implies that $v$ solves the SPDE $L^{1/2}v = \mathcal{W}$ and, by Lemma \ref{weakregularity}, $v$ has $\gamma$-Hölder continuous sample paths. Now, note that $v = Lu = \kappa^2 u - u''.$ By applying the first part of this proof directly to $u$, we also have that $u$ has a modification with $\gamma$-Hölder continuous sample paths. Therefore, also $D^2 u = u'' = \kappa^2 u - v$ has a modification with $\gamma$-Hölder continuous sample paths. The general case can be handled similarly, which thus concludes the proof.
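For completeness, the reduction for general odd $\alpha$ can be sketched as follows; this is a sketch in our notation, with $v_m$ an auxiliary quantity introduced only here, assuming the same reduction as in the case $\alpha=3$:

```latex
% Sketch for odd $\alpha = 2k+1$, $k \ge 1$ ($v_m$ is auxiliary notation).
% Set $v_m := L^m u$ for $m = 0, \dots, k$. Then
\[
  L^{(2k+1)/2} u = \mathcal{W}
  \quad\Longleftrightarrow\quad
  L^{1/2} v_k = \mathcal{W},
\]
% so $v_k$ has a modification with $\gamma$-Hölder continuous sample paths,
% and the first part of the proof gives the same for $v_0, \dots, v_{k-1}$.
% Since $v_m = L v_{m-1} = \kappa^2 v_{m-1} - v_{m-1}''$, we obtain
\[
  v_{m-1}'' = \kappa^2 v_{m-1} - v_m, \qquad m = 1, \dots, k,
\]
% so each $D^{2m} u$, $m \le k$, inherits such a modification by iterating
% downward in $m$.
```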
\end{proof} \begin{proof}[Proof of Corollary \ref{cor:covfunccont}] Observe that since $u$ is a modification of $u_0$, we have, by Lemma \ref{kolmchentbounds}, that \begin{equation}\label{contcovfunc} E\left(|u(x) - u(y)|^2\right) = E\left(|u_0(x) - u_0(y)|^2\right) \leq \|L^{-\alpha/2}\|_{\mathcal{L}(L_2(\Gamma),C^{0,\widetilde{\alpha}}(\Gamma))}^2 d(x,y)^{2\widetilde{\alpha}}. \end{equation} Note that \eqref{contcovfunc} implies that $u$ is continuous in $L_2$ at every point $x\in\Gamma$. This implies that the covariance function $\varrho$ is continuous at every $(x,y)\in\Gamma\times\Gamma$. \end{proof} We will now show that the covariance operator is an integral operator with kernel given by the covariance function (this follows from standard theory of $L_2$ processes, but we prove it here for completeness). \begin{Lemma}\label{prp:covintegralrep} Let $\alpha > 1/2$ and $\mathcal{C}= L^{-\alpha}$ be the covariance operator of $u$, where $u$ is the solution of $L^{\alpha/2} u = \mathcal{W}$. Then, $$(\mathcal{C}\phi)(y) = (\varrho(\cdot, y), \phi)_{L_2(\Gamma)} = \sum_{e\in\mathcal{E}} \int_e \varrho(x,y)\phi(x)dx,$$ where $\varrho$ is the covariance function of $u$. \end{Lemma} \begin{proof} Let $T:L_2(\Gamma)\to L_2(\Gamma)$ be the integral operator with kernel $\varrho$: $$(T\phi)(y) = (\varrho(\cdot, y), \phi)_{L_2(\Gamma)} = \sum_{e\in\mathcal{E}} \int_e \varrho(x,y)\phi(x)dx.$$ Now, fix $\alpha > 1/2$ and recall that, by Corollary \ref{cor:covfunccont}, $\varrho$ is continuous. Since $\Gamma$ is compact, $\varrho$ is bounded, say by $K>0$.
Further, since $\Gamma$ has finite measure, $L_2(\Gamma)\subset L_1(\Gamma)$, so for any $\phi,\psi\in L_2(\Gamma)$, we have, by the Cauchy-Schwarz inequality, \begin{align*} \sum_{e,\widetilde{e}\in\mathcal{E}} \int_{\widetilde{e}}\int_{e} \pE(|u(x)u(y) \phi(x)\psi(y)|) dx dy &\leq \sum_{e,\widetilde{e}\in\mathcal{E}} \int_{\widetilde{e}}\int_{e} \sqrt{\varrho(x,x)\varrho(y,y)} |\phi(x)| |\psi(y)| dxdy\\ &\leq K \|\phi\|_{L_1(\Gamma)} \|\psi\|_{L_1(\Gamma)} <\infty. \end{align*} Therefore, by Fubini's theorem, \begin{align*} (T\phi, \psi)_{L_2(\Gamma)} &= \sum_{e,\widetilde{e}\in\mathcal{E}} \int_{\widetilde{e}} \int_{e} \varrho(x,y) \phi(x) dx\psi(y) dy = \sum_{e,\widetilde{e}\in\mathcal{E}} \int_{\widetilde{e}} \int_{e} \pE(u(x)u(y)) \phi(x) dx\psi(y) dy \\ &= \sum_{e,\widetilde{e}\in\mathcal{E}} \pE\left( \int_{\widetilde{e}} \left( \int_e u(x) \phi(x) dx\right) u(y) \psi(y) dy \right)\\ & = \sum_{e,\widetilde{e}\in\mathcal{E}} \pE\left((u,\phi)_{L_2(e)}(u,\psi)_{L_2(\widetilde{e})}\right) = \pE\left((u,\phi)_{L_2(\Gamma)} (u,\psi)_{L_2(\Gamma)} \right) = (\mathcal{C} \phi,\psi)_{L_2(\Gamma)}. \end{align*} This shows that the covariance operator is the integral operator with kernel $\varrho(\cdot,\cdot)$. \end{proof} Proposition \ref{cor:mercercov} is now a direct consequence of the fact that the covariance operator is an integral operator. \begin{proof}[Proof of Proposition \ref{cor:mercercov}] It is a direct consequence of Lemma \ref{prp:covintegralrep}, Corollary \ref{cor:covfunccont} and Mercer's theorem \citep{Steinwart2012}. Furthermore, since $\Gamma$ has finite measure, it follows by the dominated convergence theorem that the series also converges in $L_2(\Gamma\times\Gamma)$. \end{proof} We are now in a position to prove the Karhunen-Lo\`eve expansion for the solution of \eqref{eq:spde}: \begin{proof}[Proof of Proposition \ref{prp:KLexp}] First, we define the random variables $\xi_i = \lambda_i^{\alpha/2} (u, \eig_i)_{L_2(\Gamma)}$.
By Lemma \ref{weakregularity}, we have that $\xi_i = \lambda_i^{\alpha/2} \mathcal{W}(L^{-\alpha/2}\eig_i) = \mathcal{W}(\eig_i).$ Therefore, $\xi_i$ are centered Gaussian random variables. Furthermore, by \eqref{isometryW}, we have that $\pE(\xi_i^2) = 1$ and if $i\neq j$, $\pE(\xi_i \xi_j) = (\eig_i,\eig_j)_{L_2(\Gamma)} = 0.$ Therefore, by the Gaussianity of $\xi_i$, $\{\xi_i\}$ is a sequence of independent standard Gaussian random variables. Therefore, for every $s\in\Gamma$, \begin{align*} \pE\left( |u(s) - u_n(s)|^2\right) &= \pE(u_n(s)^2) - 2 \pE(u_n(s) u(s)) + \pE(u(s)^2)\\ &= \sum_{i=1}^n \frac1{\lambda_i^{\alpha}} \eig_i(s)^2 - 2\sum_{i=1}^n \frac1{\lambda_i^{\alpha/2}}\pE(u(s)\xi_i) \eig_i(s) + \varrho(s,s). \end{align*} Now, by Fubini's theorem (with a similar justification as the one in the proof of Lemma~\ref{prp:covintegralrep}) and the fact that $\varrho$ is the kernel of the covariance operator $L^{-\alpha}$, we have that \begin{align*} \pE(u(s)\xi_i) &= \sum_{e\in\mathcal{E}} \int_e \lambda_i^{\alpha/2}\pE(u(s) u(t)) \eig_i(t) dt = \lambda_i^{\alpha/2}\sum_{e\in\mathcal{E}} \int_e \varrho(s,t)\eig_i(t)dt\\ &= \lambda_i^{\alpha/2}(L^{-\alpha} \eig_i)(s) = \lambda_i^{-\alpha/2} \eig_i(s). \end{align*} Thus, by the previous computations and Proposition \ref{cor:mercercov}, we have that $$ \lim_{n\to\infty} \sup_{s\in\Gamma} \pE\left( |u(s) - u_n(s)|^2\right) = \lim_{n\to\infty} \sup_{s\in\Gamma} \left( \varrho(s,s) - \sum_{i=1}^n \frac1{(\kappa^2 + \hat{\lambda}_i)^{\alpha}} \eig_i(s)^2 \right) = 0.$$ \end{proof} Next, we will prove the rate of convergence of the Karhunen-Lo\`eve expansion. \begin{proof}[Proof of Proposition~\ref{prp:rateKL}] First, note that by Proposition \ref{prp:KLexp} combined with the fact that $\Gamma$ has finite measure, it follows from the dominated convergence theorem that $u_n$ converges to $u$ in the $L_2(\Omega,L_2(\Gamma))$ norm. To simplify the notation, we write $L_2 = L_2(\Omega,L_2(\Gamma))$.
Let $N>n$ and observe that \begin{align*} \left\| u_N - u_n\right\|_{L_2}^2 &= \Bigl\| \sum_{j=n+1}^{N} \xi_j (\kappa^2 + \hat{\lambda}_j)^{-\alpha/2} \eig_j \Bigr\|_{L_2}^2\\ &= \sum_{e\in\mathcal{E}} \int_e \pE\left( \sum_{i=n+1}^N \sum_{j=n+1}^N (\kappa^2 + \hat{\lambda}_i)^{-\alpha/2}(\kappa^2 + \hat{\lambda}_j)^{-\alpha/2} \xi_i \xi_j \eig_i(s) \eig_j(s)\right) ds \\ &= \sum_{i=n+1}^N (\kappa^2 + \hat{\lambda}_i)^{-\alpha} \pE(\xi_i^2) \sum_{e\in\mathcal{E}} \int_e \eig_i(s)^2ds = \sum_{i=n+1}^N (\kappa^2 + \hat{\lambda}_i)^{-\alpha}. \end{align*} Since we have convergence of $u_N$ to $u$ in the $L_2$-norm, we obtain, by the above expression, the Weyl asymptotics \eqref{eq:weyl}, and the integral test for series, that \begin{align*} \left\| u - u_n\right\|_{L_2}^2 &= \lim_{N\to\infty} \left\| u_N - u_n\right\|_{L_2}^2 = \lim_{N\to\infty} \left\| \sum_{j=n+1}^{N} \xi_j (\kappa^2 + \hat{\lambda}_j)^{-\alpha/2} \eig_j \right\|_{L_2}^2\\ &= \lim_{N\to\infty} \sum_{j=n+1}^N (\kappa^2 + \hat{\lambda}_j)^{-\alpha} = \sum_{j=n+1}^\infty (\kappa^2 + \hat{\lambda}_j)^{-\alpha} \leq C \sum_{j=n+1}^\infty j^{-2\alpha} \leq \frac{C}{n^{2\alpha-1}}. \end{align*} \end{proof} Finally, we prove that the solution of $L^k u = \mathcal{W}$ satisfies the Kirchhoff boundary conditions: \begin{proof}[Proof of Proposition \ref{prp:oddorderderiv}] Let $\alpha=2$ and $u$ be a solution of $L^{\alpha/2}u=Lu = \mathcal{W}$. Then, by Theorem \ref{regularity1}, $Du$ exists. Furthermore, the restriction of $u$ to an edge $e$ is a solution of $L_e u = \mathcal{W}_e$, where $L_e$ and $\mathcal{W}_e$ are the restrictions of $L$ in \eqref{eq:Loperator} and the white noise, respectively, to functions with support within the edge $e$. See, also, Remark \ref{rem:localprec}.
Now, observe that since $u\in \dot{H}^1\cong H^1(\Gamma)$ and the derivative is a bounded operator in $H^1(\Gamma)$, we have that $Du = \sum_{i\in\mathbb{N}} \xi_i \lambda_i^{-1} D\eig_i$, where $\{\xi_i\}_{i\in\mathbb{N}}$ are independent standard Gaussian random variables, $\{\eig_i\}_{i\in\mathbb{N}}$ are the eigenvectors of $L$ and $\{\lambda_i\}_{i\in\mathbb{N}}$ are the corresponding eigenvalues. Further, by standard regularity theory, we have that for each edge $e$ and each $i\in\mathbb{N}$, $\eig_i\in C^{\infty}(e)$; in particular, $D\eig_i$ is continuous on $e$. Now, by combining \cite[Proposition 4.5]{bk-measure} with our proofs of Theorem \ref{thm:weakregularity} and Lemma \ref{kolmchentbounds}, together with the fact that on each edge $e$, we can write $L_e = (\kappa-D)(\kappa+D)$, with $(\kappa+D)^\ast = (\kappa-D)$, we obtain that the restriction of $Du$ to each edge $e$ is continuous. In the same fashion, we obtain that the restriction of the covariance function of $Du$ to each edge $e$ is continuous. Now, observe that the expansion $Du = \sum_{i\in\mathbb{N}} \xi_i \lambda_i^{-1} D\eig_i$ tells us that the covariance function of $Du$ is given by \begin{equation}\label{eq:covfuncDiff} \widetilde{\varrho}(t,s) = \sum_{i=1}^\infty \lambda_i^{-2} D\eig_i(t) D\eig_i(s). \end{equation} In particular, $D\eig_i$ is an eigenfunction of the covariance operator of $Du$, which is the integral operator with kernel $\widetilde{\varrho}$, associated to the eigenvalue $\lambda_i^{-1}$: \begin{equation}\label{eq:covfuncDiffEigenfunc} \sum_{e\in\mathcal{E}} \int_e \widetilde{\varrho}(s,t) D\eig_i(t)dt = \lambda_i^{-1} D\eig_i(s). \end{equation} As noted above, $D\eig_i$ is continuous on each edge $e$.
Also, observe that for each edge $e$, with $t,s\in e$, we have \begin{align*} \sum_{i=1}^\infty \lambda_i^{-2} |D\eig_i(t) D\eig_i(s)| &\leq \frac{1}{2} \left(\sum_{i=1}^\infty \lambda_i^{-2} |D\eig_i(t)|^2 \right) + \frac{1}{2} \left(\sum_{i=1}^\infty\lambda_i^{-2}|D\eig_i(s)|^2 \right)\\ &\leq \frac{1}{2} \widetilde{\varrho}(t,t) + \frac{1}{2} \widetilde{\varrho}(s,s) \leq \sup_{x\in e} \widetilde{\varrho}(x,x). \end{align*} The continuity of $\widetilde{\varrho}$ on each edge $e$, together with the fact that each edge is compact, implies that the supremum in the expression above is finite. We can use the above bound to proceed as in the proof of Mercer's theorem, for example in \cite[theorem on p.245]{riesznagy}, and conclude that the convergence of the series \eqref{eq:covfuncDiff} is absolute and uniform on each edge $e$. Now, we follow our proof of Proposition \ref{prp:KLexp} to obtain that for each $s\in\Gamma$, \begin{align*} \pE\left( \left|Du(s) - \sum_{i=1}^n \xi_i \lambda_i^{-1} D\eig_i(s)\right|^2\right) &= \pE\left( \left(\sum_{i=1}^n \xi_i \lambda_i^{-1} D\eig_i(s)\right)^2\right)\\ &\hspace{-0.2in} - 2 \pE\left(Du(s)\sum_{i=1}^n \xi_i \lambda_i^{-1} D\eig_i(s)\right) + \pE(Du(s)^2)\\ &\hspace{-0.2in}= \sum_{i=1}^n \lambda_i^{-2}D\eig_i(s)^2 - 2\sum_{i=1}^n \lambda_i^{-1} \pE(Du(s)\xi_i) D\eig_i(s) + \widetilde{\varrho}(s,s). \end{align*} Now, we use that $\{D\eig_i\}_{i\in\mathbb{N}}$ is orthogonal in $L_2(\Gamma)$ with $\|D\eig_i\|_{L_2(\Gamma)}^2 = \lambda_i$ and \eqref{eq:covfuncDiffEigenfunc} to obtain that $$\pE(Du(s) \xi_i) = \sum_{e\in\mathcal{E}} \int_e \widetilde{\varrho}(s,t) D\eig_i(t) dt = \lambda_i^{-1} D\eig_i(s).$$ Therefore, we can use the uniform convergence of the series \eqref{eq:covfuncDiff} on each edge $e$ to conclude that \begin{equation}\label{eq:klderiv} \lim_{n\to\infty} \sup_{s\in e} \pE\left(\left| Du(s) - \sum_{i=1}^n \xi_i \lambda_i^{-1} D\eig_i(s)\right|^2 \right) = 0.
\end{equation} Now, we have that $\eig_i\in\mathcal{D}(L)$, $i=1,2,\ldots,$ and from Proposition \ref{prp:hdotk}, we have that for each $i$, $\sum_{e\in\mathcal{E}_v} \partial_e \eig_i(v)=0$. Since this sum is finite, we obtain from \eqref{eq:klderiv} that, for each $v\in\mathcal{V}$, $$ \pE\left(\left| \sum_{e\in\mathcal{E}_v} \partial_e u(v) \right|^2\right) = \lim_{n\to\infty} \pE\left(\left| \sum_{e\in\mathcal{E}_v} \left(\partial_e u(v) - \sum_{i=1}^n \xi_i \lambda_i^{-1} \partial_e \eig_i(v)\right)\right|^2 \right) = 0.$$ Therefore, $\sum_{e\in\mathcal{E}_v} \partial_e u(v) = 0.$ Similarly, for $\alpha=2k$, where $k$ is a positive integer, we have that the odd-order directional derivatives of the solution $u$ of $L^{\alpha/2} u = L^k u = \mathcal{W}$, satisfy the Kirchhoff boundary condition, $\sum_{e\in\mathcal{E}_v} \partial_e^{2m+1} u(v) = 0$, where $m=0,\ldots,k-1$. \end{proof} Let us now connect $L_2(\Omega)$-derivatives with weak derivatives. \begin{proof}[Proof of Corollary \ref{cor:L2diffweakdiff}] Since $\alpha>3/2$, we have by Theorem \ref{regularity1} that the weak derivative $u'$ exists. Given $x,y\in\Gamma$, recall that we denote by $[x,y]$ the shortest path on $\Gamma$ connecting $x$ and $y$. By Proposition \ref{prp:restSobSpace}, given $x,y\in\Gamma$, $$u(y)-u(x) = \int_{[x,y]} u'(t)dt.$$ By Proposition \ref{prp:oddorderderiv}, for every edge $e$, the covariance function of $u'$ is continuous on $e\times e$, that is, the function $(x,y)\mapsto \E(u'(x)u'(y))$ is continuous on $e\times e$. Now, it follows from the fundamental theorem of calculus that for every $x\in e$, with the limit being taken as a one-sided limit if $x$ is a boundary point, we have \begin{equation}\label{eq:lebdifthmL2diffpart1} \lim_{y\to x} \frac{1}{d(x,y)} \int_{[x,y]} \E(u'(x)u'(t)) dt = \E(u'(x)^2), \end{equation} and \begin{equation}\label{eq:lebdifthmL2diffpart2} \lim_{y\to x} \frac{1}{d(x,y)} \int_{[x,y]} \E(u'(t)^2)dt = \E(u'(x)^2).
\end{equation} Now, by Jensen's inequality, with respect to the measure $\frac{dt}{d(x,y)},$ along with Fubini's theorem, we obtain that \begin{align*} \E\left[\left(\frac{1}{d(x,y)} \right.\right.&\left.\left.\int_{[x,y]} \left(u'(t) - u'(x)\right) dt\right)^2\right] \leq \frac{1}{d(x,y)} \int_{[x,y]} \E((u'(t) - u'(x))^2)dt \\ &= \E(u'(x)^2)+\frac{1}{d(x,y)} \int_{[x,y]} \E(u'(t)^2) dt - \frac{2}{d(x,y)} \int_{[x,y]} \E(u'(t)u'(x)) dt. \end{align*} Therefore, it follows by \eqref{eq:lebdifthmL2diffpart1}, \eqref{eq:lebdifthmL2diffpart2} and the previous computation that, for every $x\in e$, $$\lim_{y\to x} \E\left[\left(\frac{1}{d(x,y)} \int_{[x,y]} \left(u'(t) - u'(x)\right) dt\right)^2\right] = 0.$$ Now, since $\frac{u(y)-u(x)}{d(x,y)} = \frac{1}{d(x,y)} \int_{[x,y]} u'(t)dt$, we have that for every edge $e$ and every $x\in e$, $u'(x)$ is the $L_2(\Omega)$-derivative of $u$ at $x$. This shows that $u$ is $L_2(\Omega)$ differentiable at every point of every edge and that its $L_2(\Omega)$-derivative agrees with the weak derivative. The result for higher orders follows by induction. This concludes the proof. \end{proof} \section{The Markov property}\label{app:Markovprop} In this section we will prove that the random field $u$, given by the solution of \eqref{eq:spde}, is a Markov random field, provided that $\alpha\in\mathbb{N}$. We begin by providing some definitions and results regarding Markov properties of random fields. We will follow \cite{Rozanov1982Markov}. Let us begin with a very general and abstract definition of random fields. Observe that Definition \ref{def:RandomField} is a natural generalization of Definition~\ref{def:SigmaAlgebraRandomField}. \begin{Definition}\label{def:RandomField} Let $(\Omega,\mathcal{F})$ be a measurable space and let $\Gamma$ be a compact metric graph.
A random field on $\Gamma$, with respect to $(\Omega,\mathcal{F})$, is a family of $\sigma$-algebras, $$ \{\mathcal{F}(S): S\subset \Gamma, S \hbox{ is open}\}, $$ such that for every open $S$, $\mathcal{F}(S)\subset\mathcal{F}$ and for every pair of open sets $S,\widetilde{S}\subset \Gamma$, we have $\mathcal{F}(S\cup\widetilde{S}) = \sigma(\mathcal{F}(S)\cup \mathcal{F}(\widetilde{S})).$ \end{Definition} Similarly, by using the above definition, we can generalize Definition \ref{def:MarkovPropertyField}: \begin{Definition}\label{def:MarkovProperty} Let $(\Omega,\mathcal{F},\bbP)$ be a probability space, $\Gamma$ be a compact metric graph and $\{\mathcal{F}(S): S\subset \Gamma, S \hbox{ is open}\}$ be a random field. We say that $\{\mathcal{F}(S): S\subset \Gamma, S \hbox{ is open}\}$ is Markov if for every open set $S$ there exists $\widetilde{\varepsilon}>0$ such that for every $0<\varepsilon< \widetilde{\varepsilon}$, $\mathcal{F}((\partial S)_\varepsilon)$ splits $\mathcal{F}(S)$ and $\mathcal{F}(\Gamma\setminus\overline{S})$. \end{Definition} Observe that when we consider the random field $\sigma$-algebra induced by the random field $u$, Definitions \ref{def:MarkovProperty} and \ref{def:MarkovPropertyField} coincide. In this section, the following class of random fields will be very useful to us: \begin{example}\label{exm:SpaceRandomField} Let $(\Omega,\mathcal{F},\bbP)$ be a probability space. Let also $\Gamma$ be a compact metric graph and suppose that for every open set $S\subset \Gamma$, we have a closed linear space $H(S) \subset L_2(\Omega,\mathcal{F},\bbP)$ such that for every pair of open sets $S$ and $\widetilde{S}$, we have $$H(S\cup \widetilde{S}) = \overline{\textrm{span}(H(S)\cup H(\widetilde{S}))}.$$ Define for every open set $S\subset \Gamma$, $\mathcal{F}(S) = \sigma(w: w\in H(S)).$ Then, $\{\mathcal{F}(S): S\subset \Gamma, S\hbox{ is open}\}$ is a random field. We will also call $\{H(S): S\subset \Gamma, S\hbox{ is open}\}$ a random field.
\end{example} We will say that a random field $\{H(S): S\subset \Gamma, S\hbox{ is open}\}$ is Gaussian if for every open $S$, every $w\in H(S)$ is a Gaussian random variable with mean zero. We will now give two concrete examples of Gaussian random fields. The reader should compare Example \ref{exm:functionrandomfields} below with Definition \ref{def:SigmaAlgebraRandomField} and observe that they induce the same random field. \begin{example}\label{exm:functionrandomfields} Let $\Gamma$ be a compact metric graph and $\{u(t): t\in \Gamma\}$ be a Gaussian random function, that is, for every $t\in \Gamma$, $u(t)$ is a random variable and the finite dimensional distributions are Gaussian. Then, $$H(S) = \overline{\textrm{span}(u(w): w\in S)},$$ where $S$ is open, forms a Gaussian random field. Indeed, each element in $\textrm{span}(u(w): w\in S)$ is Gaussian. Thus, the elements of $H(S)$ are $L_2(\bbP)$-limits of Gaussian variables. Hence each element in $H(S)$ is Gaussian, see, e.g., \cite[Theorem 1.4.2, p.39]{ashtopics}. \end{example} We can also consider random fields generated by linear functionals acting on some Banach space of functions defined in $\Gamma$ (which are also called generalized functions): \begin{example}\label{exm:genfunctionrandomfields} Let $\Gamma$ be a compact metric graph and let $X(\Gamma)$ be some Banach space of real-valued functions defined on $\Gamma$. Let $u$ be a linear functional on $X(\Gamma)$, that is, for every $\phi\in X(\Gamma)$, $\<u,\phi\>$ is a real-valued random variable and for $\phi,\psi\in X(\Gamma)$ and $a,b\in\mathbb{R}$, $$\<u, a \phi + b\psi\> = a\<u,\phi\> + b\<u,\psi\>, \quad a.s.$$ Let us also assume that $u$ is continuous in the sense that $\lim_{n\to\infty} \pE\left( [\<u,\phi_n\> - \<u,\phi\>]^2\right) = 0$ for every sequence $\phi_n\to \phi$, where the convergence is in the space $X(\Gamma)$. We then say that $u$ is a random linear functional.
We say that the random linear functional is Gaussian if for every $\phi\in X(\Gamma)$, $\<u,\phi\>$ is a mean-zero Gaussian random variable. Let $u$ be a Gaussian random linear functional and define for every open set $S\subset \Gamma$, $$H(S) = \overline{\textrm{span}(\<u,w\>: w\in X(\Gamma), Supp(w)\subset S)},$$ where $Supp(w)$ stands for the support of the function $w$. Then, by a similar argument to the one in Example \ref{exm:functionrandomfields}, $\{H(S): S\subset \Gamma, S\hbox{ is open}\}$ is a Gaussian random field. \end{example} Our goal now is to obtain conditions for a random field $\{H(S): S\subset \Gamma\}$ to be Markov in the sense of Definition \ref{def:MarkovProperty}. To this end, we need some additional definitions. We begin with the definition of orthogonal random fields: \begin{Definition}\label{def:orthorandomfields} Let $\Gamma$ be a compact metric graph and $\{H(S): S\subset \Gamma\}$ be a random field in the sense of Example \ref{exm:SpaceRandomField}. We say that $H$ is orthogonal with respect to a family of open sets $\mathcal{D}_0$ if for every $S\in \mathcal{D}_0$ and any closed set $K\supset \partial S$ we have $H(S\setminus K) \perp H(\widetilde{S}\setminus K),$ where $\widetilde{S} = \Gamma\setminus \overline{S}$. \end{Definition} Related to this is the notion of dual fields, which is essential for characterizing Markov properties: \begin{Definition}\label{def:dualfield} Let $\Gamma$ be a compact metric graph with metric $d$ and $\{H(S): S\subset \Gamma\}$ be a random field in the sense of Example \ref{exm:SpaceRandomField}. Let $C\subset \Gamma$ be a closed set and define \begin{equation}\label{eq:hplusspace} H_+(C) = \bigcap_{\varepsilon>0} H(C_\varepsilon), \end{equation} where $C_\varepsilon = \{x\in \Gamma: d(x, C) < \varepsilon\}$.
We say that a random field $\{H^\ast(S): S\subset \Gamma, S\hbox{ is open}\}$ is dual to the random field $H$ if $H^\ast(\Gamma) = H(\Gamma)$ and for every open $S$, $H^\ast(S) = H_+(S^c)^\perp,$ where $S^c = \Gamma\setminus S$ and $\perp$ stands for the orthogonal complement in the space $H(\Gamma)$. \end{Definition} We then have the following result, whose proof can be found in \cite[p. 100]{Rozanov1982Markov}: \begin{Theorem}\label{thm:MarkovDual} Let $\Gamma$ be a compact metric graph and $\{H(S): S\subset \Gamma\}$ be a Gaussian random field in the sense of Example \ref{exm:SpaceRandomField}. Let $\{H^\ast(S): S\subset \Gamma, S\hbox{ is open}\}$ be the dual field to $H$. If $H^\ast$ is orthogonal with respect to the class of open subsets of $\Gamma$, then $H$ is a Markov random field. \end{Theorem} The above statement was adapted to our scenario; that is, we take the metric space to be the metric graph $\Gamma$ and the complete system of domains to be the family of open subsets of $\Gamma$. We are now in a position to prove the Markov property of solutions of \eqref{eq:spde} when $\alpha\in\mathbb{N}$. \begin{proof}[Proof of Theorem \ref{thm:markovspde}] Let $\Gamma$ be a compact metric graph and consider the operator $L^{\alpha/2} : \dot{H}^\alpha \to L_2(\Gamma),$ where $\dot{H}^\alpha$, for $\alpha\in\mathbb{N}$, is given in Propositions \ref{prp:hdot1} and \ref{prp:hdotk}. Observe that, by \eqref{eq:solspde}, the solution is the random linear functional given by $\<u, \varphi\> = \mathcal{W}(L^{-\alpha/2}\varphi),$ where $\varphi\in L_2(\Gamma)$. We will now describe the steps of this proof. Our goal is to use Theorem \ref{thm:MarkovDual}. To this end, we will use the solution $u$ to define $H(S)$ as in Example \ref{exm:genfunctionrandomfields}. Next, we will provide a suitable candidate for the dual field to $H$, which we will denote by $\{H^\ast(S):S\subset\Gamma, S\hbox{ is open}\}$. We then want to show that $H^\ast$ is actually dual to $H$.
So, following Definition \ref{def:dualfield}, we need to show that: i) $H(\Gamma) = H^\ast(\Gamma)$; ii) $H^\ast(S) = H_+(S^c)^\perp$. Step ii) will be done in two parts. We first show that $H^\ast(S) \subset H_+(S^c)^\perp$, then we show that we actually have the equality $H^\ast(S) = H_+(S^c)^\perp$. With this, we have that $H^\ast$ is dual to $H$, so, as the final step in order to apply Theorem \ref{thm:MarkovDual}, we show that $H^\ast$ is orthogonal with respect to the class of open subsets of $\Gamma$. This gives us that the solution, when viewed as a random linear functional, satisfies the Markov property. To conclude, we show that $u$ can be identified with a function in $L_2(\Gamma)$, which gives us the Markov property for the random field $u$. We will now begin the proof. Observe that since $L^{-\alpha/2}$ is a bounded operator, it follows by \eqref{isometryW} that $u$ is indeed a random linear functional in the sense of Example \ref{exm:genfunctionrandomfields}. Let us now define a formal dual process; our goal will be to show that it is actually the dual process. To this end, we define $\<u^\ast, \phi\> = \mathcal{W}(L^{\alpha/2}\phi),$ where $\phi\in\dot{H}^\alpha$. Observe that $\dot{H}^\alpha$ is dense in $L_2(\Gamma)$ (for instance, it contains all the eigenvectors of $L$). By using \eqref{isometryW} again, we also have that $u^\ast$ is a random linear functional. Observe that $\{\eig_i: i\in\mathbb{N}\}$ is contained in $\dot{H}^\alpha$, its span is dense in $L_2(\Gamma)$, and, by linearity of $\mathcal{W}$, $\<u,\eig_i\> = \lambda_i^{-\alpha/2} \mathcal{W}(\eig_i)$ and $\<u^\ast,\eig_i\> = \lambda_i^{\alpha/2} \mathcal{W}(\eig_i)$. Let $H$ and $H^\ast$ be the random fields induced by $u$ and $u^\ast$, respectively, in the sense of Example \ref{exm:genfunctionrandomfields}.
Therefore, we have that $$H(\Gamma) = \overline{\textrm{span}\left\{\mathcal{W}(\eig_i):i\in\mathbb{N}\right\}} = H^\ast(\Gamma).$$ Now, take any open set $S\subset\Gamma$ and note that if $Supp(\phi)\subset S$ and $Supp(\psi)\subset S^c$, then \eqref{isometryW} implies that $\pE[\<u,\phi\> \<u^\ast, \psi\>] = (L^{-\alpha/2}\phi, L^{\alpha/2}\psi)_{L_2(\Gamma)} = (\phi,\psi)_{L_2(\Gamma)} = 0.$ This implies that $H^\ast(S) \subset H_+(S^c)^\perp.$ To show that $u^\ast$ is the dual field to $u$, we need to show that we actually have the equality in the above expression. To this end, consider the space \begin{equation}\label{eq:graphschwarz} \mathcal{S}(\Gamma) = \left\{f\in \bigoplus_{e\in \mathcal{E}} C^\infty(e): \forall m=0,1,2,\ldots, D^{2m}f \in \dot{H}^2\right\}. \end{equation} Observe that for every $\alpha\in\mathbb{N}$, $\mathcal{S}(\Gamma)\subset \dot{H}^\alpha$. Note that $\mathcal{S}(\Gamma)$ is dense in $\dot{H}^k(\Gamma)$, with respect to the norm $\|\cdot\|_k$, $k\in\mathbb{N}$, since $\{\eig_i:i\in\mathbb{N}\}\subset \mathcal{S}(\Gamma)$. It is easy to see that if $\phi\in \mathcal{S}(\Gamma)$ and $\psi\in \dot{H}^k$, then $\phi\psi \in \dot{H}^k$, for $k=1,2$. Now, observe that by Propositions \ref{prp:hdot1} and \ref{prp:hdotk}, the norms $\|\cdot\|_{k}$ and $\|\cdot\|_{\widetilde{H}^k(\Gamma)}$ are equivalent. 
Therefore, it follows by the Leibniz rule of differentiation together with equivalence of norms that for $\alpha=1,2$ and any $\phi\in \mathcal{S}(\Gamma)$ and $\psi\in \dot{H}^\alpha$, $$\|L^{\alpha/2}(\phi\psi)\|_{L_2(\Gamma)} = \|\phi\psi\|_{\alpha} \leq C\|\phi\psi\|_{\widetilde{H}^\alpha(\Gamma)} \leq \widetilde{C} \|\psi\|_{\widetilde{H}^\alpha(\Gamma)} \leq \widehat{C} \|\psi\|_{\alpha} = \widehat{C} \|L^{\alpha/2}\psi\|_{L_2(\Gamma)},$$ where the constants $\widetilde{C}$ and $\widehat{C}$ depend on $\phi$. By using the explicit characterization of $\mathcal{S}(\Gamma)$ and the density of $\mathcal{S}(\Gamma)$ in $\dot{H}^\alpha$, the same proof as that of \cite[Lemma 1, p.108]{Rozanov1982Markov} allows us to conclude that $H^\ast(S) = H_+(S^c)^\perp.$ This proves that $u^\ast$ is the dual process to $u$. We can now apply Theorem \ref{thm:MarkovDual}. To this end, fix $\alpha\in\{1,2\}$, let $S\subset \Gamma$ be any open set and take any $\phi,\psi\in\mathcal{S}(\Gamma)$ with $Supp(\phi)\subset S$ and $Supp(\psi)\subset S^c = \Gamma\setminus S$. Then, by \eqref{isometryW}, $$ E[\<u^\ast,\phi\>\<u^\ast,\psi\>] = (L^{\alpha/2} \phi, L^{\alpha/2}\psi)_{L_2(\Gamma)} = (L^\alpha \phi, \psi)_{L_2(\Gamma)} = 0, $$ since $L^\alpha$ is a local operator. By using that $\mathcal{S}(\Gamma)$ is dense in $\dot{H}^\alpha$ with respect to $\|\cdot\|_\alpha$, it follows that for any $\phi,\psi\in\dot{H}^\alpha$, with $Supp(\phi)\subset S$ and $Supp(\psi)\subset S^c = \Gamma\setminus S$, we have $ (L^{\alpha/2} \phi, L^{\alpha/2}\psi)_{L_2(\Gamma)} = 0$. This implies the orthogonality condition in Definition \ref{def:orthorandomfields} and by Theorem \ref{thm:MarkovDual}, $u$ is a Markov random field. Finally, by Lemma \ref{kolmchentbounds} and Lemma \ref{weakregularity}, $u$ can be identified with a random function in $L_2(\Gamma)$.
We then have by Examples \ref{exm:functionrandomfields} and \ref{exm:genfunctionrandomfields} that if $u$ is seen as a random linear functional or as a random function in $L_2(\Gamma)$, the resulting random field $\{H(S):S\subset\Gamma, S\hbox{ is open}\}$ is the same. Thus, $u$, seen as a random function in $L_2(\Gamma)$, is a Markov random field. \end{proof} \section{Proof of Theorem \ref{thm:CondDens} }\label{app:proof_theorem} To prove the theorem we need the following lemma and corollary. The result in the lemma is known in the literature as adjusting the $c$-marginal \cite[p.~134]{lauritzen1996graphical}. Due to its importance, we state it here (in a slightly more general form than in \cite{lauritzen1996graphical}) and prove it. \begin{Lemma}\label{lem:conditional} Assume that \begin{align*} \mv{X}= \begin{bmatrix} \mv{X}_A \\ \mv{X}_B \end{bmatrix} \sim \mathcal{N}\left(\mv{0},\mv{\Sigma}\right), \quad \text{with} \quad \mv{\Sigma} = \begin{bmatrix} \mv{\Sigma}_{AA} & \mv{\Sigma}_{AB}\\ \mv{\Sigma}_{BA} & \mv{\Sigma}_{BB} \end{bmatrix}, \end{align*} where $\mv{X}_A\in\mathbb{R}^{n_A}$ and $\mv{X}_B\in\mathbb{R}^{n_B}$. Let $\mv{Q} = \mv{\Sigma}^{-1}$ (with the corresponding block structure) and fix some symmetric and non-negative definite $n_B\times n_B$ matrix $\mv{H}$.
Then, if $\mv{X}_{B}^* \sim \mathcal{N}\left(0,\mv{H} \right)$ and $\mv{X}^*_A = \mv{X}_{A|B} + \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{X}^*_B$, where ${\mv{X}_{A|B} \sim \mathcal{N} \left( \mv{0}, \mv{\Sigma}_{AA} - \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{\Sigma}_{BA} \right)}$, we have that $\mv{X}^*= [(\mv{X}^*_A)^\top, (\mv{X}^*_B)^\top]^\top \sim \mathcal{N}\left(\mv{0},\mv{\Sigma}^*\right)$, where \begin{align}\label{eq:inverseprecmatstr} \mv{\Sigma}^*= \begin{bmatrix} \mv{\Sigma}_{AA} - \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{\Sigma}_{BA} + \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{H}\mv{\Sigma}^{-1}_{BB}\mv{\Sigma}_{BA} &\quad \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{H}\\ \mv{H}\mv{\Sigma}^{-1}_{BB} \mv{\Sigma}_{BA} &\quad \mv{H} \end{bmatrix} \end{align} and \begin{align} \label{eq:PrecLemma} \left( \mv{\Sigma}^* \right)^{-1} = \mv{Q}^* = \begin{bmatrix} \mv{Q}_{AA} && \mv{Q}_{AB} \\ \mv{Q}_{BA} && \mv{Q}_{BB} + \mv{H}^{-1} - \mv{\Sigma}^{-1}_{BB} \end{bmatrix} = \begin{bmatrix} \mv{Q}_{AA} && \mv{Q}_{AB} \\ \mv{Q}_{BA} && \mv{H}^{-1} + \mv{Q}_{BA}\mv{Q}^{-1}_{AA} \mv{Q}_{AB} \end{bmatrix} . \end{align} If $\mv{H}$ is singular, then, with $\mv{P} = \mv{H}^\dagger \mv{H}$ denoting the projection onto the orthogonal complement of the null space of $\mv{H}$, \begin{align*} \mv{Q}^* = \begin{bmatrix} \mv{Q}_{AA} && \mv{Q}_{AB}\mv{P} \\ \mv{P}\mv{Q}_{BA} && \mv{H}^{\dagger} + \mv{P}\left( \mv{Q}_{BB} - \mv{\Sigma}_{BB}^{-1} \right)\mv{P} \end{bmatrix} = \begin{bmatrix} \mv{Q}_{AA} && \mv{Q}_{AB}\mv{P} \\ \mv{P}\mv{Q}_{BA} && \mv{H}^{\dagger} + \mv{P}\left( \mv{Q}_{BA}\mv{Q}^{-1}_{AA} \mv{Q}_{AB} \right)\mv{P} \end{bmatrix} . \end{align*} \end{Lemma} \begin{proof} The expression for $\mv{\Sigma}^*$ follows directly by the definitions of $\mv{X}_A^*$ and $\mv{X}_B^*$, so we only need to show that $\mv{Q}^*$ has the desired expression.
By the Schur complement we get that \begin{align*} \mv{Q}_{AA}^* &= \left(\mv{\Sigma}^*_{AA} - \mv{\Sigma}^*_{AB} \left(\mv{\Sigma}^{*}_{BB}\right)^{-1} \mv{\Sigma}^*_{BA}\right)^{-1}= \left(\mv{\Sigma}_{AA} - \mv{\Sigma}_{AB} \left(\mv{\Sigma}_{BB}\right)^{-1} \mv{\Sigma}_{BA}\right)^{-1} = \mv{Q}_{AA}. \end{align*} Then using that $\mv{\Sigma}_{AB} \mv{\Sigma}^{-1}_{BB} = -\mv{Q}_{AA} ^{-1} \mv{Q}_{AB}$ (see \cite[pp.~21 and 23]{rue2005gaussian}) we have \begin{align*} \mv{Q}_{AB}^* &= -\mv{Q}_{AA} \mv{\Sigma}_{AB} \mv{\Sigma}^{-1}_{BB} \mv{H}\mv{H}^{-1} = -\mv{Q}_{AA} \mv{\Sigma}_{AB} \mv{\Sigma}^{-1}_{BB} = \mv{Q}_{AB} . \end{align*} Using, again, the Schur complement and that $\mv{\Sigma}_{AB} \mv{\Sigma}^{-1}_{BB} = -\mv{Q}_{AA} ^{-1} \mv{Q}_{AB}$ we get \begin{align*} \mv{Q}^*_{BB} &= \mv{H}^{-1} + \mv{H}^{-1} \mv{\Sigma}^*_{BA} \mv{Q}^*_{AA}\mv{\Sigma}^*_{AB} \mv{H}^{-1} = \mv{H}^{-1} + \mv{\Sigma}_{BB}^{-1} \mv{\Sigma}_{BA} \mv{Q}_{AA}\mv{\Sigma}_{AB} \mv{\Sigma}_{BB}^{-1} \\ &= \mv{H}^{-1} + \mv{Q}_{BA}\mv{Q}^{-1}_{AA} \mv{Q}_{AB} = \mv{H}^{-1} + \mv{Q}_{BB} - \mv{\Sigma}_{BB}^{-1}. \end{align*} Finally, if $\mv{H}$ is singular then \begin{align*} \mv{Q}^*_{BB} &= \mv{H}^{\dagger} + \mv{H}^{\dagger} \mv{\Sigma}^*_{BA} \mv{Q}^*_{AA}\mv{\Sigma}^*_{AB} \mv{H}^{\dagger} = \mv{H}^{\dagger} +\mv{P} \mv{\Sigma}_{BB}^{-1} \mv{\Sigma}_{BA} \mv{Q}_{AA}\mv{\Sigma}_{AB} \mv{\Sigma}_{BB}^{-1}\mv{P} \\ &= \mv{H}^{\dagger} +\mv{P}\left( \mv{Q}_{BB} - \mv{\Sigma}_{BB}^{-1} \right)\mv{P}. \end{align*} \end{proof} The lemma thus shows that one can change the marginal distribution of the Gaussian random variable $\mv{X}_{B}$ without affecting the conditional distribution of $\mv{X}_{A}|\mv{X}_B$. The following particular case is needed for the proof of the theorem. \begin{Corollary}\label{cor:conditional} Assume the setting of Lemma~\ref{lem:conditional} and that $\mv{H}^{-1} = \mv{\Sigma}^{-1}_{BB} + \mv{C}$, where $\mv{C}$ is a symmetric and non-singular matrix.
We then have that \begin{align*} \mv{X}^*= \begin{bmatrix} \mv{X}^*_A \\ \mv{X}^*_B \end{bmatrix} \sim \mathcal{N}\left(\mv{0}, \begin{bmatrix} \mv{\Sigma}_{AA} - \mv{\Sigma}_{AB}\left(\mv{\Sigma}_{BB} + \mv{C}^{-1} \right)^{-1}\mv{\Sigma}_{BA} &\qquad \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{H}\\ \mv{H}\mv{\Sigma}^{-1}_{BB}\mv{\Sigma}_{BA} &\qquad \mv{H} \end{bmatrix} \right), \end{align*} and that $$ \mv{Q}^* = \mv{Q} + \begin{bmatrix} \mv{0} && \mv{0} \\ \mv{0} && \mv{C} \end{bmatrix} . $$ \end{Corollary} \begin{proof} The result follows immediately from Lemma \ref{lem:conditional} and the Woodbury matrix identity. \end{proof} The proof of Lemma \ref{lem:conditional} also provides the following corollary about precision matrices with a certain structure: \begin{Corollary}\label{cor:precmatstructure} Let $\mv{Q}^*$ be a precision matrix of the form \begin{equation}\label{eq:precmatstructS} \mv{Q}^* = \begin{bmatrix} \mv{Q}_{AA} &\quad \mv{Q}_{AB} \\ \mv{Q}_{BA} &\quad \mv{S} + \mv{Q}_{BA}\mv{Q}^{-1}_{AA} \mv{Q}_{AB} \end{bmatrix} . \end{equation} Then, the matrix $\mv{S}$ is symmetric and positive definite. Furthermore, the inverse $\mv{\Sigma}^* = (\mv{Q}^*)^{-1}$ is given by \begin{align}\label{eq:inverseprecmatstrS} \mv{\Sigma}^*= \begin{bmatrix} \mv{\Sigma}_{AA} - \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{\Sigma}_{BA} + \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{S}^{-1}\mv{\Sigma}^{-1}_{BB}\mv{\Sigma}_{BA} &\qquad \mv{\Sigma}_{AB}\mv{\Sigma}^{-1}_{BB} \mv{S}^{-1}\\ \mv{S}^{-1}\mv{\Sigma}^{-1}_{BB}\mv{\Sigma}_{BA} & \qquad\mv{S}^{-1} \end{bmatrix}. \end{align} \end{Corollary} \begin{proof} Since $\mv{Q}^*$ is a precision matrix, it is invertible. We can find the expression for its inverse, $\mv{\Sigma}^* = (\mv{Q}^*)^{-1}$, by comparing equations \eqref{eq:precmatstructS} and \eqref{eq:inverseprecmatstrS} with \eqref{eq:PrecLemma} and \eqref{eq:inverseprecmatstr}, respectively, where we obtain that $\mv{H}^{-1}=\mv{S}$.
In particular, from the proof of Lemma \ref{lem:conditional}, we obtain that $\mv{S}$ is invertible and that its inverse, $\mv{S}^{-1}$, is a covariance matrix, so it is symmetric and non-negative definite. Being also invertible, $\mv{S}^{-1}$ is positive definite, and hence so is $\mv{S}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:CondDens}] We will use the following matrix notation throughout the proof: Suppose that $\mv{\Sigma}$ is the covariance matrix of a vector $\mv{U}= (\mv{u}(t_1),\ldots, \mv{u}(t_n))^\top$ for some $t_1,\ldots, t_n \in \mathbb{R}$; then we use the notation $\mv{\Sigma}^{t_it_j}$ to denote the submatrix that is the covariance matrix of $(\mv{u}(t_i),\mv{u}(t_j))^\top$. We use the notation $\mv{Q} = \mv{\Sigma}^{-1}$ for the corresponding precision matrix. Let, also, $\mv{Q}^{t_it_j} = (\mv{\Sigma}^{t_it_j})^{-1}$, whereas for a matrix $\mv{M}$, we denote by $\mv{M}_{t_i t_j}$ the submatrix obtained from $\mv{M}$ with respect to the indices $t_i$ and $t_j$. We will show the result by establishing that any covariance function $\tilde{\mv{r}}$ satisfying conditions (i), (ii) and (iii) must be of the form \eqref{eq:covmod}. Finally, we will show that $\tilde{\mv{r}}_T$ is, indeed, a covariance function. Fix some $t, s \in (0,T)$. Let $\mv{Q}$ denote the precision matrix of the vector $\mv{U}=[ \mv{u}(t), \mv{u}(s), \mv{u}(0), \mv{u}(T)]$ and $\widetilde{\mv{Q}}$ denote the precision matrix of the vector $\tilde{\mv{U}}=[ \tilde{\mv{u}}(t), \tilde{\mv{u}}(s), \tilde{\mv{u}}(0), \tilde{\mv{u}}(T)]$. Let, also, $A = \{t,s\}$ and $B = \{0,T\}$.
Then (i) and the Markov property of $\mv{u}$ imply that $\widetilde{\mv{Q}}_{AA} = \mv{Q}_{AA} $ and $\widetilde{\mv{Q}}_{AB} = \mv{Q}_{AB}.$ Hence, $\widetilde{\mv{Q}}$ is of the form \eqref{eq:precmatstructS}, so by Corollary \ref{cor:precmatstructure}, its inverse is given by \eqref{eq:inverseprecmatstrS}, which directly implies that $\tilde{\mv{r}}$ is given by \begin{align*} \tilde{\mv{r}}\left(s,t\right) =& \mv{r}(s,t) - \begin{bmatrix}\mv{r}(s, 0) & \mv{r}(s,T)\end{bmatrix} \left(\mv{\Sigma}^{0T}\right)^{-1} \begin{bmatrix}\mv{r}(0,t)\\ \mv{r}(T,t) \end{bmatrix} + \\ & \begin{bmatrix}\mv{r}(s, 0) & \mv{r}(s,T)\end{bmatrix} \left(\mv{\Sigma}^{0T}\right)^{-1}\mv{H}^{0T} \left(\mv{\Sigma}^{0T}\right)^{-1} \begin{bmatrix}\mv{r}(0,t)\\ \mv{r}(T,t) \end{bmatrix}, \end{align*} for some positive definite matrix $\mv{H}^{0T}$. We will now show that conditions (ii) and (iii) provide an explicit form for $\mv{H}^{0T}$. Fix some $T_1\in (0,T)$, let $\widetilde{\mv{u}}^*$ be obtained from \eqref{eq:cond3thmconddens}, where $T_2 = T-T_1$, and $\widetilde{\mv{u}}_3$ be obtained from $\tilde{\mv{r}}_T$. We will obtain an explicit form for $\mv{H}^{0T}$ by obtaining conditions for equality of the densities of $\widetilde{\mv{U}}^*= [\widetilde{\mv{u}}^*\left(0 \right),\widetilde{\mv{u}}^*\left(T_1 \right), \widetilde{\mv{u}}^*\left(T \right)]$ and $\widetilde{\mv{U}}_3= [\widetilde{\mv{u}}_3\left(0 \right),\widetilde{\mv{u}}_3\left(T_1 \right), \widetilde{\mv{u}}_3\left(T \right)]$, which we denote, respectively, by $f_{\widetilde{\mv{U}}^*}$ and $f_{\widetilde{\mv{U}}_3}$. Let $ \mv{C}^{0T} = \left(\mv{H}^{0T} \right)^{-1} -\mv{Q} ^{0T}$.
Then, by Lemma \ref{lem:conditional}, we have that \begin{align*} f_{\widetilde{\mv{U}}^*}( \mv{x}) &\propto f_{\widetilde{\mv{u}}_1(0),\widetilde{\mv{u}}_1(T_1)}( \mv{x}_0,\mv{x}_{T_1}) f_{\widetilde{\mv{u}}_1(0),\widetilde{\mv{u}}_1(T_2)}( \mv{x}_{T_1},\mv{x}_{T})\\ & \propto \exp \left( -0.5 \begin{bmatrix} \mv{x}_0 \\ \mv{x}_{T_1} \end{bmatrix}^T \left( \mv{H}^{0T_1} \right)^{-1} \begin{bmatrix} \mv{x}_0 \\ \mv{x}_{T_1} \end{bmatrix} -0.5 \begin{bmatrix} \mv{x}_{T_1} \\ \mv{x}_{T} \end{bmatrix}^T \left( \mv{H}^{0T_2} \right)^{-1} \begin{bmatrix} \mv{x}_{T_1} \\ \mv{x}_{T} \end{bmatrix} \right) \\ &= \exp \left( -0.5 \mv{x}^T \mv{Q}^* \mv{x} \right), \end{align*} where $$ \mv{Q}^* = \begin{bmatrix} \mv{Q}^{0T_1}_{00} + \mv{C}^{0T_1}_{00} &\qquad \mv{Q}^{0T_1}_{0T_1} + \mv{C}^{0T_1}_{0T_1} &\qquad \mv{0} \\ \mv{Q}^{0T_1}_{0T_1} + \mv{C}^{0T_1}_{0T_1} & \qquad \mv{Q}^{0T_1}_{T_1T_1}+ \mv{Q}^{0T_2}_{00} + \mv{C}^{0T_1}_{T_1T_1}+ \mv{C}^{0T_2}_{00} &\qquad \mv{Q}^{0T_2}_{0T_2} + \mv{C}^{0T_2}_{0T_2} \\ \mv{0} &\qquad \mv{Q}^{0T_2}_{0T_2} + \mv{C}^{0T_2}_{0T_2} &\qquad \mv{Q}^{0T_2}_{T_2T_2} +\mv{C}^{0T_2}_{T_2T_2} \end{bmatrix} . $$ By Lemma \ref{lem:conditional} again, the density of $\widetilde{\mv{U}}_3$ is $ f_{\widetilde{\mv{U}}_3}( \mv{x}) \propto \exp \left( -0.5 \mv{x} ^T \widetilde{\mv{Q}} \mv{x} \right), $ where \begin{equation}\label{eq:qtildeconddens} \widetilde{\mv{Q}} = \left( \mv{Q}^{0T_1T} + \begin{bmatrix} \mv{C}^{0T}_{00} & \mv{0} & \mv{C}^{0T}_{0T} \\ \mv{0} & \mv{0} & \mv{0} \\ \mv{C}^{0T}_{0T}& \mv{0} & \mv{C}^{0T}_{TT} \end{bmatrix} \right), \end{equation} and $\mv{Q}^{0T_1T}$ is the precision matrix of $[\mv{u}(0), \mv{u}(T_1),\mv{u}(T)]$. Now the densities are equal if and only if $\mv{Q}^* = \widetilde{\mv{Q}}$. This establishes three conditions on $\mv{C}^{0T}$: First, $\mv{C}^{0T}_{0T} = \mv{0}$ for all $T$.
Second, due to the Markov property of $\mv{u}$, we have that $\mv{Q}^{0T_1T}_{00}= \mv{Q}^{0T_1}_{00}$, hence $\mv{C}^{0T_1}_{00}=\mv{C}^{0T}_{00}$ for all $T_1$ and $T$. Hence $\mv{C}^{0T}_{00} =: \mv{C}_0$ is a matrix independent of $T$. The same reasoning gives that $\mv{C}^{0T}_{TT} =: \mv{C}_{1}$ is independent of $T$. Finally, we can use the Markov property and stationarity of $\mv{u}$ to obtain that $$ \widetilde{\mv{Q}}_{T_1T_1} = \mv{Q}^{0T_1T}_{T_1T_1} = \mv{r}(0,0)^{-1}+\left( \mv{Q}^{0T_1}_{0T_1} \right)^T \left( \mv{Q}^{0T_1}_{00}\right)^{-1} \mv{Q}^{0T_1}_{0T_1} + \left(\mv{Q}^{0T_2}_{0T_2} \right)^T \left( \mv{Q}^{0T_2}_{T_2T_2}\right)^{-1} \mv{Q}^{0T_2}_{0T_2}, $$ and by construction $$ \mv{Q}^*_{T_1T_1} =2\mv{r}(0,0)^{-1}+\left( \mv{Q}^{0T_1}_{0T_1} \right)^T \left( \mv{Q}^{0T_1}_{00}\right)^{-1} \mv{Q}^{0T_1}_{0T_1} + \left(\mv{Q}^{0T_2}_{0T_2} \right)^T \left( \mv{Q}^{0T_2}_{T_2T_2}\right)^{-1} \mv{Q}^{0T_2}_{0T_2} + \mv{C}_0+\mv{C}_1, $$ whence $ \mv{C}_0 + \mv{C}_1 = - \mv{r}(0,0)^{-1}. $ By combining \eqref{eq:qtildeconddens}, the stationarity of $\mv{u}$ and condition (ii), we get that $\mv{C}_0=\mv{C}_1$. More precisely, we invert the right-hand side of \eqref{eq:qtildeconddens} and use stationarity of $\mv{u}$ to conclude that if $\mv{C}_0\neq \mv{C}_1$, then $\tilde{\mv{r}}_T(0,0)\neq \tilde{\mv{r}}_T(T,T)$. Thus, $$ \mv{C}^{0T} = - \frac{1}{2}\begin{bmatrix} \mv{r}(0,0)^{-1} & \mv{0} \\ \mv{0} & \mv{r}(0,0)^{-1} \end{bmatrix}. $$ Now the desired expression for the covariance of $\tilde{ \mv{u}}(s)$ on $[0,T]$ is obtained by applying Corollary \ref{cor:conditional}. Finally, we can see from the Schur complement that the matrix $ \begin{bmatrix} \mv{r}(0,0) & -\mv{r}(0,T) \\ -\mv{r}(T,0) & \mv{r}(0,0) \end{bmatrix}$ is positive definite, hence $\tilde{\mv{r}}_T$ is a covariance function.
\end{proof} \begin{proof}[Proof of Corollary \ref{cor:precfuncconddens}] This follows by using the $\mv{C}$ matrix found in the proof of Theorem \ref{thm:CondDens} and applying Corollary \ref{cor:conditional}. \end{proof} \section{Proof of Theorems~\ref{Them:piAXsoft} and \ref{Them:piXgby}}\label{app:proofpiX} \begin{proof}[Proof of Theorem \ref{Them:piXgby}] Note that $\pi_{\mv{U}_{\Ac}^*|\mv{Y},\mv{U}^*_{\A} } (\mv{u}^*_{\Ac}| \mv{y},\mv{b}^*) \propto \pi_{\mv{Y}|\mv{U}_\A^*,\mv{U}_{\Ac}^*}(\mv{y}|\mv{b}^*, \mv{u}_{\Ac}^* ) \pi(\mv{u}^*_{\Ac}|\mv{b}^*)$. We will derive the density by finding these two densities. First, it is straightforward to see that \begin{align*} \pi_{\mv{Y}|\mv{U}_\Ac^*,\mv{U}_{\A}^*}(\mv{y}|\mv{u}_{\Ac}^*,\mv{b}^*) = \frac{1}{(2\pi)^{\frac{m}{2}}|\mv{\Sigma}|^{1/2}}\exp \left( -\frac{1}{2}\left( \mv{y}- \mv{B}^* \begin{bmatrix} \mv{b}^* \\ \mv{u}^*_{\Ac} \end{bmatrix} \right)^\top \mv{\Sigma}^{-1} \left( \mv{y}- \mv{B}^* \begin{bmatrix} \mv{b}^* \\ \mv{u}^*_{\Ac} \end{bmatrix} \right)\right), \end{align*} which, as a function of $\mv{u}_{\Ac}^*$, can be written as \begin{align*} \pi_{\mv{Y}|\mv{U}_\A^*,\mv{U}_{\Ac}^*}(\mv{y}|\mv{b}^*, \mv{u}_{\Ac}^* ) \propto& \exp\left (-\frac{1}{2}\mv{u}^{*\top}_{\Ac}\mv{B}^{*\top}_{\Ac} \mv{\Sigma}^{-1} \mv{B}^*_{\Ac}\mv{u}^*_{\Ac}+ \mv{y}^{\top}\mv{\Sigma}^{-1}\mv{B}^*_{\Ac}\mv{u}^*_{\Ac} \right). \end{align*} From the proof of Theorem 2 in \cite{bolin2021efficient} we have \begin{equation}\label{eq:Xconditional} \mv{U}_{\Ac}^*|\mv{U}^*_\A= \mv{b}^* \sim \mathcal{N}_C\left(\mv{Q}_{\Ac\Ac}^*\left( \mv{\mu}^*_{\Ac}-\left(\mv{Q}_{\Ac\Ac}^{*}\right)^{-1} \mv{Q}_{\Ac\A}^{*} \left( \mv{b}^* - \mv{\mu}^*_{\A}\right)\right) , \mv{Q}_{\Ac\Ac}^* \right).
\end{equation} Hence, $\pi_{\mv{U}^*_{\Ac}|\mv{U}^*_\A}(\mv{u}^*_{\Ac}|\mv{b}^*) \propto \exp \left( -\frac{1}{2} \left( \mv{u}^*_{\Ac}- \widetilde{\mv{\mu}}^*_{\Ac} \right)^\top \mv{Q}^*_{\Ac\Ac} \left( \mv{u}^*_{\Ac}- \widetilde{\mv{\mu}}^*_{\Ac} \right) \right),$ as a function of $\mv{u}_{\Ac}^*$, where $\widetilde{\mv{\mu}}^*_{\Ac} = \mv{\mu}^*_{\Ac}-\left(\mv{Q}_{\Ac\Ac}^{*} \right)^{-1}\mv{Q}_{\Ac\A}^{*} \left( \mv{b}^* - \mv{\mu}^*_{\A}\right)$. Thus it follows that \begin{align*} \pi_{\mv{U}_{\Ac}^*|\mv{Y},\mv{U}^*_{\A} } (\mv{u}^*_{\Ac}| \mv{y},\mv{b}^*) \propto &\exp\left (-\frac{1}{2}\mv{u}^{*\top}_{\Ac} \mv{B}^{*\top}_{\Ac}\mv{\Sigma}^{-1} \mv{B}^*_{\Ac}\mv{u}^*_{\Ac}+ \left( \mv{B}^{*\top}_{\Ac} \mv{\Sigma}^{-1}\mv{y} \right)^{\top}\mv{u}^*_{\Ac} \right) \cdot \\ & \exp\left(-\frac{1}{2} \mv{u}^{*\top}_{\Ac} \mv{Q}^*_{\Ac\Ac} \mv{u}^*_{\Ac}+ \left( \mv{Q}^*_{\Ac\Ac} \widetilde{\mv{\mu}}^*_{\Ac}\right)^{\top} \mv{u}^*_{\Ac} \right) \\ \propto &\exp\left(- \frac{1}{2} \left(\mv{u}_{\Ac}^*-\widehat{\mv{\mu}}_{\Ac}^* \right)^\top \widehat{\mv{Q}}_{\Ac\Ac}^* \left(\mv{u}_{\Ac}^*-\widehat{\mv{\mu}}_{\Ac}^* \right)\right). \end{align*} Finally, using the relation $\mv{U}= \mv{T}^\top\mv{U}^*$ completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{Them:piAXsoft}] First note that $\pi_{\mv{Y} | \mv{AU}}\left( \mv{y} | \mv{b} \right)= \pi_{\mv{Y} | \mv{U}_\A^*}(\mv{y}|\mv{b}^*)$ and \begin{align} \pi_{\mv{Y} | \mv{U}^*_{\A}}\left( \mv{y} | \mv{b}^* \right) &= \int \pi_{\mv{U}^*_{\Ac},\mv{Y}| \mv{U}^*_{\A} }\left(\mv{u}^*_{\Ac}, \mv{y}| \mv{b}^* \right) d\mv{u}^*_{\Ac} \notag\\ &= \int \pi_{\mv{Y}|\mv{U}^*_{\Ac}, \mv{U}^*_{\A}}(\mv{y}|\mv{u}_{\Ac}^*, \mv{b}^*) \pi_{\mv{U}^*_{\Ac}| \mv{U}^*_{\A}}(\mv{u}_{\Ac}^*| \mv{b}^*) d\mv{u}^*_{\Ac}. \label{eq:decomposePI} \end{align} The goal is now to derive an explicit form of the density by evaluating the integral in \eqref{eq:decomposePI}.
By the expressions in the proof of Theorem~\ref{Them:piXgby} we have that \begin{align*} &\pi_{\mv{Y}|\mv{U}^*_{\Ac}, \mv{U}^*_{\A}}(\mv{y}|\mv{u}_{\Ac}^*, \mv{b}^*) \pi_{\mv{U}^*_{\Ac}| \mv{U}^*_{\A}}(\mv{u}_{\Ac}^*| \mv{b}^*) =\\ &= \exp\left (-\frac{1}{2}\mv{u}^{*\top}_{\Ac} \mv{B}^{*\top}_{\Ac}\mv{\Sigma}^{-1} \mv{B}^*_{\Ac} \mv{u}^*_{\Ac}+ \left( \mv{B}^{*\top}_{\Ac} \mv{\Sigma}^{-1}\mv{y} \right)^{\top}\mv{u}^*_{\Ac} \right) \frac{ |\mv{Q}^*_{\Ac\Ac}|^{1/2}}{\left(2\pi\right)^{c_0} |\mv{\Sigma}|^{1/2}} \cdot \\ &\quad\exp\left(-\frac{1}{2} \mv{u}^{*\top}_{\Ac} \mv{Q}^*_{\Ac\Ac} \mv{u}^*_{\Ac}+ \left( \mv{Q}^*_{\Ac\Ac} \widetilde{\mv{\mu}}^*_{\Ac}\right)^{\top} \mv{u}^*_{\Ac} \right) \exp \left(- \frac{1}{2}\left[ \mv{y}^{\top}\mv{\Sigma}^{-1}\mv{y}+ \widetilde{\mv{\mu}}_{\Ac}^{*\top} \mv{Q}^*_{\Ac\Ac} \widetilde{\mv{\mu}}^*_{\Ac} \right] \right) \\ &=\pi_{\mv{U}^*_{\Ac}|\mv{Y}, \mv{U}^*_{\A}}(\mv{u}_{\Ac}^*| \mv{y},\mv{b}^*) \frac{\exp \left( \frac{1}{2} \widehat{\mv{\mu}}_{\Ac}^{*\top} \widehat{\mv{Q}}^*_{\Ac\Ac} \widehat{\mv{\mu}}^*_{\Ac} \right) }{|\mv{Q}^*_{\Ac\Ac}|^{-1/2}|\mv{\Sigma}|^{1/2} \left(2\pi\right)^{c_1} } \exp \left(- \frac{1}{2}\left[ \mv{y}^{\top} \mv{\Sigma}^{-1}\mv{y} + \mv{\mu}_{\Ac}^{*\top} \mv{Q}^*_{\Ac\Ac}\mv{\mu}^*_{\Ac}\right]\right), \end{align*} where $c_0$ and $c_1$ are positive constants. Inserting this expression in \eqref{eq:decomposePI} and evaluating the integral, where one notes that $\pi_{\mv{U}^*_{\Ac}|\mv{Y}, \mv{U}^*_{\A}}(\mv{u}_{\Ac}^*| \mv{y},\mv{b}^*) $ integrates to one, gives the desired result. \end{proof} \end{appendix} \end{document}
TITLE: maximum number of circles passing through three vertices of a polygon QUESTION [2 upvotes]: What is the maximum of the sum (over all vertices) of the number of distinct circles passing through at least three vertices of a convex polygon ($n$-gon), if the center of each circle is required to belong to the set of vertices of the polygon? In other words, if we define a "centroid" by a quadruple of vertices $(a,b,c,d)$ such that $a$ is the center of a circle and the three other vertices $b$, $c$, and $d$ are on the circumference of this circle (so $|ab|=|ac|=|ad|$), then what we want is the maximum number of centroids in a convex $n$-gon. (Centroids are defined here: https://arxiv.org/abs/1009.2218) Any suggestion? I guess it should be $O(n)$. First, I think there exists some 'circular order' for all the centroids around the polygon, so the count may be of order $n$. Second, results like Bose's theorem (see the paper "The extremal spheres theorem" by O. Musin et al., https://www.sciencedirect.com/science/article/pii/S0012365X10003997) suggest that the number of a certain class of circles passing through three vertices is of order $n-2$. But I have no idea how to prove it. Help me, thanks. REPLY [1 votes]: If $n$ is even and $m=n/2$ is odd, there can be $m$ circles. Put $m$ points at the vertices of a regular polygon; then each point is equidistant from the two points furthest from it. Draw a circular arc connecting those two furthest points, centered at the original point, and add another point anywhere on that arc. Do that for each of the $m$ initial points and you get $n=2m$ points and $m$ circles.
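The construction in this answer can be checked numerically. Here is a small sketch (Python; placing each added point at the midpoint of its arc, and all names, are our choices, not part of the answer):

```python
import math

m = 5                               # odd, so each vertex has a unique farthest pair
d = 2 * math.sin(2 * math.pi / 5)   # distance from a vertex to its two farthest vertices
# m vertices of a regular m-gon, plus, for each vertex, one extra point at the
# midpoint of the arc (centered at that vertex) through its two farthest vertices
verts = [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m)) for k in range(m)]
arcs = [((1 - d) * x, (1 - d) * y) for (x, y) in verts]
pts = verts + arcs                  # n = 2m points in convex position

def count_centroids_circles(points):
    """Count circles centered at a point of the set passing through >= 3 other points."""
    total = 0
    for i, a in enumerate(points):
        radii = {}
        for j, b in enumerate(points):
            if i == j:
                continue
            r = round(math.dist(a, b), 9)  # round so equal radii match despite float error
            radii[r] = radii.get(r, 0) + 1
        total += sum(1 for c in radii.values() if c >= 3)
    return total

print(count_centroids_circles(pts))  # prints 5, i.e. m circles for n = 2m points
```

Each original vertex sees its two farthest vertices and its arc point at the same distance $d$, giving one circle per original vertex and no accidental extra coincidences for this placement.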
\section{A generalized Jacobi-Trudi formula} \label{sec:jacobi_trudi} Recall that when one expands the Jacobi-Trudi determinant for the Schur function $s_\lambda$ along the first row, one gets $s_\lambda = \sum_{q \geq 0} (-1)^q \, s_{\lambda_1 + q} \cdot s_{\mu/(1^q)}$ where $\mu = (\lambda_2,\lambda_3,\dots)$. In this section we will prove a generalization of this result for stable Grothendieck polynomials. To state the formula in sufficient generality we need the following definition. If $I$ is a sequence of integers and $\lambda$ a partition, we write \[ G_{I\sslash \lambda} = \sum_{\nu,\mu} \delta_{I,\nu} d^\nu_{\lambda \mu} \, G_\mu \in \Gamma \,. \] With this notation we have $\Delta G_I = \sum_\lambda G_{I\sslash \lambda} \otimes G_\lambda$. Notice that when $I = \nu$ is a partition, the element $G_{\nu\sslash \lambda}$ depends on both $\nu$ and $\lambda$ and not just the skew diagram $\nu/\lambda$ between them. For example $G_{\lambda\sslash \lambda} = 1$ if and only if $\lambda$ is the empty partition. \begin{thm}[Jacobi-Trudi formula] \label{thm:jacobi_trudi} If $a \in \Z$ is an integer and $I$ is a sequence of integers, then \[ G_{a,I} = G_a \cdot G_I + \sum_{q \geq 1,\, t \geq 0} (-1)^q \binom{q-1+t}{t} \, G_{a+q+t} \cdot G_{I\sslash (1^q)} \,. \] \end{thm} To prove this theorem, the following notation will get rid of a lot of special cases. We let $\bbn{n}{m}$ be the usual binomial coefficient, except that we set $\bbn{-1}{0} = 1$: \[ \bbn{n}{m} = \begin{cases} \binom{n}{m} &\text{if $0 \leq m \leq n$,} \\ 1 &\text{if $n = -1$ and $m = 0$,} \\ 0 &\text{otherwise.} \end{cases} \] \refthm{thm:jacobi_trudi} then asserts that $G_{a,I} = \sum_{q,t \geq 0} (-1)^q \bbn{q-1+t}{t} G_{a+q+t} \cdot G_{I \sslash (1^q)}$, and we have $\bbn{n}{m} = \bbn{n-1}{m-1} + \bbn{n-1}{m}$ whenever $m \leq n$.
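The modified binomial coefficient and the recurrence just stated are easy to sanity-check numerically; the following sketch (in Python, with the function name bbn chosen to mirror the notation) verifies both the special value and the Pascal-type recurrence:

```python
from math import comb

def bbn(n, m):
    """Modified binomial coefficient: comb(n, m) for 0 <= m <= n,
    except that bbn(-1, 0) = 1; zero otherwise."""
    if 0 <= m <= n:
        return comb(n, m)
    if n == -1 and m == 0:
        return 1
    return 0

# The recurrence bbn(n, m) = bbn(n-1, m-1) + bbn(n-1, m) holds whenever m <= n;
# the special value bbn(-1, 0) = 1 is exactly what makes the case m = n = 0 work.
for n in range(0, 10):
    for m in range(0, n + 1):
        assert bbn(n, m) == bbn(n - 1, m - 1) + bbn(n - 1, m)
print("recurrence verified")
```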
Notice that since $G_{a,I} = \sum_\mu \delta_{I,\mu} G_{a,\mu}$ and $G_{I\sslash (1^q)} = \sum_\mu \delta_{I,\mu} G_{\mu\sslash (1^q)}$, it is enough to prove the theorem in case $I = \mu$ is a partition. We will give a bijective proof of the theorem when $a \geq \mu_1$. For this we need the following combinatorial objects. Recall that a skew diagram is called a {\em horizontal strip\/} if no two boxes are in the same column, and a {\em vertical strip\/} if no two boxes are in the same row. If both are true then the diagram is a rook strip. \begin{defn} A colored and marked Young diagram (CMYD) relative to a partition $\mu$ is a quadruple of partitions $D = (\lambda_0 \subset \lambda \subset \nu_0 \subset \nu)$ such that \begin{romenum} \item $\lambda \subset \mu$. \item $\mu/\lambda_0$ is a vertical strip. \item $\nu/\lambda$ is a horizontal strip. \item $\lambda/\lambda_0$ and $\nu/\nu_0$ are both rook strips. \item $\nu/\nu_0$ has no box in the top non-empty row of $\nu/\lambda$. \end{romenum} \end{defn} We will regard a CMYD $D = (\lambda_0, \lambda, \nu_0, \nu)$ as the Young diagram for $\nu$ in which the boxes of $\lambda$ are colored white and the boxes of $\nu/\lambda$ are gray; the boxes in $\lambda/\lambda_0$ and in $\nu/\nu_0$ are furthermore marked. The axioms (i)--(v) then say that all white boxes are contained in $\mu$; the boxes in $\mu$ which are not white form a vertical strip; the gray boxes form a horizontal strip; the marked white boxes form a rook strip and the marked gray boxes form a rook strip; and finally the northernmost gray boxes are unmarked. Let \begin{align*} g(D) &= \text{\# unmarked gray boxes in $D$} ~= |\nu_0/\lambda| \,, \\ w(D) &= \text{\# unmarked white boxes in $D$} ~= |\lambda_0| \,, \\ u(D) &= \text{\# unmarked boxes in $D$} ~= g(D) + w(D) \,\text{, and} \\ m(D) &= \text{\# marked boxes in $D$} ~= |\lambda/\lambda_0| + |\nu/\nu_0| \,. \end{align*} We will write $G_D = G_\nu$. 
By the coproduct Pieri rule of \cite{buch:littlewood-richardson} or \refthm{thm:bialg} we have $G_{\mu \sslash (1^q)} = \sum (-1)^{m(D)} G_D$, the sum over all CMYDs relative to $\mu$ such that $w(D) = |\mu| - q$ and $D$ has no gray boxes. Then using Lenart's Pieri rule \cite[Thm.~3.2]{lenart:combinatorial} or equation (\ref{eqn:lrmult}) we obtain \begin{equation} \label{eqn:cmyd_prod} G_p \cdot G_{\mu \sslash (1^q)} = \sum_D (-1)^{m(D)} G_D \end{equation} where this sum is over all CMYDs $D$ relative to $\mu$ with $g(D) = p$ and $w(D) = |\mu| - q$. \newcommand{\erow}[1]{\raisebox{18pt}{\picD{#1}}\hspace{8pt}} \newcommand{\torow}[1]{\raisebox{9pt}{\picD{#1}}\hspace{8pt}} \newcommand{\trerow}[1]{\picD{#1}\hspace{8pt}} \begin{example} \label{exm:cmyd} If we take $\mu = (1,1)$, $p = 2$, and $q = 1$, we have the following 6 CMYDs: \[ \erow{cmyda} \torow{cmydb} \torow{cmydc} \torow{cmydd} \trerow{cmyde} \trerow{cmydf} \] It follows that $G_2 \cdot G_{1\,1 \sslash 1} = G_3 + G_{2\,1} - 2\,G_{3\,1} - G_{2\,1\,1} + G_{3\,1\,1}$. \end{example} Define the right vertical strip of $\mu$ to be the boxes in $\mu$ with no boxes to the right of them. We will say that a box in a CMYD $D$ is in $\mu$ resp.\ in the right vertical strip of $\mu$ if this is true when the two diagrams are overlaid. We will be interested in the following four types of {\em special boxes\/} in $D$: \begin{description} \item[Type A] An unmarked gray box contained in $\mu$ which does not have a marked white box above it. \item[Type B] Any white box (marked or unmarked) contained in the right vertical strip of $\mu$ which has no box under it. \item[Type C] An unmarked gray box with a marked white box above it. \item[Type D] A marked gray box such that the box above it is in the right vertical strip of $\mu$. \end{description} In \refexm{exm:cmyd} above, each diagram has exactly one special box. From left to right, these boxes have types B, A, D, B, C, and D. 
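The counts and signs in the example above can be recovered by brute force. The following sketch (in Python; the representation of partitions as tuples and all helper names are our choices) enumerates the CMYDs $D = (\lambda_0,\lambda,\nu_0,\nu)$ relative to $\mu=(1,1)$ with $g(D)=2$ and $w(D)=|\mu|-1$, checking axioms (i)-(v) directly:

```python
from collections import Counter

def cells(part):
    """Boxes (row, col) of the Young diagram of a partition tuple."""
    return {(r, c) for r, row_len in enumerate(part) for c in range(row_len)}

def horizontal_strip(skew): return len({c for _, c in skew}) == len(skew)
def vertical_strip(skew):   return len({r for r, _ in skew}) == len(skew)
def rook_strip(skew):       return horizontal_strip(skew) and vertical_strip(skew)

def partitions_in_box(rows, cols):
    """All partitions with at most `rows` parts, each part at most `cols`."""
    res = [()]
    def extend(pref, bound):
        for v in range(bound, 0, -1):
            res.append(pref + (v,))
            if len(pref) + 1 < rows:
                extend(pref + (v,), v)
    extend((), cols)
    return res

mu, p, q = (1, 1), 2, 1            # the data of the example
parts = partitions_in_box(4, 4)    # a box large enough for every valid diagram here
coeffs, count = Counter(), 0
for lam in parts:
    if not cells(lam) <= cells(mu):                     # (i): white boxes lie in mu
        continue
    for lam0 in parts:
        if not cells(lam0) <= cells(lam): continue
        if sum(lam0) != sum(mu) - q: continue           # w(D) = |mu| - q
        if not vertical_strip(cells(mu) - cells(lam0)): continue   # (ii)
        if not rook_strip(cells(lam) - cells(lam0)): continue      # (iv), white part
        for nu0 in parts:
            if not cells(lam) <= cells(nu0): continue
            if sum(nu0) - sum(lam) != p: continue       # g(D) = p
            for nu in parts:
                if not cells(nu0) <= cells(nu): continue
                gray = cells(nu) - cells(lam)
                if not horizontal_strip(gray): continue # (iii)
                marked_gray = cells(nu) - cells(nu0)
                if not rook_strip(marked_gray): continue           # (iv), gray part
                if gray:                                # (v): no marked gray box in
                    top = min(r for r, _ in gray)       #      the top row of nu/lam
                    if any(r == top for r, _ in marked_gray):
                        continue
                m = (sum(lam) - sum(lam0)) + (sum(nu) - sum(nu0))  # marked boxes
                coeffs[nu] += (-1) ** m
                count += 1
print(count)
print(dict(coeffs))
```

Running it yields the six diagrams and the signed sum $G_3 + G_{2\,1} - 2\,G_{3\,1} - G_{2\,1\,1} + G_{3\,1\,1}$ from the example.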
With this notion we can rewrite the right hand side of the formula of \refthm{thm:jacobi_trudi} as follows: \begin{lemma} \label{lemma:no_special} For any partition $\mu$ and integer $a \geq \mu_1$ we have \[ \sum_{q,t \geq 0} (-1)^q \bbn{q-1+t}{t} G_{a+q+t} \cdot G_{\mu\sslash (1^q)} = \sum_D (-1)^{|\mu| + w(D) + m(D)} \bbn{g(D)-a-1}{u(D)-a-|\mu|} G_D \] where the sum is over all CMYDs relative to $\mu$ with no special boxes. \end{lemma} \begin{proof} It follows from equation (\ref{eqn:cmyd_prod}) that the asserted identity is true if we sum over all CMYDs relative to $\mu$. We will prove that the terms for which $D$ has special boxes cancel each other out in the right hand side. Notice that each column of a CMYD can have at most one special box. We will group each CMYD $D$ for which the leftmost special box is of type A with two other CMYDs whose leftmost special boxes are of type B, such that the contributions from these three diagrams cancel. Similarly a diagram with a leftmost special box of type C will be grouped with two diagrams with leftmost special boxes of type D. Notice that if $D$ is a CMYD relative to $\mu$ such that $u(D) - a - |\mu| \geq 0$ and $D$ contains a special box, then $D$ has at least $a+1$ gray boxes, so the top row of $D$ contains an unmarked gray box which is outside $\mu$. Notice also that since $\bbn{n}{m} = \bbn{n-1}{m-1} + \bbn{n-1}{m}$ whenever $m \leq n$, we have \begin{equation} \label{eqn:special_cancel} \bbn{g(D)-a-1}{u(D)-a-|\mu|} = \bbn{g(D)-a-2}{u(D)-a-|\mu|-1} + \bbn{g(D)-a-2}{u(D)-a-|\mu|} \end{equation} for any diagram $D$ such that $w(D) < |\mu|$. Now let $D$ be a CMYD whose leftmost special box is of type A. The conditions for a type A box then make it possible to change this box into a white box or a marked white box, while the diagram continues to be a CMYD. 
Here it is important that the top row of $D$ contains at least one unmarked gray box outside $\mu$, since this ensures that the modified diagram satisfies axiom (v). \[ \raisebox{-8pt}{\picB{typea}} ~~~~\longleftrightarrow~~~~ \raisebox{-8pt}{\picB{typeb1}} ~~+~~ \raisebox{-8pt}{\picB{typeb2}} \] The signs of the contributions from the two new diagrams are the opposite of the sign of the contribution from $D$. Since $w(D) < |\mu|$, the contributions from all three diagrams therefore add to zero by equation (\ref{eqn:special_cancel}). Notice that any special box of type B can be changed to a gray box. Therefore all diagrams with a leftmost special box of type B get canceled in this way. Now suppose the leftmost special box in $D$ is of type C. In this case the box can be changed to a marked gray box while the box above is either marked or unmarked white. \[ \raisebox{-16pt}{\picB{typec}} ~~~~\longleftrightarrow~~~~ \raisebox{-16pt}{\picB{typed1}} ~~+~~ \raisebox{-16pt}{\picB{typed2}} \] Again the new diagrams give contributions of opposite sign from that of $D$, and $w(D) < |\mu|$, so the contributions of all three diagrams cancel by equation (\ref{eqn:special_cancel}). Finally all diagrams with a leftmost special box of type D are taken care of in this way, since any diagram with a type D box can be changed so the special box turns into type C. \end{proof} \begin{lemma} \label{lemma:unique_cmyd} Let $D$ be a CMYD with no special boxes and assume $u(D) \geq \mu_1 + |\mu|$. Then $D = (\mu, \mu, \nu, \nu)$ where $\nu = (g(D), \mu) = (g(D), \mu_1, \mu_2, \dots)$. \end{lemma} \begin{proof} We start by observing that $D$ has no marked white boxes. If $D$ has such a box, then since it is not special, there must be a gray box below it. But this gray box must then be special of type C or D, a contradiction. Notice also that no unmarked gray boxes can be contained in $\mu$, since these would necessarily be special of type A.
Now suppose $D$ contains a marked gray box, and consider the northernmost such box. Since this box is not special (and not in the top row of $D$), the white box above it is not in the right vertical strip of $\mu$. Now consider the row of boxes in $D$ to the right of this white box. If this row contains a box in the right vertical strip of $\mu$, then this would necessarily be a special box of type B. We conclude that if $D$ contains a marked gray box then some box northeast of this box is contained in $\mu$ but not in $D$. Now assume that $\mu$ is not contained in $D$ and consider the northernmost row where $D$ is missing boxes from $\mu$. Since $D$ contains at least $\mu_1$ gray boxes, this can't be the top row, and the row above must contain a box in the right vertical strip of $\mu$ which has no box below it. Since this box can't be marked gray by the argument above, it must be special of type $B$, again a contradiction. We conclude that $\mu$ is contained in $D$ and that all boxes from $\mu$ are white. To prevent these white boxes from being special, there must furthermore be a gray box in each column of $D$. This proves the result. \end{proof} The preceding two lemmas essentially prove \refthm{thm:jacobi_trudi} when $I = \mu$ is a partition and $a \geq \mu_1$. For the general case of the theorem we will also need the following lemma. Let $h_i(x)$ denote the complete symmetric function of degree $i$. \begin{lemma} \label{lemma:gtos} For any integer $k \in \Z$ we have $G_k(x) = (1 - G_1(x)) \cdot \sum_{i \geq 0} h_{k+i}(x)$. \end{lemma} \begin{proof} If $k \geq 1$, it follows from \cite[Thm.~2.2]{lenart:combinatorial} that $G_k(x) = \sum_{p \geq 0} (-1)^p\, s_{(k,1^p)}(x)$. Alternatively this can be deduced from equation (\ref{eqn:single}), see e.g.\ \cite[\S6]{buch:littlewood-richardson}. Notice in particular that $1 - G_1(x) = \sum_{p \geq 0} (-1)^p\, e_p(x)$. 
For $k \geq 1$ the lemma therefore follows from the identity $\sum_{i=0}^p (-1)^i\, h_{k+i}\, e_{p-i} = s_{(k,1^p)}$. When $k \leq 0$ the lemma is true because $\sum_{p \geq 0} (-1)^p\, e_p$ is the inverse power series to $\sum_{i \geq 0} h_i$. \end{proof} \begin{proof}[Proof of \refthm{thm:jacobi_trudi}] Suppose at first that $a \geq \mu_1$. If $D$ is a CMYD relative to $\mu$ with no special boxes such that its coefficient $\bbn{g(D)-a-1}{u(D)-a-|\mu|}$ is non-zero, then since $u(D) \geq a + |\mu|$ we conclude by \reflemma{lemma:unique_cmyd} that $D = (\mu,\mu, \nu,\nu)$ where $\nu = (g(D),\mu)$. But then we have $w(D) = |\mu|$ and $\bbn{g(D)-a-1}{g(D)-a} \neq 0$, so $g(D) = a$. The theorem therefore follows from \reflemma{lemma:no_special} in all cases where $a \geq \mu_1$. For the general case it is enough to show that \[ \GG_{a,\mu}(x_1,\dots,x_n) = \sum_{q,t \geq 0} (-1)^q \bbn{q-1+t}{t} \, G_{\mu\sslash (1^q)}(x_1,\dots,x_n) \cdot G_{a+q+t}(x_1,\dots,x_n) \] where $n \geq 1 + \max(\ell(\mu), -a)$; this is sufficient because any partition $\lambda$ such that $G_\lambda$ occurs in either side of the claimed identity must have length at most $\ell(\mu) + 1$, and the stable Grothendieck polynomials for partitions of such lengths are linearly independent when applied to $n$ variables. For the rest of this proof we will let $x$ denote the $n$ variables $x_1,\dots,x_n$. Let $\GG_\mu^{(i)}$ be the cofactor obtained by removing the first row and the $i+1$'st column of the determinant defining $\GG_{a,\mu}(x)$. Notice that this does not depend on $a$, and we have \begin{equation} \label{eqn:jt_lhs} \GG_{a,\mu}(x) = \sum_{i = 0}^{n-1} (-1)^i \, \GG_\mu^{(i)}(x) \cdot h_{a+i}(x) \,. 
\end{equation} Now using \reflemma{lemma:gtos} we obtain \begin{equation} \label{eqn:jt_rhs} \begin{split} & \sum_{q,t \geq 0} (-1)^q \bbn{q-1+t}{t} \, G_{\mu\sslash (1^q)}(x) \cdot G_{a+q+t}(x) \\ &~~= \sum_{q,t \geq 0,\, i \geq q+t} (-1)^q \bbn{q-1+t}{t} \, G_{\mu\sslash (1^q)}(x) \cdot (1-G_1(x)) \cdot h_{a+i}(x) \\ &~~= \sum_{i \geq 0} \left( (1-G_1(x)) \cdot \sum_{q+t \leq i} (-1)^q \bbn{q-1+t}{t} G_{\mu\sslash (1^q)}(x) \right) \cdot h_{a+i}(x) \,. \end{split} \end{equation} Since (\ref{eqn:jt_lhs}) is equal to (\ref{eqn:jt_rhs}) for all $a \geq \mu_1$, the theorem follows from the following lemma. \end{proof} \begin{lemma} Let $f_j \in \Z[[x_1,\dots,x_n]]$ be a power series for each $j \geq 0$ and assume that \begin{equation} \label{eqn:hshift} \sum_{j \geq 0} h_{a+j}(x_1,\dots,x_n) \cdot f_j = 0 \end{equation} holds for all sufficiently large $a \in \N$. Then (\ref{eqn:hshift}) is true for all $a \geq 1-n$. \end{lemma} \begin{proof} Since the form of each fixed degree in (\ref{eqn:hshift}) must be zero, we can assume that each $f_j$ is a polynomial and that $f_j = 0$ for $j > d$ for some $d \in \N$. Assume at first that $d < n$ and let (\ref{eqn:hshift}) be true whenever $a \geq N$. By assumption we then have \[ \begin{bmatrix} h_{N+d} & h_{N+d+1} & \dots & h_{N+2d} \\ h_{N+d-1} & h_{N+d} & \dots & h_{N+2d-1} \\ \vdots & \vdots && \vdots \\ h_N & h_{N+1} & \dots & h_{N+d} \end{bmatrix} \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_d \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \,. \] Since the determinant of the matrix is the Schur polynomial $s_{(N+d)^{d+1}}(x_1,\dots,x_n) \linebreak \neq 0$, we conclude that each $f_j = 0$. Now assume $d \geq n$. If $a \geq 1-n$ then since $a + d \geq 1$ we get \[ h_{a+d}(x_1,\dots,x_n) = \sum_{j=0}^{d-1} (-1)^{d-j+1} h_{a+j}(x_1,\dots,x_n) e_{d-j}(x_1,\dots,x_n) \,. 
\] So if we put $g_j = f_j + (-1)^{d-j+1} e_{d-j}(x_1,\dots,x_n) f_d$, the left hand side of (\ref{eqn:hshift}) is equal to \[ \sum_{j=0}^{d-1} h_{a+j}(x_1,\dots,x_n) \cdot g_j \,. \] Since this is equal to zero for all large $a$, we conclude it is zero for all $a \geq 1 - n$ by induction on $d$. \end{proof}
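The hook-shape identity $\sum_{i=0}^p (-1)^i\, h_{k+i}\, e_{p-i} = s_{(k,1^p)}$ invoked in the proof of \reflemma{lemma:gtos} can be sanity-checked numerically. The following sketch (an illustration, not part of the paper) evaluates both sides at a fixed generic rational point in three variables, computing the Schur polynomial via the bialternant formula; agreement at one generic point is only a sanity check, not a proof.

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement

# evaluate all symmetric functions at a fixed generic rational point
x = [Fraction(2), Fraction(3), Fraction(5)]
n = len(x)

def prod(seq):
    out = Fraction(1)
    for t in seq:
        out *= t
    return out

def h(k):  # complete homogeneous symmetric function h_k(x)
    if k < 0:
        return Fraction(0)
    return sum((prod(c) for c in combinations_with_replacement(x, k)), Fraction(0))

def e(k):  # elementary symmetric function e_k(x)
    if k < 0 or k > n:
        return Fraction(0)
    return sum((prod(c) for c in combinations(x, k)), Fraction(0))

def det(m):  # Laplace expansion along the first row, exact over Fraction
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def schur(lam):  # bialternant formula: s_lambda = det(x_i^(lam_j+n-j)) / Vandermonde
    lam = list(lam) + [0] * (n - len(lam))
    num = det([[xi ** (lam[j] + n - 1 - j) for j in range(n)] for xi in x])
    den = det([[xi ** (n - 1 - j) for j in range(n)] for xi in x])
    return num / den

for k in range(1, 5):
    for p in range(n):  # the hook (k, 1^p) must have length p + 1 <= n
        lhs = sum(((-1) ** i * h(k + i) * e(p - i) for i in range(p + 1)), Fraction(0))
        assert lhs == schur([k] + [1] * p)
```

For instance, with $p = 1$ the check reduces to the Pieri-type relation $h_k e_1 - h_{k+1} = s_{(k,1)}$.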
\section{Associating algebras to infinitesimal generators} \label{sec:algebra_generated_by_unbounded_elements} \noindent The main purpose of this section is to discuss a possible interpretation of an observation made by Kijowski and Rudolph in \cite[Section 3]{kijowski04} in the case of a quantum lattice gauge theory, namely that the kernel of the representation $\pi \colon \A \rightarrow B(\cH)$, where as before $\A = B_0(\cH)$, is in some sense generated by the elements of the Lie algebra $\fk$ of the symmetry group $K$. The representation of the group $K$ on $\cH$ can be used to associate differential operators to the elements of $\fk$, which are typically unbounded if $\cH$ is infinite-dimensional. If instead the representation space is finite-dimensional then the representation of the Lie algebra $\fk$ is bounded. Using this fact and other standard results from the representation theory of Lie groups, we will show how the differential operators associated to the elements of the Lie algebra generate $\ker(\pi)$. In addition, we need the following preparatory lemma, which can be found in \cite[Exercise 4.2(c)]{murphy90}: \begin{lem} Let $\cH $ be a Hilbert space, let $a$ be a compact operator on $\cH$, and suppose that $(b_j)_{j \in J}$ is a bounded net of bounded operators that converges strongly to $b \in B(\cH)$. Then the net $(b_j a)_{j \in J}$ converges in norm to $ba$. If in addition the operator $b_j$ is hermitian for each $j \in J$, then the net $(ab_j)_{j \in J}$ converges in norm to $ab$. \label{lem:strong+compact_implies_norm} \end{lem} \noindent The following result shows how $\ker(\pi)$ can be generated by differential operators: \begin{thrm} Suppose $K$ is a compact, connected Lie group. Let $S$ be a collection of finite-dimensional subrepresentations of the continuous representation $\rho \colon K \rightarrow U(\cH)$, and for each $\sigma \in S$, let $\cH_\sigma \subseteq \cH$ be the subspace on which $\sigma$ is represented. 
Suppose that these representation spaces form an orthogonal decomposition of $\cH$, i.e., \begin{equation*} \cH = \overline{\bigoplus_{\sigma \in S} \cH_\sigma}. \end{equation*} Then $\ker(\pi)$ is the closed, two-sided ideal generated by the set \begin{equation} \left\{ \int_{K} \rho(k) \sigma(X)^n \rho(k)^{-1} \: dk \colon \sigma \in S, \: X \in \fk, \: n \geq 1 \right\}. \label{eq:generators_of_the_kernel} \end{equation} \label{thrm:decomposition_and_algebra_generators} \end{thrm} \begin{rem} In the set of generators above, $\sigma(X)$ is regarded as the compression of $\rho(X)$ to $\cH_\sigma$. Moreover, we note that the integrals of vector-valued functions can be defined using Bochner integration. \end{rem} \begin{proof}[Proof of Theorem \ref{thrm:decomposition_and_algebra_generators}.] Let $I$ be the ideal in $\A^K$ generated by the set in equation \eqref{eq:generators_of_the_kernel}. We first show that $I \subseteq \ker(\pi)$. Indeed, $\sigma(X)^n$ maps $\cH$ into $\cH_\sigma$ for each $\sigma \in S$, each $X \in \fk$ and each $n \geq 1$, hence so does $\int_{K} \rho(k) \sigma(X)^n \rho(k)^{-1} \: dk$, which implies that it is a finite rank operator. In particular, it is compact. Moreover, it follows from left invariance of the Haar measure that $\int_{K} \rho(k) \sigma(X)^n \rho(k)^{-1} \: dk$ is equivariant with respect to $\rho$, so it is an element of $\A^K$. Finally, to show that it is an element of $\ker \pi$, let $p_\sigma \colon \cH \rightarrow \cH_\sigma$ be the orthogonal projection onto the representation space of $\sigma$. For each $v \in \cH^K$ we have $p_\sigma v \in \cH^K$ and $\sigma(X)v = 0$. Hence \begin{align*} \int_{K} \rho(k) \sigma(X)^n \rho(k)^{-1}(v) \: dk = \int_{K} \rho(k) \sigma(X)^n (v) \: dk = 0, \end{align*} and therefore $\int_{K} \rho(k) \sigma(X)^n \rho(k)^{-1} \: dk \in \ker(\pi)$. Thus the generators of $I$ are contained in $\ker(\pi)$. Since $\ker(\pi)$ is a closed, two-sided ideal, it follows that $I \subseteq \ker(\pi)$. 
We turn to the proof of the reverse inclusion. Let $b \in \ker(\pi)$, and let $p_{\cH^K}$ be the orthogonal projection of $\cH$ onto $\cH^K$. It is easy to see that \begin{equation*} p_{\cH^K} = \int_{K} \rho(k) \: dk. \end{equation*} Since $b \in \ker(\pi)$, it follows that \begin{equation*} b = b(\Id_{\cH} - p_{\cH^K}) = b \int_{K} ( \Id_{\cH} - \rho(k) )\: dk = \sum_{\sigma \in S} b \int_{K} (p_\sigma - \sigma(k)) \: dk. \end{equation*} By the preceding lemma, the series on the right-hand side is norm-convergent, hence to show that $b \in I$, it suffices to show that \begin{equation*} b \int_{K} (p_\sigma - \sigma(k)) \: dk \in I, \end{equation*} for each $\sigma \in S$. Since $I$ is closed under multiplication with elements of $\A^K$, we are done if we can show that \begin{equation*} \int_{K} (p_\sigma - \sigma(k)) \: dk \in I. \end{equation*} From bi-invariance of the Haar measure and Fubini's theorem, we infer that \begin{equation*} \int_{K} (p_\sigma - \sigma(k)) \: dk = \int_{K} \int_{K} \rho(h) (p_\sigma - \sigma(k))\rho(h)^{-1} \: dh \: dk. \end{equation*} The norm topology and the strong topology coincide on the finite-dimensional algebra $B(\cH_\sigma)$, so the first integral on the right-hand side is a norm limit of Riemann sums, i.e., for each $\varepsilon > 0$, there exist $k_j \in K$ and $c_j \geq 0$ for $j = 1,\ldots,n$, such that \begin{equation*} \left\| \int_{K} \int_{K} \rho(h) (p_\sigma - \sigma(k))\rho(h)^{-1} \: dh \: dk - \sum_{j = 1}^n c_j \int_{K} \rho(h) (p_\sigma - \sigma(k_j))\rho(h)^{-1} \: dh \right\| < \varepsilon. \end{equation*} Since $I$ is closed by definition, it suffices to show that \begin{equation*} \sum_{j = 1}^n c_j \int_{K} \rho(h) (p_\sigma - \sigma(k_j))\rho(h)^{-1} \: dh \in I. \end{equation*} We prove this by showing that \begin{equation*} \int_{K} \rho(h) (p_\sigma - \sigma(k))\rho(h)^{-1} \: dh \in I, \tag{$\ast$} \label{eq:in_the_ideal} \end{equation*} for each $k \in K$. Now fix such a $k$. 
Because $K$ is both compact and connected, the exponential map $\exp \colon \fk \rightarrow K$ is surjective, so there exists an $X \in \fk$ such that $k = \exp(X)$. But $\sigma$ is a homomorphism of Lie groups, so \begin{equation*} \sigma(k) = \sigma \circ \exp(X) = \exp \circ \sigma(X) = p_\sigma \sum_{j = 0}^\infty \frac{\sigma(X)^j}{j!}. \end{equation*} Thus \begin{equation*} p_\sigma - \sigma(k) = -\sum_{j = 1}^\infty \frac{\sigma(X)^j}{j!}. \end{equation*} The map \begin{equation*} B(\cH_\sigma) \rightarrow B(\cH_\sigma), \quad a \mapsto \int_{K} \rho(h) a \rho(h)^{-1} \: dh, \end{equation*} is a linear operator on the finite-dimensional algebra $B(\cH_\sigma)$, hence it is norm-continuous, so \begin{equation*} \int_{K} \rho(h) (p_\sigma - \sigma(k))\rho(h)^{-1} \: dh = -\sum_{j = 1}^\infty \frac{1}{j!} \int_{K} \rho(h) \sigma(X)^j \rho(h)^{-1} \: dh, \end{equation*} and the series on the right-hand side converges with respect to the norm on $B(\cH)$. Each of the partial sums is an element of $I$, which implies that \eqref{eq:in_the_ideal} holds, as desired. \end{proof} \noindent In general, the set $S$ in the above theorem will not be unique. Suppose that we are in the situation of the theorem, and that we are given a set $S$ satisfying the assumption. If the Hilbert space $\cH$ is infinite-dimensional, there are infinitely many different sets like $S$ that satisfy the assumption. Indeed, $S$ is an infinite set because $\cH$ is infinite-dimensional, so we can take any finite subset $F \subseteq S$ containing at least two representations, define the subrepresentation $\sigma_F := \bigoplus_{\sigma \in F} \sigma$, and the set $S^\prime = (S \backslash F) \cup \{\sigma_F\}$. Then $S^\prime \neq S$, and it satisfies the assumption of the theorem. The last argument can be formulated slightly more generally as follows: Suppose that $S_1$ and $S_2$ are sets of orthogonal finite-dimensional subrepresentations, and that $S_1$ satisfies the assumption of the theorem. 
If each element of $S_1$ is a subrepresentation of some element of $S_2$, then $S_2$ also satisfies the assumption. If $\cH$ is infinite-dimensional, then from any set $S_1$ one can always construct a different set $S_2$ with these properties. Thus one can always make the set $S$ `arbitrarily coarse', which is another reason why we view Theorem \ref{thrm:decomposition_and_algebra_generators} as a possible way to make the idea of `the ideal generated by unbounded operators' rigorous. The fact that a set $S$ like the one in Theorem \ref{thrm:decomposition_and_algebra_generators} always exists is a consequence of the following result. Recall that for any representation $\rho$ of a group $K$ on a space $V$, a vector $v$ is called {\em $K$-finite} if and only if the smallest subspace containing $v$ that is invariant under $\rho$, i.e., the span of $\set{\rho(k)v}{k \in K}$, is finite-dimensional. We let $V^{\text{\normalfont fin}}$ denote the subspace of $K$-finite vectors of $V$. \begin{prop} Let $\rho$ be a continuous representation of a compact Lie group $K$ in a complete locally convex topological vector space $V$. Then $V^{\text{\normalfont fin}}$ is dense in $V$. \end{prop} \noindent This result can be found in \cite{duistermaat00} as part of Corollary 4.6.3. Using this result and Zorn's lemma, one can now readily show that there exists a set $S$ which satisfies the assumption of our theorem. Needless to say, explicitly exhibiting such a set might be impossible. However, as we shall see in the next section, there are situations in which there is a natural choice for $S$. Before we end this section, we briefly recall some other notions from representation theory. Let $\widehat{K}$ be the set of equivalence classes of irreducible representations of $K$, and let $[\delta] \in \widehat{K}$. 
The {\em isotypical component of type $[\delta]$} is the set $V[\delta]$ of elements $v \in V^{\text{\normalfont fin}}$ such that the subrepresentation generated by $v$ is equivalent to the representation $\delta \oplus \dots \oplus \delta$ ($n$ copies) for some $n \in \IN$.
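The averaging formula $p_{\cH^K} = \int_K \rho(k)\,dk$ used in the proof of Theorem \ref{thrm:decomposition_and_algebra_generators} has an elementary finite-group analogue. The following sketch (an illustration, not from the paper) verifies it for the cyclic group $C_n$ acting on $\mathbb{C}^n$ by cyclic shifts: the average of the representing matrices is the orthogonal projection onto the invariant (constant) vectors.

```python
# Finite-group analogue of p_{H^K} = \int_K rho(k) dk, computed exactly.
from fractions import Fraction

n = 4

def shift_power(j):
    # permutation matrix of the cyclic shift by j positions on C^n
    return [[Fraction(1) if (r - c) % n == j else Fraction(0) for c in range(n)]
            for r in range(n)]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

rhos = [shift_power(j) for j in range(n)]
# "Haar average" over the finite group: p = (1/n) sum_j rho(j)
p = [[sum(r[i][j] for r in rhos) / n for j in range(n)] for i in range(n)]

assert p == [[Fraction(1, n)] * n for _ in range(n)]   # rank-one projection
assert matmul(p, p) == p                               # idempotent
for r in rhos:
    assert matmul(r, p) == p and matmul(p, r) == p     # rho(k) p = p = p rho(k)
```

The last assertion is the finite analogue of the equivariance used in the proof: averaging against the (normalized counting) Haar measure absorbs every group element.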
TITLE: Combinatorial proof of an identity involving integer partitions and their conjugates QUESTION [3 upvotes]: I have that $\lambda$ and $\mu$ are integer partitions and $\lambda^*$ and $\mu^*$ are their conjugates (respectively). I am trying to use counting arguments to prove that: \begin{align} \sum_{i,j}\min(\lambda_i,\mu_j) = \sum_k \lambda_k^*\mu_k^* \end{align} I am having trouble coming up with a model for what this counts. REPLY [2 votes]: Let's use Young diagrams to visualize the partitions. For ease of explanation, we will work with a specific example. However, it is not hard to see how to do the general case. The example we will work with is $\lambda = (3,2,1,1)$ and $\mu = (4, 1)$. Then the Young diagram for $\lambda$ looks like this: The Young diagram for $\mu$ looks like this: Now we take a three-dimensional coordinate system. We place the Young diagram for $\lambda$ under the $xy$-plane, and the Young diagram for $\mu$ behind the $xz$-plane. In the positive $xyz$-octant we now place cubes in all spots that are above a square of $\lambda$ and in front of a square of $\mu$. See the diagram below for a visualization. Here $\lambda$ is colored red, $\mu$ is colored blue, and the new cubes we added are colored yellow. Now if we want to count the number of yellow cubes, there are several ways to proceed. One way would be to look at each horizontal line of cubes (i.e. in the left-to-right direction). If we number from the back and from below, the $i$-th line on the $j$-th level contains exactly $\min(\lambda_i, \mu_j)$ blocks. Hence, there are $\sum_{i,j} \min(\lambda_i, \mu_j)$ cubes. On the other hand, we can look at the slices that are parallel to the $yz$-plane (i.e. the plane normal to both Young diagrams). Counting from left to right, the $k$-th slice is a rectangle with length $\lambda_k^*$ and height $\mu_k^*$. Hence, there are $\sum_{k} \lambda_k^*\mu_k^*$ yellow blocks. 
We conclude that $$\sum_{i,j} \textrm{min}(\lambda_i, \mu_j) = \sum_{k} \lambda_k^*\mu_k^*.$$
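The two counts can also be checked mechanically. The following short sketch (an addition, not part of the original answer) verifies the identity for the worked example and a few other partitions.

```python
def conjugate(lam):
    """Conjugate partition: the column lengths of the Young diagram of lam."""
    return [sum(1 for part in lam if part > i) for i in range(lam[0] if lam else 0)]

def both_sides(lam, mu):
    lhs = sum(min(a, b) for a in lam for b in mu)
    lam_c, mu_c = conjugate(lam), conjugate(mu)
    # pad with zeros so the pointwise products run over the same range of k
    k_max = max(len(lam_c), len(mu_c))
    lam_c += [0] * (k_max - len(lam_c))
    mu_c += [0] * (k_max - len(mu_c))
    rhs = sum(a * b for a, b in zip(lam_c, mu_c))
    return lhs, rhs

# the worked example from the answer: both sides equal 11
lhs, rhs = both_sides([3, 2, 1, 1], [4, 1])
assert lhs == rhs == 11

# a few more partitions
for lam, mu in [([5], [2, 2, 1]), ([4, 4, 2], [3, 1]), ([1], [1])]:
    lhs, rhs = both_sides(lam, mu)
    assert lhs == rhs
```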
TITLE: Let $m$ and $n$ be integers. Then $n^3 - 2m - 2 \ne 0$. QUESTION [0 upvotes]: I'm supposed to prove this using proof by contradiction, and I'm having a lot of trouble with it. I understand that I'm supposed to assume the premise is true and the conclusion false, which would mean the new statement would be "Let $m$ and $n$ be integers. Then $n^3 - 2m - 2 = 0$." I've been trying to switch up the equation by adding $2m+2$ to the other side of the equation, but I really can't figure out what to do next. A hint that was given to me by my Professor was to try and contradict the premise by showing that $n$, $m$, or both are not integers. REPLY [1 votes]: Note that $$n^3 - 2m - 2 = 0\iff n^3=2(m+1),$$ so $n^3$ must be even. Writing $n=2k$, $$n^3=2(m+1)\iff 8k^3=2(m+1)\iff 4k^3=m+1\iff 4k^3-1=m.$$ We find no contradiction; in fact, there are infinitely many solutions: $$(n=2, m=3), \quad (n=4, m=31), \quad \text{etc.}$$
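The answer's conclusion is easy to confirm by brute force. The following sketch (illustrative only) searches a small range and shows that every even $n$ yields an integer solution, so the statement as posed cannot be proved.

```python
# Brute-force confirmation that n^3 - 2m - 2 = 0 has integer solutions:
# whenever n is even, m = (n^3 - 2) / 2 is an integer.
solutions = [(n, (n ** 3 - 2) // 2) for n in range(-10, 11)
             if (n ** 3 - 2) % 2 == 0]

assert all(n ** 3 - 2 * m - 2 == 0 for n, m in solutions)
assert (2, 3) in solutions and (4, 31) in solutions
# only even n work: n^3 - 2m - 2 = 0 forces n^3 (hence n) to be even
assert all(n % 2 == 0 for n, _ in solutions)
```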
\begin{document} \author{Lee Troupe} \address{Department of Mathematics, Boyd Graduate Studies Research Center, University of Georgia, Athens, GA 30602, USA} \email{ltroupe@math.uga.edu} \let\thefootnote\relax\footnote{The author was partially supported by NSF RTG Grant DMS-1344994.} \begin{abstract} In this paper, we investigate extreme values of $\omega(\efp)$, where $E/\bQ$ is an elliptic curve with complex multiplication and $\omega$ is the number-of-distinct-prime-divisors function. For fixed $\gamma > 1$, we prove that \[ \#\{p \leq x : \omega(\efp) > \gamma\log\log x\} = \frac{x}{(\log x)^{2 + \gamma\log\gamma - \gamma + o(1)}}. \] The same result holds for the quantity $\#\{p \leq x : \omega(\efp) < \gamma\log\log x\}$ when $0 < \gamma < 1$. The argument is worked out in detail for the curve $E : y^2 = x^3 - x$, and we discuss how the method can be adapted for other CM elliptic curves. \end{abstract} \maketitle \section{Introduction} Let $E/\bQ$ be an elliptic curve. For primes $p$ of good reduction, one has \[ E(\bF_p) \simeq \bZ/d_p\bZ \oplus \bZ/e_p\bZ \] where $d_p$ and $e_p$ are uniquely determined natural numbers such that $d_p$ divides $e_p$. Thus, $\efp = d_pe_p$. We concern ourselves with the behavior of $\omega(\efp)$, where $\omega(n)$ denotes the number of distinct prime factors of the number $n$, as $p$ varies over primes of good reduction. Work has already been done in this arena: if the curve $E$ has CM, Cojocaru \cite[Corollary 6]{coj05} showed that the normal order of $\omega(\efp)$ is $\log\log p$, and a year later, Liu \cite{liu06} established an elliptic curve analogue of the celebrated Erd{\H o}s - Kac theorem: For any elliptic curve $E/\bQ$ with CM, the quantity \[ \frac{\omega(\efp) - \log\log p}{\sqrt{\log\log p}} \] has a Gaussian normal distribution. In particular, $\omega(\efp)$ has normal order $\log \log p$ and standard deviation $\sqrt{\log\log p}$. (These results hold for elliptic curves without CM, if one assumes GRH.) 
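As an illustration of the quantity being studied (not part of the paper), one can count the points of $E : y^2 = x^3 - x$ over $\bF_p$ by brute force for small primes and compare $\omega(\efp)$ with $\log\log p$. This model has bad reduction only at $p = 2$, and for $p \equiv 3 \pmod 4$ the curve is supersingular, so $\efp = p + 1$; the sketch below checks these facts together with the Hasse bound.

```python
# Brute-force point counts on E : y^2 = x^3 - x over F_p for small primes.
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def omega(n):
    # number of distinct prime factors of n
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def curve_order(p):
    # affine points of y^2 = x^3 - x over F_p, plus the point at infinity
    sq_count = {}
    for y in range(p):
        sq_count[y * y % p] = sq_count.get(y * y % p, 0) + 1
    return 1 + sum(sq_count.get((x ** 3 - x) % p, 0) for x in range(p))

for p in (q for q in range(3, 200) if is_prime(q)):
    n = curve_order(p)
    assert abs(n - (p + 1)) <= 2 * math.isqrt(p) + 1   # Hasse bound
    if p % 4 == 3:
        assert n == p + 1       # supersingular primes for this curve
    # omega(n) stays within a small factor of log log p on this range
```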
In light of the Erd{\H o}s - Kac theorem, one may ask how often $\omega(n)$ takes on extreme values, e.g. values greater than $\gamma \log \log n$, for some fixed $\gamma > 1$. A more precise version of the following result appears in \cite{Erd1978-1979}; its proof is due to Delange. \begin{theorem} Fix $\gamma > 1$. As $x \to \infty$, \[ \#\{n \leq x : \omega(n) > \gamma\log\log x\} = \frac{x}{(\log x)^{1 + \gamma\log\gamma - \gamma + o(1)}}. \] \end{theorem} Presently, we establish an analogous theorem for the quantity $\omega(\efp)$, where $E/\bQ$ is an elliptic curve with CM. \begin{theorem} Let $E/\bQ$ be an elliptic curve with CM. For $\gamma > 1$ fixed, \[ \#\{p \leq x : \omega(\efp) > \gamma\log\log x\} = \frac{x}{(\log x)^{2 + \gamma\log\gamma - \gamma + o(1)}}. \] \noindent The same result holds for the quantity $\#\{p \leq x : \omega(\efp) < \gamma\log\log x\}$ when $0 < \gamma < 1$. \end{theorem} In what follows, the above theorem will be proved for $E/\bQ$ with $E : y^2 = x^3 - x$. Essentially the same method can be used for any elliptic curve with CM; refer to the discussion in \S 4 of \cite{poltitec}. To establish the theorem, we prove corresponding upper and lower bounds in sections \S 3 and \S 4, respectively. \bigskip \noindent\textit{Remark.} One can ask similar questions about other arithmetic functions applied to $\efp$. For example, Pollack has shown \cite{poltitec} that, if $E$ has CM, then \[ \sideset{}{'}\sum_{p \leq x} \tau(\efp) \sim c_E \cdot x, \] where the sum is restricted to primes $p$ of good ordinary reduction for $E$. Several elements of Pollack's method of proof will appear later in this manuscript. \bigskip \noindent\textbf{Notation.} $K$ will denote an extension of $\bQ$ with ring of integers $\bZ_K$. For each ideal $\mathfrak{a} \subset \bZ_K$, we write $\Vert \mfa \Vert$ for the norm of $\mfa$ (that is, $\Vert \mfa \Vert = \#\bZ_K/\mfa$) and $\Phi(\mfa) = \#(\bZ_K/\mfa)^{\times}$. 
The function $\omega$ applied to an ideal $\mfa \subset \bZ_K$ will denote the number of distinct prime ideals appearing in the factorization of $\mfa$ into a product of prime ideals. For $\alpha \in \bZ_K$, $\Vert \alpha \Vert$ and $\Phi(\alpha)$ denote those functions evaluated at the ideal $(\alpha)$. If $\alpha$ is invertible modulo an ideal $\mfu \subset \bZ_K$, we write $\gcd(\alpha, \mfu) = 1$. The notation $\log_kx$ will be used to denote the $k$th iterate of the natural logarithm; this is not to be confused with the base-$k$ logarithm. The letters $p$ and $q$ will be reserved for rational prime numbers. We make frequent use of the notation $\ll, \gg$ and $O$-notation, which has its usual meaning. Other notation may be defined as necessary. \noindent\textbf{Acknowledgements.} The author thanks Paul Pollack for a careful reading of this manuscript and many helpful suggestions. \section{Useful propositions} One of our primary tools will be a version of Brun's sieve in number fields. The following theorem can be proved in much the same way that one obtains Brun's pure sieve in the rational integers, cf. \cite[\S 6.4]{polnabd}. \begin{theorem}\label{brun} Let $K$ be a number field with ring of integers $\bZ_K$. Let $\cA$ be a finite sequence of elements of $\bZ_K$, and let $\cP$ be a finite set of prime ideals. Define \[ S(\cA, \cP) := \#\{a \in \cA : \gcd(a, \mfP) = 1\},\text{ where } \mfP := \prod_{\mfp \in \cP} \mfp. \] For an ideal $\mfu \subset \bZ_K$, write $A_\mfu := \#\{a \in \cA : a \equiv 0 \pmod \mfu\}$. Let $X$ denote an approximation to the size of $\cA$. Suppose $\delta$ is a multiplicative function taking values in $[0, 1]$, and define a function $r(\mfu)$ such that \[ A_\mfu = X\delta(\mfu) + r(\mfu) \] for each $\mfu$ dividing $\mfP$. 
Then, for every even $m \in \bZ^{+}$, \[ S(\cA, \cP) = X\prod_{\mfp \in \cP} (1 - \delta(\mfp)) + O\bigg(\sum_{\mfu \mid \mfP, \, \omega(\mfu) \leq m} |r(\mfu)|\bigg) + O\bigg(X \sum_{\mfu \mid \mfP, \, \omega(\mfu) \geq m} \delta(\mfu)\bigg). \] All implied constants are absolute. \end{theorem} In our estimation of $O$-terms arising from the use of Theorem \ref{brun}, we will make frequent use of the following analogue of the Bombieri-Vinogradov theorem, which we state for an arbitrary imaginary quadratic field $K/\bQ$ with class number 1. For $\alpha \in \bZ_K$ and an ideal $\mfq \subset \bZ_K$, write \[ \pi(x; \mfq, \alpha) = \#\{\mu \in \bZ_K \text{ prime} : \Vert \mu \Vert \leq x, \mu \equiv \alpha \pmod \mfq\}. \] \begin{proposition}\label{bv} For every $A > 0$, there is a $B > 0$ so that \[ \sum_{\Vert \mfq \Vert \leq x^{1/2}(\log x)^{-B}} \max_{\alpha: \gcd(\alpha, \mfq) = 1} \max_{y \leq x} \vert \pi(y; \mfq, \alpha) - w_K \cdot \frac{\Li(y)}{\Phi(\mfq)} \vert \ll \frac{x}{(\log x)^A}, \] where the above sum and maximum are taken over $\mfq \subset \bZ_K$ and $\alpha \in \bZ_K$. Here $w_K$ denotes the size of the group of units of $\bZ_K$. 
\end{proposition} Note also that the ``additive version'' of Mertens' theorem, i.e., \[ \sum_{\Vert \mfp \Vert \leq x} \frac{1}{\Vert \mfp \Vert} = \log_2 x + B_K + O_K\bigg(\frac{1}{\log x}\bigg) \] for some constant $B_K$, holds in this case as well; it appears as Lemma 2.4 in [Rosen]. Finally, we will make use of the following estimate for elementary symmetric functions \cite[p. 147, Lemma 13]{halroth83}. \begin{lemma}\label{elementary} Let $y_1, y_2, \ldots, y_M$ be $M$ non-negative real numbers. For each positive integer $d$ not exceeding $M$, let \[ \sigma_d = \sum_{1 \leq k_1 < k_2 < \cdots < k_d \leq M} y_{k_1}y_{k_2}\cdots y_{k_d}, \] so that $\sigma_d$ is the $d$th elementary symmetric function of the $y_k$'s. Then, for each $d$, we have \[ \sigma_d \geq \frac{1}{d!} \sigma_1^d\Bigg(1 - \binom{d}{2}\frac{1}{\sigma_1^2}\sum_{k = 1}^M y_k^2\Bigg). \] \end{lemma} \section{An upper bound} \begin{theorem}\label{upperbound} Let $E$ be the elliptic curve $E : y^2 = x^3 - x$ and fix $\gamma > 1$. Then \[ \#\{p \leq x : \omega(\efp) > \gamma \log_2 x\} \ll_\gamma \frac{x(\log_2 x)^5}{(\log x)^{2 + \gamma\log\gamma - \gamma}}. \] The same statement is true if instead $0 < \gamma < 1$ and the strict inequality is reversed on the left-hand side. \end{theorem} Before proving Theorem \ref{upperbound}, we refer to \cite[Table 2]{ju08} for the following useful fact concerning the numbers $\efp$: For primes $p \leq x$ with $p \equiv 1 \pmod 4$, we have \begin{align}\label{efpnorm} \efp = p + 1 - (\pi + \ol{\pi}) = (\pi - 1)\ol{(\pi - 1)}, \end{align} where $\pi \in \bZ[i]$ is chosen so that $p = \pi\ol{\pi}$ and $\pi \equiv 1 \pmod{(1 + i)^3}$. (Such $\pi$ are sometimes called \emph{primary}.) This determines $\pi$ completely up to conjugation. We begin the proof of Theorem \ref{upperbound} with the following lemma, which will allow us to disregard certain problematic primes $p$. 
\begin{lemma}\label{discard} Let $x \geq 3$ and let $P(n)$ denote the largest prime factor of $n$. Let $\cX$ denote the set of $n \leq x$ for which either of the following properties fail: \begin{itemize} \item[(i)] $P(n) > x^{1/6\log_2 x}$ \item[(ii)] $P(n)^2 \nmid n$. \end{itemize} Then, for any $A > 0$, the size of $\cX$ is $O(x/(\log x)^A)$. \end{lemma} The following upper bound estimate of de Bruijn \cite[Theorem 2]{db66} will be useful in proving the above lemma. \begin{proposition}\label{smooth} Let $x \geq y \geq 2$ satisfy $(\log x)^2 \leq y \leq x$. Whenever $u := \frac{\log x}{\log y} \to \infty$, we have \[ \Psi(x, y) \leq x/u^{u + o(u)}. \] \end{proposition} \begin{proof}[Proof of Lemma \ref{discard}.] If $n \in \cX$, then either (a) $P(n) \leq x^{1/6\log_2 x}$ or (b) $P(n) > x^{1/6\log_2 x}$ and $P(n)^2 \mid n$. By Proposition \ref{smooth}, the number of $n \leq x$ for which (a) holds is $O(x/(\log x)^A)$ for any $A > 0$, noting that $(\log x)^A \ll (\log x)^{\log_3 x} = (\log_2 x)^{\log_2 x}$. The number of $n \leq x$ for which (b) holds is \[ \ll x\sum_{p > x^{1/6\log_2 x}} p^{-2} \ll x\exp(-\log x/6 \log_2x), \] and this is also $O(x/(\log x)^A)$. \end{proof} We would like to use Lemma \ref{discard} to say that a negligible amount of the numbers $\efp$, for $p \leq x$, belong to $\cX$. The following lemma allows us to do so. \begin{lemma}\label{efpsafe} The number of $p \leq x$ with $\efp \in \cX$ is $O(x/(\log x)^B)$, for any $B > 0$. \end{lemma} \begin{proof} Suppose $\efp = b \in \cX$. Then, by (\ref{efpnorm}), $b = \Vert \pi - 1 \Vert$, where $\pi \in \bZ[i]$ is a Gaussian prime lying above $p$. Thus, the number of $p \leq x$ with $\efp = b$ is bounded from above by the number of Gaussian integers with norm $b$, which, by \cite[Theorem 278]{hw00}, is $4\sum_{d \mid b} \chi(d)$, where $\chi$ is the nontrivial character modulo 4. 
Now, using the Cauchy-Schwarz inequality and Lemma \ref{discard}, \begin{align*} 4 \sum_{b \in \cX} \sum_{d \mid b} \chi(d) \leq 4 \sum_{b \in \cX} \tau(b) &\leq 4\Big(\sum_{b \in \cX} 1\Big)^{1/2} \Big(\sum_{b \in \cX} \tau(b)^2 \Big)^{1/2} \\ &\ll \Big(\frac{x}{(\log x)^{A}}\Big)^{1/2} \Big( x\log^3 x \Big)^{1/2} = \frac{x}{(\log x)^{A/2 - 3/2}}. \end{align*} Since $A > 0$ can be chosen arbitrarily, this completes the proof. \end{proof} For $k$ a nonnegative integer, define $N_k$ to be the number of primes $p \leq x$ of good ordinary reduction for $E$ such that $\efp$ possesses properties $(i)$ and $(ii)$ from the above lemma and such that $\omega(\efp) = k$. Then, in the case when $\gamma > 1$, \[ \#\{p \leq x : \omega(\efp) > \gamma \log\log x\} = \sum_{k > \gamma \log_2 x} N_k + O\Big(\frac{x}{(\log x)^A}\Big) \] for any $A > 0$. Our task is now to bound $N_k$ from above in terms of $k$. Evaluating the sum on $k$ then produces the desired upper bound. It is clear that \begin{align}\label{nkub} N_k \leq \sum_{\substack{a \leq x^{1 - 1/6\log_2 x} \\ \omega(a) = k-1}} \sum_{\substack{p \leq x \\ p \equiv 1 \pmod 4 \\a \mid \efp \\ \efp/a \text{ prime}}} 1. \end{align} To handle the inner sum, we need information on the integer divisors of $\efp$, where $p \leq x$ and $p \equiv 1 \pmod 4$. We employ the analysis of Pollack in his proof of \cite[Theorem 1.1]{poltitec}, which we restate here for completeness. By (\ref{efpnorm}), we have $a \mid \efp$ if and only if $a \mid (\pi - 1)\ol{(\pi - 1)} = \Vert \pi - 1\Vert$. 
With this in mind, we have \[ \sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} \sum_{\substack{p \leq x \\ p \equiv 1 \pmod 4 \\a \mid \efp \\ \efp/a \text{ prime}}} 1 = \frac{1}{2} \sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} \sideset{}{'}\sum_{\substack{\pi \, : \, \Vert\pi\Vert \leq x \\ \pi \equiv 1 \pmod{(1 + i)^3} \\ a \mid \Vert\pi - 1\Vert \\ \Vert\pi - 1\Vert/a \text{ prime}}} 1, \] where the $'$ on the sum indicates a restriction to primes $\pi$ lying over rational primes $p \equiv 1 \pmod 4$. \subsection{Divisors of shifted Gaussian primes.} The conditions on the primed sum above can be reformulated purely in terms of Gaussian integers. \begin{definition}\label{setsa} For a given integer $a \in \bN$, write $a = \prod_q q^{v_q}$, with each $q$ prime. For each $q \mid a$ with $q \equiv 1 \pmod 4$, write $q = \pi_q\ol{\pi}_q$. Define a set $S_a$ which consists of all products $\alpha$ of the form \[ \alpha = (1 + i)^{v_2} \prod_{\substack{q \mid a \\ q \equiv 3 \pmod 4}} q^{\lceil v_q/2 \rceil} \prod_{\substack{q \mid a \\ q \equiv 1 \pmod 4}} \alpha_q, \] where $\alpha_q \in \{\pi_q^i \ol{\pi}_q^{v_q - i} : i = 0, 1, \ldots, v_q\}$. \end{definition} Notice that the condition $a \mid \Vert \pi - 1 \Vert$ is equivalent to $\pi - 1$ being divisible by some element of the set $S_a$. We can therefore write \begin{align}\label{alphasum} \sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} \sum_{\substack{p \leq x \\ p \equiv 1 \pmod 4 \\a \mid \efp \\ \efp/a \text{ prime}}} 1 \leq \frac{1}{2}\sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} \sum_{\alpha \in S_a} \sideset{}{'}\sum_{\substack{\pi \, : \, \Vert\pi\Vert \leq x \\ \pi \equiv 1 \pmod{(1 + i)^3} \\ \alpha \mid \pi - 1 \\ \Vert \pi - 1 \Vert /a \text{ prime}}} 1. \end{align} Now, for any $\alpha \in S_a$, we have \[ \alpha\ol{\alpha} = a\prod_{q \equiv 3 \pmod 4} q^{2\lceil v_q/2 \rceil - v_q}. 
\] Observe that \begin{align*} \frac{\Vert \pi - 1 \Vert}{a} = \frac{(\pi-1)(\ol{\pi - 1})}{\alpha\ol{\alpha}}\prod_{q \equiv 3 \pmod 4} q^{2\lceil v_q/2 \rceil - v_q}. \end{align*} Therefore, if $\frac{\Vert \pi - 1 \Vert}{a}$ is to be prime, the number $a$ must satisfy exactly one of the following properties: \begin{itemize} \item[1.] The number $a$ is divisible by exactly one prime $q \equiv 3 \pmod 4$ with $v_q$ an odd number, and $\alpha = u(\pi - 1)$ where $u \in \bZ[i]$ is a unit; or \item[2.] All primes $q \equiv 3 \pmod 4$ which divide $a$ have $v_q$ even, and $(\pi - 1) / \alpha$ is a prime in $\bZ[i]$. \end{itemize} This splits the outer sum in (\ref{alphasum}) into two components. \begin{lemma}\label{ubcase1} We have \[ \sideset{}{^\flat}\sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} \sum_{\alpha \in S_a} \sideset{}{'}\sum_{\substack{\pi \, : \, \Vert\pi\Vert \leq x \\ \pi \equiv 1 \pmod{(1 + i)^3} \\ (\pi - 1)/\alpha \in U}} 1 = O\bigg(\frac{x}{\log^A x}\bigg), \] where $U$ is the set of units in $\bZ[i]$ and the $\flat$ on the outer sum indicates a restriction to integers $a$ such that there is a unique prime power $q^{v_q} \Vert a$ with $q \equiv 3 \pmod 4$ and $v_q$ odd. \end{lemma} \begin{proof} If $\alpha = u(\pi - 1)$ for $u \in U$, then there are at most four choices for $\pi$, given $\alpha$. Thus \[ \sideset{}{^\flat}\sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} \sum_{\alpha \in S_a} \sideset{}{'}\sum_{\substack{\pi \, : \, \Vert\pi\Vert \leq x \\ \pi \equiv 1 \pmod{(1 + i)^3} \\ \alpha = u(\pi - 1)}} 1 \leq 4 \sideset{}{^\flat}\sum_{\substack{a \leq x^{1 - 1/6\log\log x} \\ \omega(a) = k-1}} |S_a|. \] We have $|S_a| = \prod_{q \equiv 1 \pmod 4} (v_q + 1)$; this is bounded from above by the divisor function on $a$, which we denote $\tau(a)$. Therefore, the above is \[ \ll \sum_{a \leq x^{1 - 1/6\log\log x}} \tau(a) \ll x^{1 - 1/6\log_2 x}(\log x), \] which is $O(x/\log^Ax)$ for any $A > 0$. 
\end{proof} The second case provides the main contribution to the sum. \begin{lemma}\label{ubcase2} Let $a \leq x^{1 - 1/6\log\log x}$ with $\omega(a) = k-1$ such that all primes $q \equiv 3 \pmod 4$ dividing $a$ have $v_q$ even. Let $\alpha \in S_a$. Then \[ \sideset{}{'}\sum_{\substack{\pi \, : \, \Vert\pi\Vert \leq x \\ \pi \equiv 1 \pmod{(1 + i)^3} \\ \alpha \mid \pi - 1 \\ (\pi - 1)/\alpha \text{ prime}}} 1 \ll \frac{x(\log_2 x)^5}{\Vert \alpha \Vert(\log x)^2} \] uniformly over all $a$ as above and $\alpha \in S_a$. \end{lemma} \begin{proof} If $\pi \equiv 1 \pmod \alpha$, then $\pi = 1 + \alpha\beta$ for some $\beta \in \bZ[i]$. Thus $\beta = \frac{\pi - 1}{\alpha}$, and so $\Vert \beta \Vert \leq \frac{2x}{\Vert \alpha \Vert}$. Let $\cA$ denote the sequence of elements in $\bZ[i]$ given by \[ \Big\{ \beta(1 + \alpha\beta) : \Vert \beta \Vert \leq \frac{2x}{\Vert \alpha \Vert} \Big\}. \] Define $\cP = \{\mfp \subset \bZ[i] : \Vert \mfp \Vert \leq z\}$ where $z$ is a parameter to be chosen later. Then, in the notation of Theorem \ref{brun}, \[ \sideset{}{'}\sum_{\substack{\pi \, : \, \Vert\pi\Vert \leq x \\ \pi \equiv 1 \pmod{(1 + i)^3} \\ \alpha \mid \pi - 1 \\ (\pi - 1)/\alpha \text{ prime}}} 1 \leq S(\cA, \cP) + O(z). \] Here, the $O(z)$ term comes from those $\pi \in \bZ[i]$ such that both $\pi$ and $(\pi - 1)/\alpha$ are primes of norm less than $z$. For $\mfu \subset \bZ[i]$, write $A_\mfu = \#\{a \in \cA : a \equiv 0 \pmod \mfu\}$. An element $\mfa \in \cA$ is counted by $A_\mfu$ if and only if a generator of $\mfu$ divides $\mfa$. Thus, by familiar estimates on the number of integer lattice points contained in a circle, $A_\mfu$ satisfies the equation \[ A_\mfu = \frac{2\pi x}{\Vert\alpha\Vert} \frac{\nu(\mfu)}{\Vert \mfu \Vert} + O\Big(\nu(\mfu)\frac{\sqrt{x}}{(\Vert \alpha\Vert\Vert\mfu \Vert)^{1/2}}\Big), \] where \[ \nu(\mfu) = \#\{\beta \pmod \mfu : \beta(1 + \alpha\beta) \equiv 0 \pmod \mfu\}.
\] We apply Theorem \ref{brun} with \[ X = \frac{2\pi x}{\Vert\alpha\Vert} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \delta(\mfu) = \frac{\nu(\mfu)}{\Vert\mfu\Vert}. \] With these choices, we have \[ r(\mfu) = O\Big(\nu(\mfu)\frac{\sqrt{x}}{(\Vert\alpha\Vert\Vert\mfu\Vert)^{1/2}}\Big). \] Then, for any even integer $m \geq 0$, \begin{align}\label{sap1} S(\cA, \cP) = \frac{2\pi x}{\Vert\alpha\Vert} \prod_{\Vert \mfp \Vert \leq z}\bigg(1 - &\frac{\nu(\mfp)}{\Vert\mfp\Vert}\bigg) + O\bigg(\frac{\sqrt{x}}{\Vert\alpha\Vert^{1/2}}\sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \leq m}} \frac{\nu(\mfu)}{\Vert\mfu\Vert^{1/2}}\bigg) \\ &+ O\bigg(\frac{x}{\Vert\alpha\Vert}\sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \geq m}} \delta(\mfu)\bigg), \nonumber \end{align} where $\mfP = \prod_{\mfp \in \cP} \mfp$. For a prime $\mfp$, we have $\nu(\mfp) = 2$ if $\alpha \not\equiv 0 \pmod \mfp$ and $\nu(\mfp) = 1$ otherwise. Therefore, the product in the first term is \begin{align*} \prod_{\substack{\Vert \mfp \Vert \leq z \\ \mfp \nmid (\alpha)}}\bigg(1 - \frac{2}{\Vert\mfp\Vert}\bigg) &\prod_{\substack{\Vert \mfp \Vert \leq z \\ \mfp \mid (\alpha)}}\bigg(1 - \frac{1}{\Vert\mfp\Vert}\bigg) \\ &\leq \prod_{\Vert \mfp \Vert \leq z}\bigg(1 - \frac{1}{\Vert\mfp\Vert}\bigg)^2 \prod_{\substack{\Vert \mfp \Vert \leq z \\ \mfp \mid (\alpha)}}\bigg(1 - \frac{1}{\Vert\mfp\Vert}\bigg)^{-1} \ll \frac{1}{(\log z)^2}\frac{\Vert\alpha\Vert}{\Phi(\alpha)}, \end{align*} where in the last step we used Proposition \ref{mertens}. Choose $z = x^{\frac{1}{200(\log_2 x)^2}}$. Then our first term in (\ref{sap1}) is \[ \ll \frac{x(\log_2 x)^4}{\Phi(\alpha)(\log x)^2}. \] Recall that $\Vert \alpha \Vert = a$, and $a \leq x^{1 - 1/6\log_2 x}$. Since $\Phi(\alpha) \gg \Vert \alpha \Vert/\log_2 x$ (analogous to the minimal order for the usual Euler function, cf. \cite[Theorem 328]{hw00}), the above is \begin{align*}\label{maintermalpha} \ll \frac{x(\log_2 x)^5}{\Vert \alpha \Vert(\log x)^2}.
\end{align*} We now show that this ``main'' term dominates the two $O$-terms uniformly for $\alpha \in S_a$ and $a \leq x^{1 - 1/6\log_2 x}$. For the first $O$-term, we begin by noting that $\nu(\mfu)/\Vert \mfu \Vert^{1/2} \ll 1$. Then, taking $m = 10\lfloor \log_2 x \rfloor$, we have \[ \sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \leq m}} \frac{\nu(\mfu)}{\Vert \mfu \Vert^{1/2}} \ll \sum_{k = 0}^m \binom{\pi_K(z)}{k} \leq \sum_{k = 0}^m \pi_K(z)^k \leq 2\pi_K(z)^m \leq x^{1/20\log_2 x}, \] where $\pi_K(z)$ denotes the number of prime ideals $\mfp \subset \bZ[i]$ with norm up to $z$. Therefore, the inequality \[ \frac{x(\log_2 x)^5}{\Vert \alpha \Vert(\log x)^2} \gg \frac{x^{1/2 + 1/20\log_2 x}}{\Vert \alpha \Vert^{1/2}} \] holds for all $\alpha$ with $\Vert \alpha \Vert \leq x^{1 - 1/6\log_2x}$, as desired. Next we handle the second $O$-term. The sum in this term is \[ \sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \geq m}} \delta(\mfu) \leq \sum_{s \geq m} \frac{1}{s!}\Big(\sum_{\substack{\Vert \mfp \Vert \leq z}} \frac{\nu(\mfp)}{\Vert \mfp \Vert} \Big)^s. \] Observe that, by Proposition \ref{mertens}, we have \[ \sum_{\substack{\Vert \mfp \Vert \leq z}} \frac{\nu(\mfp)}{\Vert \mfp \Vert} \leq 2\log_2x + O(1). \] Thus, by the ratio test, one sees that the sum on $s$ is \[ \ll \frac{1}{m!}(2\log_2x + O(1))^m. \] Using Proposition \ref{mertens} followed by Stirling's formula, we obtain that the above quantity is \begin{align*} \frac{1}{m!}(2\log_2 x + O(1))^m &\leq \Big(\frac{2e\log_2 x + O(1)}{10\lfloor \log_2 x \rfloor}\Big)^{10\lfloor \log_2 x \rfloor} \\ &\ll \Big(\frac{e}{5}\Big)^{9\log_2 x} \leq \frac{1}{(\log x)^{5}}. \end{align*} So the second $O$-term is \[ \ll \frac{x}{\Vert \alpha \Vert (\log x)^{5}}, \] and this is certainly dominated by the main term. 
\end{proof} From Lemmas \ref{ubcase1} and \ref{ubcase2}, we see (\ref{nkub}) can be rewritten \[ N_k \ll \frac{x(\log_2 x)^5}{(\log x)^2} \sum_{\substack{a \leq x^{1 - 1/6\log_2 x} \\ \omega(a) = k-1}} \frac{|S_a|}{a} + O\Big(\frac{x}{\log^A x}\Big), \] noting that $\Vert \alpha \Vert = a$ for all $a$ under consideration and all $\alpha \in S_a$. We are now in a position to bound $N_k$ from above in terms of $k$. \begin{lemma} We have \[ \sum_{\substack{a \leq x^{1 - 1/6\log_2 x} \\ \omega(a) = k-1}} \frac{|S_a|}{a} \leq \frac{(\log_2 x + O(1))^{k-1}}{(k - 1)!}. \] \end{lemma} \begin{proof} We have already seen that the size of $S_a$ is $\prod_{p \mid a : p \equiv 1 \pmod 4} (v_p + 1)$, where $v_p$ is defined by $p^{v_p} \parallel a$. Recall that in the current case, each prime $p \equiv 3 \pmod 4$ dividing $a$ appears to an even power. Therefore, we have \begin{align}\label{sumona} \sum_{\substack{a \leq x \\ \omega(a) = k-1}} \frac{|S_a|}{a} \leq \frac{1}{(k-1)!}\Bigg(\sum_{\substack{p^\ell \leq x \\ p \not\equiv 3 \pmod 4}} \frac{|S_{p^\ell}|}{p^\ell} + \sum_{\substack{p^{2k} \leq x \\ p \equiv 3 \pmod 4}} \frac{|S_{p^{2k}}|}{p^{2k}} + O(1)\Bigg)^{k-1}. \end{align} Note that $|S_{p^{2k}}| = 1$ for each prime $p \equiv 3 \pmod 4$. Thus we can absorb the sum corresponding to these primes into the $O(1)$ term, giving \begin{align}\label{sumonamultinomial} \sum_{\substack{a \leq x \\ \omega(a) = k-1}} \frac{|S_a|}{a} \ll \frac{1}{(k-1)!}\Bigg(\displaystyle\sum_{\substack{p^\ell \leq x \\ p \not\equiv 3 \pmod 4}} \frac{|S_{p^\ell}|}{p^\ell} + O(1)\Bigg)^{k-1}. \end{align} Now \begin{align*} \sum_{\substack{p^\ell \leq x \\ p \not\equiv 3 \pmod 4}} \frac{|S_{p^\ell}|}{p^\ell} &= \sum_{\substack{p^\ell \leq x \\ p \equiv 1 \pmod 4}} \frac{\ell + 1}{p^\ell} + O(1) \\ &= \sum_{\substack{p \leq x \\ p \equiv 1 \pmod 4}} \frac{2}{p} + O(1) \\ &= \log_2 x + O(1). \end{align*} Inserting this expression into (\ref{sumonamultinomial}) proves the lemma. 
\end{proof} \subsection{Finishing the upper bound.} We have shown so far that \[ N_k \ll \frac{x(\log_2 x)^5}{(\log x)^2} \cdot \frac{(\log_2 x + O(1))^{k-1}}{(k-1)!}. \] We now sum on $k > \gamma\log_2 x$ for fixed $\gamma > 1$ to complete the proof of Theorem \ref{upperbound}. (The statement corresponding to $0 < \gamma < 1$ may be proved in a completely similar way.) Again using the ratio test and Stirling's formula, we have \begin{align*} \sum_{k > \gamma\log_2 x} &\frac{(\log_2 x + O(1))^{k-1}}{(k-1)!} \ll \bigg(\frac{e\log_2 x + O(1)}{\lfloor\gamma\log_2 x\rfloor}\bigg)^{\lfloor\gamma\log_2 x\rfloor} \\ &\ll \bigg(\frac{e}{\gamma}\Big(1 + O\Big(\frac{1}{\log_2 x}\Big)\Big)\bigg)^{\lfloor\gamma\log_2 x\rfloor} \ll \Big(\frac{e}{\gamma}\Big)^{\lfloor\gamma\log_2 x\rfloor} \ll_\gamma (\log x)^{\gamma - \gamma\log\gamma}. \end{align*} Thus, we have obtained an upper bound of \[ \ll_\gamma \frac{x(\log_2 x)^5}{(\log x)^{2+\gamma\log\gamma - \gamma}}, \] as desired. \section{A lower bound} \begin{theorem}\label{lowerbound} Consider $E : y^2 = x^3 - x$ and fix $\gamma > 1$. Then \[ \#\{p \leq x : \omega(\efp) > \gamma \log_2 x\} \geq \frac{x}{(\log x)^{2 + \gamma\log\gamma - \gamma + o(1)}}. \] The same statement is true if instead $0 < \gamma < 1$ and the strict inequality is reversed on the left-hand side. \end{theorem} Our strategy in the case $\gamma > 1$ is as follows. As before, we write $\efp = \Vert \pi - 1 \Vert$, where $\pi \equiv 1 \pmod{(1 + i)^3}$ and $p = \pi\ol{\pi}$. 
Let $k$ be an integer to be specified later and fix an ideal $\mfs \in \bZ[i]$ with the following properties: \begin{itemize} \item[(A)] $((1 + i)^3) \mid \mfs$ \item[(B)] $\omega(\mfs) = k$ \item[(C)] $P^{+}(\Vert \mfs \Vert) \leq x^{1/100\gamma\log_2x}$ \item[(D)] Each prime ideal $\mfp \mid \mfs$ (with the exception of $(1 + i)$) lies above a rational prime $p \equiv 1 \pmod 4$ \item[(E)] Distinct $\mfp$ dividing $\mfs$ lie above distinct $p$ \item[(F)] $\mfs$ squarefree \end{itemize} Here $P^{+}(n)$ denotes the largest prime factor of $n$. Note that we have $\omega(\mfs) = \omega(\Vert \mfs \Vert)$. First, we will estimate from below the size of the set $\cM_\mfs$, defined to be the set of those $\pi \in \bZ[i]$ with $\Vert \pi \Vert \leq x$ satisfying the following properties: \begin{enumerate} \item $\pi$ prime (in $\bZ[i]$) \item $\Vert \pi \Vert$ prime (in $\bZ$) \item $\pi \equiv 1 \pmod \mfs$ \item $P^-\Big(\frac{\Vert\pi - 1\Vert}{\Vert\mfs\Vert}\Big) > x^{1/100\gamma\log_2 x}$. \end{enumerate} Here $P^{-}(n)$ denotes the smallest prime factor of $n$. The conditions on the size of the prime factors of $\Vert \mfs \Vert$ and $\Vert \pi - 1 \Vert / \Vert \mfs \Vert$ imply that each $\pi$ with $\Vert \pi \Vert \leq x$ belongs to at most one of the sets $\cM_\mfs$. If $k$ is chosen to be greater than $\gamma \log_2 x$, then carefully summing over $\mfs$ satisfying the conditions above yields a lower bound on the count of distinct $\pi$ corresponding to $p$ with the property that $\omega(\efp) \geq k > \gamma\log_2 x$. The problem of counting elements $\pi$ and $\ol{\pi}$ with $p = \pi\ol{\pi}$ is remedied by inserting a factor of $\frac{1}{2}$, which is of no concern for us. More care is required in the case $0 < \gamma < 1$, which is handled in Section \ref{smallgamma}. \subsection{Preparing for the proof of Theorem \ref{lowerbound}} Suppose the fixed ideal $\mfs$ is generated by $\sigma \in \bZ[i]$. 
We will estimate from below the size of $\cM_\mfs$ using Theorem 2.1. Define $\cA$ to be the sequence of elements of $\bZ[i]$ of the form \[ \Big\{\frac{\pi - 1}{\sigma} : \Vert \pi \Vert \leq x, \pi \text{ prime, and } \pi \equiv 1 \pmod{\sigma}\Big\}. \] Let $\cP$ denote the set of prime ideals $\{ \mfp : \Vert \mfp \Vert \leq z\}$, where $z := x^{1/50\gamma\log_2 x}$. Let $\mfP := \prod_{\mfp \in \cP} \mfp$. If $\frac{\pi-1}{\sigma} \equiv 0 \pmod \mfp$ implies $\Vert \mfp \Vert \geq z$, then all primes $p \mid \Vert \frac{\pi - 1}{\sigma} \Vert$ have $p > x^{1/100\gamma\log_2x}$. Note also that if a prime $\pi \in \bZ[i]$, $\Vert \pi \Vert \leq x$ is such that $\Vert \pi \Vert$ is not prime, then $\Vert \pi \Vert = p^2$ for some rational prime $p$, and so the count of such $\pi$ is clearly $O(\sqrt{x})$. Therefore, we have \[ \#\cM_\mfs \geq S(\cA, \cP) + O(\sqrt{x}). \] \begin{lemma}\label{msigmasieve} With $\cM_\mfs$ defined as above, we have \[ \#\cM_\mfs \geq c \cdot \frac{\Li(x)\log_2 x}{\Phi(\mfs)\log x} + O\bigg(\sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \leq m}} |r(\mfu\mfs)|\bigg) + O\bigg(\frac{1}{\Phi(\mfs)}\frac{\Li(x)}{(\log x)^{22}}\bigg) + O(\sqrt{x}), \] where $r(\mfv) = |\frac{\Li(x)}{\Phi(\mfv)} - \pi(x; \mfv, 1)|$ and $c > 0$ is a constant. \end{lemma} \begin{proof} First, note that we expect the size of $\cA$ to be approximately $X := 4\frac{\Li(x)}{\Phi(\mfs)}$. Write $A_\mfu = \#\{a \in \cA : \mfu \mid a\}$. Then \[ A_\mfu = X\delta(\mfu) + r(\mfu\mfs), \] where $\delta(\mfu) = \frac{\Phi(\mfs)}{\Phi(\mfu\mfs)}$ and $r(\mfu\mfs) = |4\frac{\Li(x)}{\Phi(\mfu\mfs)} - \pi(x; \mfu\mfs, 1)|$. 
By Theorem \ref{brun}, for any even integer $m \geq 0$ we have \begin{align*} S(\cA, \cP) = 4\frac{\Li(x)}{\Phi(\mfs)} \prod_{\Vert \mfp \Vert \leq z}\bigg(1 - &\frac{\Phi(\mfs)}{\Phi(\mfp\mfs)}\bigg) + O\bigg(\sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \leq m}} |r(\mfu\mfs)|\bigg) \\ &+ O\bigg(\frac{\Li(x)}{\Phi(\mfs)}\sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \geq m}} \delta(\mfu)\bigg). \end{align*} Using Proposition \ref{mertens}, we have \begin{align*} \prod_{\Vert \mfp \Vert \leq z}\bigg(1 - \frac{\Phi(\mfs)}{\Phi(\mfp\mfs)}\bigg) &= \prod_{\substack{\Vert \mfp \Vert \leq z \\ \mfp \nmid \mfs}} \bigg(1 - \frac{1}{\Phi(\mfp)}\bigg) \prod_{\substack{\Vert \mfp \Vert \leq z \\ \mfp \mid \mfs}} \bigg(1 - \frac{1}{\Vert \mfp \Vert}\bigg) \\ &= \prod_{\Vert \mfp \Vert \leq z}\bigg(1 - \frac{1}{\Vert \mfp \Vert}\bigg)\prod_{\substack{\Vert \mfp \Vert \leq z \\ \mfp \nmid \mfs}}\bigg(1 - \frac{1}{(\Vert \mfp \Vert - 1)^2}\bigg) \\ &\gg \frac{1}{\log z} \gg \frac{\log_2 x}{\log x}. \end{align*} Take $m = 14\lfloor \log_2 x \rfloor$. We leave aside the first $O$-term and concentrate for now on the second. This term is handled in essentially the same way as in the proof of the upper bound: The sum in this term is bounded from above by \[ \sum_{s \geq m} \frac{1}{s!}\Big(\sum_{\substack{\Vert \mfp \Vert \leq z}} \delta(\mfp)\Big)^s. \] By Proposition \ref{mertens}, we have \[ \sum_{\substack{\Vert \mfp \Vert \leq z}} \delta(\mfp) \leq \log_2 x + O(1). \] Now, one sees once again by the ratio test that the sum on $s$ is \[ \ll \frac{1}{m!}\Big(\sum_{\substack{\Vert \mfp \Vert \leq z}} \delta(\mfp)\Big)^m \leq \frac{1}{m!}(\log_2 x + O(1))^m. \] Thus, by the same calculations as in the proof of Theorem \ref{upperbound}, the second $O$-term is \[ \ll \frac{\Li(x)}{\Phi(\mfs)(\log x)^{22}}, \] completing the proof of the lemma. \end{proof} We now sum this estimate over $\sigma$ in an appropriate range to deal with the $O$-terms and establish a lower bound.
Here, the cases $\gamma > 1$ and $0 < \gamma < 1$ diverge. \subsection{The case $\gamma > 1$.} The argument in this case is somewhat simpler. Recall that $\mfs$ is chosen to satisfy properties A through F listed below Theorem \ref{lowerbound}; in particular, $\omega(\mfs) = k$ for some integer $k$ and $P^{+}(\Vert \mfs \Vert) \leq x^{1/100\gamma\log_2x}$. Choose $k := \lfloor \gamma\log_2x \rfloor + 2$. Since $\omega(\Vert \mfs \Vert) = \omega(\mfs)$, we have that $\Vert \mfs \Vert \leq x^{k/100\gamma\log_2 x} \leq x^{1/10}$. A lower bound follows by estimating the quantity \[ \cM = \sideset{}{'}\sum_{\mfs} \#\cM_\mfs, \] where the prime indicates a restriction to those ideals $\mfs \subset \bZ[i]$ satisfying properties A through F mentioned above. \begin{lemma}\label{lowerboundonm} We have \[ \cM \gg \frac{x\log_2 x(\log_2 x + O(\log_3 x))^k}{k!(\log x)^2}. \] \end{lemma} \begin{proof} Since $\sum_{\Vert \mfs \Vert \leq x} 1/\Phi(\mfs) \ll \log x$, the second $O$-term in Lemma \ref{msigmasieve} is, upon summing on $\mfs$, bounded by a constant times $\Li(x)/(\log x)^{21}$. The third error term, $O(\sqrt{x})$, is therefore safely absorbed by this term. We now handle the sum over $\mfs$ of the first $O$-term. We have $|r(\mfu\mfs)| = |\pi(x; \mfu\mfs, 1) - 4\frac{\Li(x)}{\Phi(\mfu\mfs)}|$. We can think of the double sum (over $\mfs$ and $\mfu$) as a single sum over a modulus $\mfq$, inserting a factor of $\tau(\mfq)$ to account for the number of ways of writing $\mfq$ as a product of two ideals in $\bZ[i]$. (Here, $\tau(\mfq)$ is the number of ideals in $\bZ[i]$ which divide $\mfq$.) Recalling our choice of $m = 14\lfloor \log_2 x \rfloor$, we have \begin{align*} \sum_{\Vert \mfs \Vert \leq x^{1/10}} \sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \leq m}} |r(\mfu\mfs)| &\ll \sum_{\Vert \mfq \Vert < x^{2/5}} \Big\vert\pi(x; \mfq, 1) - \frac{\Li(x)}{\Phi(\mfq)}\Big\vert \cdot \tau(\mfq). 
\end{align*} The restriction $\Vert \mfq \Vert \leq x^{2/5}$ comes from $\Vert \mfs \Vert \leq x^{1/10}$ and $\Vert \mfu \Vert \leq x^{m/50\gamma\log_2x} \leq x^{.28}$, recalling $m = 14\lfloor \log_2x \rfloor$ and $\gamma > 1$. Now, for all $y > 0$ and nonzero $\mfi \subset \bZ[i]$ we have $\pi(y; \mfi, 1) \ll y/\Vert \mfi \Vert$; indeed, the same inequality is true with $\pi(y; \mfi, 1)$ replaced by the count of all proper ideals $\equiv 1 \pmod \mfi$. Thus \[ \Big|\pi(x; \mfq, 1) - 4\frac{\Li(x)}{\Phi(\mfq)}\Big| \ll \frac{x}{\Phi(\mfq)}. \] Using this together with the Cauchy-Schwarz inequality and Proposition \ref{bv}, we see that, for any $A > 0$, \begin{align*} \sum_{\Vert \mfq \Vert < x^{2/5}} \vert\pi(x; \mfq, 1) - 4\frac{\Li(x)}{\Phi(\mfq)}\vert\tau(\mfq) &\ll \sum_{\Vert \mfq \Vert < x^{2/5}} \vert\pi(x; \mfq, 1) - 4\frac{\Li(x)}{\Phi(\mfq)}\vert^{1/2} \Big(\frac{x}{\Phi(\mfq)}\Big)^{1/2} \tau(\mfq) \\ &\ll \Big(x\sum_{\Vert \mfq \Vert < x^{2/5}} \frac{\tau(\mfq)^2}{\Phi(\mfq)}\Big)^{1/2}\Big(\frac{x}{(\log x)^A}\Big)^{1/2}. \end{align*} We can estimate this sum using an Euler product: \begin{align*} \sum_{\Vert \mfq \Vert < x^{2/5}} \frac{\tau(\mfq)^2}{\Phi(\mfq)} &\ll \prod_{\Vert \mfp \Vert \leq x^{2/5}} \Big(1 + \frac{4}{\Vert \mfp \Vert}\Big) \\ &\leq \exp\Big\{ \sum_{\Vert \mfp \Vert \leq x^{2/5}} \frac{4}{\Vert \mfp \Vert} \Big\} \ll (\log x)^4. \end{align*} Collecting our estimates, we see that the total error is at most $x/(\log x)^{A/2 - 2}$, which is acceptable if $A$ is chosen large enough. For the main term, we need a lower bound for the sum \begin{align}\label{phirecip} \cM = \sideset{}{'}\sum_{\mfs} \frac{1}{\Phi(\mfs)}. \end{align} Let $I = (e^{(\log_2 x)^2/k}, x^{1/10k})$. Define a collection of prime ideals $\cP$ such that each $\mfp \in \cP$ lies above a prime $p \equiv 1 \pmod 4$, each prime $p \equiv 1 \pmod 4$ has exactly one prime ideal lying above it in $\cP$, and $\Vert \mfp \Vert \in I$. 
We apply Lemma \ref{elementary}, with the $y_i$ chosen to be of the form $1/\Phi(\mfp)$ with $\mfp \in \cP$, obtaining \begin{align}\label{sum1} \frac{1}{\Phi((1 + i)^3)} &\sideset{}{'}\sum_{\mfs : \mfp \mid (\mfs/(1 + i)^3) \implies \mfp \in \cP} \frac{1}{\Phi(\mfs/(1 + i)^3)} \\ &\gg \frac{1}{(k-1)!} \Bigg(\sum_{\mfp \in \cP} \frac{1}{\Phi(\mfp)}\Bigg)^{k-1}\Bigg(1 - \binom{k-1}{2} \Big(\frac{1}{S_1^2}\Big)\sum_{\mfp \in \cP} \frac{1}{\Phi(\mfp)^2}\Bigg), \nonumber \end{align} where \[ S_1 = \sum_{\mfp \in \cP} \frac{1}{\Phi(\mfp)}. \] By Theorem \ref{mertens}, $S_1 = \tfrac{1}{2}\log_2 x - 2\log_3 x + O(1)$. This introduces a factor of $\frac{1}{2^{k-1}}$ to the right-hand side of (\ref{sum1}), but this is of no concern: If each of the $k$ prime factors of $\mfs$, excluding $(1 + i)$, lies above a distinct prime $p \equiv 1 \pmod 4$, then there are $2^{k-1}$ such ideals $\mfs$ of a given norm. Thus, if we extend the sum on the left-hand side of (\ref{sum1}) to range over all $\mfs$ counted in primed sums (cf. the discussion above Lemma \ref{lowerboundonm}), we obtain \begin{align*} \sideset{}{'}\sum_{\mfs} \frac{1}{\Phi(\mfs)} \geq \frac{2^{k-1}}{(k-1)!} \Bigg(\frac{1}{2}\log_2x &- 2\log_3 x + O(1)\Bigg)^{k-1} \\ &\times \Bigg(1 - \binom{k-1}{2} \Big(\frac{1}{S_1^2}\Big)\sum_{\mfp \in \cP} \frac{1}{\Phi(\mfp)^2}\Bigg). \end{align*} The quantity $\binom{k-1}{2}$ is bounded from above by $\lceil \gamma\log_2 x \rceil^2$, and the sum on $1/\Phi(\mfp)^2$ tends to 0 as $x \to \infty$. Therefore, \begin{align*} 1 - \binom{k-1}{2} \Big(\frac{1}{S_1^2}\Big)\sum_{\mfp \in \cP} \frac{1}{\Phi(\mfp)^2} \geq 1 - 4\gamma^2\sum_{\mfp \in \cP} \frac{1}{\Phi(\mfp)^2} \geq \frac{1}{2} \end{align*} for large enough $x$, and so \[ \frac{x\log_2 x}{(\log x)^2}\sideset{}{'}\sum_{\mfs} \frac{1}{\Phi(\mfs)} \gg \frac{x\log_2 x(\log_2 x + O(\log_3 x))^{k-1}}{(k-1)!(\log x)^2}, \] as desired. 
\end{proof} With $k = \lfloor \gamma\log_2x \rfloor + 2$ and by the more precise version of Stirling's formula $n! \sim \sqrt{2\pi n}(n/e)^n$, we have \begin{align*} \frac{(\log_2 x + O(\log_3 x))^{k-1}}{(k-1)!} &\gg \frac{1}{\sqrt{\log_2 x}} \bigg(\frac{e\log_2 x + O(\log_3 x)}{\lfloor\gamma\log_2 x\rfloor}\bigg)^{\lceil\gamma\log_2 x\rceil} \\ &= \frac{1}{\sqrt{\log_2 x}}\bigg(\frac{e}{\gamma}\Big(1 + O\Big(\frac{\log_3x}{\log_2 x}\Big)\Big)\bigg)^{\lceil\gamma\log_2 x\rceil} \\ &= (\log x)^{\gamma - \gamma\log\gamma + o(1)}. \end{align*} This yields a main term of the shape \[ \frac{x}{(\log x)^{2 + \gamma\log\gamma - \gamma + o(1)}}, \] which completes the proof of Theorem \ref{lowerbound} in the case $\gamma > 1$. \subsection{The case $\mathbf{0 < \gamma < 1}$.}\label{smallgamma} Above, we used the fact that if $\pi - 1$ is divisible by certain $\mfs \subset \bZ[i]$ with $\omega(\Vert \mfs \Vert) = k$, then $\Vert \pi - 1 \Vert$ will have at least $k > \gamma\log_2 x$ prime factors. The case $0 < \gamma < 1$ requires more care: We need to ensure that the quantity $\Vert \pi - 1 \Vert / \Vert \mfs \Vert$ does not have too many prime factors. \begin{lemma}\label{smallgammasieve} For any $\mfs \subset \bZ[i]$ satisfying properties A through F listed below Theorem \ref{lowerbound}, we have \[ \#\{\pi \in \cM_\mfs : \omega\bigg(\frac{\Vert\pi - 1\Vert}{\Vert\mfs\Vert}\bigg) > \frac{\log_2x}{\log_4x}\} \ll \frac{x}{\Vert\mfs\Vert(\log x)^A}. \] \end{lemma} Upon discarding those $\pi$ counted by the above lemma, the remaining $\pi$ will have the property that $\omega(\Vert\pi - 1 \Vert) \in [k, k + \log_2x/\log_4x]$. Choosing $k$ to be the greatest integer strictly less than $\gamma\log_2 x - \log_2x/\log_4x$ ensures that $\omega(\Vert \pi - 1 \Vert) < \gamma\log_2 x$. \begin{proof}[Proof of Lemma \ref{smallgammasieve}.]
We begin with the observation that, for any $\mfs \subset \bZ[i]$ under consideration and $\pi \in \cM_\mfs$, we have $\Vert \pi - 1 \Vert/\Vert \mfs \Vert \leq 2x/\Vert \mfs \Vert$. Therefore, we estimate \[ \sum_{\substack{\Vert \mfa \Vert \leq \frac{2x}{\Vert \mfs \Vert} \\ \omega(\Vert \mfa \Vert) > \log_2x/\log_4x \\ P^{-}(\Vert \mfa \Vert) > x^{1/100\gamma\log_2 x}}} 1 \leq \frac{2x}{\Vert\mfs\Vert} \sum_{\substack{\Vert \mfa \Vert \leq \frac{2x}{\Vert \mfs \Vert} \\ \omega(\Vert \mfa \Vert) > \log_2x/\log_4x \\ P^{-}(\Vert \mfa \Vert) > x^{1/100\gamma\log_2 x}}} \frac{1}{\Vert\mfa\Vert}. \] Noting that $\omega(\Vert \mfa \Vert) \leq \omega(\mfa)$ for any $\mfa \subset \bZ[i]$, by Theorem \ref{mertens} and Stirling's formula, we have \begin{align*} \sum_{\substack{\Vert \mfa \Vert \leq \frac{2x}{\Vert \mfs \Vert} \\ \omega(\Vert \mfa \Vert) > \log_2x/\log_4x \\ P^{-}(\Vert \mfa \Vert) > x^{1/100\log_2 x}}} \frac{1}{\Vert\mfa\Vert} &\leq \sum_{\substack{\Vert \mfa \Vert \leq \frac{2x}{\Vert \mfs \Vert} \\ \omega(\mfa) > \log_2x/\log_4x \\ P^{-}(\Vert \mfa \Vert) > x^{1/100\log_2 x}}} \frac{1}{\Vert\mfa\Vert} \\ &\leq \sum_{\ell > \log_2x/\log_4x} \frac{1}{\ell!} \Big (\sum_{x^{1/100\log_2 x} \leq \Vert \mfp \Vert \leq \frac{2x}{\Vert\mfs\Vert}} \sum_{m = 1}^\infty \frac{1}{\Vert \mfp \Vert^m} \Big)^\ell \\ &\ll \sum_{\ell > \log_2x/\log_4x} \Big( \frac{e\log_3x + O(1)}{\ell} \Big)^\ell. \end{align*} For each $\ell > \log_2x/\log_4x$, we have $(e\log_3x + O(1))/\ell < 1/2$. Thus \begin{align*} \sum_{\ell > \log_2x/\log_4x} \Big( \frac{e\log_3x + O(1)}{\ell} \Big)^\ell &\ll \Big( \frac{e\log_3x + O(1)}{\lfloor \log_2x/\log_4x \rfloor + 1} \Big)^{\lfloor \log_2x/\log_4x \rfloor + 1} \\ &\ll \Big(\frac{1}{(\log_2x)^{1 + o(1)}}\Big)^{\log_2x/\log_4x} \ll e^{-2\log_2x\log_3x/\log_4x}. \end{align*} This last expression is smaller than $(\log x)^{-A}$, for any $A > 0$. 
Therefore, for any fixed $A > 0$, \[ \#\{\pi \in \cM_\mfs : \omega\bigg( \frac{\Vert \pi - 1 \Vert}{\Vert \mfs \Vert}\bigg) > \frac{\log_2x}{\log_4x}\} \ll \frac{x}{\Vert\mfs\Vert(\log x)^A}. \qedhere \] \end{proof} Write \[ \cM_\mfs' = \{\pi \in \cM_\mfs : \omega\bigg(\frac{\Vert\pi - 1\Vert}{\Vert\mfs\Vert}\bigg) \leq \frac{\log_2x}{\log_4x}\}. \] Lemmas \ref{msigmasieve} and \ref{smallgammasieve} show that $\#\cM_\mfs'$ satisfies \begin{align*} \#\cM_\mfs' \geq c \cdot \frac{ x\log_2 x}{\Phi(\mfs)(\log x)^2} &+ O\bigg(\sum_{\substack{\mfu \mid \mfP \\ \omega(\mfu) \leq m}} |r(\mfu\mfs)|\bigg) \\ &+ O\bigg(\frac{1}{\Phi(\mfs)}\frac{\Li(x)}{(\log x)^{22}}\bigg) + O\bigg(\frac{x}{\Vert\mfs\Vert(\log x)^A}\bigg) + O(\sqrt{x}), \end{align*} for any $A > 0$. Here, all quantities are defined as in the previous section. Just as before, we sum this quantity over $\mfs \subset \bZ[i]$ satisfying conditions A through F listed below Theorem \ref{lowerbound}. Letting $'$ on a sum indicate a restriction to such $\mfs$, we have, by the same calculations as before, \[ \cM' \gg \frac{x\log_2 x(\log_2 x + O(\log_3 x))^{k-1}}{(k-1)!(\log x)^2}, \] where \[ \cM' = \sideset{}{'}\sum_{\mfs} \#\cM'_\mfs. \] Recall that $k$ is chosen to be the largest integer strictly less than $\gamma\log_2x - \log_2x/\log_4x$; then by Stirling's formula, \begin{align*} \frac{(\log_2 x + O(\log_3 x))^{k-1}}{(k-1)!} &\gg \frac{1}{\sqrt{\log_2x}}\Big(\frac{e\log_2x + O(\log_3x)}{k - 1}\Big)^{k-1} \\ &\gg \frac{1}{\sqrt{\log_2x}}\bigg(\frac{e}{\gamma}\Big(1 + O\big(\frac{1}{\log_4x}\big)\Big)\bigg)^{\gamma\log_2 x - \log_2x/\log_4x - 1} \\ &\gg (\log x)^{\gamma - \gamma\log\gamma + o(1)}. \end{align*} A final assembly of estimates yields Theorem \ref{lowerbound} in the case $0 < \gamma < 1$. \bibliographystyle{amsalpha} \bibliography{refs} \end{document}
{"config": "arxiv", "file": "1511.02388/omegaefp.tex"}
TITLE: Why is flipping a head then a tail a different outcome than flipping a tail then a head? QUESTION [4 upvotes]: In either case, one coin flip resulted in a head and the other resulted in a tail. Why is {H,T} a different outcome than {T,H}? Is this simply how we've defined an "outcome" in probability? My main problem with {H,T} being a different outcome than {T,H} is that we apply binomial coefficients (i.e. we count subsets of sets) in some common probability problems. But if we take {H,T} and {T,H} to be different outcomes, then our "sets" are ordered, but sets are by definition unordered... I feel as though the fact that I'm confused about something so basic means that I am missing something fundamental. Any help or insight whatsoever is greatly appreciated! REPLY [8 votes]: An experiment has "outcomes." These are the observable results of having performed our experiment. For example, when we reach our hand into a bucket and pull out a ball, the outcome might be the color of the ball, or the number on the ball, etc... A sample space of an experiment is the collection of all possible outcomes of that experiment. An event is a subset of the sample space. Depending on the experiment, we may have many different possible sample spaces to choose from. Some choices will be more beneficial than others. For example, if our experiment were the "result" of a game of Monopoly, we could have each outcome describe every moment in time throughout the game... who purchased a property here, who had to pay taxes there, etc... Alternatively, our outcomes could be much briefer, taking into account only the winner of the game. Whether the events you are interested in describing are in fact subsets of your sample space is one of the deciding factors in whether a sample space is "valid" to use.
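The ordered-versus-unordered distinction raised in the question can be made concrete with a few lines of Python (a small sketch; the variable names are purely illustrative):

```python
from itertools import product
from collections import Counter

# Ordered sample space: record each flip in sequence.
ordered = list(product("HT", repeat=2))  # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

# Unordered description: only remember how many heads appeared.
heads_count = Counter(flips.count("H") for flips in ordered)
print(dict(heads_count))  # {2: 1, 1: 2, 0: 1}

# "Exactly one head" covers two of the four equally likely ordered outcomes,
# so its probability is 2/4, not 1/3.
p_one_head = heads_count[1] / len(ordered)
print(p_one_head)  # 0.5
```

The three unordered outcomes (two heads, one head, no heads) are not equally likely, which is exactly why the binomial-coefficient counting is carried out in the ordered space.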
That all being said, one of the reasons why you might choose a particular sample space over another is that you want to use counting techniques to calculate probabilities. We have the nice property that if $A$ is an event and $S$ is our finite sample space, and $S$ is "equiprobable", in other words every outcome is equally likely to occur, then we have $$Pr(A)=\dfrac{|A|}{|S|}$$ This is a very powerful tool for calculating probabilities and for this reason we often prefer to use sample spaces where each outcome happens with equal chance. For this reason, when talking about the experiment of flipping two coins, we often prefer to use the interpretation that the coins are flipped in sequence with order mattering, giving us the sample space $\{HH,~HT,~TH,~TT\}$. Of course, we may choose to use the sample space where we do not keep track of the order in which these events occurred but just keep track of the total number of times we saw heads versus tails. Just keep in mind that if we do this we will sacrifice the ability to use counting techniques to calculate probability. When outcomes in our sample space are not equally likely to occur, we may not calculate the probability using the above formula. As an example, when playing the lottery you either win or you lose, but certainly winning the lottery doesn't occur half of the time. As a final aside, it is worth mentioning that it is because we like to work with equiprobable sample spaces so much that we will often reword or reimagine problems in such a way as to let us work with an equiprobable sample space. For example, suppose we have a bucket with $10$ indistinguishable white balls and $2$ indistinguishable red balls, and we reach in and pick one ball at random and ask what the probability is that the ball was red; in truth we have only the two possible outcomes.
The ball was white, or the ball was red, and there is no additional information that we can get by looking at the ball. We can imagine what would happen if each ball were numbered as well with a unique number, suddenly making the balls all distinguishable. This would allow us to change the sample space into something much more favorable, letting the outcome be the number on the ball as well as the color rather than just the color, making each outcome suddenly become equally likely to occur. Now we can see that the probability is $\dfrac{2}{12}$ for the ball to be red.
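The numbered-ball device at the end of the answer can be checked mechanically; here is a minimal sketch (the labelling scheme is just one illustrative choice):

```python
from fractions import Fraction

# Give the 12 balls the labels 1..12, with balls 1 and 2 red and the rest white.
balls = {n: ("red" if n <= 2 else "white") for n in range(1, 13)}

# Every labelled outcome is now equally likely, so Pr(red) = |red| / |all|.
red_labels = [n for n, colour in balls.items() if colour == "red"]
p_red = Fraction(len(red_labels), len(balls))
print(p_red)  # 1/6, i.e. 2/12
```

Without the labels, the two outcomes "red" and "white" would not be equally likely, and the counting formula would wrongly give 1/2.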
TITLE: Relationship between variances in perfect correlation QUESTION [2 upvotes]: I have two random variables $X$ and $Y$ with mean and standard deviation $(\mu_1,\sigma_1)$ and $(\mu_2,\sigma_2)$ respectively. I know that for perfect correlation the relationship is given by a linear regression. I also know that positive perfect correlation establishes that $\mu_2$ will be a linear function of $\mu_1$. But is there a relationship between the variances? More specifically, given $\mu_1,\sigma_1$ and $\mu_2$, can I evaluate the value of $\sigma_2$? You can assume normal distributions for $X$ and $Y$ if that helps. REPLY [1 votes]: If $X$ and $Y$ are perfectly positively correlated random variables, then all the probability mass lies on a line of slope $\frac{\sigma_Y}{\sigma_X}$ passing through the point $(\mu_X, \mu_Y)$. So, if you know the values of $\mu_X, \sigma_X$ and $\mu_Y$, you know one point on the line but cannot determine its slope (and hence the value of $\sigma_Y$) from just this information.
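A quick numerical illustration of this answer (the particular numbers $a=3$, $\mu_X=5$, $\sigma_X=2$ are arbitrary): if $Y=aX+b$ with $a>0$, the correlation is $1$ and the ratio $\sigma_Y/\sigma_X$ recovers the slope $a$, so knowing $\mu_X,\sigma_X,\mu_Y$ alone cannot pin down $\sigma_Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=10_000)  # mu_X = 5, sigma_X = 2

a, b = 3.0, -1.0          # any a > 0 gives perfect positive correlation
y = a * x + b

corr = np.corrcoef(x, y)[0, 1]
slope = y.std() / x.std()  # sigma_Y / sigma_X

print(round(corr, 6), round(slope, 6))  # corr is 1.0 and the slope recovers a = 3.0
```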
TITLE: Calculate: $F(x)=\int_{0}^{+\infty}\frac{e^{i xt}}{t^{\alpha}}dt\quad \text{with}~x\in \mathbb{R}~\text{ and }~0<\alpha<1$ QUESTION [1 upvotes]: I would like to calculate this integral: $F(x)=\int_{0}^{+\infty}\frac{e^{i xt}}{t^{\alpha}}dt\quad \text{with}~x\in \mathbb{R}~\text{ and }~0<\alpha<1$ I calculated: $\displaystyle F(ix)=\int_{0}^{+\infty}\frac{e^{-xt}}{t^{\alpha}}dt=x^{\alpha-1}\Gamma(1-\alpha)$ but not $F(x)$. I would be interested in any replies or comments. REPLY [2 votes]: 1) For the case $x>0$, substitute $u=(xt)^{1-\alpha}$; we get: $F(x)=\displaystyle\int_0^{+\infty}\frac{e^{i xt}}{t^{\alpha}}dt=\frac1{x^{1-\alpha}(1-\alpha)}\int_0^{+\infty}e^{\displaystyle i u^\beta}du$ with $\beta=\dfrac1{1-\alpha}>1$. The last integral is a classical one (a generalisation of the Fresnel integral), equal to $\displaystyle\frac1{\beta}\Gamma(\frac1{\beta})e^{\displaystyle i\frac{\pi}{2\beta}}$. We finally get: $F(x)=\dfrac{\Gamma(1-\alpha) e^{\displaystyle i\frac{\pi}2(1-\alpha)}}{x^{1-\alpha}}$ 2) For $x<0$, taking the conjugate gives $F(x)=\dfrac{\Gamma(1-\alpha) e^{\displaystyle -i\frac{\pi}2(1-\alpha)}}{(-x)^{1-\alpha}}$ 3) For $x=0$ the integral diverges.
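The real-axis formula quoted in the question, $\int_0^{+\infty}e^{-xt}\,t^{-\alpha}\,dt=x^{\alpha-1}\Gamma(1-\alpha)$, can be verified numerically. A sketch for $x=1$, $\alpha=1/2$, where the substitution $t=u^2$ removes the singularity at $0$:

```python
import math

# Check integral_0^inf t^(-alpha) e^(-t) dt = Gamma(1 - alpha) for alpha = 1/2 (i.e. x = 1).
# Substituting t = u^2 turns it into 2 * integral_0^inf exp(-u^2) du, with no singularity.
alpha = 0.5
n, U = 200_000, 30.0
h = U / n
s = 0.5 + sum(math.exp(-(i * h) ** 2) for i in range(1, n))  # trapezoid rule; f(U) ~ 0
integral = 2.0 * h * s
print(integral, math.gamma(1 - alpha))  # both ~ 1.77245 = sqrt(pi)
```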
TITLE: Free Particle Path Integral Matsubara Frequency QUESTION [7 upvotes]: I am trying to calculate $$Z = \int\limits_{\phi(\beta) = \phi(0) =0} D \phi\ e^{-\frac{1}{2} \int_0^{\beta} d\tau \dot{\phi}^2}$$ Without transforming to the Matsubara frequency space, I can show that $Z = \sqrt{\frac{1}{2\pi \beta}}$. However, I have a problem in obtaining the same result in the Matsubara frequency space: \begin{equation} \phi (\tau) = \frac{1}{\sqrt{\beta}} \left( \sum_{n} \phi_n \ e^{i\omega_n\tau} \right), \end{equation} with $\sum_n \phi_n =0, \omega_n = \frac{2\pi n}{\beta}$. And \begin{equation} Z = \int \prod_n D\phi_n\ \delta\left(\sum_n \phi_n\right)\ e^{-\frac{1}{2} \sum_n \phi_n \phi_{-n} \omega_n^2 } \end{equation} which, I think, vanishes. I guess the problem lies in the measure. Any comments? Info: I reproduce Schulman's derivation in imaginary time here. \begin{eqnarray} Z &=& \int\limits_{\phi(0) =\phi(\beta) = 0} D\phi(\tau) e^{-\frac{1}{2}\int_0^{\beta}d\tau\dot{\phi}^2}\\ &=& \text{lim}_{N \rightarrow \infty} (\frac{1}{2\pi \epsilon})^{(N+1)/2} \int d\phi_1 \dots d\phi_N e^{-\frac{1}{2\epsilon} \sum_{i =0}^N (\phi_{i+1} -\phi_i)^2} \end{eqnarray} Then, we can use the identity \begin{equation} \int_{-\infty}^{\infty} du \sqrt{\frac{a}{\pi}} e^{-a(x-u)^2}\sqrt{\frac{b}{\pi}} e^{-b(u -y)^2} = \sqrt{\frac{ab}{\pi(a+b)}} e^{-\frac{ab}{a+b}(x-y)^2} \end{equation} to evaluate the integrals and obtain \begin{equation} Z = \sqrt{\frac{1}{2\pi \beta}}. 
\end{equation} REPLY [3 votes]: I) The Euclidean path integral with $\hbar=1$ reads $$ Z~=~\int_{DBC} \!{\cal D}x ~e^{-S},\tag{1}$$ with Dirichlet boundary conditions (DBC) $$ x(0)~=~0~=~x(T).\tag{2}$$ We expand the real periodic variable $x\in\mathbb{R}$ in Fourier series$^1$ $$ \begin{align} x(t) ~=~&\frac{a_0}{2}+\sum_{n\in\mathbb{N}} \left\{a_n \cos(\omega_n t) + b_n \sin(\omega_n t)\right\}\cr ~=~& \sum_{n\in\mathbb{Z}} c_n e^{i\omega_n t}, \cr \omega_n~:=~&\frac{2\pi n}{T},\qquad a_n,b_n~\in~ \mathbb{R},\cr c_n~\in~& \mathbb{C}, \qquad c_n^{\ast}~=~c_{-n}. \end{align} \tag{3} $$ The DBC (2) becomes $$ \sum_{n\in\mathbb{Z}} c_n~=~0\qquad\Leftrightarrow\qquad c_0 ~=~ -2{\rm Re}\sum_{n\in\mathbb{N}}c_n .\tag{4}$$ The action for a free non-relativistic point particle with mass $m=1$ reads: $$ S~=~\frac{1}{2} \int_0^T \!dt~ \dot{x}^2~=~T\sum_{n\in\mathbb{N}}\omega_n^2 |c_n|^2 . \tag{5} $$ II) We know that the proper normalization of the path integral (1) is $$ Z~=~\frac{1}{\sqrt{2 \pi T}}.\tag{6} $$ This can e.g. be deduced (without introducing fudge factors!) from the (semi)group property of Feynman path integrals, cf. this Phys.SE post and links therein. Up till now we have basically just restated what OP wrote in his question. III) Now we would like to repeat the same calculation using Fourier series, i.e. work with the Matsubara frequencies. In this answer, we will not explore the (semi)group property, but just do a quick and dirty calculation using various fudge factors, and see what we get. Since this is homework, the explanation will be somewhat brief. To make heuristic sense of the path integral (1), we will use the following zeta function regularization rules: $$ \prod_{n\in\mathbb{N}} a ~=~\frac{1}{\sqrt{a}} \quad\text{and}\quad \prod_{n\in\mathbb{N}} n ~=~\sqrt{2\pi}, \tag{7}$$ stemming from the zeta function values $$ \zeta(0)~=~-\frac{1}{2} \quad\text{and}\quad \zeta^{\prime}(0)~=~-\ln\sqrt{2\pi} , \tag{8} $$ respectively. 
Now let the path integral measure be $$ \begin{align} {\cal D}x~:=~&\delta\left(B\sum_{n\in\mathbb{Z}} c_n\right) A\mathrm{d} c_0 \prod_{n\in\mathbb{N}} A^2 \mathrm{d}^2c_n \cr ~\stackrel{(7)}{=}~&\frac{1}{B}\delta\left(\sum_{n\in\mathbb{Z}} c_n\right) \mathrm{d} c_0 \prod_{n\in\mathbb{N}} \mathrm{d}^2c_n , \end{align}\tag{9} $$ where $A$, $B$ are fudge factors. Interestingly, eq. (9) is independent of the $A$-fudge factor! After performing the delta function integration and the Gaussian integrals, we find $$ \begin{align} Z~=~&\frac{1}{B} \prod_{n\in\mathbb{N}} \frac{\pi}{T\omega_n^2}\cr ~=~&\frac{1}{B} \prod_{n\in\mathbb{N}} \frac{T}{4\pi n^2}\cr ~\stackrel{(7)}{=}~&\frac{1}{B\sqrt{\pi T}}. \end{align}\tag{10}$$ Apparently we should choose the fudge factor $B=\sqrt{2}$ in order to achieve the correct normalization (6). -- $^1$ Note that the sine (cosine) modes (3) correspond trivially (non-trivially) to the even (odd) modes in my Phys.SE answer here, respectively.
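The Gaussian convolution identity used above to collapse the time slices is easy to spot-check numerically; a minimal sketch with arbitrarily chosen values of $a$, $b$, $x$, $y$:

```python
import math

def gauss_product_integral(a, b, x, y, n=200_000, U=40.0):
    """Trapezoid rule for the integral over u of
    sqrt(a/pi) e^{-a(x-u)^2} * sqrt(b/pi) e^{-b(u-y)^2}."""
    h = 2 * U / n
    total = 0.0
    for i in range(n + 1):
        u = -U + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-a * (x - u) ** 2 - b * (u - y) ** 2)
    return math.sqrt(a * b) / math.pi * h * total

def closed_form(a, b, x, y):
    c = a * b / (a + b)
    return math.sqrt(c / math.pi) * math.exp(-c * (x - y) ** 2)

num = gauss_product_integral(1.3, 0.7, 0.4, -1.1)
exact = closed_form(1.3, 0.7, 0.4, -1.1)
print(num, exact)  # the two values agree
```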
TITLE: Let $X$ be a TVS and let $f$ be a linear functional on $X$. If $f$ is continuous, how to show $f$ is bounded in some neighborhood of $0$? QUESTION [1 upvotes]: Let $X$ be a topological vector space and let $f$ be a linear functional on $X$. If $f$ is continuous, how to show $f$ is bounded in some neighborhood of $0$? Thank you in advance!! REPLY [2 votes]: We can assume $f \neq 0$; otherwise any neighbourhood of $0$ will do. By continuity of $f$ (and since $f(0) = 0$, of course), the set $O = f^{-1}[(-1,1)]$ is an open neighbourhood of $0$, and clearly $f$ is bounded on $O$.
\begin{document} \maketitle \begin{abstract}{\footnotesize We develop a new randomized iterative algorithm---{\em stochastic dual ascent (SDA)}---for finding the projection of a given vector onto the solution space of a linear system. The method is dual in nature: with the dual being a non-strongly concave quadratic maximization problem without constraints. In each iteration of SDA, a dual variable is updated by a carefully chosen point in a subspace spanned by the columns of a random matrix drawn independently from a fixed distribution. The distribution plays the role of a parameter of the method. Our complexity results hold for a wide family of distributions of random matrices, which opens the possibility to fine-tune the stochasticity of the method to particular applications. We prove that primal iterates associated with the dual process converge to the projection exponentially fast in expectation, and give a formula and an insightful lower bound for the convergence rate. We also prove that the same rate applies to dual function values, primal function values and the duality gap. Unlike traditional iterative methods, SDA converges under no additional assumptions on the system (e.g., rank, diagonal dominance) beyond consistency. In fact, our lower bound improves as the rank of the system matrix drops. Many existing randomized methods for linear systems arise as special cases of SDA, including randomized Kaczmarz, randomized Newton, randomized coordinate descent, Gaussian descent, and their variants. In special cases where our method specializes to a known algorithm, we either recover the best known rates, or improve upon them. Finally, we show that the framework can be applied to the distributed average consensus problem to obtain an array of new algorithms. The randomized gossip algorithm arises as a special case. 
} \end{abstract} \section{Introduction} Probabilistic ideas and tools have recently begun to permeate into several fields where they had traditionally not played a major role, including fields such as numerical linear algebra and optimization. One of the key ways in which these ideas influence these fields is via the development and analysis of {\em randomized algorithms} for solving standard and new problems of these fields. Such methods are typically easier to analyze, and often lead to faster and/or more scalable and versatile methods in practice. \subsection{The problem} In this paper we consider a key problem in linear algebra, that of finding a solution of a system of linear equations \begin{equation}\label{eq:Axbx}Ax =b,\end{equation} where $A \in \R^{m \times n}$ and $b \in \R^m$. We shall assume throughout that the system is {\em consistent}, that is, that there exists $x^*$ for which $Ax^*=b$. While we assume the existence of a solution, we do not assume uniqueness. In situations with multiple solutions, one is often interested in finding a solution with specific properties. For instance, in compressed sensing and sparse optimization, one is interested in finding the least $\ell_1$-norm, or the least $\ell_0$-norm (sparsest) solution. In this work we shall focus on the canonical problem of finding the solution of \eqref{eq:Axbx} closest, with respect to a Euclidean distance, to a given vector $c\in \R^n$: \begin{eqnarray} \text{minimize} && P(x)\eqdef \tfrac{1}{2}\|x-c\|_B^2 \notag\\ \text{subject to} && Ax=b \label{eq:P}\\ && x\in \R^n.\notag \end{eqnarray} where $B$ is an $n\times n$ symmetric positive definite matrix and $\|x\|_B \eqdef \sqrt{x^\top B x}$. By $x^*$ we denote the (necessarily) unique solution of \eqref{eq:P}. 
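To make \eqref{eq:P} concrete: with $B=I$, the projection of $c$ onto the solution set of a consistent (possibly rank-deficient) system can be computed through the pseudoinverse, as derived later in Section~\ref{subsec:OptCond}. A toy numerical sketch, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 5, 3
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # rank-deficient system matrix
b = A @ rng.normal(size=n)                             # consistent: b lies in range(A)
c = rng.normal(size=n)

# With B = I, the solution of (P) is x* = c + A^T (A A^T)^+ (b - A c).
x_star = c + A.T @ np.linalg.pinv(A @ A.T) @ (b - A @ c)

print(np.linalg.norm(A @ x_star - b))  # x* is feasible: the residual is ~ 0
```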
Of key importance in this paper is the {\em dual problem}\footnote{Technically, this is both the Lagrangian and Fenchel dual of \eqref{eq:P}.} to \eqref{eq:P}, namely \begin{eqnarray}\label{eq:D} \text{maximize} && D(y)\eqdef (b-Ac)^\top y - \tfrac{1}{2}\|A^\top y\|_{B^{-1}}^2\\ \text{subject to}&& y \in \R^m. \notag \end{eqnarray} Due to the consistency assumption, strong duality holds and we have $P(x^*) = D(y^*)$, where $y^*$ is any dual optimal solution. \subsection{A new family of stochastic optimization algorithms} We propose to solve \eqref{eq:P} via a new method operating in the dual \eqref{eq:D}, which we call {\em stochastic dual ascent} (SDA). The iterates of SDA are of the form \begin{equation}\label{eq:methoddual0} y^{k+1} = y^k + S \lambda^k,\end{equation} where $S$ is a random matrix with $m$ rows drawn in each iteration independently from a pre-specified distribution ${\cal D}$, which should be seen as a parameter of the method. In fact, by varying ${\cal D}$, SDA should be seen as a family of algorithms indexed by $\cal D$, the choice of which leads to specific algorithms in this family. By performing steps of the form \eqref{eq:methoddual0}, we are moving in the range space of the random matrix $S$. A key feature of SDA enabling us to prove strong convergence results despite the fact that the dual objective is in general not strongly concave is the way in which the ``stepsize'' parameter $\lambda^k$ is chosen: we chose $\lambda^k$ to be the {\em least-norm} vector for which $D(y^k + S\lambda)$ is maximized in $\lambda$. 
Plugging this $\lambda^k$ into~\eqref{eq:methoddual0}, we obtain the SDA method: \begin{equation}\label{eq:SDA-compact0} \boxed{\quad y^{k+1} = y^k + S \left(S^\top A B^{-1} A^\top S\right)^\dagger S^\top \left(b - A \left( c+B^{-1}A^\top y^k \right) \right) \quad } \end{equation} The symbol $\dagger$ denotes the Moore-Penrose pseudoinverse\footnote{It is known that the vector $M^\dagger d$ is the least-norm solution of the least-squares problem $\min_{\lambda} \|M \lambda - d\|^2$. Hence, if the system $M\lambda =d$ has a solution, then $M^\dagger d = \arg \min_\lambda \{\| \lambda \| \;:\; M \lambda =d\}$.}. To the best of our knowledge, a randomized optimization algorithm with iterates of the {\em general} form \eqref{eq:methoddual0} was not considered nor analyzed before. In the special case when $S$ is chosen to be a random unit coordinate vector, SDA specializes to the {\em randomized coordinate descent method}, first analyzed by Leventhal and Lewis \cite{Leventhal:2008:RMLC}. In the special case when $S$ is chosen as a random column submatrix of the $m\times m$ identity matrix, SDA specializes to the {\em randomized Newton method} of Qu, Fercoq, Richt\'{a}rik and Tak\'{a}\v{c} \cite{SDNA}. With the dual iterates $\{y^k\}$ we associate a sequence of primal iterates $\{x^k\}$ as follows: \begin{equation} \label{eq:primaliterates0} x^k \eqdef c + B^{-1}A^\top y^k.\end{equation} In combination with \eqref{eq:SDA-compact0}, this yields the primal iterative process \begin{equation}\label{eq:SDA-primal0} \boxed{\quad x^{k+1} = x^k - B^{-1}A^\top S \left(S^\top A B^{-1} A^\top S\right)^\dagger S^\top \left(A x^k -b \right) \quad } \end{equation} Optimality conditions (see Section~\ref{subsec:OptCond}) imply that if $y^*$ is any dual optimal point, then $c+B^{-1}A^\top y^*$ is necessarily primal optimal and hence equal to $x^*$, the optimal solution of \eqref{eq:P}. 
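The primal process \eqref{eq:SDA-primal0} transcribes directly into code. The sketch below (illustrative only, with made-up data) uses $B=I$ and lets $S$ be a random unit coordinate vector, in which case the method is exactly the randomized Kaczmarz method mentioned above:

```python
import numpy as np

def sda_primal_step(x, A, b, S, Binv):
    """One step of (SDA-primal): x <- x - B^{-1} A^T S (S^T A B^{-1} A^T S)^+ S^T (A x - b)."""
    M = S.T @ A @ Binv @ A.T @ S
    return x - Binv @ A.T @ S @ np.linalg.pinv(M) @ S.T @ (A @ x - b)

# Toy consistent system (made-up numbers).
A = np.array([[1., 2.], [3., 4.], [5., 6.], [1., 0.], [0., 1.]])
b = A @ np.array([1., -1.])

rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(300):
    i = rng.integers(A.shape[0])
    S = np.eye(A.shape[0])[:, [i]]   # S = unit coordinate vector: randomized Kaczmarz
    x = sda_primal_step(x, A, b, S, np.eye(2))

print(np.linalg.norm(A @ x - b))     # residual decays geometrically toward 0
```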
Moreover, we have the following useful and insightful correspondence between the quality of the primal and dual iterates (see Proposition~\ref{lem:correspondence}): \begin{equation}\label{eq:iugs8gs}D(y^*) - D(y^k) = \tfrac{1}{2}\|x^k-x^*\|_B^2.\end{equation} Hence, {\em dual convergence in function values is equivalent to primal convergence in iterates.} Our work belongs to a growing literature on randomized methods for various problems appearing in linear algebra, optimization and computer science. In particular, relevant methods include sketching algorithms, randomized Kaczmarz, stochastic gradient descent and their variants \cite{SV:Kaczmarz2009,Needell2010,Drineas2011,hogwild,Zouzias2012, Needell2012,Needell2012a, Ramdas2014, SAG, pegasos2,SVRG, NSync, S2GD, proxSVRG, SAGA, mS2GD, IProx-SDCA,Needell2014, Dai2014, NeedellWard2015, Ma2015a, Gower2015b,Oswald2015,LiuWright-AccKacz-2016} and randomized coordinate and subspace type methods and their variants \cite{Leventhal:2008:RMLC,Lin:2008:DCDM,ShalevTewari09,Nesterov:2010RCDM,Wright:ABCRRO,shotgun,UCDC,nesterov2011random,PCDM, tao2012stochastic,Necoara:Parallel,ICD,Necoara:rcdm-coupled,Hydra,Hydra2,SDCA,Fercoq-paralleladaboost,APPROX, Lee2013,QUARTZ,SPCDM,ALPHA,ESO,SPDC,SDNA,NIPSdistributedSDCA,ASDCA, APCG,AdaSDCA,Gower2015b}. \subsection{The main results} We now describe two complexity theorems which form the core theoretical contribution of this work. The results hold for a wide family of distributions ${\cal D}$, which we describe next. \paragraph{Weak assumption on ${\cal D}$.} In our analysis, we only impose a very weak assumption on $\cal D$. 
In particular, we only assume that the $m\times m$ matrix \begin{equation} \label{eq:H} H \eqdef \mathbf{E}_{S\sim {\cal D}} \left[ S\left(S^\top AB^{-1}A^\top S\right)^{\dagger}S^\top\right]\end{equation} is well defined and nonsingular\footnote{It is known that the pseudoinverse of a symmetric positive semidefinite matrix is again symmetric and positive semidefinite. As a result, if the expectation defining $H$ is finite, $H$ is also symmetric and positive semidefinite. Hence, we could equivalently assume that $H$ be positive definite. }. Hence, we do not assume that $S$ be picked from any particular random matrix ensemble: the options are, quite literally, limitless. This makes it possible for practitioners to choose the best distribution specific to a particular application. We cast the first complexity result in terms of the primal iterates since solving \eqref{eq:P} is our main focus in this work. Let $\Range{M}, \Rank{M}$ and $\lambda_{\min}^+(M)$ denote the range space, rank and the smallest nonzero eigenvalue of $M$, respectively. \begin{theorem}[\bf Convergence of primal iterates and of the residual] \label{theo:Enormerror} Assume that the matrix $H$, defined in \eqref{eq:H}, is nonsingular. Fix arbitrary $x^0\in \R^n$. 
The primal iterates $\{x^k\}$ produced by \eqref{eq:SDA-primal0} converge exponentially fast in expectation to $x^* + t$, where $x^*$ is the optimal solution of the primal problem \eqref{eq:P}, and $t$ is the projection of $x^0-c$ onto $\Null{A}$: \begin{equation}\label{eq:def_of_t}t \eqdef \arg \min_{t'} \left\{ \|x^0-c - t'\|_B \;:\; t'\in \Null{A}\right\}.\end{equation} In particular, for all $k\geq 0$ we have \begin{eqnarray}\label{eq:Enormerror} \text{Primal iterates:} \quad &&\E{\norm{x^{k} - x^{*}- t}_{B}^2}\leq \rho^k \cdot \norm{x^{0} - x^{*} - t}_{B}^2,\\ \label{eq:s98h09hsxxx} \text{Residual:} \quad && \E{\|A x^k-b\|_B} \leq \rho^{k/2} \|A\|_B \|x^0-x^*-t\|_B + \|At\|_B,\end{eqnarray} where $\|A\|_B \eqdef \max \{\|Ax\|_B \;:\; \|x\|_B\leq 1\}$ and \begin{equation}\label{eq:rho} \rho \eqdef 1- \lambda_{\min}^+\left(B^{-1/2}A^\top H A B^{-1/2}\right). \end{equation} Furthermore, the convergence rate is bounded by \begin{equation} \label{eq:nubound} 1-\frac{\E{\Rank{S^\top A}}}{\Rank{A}}\leq \rho < 1. \end{equation} \end{theorem} If we let $S$ be a unit coordinate vector chosen at random, $B$ be the identity matrix and set $c=0$, then \eqref{eq:SDA-primal0} reduces to the {\em randomized Kaczmarz (RK)} method proposed and analyzed in a seminal work of Strohmer and Vershynin \cite{SV:Kaczmarz2009}. Theorem~\ref{theo:Enormerror} implies that RK converges with an exponential rate so long as the system matrix has no zero rows (see Section~\ref{sec:discrete}). To the best of our knowledge, such a result was not previously established: current convergence results for RK assume that the system matrix is full rank~\cite{Ma2015a, Ramdas2014}. Not only do we show that the RK method converges to the least-norm solution for any consistent system, but we do so through a single all encompassing theorem covering a wide family of algorithms. Likewise, convergence of block variants of RK has only been established for full column rank~\cite{Needell2012,Needell2014}. 
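This behaviour is easy to demonstrate numerically: started from $x^0=0$, randomized Kaczmarz on a consistent but rank-deficient system converges to the least-norm solution. A sketch with toy data:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])          # rank 2: third column = first + second
b = A @ np.array([1., 2., 0.])        # consistent by construction
x_ln = np.linalg.pinv(A) @ b          # the least-norm solution

rng = np.random.default_rng(0)
x = np.zeros(3)                       # x0 = 0 lies in range(A^T)
for _ in range(500):
    i = rng.integers(A.shape[0])
    a = A[i]
    x -= (a @ x - b[i]) / (a @ a) * a  # project onto the i-th equation's hyperplane

print(np.linalg.norm(x - x_ln))       # ~ 0: RK converges despite the rank deficiency
```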
Block versions of RK can be obtained from our generic method by choosing $B=I$ and $c=0$, as before, but letting $S$ to be a random column submatrix of the identity matrix (see \cite{Gower2015b}). Again, our general complexity bound holds under no assumptions on $A$, as long as one can find $S$ such that $H$ becomes nonsingular. The lower bound \eqref{eq:nubound} says that for a singular system matrix, the number of steps required by SDA to reach an expected accuracy is at best inversely proportional to the rank of $A$. If $A$ has row rank equal to one, for instance, then RK converges in one step (this is no surprise, given that RK projects onto the solution space of a single row, which in this case, is the solution space of the whole system). Our lower bound in this case becomes $0$, and hence is tight. While Theorem~\ref{theo:Enormerror} is cast in terms of the primal iterates, if we assume that $x^0 = c+ B^{-1}A^\top y^0$ for some $y^0\in \R^m$, then an equivalent dual characterization follows by combining \eqref{eq:primaliterates0} and \eqref{eq:iugs8gs}. In fact, in that case we can also establish the convergence of the primal function values and of the duality gap. {\em No such results were previously known. } \begin{theorem}[\bf Convergence of function values]\label{theo:2} Assume that the matrix $H$, defined in \eqref{eq:H}, is nonsingular. Fix arbitrary $y^0\in \R^m$ and let $\{y^k\}$ be the SDA iterates produced by \eqref{eq:SDA-compact0}. Further, let $\{x^k\}$ be the associated primal iterates, defined by \eqref{eq:primaliterates0}, $OPT \eqdef P(x^*)=D(y^*)$, \[U_0 \eqdef \tfrac{1}{2}\|x^0-x^*\|_B^2 \overset{\eqref{eq:iugs8gs}}{=} OPT -D(y^0),\] and let $\rho$ be as in Theorem~\ref{theo:Enormerror}. 
Then for all $k\geq 0$ we have the following complexity bounds: \begin{eqnarray} \text{Dual suboptimality:} \quad &&\E{OPT - D(y^k)}\leq \rho^k U_0\label{eq:DUALSUBOPT} \\ \text{Primal suboptimality:} \quad &&\E{P(x^k) - OPT}\leq \rho^k U_0 + 2 \rho^{k/2} \sqrt{OPT \times U_0} \label{eq:PRIMALSUBOPT} \\ \text{Duality gap:} \quad && \E{P(x^k) - D(y^k)}\leq 2\rho^k U_0 + 2 \rho^{k/2} \sqrt{OPT\times U_0} \label{eq:GAPSUBOPT} \end{eqnarray} \end{theorem} Note that the dual objective function is {\em not} strongly concave in general, and yet we prove linear convergence (see \eqref{eq:DUALSUBOPT}). It is known that for {\em some} structured optimization problems, linear convergence results can be obtained without the need to assume strong concavity (or strong convexity, for minimization problems). Typical approaches to such results would be via the employment of error bounds \cite{LuoTseng93-AOR, Tseng95-JCAM, HongLuo2013,Ma-Tapp-Takac-FeasibleDescent-2015,NecoaraClipiciSIOPT2016}. {\em In our analysis, no error bounds are necessary.} \subsection{Outline} The paper is structured as follows. Section~\ref{sec:SDA} describes the algorithm in detail, both in its dual and primal form, and establishes several useful identities. In Section~\ref{sec:discrete} we characterize discrete distributions for which our main assumption on $H$ is satisfied. We then specialize our method to several simple discrete distributions to better illustrate the results. We then show in Section~\ref{sec:gossip} how SDA can be applied to design new randomized gossip algorithms. We also show that our framework can recover some standard methods. Theorem~\ref{theo:Enormerror} is proved in Section~\ref{sec:proof} and Theorem~\ref{theo:2} is proved in Section~\ref{sec:proof2}. In Section~\ref{sec:experiments} we perform a simple experiment illustrating the convergence of the randomized Kaczmarz method on rank deficient linear systems. We conclude in Section~\ref{sec:conclusion}. 
To the appendix we relegate two elementary but useful technical results which are needed multiple times in the text. \section{Stochastic Dual Ascent} \label{sec:SDA} By {\em stochastic dual ascent} (SDA) we refer to a randomized optimization method for solving the dual problem \eqref{eq:D} performing iterations of the form \begin{equation}\label{eq:methoddual} y^{k+1} = y^k + S \lambda^k,\end{equation} where $S$ is a random matrix with $m$ rows drawn in each iteration independently from a prespecified distribution. We shall not fix the number of columns of $S$; in fact, we even allow for the number of columns to be random. By performing steps of the form \eqref{eq:methoddual}, we are moving in the range space of the random matrix $S$, with $\lambda^k$ describing the precise linear combination of the columns used in computing the step. In particular, we shall choose $\lambda^k$ from the set \[Q^k \eqdef \arg \max_{\lambda} D(y^k + S\lambda) \overset{\eqref{eq:D}}{=} \arg\max_{\lambda} \left\{ (b-Ac)^\top (y^k + S\lambda) - \tfrac{1}{2}\left\|A^\top (y^k + S \lambda)\right\|_{B^{-1}}^2\right\}.\] Since $D$ is bounded above (a consequence of weak duality), this set is nonempty. Since $D$ is a concave quadratic, $Q^k$ consists of all those vectors $\lambda $ for which the gradient of the mapping $\phi_k(\lambda): \lambda \mapsto D(y^k + S\lambda)$ vanishes. This leads to the observation that $Q^k$ is the set of solutions of a random linear system: \[Q^k = \left\{\lambda \in \R^m \;:\; \left(S^\top A B^{-1}A^\top S \right) \lambda = S^\top \left(b - Ac - A B^{-1}A^\top y^k \right) \right\}.\] If $S$ has a small number of columns, this is a small easy-to-solve system. A key feature of our method enabling us to prove exponential error decay despite the lack of strong concavity is the way in which we choose $\lambda^k$ from $Q^k$. 
In SDA, $\lambda^k$ is chosen to be the least-norm element of $Q^k$, \[\lambda^k \eqdef \arg\min_{\lambda \in Q^k} \|\lambda\|,\] where $\|\lambda\| = (\sum_i \lambda_i^2)^{1/2}$ denotes standard Euclidean norm. The least-norm solution of a linear system can be written down in a compact way using the (Moore-Penrose) pseudoinverse. In our case, we obtain the formula \begin{equation}\label{eq:lambda_closed_form} \lambda^k = \left(S^\top A B^{-1} A^\top S\right)^\dagger S^\top \left(b - Ac - A B^{-1}A^\top y^k \right), \end{equation} where $\dagger$ denotes the pseudoinverse operator. Note that if $S$ has only a few columns, then~\eqref{eq:lambda_closed_form} requires projecting the origin onto a small linear system. The SDA algorithm is obtained by combining \eqref{eq:methoddual} with \eqref{eq:lambda_closed_form}. \begin{algorithm}[!h] \begin{algorithmic}[1] \State \textbf{parameter:} ${\cal D}$ = distribution over random matrices \State Choose $y^0 \in \R^m$ \Comment Initialization \For {$k = 0, 1, 2, \dots$} \State Sample an independent copy $S\sim {\cal D}$ \State $\lambda^k = \left(S^\top A B^{-1} A^\top S\right)^\dagger S^\top \left(b - A c - AB^{-1}A^\top y^k \right)$ \State $y^{k+1} = y^k + S \lambda^k$ \Comment Update the dual variable \EndFor \end{algorithmic} \caption{Stochastic Dual Ascent (SDA)} \label{alg:SDA} \end{algorithm} The method has one parameter: the distribution $\cal D$ from which the random matrices $S$ are drawn. Sometimes, one is interested in finding any solution of the system $Ax=b$, rather than the particular solution described by the primal problem \eqref{eq:P}. In such situations, $B$ and $c$ could also be seen as parameters. 
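For illustration, Algorithm~\ref{alg:SDA} translates almost line for line into code. The sketch below (toy data, $B=I$, with ${\cal D}$ sampling unit coordinate vectors) runs the dual process and maps the final dual iterate to its primal image $x(y)=c+B^{-1}A^\top y$:

```python
import numpy as np

def sda_dual(A, b, c, Binv, num_iters, rng):
    """Algorithm 1 (SDA): y <- y + S lambda^k, with lambda^k from the pseudoinverse formula."""
    m = A.shape[0]
    y = np.zeros(m)
    for _ in range(num_iters):
        S = np.eye(m)[:, [rng.integers(m)]]        # sample S ~ D: a unit coordinate vector
        M = S.T @ A @ Binv @ A.T @ S
        lam = np.linalg.pinv(M) @ S.T @ (b - A @ (c + Binv @ A.T @ y))
        y = y + S @ lam
    return y

# Toy consistent, rank-deficient system (third row = first + second).
A = np.array([[1., 0., 1.], [0., 1., 1.], [1., 1., 2.]])
b = A @ np.array([2., 1., 0.])
c = np.array([0.5, -0.5, 0.0])

y = sda_dual(A, b, c, np.eye(3), 500, np.random.default_rng(0))
x = c + A.T @ y                    # the associated primal iterate x(y), here with B = I
print(np.linalg.norm(A @ x - b))   # ~ 0: x(y) approaches the projection of c onto Ax = b
```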
\subsection{Optimality conditions} \label{subsec:OptCond} For any $x$ for which $Ax=b$ and for any $y$ we have \[P(x) - D(y) \overset{\eqref{eq:P}+\eqref{eq:D}}{=} \tfrac{1}{2}\|x-c\|_B^2 + \tfrac{1}{2}\|A^\top y\|_{B^{-1}}^2 + (c-x)^\top A^\top y \geq 0,\] where the inequality (weak duality) follows from the Fenchel-Young inequality\footnote{Let $U$ be a vector space equipped with an inner product $\langle \cdot, \cdot \rangle : U\times U \to \R$. Given a function $f:U \to \R$, its convex (or Fenchel) conjugate $f^*:U\to \R\cup \{+\infty\}$ is defined by $f^*(v) = \sup_{u \in U} \langle u, v \rangle - f(u)$. A direct consequence of this is the Fenchel-Young inequality, which asserts that $f(u) + f^*(v)\geq \langle u, v\rangle$ for all $u$ and $v$. The inequality in the main text follows by choosing $f(u)=\tfrac{1}{2}\|u\|_B^2$ (and hence $f^*(v)=\tfrac{1}{2}\|v\|^2_{B^{-1}}$), $u=x-c$ and $v=A^\top y$. If $f$ is differentiable, then equality holds if and only if $v=\nabla f(u)$. In our case, this condition is $x=c+B^{-1}A^\top y$. This, together with primal feasibility, gives the optimality conditions \eqref{eq:opt_cond}. For more details on Fenchel duality, see \cite{bookBorweinLewis2006}.}. As a result, we obtain the following necessary and sufficient optimality conditions, characterizing primal and dual optimal points. 
\begin{proposition} [Optimality conditions] \label{eq:prop_opt_cond}Vectors $x\in \R^n$ and $y\in\R^m$ are optimal for the primal \eqref{eq:P} and dual \eqref{eq:D} problems respectively, if and only if they satisfy the following relation \begin{equation} \label{eq:opt_cond}Ax = b, \qquad x = c + B^{-1} A^\top y.\end{equation} \end{proposition} In view of this, it will be useful to define a linear mapping from $\R^m$ to $\R^n$ as follows: \begin{equation}\label{eq:98s98hs}x(y) = c + B^{-1}A^\top y.\end{equation} As an immediate corollary of Proposition~\ref{eq:prop_opt_cond} we observe that for any dual optimal $y^*$, the vector $x(y^*)$ must be primal optimal. Since the primal problem has a unique optimal solution, $x^*$, we must necessarily have \begin{equation}\label{eq:opt_primal}x^* =x(y^*) = c + B^{-1} A^\top y^*.\end{equation} Another immediate corollary of Proposition~\ref{eq:prop_opt_cond} is the following characterization of dual optimality: $y$ is dual optimal if and only if \begin{equation} \label{eq:98hs8h9sss}b - Ac = AB^{-1}A^\top y.\end{equation} Hence, the set of dual optimal solutions is ${\cal Y}^* = (AB^{-1}A^\top)^\dagger (b-Ac) + \Null{AB^{-1}A^\top}$. Since, $\Null{AB^{-1}A^\top} = \Null{A^\top}$ (see Lemma~\ref{lem:09709s}), we have \[{\cal Y}^* = \left(AB^{-1}A^\top\right)^\dagger (b-Ac) + \Null{A^\top}.\] Combining this with \eqref{eq:opt_primal}, we get \[x^* = c + B^{-1}A^\top \left(AB^{-1}A^\top \right)^\dagger(b-Ac).\] \begin{remark} [The dual is also a least-norm problem.] Observe that: \begin{enumerate} \item The particular dual optimal point $y^* = (AB^{-1}A^\top)^\dagger (b-Ac)$ is the solution of the following optimization problem: \begin{equation} \label{eq:iusiuh7ss}\min \left\{ \tfrac{1}{2}\|y\|^2 \;:\; A B^{-1} A^\top y = b-Ac\right\}.\end{equation} Hence, this particular formulation of the dual problem has the same form as the primal problem: projection onto a linear system. 
\item If $A^\top A$ is positive definite (which can only happen if $A$ is of full column rank, which means that $Ax=b$ has a unique solution and hence the primal objective function does not matter), and we choose $B=A^\top A$, then the dual constraint \eqref{eq:iusiuh7ss} becomes \[A (A^\top A)^{-1}A^\top y = b - Ac.\] This constraint has a geometric interpretation: we are seeking vector $y$ whose orthogonal projection onto the column space of $A$ is equal to $b-Ac$. Hence the reformulated dual problem \eqref{eq:iusiuh7ss} is asking us to find the vector $y$ with this property having the least norm. \end{enumerate} \end{remark} \subsection{Primal iterates associated with the dual iterates} With the sequence of dual iterates $\{y^k\}$ produced by SDA we can associate a sequence of primal iterates $\{x^k\}$ using the mapping \eqref{eq:98s98hs}: \begin{equation} \label{eq:primaliterates} x^k \eqdef x(y^k) = c + B^{-1}A^\top y^k.\end{equation} This leads to the following {\em primal version of the SDA method}. \begin{algorithm}[!h] \begin{algorithmic}[1] \State \textbf{parameter:} ${\cal D}$ = distribution over random matrices \State Choose $x^0 \in \R^n$ \Comment Initialization \For {$k = 0, 1, 2, \dots$} \State Sample an independent copy $S\sim {\cal D}$ \State $x^{k+1} = x^k - B^{-1}A^\top S \left(S^\top A B^{-1} A^\top S\right)^\dagger S^\top (A x^k -b )$ \Comment Update the primal variable \EndFor \end{algorithmic} \caption{Primal Version of Stochastic Dual Ascent (SDA-Primal)} \label{alg:SDA-Primal} \end{algorithm} \begin{remark} \label{lem:5shsuss} A couple of observations: \begin{enumerate} \item {\em Self-duality.} If $A$ is positive definite, $c=0$, and if we choose $B=A$, then in view of \eqref{eq:primaliterates} we have $x^k = y^k$ for all $k$, and hence Algorithms~\ref{alg:SDA} and \ref{alg:SDA-Primal} coincide. 
In this case, Algorithm~\ref{alg:SDA-Primal} can be described as {\em self-dual.} \item {\em Space of iterates.} A direct consequence of the correspondence between the dual and primal iterates \eqref{eq:primaliterates} is the following simple observation (a generalized version of this, which we prove later as Lemma~\ref{lem:error}, will be used in the proof of Theorem~\ref{theo:Enormerror}): Choose $y^0\in \R^m$ and let $x^0 = c + B^{-1}A^\top y^0$. Then the iterates $\{x^k\}$ of Algorithm~\ref{alg:SDA-Primal} are of the form $x^k = c + B^{-1} A^\top y^k$ for some $y^k\in \R^m$. \item {\em Starting point.} While we have defined the primal iterates of Algorithm~\ref{alg:SDA-Primal} via a linear transformation of the dual iterates---see \eqref{eq:primaliterates}---we {\em can}, in principle, choose $x^0$ arbitrarily, thus breaking the primal-dual connection which helped us to define the method. In particular, we can choose $x^0$ in such a way that there does not exist $y^0$ for which $x^0 = c + B^{-1}A^\top y^0$. As is clear from Theorem~\ref{theo:Enormerror}, in this case the iterates $\{x^k\}$ will not converge to $x^*$, but to $x^*+t$, where $t$ is the projection of $x^0-c$ onto the nullspace of $A$. \end{enumerate} \end{remark} It turns out that Algorithm~\ref{alg:SDA-Primal} is equivalent to the {\em sketch-and-project} method \eqref{eq:sketchandproject} of Gower and Richt\'{a}rik~\cite{Gower2015b}: \begin{equation}\label{eq:sketchandproject}x^{k+1} = \arg \min_{x} \left\{ \|x-x^k\|_B \;:\; S^\top Ax = S^\top b \right\},\end{equation} where $S$ is a random matrix drawn in an i.i.d.\ fashion from a fixed distribution, just as in this work. In this method, the ``complicated'' system $Ax=b$ is first replaced by its sketched version $S^\top Ax =S^\top b$, the solution space of which contains all solutions of the original system. If $S$ has a few columns only, this system will be small and easy to solve. 
Then, progress is made by projecting the last iterate onto the sketched system. We now briefly comment on the relationship between \cite{Gower2015b} and our work. \begin{itemize} \item \textbf{Dual nature of sketch-and-project.} It was shown in \cite{Gower2015b} that Algorithm~\ref{alg:SDA-Primal} is equivalent to the sketch-and-project method. In fact, the authors of \cite{Gower2015b} provide five additional equivalent formulations of sketch-and-project, with Algorithm~\ref{alg:SDA-Primal} being one of them. Here we show that their method can be seen as a primal process associated with SDA, which is a new method operating in the dual. By observing this, we uncover a hidden dual nature of the sketch-and-project method. For instance, this allows us to formulate and prove Theorem~\ref{theo:2}. No such results appear in \cite{Gower2015b}. \item \textbf{No assumptions on the system matrix.} In \cite{Gower2015b} the authors only studied the convergence of the primal iterates $\{x^k\}$, establishing a (much) weaker variant of Theorem~\ref{theo:Enormerror}. Indeed, convergence was only established in the case when $A$ has full column rank. In this work, we lift this assumption completely and hence establish complexity results in the general case. \item \textbf{Convergence to a shifted point.} As we show in Theorem~\ref{theo:Enormerror}, Algorithm~\ref{alg:SDA-Primal} converges to $x^*+t$, where $t$ is the projection of $x^0-c$ onto $\Null{A}$. Hence, in general, the method does not converge to the optimal solution $x^*$. This is not an issue if $A$ is of full column rank---an assumption used in the analysis in \cite{Gower2015b}---since then $\Null{A}$ is trivial and hence $t=0$. As long as $x^0-c$ lies in $\Range{B^{-1}A^\top}$, however, we have $x^k\to x^*$. This can be easily enforced (for instance, we can choose $x^0=c$). 
\end{itemize} \subsection{Relating the quality of the dual and primal iterates} The following simple but insightful result (mentioned in the introduction) relates the ``quality'' of a dual vector $y$ with that of its primal counterpart, $x(y)$. It says that the dual suboptimality of $y$ in terms of function values is equal to the primal suboptimality of $x(y)$ in terms of distance. \begin{proposition}\label{lem:correspondence} Let $y^*$ be any dual optimal point and $y\in \R^m$. Then \[D(y^*) - D(y) = \tfrac{1}{2}\|x(y^*) - x(y)\|_B^2.\] \end{proposition} \begin{proof} Straightforward calculation shows that \begin{eqnarray*} D(y^*) -D(y) &\overset{\eqref{eq:D}}{=}& (b-Ac)^\top (y^* - y) - \tfrac{1}{2}(y^*)^\top A B^{-1} A^\top y^* + \tfrac{1}{2}y^\top A B^{-1} A^\top y\\ &\overset{\eqref{eq:98hs8h9sss}}{=}&(y^*)^\top A B^{-1} A^\top (y^* - y) - \tfrac{1}{2}(y^*)^\top A B^{-1} A^\top y^* + \tfrac{1}{2}y^\top A B^{-1} A^\top y\\ &=& \tfrac{1}{2}(y-y^*)^\top AB^{-1} A^\top (y-y^*)\\ &\overset{\eqref{eq:98s98hs}}{=}& \tfrac{1}{2}\|x(y) - x(y^*)\|_B^2. \end{eqnarray*} \qed \end{proof} Applying this result to the sequence $\{y^k\}$ of dual iterates produced by SDA and their corresponding primal images $\{x^k\}$, as defined in \eqref{eq:primaliterates}, we get the identity: \[D(y^*)- D(y^k) = \tfrac{1}{2}\|x^k - x^*\|_B^2.\] Therefore, {\em dual convergence in function values $D(y^k)$ is equivalent to primal convergence in iterates $x^k$}. Furthermore, a direct computation leads to the following formula for the {\em duality gap}: \begin{equation}\label{eq:dualitygap09709709}P(x^k ) - D(y^k) \overset{\eqref{eq:primaliterates}}{=} (AB^{-1}A^\top y^k + Ac - b)^\top y^k = -(\nabla D(y^k) )^\top y^k.\end{equation} Note that computing the gap is significantly more expensive than the cost of a single iteration (in the interesting regime when the number of columns of $S$ is small). Hence, evaluation of the duality gap should generally be avoided.
If it is necessary to be certain about the quality of a solution, however, the above formula will be useful. The gap should then be computed from time to time only, so that this extra work does not significantly slow down the iterative process. \section{Discrete Distributions} \label{sec:discrete} Both the SDA algorithm and its primal counterpart are generic in the sense that the distribution $\cal D$ is not specified beyond assuming that the matrix $H$ defined in \eqref{eq:H} is well defined and nonsingular. In this section we shall first characterize finite discrete distributions for which $H$ is nonsingular. We then give a few examples of algorithms based on such distributions, and comment on our complexity results in more detail. \subsection{Nonsingularity of $H$ for finite discrete distributions} For simplicity, we shall focus on {\em finite discrete} distributions $\cal D$. That is, we set $S = S_i$ with probability $p_i>0$, where $S_1,\dots,S_r$ are fixed matrices (each with $m$ rows). The next theorem gives a necessary and sufficient condition for the matrix $H$ defined in \eqref{eq:H} to be nonsingular. \begin{theorem}\label{thm:H} Let $\cal{D}$ be a finite discrete distribution, as described above. Then $H$ is nonsingular if and only if \[\Range{[S_1S_1^\top A, \cdots, S_r S_r^\top A]} = \R^m .\] \end{theorem} \begin{proof} Let $K_i = S_i^\top AB^{-1/2}$. In view of the identity $\left(K_i K_i^\top \right)^{\dagger} = (K_i^\dagger )^\top K_i^\dagger$, we can write \[H \overset{\eqref{eq:H}}{=} \sum_{i=1}^r H_i,\] where $H_i = p_i S_i (K_i^\dagger)^\top K_i^\dagger S_i^\top$. Since $H_i$ are symmetric positive semidefinite, so is $H$. Now, it is easy to check that $y^\top H_i y = 0$ if and only if $y \in \Null{H_i}$ (this holds for any symmetric positive semidefinite $H_i$).
Hence, $y^\top H y = 0$ if and only if $y \in \cap_i \Null{H_i}$ and thus $H$ is positive definite if and only if \begin{equation} \label{eq:0h09sh0976}\bigcap_{i} \Null{H_i} = \{0\}.\end{equation} In view of Lemma~\ref{lem:09709s}, $\Null{H_i} = \Null{\sqrt{p_i}K_i^\dagger S_i^\top} = \Null{K_i^\dagger S_i^\top}$. Now, $y\in \Null{K_i^\dagger S_i^\top}$ if and only if $S_i^\top y \in \Null{K_i^\dagger} = \Null{K_i^\top} = \Null{A^\top S_i}$. Hence, $\Null{H_i} = \Null{A^\top S_i S_i^\top}$, which means that \eqref{eq:0h09sh0976} is equivalent to $\Null{[S_1S_1^\top A, \cdots, S_r S_r^\top A]^\top} = \{0\}$. \qed \end{proof} \bigskip We have the following corollary.\footnote{We can also prove the corollary directly as follows: The first assumption implies that $S_i^\top A B^{-1} A^\top S_i$ is invertible for all $i$ and that $V \eqdef \mbox{Diag}\left(p_i^{1/2}(S_i^\top A{B^{-1}}A^\top S_i)^{-1/2}\right)$ is nonsingular. It remains to note that \[ H \overset{\eqref{eq:H}}{=} \E{ S\left(S^\top AB^{-1}A^\top S\right)^{-1} S^\top} = \sum_i p_i S_i\left(S_i^\top AB^{-1}A^\top S_i \right)^{-1} S_i^\top = \bar{S}V^2 \bar{S}^\top. \] } \begin{corollary}\label{cor:09hs09hs} Assume that $S_i^\top A$ has full row rank for all $i$ and that $\bar{S} \eqdef [S_1,\ldots, S_r]$ is of full row rank. Then $H$ is nonsingular. \end{corollary} We now give a few illustrative examples: \begin{enumerate} \item \emph{Coordinate vectors.} Let $S_i = e_i$ ($i^{\text{th}}$ unit coordinate vector) for $i=1,2,\dots,r=m$. In this case, $\bar{S} = [S_1,\dots,S_m]$ is the identity matrix in $\R^m$, and $S_i^\top A$ has full row rank for all $i$ as long as the rows of $A$ are all nonzero. By Corollary~\ref{cor:09hs09hs}, $H$ is positive definite. \item \emph{Submatrices of the identity matrix.} We can let $S$ be a random column submatrix of the $m\times m$ identity matrix $I$. There are $2^m-1$ such potential submatrices, and we choose $1\leq r \leq 2^m-1$.
As long as we choose $S_1,\dots,S_r$ in such a way that each column of $I$ is represented in some matrix $S_i$, the matrix $\bar{S}$ will have full row rank. Furthermore, if $S_i^\top A$ has full row rank for all $i$, then by the above corollary, $H$ is nonsingular. Note that if the row rank of $A$ is $r$, then the matrices $S_i$ selected by the above process will necessarily have at most $r$ columns. \item \emph{Count sketch and Count-min sketch.} Many other ``sketching'' matrices $S$ can be employed within SDA, including the count sketch \cite{CountSketch2002} and the count-min sketch \cite{CountMinSketch2005}. In our context (recall that we sketch with the transpose of $S$), $S$ is a count-sketch matrix (resp.\ count-min sketch) if it is assembled from random columns of $[I,-I]$ (resp.\ $I$), chosen uniformly with replacement, where $I$ is the $m\times m$ identity matrix. \end{enumerate} \subsection{Randomized Kaczmarz as the primal process associated with randomized coordinate ascent } \label{subsec:RKvsRCA} Let $B=I$ (the identity matrix). The primal problem then becomes \begin{eqnarray} \text{minimize} && P(x)\eqdef \tfrac{1}{2}\|x-c\|^2 \notag\\ \text{subject to} && Ax=b \notag\\ && x\in \R^n,\notag \end{eqnarray} and the dual problem is \begin{eqnarray}\notag \text{maximize} && D(y)\eqdef (b-Ac)^\top y - \tfrac{1}{2}y^\top A A^\top y\\ \text{subject to}&& y \in \R^m. \notag \end{eqnarray} \paragraph{Dual iterates.} Let us choose $S=e_i$ (the $i^{\text{th}}$ unit coordinate vector in $\R^m$) with probability $p_i>0$ (to be specified later). The SDA method (Algorithm~\ref{alg:SDA}) then takes the form \begin{equation}\label{eq:SDA-compact08986986098} \boxed{\quad y^{k+1} = y^k + \frac{b_i - A_{i:}c - A_{i:} A^\top y^k }{\|A_{i:}\|^2}e_i \quad } \end{equation} This is the randomized coordinate ascent method applied to the dual problem.
In the form popularized by Nesterov \cite{Nesterov:2010RCDM}, it takes the form \[y^{k+1} = y^k + \frac{e_i^\top \nabla D(y^k)}{L_i} e_i,\] where $e_i^\top \nabla D(y^k)$ is the $i$th partial derivative of $D$ at $y^k$ and $L_i>0$ is the Lipschitz constant of the $i$th partial derivative, i.e., constant for which the following inequality holds for all $\lambda\in \R$: \begin{equation}\label{eq:s97g98gs} | e_i^\top \nabla D(y + \lambda e_i) - e_i^\top \nabla D(y) | \leq L_i |\lambda|.\end{equation} It can be easily verified that \eqref{eq:s97g98gs} holds with $L_i=\|A_{i:}\|^2$ and that $e_i^\top \nabla D(y^k)=b_i - A_{i:}c - A_{i:} A^\top y^k $. \paragraph{Primal iterates.} The associated primal iterative process (Algorithm~\ref{alg:SDA-Primal}) takes the form \begin{equation}\label{eq:SDA-primal009s09us0098} \boxed{\quad x^{k+1} = x^k - \frac{A_{i:} x^k -b_i }{\|A_{i:}\|^2} A_{i:}^\top \quad } \end{equation} This is the randomized Kaczmarz method of Strohmer and Vershynin \cite{SV:Kaczmarz2009}. \paragraph{The rate.} \bigskip Let us now compute the rate $\rho$ as defined in \eqref{eq:rho}. It will be convenient, but {\em not} optimal, to choose the probabilities via \begin{equation}\label{eq:089h08hs98xx}p_i = \frac{\norm{ A_{i:} }_2^2}{\norm{A}_F^2},\end{equation} where $\|\cdot\|_F$ denotes the Frobenius norm (we assume that $A$ does not contain any zero rows). Since \[H \overset{\eqref{eq:H}}{=} \E{S \left(S^\top A A^\top S \right)^{\dagger}S^\top } = \sum_{i=1}^m p_i \frac{e_i e_i^\top }{\|A_{i:}\|^2} \overset{\eqref{eq:089h08hs98xx}}{=} \frac{1}{\norm{A}_F^2}I,\] we have \begin{equation}\label{eq:98hs8h8ss}\rho = 1-\lambda_{\min}^+\left(A^\top H A \right) = 1-\frac{\lambda_{\min}^+\left(A^\top A\right)}{\norm{A}_F^2}. \end{equation} In general, the rate $\rho$ is a function of the probabilities $p_i$. The inverse problem: ``How to set the probabilities so that the rate is optimized?'' is difficult. 
If $A$ is of full column rank, however, it leads to a semidefinite program \cite{Gower2015b}. Furthermore, if $r= \Rank{A}$, then in view of \eqref{eq:nubound}, the rate is bounded as \[ 1- \frac{1}{r}\leq \rho <1. \] Assume that $A$ is of rank $r=1$ and let $A= uv^\top$. Then $A^\top A = (u^\top u) v v^\top$, and hence this matrix is also of rank 1. Therefore, $A^\top A$ has a single nonzero eigenvalue, which is equal to its trace. Hence, $\lambda_{\min}^+(A^\top A) = \Tr{A^\top A} = \|A\|^2_F$ and $\rho =0$. Note that the rate $\rho$ reaches its lower bound and the method converges in one step. \paragraph{Remarks.} For randomized coordinate ascent applied to (non-strongly) concave quadratics, rate \eqref{eq:98hs8h8ss} has been established by Leventhal and Lewis \cite{Leventhal:2008:RMLC}. However, to the best of our knowledge, this is the first time this rate has also been established for the randomized Kaczmarz method. We not only prove this, but also show that it holds because the iterates of the two methods are linked via a linear relationship. In the $c=0, B=I$ case, and for a row-normalized matrix $A$, this linear relationship between the two methods was recently independently observed by Wright~\cite{Wright:CoorDescMethods-survey}. While all linear complexity results for RK we are aware of require full rank assumptions, there exist nonstandard variants of RK which do not require such assumptions, one example being the asynchronous parallel version of RK studied by Liu, Wright and Sridhar \cite{Wright:AsyncPRK}. Finally, no results of the type \eqref{eq:PRIMALSUBOPT} (primal suboptimality) and \eqref{eq:GAPSUBOPT} (duality gap) previously existed for these methods in the literature. \subsection{Randomized block Kaczmarz is the primal process associated with randomized Newton} Let $B=I$, so that we have the same pair of primal and dual problems as in Section~\ref{subsec:RKvsRCA}.
\paragraph{Dual iterates.} Let us now choose $S$ to be a random column submatrix of the $m\times m$ identity matrix $I$. That is, we choose a random subset $C\subset \{1,2,\dots,m\}$ and then let $S$ be the concatenation of columns $j\in C$ of $I$. We shall write $S=I_C$. Let $p_C$ be the probability that $S = I_C$. Assume that for each $j \in \{1,\dots,m\}$ there exists $C$ with $j \in C$ such that $p_C>0$. Such a random set is called {\em proper}~\cite{SDNA}. The SDA method (Algorithm~\ref{alg:SDA}) then takes the form \begin{equation}\label{eq:SDA-compact08986986098BLOCK} \boxed{\quad y^{k+1} = y^k + I_C \lambda^k \quad } \end{equation} where $\lambda^k$ is chosen so that the dual objective is maximized (see \eqref{eq:lambda_closed_form}). This is a variant of the {\em randomized Newton method} studied in \cite{SDNA}. By examining \eqref{eq:lambda_closed_form}, we see that this method works by ``inverting'' randomized submatrices of the ``Hessian'' $AA^\top$. Indeed, $\lambda^k$ is in each iteration computed by solving a system with the matrix $I_C^\top A A^\top I_C$. This is the random submatrix of $A A^\top$ corresponding to rows and columns in $C$. \paragraph{Primal iterates.} In view of the equivalence between Algorithm~\ref{alg:SDA-Primal} and the sketch-and-project method \eqref{eq:sketchandproject}, the primal iterative process associated with the randomized Newton method has the form \begin{equation}\label{eq:SDA-primal009s09us0098BLOCK} \boxed{\quad x^{k+1} = \arg \min_{x} \left\{ \|x-x^k\| \;:\; I_C^\top Ax = I_C^\top b \right\} \quad } \end{equation} This method is a variant of the {\em randomized block Kaczmarz} method of Needell \cite{Needell2012}. The method proceeds by projecting the last iterate $x^k$ onto a subsystem of $Ax=b$ formed by equations indexed by the set $C$. 
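To make the primal-dual pairing concrete, the following Python sketch (a toy illustration of ours; the dimensions, seed, block size and uniform sampling of $C$ are arbitrary choices, not prescribed by the analysis) runs the dual update \eqref{eq:SDA-compact08986986098BLOCK} and the primal projection \eqref{eq:SDA-primal009s09us0098BLOCK} with the same random block $C$ in each iteration, and checks the correspondence $x^k = c + A^\top y^k$ from \eqref{eq:primaliterates} (here $B=I$):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, tau = 30, 8, 4                    # tau = |C| is the block size
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)          # consistent system Ax = b
c = rng.standard_normal(n)              # primal objective: min (1/2)||x - c||^2

y = np.zeros(m)                         # dual iterate, y^0 = 0
x = c + A.T @ y                         # primal iterate x(y^0) = c

for k in range(500):
    C = rng.choice(m, size=tau, replace=False)     # random row block C
    A_C = A[C]
    # randomized Newton step: solve the tau x tau system with (A A^T)_{C,C}
    lam = np.linalg.pinv(A_C @ A_C.T) @ (b[C] - A_C @ c - A_C @ (A.T @ y))
    y[C] += lam
    # randomized block Kaczmarz step: project x onto {x : A_C x = b_C}
    x -= A_C.T @ (np.linalg.pinv(A_C @ A_C.T) @ (A_C @ x - b[C]))
    assert np.allclose(x, c + A.T @ y)  # primal-dual linkage

print(np.linalg.norm(A @ x - b))        # residual decays towards 0
```

With $\tau=1$ this reduces to the randomized Kaczmarz and randomized coordinate ascent pair \eqref{eq:SDA-primal009s09us0098} and \eqref{eq:SDA-compact08986986098} of Section~\ref{subsec:RKvsRCA}.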
\paragraph{The rate.} Provided that $H$ is nonsingular, the shared rate of the randomized Newton and randomized block Kaczmarz methods is \[\rho = 1- \lambda_{\min}^+\left(A^\top \E{I_C\left(I_C^\top AA^\top I_C\right)^\dagger I_C^\top} A\right).\] Qu et al.\ \cite{SDNA} study the randomized Newton method for the problem of minimizing a smooth strongly convex function and prove linear convergence. In particular, they study the above rate in the case when $AA^\top$ is positive definite. Here we show that linear convergence also holds for {\em weakly} convex quadratics (as long as $H$ is nonsingular). An interesting feature of the randomized Newton method, established in \cite{SDNA}, is that when viewed as a family of methods indexed by the size $\tau=|C|$, it enjoys superlinear speedup in $\tau$. That is, as $\tau$ increases by some factor, the iteration complexity drops by a factor that is at least as large. It is possible to conduct a similar study in our setting with a possibly singular matrix $AA^\top$, but such a study is not trivial and we therefore leave it for future research. \subsection{Self-duality for positive definite $A$} If $A$ is positive definite, then we can choose $B=A$. As mentioned before, in this setting SDA is self-dual: $x^k=y^k$ for all $k$. The primal problem then becomes \begin{eqnarray} \text{minimize} && P(x)\eqdef \tfrac{1}{2}x^\top A x \notag\\ \text{subject to} && Ax=b \notag\\ && x\in \R^n,\notag \end{eqnarray} and the dual problem becomes \begin{eqnarray}\notag \text{maximize} && D(y)\eqdef b^\top y - \tfrac{1}{2}y^\top A y\\ \text{subject to}&& y \in \R^m. \notag \end{eqnarray} Note that the primal objective function does not play any role in determining the solution; indeed, the feasible set contains a single point only: $A^{-1}b$. However, it does affect the iterative process.
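The self-duality claim is easy to check numerically. The sketch below (our own illustration; the matrix construction, uniform sampling, and iteration count are arbitrary choices) implements the generic dual and primal updates with $S=e_i$, $B=A$ and $c=0$, computing $B^{-1}A^\top$ and $AB^{-1}A^\top$ explicitly rather than hard-coding their simplifications, and verifies that the two sequences coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # symmetric positive definite
b = A @ rng.standard_normal(n)
c = np.zeros(n)

B = A                                    # the self-dual choice B = A
Binv_At = np.linalg.solve(B, A.T)        # B^{-1} A^T (here: the identity)
G = A @ Binv_At                          # A B^{-1} A^T (here: A itself)

y = np.zeros(n)                          # dual iterate
x = c + Binv_At @ y                      # primal iterate x(y^0) = 0

for k in range(1000):
    i = rng.integers(n)                  # uniform sampling, for simplicity
    # generic SDA dual step with S = e_i
    y[i] += (b[i] - A[i] @ c - G[i] @ y) / G[i, i]
    # generic SDA-Primal step with S = e_i
    x -= Binv_At[:, i] * (A[i] @ x - b[i]) / G[i, i]
    assert np.allclose(x, y)             # self-duality: x^k = y^k

print(np.linalg.norm(A @ x - b))         # residual decays towards 0
```

Since $B^{-1}A^\top = I$ and $AB^{-1}A^\top = A$, both updates collapse to the same coordinate step, which is why the assertion holds at every iteration.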
\paragraph{Primal and dual iterates.} As before, let us choose $S=e_i$ (the $i^{\text{th}}$ unit coordinate vector in $\R^m$) with probability $p_i>0$, where the probabilities $p_i$ are arbitrary. Then both the primal and the dual iterates take the form \[ \boxed{\quad y^{k+1} = y^k - \frac{A_{i:} y^k - b_i}{A_{ii}}e_i \quad } \] This is the randomized coordinate ascent method applied to the dual problem. \paragraph{The rate.} If we choose $p_i = A_{ii}/\Tr{A}$, then \[H = \E{S\left(S^\top A S\right)^\dagger S^\top} = \frac{I}{\Tr{A}},\] whence \[\rho \overset{\eqref{eq:rho} }{=} 1 - \lambda_{\min}^+ \left( A^{1/2}H A^{1/2}\right) = 1 - \frac{ \lambda_{\min}(A)}{\Tr{A}}.\] It is known that for this problem, randomized coordinate ascent applied to the dual problem, with this choice of probabilities, converges with this rate \cite{Leventhal:2008:RMLC}. \section{Application: Randomized Gossip Algorithms} \label{sec:gossip} In this section we apply our method and results to the distributed consensus (averaging) problem. Let $(V,E)$ be a connected network with $|V|=n$ nodes and $|E|=m$ edges, where each edge is an unordered pair $\{i,j\} \in E$ of distinct nodes. Node $i \in V$ stores a private value $c_i\in \R$. The goal of a ``distributed consensus problem'' is for the network to compute the average of these private values in a distributed fashion \cite{RandGossip2006,OlshevskyTsitsiklis2009}. This means that the exchange of information can only occur along the edges of the network. The nodes may represent people in a social network, with edges representing friendship and private values representing certain private information, such as salary. The goal would be to compute the average salary via an iterative process where only friends are allowed to exchange information. The nodes may represent sensors in a wireless sensor network, with an edge between two sensors if they are close to each other so that they can communicate.
Private values represent measurements of some quantity performed by the sensors, such as the temperature. The goal is for the network to compute the average temperature. \subsection{Consensus as a projection problem} We now show how one can model the consensus (averaging) problem in the form \eqref{eq:P}. Consider the projection problem \begin{eqnarray} \text{minimize} && \tfrac{1}{2}\|x - c\|^2 \notag \\ \text{subject to} & & x_1=x_2=\cdots = x_n, \label{eq:ohs09hud98yd} \end{eqnarray} and note that the optimal solution $x^*$ must necessarily satisfy \[x^*_i=\bar{c}\eqdef \frac{1}{n}\sum_{i=1}^n c_i,\] for all $i$. There are many ways in which the constraint forcing all coordinates of $x$ to be equal can be represented in the form of a linear system $Ax=b$. Here are some examples: \begin{enumerate} \item {\em Each node is equal to all its neighbours.} Let the equations of the system $Ax=b$ correspond to constraints \[x_i=x_j,\] for $\{i,j\}\in E$. That is, we are enforcing all pairs of vertices joined by an edge to have the same value. Each edge $e\in E$ can be written in two ways: $e = \{i,j\}$ and $e=\{j,i\}$, where $i,j$ are the incident vertices. In order to avoid duplicating constraints, for each edge $e\in E$ we use $e=(i,j)$ to denote an arbitrary but fixed order of its incident vertices $i,j$. We then let $A\in \R^{m\times n}$ and $b=0\in \R^m$, where \begin{equation}\label{eq:89hs87s8ys}(A_{e:})^\top = f_i - f_j,\end{equation} and where $e=(i,j)\in E$, $f_i$ (resp.\ $f_j$) is the $i^{\text{th}}$ (resp.\ $j^{\text{th}}$) unit coordinate vector in $\R^n$. Note that the constraint $x_i=x_j$ is represented only once in the linear system. Further, note that the matrix \begin{equation}\label{eq:Laplacian09709}L = A^\top A \end{equation} is the {\em Laplacian} matrix of the graph $(V,E)$: \[L_{ij} = \begin{cases} d_i & i=j\\ -1 & i\neq j,\,\, (i,j)\in E\\ 0 & \text{otherwise,} \end{cases}\] where $d_i$ is the degree of node $i$. 
\item {\em Each node is the average of its neighbours.} Let the equations of the system $Ax=b$ correspond to constraints \[x_i = \frac{1}{d_i}\sum_{j \in N(i)} x_j,\] for $i\in V$, where $N(i)\eqdef \left\{ j\in V \,\,: \,\, \{i,j\} \in E\right\}$ is the set of neighbours of node $i$ and $d_i \eqdef |N(i)|$ is the degree of node $i$. That is, we require that the value stored at each node is equal to the average of the values of its neighbours. This corresponds to the choice $b=0$ and \begin{equation}\label{eq:s8h98s78gd}(A_{i:})^\top = f_i - \frac{1}{d_i}\sum_{j \in N(i)}f_j.\end{equation} Note that $A\in \R^{n\times n}$. \item {\em Spanning subgraph.} Let $(V,E')$ be any connected subgraph of $(V,E)$. For instance, we can choose a spanning tree. We can now apply either of the two models above to this new graph and either require $x_i=x_j$ for all $\{i,j\}\in E'$, or require the value $x_i$ to be equal to the average of the values $x_j$ for all neighbours $j$ of $i$ in $(V,E')$. \end{enumerate} Clearly, the above list does not exhaust the ways in which the constraint $x_1=\dots=x_n$ can be modeled as a linear system. For instance, we could build the system from constraints such as $x_1 = x_2 + x_4 - x_3$, $x_1 = 5 x_2 - 4 x_7$ and so on. Different representations of the constraint $x_1=\cdots=x_n$, in combination with a choice of $\cal D$, will lead to a wide range of specific algorithms for the consensus problem \eqref{eq:ohs09hud98yd}. Some (but not all) of these algorithms will have the property that communication only happens along the edges of the network, and these are the ones we are interested in. The number of combinations is vast. We will therefore only highlight two options, with the understanding that based on this, the interested reader can assemble other specific methods as needed. \subsection{Model 1: Each node is equal to its neighbours} Let $b=0$ and $A$ be as in \eqref{eq:89hs87s8ys}.
Let the distribution $\cal D$ be defined by setting $S=e_i$ with probability $p_i>0$, where $e_i$ is the $i^{\text{th}}$ unit coordinate vector in $\R^m$. We have $B=I$, which means that Algorithm~\ref{alg:SDA-Primal} is the randomized Kaczmarz (RK) method \eqref{eq:SDA-primal009s09us0098} and Algorithm~\ref{alg:SDA} is the randomized coordinate ascent method \eqref{eq:SDA-compact08986986098}. Let us take $y^0 = 0$ (which means that $x^0=c$), so that in Theorem~\ref{theo:Enormerror} we have $t=0$, and hence $x^k \to x^*$. The particular choice of the starting point $x^0=c$ in the primal process has a very tangible meaning: for all $i$, node $i$ initially knows value $c_i$. The primal iterative process will dictate how the local values are modified in an iterative fashion so that eventually all nodes contain the optimal value $x^*_i = \bar{c}$. \paragraph{Primal method.} In view of \eqref{eq:89hs87s8ys}, for each edge $e = (i,j)\in E$, we have $\|A_{e:}\|^2=2$ and $A_{e:}x^k = x^k_i - x^k_j$. Hence, if the edge $e$ is selected by the RK method, \eqref{eq:SDA-primal009s09us0098} takes the specific form \begin{equation}\label{eq:s98g98gsf66r6fs}\boxed{\quad x^{k+1} = x^k - \frac{x^k_i - x^k_j}{2} (f_i - f_j) \quad }\end{equation} From \eqref{eq:s98g98gsf66r6fs} we see that only the $i^{\text{th}}$ and $j^{\text{th}}$ coordinates of $x^k$ are updated, via \[x^{k+1}_i = x^k_i - \frac{x^k_i - x^k_j}{2} = \frac{x_i^k + x_j^k}{2}\] and \[x^{k+1}_j = x^k_j + \frac{x^k_i - x^k_j}{2} = \frac{x_i^k + x_j^k}{2}.\] Note that in each iteration of RK, a random edge is selected, and the nodes on this edge replace their local values by their average. This is a basic variant of the {\em randomized gossip} algorithm \cite{RandGossip2006, ZouziasFreris2009}. \paragraph{Invariance.} Let $f$ be the vector of all ones in $\R^n$ and notice that from \eqref{eq:s98g98gsf66r6fs} we obtain $f^\top x^{k+1} = f^\top x^k$ for all $k$. 
This means that for all $k\geq 0$ we have the invariance property: \begin{equation}\label{eq:invariance}\sum_{i=1}^n x_i^k = \sum_{i=1}^n c_i.\end{equation} \paragraph{Insights from the dual perspective.} We can now bring new insight into the randomized gossip algorithm by considering the dual iterative process. The dual method \eqref{eq:SDA-compact08986986098} maintains weights $y^k$ associated with the edges of $E$ via the process: \[y^{k+1} = y^k - \frac{A_{e:} (c + A^\top y^k)}{2}e_e,\] where $e$ is a randomly selected edge. Hence, only the weight of a single edge is updated in each iteration. At optimality, we have $x^* =c + A^\top y^*$. That is, for each $i$, \[\delta_i\eqdef \bar{c} - c_i = x_i^* - c_i = (A^\top y^*)_i = \sum_{e\in E} A_{ei} y^*_e,\] where $\delta_i$ is the correction term which needs to be added to $c_i$ in order for node $i$ to contain the value $\bar{c}$. From the above we observe that these correction terms are maintained by the dual method as an inner product of the $i^{\text{th}}$ column of $A$ and $y^k$, with the optimal correction being $\delta_i = A_{:i}^\top y^*$. \paragraph{Rate.} Both Theorem~\ref{theo:Enormerror} and Theorem~\ref{theo:2} hold, and hence we automatically get several types of convergence for the randomized gossip method. In particular, to the best of our knowledge, no primal-dual convergence results of this type previously existed in the literature. Equation~\eqref{eq:dualitygap09709709} gives a stopping criterion certifying convergence via the duality gap, which is also new. In view of \eqref{eq:98hs8h8ss} and \eqref{eq:Laplacian09709}, and since $\|A\|_F^2 = 2m$, the convergence rate appearing in all these complexity results is given by \[\rho = 1 - \frac{\lambda_{\min}^+(L)}{2m},\] where $L$ is the Laplacian of $(V,E)$. While the Laplacian is known to be singular, the rate depends on its smallest nonzero eigenvalue.
This means that the number of iterations needed to output an $\epsilon$-solution in expectation scales as $O(\left(2m/\lambda_{\min}^+(L)\right)\log(1/\epsilon))$, i.e., linearly with the number of edges. \subsection{Model 2: Each node is equal to the average of its neighbours} Let $A$ be as in \eqref{eq:s8h98s78gd} and $b=0$. Let the distribution $\cal D$ be defined by setting $S=f_i$ with probability $p_i>0$, where $f_i$ is the $i^{\text{th}}$ unit coordinate vector in $\R^n$. Again, we have $B=I$, which means that Algorithm~\ref{alg:SDA-Primal} is the randomized Kaczmarz (RK) method \eqref{eq:SDA-primal009s09us0098} and Algorithm~\ref{alg:SDA} is the randomized coordinate ascent method \eqref{eq:SDA-compact08986986098}. As before, we choose $y^0=0$, whence $x^0=c$. \paragraph{Primal method.} Observe that $\|A_{i:}\|^2 = 1 + 1/d_i$. The RK method \eqref{eq:SDA-primal009s09us0098} applied to this formulation of the problem takes the form \begin{equation} \label{eq:sihiuhd098d7d} \boxed{\quad x^{k+1} = x^k - \frac{x^k_i - \frac{1}{d_i}\sum_{j\in N(i)}x^k_j }{1 + 1/d_i} \left(f_i - \frac{1}{d_i}\sum_{j\in N(i)}f_j \right) \quad } \end{equation} where $i$ is chosen at random. This means that only the coordinates in $\{i\}\cup N(i)$ get updated in such an iteration; the others remain unchanged. For node $i$ (coordinate $i$), this update is \begin{equation}\label{eq:9g8g98gssdd} x^{k+1}_i = \frac{1}{d_i+1} \left( x_i^k + \sum_{j\in N(i)}x^k_j \right).\end{equation} That is, the updated value at node $i$ is the average of the values of its neighbours and the previous value at $i$.
From \eqref{eq:sihiuhd098d7d} we see that the values at nodes $j\in N(i)$ get updated as follows: \begin{equation}\label{eq:98889ff} x^{k+1}_j = x_j^{k} + \frac{1}{d_i+1}\left(x_i^k - \frac{1}{d_i}\sum_{j'\in N(i)}x^k_{j'} \right).\end{equation} \paragraph{Invariance.} Let $f$ be the vector of all ones in $\R^n$ and notice that from \eqref{eq:sihiuhd098d7d} we obtain \[f^\top x^{k+1} = f^\top x^k -\frac{x^k_i - \frac{1}{d_i}\sum_{j\in N(i)}x^k_j }{1 + 1/d_i} \left(1 - \frac{d_i}{d_i} \right)=f^\top x^k, \] for all $k$. It follows that the method satisfies the invariance property \eqref{eq:invariance}. \paragraph{Rate.} The method converges with the rate $\rho$ given by \eqref{eq:98hs8h8ss}, where $A$ is given by \eqref{eq:s8h98s78gd}. If $(V,E)$ is a complete graph (i.e., $m=\tfrac{n(n-1)}{2}$), then $L = \tfrac{(n-1)^2}{n}A^\top A$ is the Laplacian. In that case, $\|A\|_F^2 = \Tr{A^\top A} = \tfrac{n}{(n-1)^2}\Tr{L}=\tfrac{n}{(n-1)^2}\sum_i d_i = \tfrac{n^2}{n-1}$ and hence \[\rho \overset{\eqref{eq:s8h98s78gd}}{=} 1 - \frac{\lambda_{\min}^+(A^\top A)}{\|A\|_F^2} = 1 - \frac{\tfrac{n}{(n-1)^2}\lambda_{\min}^+(L)}{\tfrac{n^2}{n-1}} = 1 - \frac{\lambda_{\min}^+(L)}{2m}.\] \section{Proof of Theorem~\ref{theo:Enormerror}} \label{sec:proof} In this section we prove Theorem~\ref{theo:Enormerror}. We proceed as follows: in Section~\ref{subsec:error} we characterize the space in which the iterates move, in Section~\ref{subsec:inequality} we establish a certain key technical inequality, in Section~\ref{subsec:convergence} we establish convergence of iterates, in Section~\ref{subsec:residual} we derive a rate for the residual and finally, in Section~\ref{subsec:lower_bound} we establish the lower bound on the convergence rate. \subsection{An error lemma} \label{subsec:error} The following result describes the space in which the iterates move. It is an extension of the observation in Remark~\ref{lem:5shsuss} to the case when $x^0$ is chosen arbitrarily. 
\begin{lemma} \label{lem:error} Let the assumptions of Theorem~\ref{theo:Enormerror} hold. For all $k\geq 0$ there exists $w^k\in \R^m$ such that $e^k \eqdef x^k - x^* - t = B^{-1}A^\top w^k$. \end{lemma} \begin{proof} We proceed by induction. Since, by definition, $t$ is the projection of $x^0-c$ onto $\Null{A}$ (see \eqref{eq:98hs8htt}), applying Lemma~\ref{eq:decomposition} we know that $x^0 -c = s+ t$, where $s = B^{-1}A^\top \hat{y}^0$ for some $\hat{y}^0\in \R^m$. Moreover, in view of \eqref{eq:opt_primal}, we know that $x^* =c + B^{-1}A^\top y^*$, where $y^*$ is any dual optimal solution. Hence, \[e^0 = x^0 - x^* - t = B^{-1}A^\top (\hat{y}^0-y^*).\] Assuming the relationship holds for $k$, we have \begin{eqnarray*}e^{k+1} & = & x^{k+1} - x^* - t\\ &\overset{(\text{Alg}~\ref{alg:SDA-Primal})}{=} & \left[x^k - B^{-1}A^\top S (S^\top A B^{-1} A^\top S)^\dagger S^\top (A x^k -b ) \right] - x^*-t\\ &= & \left[x^* + t + B^{-1}A^\top w^k - B^{-1}A^\top S (S^\top A B^{-1} A^\top S)^\dagger S^\top (A x^k -b )\right] - x^* - t\\ &=& B^{-1}A^\top w^{k+1}, \end{eqnarray*} where $w^{k+1} = w^k - S (S^\top A B^{-1} A^\top S)^\dagger S^\top (A x^k -b )$. \qed \end{proof} \subsection{A key inequality} \label{subsec:inequality} The following inequality is of key importance in the proof of the main theorem. \begin{lemma}\label{lem:WGWtight} Let $0\neq W \in \R^{m\times n}$ and $G \in \R^{m\times m}$ be symmetric positive definite. Then the matrix $W^\top G W$ has a positive eigenvalue, and the following inequality holds for all $y\in \R^m$: \begin{equation}\label{eq:WGWtight} y^\top WW^\top G WW^\top y \geq \lambda_{\min}^+(W^\top G W)\|W^\top y\|^2. \end{equation} Furthermore, this bound is tight. \end{lemma} \begin{proof} Fix arbitrary $y\in \R^m$. By Lemma~\ref{lem:09709s}, $W^\top y\in \Range{W^\top G W}$. Since $W$ is nonzero, the positive semidefinite matrix $W^\top G W$ is also nonzero, and hence it has a positive eigenvalue.
Hence, $\lambda_{\min}^+(W^\top G W)$ is well defined. Let $\lambda_{\min}^+(W^\top G W) = \lambda_{1}\leq \cdots\leq \lambda_{\tau}$ be the positive eigenvalues of $W^\top GW$, with associated orthonormal eigenvectors $q_1,\dots,q_\tau$. We thus have \[W^\top G W = \sum_{i=1}^\tau \lambda_i q_i q_i^\top.\] It is easy to see that these eigenvectors span $\Range{W^\top GW}$. Hence, we can write $W^\top y = \sum_{i=1}^{\tau} \alpha_i q_i$ and therefore \[y^\top WW^\top G W W^\top y = \sum_{i=1}^{\tau} \lambda_{i}\alpha_i^2 \geq \lambda_1\sum_{i=1}^{\tau} \alpha_i^2 = \lambda_1 \|W^\top y\|^2.\] Furthermore this bound is tight, as can be seen by selecting $y$ so that $W^\top y = q_1$. \qed \end{proof} \subsection{Convergence of the iterates} \label{subsec:convergence} Subtracting $x^*+t$ from both sides of the update step of Algorithm~\ref{alg:SDA-Primal}, and letting \[Z= Z_{S^\top A} \overset{\eqref{eq:Z_A}}{=} A^\top S (S^\top A B^{-1} A^\top S)^\dagger S^\top A,\] we obtain the identity \begin{equation} \label{eq:fixed0} x^{k+1} - (x^{*}+t) = (I-B^{-1}Z)(x^k-(x^{*}+t)), \end{equation} where we used that $t\in \Null{A}$. Let \begin{equation}\label{eq:h98sh9sd}e^k = x^k - (x^{*}+t)\end{equation} and note that in view of \eqref{eq:H}, $\E{Z} = A^\top H A$. Taking norms and expectations (in $S$) on both sides of~\eqref{eq:fixed0} gives \begin{eqnarray} \E{\norm{e^{k+1}}_{B}^2\, | \, e^k} &=& \E{\norm{(I-{B^{-1}}Z)e^k }^2_B} \nonumber \\ &\overset{(\text{Lemma}~\ref{eq:decomposition}, \text{ Equation}~\eqref{eq:iuhiuhpp} )}{=}& \E{(e^k)^\top (B-Z) e^k} \nonumber\\ & =& \norm{e^k}_{B}^2- (e^k)^\top \E{Z}e^k \nonumber \\ &=& \norm{e^k}_{B}^2- (e^k)^\top A^\top H A e^k, \label{eq:theostep1} \end{eqnarray} where in the second step we have used \eqref{eq:iuhiuhpp} from Lemma~\ref{eq:decomposition} with $A$ replaced by $S^\top A$. 
In view of Lemma~\ref{lem:error}, let $w^k\in \R^{m}$ be such that $e^k = B^{-1}A^\top w^k.$ Thus \begin{eqnarray} (e^k)^\top A^\top H A e^k \quad &=&(w^k)^\top AB^{-1}A^\top H A B^{-1}A^\top w^k\nonumber \\ &\overset{(\text{Lemma}~\ref{lem:WGWtight})}{\geq}& \lambda_{\min}^+(B^{-1/2}A^\top H A B^{-1/2})\cdot \|B^{-1/2}A^\top w^k\|^2 \nonumber\\ &= &(1-\rho) \cdot \|B^{-1}A^\top w^k\|_B^2 \\ &=& (1- \rho) \cdot\norm{e^k}_B^2,\label{eq:theostep2} \end{eqnarray} where we applied Lemma~\ref{lem:WGWtight} with $W = AB^{-1/2}$ and $G = H$, so that $W^\top GW = B^{-1/2}A^\top H A B^{-1/2}.$ Substituting~\eqref{eq:theostep2} into~\eqref{eq:theostep1} gives $\E{\norm{e^{k+1}}_{B}^2\, | \, e^k} \leq \rho \cdot \norm{e^k}_B^2$. Using the tower property of expectations, we obtain the recurrence \[\E{\norm{e^{k+1}}_{B}^2} \leq \rho \cdot \E{\norm{e^k}_B^2}. \] To prove~\eqref{eq:Enormerror} it remains to unroll the recurrence. \subsection{Convergence of the residual} \label{subsec:residual} We now prove \eqref{eq:s98h09hsxxx}. Letting $V_k = \|x^k-x^*-t\|_B^2$, we have \begin{eqnarray*} \E{\|Ax^k - b\|_B} &=& \E{\|A(x^k-x^*-t) + At\|_B}\\ &\leq &\E{\|A(x^k-x^*-t) \|_B} + \|At\|_B\\ &\leq & \|A\|_B \E{\sqrt{V_k}} + \|At\|_B\\ &\leq &\|A\|_B \sqrt{\E{V_k}} + \|At\|_B\\ &\overset{\eqref{eq:Enormerror}}{\leq}&\|A\|_B \sqrt{\rho^k V_0 } + \|At\|_B, \end{eqnarray*} where in the step preceding the last one we have used Jensen's inequality. 
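The key inequality \eqref{eq:WGWtight} that drives the contraction above can be sanity-checked numerically. The following Python/NumPy sketch (illustrative only, not part of the proof) draws a random nonzero $W$ and a random symmetric positive definite $G$ and tests the bound on random vectors $y$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
W = rng.standard_normal((m, n))            # any nonzero W
M = rng.standard_normal((m, m))
G = M @ M.T + np.eye(m)                    # symmetric positive definite G
eigs = np.linalg.eigvalsh(W.T @ G @ W)
lam = min(e for e in eigs if e > 1e-9)     # lambda_min^+(W^T G W)
for _ in range(1000):
    y = rng.standard_normal(m)
    lhs = y @ W @ W.T @ G @ W @ W.T @ y    # y^T W W^T G W W^T y
    rhs = lam * np.linalg.norm(W.T @ y) ** 2
    assert lhs >= rhs - 1e-8 * max(1.0, rhs)   # inequality (WGWtight)
print("inequality verified on 1000 random vectors")
```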
\subsection{Proof of the lower bound}\label{subsec:lower_bound} Since $\E{Z} = A^\top H A$, using Lemma~\ref{lem:09709s} with $G=H$ and $W=A B^{-1/2}$ gives \begin{align*} \Range{B^{-1/2}\E{Z}B^{-1/2}}=\Range{B^{-1/2}A^\top}, \end{align*} from which we deduce that \begin{eqnarray*}\Rank{A} &=& \dim\left(\Range{A^\top}\right) \\ &=& \dim\left(\Range{B^{-1/2}A^\top}\right)\\ &=& \dim\left(\Range{B^{-1/2}\E{Z}B^{-1/2}}\right)\\ &=& \Rank{B^{-1/2}\E{Z}B^{-1/2}}.\end{eqnarray*} Hence, $\Rank{A}$ is equal to the number of nonzero eigenvalues of $B^{-1/2}\E{Z}B^{-1/2}$, from which we immediately obtain the bound \begin{eqnarray*} \Tr{B^{-1/2}\E{Z}B^{-1/2}} &\geq& \Rank{A }\, \lambda_{\min}^+(B^{-1/2}\E{Z}B^{-1/2}). \end{eqnarray*} In order to obtain \eqref{eq:nubound}, it only remains to combine the above inequality with \[ \E{\Rank{S^\top A}} \overset{ \eqref{eq:ugisug7sss}}{=} \E{\Tr{B^{-1}Z}} = \E{\Tr{B^{-1/2}Z B^{-1/2}}} = \Tr{B^{-1/2}\E{Z}B^{-1/2}}. \] \section{Proof of Theorem~\ref{theo:2}} \label{sec:proof2} In this section we prove Theorem~\ref{theo:2}. We dedicate a subsection to each of the three complexity bounds. \subsection{Dual suboptimality} Since $x^0\in c+\Range{B^{-1}A^\top}$, we have $t=0$ in Theorem~\ref{theo:Enormerror}, and hence \eqref{eq:Enormerror} says that \begin{equation}\label{eq:s98h98shs}\E{U_k} \leq \rho^k U_0.\end{equation} It remains to apply Proposition~\ref{lem:correspondence}, which says that $U_k = D(y^*)-D(y^k)$.
\subsection{Primal suboptimality} Letting $U_k = \tfrac{1}{2}\|x^k- x^*\|_B^2$, we can write \begin{eqnarray} P(x^k) - OPT &=& \tfrac{1}{2}\|x^k - c\|_B^2 - \tfrac{1}{2}\|x^* - c\|_B^2\notag\\ &=& \tfrac{1}{2}\|x^k - x^* + x^* - c\|_B^2 - \tfrac{1}{2}\|x^* - c\|_B^2\notag\\ &=& \tfrac{1}{2}\|x^k- x^*\|_B^2 + (x^k-x^*)^\top B (x^*-c)\notag\\ &\leq & U_k + \|x^k-x^*\|_B \|B(x^*-c)\|_{B^{-1}} \notag\\ &=& U_k +\|x^k-x^*\|_B \|x^*-c\|_B \notag \\ &= & U_k + 2 \sqrt{U_k} \sqrt{OPT}.\label{eq:iuhs89h98s6s} \end{eqnarray} By taking expectations on both sides of \eqref{eq:iuhs89h98s6s}, and using Jensen's inequality, we therefore obtain \[\E{P(x^k)-OPT} \leq \E{U_k} + 2\sqrt{OPT} \sqrt{\E{U_k}} \overset{\eqref{eq:s98h98shs}}{\leq} \rho^k U_0 + 2 \rho^{k/2} \sqrt{OPT \times U_0},\] which establishes the bound on primal suboptimality \eqref{eq:PRIMALSUBOPT}. \subsection{Duality gap} Having established rates for primal and dual suboptimality, the rate for the duality gap follows easily: \begin{eqnarray*} \E{P(x^k) - D(y^k)} &=& \E{P(x^k) - OPT + OPT - D(y^k)}\\ &=& \E{P(x^k) - OPT} + \E{OPT - D(y^k)}\\ &\overset{\eqref{eq:DUALSUBOPT} + \eqref{eq:PRIMALSUBOPT}}{\leq}& 2 \rho^k U_0 + 2 \rho^{k/2} \sqrt{OPT \times U_0}. \end{eqnarray*} \section{Numerical Experiments: Randomized Kaczmarz Method with Rank-Deficient System} To illustrate some of the novel aspects of our theory, we perform numerical experiments with the Randomized Kaczmarz method~\eqref{eq:SDA-primal009s09us0098} (or equivalently the randomized coordinate ascent method applied to the dual problem~\eqref{eq:D}) and compare the empirical convergence to the convergence predicted by our theory. We test several randomly generated rank-deficient systems and compare the evolution of the empirical primal error $\norm{x^k-x^*}_2^2/\norm{x^0-x^*}_2^2$ to the convergence dictated by the rate $\rho = 1-\lambda_{\min}^+\left(A^\top A\right)/\norm{A}_F^2$ given in~\eqref{eq:98hs8h8ss} and the lower bound $1-1/\Rank{A}\leq\rho$.
From Figure~\ref{fig:rand} we can see that the RK method converges despite the fact that the linear systems are rank deficient. While previous results do not guarantee that RK converges for rank-deficient matrices, our theory does as long as the system matrix has no zero rows. Furthermore, we observe that the lower the rank of the system matrix, the faster the convergence of the RK method, and moreover, the closer the empirical convergence is to the convergence dictated by the rate $\rho $ and lower bound on $\rho$. In particular, on the low rank system in Figure~\ref{fig:randa}, the empirical convergence is very close to both the convergence dictated by $\rho$ and the lower bound. On the full rank system in Figure~\ref{fig:randd}, by contrast, the convergence dictated by $\rho$ and the lower bound on $\rho$ are no longer accurate estimates of the empirical convergence. \begin{figure}[!h] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width = \textwidth, trim= 80 270 80 280, clip ]{Karczmarz-uniform-random300X300_r_40-bnbdist} \caption{$\Rank{A}= 40$}\label{fig:randa} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width = \textwidth, trim= 80 270 80 280, clip ]{Karczmarz-uniform-random300X300_r_80-bnbdist} \caption{$\Rank{A} = 80$}\label{fig:randb} \end{subfigure}\\% \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width = \textwidth, trim= 80 270 80 280, clip ]{Karczmarz-uniform-random300X300_r_160-bnbdist} \caption{$\Rank{A} = 160$}\label{fig:randc} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width = \textwidth, trim= 80 270 80 280, clip ]{Karczmarz-uniform-random300X300_r_300-bnbdist} \caption{$\Rank{A} = 300$}\label{fig:randd} \end{subfigure} \caption{Synthetic MATLAB-generated problems.
The rank-deficient matrix is $A~=~\sum_{i=1}^{\Rank{A}} \sigma_i u_i v_i^T $, where $\sum_{i=1}^{300} \sigma_i u_i v_i^T =$\texttt{rand}$(300,300)$ is the singular value decomposition of a $300\times 300$ uniform random matrix. We repeat each experiment ten times. The blue shaded region is the $90$th percentile of the relative error achieved in each iteration. }\label{fig:rand} \end{figure} \section{Conclusion} \label{sec:conclusion} We have developed a versatile and powerful algorithmic framework for solving linear systems: {\em stochastic dual ascent (SDA)}. In particular, SDA finds the projection of a given point, in a fixed but arbitrary Euclidean norm, onto the solution space of the system. Our method is dual in nature, but can also be described in terms of primal iterates via a simple affine transformation of the dual variables. Viewed as a dual method, SDA belongs to a novel class of randomized optimization algorithms: it updates the current iterate by adding the product of a random matrix, drawn independently from a fixed distribution, and a vector. The update is chosen as the best point lying in the random subspace spanned by the columns of this random matrix. While SDA is the first method of this type, particular choices for the distribution of the random matrix lead to several known algorithms: randomized coordinate descent \cite{Leventhal:2008:RMLC} and randomized Kaczmarz \cite{SV:Kaczmarz2009} correspond to a discrete distribution over the columns of the identity matrix, the randomized Newton method \cite{SDNA} corresponds to a discrete distribution over column submatrices of the identity matrix, and Gaussian descent \cite{S.U.StichC.L.Muller2014} corresponds to the case when the random matrix is a Gaussian vector. We equip the method with several complexity results with the same rate of exponential decay in expectation (aka linear convergence) and establish a tight lower bound on the rate.
In particular, we prove convergence of primal iterates, dual function values, primal function values, the duality gap and the residual. The method converges under very weak conditions beyond consistency of the linear system. In particular, no rank assumptions on the system matrix are needed. For instance, the randomized Kaczmarz method converges linearly as long as the system matrix contains no zero rows. Further, we show that SDA can be applied to the distributed (average) consensus problem. We recover a standard randomized gossip algorithm as a special case, and show that its complexity is proportional to the number of edges in the graph and inversely proportional to the smallest nonzero eigenvalue of the graph Laplacian. Moreover, we illustrate how our framework can be used to obtain new randomized algorithms for the distributed consensus problem. Our framework extends to several other problems in optimization and numerical linear algebra. For instance, one can apply it to develop new stochastic algorithms for computing the inverse of a matrix and obtain state-of-the-art performance for inverting matrices of huge sizes. \section{Acknowledgements} The second author would like to thank Julien Hendrickx from Universit\'{e} catholique de Louvain for a discussion regarding randomized gossip algorithms. {\small \printbibliography } \section{Appendix: Elementary Results Often Used in the Paper} We first state a lemma comparing the null spaces and range spaces of certain related matrices. While the result is of an elementary nature, we use it several times in this paper, which justifies its elevation to the status of a lemma. The proof is brief and hence we include it for completeness.
\begin{lemma}\label{lem:09709s}For any matrix $W$ and symmetric positive definite matrix $G$, \begin{equation}\label{eq:8ys98hs}\Null{W} = \Null{W^\top G W}\end{equation} and \begin{equation}\label{eq:8ys98h986ss}\Range{W^\top } = \Range{W^\top G W}.\end{equation} \end{lemma} \begin{proof} In order to establish \eqref{eq:8ys98hs}, it suffices to show the inclusion $\Null{W} \supseteq \Null{W^\top G W}$ since the reverse inclusion trivially holds. Letting $s\in \Null{W^\top G W}$, we see that $\|G^{1/2}Ws\|^2=0$, which implies $G^{1/2}Ws=0$. Therefore, $s\in \Null{W}$. Finally, \eqref{eq:8ys98h986ss} follows from \eqref{eq:8ys98hs} by taking orthogonal complements. Indeed, $\Range{W^\top}$ is the orthogonal complement of $\Null{W}$ and $\Range{W^\top G W}$ is the orthogonal complement of $\Null{W^\top G W}$. \qed \end{proof} \bigskip The following technical lemma is a variant of a standard result of linear algebra (which is recovered in the $B=I$ case). While the results are folklore and easy to establish, in the proof of our main theorem we need certain details which are hard to find in textbooks on linear algebra, and hence hard to refer to. For the benefit of the reader, we include the detailed statement and proof. \begin{lemma} [Decomposition and Projection]\label{eq:decomposition} Each $x\in \R^n$ can be decomposed in a unique way as $x = s(x) + t(x)$, where $s(x)\in \Range{B^{-1}A^\top}$ and $t(x)\in \Null{A}$. 
Moreover, the decomposition can be computed explicitly as \begin{equation} \label{eq:98hs8hss}s(x) = \arg \min_{s} \left\{ \|x-s\|_B \;:\; s\in \Range{B^{-1}A^\top} \right\}= B^{-1} Z_A x \end{equation} and \begin{equation} \label{eq:98hs8htt}t(x) = \arg \min_{t} \left\{ \|x-t\|_B \;:\; t\in \Null{A} \right\}= (I - B^{-1}Z_A) x,\end{equation} where \begin{equation}\label{eq:Z_A}Z_A\eqdef A^\top (AB^{-1}A^\top)^\dagger A.\end{equation} Hence, the matrix $B^{-1}Z_A$ is a projector in the $B$-norm onto $\Range{B^{-1}A^\top}$, and $I-B^{-1}Z_A$ is a projector in the $B$-norm onto $\Null{A}$. Moreover, for all $x\in \R^n$ we have $\|x\|_B^2 = \|s(x)\|_B^2 + \|t(x)\|_B^2$, with \begin{equation}\label{eq:iuhiuhpp}\|t(x)\|_B^2 = \|(I-B^{-1}Z_{A})x\|_B^2 = x^\top (B-Z_{ A}) x\end{equation} and \begin{equation}\label{eq:iuhiuhppss}\|s(x)\|_B^2 = \|B^{-1}Z_{A} x\|_B^2 = x^\top Z_{ A} x.\end{equation} Finally, \begin{equation}\label{eq:ugisug7sss}\Rank{A} = \Tr{B^{-1}Z_A}.\end{equation} \end{lemma} \begin{proof} Fix arbitrary $x\in \R^n$. We first establish existence of the decomposition. By Lemma~\ref{lem:09709s} applied to $W=A^\top$ and $G=B^{-1}$ we know that there exists $u$ such that $Ax = A B^{-1}A^\top u$. Now let $s = B^{-1}A^\top u$ and $t = x-s$. Clearly, $s\in \Range{B^{-1}A^\top}$ and $t\in \Null{A}$. For uniqueness, consider two decompositions: $x = s_1+ t_1$ and $x=s_2 + t_2$. Let $u_1,u_2$ be vectors such that $s_i = B^{-1}A^\top u_i$, $i=1,2$. Then $AB^{-1}A^\top(u_1-u_2)=0$. Invoking Lemma~\ref{lem:09709s} again, we see that $u_1-u_2\in \Null{A^\top}$, whence $s_1 = B^{-1}A^\top u_1 = B^{-1}A^\top u_2 = s_2$. Therefore, $t_1 = x - s_1 = x-s_2 = t_2$, establishing uniqueness. Note that $s = B^{-1}A^\top y$, where $y$ is any solution of the optimization problem \[\min_y \tfrac{1}{2}\|x-B^{-1}A^\top y\|_B^2.\] The first order necessary and sufficient optimality conditions are $Ax = AB^{-1}A^\top y$. 
In particular, we may choose $y$ to be the least norm solution of this system, which gives $y=(AB^{-1}A^\top)^\dagger Ax$, from which \eqref{eq:98hs8hss} follows. The variational formulation \eqref{eq:98hs8htt} can be established in a similar way, again via first order optimality conditions (note that the closed form formula \eqref{eq:98hs8htt} also directly follows from \eqref{eq:98hs8hss} and the fact that $t = x - s$). Next, since $x=s+t$ and $s^\top B t = 0$, \begin{equation}\label{eq:09u0hss} \|t\|_B^2=(t+s)^\top B t = x^\top B t \overset{\eqref{eq:98hs8htt}}{=} x^\top B (I-B^{-1}Z_{ A})x = x^\top (B - Z_{ A}) x \end{equation} and \[ \|s\|_B^2 = \|x\|_B^2 - \|t\|_B^2 \overset{\eqref{eq:09u0hss}}{=} x^\top Z_A x.\] It only remains to establish \eqref{eq:ugisug7sss}. Since $B^{-1}Z_A$ is a projector onto $\Range{B^{-1}A^\top}$ and since the trace of a projector is equal to the dimension of the space it projects onto, we have $\Tr{B^{-1}Z_A} = \dim(\Range{B^{-1}A^\top}) = \dim(\Range{A^\top}) = \Rank{A}$.\qed \end{proof} \end{document}
\section{$P_k$-factor game} \label{wPkf} \begin{proofof}{Theorem~\ref{weakPk}} If $k=2$ and $a$ is any constant, then $\pkf$ is a perfect matching and we can use the proof of Theorem~\ref{WeakPM}. So, we let $k\geq 3$ and we fix some $\delta < 1/(8k)$. We first describe Maker's strategy. Then we prove that she can follow it and win within $\left\lceil(k-1)n/(ka)\right\rceil$ rounds.\\ \noindent \textbf{Maker's strategy} is to build $n/k$ vertex disjoint paths of length $k-1$. During the course of the game, the collection of all paths in her graph is denoted by $\P$. Each path in $\P$ belongs to exactly one of the three classes: $\P_u$, which denotes the collection of \textit{unfinished} paths (the paths of length at most $k-3$), $\P_f$, which denotes the collection of the \textit{finished} paths (the paths of length exactly $k-2$), or $\P_c$, which denotes the collection of \textit{complete} paths (the paths of length exactly $k-1$). Maker's strategy consists of three stages. In Stage I of her strategy, Maker makes sure that every unfinished path becomes (at least) a finished path, while in the following stages she aims for complete paths. The set of isolated vertices in Maker's graph is denoted by $U=V\setminus V(\P)$. By $\End(\P)$ we denote the set of endpoints of all paths. At the beginning, $\P:=\P_u$ contains $n/k$ arbitrarily chosen vertices; $\P_f$ and $\P_c$ are empty. If $P$ is a path in Maker's graph, then $v_1^P$ and $v_2^P$ represent its endpoints. \\ \noindent \textbf{Stage I.} In this stage, Maker plays as follows: She gradually extends the unfinished paths with the vertices from $U$ until they are finished. From time to time, we allow her to complete some of these paths in order to keep control on the distribution of Breaker's edges (as described by properties (Q1)--(Q3) in the following paragraph). After each step, the sets $\P_u, \P_f, \P_c$ and $U$ are dynamically updated in the obvious way.
That is, whenever Maker extends one of her paths, $P$, by some vertex $u\in U$, this vertex is removed from $U$ and added to $P$, while $P$ may be moved from $\P_u$ to $\P_f$ or from $\P_f$ to $\P_c$ according to its new length. \\ During Stage I, for a given graph $G$, we say that $(G,\P)$ is {\em good} if the following properties hold: \begin{itemize} \item[$(Q1)$] $\forall u \in U : d_G(u,\End(\P_u\cup \P_f))<\delta n$; \item[$(Q2)$] $G[U] = \emptyset$; \item[$(Q3)$] $\forall P \in \P_u \cup \P_f: d_G(v_1^P,U)+d_G(v_2^P,U)\leq 1$. \end{itemize} In each move during this stage, Maker claims $a$ free edges between $U$ and $\End(\P_u \cup \P_f)$, so that after her move $(B,\P)$ is good. In her $i^{th}$ move, Maker chooses the edges $e_1=e_1^{(i)},\dots,e_a=e_a^{(i)}$ one after another. She makes sure that for every $t\in \{0,1,\ldots,a\}$ the following holds: \begin{itemize} \item[$(Q4)$] Immediately after the edges $e_1,\ldots,e_t$ are claimed (and the paths are updated accordingly), there is a subgraph $H=H_t\subseteq B$ with $e(H)=a-t$ such that $(B\setminus H,\P)$ is good. \end{itemize} Assume $e_1,\ldots,e_t$ are already claimed and $\P$ is updated accordingly. Then Maker chooses the next free edge $e_{t+1}$ according to the following rules: \begin{itemize} \item[\textbf{R1.}] If there is $u \in U$ with $d_B(u,\End(\P_u\cup \P_f))\geq \delta n$, then $e_{t+1}$ is chosen such that it extends a path from $\P_u$ by the vertex $u$, if $|\P_u|\geq a+ 1$, or a path from $\P_f$ by the vertex $u$, otherwise. \item[\textbf{R2.}] Otherwise, if there is a path $P \in \P_u$ with $d_{B}(v_1^P,U)+d_{B}(v_2^P,U)\geq 2$, then there is an edge $v_i^Px\in H_t$ with $x\in U$, $i\in [2]$. Maker claims an arbitrary free edge $e_{t+1}=v_i^Pu$ with $u\in U$. \item[\textbf{R3.}] Otherwise, if there is a path $P \in \P_f$ with $d_{B}(v_1^P,U)+d_{B}(v_2^P,U)\geq 2$, then there is an edge $v_i^Px\in H_t$ with $x\in U$, $i\in [2]$.
Then \begin{enumerate}[a)] \item if there is a path $P_0\in \P_u$, Maker claims an arbitrary free edge $e_{t+1}\in E(\End(P_0),x)$, \item otherwise, if $\P_u=\emptyset$, she claims an arbitrary free edge $e_{t+1}=v_i^Pu$ with $u\in U$. \end{enumerate} \item[\textbf{R4.}] Otherwise, if there is $uw\in E_B(U)$, w.l.o.g.\ $d_{B}(w,\End(\P_u\cup \P_f))\leq d_{B}(u,\End(\P_u\cup \P_f))$, then Maker proceeds as follows. \begin{enumerate}[a)] \item If $d_{B}(u,\End(\P_u\cup \P_f))= d_{B}(w,\End(\P_u\cup \P_f)) \geq \delta n-1$, then Maker chooses a free edge $e_{t+1}=ux$ with $x\in N_{B}(w,\End(\P_u\cup \P_f))$. Its existence is proved later. \item Otherwise, if $d_{B}(u,\End(\P_u\cup \P_f))\geq \delta n-1> d_{B}(w,\End(\P_u\cup \P_f))$, then let $P\in \P_u\cup \P_f$ be a path with $e_B(\End(P),u)=0$ and $d_B(v_i^P,U)=0$ for some $i\in [2]$. Its existence is proved later. Maker then sets $e_{t+1}=uv_{3-i}^P$. \item Otherwise, if $d_{B}(u,\End(\P_u\cup \P_f)), d_{B}(w,\End(\P_u\cup \P_f)) < \delta n-1$, let $P\in \P_u\cup \P_f$, giving priority to unfinished paths, and let $d_{B}(v_i^P,U)\geq d_{B}(v_{3-i}^P,U)$ for some $i\in [2]$. Then Maker claims one of the edges $v_i^Pu$, $v_i^Pw$. \end{enumerate} \item[\textbf{R5.}] Otherwise, in all the remaining cases, Maker extends a path which is not complete by some free edge $e_{t+1}$, arbitrarily, giving priority to unfinished paths. \end{itemize} Stage I ends when after Maker's move, her graph consists only of finished paths, complete paths and isolated vertices. At this point, $(n(k-2))/(ka)+T/{a}$ rounds are played with $T=|\P_c|$ being the number of complete paths at the end of Stage I.\\ \noindent \textbf{Stage II.} In the following $\left\lceil (n/k-T)/{a}\right\rceil-1$ rounds, Maker extends the finished paths. This time, Maker is interested in keeping the following property after each of her moves. \begin{itemize} \item[\textit{(F1)}] $\forall P \in \P_f: d_B(v_1^P,U)+d_B(v_2^P,U)\leq 1$.
\end{itemize} To describe Maker's strategy, we introduce the following terminology: Let $e\in E(\End(\P_f),U)$. Then $e$ is called {\em good} if it is free; otherwise we call it {\em bad}. We say that $P\in \P_f$ is a {\em bad path} if $d_B(v_1^P,U)+d_B(v_2^P,U)\geq 2$ holds, and with $\P_b\subseteq \P_f$ we denote the dynamic set of all bad paths. Moreover, we introduce the {\em potential} $$\varphi := \sum\limits_{P\in \P_b} (d_B(v_1^P,U)+d_B(v_2^P,U)-1),$$ which measures dynamically the number of edges that need to be deleted from $B$ in order to reestablish Property (F1). Finally, with $e_{end}$ we denote the very last edge claimed by Maker in Stage II. Now, in every round $i$ played in Stage II, Maker claims edges $e_1=e_1^{(i)},\ldots ,e_a=e_a^{(i)}$, one after another. The $j^{\text{th}}$ edge $e_j$ is claimed according to the following rules: \begin{itemize} \item Maker chooses a good edge $e_j$ between some vertex $u\in U$ and an endpoint of some path $P\in \P_f$ such that \begin{itemize} \item[(a)] in case $\varphi>0$, $\varphi$ is decreased after $e_j$ is claimed and all sets are updated, \item[(b)] if $e_j\neq e_{end}$, then $\End(P)\cup \{u\}$ contains a vertex of the largest degree in Breaker's graph among all vertices from $\End(\P_f)\cup U$, \item[(c)] if $e_j=e_a= e_{end}$, then after $e_j$ is claimed and all sets are updated, there is a path $P\in \P_f$ with $e_B(\End(P),U)=0$. \end{itemize} \item After $e_j$ is claimed, Maker removes $u$ from $U$ and $P$ from $\P_f$, and adds $P$ to $\P_c$, before she proceeds with $e_{j+1}$. \end{itemize} The exact details of how Maker finds such an edge $e_j$ will be given later in the proof.\\ {\bf Stage III.} Within one round, Maker claims at most $a$ free edges to complete a $P_k$-factor. The details are given later in the proof.\\ It is evident that if Maker can follow this strategy, she wins the game in the claimed number of rounds. 
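As a small illustration of the bookkeeping behind the potential $\varphi$ (the degree data below are hypothetical, not taken from an actual game), the following Python sketch computes $\varphi$ from the pairs $(d_B(v_1^P,U),d_B(v_2^P,U))$ of the finished paths:

```python
# Each finished path P is represented by the hypothetical pair
# (d_B(v1,U), d_B(v2,U)) of Breaker-degrees of its endpoints towards U.
# A path is bad if the pair sums to at least 2; phi counts, per bad path,
# the number of Breaker edges exceeding property (F1).
paths = [(0, 1), (2, 1), (0, 0), (1, 1)]   # hypothetical degree pairs
bad = [p for p in paths if sum(p) >= 2]    # the bad paths P_b
phi = sum(d1 + d2 - 1 for d1, d2 in bad)
print(phi)  # (2+1-1) + (1+1-1) = 3
```

This matches the role of $\varphi$ in the strategy: it is exactly the number of Breaker edges that must be discounted for (F1) to hold again.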
For each of the stages above we show separately that Maker can follow her strategy.\\ \noindent\textbf{Stage I.} We start with the following useful claim. \begin{claim} \label{clm:number_complete} As long as Maker follows Stage I, $|\P_c|\leq a+3/\delta < \delta n$. \end{claim} \begin{proof} Following the strategy, Maker only creates complete paths in case there is a vertex of degree at least $\delta n -1$ in Breaker's graph which is used to extend a finished path (cases {R1}, {R4.a,b}), or in case $\P_u=\emptyset$ (cases {R3.b}, {R4.c}, {R5}) holds. The first option happens less than $3/\delta$ times, as Breaker claims less than $n$ edges throughout Stage I. The second option can only happen in the last round of Stage I, which cannot lead to more than $a$ additional complete paths. \end{proof} \medskip Now, by induction on the number of rounds, $i$, we show that Maker can follow the proposed strategy of Stage I. We first observe that before the game starts, $(B,\P)$ is good, as $B$ is empty. Now, let us assume that she could follow the strategy for the first $i-1$ rounds and that immediately after her $(i-1)^{\text{st}}$ move, $(B,\P)$ is good. In particular, this also means that in the next round, Property (Q4) is guaranteed for $t=0$, by choosing $H=H_0$ to be the graph of all the $a$ edges that Breaker claims in round $i$. By induction on the number of Maker's steps in round $i$, we prove that she can claim the edges $e_1,\ldots,e_a$ as described, and that she ensures Property (Q4) for every $t\in \{0,\ldots,a\}$. Setting $t=a$ then tells us that $(B,\P)$ is good immediately after Maker's move, completing the induction on $i$.\\ Let us assume that Maker already claimed $e_1,\ldots,e_t$ and that (Q4) holds after step $t$. Let $H_t$ be the graph guaranteed by (Q4). We now look at the different cases for step $t+1$.
\begin{itemize} \item[\textbf{R1.}] In this case there must be an edge $g\in E(H_t)$ between $u$ and $\End(\P_u\cup \P_f)$, as $(B\setminus H_t,\P)$ satisfies Property $(Q1)$ after step $t$. Now, if $|\P_u|\geq a+ 1$, then there needs to be a path $P\in \P_u$ with $d_B(v_1^P,U)+d_B(v_2^P,U)\leq 1$, as $(B\setminus H_t,\P)$ satisfies (Q3) after step $t$ and $e(H_t)\leq a$. Otherwise, Claim \ref{clm:number_complete} ensures that $|\P_f|\geq a+1$ and thus there is a path $P\in \P_f$ with the same property. In either case, Maker can extend $P$ by $u$. Set $H_{t+1}:=H_t-g$. Then, after the update, $g\in E(\End(\P))$ holds and therefore $g$ has no influence on (Q1)--(Q3) anymore. Thus, using that $(B\setminus H_t,\P)$ satisfied (Q2) after step $t$, we conclude that $u$ has no edges towards $U$ in $B\setminus H_{t+1}$ after step $t+1$. Now, one easily checks that $(B\setminus H_{t+1},\P)$ is good after step $t+1$. \item[\textbf{R2.}] As $(B\setminus H_t,\P)$ satisfies (Q3) after step $t$, there needs to exist an edge $v_i^Px$ as claimed. Moreover, we have $|U|\geq |\P_u\cup \P_f|= \frac{n}{k} - |\P_c|>d_B(v_i^P,U)$ where the last inequality follows from Claim \ref{clm:number_complete} and the fact that $(B\setminus H_t,\P)$ satisfies (Q3) after step $t$. Thus, Maker can claim an edge $v_i^Pu$ as proposed. Afterwards, we set $H_{t+1}:=H_t-v_i^Px$. Then $u$ has no edge towards $U$ in $B\setminus H_{t+1}$ and the edge $v_i^Px$ has no influence on (Q1)--(Q3) anymore. We therefore conclude that $(B\setminus H_{t+1},\P)$ is good after step $t+1$. \item[\textbf{R3.}] The existence of $v_i^Px$ is given as in case {R2}. If there is a path $P_0\in \P_u$, then not both edges $v_1^{P_0}x,v_2^{P_0}x$ can be claimed by Breaker, as otherwise $P_0$ would force case {R2}. If otherwise $\P_u=\emptyset$, then analogously to the argument in case {R2}, we have $|U|>d_B(v_i^P,U)$. So, in either case, Maker can claim an edge as proposed by the strategy.
After the update of $\P$, we obtain analogously to the previous case that $(B\setminus H_{t+1},\P)$ is good with $H_{t+1}:=H_t-v_i^Px$. \item[\textbf{R4.}] As case {R1} does not occur, we know that $d_{B}(u,\End(\P_u\cup \P_f)),d_{B}(w,\End(\P_u\cup \P_f))<\delta n$. In particular, a)--c) cover all possible subcases. Moreover, as $(B\setminus H_t,\P)$ satisfies (Q2) after step $t$, we must have $uw\in E(H_t)$. \begin{enumerate}[a)] \item Assume there is no edge $ux$ as proposed by the strategy. Then there must be at least $ \delta n-1$ vertices in $\End(\P_u\cup \P_f)$ which have degree at least $|\{u,w\}|=2$ in $B$, contradicting the fact that $(B\setminus H_t,\P)$ satisfies (Q3) after step $t$ and $e(H_t)\leq a$. So, Maker can claim an edge $ux$ as proposed. In case $xw\in E(H_{t})$, we set $H_{t+1}:=H_t-xw$. Then, after the update $xw\notin E(\End(\P)\cup U)$ holds, i.e.\ this edge has no influence on (Q1)--(Q3). Moreover, $u$ has no edge towards $U$ in $B\setminus H_{t+1}$ after step $t+1$. Otherwise, we set $H_{t+1}:=H_t-uw$. To see that again $(B\setminus H_{t+1},\P)$ is good after step $t+1$, just observe the following: After step $t+1$, $u$ has exactly one neighbour in $U$ (namely $w$) in the graph $B\setminus H_{t+1}$. But, as $xw\in E(B\setminus H_t)$ and (Q3) was fulfilled by $B\setminus H_t$ after step $t$, we know that the other endpoint of the path $P$ has no edges towards $U$ in $B\setminus H_{t+1}$. Moreover, $d_{B\setminus H_{t+1}}(w,\End(\P_f\cup \P_u))\leq d_{B}(w,\End(\P_f\cup \P_u)) <\delta n$ is maintained, as in $B$, $w$ gains $u$ and loses $x$ as a neighbour in $\End(\P_f\cup \P_u)$. \item By Claim~\ref{clm:number_complete} and since $(B\setminus H_t,\P)$ is good after step $t$, we have that $|\P_u\cup \P_f|\geq \frac{n}{k}-\delta n>3a+d_{B\setminus H_t}(u,\End(\P_u\cup \P_f))\geq 2a+d_{B}(u,\End(\P_u\cup \P_f))$. In particular, there are at least $2a$ paths $P\in \P_u\cup \P_f$ with $e_B(\End(P),u)=0$.
As $e(H_t)\leq a$ and since $(B\setminus H_t,\P)$ satisfied (Q3) after step $t$, there needs to be such a path with $d_B(v_i^P,U)=0$ for some $i\in [2]$. Obviously Maker can claim the edge $v_{3-i}^Pu$, as $e_B(\End(P),u)=0$. Now, set $H_{t+1}:=H_t-uw$. Then to see that $(B\setminus H_{t+1},\P)$ is good after step $t+1$, just notice that $u$ has only one edge, $uw$, towards $U$ in $B\setminus H_{t+1}$, while $d_{B\setminus H_{t+1}}(v_i^P,U)=0$. Moreover, $d_B(w,\End(\P_u\cup \P_f))<\delta n$ is guaranteed by the assumption on $w$ for this case and since $w$ gains at most one new neighbour among $\End(\P)$, namely $u$. \item As the cases {R2} and {R3} do not occur, we have $d_B(v_{3-i}^P,U)=0$ and $d_B(v_{i}^P,U)\leq 1$. In particular, one of the edges $v_i^Pu,v_i^Pw$ is free, so Maker can follow the proposed strategy. W.l.o.g.\ let Maker claim $v_i^Pu$. Set $H_{t+1}:=H_t-uw$. Then after step $t+1$, $u$ has exactly one edge towards $U$ (namely $uw$) in $B\setminus H_{t+1}$, while $d_{B\setminus H_{t+1}}(v_{3-i}^P,U)=0$. Moreover, $d_B(w,\End(\P_u\cup \P_f))<\delta n$ is guaranteed as in the previous case. Therefore, we conclude analogously that $(B\setminus H_{t+1},\P)$ is good after step $t+1$. \end{enumerate} \item[\textbf{R5.}] As the cases {R1}--{R4} do not occur, $(B,\P)$ is good after step $t$. Therefore, Maker can easily follow the proposed strategy and afterwards $(B\setminus H_{t+1},\P)$ is good for every graph $H_{t+1}$. \end{itemize} So, in either case Maker can follow the proposed strategy for Stage I.\\ {\bf Stage II.} At any moment throughout Stage II, let $p:=|\P_f|$ and let $br$ denote the number of bad edges. By the definition of $\varphi$, it always holds that $br\leq p+\varphi$. Moreover, before every move of Maker in this stage, we have $p\geq a+1$, and therefore $p\geq 2$ holds immediately before $e_{end}$ is claimed.\\ By induction on the number of rounds, $i$, we now prove that Maker can follow the proposed strategy and always maintains (F1). 
Using (Q3), we can assume that (F1) was already satisfied after Maker's last move in Stage I. Now, assume that Maker's $i^{\text{th}}$ move happens in Stage II. As, by induction, (F1) was satisfied immediately after her previous move and as Breaker afterwards claimed only $a$ edges in his previous move, we know that $\varphi\leq a<p$ before Maker's move, and therefore $br\leq p+\varphi<2p$. We now observe that this relation can be maintained as long as Maker can follow her strategy. \begin{claim}\label{clm:StageII} Assume that Maker can follow her strategy. Then, after $e_j$ is claimed and $\P_f, U$ are updated accordingly, $\varphi<p$ and $br<2p$ hold. \end{claim} \begin{proof} For the induction, assume that $\varphi<p$ and $br<2p$ hold immediately after $e_{j-1}$ is claimed. When Maker claims $e_j$, two cases may occur. If $\varphi=0$ holds, then after $e_j$ is claimed, we still have $\varphi=0$ and $br\leq p+\varphi<2p$. Otherwise, we have $\varphi>0$, in which case Maker claims an edge that decreases the value of $\varphi$. As $p$ decreases by one within one step of Maker, we thus obtain that $\varphi<p$ and $br\leq p+\varphi<2p$ are satisfied after all updates. \end{proof} \medskip With this claim in hand, we can deduce that Maker can always follow the proposed strategy. Consider first that $e_j\neq e_{end}$. We note that each path in $\P_f$ and each vertex in $U$ is incident with $2p>br$ edges from $E(\End(\P_f),U)$. Thus each such path and each such vertex intersects at least one good edge. Let $v$ be a vertex of largest Breaker degree among all vertices in $\End(\P_f)\cup U$. Assume first that $v\in \End(P)$ for some $P\in\P_f$. If $\varphi=0$ or $P\in \P_b$, then Maker claims an arbitrary good edge $e_j\in E(\End(P),U)$, which exists as explained above. Just note that in case $P\in \P_b$, the value of $\varphi$ will be decreased, as $P$ gets removed from $\P_b$. Moreover, $v$ is contained in the path $P$ (after the update).
Otherwise, if $\varphi>0$ and $P\notin \P_b$, we find some bad path $P_0\neq P$ and some bad edge $u_0v_0$ with $u_0\in U$ and $v_0\in\End(P_0)$. Then Maker claims an edge $e_j=xu_0$, with $x\in \End(P)$ and $d_B(x,U)=0$. Such a vertex exists, as we assumed $P\notin \P_b$. Again this decreases $\varphi$ by (at least) one, as $d_B(v_1^{P_0},U)+d_B(v_2^{P_0},U)$ decreases by (at least) one after $u_0$ is removed from $U$; and again $v$ is contained in the updated path $P$. Assume then that $v\in U$. If $\varphi=0$, then Maker can choose $e_j$ to extend an arbitrary path in $\P_f$ by the vertex $v$, which is possible as there are no bad paths. So, assume that there is a path $P\in \P_b$. If there is a good edge in $E(v,\End(P))$, Maker claims such an edge and then $\varphi$ decreases as $P$ is removed from $\P_b$, and $v$ again is contained in the updated path $P$. If there is no such good edge, then Maker claims an arbitrary good edge between $v$ and some path $P_0\in \P_f\setminus \{P\}$. Then, $\varphi$ decreases as $e_B(v,\End(P))=2$ and $v$ gets removed from $U$, and $v$ finally belongs to the updated path $P_0$.\\ Consider then that $e_j=e_a=e_{end}$, and recall that $p\geq 2$ before Maker claims $e_{end}$. We know that $\varphi\leq a$ holds before Maker's first step in round $i$. As Maker decreased the value of $\varphi$ by at least one, in case $\varphi>0$, with every previous edge in this round, we know that $\varphi\leq 1$ immediately before she wants to claim $e_{end}$. W.l.o.g.\ let $\varphi =1$, and let $P_0$ be the unique bad path. Note that then $e_B(\End(P_0),U)=2$. Moreover, let $P\in \P_f\setminus \{P_0\}$. If $e_B(\End(P),U)=0$, then Maker extends $P_0$ by an arbitrary edge $e_{end}$ (which is possible as $br<2p$). Then, after the update, $\varphi=0$ holds as $P_0$ is removed from $\P_f$, and $P$ satisfies $e_B(\End(P),U)=0$. Otherwise, we have $e_B(\End(P),U)=1$ as $P\notin \P_b$, and thus there is a unique vertex $u$ such that $e_B(\End(P),u)=1$.
If $e_B(\End(P_0),u)\leq 1$ holds, then Maker claims an edge $e_{end}=uv_i^{P_0}$ with $i\in [2]$. Afterwards, $P$ satisfies $e_B(\End(P),U)=0$ as $u$ gets removed from $U$; $\varphi=0$ holds as $P_0$ is removed from $\P_b$. Otherwise, we have $e_B(u,\End(P_0))=2=e_B(U,\End(P_0))$ as $\varphi=1$. In this case Maker just claims a free edge $e_{end}=uv_i^P$ with $i\in [2]$ (which is possible as $e_B(\End(P),u)=1$). Then, after $u$ is removed from $U$, $P_0$ satisfies $e_B(\End(P_0),U)=0$ and is not bad anymore, i.e.\ $\varphi=0$.\\ In total, we see that in either case Maker can claim the edges $e_j$ as proposed. Finally, Property (F1) always holds immediately after $e_a$ is claimed. For this just recall that $\varphi\leq a$ holds immediately before a move of Maker, and that Maker reduces $\varphi$ by at least one in each step as long as $\varphi>0$ holds. That is, we obtain $\varphi=0$ at the end of her move, which makes Property (F1) hold.\\ {\bf Stage III.} Finally, we prove that Maker can finish a $P_k$-factor within one additional round. We start with the following claim. \begin{claim} \label{clm:StageIII} Before Maker's move in Stage III, $d_B(v)<2\delta n$ holds for every $v\in (U \cup \End(\P_f))$. \end{claim} \begin{proof} Suppose that the statement does not hold, i.e.\ there exists a vertex $ v\in U \cup \End(\P_f)$ such that $d_B(v)\geq 2\delta n$. Then, during the last $\left\lfloor 3/\delta \right\rfloor$ rounds of Stage II, $d_B(v)>\delta n$. However, in each step of these rounds (except when claiming $e_{end}$), Maker included a vertex $w$ into some complete path for which $d_B(w)\geq d_B(v)>\delta n$ was satisfied (see (b)). But then, Breaker would have claimed more than $n$ edges, a contradiction to the number of edges he could claim in all rounds so far. \end{proof} \medskip With this claim in hand, we are able to describe how to finish a $P_k$-factor within one further round.
We distinguish between the following cases depending on the size of $U$.\\ \textbf{Case 1: $\mathbf{0<|U|\leq a/2}$.} We denote the isolated vertices in $U$ by $u_1,u_2,\ldots, u_t$, and the paths in $\P_f$ by $P_1,\dots,P_t$. Using Claim \ref{clm:StageIII}, for every $i\in [t]$, we find at least $|\P_c|-4\delta n>n/(4k)$ paths $R\in \P_c$ such that $u_iv_1^{R}, v_2^{R}v_1^{P_i}$ are free. We thus can fix $t$ distinct paths $R_1,\ldots,R_t\in \P_c$ such that, for every $i\in [t]$, the edges $u_iv_1^{R_i}, v_2^{R_i}v_1^{P_i}$ are free. Maker claims these edges, in total at most $a$, and by this completes a $\pkf$, as $V(R_i)\cup V(P_i)\cup \{u_i\}$ contains a copy of $P_{2k}$ for every $i\in [t]$. \textbf{Case 2: $\mathbf{a/2<|U|\leq a}$.} Let $\G=(\P_f\cup U,E(\G))$ be the bipartite graph where two vertices $u\in U$ and $P\in \P_f$ form an edge if and only if $d_B(u,\End(P))\leq 1$. Then $e(\overline{\cal G})\leq a-1$ holds, as after Maker's last move in Stage II we had $e_B(\End(\P_f),U)\leq a-1$ (by (F1) and (c)), while Breaker afterwards claimed at most $a$ bad edges. By the theorem of K\"onig--Egerv\'ary (see e.g. \cite{West}) we thus obtain that $\cal G$ contains a matching of size at least $$\frac{|U|^2-e(\overline{\cal G})}{|U|}>\begin{cases} |U|-1, & \text{if $|U|=a$}\\ |U|-2, & \text{if $a/2<|U|<a$.}\\ \end{cases}$$ So, in case $|U|=a$, we have that ${\cal G}$ contains a perfect matching, say $u_1P_1, u_2P_2,\ldots, u_aP_a$. Then Maker claims a good edge in $E(u_i,\End(P_i))$ for every $i\in [a]$, and by this creates a copy of $\pkf$. Otherwise, in case $a/2<|U|\leq a-1$, we find a matching of size $|U|-1$, say $u_1P_1, u_2P_2,\ldots, u_{|U|-1}P_{|U|-1}$. Maker then claims a good edge in $E(u_i,\End(P_i))$ for every $i\in [|U|-1]$, in total at most $a-2$ edges.
For the remaining (unique) vertices $u\in U$ and $P\in \P_f$ that are not covered by the matching, we proceed as in Case 1: we find a path $R\in \P_c$ such that the edges $uv_1^{R}, v_2^{R}v_1^{P}$ are free, which Maker then claims to complete a $P_k$-factor. \end{proofof}
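As a side note, the matching bound invoked above via K\"onig--Egerv\'ary, namely a matching of size at least $(|U|^2 - e(\overline{\cal G}))/|U|$ in a bipartite graph with parts of equal size $|U|$, can be sanity-checked computationally. The sketch below is illustrative only and not part of the proof; the random graph model, part size, and edge probability are invented for the test.

```python
import random

# Check: a bipartite graph with parts of equal size u has a matching of
# size at least (u^2 - e(complement)) / u, the Konig-Egervary-type bound
# used in Case 2 above.  max_matching is Kuhn's augmenting-path algorithm.
def max_matching(u, adj):
    match = [-1] * u                       # match[b] = left vertex matched to b

    def augment(a, seen):
        for b in adj[a]:
            if b not in seen:
                seen.add(b)
                if match[b] == -1 or augment(match[b], seen):
                    match[b] = a
                    return True
        return False

    return sum(augment(a, set()) for a in range(u))

random.seed(1)
for _ in range(100):
    u = 6
    adj = [[b for b in range(u) if random.random() < 0.6] for _ in range(u)]
    e_complement = u * u - sum(len(row) for row in adj)
    assert max_matching(u, adj) >= (u * u - e_complement) / u
```

The bound follows from the deficiency form of Hall's theorem: a set $S$ of left vertices with deficiency $d$ forces at least $|U| \cdot d$ edges into the complement.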
{"config": "arxiv", "file": "1602.02495/chapter/weakPk_new.tex"}
TITLE: Can we use algebra and trig identities to show that there exist $x,y \in \mathbb{R}$ such that $\cos(x)+i\sin(x)$ = $\frac{1+i\tan(y)}{1-i\tan(y)}$? QUESTION [0 upvotes]: Do there exist $x,y \in \mathbb{R}$ such that $\cos(x)+i\sin(x)$ = $\frac{1+i\tan(y)}{1-i\tan(y)}$? I think this is true because it can be shown that for any $z \in \mathbb{C}$ such that $|z|=1$ we have that $z = \frac{1+it}{1-it}$ for some $t \in \mathbb{R}$ (Show that for all complex number $z \not= -1$ and $|z|=1$ can be written as a form : $z= \frac{1+it}{1-it}$, $t \in \mathbb{R}$) and also that $z = \cos(x) + i\sin(x)$ for some $x \in [0,2\pi)$. Then, since tangent gives a bijection between $(\frac{-\pi}{2},\frac{\pi}{2})$ and $\mathbb{R}$ there exists $y \in (\frac{-\pi}{2},\frac{\pi}{2})$ such that $\tan(y) = t$. And so $$\cos(x)+i\sin(x) = z = \frac{1+it}{1-it} = \frac{1+i\tan(y)}{1-i\tan(y)}.$$ Is there a slick way to prove this with trig identities and a nice relation between $x$ and $y$? Thanks. edit: I worded my question wrong. I guess I should have asked if there exists a function $f$ such that $\cos(x)+i\sin(x)$ = $\frac{1+i\tan(f(x))}{1-i\tan(f(x))}$ for all $x \in [0, 2\pi)$? REPLY [3 votes]: $$ e^{ix} =\cos x+i\sin x =\frac{1+i\tan y}{1-i\tan y} =\frac{\cos y (1+i\tan y)}{\cos y (1-i\tan y)} =\frac{\cos y+i\sin y}{\cos y-i\sin y} \\ =\frac{e^{iy}}{e^{-iy}} =e^{2iy} $$ So, $ix\equiv 2iy\bmod{2\pi i}$. This means $x\equiv 2y \bmod{2\pi}$, i.e.\ $x=2y+2n\pi\,(n\in\mathbb Z)$.
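The derivation above can also be checked numerically; the sketch below (sample values of $y$ chosen arbitrarily) verifies that $(1+i\tan y)/(1-i\tan y)=e^{2iy}$, so $x=2y$ works.

```python
import cmath
import math

# Verify (1 + i*tan y)/(1 - i*tan y) = e^{2iy} for a few real y
# (any y with tan defined; the sample values are arbitrary).
for y in (0.1, 0.7, -1.2, 1.5):
    t = math.tan(y)
    lhs = (1 + 1j * t) / (1 - 1j * t)
    assert abs(lhs - cmath.exp(2j * y)) < 1e-12
    x = 2 * y                       # the relation x = 2y from the answer
    assert abs(complex(math.cos(x), math.sin(x)) - lhs) < 1e-12
```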
{"set_name": "stack_exchange", "score": 0, "question_id": 4341508}
\begin{document} \title{Polynomial maps on vector spaces over a finite field} \author{Michiel Kosters} \address{Mathematisch Instituut, P.O. Box 9512, 2300 RA Leiden, The Netherlands} \email{mkosters@math.leidenuniv.nl} \urladdr{www.math.leidenuniv.nl/~mkosters} \date{\today} \thanks{This article is part of my PhD thesis written under the supervision of Hendrik Lenstra.} \subjclass[2010]{11T06} \begin{abstract} Let $l$ be a finite field of cardinality $q$ and let $n$ be in $\Z_{\geq 1}$. Let $f_1,\ldots,f_n \in l[x_1,\ldots,x_n]$, not all constant, and consider the evaluation map $f=(f_1,\ldots,f_n) \colon l^n \to l^n$. Set $\deg(f)=\max_i \deg(f_i)$. Assume that $l^n \setminus f(l^n)$ is not empty. We will prove \begin{align*} |l^n\setminus f(l^n)| \geq \frac{n(q-1)}{\deg(f)}. \end{align*} This improves previously known bounds. \end{abstract} \maketitle \section{Introduction} The main result of \cite{WAN3} is the following theorem. \begin{theorem} \label{27c13} Let $l$ be a finite field of cardinality $q$ and let $n$ be in $\Z_{\geq 1}$. Let $f_1,\ldots,f_n \in l[x_1,\ldots,x_n]$, not all constant, and consider the map $f=(f_1,\ldots,f_n) \colon l^n \to l^n$. Set $\deg(f)=\max_i \deg(f_i)$. Assume that $l^n \setminus f(l^n)$ is not empty. Then we have \begin{align*} |l^n\setminus f(l^n)| \geq \min\left\{\frac{n(q-1)}{\deg(f)},q\right\}. \end{align*} \end{theorem} We refer to \cite{WAN3} for a nice introduction to this problem, including references and historical remarks. The proof in \cite{WAN3} relies on $p$-adic liftings of such polynomial maps. We give a proof of a stronger statement using different techniques. \begin{theorem} \label{27c14} Under the assumptions of Theorem \ref{27c13} we have \begin{align*} |l^n\setminus f(l^n)| \geq \frac{n(q-1)}{\deg(f)}. \end{align*} \end{theorem} We deduce the result from the case $n=1$ by putting a field structure $k$ on $l^n$ and relating the $k$-degree to the $l$-degree. We prove the case $n=1$ in a similar way as in \cite{TUR}.
\section{Degrees} Let $l$ be a finite field of cardinality $q$ and let $V$ be a finite dimensional $l$-vector space. By $V^{\vee}=\Hom(V,l)$ we denote the dual of $V$. Let $v_1,\ldots,v_f$ be a basis of $V$. By $x_1,\ldots,x_f$ we denote its dual basis in $V^{\vee}$, that is, $x_i$ is the map which sends $v_j$ to $\delta_{ij}$. Denote by $\Sym_l(V^{\vee})$ the symmetric algebra of $V^{\vee}$ over $l$. We have an isomorphism $l[x_1,\ldots,x_f] \to \Sym_l(V^{\vee})$ mapping $x_i$ to $x_i$. Note that $\Map(V,l)=l^V$ is a commutative ring under coordinate-wise addition and multiplication, and it is an $l$-algebra. The linear map $V^{\vee} \to \Map(V,l)$ induces, by the universal property of $\Sym_l(V^{\vee})$, a ring morphism $\varphi \colon \Sym_l(V^{\vee}) \to \Map(V,l)$. When choosing a basis, we have the following commutative diagram, where the second horizontal map is the evaluation map and the vertical maps are the natural isomorphisms: \[ \xymatrix{ \Sym(V^{\vee}) \ar[r]^{\varphi} & \Map(V,l) \\ l[x_1,\ldots,x_f] \ar[u] \ar[r] & \Map(l^f,l). \ar[u] } \] \begin{lemma} \label{27c71} The map $\varphi$ is surjective. After a choice of a basis as above the kernel is equal to $(x_i^q-x_i: i=1,\ldots,f)$ and every $g \in \Map(V,l)$ has a unique representative $\sum_{m=(m_1,\ldots,m_f):\ 0 \leq m_i \leq q-1} c_m x_1^{m_1}\cdots x_f^{m_f}$ with $c_m \in l$. \end{lemma} \begin{proof} After choosing a basis, we just consider the map \begin{align*} l[x_1,\ldots,x_f] \to \Map(l^f,l). \end{align*} For $c=(c_1,\ldots,c_f) \in l^f$ set \begin{align*} f_c = \prod_{i} (1-(x_i-c_i)^q). \end{align*} For $c' \in l^f$ we have $f_c(c')=\delta_{cc'}$. With these building blocks one easily shows that $\varphi$ is surjective. For $i \in \{1,2,\ldots,f\}$ the element $x_i^q-x_i$ is in the kernel of $\varphi$. This shows that modulo the kernel any $g \in l[x_1,\ldots,x_f]$ has a representative \begin{align*} g = \sum_{m=(m_1,\ldots,m_f):\ 0 \leq m_i \leq q-1} c_m x_1^{m_1}\cdots x_f^{m_f}.
\end{align*} The set of such elements has cardinality $q^{q^f}$. As $\# \Map(V,l)=q^{q^f}$, we see that the kernel is $(x_i^q-x_i: i=1,\ldots,f)$. Furthermore, any element has a unique representative as described above. \end{proof} Note that $\Sym_l(V^{\vee})$ is a graded $l$-algebra where we say that $0$ has degree $-\infty$. For $f \in \Map(V,l)$ we set \begin{align*} \deg_l(f)=\mathrm{min}\left(\deg(g):\varphi(g)=f\right). \end{align*} Note that $\deg_l(f_1+f_2) \leq \max(\deg_l(f_1),\deg_l(f_2))$, with equality if the degrees are different. In practice, if $f \in l[x_1,\ldots,x_n]$, then $\deg_l(f)$ is calculated as follows: for all $i$ replace $x_i^q$ by $x_i$ until $\deg_{x_i}(f)<q$. Then the degree is the total degree of the remaining polynomial. Let $W$ be a finite dimensional $l$-vector space. Then we have $\Map(V,W)=W \otimes_l \Map(V,l)$. For $f \in \Map(V,W)$ we set \begin{align*} \deg_l(f) = \max \left( \deg_l(g \circ f): g \in W^{\vee} \right). \end{align*} If $g_1,\ldots,g_n$ is a basis of $W^{\vee}$, then $\deg_l(f)= \max \left( \deg_l(g_i \circ f): i =1,\ldots,n \right)$. This follows from the identity $\deg_l( \sum_{i} c_i g_i) \leq \max( \deg_l(g_i))$ for $c_i \in l$. Note that the degree is bounded above by $(q-1) \cdot \dim_l(V)$. For $i \in \Z_{\geq 0}$ and a subset $S$ of $\Sym_l(V^{\vee})$ we set \begin{align*} S^i_l = \Span_l( s_1 \cdot \ldots \cdot s_i: s_1,\ldots,s_i \in S) \subseteq \Sym_l(V^{\vee}). \end{align*} \begin{lemma} \label{27c90} Let $f \in \Map(V,W)$. For $i \in \Z_{\geq 0}$ one has: $\deg_l(f) \leq i$ $\iff$ $f \in W \otimes_l (l+V^{\vee})^i_l$. \end{lemma} \begin{proof} Suppose first that $W=l$. The proof comes down to showing the following identity for $i \in \Z_{\geq 0}$: \begin{align*} l+V^{\vee}+\ldots+ \left(V^{\vee} \right)^i_l = (l+V^{\vee})^i_l. \end{align*} The general case follows easily. \end{proof} \section{Relations between degrees} Let $k$ be a finite field and let $l$ be a subfield of cardinality $q$. Set $h=[k:l]$.
Let $V$ and $W$ be finite dimensional $k$-vector spaces. Let $f \in \Map(V,W)$. In this section we will describe the relation between the $k$-degree and the $l$-degree. Let us first assume that $W=k$. Let $v_1,\ldots,v_r$ be a basis of $V$ over $k$. Let $R=k[x_1,\ldots,x_r]/(x_1^{q^h}-x_1,\ldots,x_r^{q^h}-x_r)$. We have the following diagram where all morphisms are ring morphisms. Here $\psi$ is the map discussed before, $\tau$ is the natural isomorphism, $\overline{\varphi}$ is the isomorphism discussed before, and $\sigma$ is the isomorphism, depending on the basis, discussed above. \[ \xymatrix{ k \otimes_l \Map(V,l) \ar[r]^{\tau} & \Map(V,k) &\\ k \otimes_l \Sym_l(\Hom_l(V,l)) \ar[u]^{\psi} & \Sym_k(\Hom_k(V,k))/\ker(\varphi) \ar[u]_{\overline{\varphi}} \ar[r]^{\ \ \ \ \ \ \ \ \ \ \ \ \sigma} & R. } \] Consider the ring morphism $\rho= \sigma \circ \overline{\varphi}^{-1} \circ \tau \circ \psi \colon k \otimes_l \Sym_l(\Hom_l(V,l)) \to R$. Lemma \ref{27c90} suggests that to compare degrees, we need to find \begin{align*} \rho(k \otimes_l (l+\Hom_l(V,l))^i_l). \end{align*} The following lemma says that it is enough to find $k+k \otimes_l \Hom_l(V,l)$. \begin{lemma} \label{27c91} For $i \in \Z_{\geq 0}$ we have the following equality in $k \otimes_l \Sym_l(\Hom_l(V,l))$: \begin{align*} k \otimes_l (l + \Hom_l(V,l))^i_l = \left(k+k \otimes_l \Hom_l(V,l) \right)^i_k. \end{align*} \end{lemma} \begin{proof} Both are $k$-vector spaces and the inclusions are not hard to see. \end{proof} The following lemma identifies $k+k \otimes_l \Hom_l(V,l)$. \begin{lemma} \label{27c92} One has \begin{align*} \rho(k+k \otimes_l \Hom_l(V,l)) = \Span_k \left(\{ x_j^{q^s}: 1 \leq j \leq r,\ 0 \leq s <h \} \sqcup \{1\} \right). \end{align*} \end{lemma} \begin{proof} Note that $\tau \circ \psi (k+k \otimes_l \Hom_l(V,l))= k \oplus \Hom_l(V,k) \subseteq \Map(V,k)$.
Note that \begin{align*} \sigma^{-1} \left(\Span_k \left(\{ x_j^{q^s}: 1 \leq j \leq r,\ 0 \leq s <h \} \right) \right)\subseteq \Hom_l(V,k) . \end{align*} As both sets have dimension $\dim_l(V)=r \cdot h$ over $k$, the result follows. \end{proof} For $m, n \in \Z_{\geq 1}$ we set $s_m(n)$ to be the sum of the digits of $n$ in base $m$. \begin{lemma} \label{27c93} Let $m \in \Z_{\geq 2}$ and $n, n' \in \Z_{\geq 0}$. Then the following hold: \begin{enumerate} \item $s_m(n+n') \leq s_m(n)+s_m(n')$; \item Suppose $n=\sum_{i} c_i m^i$, $c_i \geq 0$. Then we have $\sum_i c_i \geq s_m(n)$, with equality iff for all $i$ we have $c_i<m$. \end{enumerate} \end{lemma} \begin{proof} i. This is well-known and left to the reader. ii. We give a proof by induction on $n$. For $n=0$ the result is correct. Suppose first that $n=c_s m^s$ and assume that $c_s \geq m$. Then we have $n=(c_s-m)m^s+m^{s+1}$. By induction and i we have \begin{align*} c_s > c_s-m + 1 \geq s_m((c_s-m)m^s)+s_m(m^{s+1}) \geq s_m(c_s m^s). \end{align*} In general, using i, we find \begin{align*} \sum_i c_i \geq \sum_i s_m(c_i m^i) \geq s_m(n). \end{align*} Also, one easily sees that one has equality iff all $c_i$ are smaller than $m$. \end{proof} \begin{proposition} \label{27c33} Let $f \in k[x_1,\ldots,x_r]$ be nonzero such that every monomial has degree less than $q^h$ in each $x_i$. Write $f= \sum_{s=(s_1,\ldots,s_r)} c_s x_1^{s_1}\cdots x_r^{s_r}$. Then the $l$-degree of $\tau^{-1} \circ \overline{\varphi} \circ \sigma^{-1}(\overline{f}) \in k \otimes_l \Map(V,l)$ is equal to \begin{align*} \max\{s_q(s_1)+\ldots+s_q(s_r): s=(s_1,\ldots,s_r) \textrm{ s.t. } c_s \neq 0\}. \end{align*} \end{proposition} \begin{proof} Put $g=\tau^{-1} \circ \overline{\varphi} \circ \sigma^{-1} (\overline{f})$. From Lemma \ref{27c90}, Lemma \ref{27c91} and Lemma \ref{27c92} we obtain the following. Let $i \in \Z_{\geq 0}$.
Then $\deg_l(g) \leq i$ iff \begin{align*} g \in \rho(k \otimes_l (l + \Hom_l(V,l))^i_l) &= \rho (\left(k+k \otimes_l \Hom_l(V,l) \right)^i_k) \\ &= \left(\Span_k \left(\{ x_j^{q^s}: 1 \leq j \leq r,\ 0 \leq s <h \} \sqcup \{1\} \right) \right)_k^i. \end{align*} The result follows from Lemma \ref{27c93}. \end{proof} The case of a general $W$ follows by decomposing $W$ into a direct sum of copies of $k$ and then taking the maximum of the corresponding degrees. \section{Proof of main theorem} \begin{lemma} \label{27c67} Let $m , q, h \in \Z_{>0}$ with $q \geq 2$ and suppose that $q^h-1\mid m$. Then we have: $s_q(m) \geq h(q-1)$. \end{lemma} \begin{proof} We give a proof by induction on $m$. Suppose that $m<q^h$. Then $m=q^h-1$ and we have $s_q(m)=h(q-1)$. Suppose $m \geq q^h$. Write $m=m_0q^h+m_1$ with $0 \leq m_1<q^h$ and $m_0 \geq 1$. We claim that $q^h-1\mid m_0+m_1$. Indeed, $ m_0+m_1 \equiv m_0q^h+m_1 \equiv 0 \pmod{q^h-1}$. Then by induction we find \begin{align*} s_q(m)=s_q(m_0)+s_q(m_1) \geq s_q(m_0+m_1) \geq h(q-1). \end{align*} \end{proof} \begin{lemma} \label{27c89} Let $k$ be a finite field of cardinality $q'$. Let $R=k[X_{a}: a \in k]$ and consider the action of $k^{*}$ on $R$ given by \begin{align*} k^* &\to \Aut_{k-\alg}(R) \\ c &\mapsto (X_a \mapsto X_{ca}). \end{align*} Let $F \in R$ be fixed by the action of $k^*$ with $F(0,\ldots,0)=0$ and such that the degree of no monomial of $F$ is a multiple of $q'-1$. Then for $w=(a)_{a \in k} \in k^k$ we have $F(w)=0$. \end{lemma} \begin{proof} We may assume that $F$ is homogeneous of degree $d=\deg(F)$, which is not a multiple of $q'-1$. Take $\lambda \in k^*$ a generator of the cyclic group. As $F$ is fixed by $k^*$ we find: \begin{align*} F(w) = F( \lambda w)=\lambda^{d} F(w). \end{align*} As $\lambda^d \neq 1$, we have $F(w)=0$ and the result follows. \end{proof} Finally we can state and prove a stronger version of Theorem \ref{27c14}. \begin{theorem} \label{27c18} Let $k$ be a finite field.
Let $l \subseteq k$ be a subfield with $[k:l]=h$ and let $V$ be a finite dimensional $k$-vector space. Let $f \in \Map(V,V)$ be a non-constant and non-surjective map. Then $f$ misses at least \begin{align*} \frac{\dim_k(V) \cdot h \cdot (\#l-1)}{\deg_l(f)} \end{align*} values. \end{theorem} \begin{proof} Set $\#l =q$. Put a $k$-linear multiplication on $V$ such that it becomes a field. This allows us to reduce to the case where $V=k$. Assume $V=k$. After shifting we may assume $f(0)=0$. Put an ordering $\leq$ on $k$. In $k[T]$ we have \begin{align*} \prod_{a \in k} (1-f(a)T) = 1 - \sum_{a} f(a)T+ \sum_{a<b} f(a)f(b)T^2 - \ldots = \sum_{i} a_i T^i. \end{align*} For $1 \leq i < \frac{h(q-1)}{\deg_l(f)}$ we claim that $a_i=0$. Let $f_0 \in k[x]$ be a polynomial of degree at most $q^h-1$ inducing $f \colon k \to k$. Consider $g_i=\sum_{a_1<\ldots<a_i} f_0(X_{a_1})\cdots f_0(X_{a_i})$ in $k[X_{a}: a \in k]$, which is fixed by $k^*$. We have a map \begin{align*} \varphi \colon k[X_a: a \in k] \to \Map(k^k,k). \end{align*} Proposition \ref{27c33} gives us that $\deg_l(\varphi(g_i)) \leq i \deg_l(f)<h (q-1)$. We claim that there is no monomial in $g_i$ with degree a multiple of $q^h-1$. Indeed, suppose that there is a monomial $c X_{a_1}^{r_1}\cdots X_{a_i}^{r_i}$ ($c \neq 0$) in $g_i$ (note that not all $r_j$ are zero) and suppose that $q^h-1 \mid \sum_j r_j$. Then by Lemma \ref{27c67} and Proposition \ref{27c33} we have \begin{align*} h(q-1) \leq s_q(\sum_j r_j) \leq \sum_j s_q(r_j) \leq i \cdot \deg_l(f) < h(q-1), \end{align*} a contradiction. Hence we can apply Lemma \ref{27c89} to conclude that $a_i=0$. Hence we conclude \begin{align*} \prod_{a \in k} (1-f(a)T) &\equiv 1 \pmod{T^{\lceil \frac{h(q-1)}{\deg_l(f)} \rceil}}. \end{align*} Similarly, for the identity function of $l$-degree $1$, we conclude \begin{align*} \prod_{a \in k} (1-aT) &\equiv 1 \pmod{T^{\lceil \frac{h(q-1)}{\deg_l(f)} \rceil}}.
\end{align*} Combining this gives: \begin{align*} \prod_{a \in k \setminus f(k)} (1-aT) &= \frac{\prod_{a \in k}(1-aT)}{\prod_{b \in f(k)}(1-bT)} \\ &\equiv \frac{\prod_{a \in k}(1-aT)}{\prod_{b \in f(k)}(1-bT)} \cdot \prod_{c \in k} (1-f(c)T) \pmod{T^{\lceil \frac{h(q-1)}{\deg_l(f)} \rceil}}\\ &\equiv \prod_{b \in f(k)} (1-bT)^{\#f^{-1}(b)-1} \pmod{T^{\lceil \frac{h(q-1)}{\deg_l(f)} \rceil}}. \end{align*} Note that the polynomials $\prod_{a \in k \setminus f(k)} (1-aT)$ and $\prod_{b \in f(k)} (1-bT)^{\#f^{-1}(b)-1}$ have degree bounded by $s=\#\left(k \setminus f(k)\right)$ and are different since $s \geq 1$. But this implies that $s \geq \frac{h(q-1)}{\deg_l(f)}$. \end{proof} \begin{remark} Different $l$ in Theorem \ref{27c18} may give different lower bounds. \end{remark} \section{Examples} In this section we will give examples which meet the bound from Theorem \ref{27c14}. \begin{example}[$n=\deg(f)$] Let $l$ be a finite field of cardinality $q$. In this example we will show that for $n \in \Z_{\geq 2}$ and $d=n$ there are functions $f_1,\ldots,f_n \in l[x_1,\ldots,x_n]$ with maximum degree equal to $d$ such that the induced map $f \colon l^n \to l^n$ satisfies $|l^n \setminus f(l^n)|=\frac{n(q-1)}{d}=q-1$. For $i=1,\ldots,n-1$ set $f_i=x_i$. Let $l_{n-1}$ be the unique extension of $l$ of degree $n-1$. Let $v_1,\ldots,v_{n-1}$ be a basis of $l_{n-1}$ over $l$. Then $g=\Norm_{l_{n-1}/l}(x_1v_1+\ldots+x_{n-1} v_{n-1})$ is a homogeneous polynomial of degree $n-1$ in $x_1,\ldots,x_{n-1}$. Put $f_n=x_n \cdot g$. As the norm of a nonzero element is nonzero, one easily sees that $l^n \setminus f(l^n)=\{0\} \times \cdots \times \{0\} \times l^*$ has cardinality $q-1$. \end{example} \begin{example}[$n=\frac{\deg(f)}{q-1}$] Let $l$ be a finite field and let $n \in \Z_{\geq 1}$. Let $f_1,\ldots,f_n \in l[x_1,\ldots,x_n]$ be such that the combined map $f \colon l^n \to l^n$ satisfies $|l^n \setminus f(l^n)|=1$ (Lemma \ref{27c71}).
From Theorem \ref{27c14} and the upper bound $n(q-1)$ for the degree we deduce that $\deg(f)=n(q-1)$. \end{example}
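The first example (and the bound of Theorem \ref{27c14}) can be verified by brute force for small prime fields. The sketch below takes the $n=2$ instance over $l=\mathbb{F}_p$, where the norm form reduces to $g=x_1$ and hence $f=(x_1,\,x_1x_2)$ with $\deg(f)=2$; the function name and parameters are ours, not the paper's.

```python
import itertools

# Brute-force check of |l^n \ f(l^n)| for the example f = (x1, x1*x2)
# over l = F_p (p prime): deg(f) = 2 = n, so the theorem's bound is
# n(q-1)/deg(f) = q-1, and the example should attain it exactly.
def missed_values(p, n=2):
    image = set()
    for v in itertools.product(range(p), repeat=n):
        image.add(v[:-1] + ((v[-1] * v[0]) % p,))
    return p**n - len(image)

for p in (3, 5, 7):
    missed = missed_values(p)
    assert missed == p - 1              # the example misses exactly q-1 values
    assert missed >= 2 * (p - 1) / 2    # the bound n(q-1)/deg(f) holds
```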
{"config": "arxiv", "file": "1404.6884.tex"}
TITLE: Series Question: $\sum_{n=1}^{\infty}\frac{n^2}{(4n^2-1)^3}$ QUESTION [4 upvotes]: How to compute the following series: $$\sum_{n=1}^{\infty}\frac{n^2}{(4n^2-1)^3}$$ I tried to use partial fractions: $$\begin{align}\frac{n^2}{(4n^2-1)^3}&=\frac{1}{64(2n+1)}-\frac{1}{64(2n-1)}+\frac{1}{64(2n+1)^2}+\frac{1}{64(2n-1)^2}\\&-\frac{1}{32(2n+1)^3}+\frac{1}{32(2n-1)^3}\end{align}$$ I can compute $$\sum_{n=1}^{\infty}\left[\frac{1}{64(2n+1)}-\frac{1}{64(2n-1)}\right]=-\frac{1}{64}$$ using telescoping series, but I cannot compute the rest. I believe there's a better way than this. Any help would be appreciated. Thanks in advance. REPLY [2 votes]: The polygamma functions are well suited to this kind of problem:
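For reference, summing the six partial-fraction pieces with the standard values $\sum_{n\ge1}\frac{1}{(2n-1)^2}=\frac{\pi^2}{8}$ and $\sum_{n\ge1}\frac{1}{(2n-1)^3}=\frac{7\zeta(3)}{8}$ makes the $\zeta(3)$ contributions cancel and suggests the closed form $\sum_{n\ge1}\frac{n^2}{(4n^2-1)^3}=\frac{\pi^2}{256}$; this value is our own computation, not from the (truncated) answer above, but it checks out numerically:

```python
import math

# Numerical check of the claimed closed form pi^2/256 (approx 0.0385531)
# against a large partial sum of n^2/(4n^2-1)^3; the tail is O(1/N^3).
N = 200_000
partial = sum(n * n / (4 * n * n - 1) ** 3 for n in range(1, N + 1))
assert abs(partial - math.pi ** 2 / 256) < 1e-9
```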
{"set_name": "stack_exchange", "score": 4, "question_id": 810398}
\begin{document} \begin{frontmatter} \title{An adaptive dynamically low-dimensional approximation method for multiscale stochastic diffusion equations} \author[cuhk]{Eric T. Chung} \ead{tschung@math.cuhk.edu.hk} \author[cuhk]{Sai-Mang Pun} \ead{smpun@math.cuhk.edu.hk} \author[hku]{Zhiwen Zhang\corref{cor1}} \ead{zhangzw@hku.hk} \address[cuhk]{ Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China.} \address[hku]{Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong SAR, China.} \cortext[cor1]{Corresponding author} \begin{abstract} In this paper, we propose a dynamically low-dimensional approximation method to solve a class of time-dependent multiscale stochastic diffusion equations. In \cite{ChengHouZhang1:13,ChengHouZhang2:13}, a dynamically bi-orthogonal (DyBO) method was developed to explore low-dimensional structures of stochastic partial differential equations (SPDEs) and solve them efficiently. However, when the SPDEs have multiscale features in physical space, the original DyBO method becomes expensive. To address this issue, we construct multiscale basis functions within \rev{the framework of generalized multiscale finite element method (GMsFEM)} for dimension reduction in the physical space. To further improve the accuracy, we also perform online procedure to construct online adaptive basis functions. In the stochastic space, we use the generalized polynomial chaos (gPC) basis functions to represent the stochastic part of the solutions. Numerical results are presented to demonstrate the efficiency of the proposed method in solving time-dependent PDEs with multiscale and random features. \end{abstract} \begin{keyword} Uncertainty quantification (UQ); dynamically low-dimensional approximation; online adaptive method; stochastic partial differential equations (SPDEs); generalized multiscale finite element method (GMsFEM). 
\end{keyword} \end{frontmatter} \section{Introduction} \label{sec:introduction} \noindent Uncertainty arises in many real-world problems of scientific applications, such as heat propagation through random media or flow driven by stochastic forces. These kinds of problems usually involve multiple-scale features in the spatial domain. For example, to simulate flows in heterogeneous porous media, the permeability field is often parameterized by random fields with multiple-scale structures. Stochastic partial differential equations (SPDEs), which contain random variables or stochastic processes, play important roles in modeling complex problems and quantifying the corresponding uncertainties. Considerable effort has been devoted to the study of SPDEs; see \cite{babuska:04,Ghanem:91,WuanHou:06,Zabaras:09,Knio:01,matthies:05,Najm:09,Webster:08,Wan:06,Xiu:09} and references therein. These methods are effective when the dimension of the solution space is not too large. However, when SPDEs have multiscale features, the problems become difficult, since resolving the small scales of the SPDE solutions requires tremendous computational resources. This motivates us to develop efficient numerical schemes to solve these challenging problems. In this paper, we shall consider time-dependent SPDEs with multiscale coefficients as follows \begin{equation} \frac{\partial u^{\varepsilon}}{\partial t}(x,t,\omega) = \mathcal{L}^{\varepsilon}u^{\varepsilon} (x,t,\omega), \quad x \in \mathcal{D}, \quad t\in (0,T], \quad \omega \in \Omega, \label{Model_Eq} \end{equation} where suitable boundary and initial conditions are imposed, $\mathcal{D}\subset \mathbb{R}^d$ is a bounded spatial domain, $\Omega$ is a sample space, and $\mathcal{L}^{\varepsilon}$ is an elliptic operator that contains a multiscale and random coefficient, where the smallest scale is parameterized by $\varepsilon$. The major difficulties in solving \eqref{Model_Eq} come from two parts.
In the physical space, we need a mesh fine enough to resolve the small-scale features. In the random space, we need extra degrees of freedom to represent the random features. Moreover, the problem \eqref{Model_Eq} becomes more difficult if the dimension of the random input is high. To address these challenges, we shall explore low-dimensional structures hidden in the solution $u^{\varepsilon}(x,t,\omega)$. Specifically, if the solution $u^{\varepsilon}(x,t,\omega)$ is a second-order stochastic process at each time $t>0$, i.e., $u^{\varepsilon}(x,t,\omega) \in L^2(\mathcal{D}\times\Omega)$ for each $t$, one can approximate the solution $u^{\varepsilon}(x,t,\omega)$ by its $m$-term truncated Karhunen-Lo\`{e}ve (KL) expansion \cite{Karhunen1947, Loeve1978} \begin{equation} u^{\varepsilon}(x,t,\omega) \approx \bar{u}^{\varepsilon}(x,t) + \sum_{i=1}^m u_i^{\varepsilon}(x,t) Y_i(\omega,t) = \bar{u}^{\varepsilon}(x,t) + \vU (x,t) \vY^T(\omega,t), \label{KLE} \end{equation} where $\vU(x,t) = (u_1^{\varepsilon}(x,t),\cdots,u_m^{\varepsilon}(x,t))$ and $\vY(\omega,t) = (Y_1(\omega,t),\cdots,Y_m(\omega,t))$. The KL expansion gives a compact representation of the solution. However, the direct computation of the KL expansion can be quite expensive, since we need to form a covariance kernel and solve a large-scale eigenvalue problem. In \cite{ChengHouZhang1:13,ChengHouZhang2:13}, a dynamically bi-orthogonal (DyBO) method was developed. This method derives an equivalent system that faithfully tracks the KL expansion of the SPDE solution. In other words, the DyBO method gives the evolution equations for $\bar{u}^{\varepsilon}$, $\vU$, and $\vY$. The DyBO method can accurately and efficiently solve many time-dependent SPDEs, such as stochastic Burgers' equations and stochastic Navier-Stokes equations, with considerable savings over existing numerical methods.
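A discrete analogue of the truncated KL expansion \eqref{KLE} can be illustrated with a small numerical sketch (our own toy process and parameters, not the DyBO method): the empirical KL modes of a sampled random field are obtained from an SVD of the centered samples, and the $m$-term truncation error decays as $m$ grows.

```python
import numpy as np

# Empirical (discrete) Karhunen-Loeve truncation of a synthetic random
# field u(x, omega): SVD of centered Monte Carlo samples gives the modes.
rng = np.random.default_rng(0)
nx, ns = 100, 2000                     # grid points, Monte Carlo samples
x = np.linspace(0.0, 1.0, nx)
k = np.arange(1, 21)
amps = 1.0 / k**2                      # decaying mode amplitudes
xi = rng.standard_normal((ns, k.size))
U = (xi * amps) @ np.sin(np.outer(k, np.pi * x))   # samples, shape (ns, nx)

ubar = U.mean(axis=0)                  # mean part of the expansion
A = U - ubar
_, s, Vt = np.linalg.svd(A, full_matrices=False)   # rows of Vt: KL modes

def kl_error(m):
    """Relative Frobenius error of the m-term KL reconstruction."""
    Am = (A @ Vt[:m].T) @ Vt[:m]
    return np.linalg.norm(A - Am) / np.linalg.norm(A)

errs = [kl_error(m) for m in (1, 5, 10, 20)]
assert errs == sorted(errs, reverse=True)   # error decreases with m
assert errs[-1] < 1e-8                      # 20 modes capture the rank-20 field
```

The SVD route avoids forming the covariance kernel explicitly, which mirrors why tracking the expansion dynamically (as DyBO does) is attractive.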
To explore the low-dimensional features of the solutions to SPDEs, a dynamically orthogonal (DO) method was proposed \cite{sapsis:09}. Later on, the equivalence of the DO and DyBO methods was shown in \cite{choi2014equivalence}, and the effectiveness of the DO and DyBO methods has also been discussed theoretically in \cite{musharbash2015error}. This area is very active, driven by the latest advances in UQ research. If the SPDEs have multiscale features in the physical space (i.e., the smallest-scale parameter $\varepsilon$ is extremely small), however, the original DyBO method (as well as the DO method) becomes computationally expensive, since one needs enormous degrees of freedom to represent the multiscale features in the physical space. To overcome this difficulty, we shall apply the generalized multiscale finite element method (GMsFEM) \cite{chung2016adaptive,efendiev2013generalized} to construct multiscale basis functions within each coarse grid block for model reduction in the physical space. In the GMsFEM, we divide the computation into two stages: the offline stage and the online one. In the offline stage, we first compute global snapshot functions within each coarse neighborhood based on the given coarse and fine meshes and construct multiscale basis functions to represent the local heterogeneities. Once the snapshot functions are computed, one can construct the multiscale basis functions in each coarse patch by solving some well-designed local spectral problems and identify the crucial multiscale basis functions to form the offline function space. In the online stage, we add more online multiscale basis functions that are constructed using the offline space. These online basis functions are computed adaptively in some selected spatial regions based on the current local residuals, and their construction is motivated by the analysis in \cite{chung2015online}.
In general, the algorithm guarantees that additional online multiscale basis functions reduce the error rapidly, provided one chooses a sufficient number of offline basis functions. We should point out that there are many existing methods in the literature for solving multiscale problems; see \cite{Engquist2012,Bourgeat1984, Cruz1995, Dykaar1992,Engquist2003,HanZhangCMS:12,Hou1997,Juanes2005,Peterseim2014,Nolen2008} and references therein. Most of these methods are designed for multiscale problems with deterministic coefficients. In our new method, we first derive the DyBO formulation for the multiscale SPDEs \eqref{Model_Eq}, which consists of deterministic PDEs for $\bar{u}^{\varepsilon}$ and $\vU$, respectively, and an ODE system for the stochastic basis $\vY$. For the deterministic PDEs (for $\bar{u}^{\varepsilon}$ and $\vU$) in the formulation, we shall apply the GMsFEM to construct multiscale basis functions and use these multiscale basis functions to represent $\bar{u}^{\varepsilon}$ and $\vU$. This leads to considerable savings over the original DyBO method. For the ODE system, the memory cost is relatively small and we shall apply a suitable ODE solver to compute the numerical solution. The GMsFEM enables us to significantly improve the efficiency of the DyBO method in solving time-dependent PDEs with multiscale and random coefficients. The rest of the paper is organized as follows. In Section \ref{sec:multiscaleSPDEs}, we introduce the framework of the DyBO formulation. The GMsFEM and its online adaptive algorithm are outlined in Section \ref{sec:multiscale}. The implementation issues of the algorithm and the numerical results are given in Section \ref{sec:applications}. Finally, some concluding remarks are drawn in Section \ref{sec:conclusion}.
\section{The DyBO formulation for multiscale time-dependent SPDEs} \label{sec:multiscaleSPDEs} \noindent In this paper, we consider a class of parabolic equations with multiscale and random coefficients \begin{subequations} \label{eq:MsDyBO_Model} \begin{align} \frac{\partial u^{\varepsilon}}{\partial t} & = \mathcal{L}^\varepsilon u^\varepsilon & ~ & x\in \mathcal{D}, ~ t\in (0,T], ~ \omega\in\Omega, \label{eq:MsDyBO_Model_Eq} \\ u^\varepsilon (x,0,\omega) & = u_0(x,\omega) & ~ &x \in \mathcal{D}, ~ \omega \in \Omega, \label{eq:MsDyBO_Model_ic} \\ \mathcal{B}\big( u^\varepsilon(x,t,\omega) \big) & = h(x, t, \omega) & ~ & x\in \partial \mathcal{D}, ~ \omega \in \Omega, \label{eq:MsDyBO_Model_bc} \end{align} \end{subequations} where $\mathcal{D} \subset \mathbb{R}^{d}$ ($d=2,3$) is a bounded spatial domain, $(\Omega,\mathcal{F}, \mathbb{P}) $ is a probability space, and suitable boundary and initial conditions are imposed. The differential operator $\mathcal{L}^{\varepsilon}$ is defined as $ \mathcal{L}^\varepsilon u^\varepsilon := \nabla \cdot \big(a^\varepsilon (x,\omega) \nabla u^\varepsilon\big) + f(x)$. The multiscale information is described by the parameter $\varepsilon$ and the forcing term $f:\mathbb{R}^d \to \mathbb{R}$ is in $L^2(\mathcal{D})$. Assume that there exist two constants $a_{\max} \gg a_{\min}>0$ such that $\mathbb{P}(\omega\in \Omega: a^{\varepsilon}(x,\omega)\in [a_{\min},a_{\max}], \text{a.e. } x \in \mathcal{D}) = 1$. Note that we are interested in the case where the coefficient $a^\varepsilon(x,\omega)$ has high contrast within the domain $\mathcal{D}$, so that a model reduction technique in the physical space is necessary to reduce the degrees of freedom needed to represent the solution. \subsection{An abstract framework for SPDEs} \label{sec:brief-overview-dybo} \noindent To make this paper self-contained, we briefly review the DyBO method \cite{ChengHouZhang1:13,ChengHouZhang2:13}.
We assume the solution $u^{\varepsilon}(x,t,\omega)$ to \eqref{eq:MsDyBO_Model} satisfies $u^{\varepsilon}(\cdot, t, \cdot) \in L^2(\mathcal{D}\times \Omega)$ for each time $t\in (0,T]$. We consider the $m$-term truncated KL expansion with $m \in \mathbb{N}^+$ \begin{equation} \widetilde{u}^{\varepsilon}(x,t,\omega) = \bar{u}^{\varepsilon}(x,t) + \sum_{i=1}^{m} u_i^{\varepsilon}(x,t) Y_i(\omega,t) = \bar{u}^{\varepsilon}(x,t) + \vU(x,t) \vY^T(\omega,t) \approx u^{\varepsilon}(x,t,\omega), \label{eq:KLE:truncated} \end{equation} as an approximation to the solution $u^{\varepsilon}(x,t,\omega)$. Here, $\bar{u}^{\varepsilon}(x,t)$ is the mean of the solution, $$\vU(x,t)=\bkr{u_1^{\varepsilon}(x,t),\cdots,u_m^{\varepsilon}(x,t)} \quad \text{and} \quad \vY(\omega,t)=\bkr{Y_1(\omega,t),\cdots,Y_m(\omega,t)}$$ are the spatial and stochastic modes (with zero mean), respectively. We omit the symbol $\varepsilon$ to simplify the notation. Next, by imposing the orthogonal conditions for $\vU$ and $\vY$ $$ \inp{\vU^T}{\vU} = (\inp{u_i}{u_j} \delta_{ij}) \quad \text{and} \quad \EEp{\vY^T \vY} = \mI_{m\times m},$$ we obtain the evolution equations for $\bar{u}^{\varepsilon}$, $\vU$ and $\vY$ as follows \begin{subequations} \label{eq:SPDE:new3} \begin{align} \diffp{\bar{u}}{t} &= \EEp{\oL \widetilde{u}}, \label{eq:SPDE:new3a} \\ \diffp{\vU}{t} &= -\vU \mD^T + \EEp{\oLt \widetilde{u} \vY}, \label{eq:SPDE:new3b} \\ \diffd{\vY}{t} &= -\vY\mC^T + \inp{\oLt \widetilde{u}}{\vU}\mLa_{\vU}^{-1}, \label{eq:SPDE:new3c} \end{align} \end{subequations} where $\inp{\cdot}{\cdot}$ denotes the inner product in $L^2(\mathcal{D})$, $\mLa_{\vU}=\diag(\inp{\vU^T}{\vU}) \in \RR^{m \times m}$, and $\oLt \widetilde{u} = \oL \widetilde{u} -\EEp{\oL \widetilde{u}}$. 
Define two operators $\oQ: \RR^{k \times k} \goto \RR^{k \times k}$ and $\oQt: \RR^{k \times k} \goto \RR^{k \times k}$ as follows \begin{equation*} \oQ(\mM) := \frac{1}{2}\bkr{\mM-\mM^T}\quad \text{and} \quad \oQt(\mM) := \oQ(\mM) + \diag(\mM), \end{equation*} where $\mM \in \RR^{k \times k}$ is a square matrix and $\diag(\mM)$ is a diagonal matrix whose diagonal entries are equal to those of $\mM$. Then, the matrices $\mC,\mD \in \RR^{m\times m}$ in \eqref{eq:SPDE:new3} can be solved uniquely from the following linear system \begin{subequations} \label{eq:CDSystem} \begin{align} \mC - \mLa_{\vU}^{-1} \oQt\bkr{\mLa_{\vU}\mC} &= 0, \label{eq:CDSystem:C}\\ \mD-\oQ\bkr{\mD} &= 0, \label{eq:CDSystem:D} \\ \mD^T+\mC &= G_{*}(\bar{u}^{\varepsilon},\vU,\vY), \label{eq:CDSystem:CD} \end{align} \end{subequations} where the matrix $G_{*}$ is given by $G_{*}(\bar{u}^{\varepsilon},\vU,\vY)=\mLa_{\vU}^{-1}\inp{\vU^T}{\EEp{\oLt \widetilde{u}^{\varepsilon} \vY}} \in \RR^{m \times m}$. In order to represent the stochastic modes $Y_i(\omega,t)$ in \eqref{eq:SPDE:new3c}, one can choose among several different approaches, including ensemble representations in sampling methods and spectral representations. In this paper, we use gPC basis functions to represent the stochastic modes $Y_i(\omega,t)$. Given two positive integers $r$ and $p$, we define $\sJ_r^p := \{ \minda: \minda = (\alpha_1,\cdots, \alpha_r ), \alpha_i \in \mathbb{N} \cup \{0\}, \abs{\alpha} = \sum_{i=1}^r \alpha_i \leq p \} \backslash \{0 \}$.
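Returning to the linear system \eqref{eq:CDSystem}, it can be unpacked entrywise: \eqref{eq:CDSystem:D} forces $\mD$ to be skew-symmetric, \eqref{eq:CDSystem:C} forces the off-diagonal symmetric part of $\mLa_{\vU}\mC$ to vanish, and \eqref{eq:CDSystem:CD} then fixes both matrices. Assuming the $\lambda_i$ are pairwise distinct (the solve degenerates when two of them coincide), this gives a closed-form solution. The following sketch is our own unpacking of these conditions, not code from the DyBO papers; all names are illustrative.

```python
import numpy as np

def solve_CD(G, lam):
    """Solve the C, D system entrywise.

    Conditions: D is skew-symmetric, the off-diagonal symmetric part of
    diag(lam) @ C vanishes, and D^T + C = G.  Assumes the lam_i are
    positive and pairwise distinct (eigenvalue crossings break uniqueness).
    """
    m = len(lam)
    S = G + G.T                       # skewness of D forces C + C^T = G + G^T
    C = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            if i == j:
                C[i, i] = G[i, i]
            else:
                C[i, j] = lam[j] * S[i, j] / (lam[j] - lam[i])
    D = G.T - C.T                     # from D^T + C = G
    return C, D
```

One can verify numerically that the output satisfies all three conditions in \eqref{eq:CDSystem}.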
Let $\{ H_i(\xi) \}_{i=1}^\infty$ denote a family of one-dimensional polynomials, orthonormal with respect to the weight $\rho$, i.e., $$\int_{-\infty}^\infty H_i(\xi) H_j(\xi) \rho(\xi) \ d\xi = \delta_{ij}.$$ If we write $\hmpn_{\minda}(\vxi) = \prod_{i=1}^r H_{\alpha_i} (\xi_i)$ for $\minda \in \sJ_r^p$ and $\vxi = (\xi_i)_{i=1}^r$, then the Cameron-Martin theorem \cite{Cameron1947} implies that the stochastic modes $Y_i(\omega,t)$ in \eqref{eq:KLE:truncated} can be approximated by \begin{equation} \label{eq:YgPC:component} Y_i(\omega,t) \approx \sum_{\minda \in \sJ_r^p} \hmpn_{\minda}(\vxi(\omega))A_{\minda i}(t) = \hmpn\bkr{\vxi} \mA_i(t), \quad i=1,2,\cdots,m. \end{equation} Here, $\hmpn\bkr{\vxi}=\bkr{\hmpn_{\minda}\bkr{\vxi}}_{\minda \in \sJ_r^p} \in \mathbb{R}^{1\times N_p}$, $\mA_i(t) = (A_{\minda i}(t))_{\minda \in \sJ_r^p} \in \mathbb{R}^{N_p \times 1}$ and $N_p := \abs{\sJ_r^p}$. \rev{We remark that for each $i = 1,2,\cdots,m$, the coefficients $\{ A_{\minda i}(t) \}_{\minda \in \sJ_r^p}$ represent the projection coefficients of the stochastic mode $Y_i(\omega,t)$ on the gPC basis functions $\hmpn_{\minda}(\vxi)$, $\minda \in \sJ_r^p$. Moreover, $\{ A_{\minda i}(t) \}_{\minda \in \sJ_r^p}$ change with respect to time. } One may write \begin{equation} \label{eq:YgPC:matrix} \vY(\omega, t) = \hmpn\bkr{\vxi(\omega)}\mA (t), \end{equation} where $\mA(t) = (\mA_1(t), \cdots, \mA_m(t)) \in \RR^{N_p \times m}$. The KL expansion \eqref{eq:KLE:truncated} now reads \begin{equation*} \widetilde{u} \approx \bar{u}+\vU\mA^T\hmpn^T. \end{equation*} We can then derive equations for $\bar{u}$, $\vU$ and $\mA$, instead of $\bar{u}$, $\vU$ and $\vY$.
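The index set $\sJ_r^p$ is easy to enumerate directly, which also confirms its cardinality $N_p = \binom{p+r}{r}-1$ (the count quoted later in the numerical experiments). A short illustrative sketch, with hypothetical names:

```python
from itertools import product
from math import comb

def multi_index_set(r, p):
    """All multi-indices alpha in N_0^r with 1 <= |alpha| <= p
    (the total-degree set J_r^p, with the zero index excluded)."""
    return [a for a in product(range(p + 1), repeat=r) if 0 < sum(a) <= p]

r, p = 3, 4
J = multi_index_set(r, p)
N_p = len(J)
assert N_p == comb(p + r, r) - 1   # total-degree simplex minus the zero index
```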
\rev{Here and in the following, we have suppressed the variables $x$, $t$, and $\omega$ for notational simplicity.} In other words, the stochastic modes $\vY$ are identified with a matrix $\mA \in \RR^{N_p \times m}$, which leads to the DyBO-gPC formulation of the SPDE \eqref{eq:MsDyBO_Model} \begin{subequations} \label{eq:DyBO:gPC} \begin{align} \diffp{\bar{u}}{t} &= \EEp{\oL \widetilde{u}}, \label{eq:DyBO:gPC:a} \\ \diffp{\vU}{t} &= -\vU \mD^T + \EEp{\oLt \widetilde{u} \hmpn}\mA, \label{eq:DyBO:gPC:b} \\ \diffd{\mA}{t} &= -\mA \mC^T + \inp{\EEp{\hmpn^T\oLt \widetilde{u}}}{\vU}\mLa_{\vU}^{-1}, \label{eq:DyBO:gPC:c} \end{align} \end{subequations} where $\mC(t)$ and $\mD(t)$ can be solved from the linear system \eqref{eq:CDSystem} with \begin{equation} \label{eq:DyBO:gPC:Gs} G_{*}(\bar{u},\vU,\vY)=\mLa_{\vU}^{-1}\inp{\vU^T}{\EEp{\oLt \widetilde{u} \vY}} = \mLa_{\vU}^{-1}\inp{\vU^T}{\EEp{\oLt \widetilde{u} \hmpn}}\mA. \end{equation} By solving the system \eqref{eq:DyBO:gPC}, we obtain an approximate solution to \eqref{eq:MsDyBO_Model} \begin{equation*} u^{\text{DyBO-gPC}}=\bar{u}+\vU\mA^T\hmpn^T. \end{equation*} The condition $\EEp{\vY^T \vY} = \mI_{m\times m}$ implies that the columns $(\mA_i)_{i=1}^m$ are orthonormal, i.e., $\mA^T\mA= \mI_{m\times m}$. Note that $\mA\mA^T \in \RR^{N_p \times N_p}$ is in general not an identity matrix, since $m \ll N_p$ when the SPDE solution has a low-dimensional structure. \subsection{The DyBO formulation for the model problem} \label{sec:analysis} \noindent In this section, we derive the DyBO formulation for the model problem \eqref{eq:MsDyBO_Model}. Recall that the differential operator is defined as $\mathcal{L}u = \nabla \cdot (a(x,\omega) \nabla u) + f(x)$ and we have omitted $\varepsilon$ for notational simplicity.
We assume that the coefficient $a(x,\omega)$ is of the form $a(x,\omega)=\bar{a}(x)+\tilde{a}(x,\omega)$, where $\bar{a}(x) = \EEp{a(x,\omega)}$ and $\tilde{a}(x,\omega)$ is the fluctuation, which can be parametrized as follows \begin{equation*} \tilde{a}(x,\omega) = \sum_{i=1}^r a_i(x) \xi_i(\omega). \end{equation*} Here, $r\geq1$ is a positive integer and $\{ \xi_i(\omega)\}_{i=1}^r$ are independent, identically distributed random variables with zero mean. By substituting the expression of $\mathcal{L}u$ into \eqref{eq:DyBO:gPC}, we obtain the DyBO-gPC formulation for the model problem \eqref{eq:MsDyBO_Model} (see \ref{cha:dybo-formulation-MsSPDE} for the details of the derivation) \begin{subequations} \label{eq:DyBO-gPC-MsFEM} \begin{align} \diffp{\bar{u}}{t} & = \nabla\cdot(\bar{a}\nabla \bar{u}) + \nabla\cdot(\EEp{\tilde{a}\nabla \vU \mA^{T} \hmpn^T})+ f, \label{eq:DyBO-gPC-MsFEM-ubar}\\ \diffp{\vU}{t} &= -\vU\mD^T + \nabla\cdot(\EEp{\tilde{a} \nabla \bar{u}\hmpn})\mA + \nabla\cdot(\bar{a} \nabla \vU) + \nabla\cdot( \EEp{ \tilde{a} \nabla \vU \mA^{T} \hmpn^T \hmpn})\mA, \label{eq:DyBO-gPC-MsFEM-U}\\ \diffd{\mA}{t} &= -\mA \mC^T + \inp{ \nabla\cdot(\EEp{\hmpn^{T}\tilde{a}\nabla\bar{u}})+ \nabla\cdot(\bar{a} \nabla \mA\vU^{T}) + \nabla\cdot( \EEp{ \tilde{a} \hmpn^T \hmpn \mA \nabla \vU^{T} }) }{\vU}\mLa_{\vU}^{-1}, \label{eq:DyBO-gPC-MsFEM-A} \end{align} \end{subequations} where the matrices $\mC$ and $\mD$ can be solved from \eqref{eq:CDSystem} with $G_{*} = \mLa_{\vU}^{-1}\inp{\vU^T}{\EEp{\oLt u \hmpn}}\mA$. \begin{remark} When the forcing function of the model problem \eqref{eq:MsDyBO_Model} contains randomness, i.e., $f(x,\omega)$, one can derive the DyBO formulation accordingly without any difficulty.
\end{remark} \begin{remark} \label{rmk:GeneralizationToSystems} The boundary and initial conditions for each physical component, and the initial condition for each stochastic component, can be obtained by projecting the initial and boundary conditions of $u(x,t,\omega)$ onto the corresponding components. \end{remark} \begin{remark} As the system evolves, the norms of the modes $u_i$ (denoted as $\lambda_i$) in the KL expansion may change and some of them may get close to each other. In this case, if the matrices $\mC$ and $\mD$ are still solved from \eqref{eq:CDSystem}, numerical errors will pollute the results. One may freeze $\vU$ or $\vY$ temporarily for a short time and continue to evolve the system. At the end of this short period, the solution is recast into the bi-orthogonal form via the KL expansion. See \cite[Section 4.2]{ChengHouZhang1:13} for more details. \end{remark} \section{Multiscale model reduction using the GMsFEM} \label{sec:multiscale} \subsection{Motivations}\label{sec:motivations} \noindent Since the DyBO formulation \eqref{eq:DyBO-gPC-MsFEM} involves multiscale features in the physical space, an efficient solver is needed to reduce the computational cost. To this end, we shall apply the GMsFEM to discretize $\bar{u}$ and $\vU$. Note that Eq.~\eqref{eq:DyBO-gPC-MsFEM-ubar} and each component of Eq.~\eqref{eq:DyBO-gPC-MsFEM-U} have the following deterministic time-dependent PDE form \begin{subequations} \label{eq:GMsFEM} \begin{align} \frac{\partial w}{\partial t} & = \nabla \cdot ( \bar{a} \nabla w) + \mathcal{G}, \label{eq:GMsFEM:1}\\ w|_{t=0} & = w_0. \label{eq:GMsFEM:2} \end{align} for some function $\mathcal{G}$. For example, in \eqref{eq:DyBO-gPC-MsFEM-ubar} we have $w=\bar{u}$ and $\mathcal{G} = \nabla \cdot (\EEp{\tilde{a} \nabla \vU \mA^{T} \hmpn^{T}}) + f$.
\end{subequations} In order to discretize the equation \eqref{eq:GMsFEM} in time, we apply the implicit Euler scheme with time step $\Delta t>0$ and obtain the discretization at each time $t_n = n \Delta t$, $n=1,2,\cdots, N$ ($T = N\Delta t$) \begin{equation*} \frac{w^n - w^{n-1}}{\Delta t} = \nabla \cdot(\bar{a} \nabla w^n) + \mathcal{G}, \end{equation*} where $w^n = w(t_n)$. The above equation is equivalent to the following \begin{equation} -\nabla \cdot (\bar{a}\nabla w^n) + cw^n = \tilde{\mathcal{G}}, \label{eq:GMsFEM:diff:1} \end{equation} where $c = 1/\Delta t$ and $\tilde{\mathcal{G}} = cw^{n-1} + \mathcal{G}$. Hence, for each fixed $t_n>0$, we use the GMsFEM to solve the second-order elliptic PDE \eqref{eq:GMsFEM:diff:1} with the multiscale coefficient $\bar{a}$. \subsection{The GMsFEM and the multiscale basis functions} \noindent Next, we present the framework of the GMsFEM for solving \eqref{eq:GMsFEM:diff:1}. We first introduce the notion of fine and coarse grids that we shall use in the method. Let $\mathcal{T}^H$ be a conforming partition of the spatial domain $\mathcal{D}$ with mesh size $H>0$. We refer to this partition as the coarse grid. Subordinate to $\mathcal{T}^H$, we define a fine grid partition denoted by $\mathcal{T}^h$, with mesh size $0<h \ll H$, by refining each coarse element in $\mathcal{T}^H$ into a connected union of fine elements. Assume the above refinement is performed such that $\mathcal{T}^h$ is a conforming partition of $\mathcal{D}$. Denote the interior nodes of $\mathcal{T}^H$ as $x_i$, $i = 1,2,\cdots, N_{in}$, where $N_{in}$ is the number of interior nodes. The coarse elements of $\mathcal{T}^H$ are denoted as $K_j$, $j=1,2,\cdots,N_{e}$, where $N_{e}$ is the number of coarse elements. Define the coarse neighborhood of the node $x_i$ by $D_i:=\bigcup \{ K_j \in \mathcal{T}^H: x_i \in \overline{K_j} \}$.
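As a concrete illustration of \eqref{eq:GMsFEM:diff:1}, the sketch below performs one implicit Euler step in one space dimension with a standard finite-difference discretization. It is ours and purely illustrative (no multiscale basis is used; the oscillatory coefficient, grid, and time step are assumptions):

```python
import numpy as np

# One implicit Euler step for w_t = (a w_x)_x + G on (0, 1) with w = 0 at the
# ends: solve (c - d/dx(a d/dx)) w^n = c w^{n-1} + G, where c = 1/dt.
n, dt = 100, 1e-3
h, c = 1.0 / n, 1.0 / dt
x = np.linspace(0.0, 1.0, n + 1)
a = 2.0 + np.sin(2.0 * np.pi * x / 0.125)      # an oscillatory coefficient
G = np.ones(n - 1)                              # source at interior nodes
w_prev = np.zeros(n - 1)                        # w^{n-1}

a_mid = 0.5 * (a[:-1] + a[1:])                  # coefficient at cell midpoints
main = (a_mid[:-1] + a_mid[1:]) / h**2 + c
A = (np.diag(main)
     - np.diag(a_mid[1:-1] / h**2, 1)
     - np.diag(a_mid[1:-1] / h**2, -1))
w = np.linalg.solve(A, c * w_prev + G)          # w^n at the interior nodes
```

Because the discrete operator is an M-matrix and the right-hand side is positive, the computed step is positive and bounded by $\Delta t$ in this setting.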
Once the coarse and fine grids are given, one may construct the multiscale basis functions for approximating the solution of \eqref{eq:GMsFEM:diff:1}. To obtain the multiscale basis functions, we first define the snapshot space. For each neighborhood $D_i$, define $J_h(D_i)$ as the set of fine nodes of $\mathcal{T}^h$ lying on $\partial D_i$ and denote its cardinality as $L_i \in \mathbb{N}^+$. For each fine-grid node $x_j \in J_h(D_i)$, define a fine-grid function $\delta_j^h$ on $J_h(D_i)$ by $\delta_j^h(x_k) = \delta_{jk}$ for $x_k \in J_h(D_i)$. Next, for $j = 1,\cdots, L_i$, define the snapshot function $\psi_j^{(i)}$ in the coarse neighborhood $D_i$ as the solution to the following system \begin{eqnarray} -\nabla \cdot(\bar{a} \nabla \psi_j^{(i)}) & = 0 &\quad \text{in } D_i, \\ \psi_j^{(i)} & = \delta_j^h &\quad \text{on } \partial D_i. \end{eqnarray} The local snapshot space $V_{snap}^{(i)}$ corresponding to the coarse neighborhood $D_i$ is defined as $V_{snap}^{(i)} := \text{span} \{ \psi_j^{(i)}: j = 1,\cdots, L_i \}$ and the snapshot space reads $V_{snap} := \bigoplus_{i=1}^{N_{in}} V_{snap}^{(i)}$. The snapshot space defined above is usually of large dimension. Therefore, a dimension reduction is performed on $V_{snap}$ to obtain a smaller space $V_{\text{off}}$, which contains the multiscale basis functions used in the simulation. This reduction is achieved by performing a spectral decomposition on each local snapshot space $V_{snap}^{(i)}$. The analysis in \cite{Yalchin2011} motivates the following construction.
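Each snapshot is thus a discrete harmonic extension of a boundary delta function into the patch. The sketch below is ours, with $\bar a \equiv 1$ and a single small square patch for brevity; it computes one such $\psi_j^{(i)}$ with the 5-point Laplacian.

```python
import numpy as np

# Harmonic snapshot on a 2D patch: -div(a grad psi) = 0 in the patch,
# psi = delta_j^h on the boundary, with a = 1 on an N x N grid for brevity.
N = 20
idx = lambda i, j: i * N + j
A = np.zeros((N * N, N * N))
b = np.zeros(N * N)
for i in range(N):
    for j in range(N):
        k = idx(i, j)
        if i in (0, N - 1) or j in (0, N - 1):          # boundary node
            A[k, k] = 1.0
            b[k] = 1.0 if (i, j) == (0, N // 2) else 0.0  # delta data
        else:                                            # interior 5-point stencil
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = -1.0
psi = np.linalg.solve(A, b).reshape(N, N)
```

By the discrete maximum principle the snapshot stays between $0$ and $1$, attaining $1$ only at the chosen boundary node and decaying into the patch interior.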
For each $i = 1,\cdots,N_{in}$, the spectral problem is to find $(\phi_j^{(i)},\lambda_j^{(i)}) \in V_{snap}^{(i)} \times \mathbb{R}$ such that \begin{eqnarray} \int_{D_i} \bar{a} \nabla \phi_j^{(i)} \cdot \nabla v = \lambda_j^{(i)} \int_{D_i} \hat{a} \phi_j^{(i)} v \quad \forall v\in V_{snap}^{(i)}, \quad j = 1,\cdots, L_i, \label{eqn:spectral} \end{eqnarray} where $\hat{a} := \bar{a} \sum_{i=1}^{N_{in}} H^2 \abs{\nabla \chi_i}^2$ and $\{\chi_i\}_{i=1}^{N_{in}}$ is a partition of unity satisfying the following system \begin{eqnarray*} -\nabla \cdot (\bar{a} \nabla \chi_i) &= 0 & \quad \text{in } K\subset D_i, \label{eqn:std_basis_1}\\ \chi_i &= p_i & \quad \text{on each } \partial K ~ \text{with } K \subset D_i \label{eqn:std_basis_2}, \\ \chi_i &= 0 & \quad \text{on } \partial D_i. \label{eqn:std_basis_3} \end{eqnarray*} Assume that the eigenvalues obtained from \eqref{eqn:spectral} are arranged in ascending order; then we use the first $l_i$ eigenfunctions (with $0<l_i \leq L_i$ and $l_i \in \mathbb{N}^+$, corresponding to the smallest $l_i$ eigenvalues) to form the local multiscale space $V_{\text{off}}^{(i)} := \text{span} \{ \chi_i \phi_j^{(i)} : j = 1,\cdots, l_i\}$. The multiscale space $V_{\text{off}}$ is the direct sum of the local multiscale spaces, namely $V_{\text{off}} := \bigoplus_{i=1}^{N_{in}} V_{\text{off}}^{(i)}$. Once the multiscale space $V_{\text{off}}$ is constructed, we can find the GMsFEM solution $u_{\text{off}}^n \in V_{\text{off}}$ at time $t=t_n$, $n = 1,\cdots, N$, by solving the following equation \begin{equation} \mathcal{A}(u_{\text{off}}^n,v) + c\inp{u_{\text{off}}^n}{v} = \inp{c u_{\text{off}}^{n-1} + \mathcal{G}}{v} \quad \forall v \in V_{\text{off}}, \label{eqn:GMsFEM:var} \end{equation} where $\mathcal{A}(u,v) := \int_{\mathcal{D}} \bar{a} \nabla u \cdot \nabla v $.
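After local assembly, \eqref{eqn:spectral} is a generalized eigenvalue problem $A\phi = \lambda M\phi$ with a stiffness matrix $A$ (from $\bar a$) and a weighted mass matrix $M$ (from $\hat a$). A minimal dense sketch of extracting the $l_i$ smallest eigenpairs via a Cholesky reduction to a standard symmetric problem follows; it is illustrative only (a practical GMsFEM code would assemble sparse local matrices):

```python
import numpy as np

def smallest_eigenpairs(A, M, l):
    """Return the l smallest eigenvalues and M-orthonormal eigenvectors of
    the generalized problem A phi = lam M phi (A symmetric, M SPD)."""
    L = np.linalg.cholesky(M)                  # M = L L^T
    Linv = np.linalg.inv(L)
    B = Linv @ A @ Linv.T                      # equivalent standard symmetric problem
    lam, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    Phi = np.linalg.solve(L.T, V[:, :l])       # map back: phi = L^{-T} v
    return lam[:l], Phi
```

The returned columns satisfy $\Phi^T M \Phi = I$, matching the normalization used to rank and select the offline basis functions.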
\begin{remark} The above derivation of $V_{\text{off}}$ is based on the (mean) coefficient $\bar{a}$, and the multiscale basis functions in $V_{\text{off}}$ are suitable for approximating $\bar{u}$. In this paper, we assume that the fluctuation of the coefficient is a small perturbation of the mean. Therefore, the multiscale space $V_{\text{off}}$ can also efficiently approximate $\vU$. \end{remark} \subsection{Online adaptive algorithm} \noindent In order to achieve rapid convergence in the GMsFEM, one may add online basis functions to enrich the multiscale space $V_{\text{off}}$ based on local residuals. In this subsection, we briefly outline the online adaptive algorithm for the GMsFEM. Let $u_{\text{off}}^n \in V_{\text{off}}$ be the numerical solution obtained in \eqref{eqn:GMsFEM:var} at time $t=t_n$. Given a coarse neighborhood $D_i$, we define $V_i := H_0^1(D_i) \cap V_{snap}$ equipped with the norm $\norm{v}_{V_i}^2 := \int_{D_i} \bar{a}(x) \abs{\nabla v}^2$. We also define the local residual operator $\mathcal{R}_i^n : V_i \to \mathbb{R}$ by \begin{equation} \mathcal{R}_i^n(v;u_{\text{off}}^n) := \int_{D_i} \big(cu_{\text{f}}^{n-1} + \mathcal{G}\big)v - \int_{D_i} \big(\bar{a} \nabla u_{\text{off}}^n \cdot \nabla v + cu_{\text{off}}^n v \big) \quad \forall v \in V_i,\label{eq:residualonline} \end{equation} where $u_{\text{f}}^{n-1}$ is the fine-scale solution at time $t=t_{n-1}$. The operator norm of $\mathcal{R}_i^n$, denoted by $\norm{\mathcal{R}_i^n}_{V_i^*}$, measures the size of the local residual. The online basis functions are computed during the time-marching process for a given fixed time $t = t_n$, in contrast to the offline basis functions (defined in Section 4.2), which are pre-computed. Suppose that one needs to add an online basis function $\phi$ to the space $V_i$.
The analysis in \cite{chung2015online} suggests that the required online basis $\phi \in V_i$ is the solution to the following equation \begin{equation} \mathcal{A}(\phi,v) = \mathcal{R}_i^n(v;u_{\text{off}}^{n,\tau}) \quad \forall v\in V_i. \label{eqn:online} \end{equation} We refer to $\tau \in \mathbb{N}$ as the level of enrichment and denote by $u_{\text{off}}^{n,\tau}$ the solution of \eqref{eqn:GMsFEM:var} in $V_{\text{off}}^{n,\tau}$. Note that $V_{\text{off}}^{n,0} := V_{\text{off}}$ for each time level $n \in \mathbb{N}$. Let $\mathcal{I} \subset \{1,2,\cdots,N_{in}\}$ be an index set over some non-overlapping coarse neighborhoods. For each $i\in \mathcal{I}$, we obtain an online basis $\phi_i \in V_i$ by solving \eqref{eqn:online} and define $V_{\text{off}}^{n,\tau+1} = V_{\text{off}}^{n,\tau} \oplus \text{span}\{ \phi_i: i\in \mathcal{I}\}$. After that, we solve \eqref{eqn:GMsFEM:var} in $V_{\text{off}}^{n,\tau+1}$ to get $u_{\text{off}}^{n,\tau+1}$. Consequently, following the arguments in \cite{chung2015online}, we have at time $t=t_n$, \begin{equation} \norm{u_{\text{f}}^n - u_{\text{off}}^{n,\tau+1}}_V^2 \leq \bigg( 1- \frac{\Lambda_{\text{min}}^{(\mathcal{I})}}{C_{\text{err}}} \frac{\sum_{i\in \mathcal{I}} \norm{\mathcal{R}_i^n}_{V_i^*}(\lambda_{l_i+1}^{(i)})^{-1} }{\sum_{i=1}^{N_{in}} \norm{\mathcal{R}_i^n}_{V_i^*}(\lambda_{l_i+1}^{(i)})^{-1}} \bigg) \norm{u_{\text{f}}^n-u_{\text{off}}^{n,\tau}}_V^2, \label{ieq:online} \end{equation} where $C_{\text{err}}$ is a uniform constant and $\Lambda_{\text{min}}^{(\mathcal{I})} = \min_{i\in \mathcal{I}} \lambda_{l_i +1}^{(i)}$. Here, the norm is defined by $\norm{\cdot}_V := \sqrt{\mathcal{A}(\cdot,\cdot)}$. Inequality \eqref{ieq:online} shows that we can obtain better accuracy by adding more online basis functions at each time $t=t_n$, and that the rate of convergence depends on the constants $C_{\text{err}}$ and $\Lambda_{\text{min}}^{(\mathcal{I})}$.
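The contraction \eqref{ieq:online} can be mimicked with a toy global Galerkin problem: compute the residual of the current subspace solution, solve for a new basis function from it, enrich the space, and re-solve. In this toy (ours, with illustrative names and sizes) the residual problem is solved globally and exactly, so a single enrichment removes the error entirely; the GMsFEM instead solves \eqref{eqn:online} only on local patches and contracts the error gradually.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 5
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)                    # SPD "fine-scale" operator
b = rng.standard_normal(n)
u_fine = np.linalg.solve(A, b)

def galerkin(V):
    """Galerkin solution of A u = b in the column span of V."""
    return V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # initial "offline" space
errs = []
for tau in range(2):                               # enrichment levels
    u = galerkin(V)
    e = u_fine - u
    errs.append(float(np.sqrt(e @ A @ e)))         # energy-norm error
    phi = np.linalg.solve(A, b - A @ u)            # "online" basis from residual
    V = np.linalg.qr(np.column_stack([V, phi]))[0]
u = galerkin(V)
e = u_fine - u
errs.append(float(np.sqrt(e @ A @ e)))
```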
\subsection{The implementation of our new algorithm}\label{sec:ImplementationGMsFEM} \noindent We summarize the computational scheme for the problem in this section. Recall that the multiscale coefficient is $a(x,\omega) = \bar{a}(x) + \tilde{a}(x,\omega)$, where the mean has high contrast and the fluctuation part is $\tilde{a}(x,\omega) = \sum_{i=1}^r a_i \xi_i$; below we abbreviate it as $a_i \xi_i$ using the Einstein summation convention. We rewrite the DyBO formulation \eqref{eq:DyBO:gPC} as follows \begin{align} \diffp{\bar{u}}{t} & = \nabla\cdot(\bar{a}\nabla \bar{u}) + \nabla\cdot(\EEp{a_i \xi_i \nabla \vU \mA^{T} \hmpn^T})+ f, \label{eqn:DyBO-gPC-ubar}\\ \diffp{\vU}{t} &= -\vU\mD^T + \nabla\cdot(\EEp{a_i \xi_i \nabla \bar{u}\hmpn})\mA + \nabla\cdot(\bar{a} \nabla \vU) + \nabla\cdot( \EEp{ a_i \xi_i \nabla \vU \mA^{T} \hmpn^T \hmpn})\mA, \label{eqn:DyBO-gPC-U}\\ \diffd{\mA}{t} &= -\mA \mC^T + \inp{ \nabla\cdot(\EEp{\hmpn^{T} a_i \xi_i \nabla\bar{u}})+ \nabla\cdot(\bar{a} \nabla \mA\vU^{T}) + \nabla\cdot( \EEp{ a_i \xi_i \hmpn^T \hmpn \mA \nabla \vU^{T} }) }{\vU}\mLa_{\vU}^{-1}. \label{eqn:DyBO-gPC-A} \end{align} We assume that a homogeneous boundary condition is imposed; hence, the solutions $\bar{u}$ and $\vU = (u_1,\cdots,u_m)$ vanish on $\partial \mathcal{D}$. If the model problem \eqref{eq:MsDyBO_Model} has an inhomogeneous boundary condition, the boundary conditions for $\bar{u}$ and $\vU$ can be obtained by projecting the boundary data of $u$ onto the corresponding components. The initial conditions for $\bar{u}$, $\vU$ and $\mA$ depend on the initial condition of $u$, which will be discussed in Section \ref{sec:applications}. For the details of the implementation, see \ref{app:implement}. \section{Numerical experiments} \label{sec:applications} \noindent In this section, we present some numerical examples to demonstrate the efficiency of our proposed method. The computational domain is $\mathcal{D}=(0,1)^2 \subset \mathbb{R}^2$ and $T = 1$.
First, we divide the domain $\mathcal{D}$ into several equal square units with mesh size $H>0$ and refer to this partition as the coarse mesh $\mathcal{T}^H$. Next, we divide each coarse element into several equal square blocks with mesh size $h>0$ and refer to the resulting partition as the fine mesh $\mathcal{T}^h$. Then, we discretize $\bar{u}$ and $\vU$ in the DyBO formulation by using the GMsFEM to reduce the degrees of freedom in representing the multiscale solutions. Thus, with the multiscale basis functions, we can represent $\bar{u}$ and $\vU$ on the coarse mesh. \rev{In all examples, the number of initial local basis functions is $L_i = 4$}. The multiscale coefficient is assumed to be $a(x,\omega) = \bar{a}(x) + \sum_{i=1}^r a_i(x)\xi_i(\omega)$, where $a_i(x)$ is a small (or multiscale) perturbation and $\{\xi_i(\omega)\}_{i=1}^r$ is a set of i.i.d. random variables uniformly distributed over $[-1,1]$. Moreover, we assume that there exist two constants $a_{\max} \gg a_{\min}>0$ such that $\mathbb{P}(\omega\in \Omega: a(x, \omega)\in [a_{\min},a_{\max}], ~\text{a.e. } x \in \mathcal{D}) = 1$. The initial condition of the solution is assumed to have the form of an $m$-term truncated KL expansion \begin{equation} \tilde{u}(x,0,\omega) = \bar{u}(x,0) + \sum_{i=1}^m u_i(x,0) Y_i(\omega,0).\label{eq:MsDyBO_Initial_KLEform} \end{equation} The stochastic basis $Y_i(\omega,t)$ can be expanded as $ Y_i(\omega,t) = \sum_{j=1}^{N_p} H_j(\omega) A_{ji}(t)$ for each $i = 1,\cdots, m$. Here, $\{ H_j(\omega)\}_{j=1}^{N_p}$ is a set of tensor products of one-dimensional orthogonal polynomials, $N_p = \frac{(p+r)!}{p!r!}-1$, and $p$ is the maximum polynomial degree. Denote $\mA(t) = (A_{ji}(t))_{N_p \times m}$. The initial condition of the matrix, $\mA(0) = \big(A_{ji}(0)\big)_{N_p \times m}$, should satisfy $ \EEp{\hmpn \mA} = 0$ and $\mA^T(0) \mA(0) = \mI_{m \times m}$. For each function to be approximated (e.g.
$\bar{u}$, $u_i$ or the variance function $\text{var}(u) := \sum_{i=1}^m u_i^2$), we define the following quantity at $t= t_n$ to measure the numerical error $$e_2^n = \frac{\norm{u_{\text{f}}^n - u_{\text{approx}}^n}_{L^2(\mathcal{D})}}{\norm{u_{\text{f}}^n}_{L^2(\mathcal{D})}}, $$ where $u_{\text{f}}^n$ is the reference solution and $u_{\text{approx}}^n$ is the approximation obtained by the proposed method. In the remaining part of this paper, we refer to this quantity as $L^2$-error. \begin{example}\label{exp:1} \noindent We set the mesh size to be $H = \sqrt{2}/10$ and $h = \sqrt{2}/100$. The time step is $\Delta t = 10^{-3}$. The multiscale fluctuation is parameterized by three independent random variables ($r=3$) and the number of terms in the KL expansion is $m=4$. Next, we set the coefficients $a_i$ ($i=1,2,3$) to be \begin{align*} a_1(x_1,x_2) & = 0.04 \times \frac{2+P_1 \sin(\frac{2\pi (x_1-x_2)}{\varepsilon_1})}{2-P_1 \cos(\frac{2\pi (x_1-x_2)}{\varepsilon_1})}, & P_1 = 1.6 \quad \text{and} \quad \varepsilon_1 = 1/8, \\ a_2 (x_1,x_2) & = 0.08 \times \frac{2+P_2 \cos(\frac{2\pi x_1}{\varepsilon_2})}{2-P_2 \sin(\frac{2\pi x_2}{\varepsilon_2})}, & P_2 = 1.5 \quad \text{and} \quad \varepsilon_2 = 1/7, \\ a_3(x_1,x_2) & = 0.16 \times \frac{2+P_3 \sin(\frac{2\pi (x_1-0.5)}{\varepsilon_3})}{2-P_3 \cos(\frac{2\pi (x_2-0.5)}{\varepsilon_3})}, & P_3 = 1.4 \quad \text{and} \quad \varepsilon_3 = 1/6. \end{align*} The mean $\bar{a}$ of the multiscale coefficient is of high-contrast (see Figure \ref{fig:exp3_abar}). The source function is chosen to be $ f \equiv 1$. 
The initial conditions for the mean of the solution and the physical modes are given as follows \begin{eqnarray*} \bar{u}(x_1,x_2,t)|_{t=0} & = & 32\big(1-\cos(2\pi x_1)\big) \big(1-\cos(2\pi x_2) \big), \\ u_{1}(x_1,x_2,t)|_{t=0} & = & 24 \big(1-\cos(2\pi x_1)\big) \big(1-\cos(2\pi x_2) \big), \\ u_{2}(x_1,x_2,t)|_{t=0} & = & 16 \big(1-\cos(4\pi x_1)\big) \big(1-\cos(4\pi x_2) \big), \\ u_{3}(x_1,x_2,t)|_{t=0} & = & 8 \big(1-\cos(6\pi x_1)\big) \big(1-\cos(6\pi x_2) \big), \\ u_{4}(x_1,x_2,t)|_{t=0} & = & 4 \big(1-\cos(8\pi x_1)\big) \big(1-\cos(8\pi x_2) \big). \end{eqnarray*} \begin{figure}[htbp!] \mbox{ \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=0.9\linewidth, height=5.2cm]{figs/abar_exp1} \caption{$\bar{a}$ in Example \ref{exp:1}. (Max: \rev{1000}, Min: 4)} \label{fig:exp3_abar} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=0.9\linewidth, height=5.2cm]{figs/abar_spe10} \caption{$\bar{a}$ in Example \ref{exp:2}. (Max: 100, Min: 1)} \label{fig:exp_spe10} \end{subfigure} } \caption{The mean component of the permeability.} \label{fig:mean_exp} \end{figure} The history of the $L^2$-error is recorded in Table \ref{tab:exp3_2}. One can see that, at each reported time level, the $L^2$-errors of the approximated quantities are relatively small (less than $1\%$) when the online procedure is terminated. This shows that the proposed method can approximate the stochastic multiscale diffusion problem with good accuracy. We remark that, due to the linearity of the diffusion problem and its DyBO formulation, one may easily extend this algorithm to the case with more modes in the KL expansion. In addition, one can adopt the adaptive approach proposed in \cite{ChengHouZhang2:13} to dynamically change the number of modes in the DyBO formulation during the numerical simulation. \begin{table}[h!]
\centering \begin{tabular}{c|c|ccccc} \hline \hline function & online status & $t=0.1$ & $t=0.2$ & $t=0.4$ & $t=0.8$ & $t=1.0$ \\ \hline \multirow{2}{*}{$\bar{u}$} & S & 3.7904\% & 4.0551\% & 4.0404\% & 4.0278\% & 4.0378\% \\ & E & 0.3883\% & 0.4076\% & 0.4059\% & 0.4042\% & 0.4062\% \\ \hline \multirow{2}{*}{$u_1$} & S & 3.7449\% & 5.0027\% & 4.3672\% & 4.2385\% & 4.5720\% \\ & E & 0.4247\% & 0.4000\% & 0.3489\% & 0.3964\% & 0.3641\% \\ \hline \multirow{2}{*}{$u_2$} & S & 4.9319\% & 6.7831\% & 7.5986\% & 5.0301\% & 5.2988\% \\ & E & 0.4284\% & 0.4496\% & 0.5077\% & 0.3662\% & 0.3708\% \\ \hline \multirow{2}{*}{$u_3$} & S & 8.2879\% & 14.5263\% & 5.7078\% & 14.5705\% & 8.4522\% \\ & E & 0.5011\% & 0.5505\% & 0.4391\% & 0.5294\% & 0.4251\% \\ \hline \multirow{2}{*}{$u_4$} & S & 14.9988 \% & 12.6509\% & 11.0844\% & 22.7257\% & 20.3689\% \\ & E & 0.6415\% & 0.4641\% & 0.4586\% & 0.6168\% & 0.6943 \% \\ \hline \multirow{2}{*}{$\text{var}(u)$} & S & 6.3057\% & 7.6541\% & 6.7792\% & 6.6286\% & 6.8721\% \\ & E & 0.6795\% & 0.6729\% & 0.5870\% & 0.6181\% & 0.6258\% \\ \hline \hline \end{tabular} \caption{$L^2$-error for each function in Example \ref{exp:1}. (S: start, E: end)} \label{tab:exp3_2} \end{table} \end{example} \begin{example}\label{exp:2} We keep $H$ and $h$ the same as in Example \ref{exp:1}. The time step is still $\Delta t = 10^{-3}$. The mean permeability field $\bar{a}$ in this example is chosen from the SPE10 data set \cite{aarnes2005mixed}, which is reasonably representative of real physical applications (see Figure \ref{fig:exp_spe10}). The fluctuation part is parameterized by four independent random variables ($r=4$) and the coefficients are set as $a_i(x_1,x_2)=0.02 \times \frac{2+P_i\sin(\frac{2\pi (x_1-x_2)}{\varepsilon_i})}{2-P_i\cos(\frac{2\pi(x_1+x_2)}{\varepsilon_i})}$, $i=1,...,4$, where $[P_1,P_2,P_3,P_4]=[1.4,1.5,1.6,1.7]$ and $[\varepsilon_1,\varepsilon_2,\varepsilon_3,\varepsilon_4]=[\frac{1}{9},\frac{1}{8},\frac{1}{7},\frac{1}{6}]$.
The number of modes in the KL expansion is $m=3$ and the source function is $f \equiv 1$. The initial conditions for the mean and the physical modes are given as follows \begin{eqnarray*} \bar{u}(x_1,x_2,t)|_{t=0} & = & 4\big(1-\cos(2\pi x_1)\big) \big(1-\cos(2\pi x_2) \big), \\ u_{1}(x_1,x_2,t)|_{t=0} & = & 16 \big(1-\cos(4\pi x_1)\big) \big(1-\cos(4\pi x_2) \big), \\ u_{2}(x_1,x_2,t)|_{t=0} & = & 4 \big(1-\cos(6\pi x_1)\big) \big(1-\cos(6\pi x_2) \big), \\ u_{3}(x_1,x_2,t)|_{t=0} & = & 2 \big(1-\cos(8\pi x_1)\big) \big(1-\cos(8\pi x_2) \big). \end{eqnarray*} \begin{figure}[h!] \mbox{ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.82\linewidth, height = 4.9cm]{figs/mean_01} \caption{Fine-scale solution of the mean $\bar{u}$.} \label{fig:fs_mean} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.82\linewidth, height = 4.9cm]{figs/mean_01_ap} \caption{Multiscale approximation of the mean $\bar{u}$.} \label{fig:ms_mean} \end{subfigure} } \mbox{ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.82\linewidth, height = 4.9cm]{figs/var_01} \caption{Fine-scale solution of the variance.} \label{fig:fs_mode1} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.82\linewidth, height = 4.9cm]{figs/var_01_ap} \caption{Multiscale approximation of the variance.} \label{fig:ms_mode1} \end{subfigure} } \caption{Solution profiles at $t=0.1$ in Example \ref{exp:2}.} \label{fig:sol_profiles} \end{figure} One may notice that this problem has multiscale features driven by the mean field $\bar{a}$ and some small random perturbations. The solution profiles of the mean and the variance at $t=0.1$ are plotted in Figure \ref{fig:sol_profiles}. One can see that our method achieves a certain level of accuracy when the problem has both multiscale and random features. In both numerical experiments, only a few online enrichment iterations are required at each time level.
Meanwhile, the $L^2$-error between the multiscale solution and the fine-scale solution is less than about $2\%$ when the online procedure is terminated. We remark that the contrast value in the SPE10 model used in Example \ref{exp:2} is already scaled down by a factor of $100$. The difficulty of these kinds of stochastic multiscale problems is that when the contrast value is high (e.g. $\max_{x \in \mathcal{D}} \big(a(x)\big)\approx 10^4$ or larger), the usual computational schemes for UQ problems fail to obtain a good approximation, even though the random perturbation is small. We shall develop a more robust method to compute stochastic multiscale problems with higher contrast values in our subsequent research. \end{example} \begin{example}\label{exp:3} \rev{In this example, we compare the efficiency of the proposed method with that of the fine-scale method in terms of the CPU time. First, we set the mesh size to be $H = \sqrt{2}/10$ and $h = \sqrt{2}/400$. The time step is $\Delta t = 1/80$ and the final time is $T=1$. We keep the setting of random variables and initial conditions the same as in Example \ref{exp:1}. Table \ref{tab:exp_CPU_1} records the $L^2$-error at certain time levels using online adaptivity. We remark that when the multiscale space contains sufficiently many initial local basis functions, only $1$-$2$ iterations are required to achieve this accuracy. Furthermore, one may achieve moderate computational savings with the proposed multiscale solver. From the data in Table \ref{tab:exp_CPU_2}, one may observe that the proposed DyBO-GMsFEM solver outperforms the fine-scale solver with a $20$-fold speed-up in terms of CPU time.} \begin{table}[h!]
\centering \begin{tabular}{c|ccccc} \hline \hline function & $t=1/8$ & $t=1/4$ & $t=1/2$ & $t=3/4$ & $t=1$ \\ \hline $\bar{u}$ & 0.2286\% & 0.2317\% & 0.2621\% & 0.1922\% & 0.3450\% \\ $u_1$ & 0.2416\% & 0.2758\% & 0.2659\% & 0.2497\% & 0.2478\% \\ $u_2$ & 0.2774\% & 0.3218\% & 0.3403\% & 0.2975\% & 0.2791\% \\ $u_3$ & 0.5053\% & 0.3110\% & 0.3701\% & 0.2835\% & 0.4562\% \\ $u_4$ & 0.7377\% & 0.2931\% & 0.4040\% & 0.3612\% & 0.4286\% \\ $\text{var}(u)$ & 0.3911\% & 0.4730\% & 0.4276\% & 0.4219\% & 0.4131\% \\ \hline \hline \end{tabular} \caption{$L^2$-error for each function in Example \ref{exp:3}; online process terminated.} \label{tab:exp_CPU_1} \end{table} \begin{table}[h!] \centering \begin{tabular}{l|cc} \hline \hline & \multicolumn{2}{c}{CPU times (s)} \\ \hline function & Fine-scale solver & Proposed solver \\ \hline Mean $\bar{u}$ & 44.1012 & 2.0763 \\ Modes $u_i$ & 170.9146 & 9.4423 \\ Total & $215.0158$ & $11.5186$ \\ \hline \hline \end{tabular} \caption{CPU times of the fine-scale solver and the proposed solver. ($H = 1/10$, $h = 1/400$)} \label{tab:exp_CPU_2} \end{table} \end{example} \section{Conclusion} \label{sec:conclusion} \noindent In this paper, we proposed a new framework combining the DyBO formulation and the online adaptive GMsFEM to solve time-dependent PDEs with multiscale and random features. For a given multiscale PDE with random input, one can derive its corresponding DyBO formulation under the assumption that the solution has a low-dimensional structure in the sense of the Karhunen-Lo\`{e}ve expansion. The DyBO method enables one to faithfully track the KL expansion of the SPDE solution. The mean of the solution and the physical modes in the truncated KL expansion are deterministic and time-dependent; they were solved using the GMsFEM with an implicit Euler scheme. Moreover, at each time level, the online construction was applied in order to reduce the $L^2$-error rapidly.
For the stochastic modes of the solution in the truncated KL expansion, we projected them onto a set of polynomial chaos basis functions to obtain an ODE system, which could be solved using existing solvers. Thanks to the approximation property of the multiscale basis functions obtained using the GMsFEM, the degrees of freedom of our new method are relatively small compared with the original DyBO method. Therefore, our new method provides considerable computational savings over the original DyBO method. We presented several numerical examples for 2D stochastic parabolic PDEs with multiscale coefficients to demonstrate the accuracy and efficiency of our proposed method. \rev{One may obtain significant savings in computation with the proposed multiscale solver without losing the accuracy of the approximations}. We point out that the stochastic multiscale problem is still very challenging when the contrast value of the coefficient is very large, which will be the subject of our subsequent research. \section*{Acknowledgements} \noindent The research of E. Chung is supported by Hong Kong RGC General Research Fund (Project 14304217). The research of Z. Zhang is supported by the Hong Kong RGC grants (Projects 27300616, 17300817, and 17300318), National Natural Science Foundation of China (Project 11601457), and Seed Funding Programme for Basic Research (HKU). S. Pun would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme {\it Uncertainty quantification for complex systems: theory and methodologies} when work on this paper was undertaken. This workshop was supported by EPSRC Grant Numbers EP/K032208/1 and EP/R014604/1. \appendix \section{Derivations of the DyBO Formulation for the multiscale SPDE} \label{cha:dybo-formulation-MsSPDE} \noindent In this appendix, we provide the details of the derivations of the DyBO-gPC formulation of the multiscale SPDE \eqref{eq:MsDyBO_Model}.
Substituting the KL expansion of $u$ (see Eq.~\eqref{KLE}) into Eq.~\eqref{eq:MsDyBO_Model}, we get \begin{align*} \oL u & = \nabla\cdot((\bar{a}+\tilde{a})(\nabla \bar{u} + \nabla \vU \mA^{T} \hmpn^T )) + f \\ & = \nabla\cdot(\bar{a} \nabla \bar{u}) + \nabla\cdot(\tilde{a} \nabla \bar{u}) + \nabla\cdot(\bar{a} \nabla \vU \mA^{T} \hmpn^T) + \nabla\cdot(\tilde{a} \nabla \vU \mA^{T} \hmpn^T) + f. \end{align*} Taking expectations on both sides yields \begin{align*} \EEp{\oL u}& = \nabla\cdot(\bar{a}\nabla \bar{u}) + \nabla\cdot(\EEp{\tilde{a}\nabla \vU \mA^{T} \hmpn^T})+ f, \end{align*} where we have used the facts that $\EEp{\tilde{a}}=0$ and $\EEp{\hmpn}=\textbf{0}$. Then, we obtain \begin{align*} \oLt u & = \oL u - \EEp{\oL u} \\ & = \nabla\cdot(\tilde{a} \nabla \bar{u}) + \nabla\cdot(\bar{a} \nabla \vU \mA^{T} \hmpn^T) + \nabla\cdot(\tilde{a} \nabla \vU \mA^{T} \hmpn^T) - \nabla\cdot(\EEp{\tilde{a}\nabla \vU \mA^{T} \hmpn^T}) \end{align*} In addition, we compute some related terms as follows \begin{align*} \EEp{\oLt u \hmpn} & = \nabla\cdot(\EEp{\tilde{a} \nabla \bar{u}\hmpn}) + \nabla\cdot(\bar{a} \nabla \vU \mA^{T}) + \nabla\cdot( \EEp{ \tilde{a} \nabla \vU \mA^{T} \hmpn^T \hmpn}) \end{align*} and \begin{align*} \inp{\vU^T}{\EEp{\oLt u \hmpn}}_{m \times N_p} &= \inp{\vU^T}{ \nabla\cdot(\EEp{\tilde{a}\nabla\bar{u}\hmpn})+\nabla\cdot(\bar{a} \nabla \vU \mA^{T}) + \nabla\cdot( \EEp{ \tilde{a} \nabla \vU \mA^{T} \hmpn^T \hmpn}) } \\ \end{align*} From Eq.~\eqref{eq:DyBO:gPC}, we obtain the DyBO-gPC formulation for the multiscale SPDE \eqref{eq:MsDyBO_Model} \begin{align*} \diffp{\bar{u}}{t} & = \nabla\cdot(\bar{a}\nabla \bar{u}) + \nabla\cdot(\EEp{\tilde{a}\nabla \vU \mA^{T} \hmpn^T})+ f, \\ \diffp{\vU}{t} &= -\vU\mD^T + \nabla\cdot(\EEp{\tilde{a} \nabla \bar{u}\hmpn})\mA + \nabla\cdot(\bar{a} \nabla \vU) + \nabla\cdot( \EEp{ \tilde{a} \nabla \vU \mA^{T} \hmpn^T \hmpn})\mA, \\ \diffd{\mA}{t} &= -\mA \mC^T + \inp{ \nabla\cdot(\EEp{\hmpn^{T}\tilde{a}\nabla\bar{u}})+ 
\nabla\cdot(\bar{a} \nabla \mA\vU^{T}) + \nabla\cdot( \EEp{ \tilde{a} \hmpn^T \hmpn \mA \nabla \vU^{T} }) }{\vU}\mLa_{\vU}^{-1} , \end{align*} where matrices $\mC$ and $\mD$ can be solved from \eqref{eq:CDSystem} with $G_{*}$ \begin{align*} G_{*} = \mLa_{\vU}^{-1}\inp{\vU^T}{\EEp{\oLt u \hmpn}}\mA. \end{align*} and we have used that $\mA^{T}\mA= \mI_{m\times m}$. \section{The implementation of DyBO-GMsFEM}\label{app:implement} \noindent In this section, we present the details of the implementation of our complete algorithm. We denote $V_{\text{off}} = \text{span} \{ \eta_i: i = 1,\cdots, N_d \}$ and the row vector $\mathcal{C} = \mathcal{C}(x) = \big (\eta_1(x),\cdots, \eta_{N_d}(x)\big)$, where $N_d = \text{dim}(V_{\text{off}})$. For each time $t = t_n$, we seek the approximations for the functions $\bar{u}$ and $\vU$ using the multiscale basis functions and the following representations hold $$ \bar{u}(x,t) = \mathcal{C}(x) \hat{u}_0(t), \quad \hat{u}_0(t) \in \mathbb{R}^{N_d},$$ $$ \vU(x,t) = \mathcal{C}(x) \hat{U}_m(t), \quad \hat{U}_m(t) := (\hat{u}_1(t),\cdots,\hat{u}_m(t)) \in \mathbb{R}^{N_d\times m}.$$ Then, the variational form of \eqref{eqn:DyBO-gPC-ubar} becomes \begin{equation} \oM \frac{d\hat{u}_0}{dt} = -\oS_0 \hat{u}_0 - \oS_i \hat{U}_m \mA^T \EEp{\xi_i \hmpn^T} + \hat{f}, \label{eq:DyBO-GMsFEM-ubar} \end{equation} where $$ \mathcal{M} = ( \inp{\eta_j}{\eta_k} )\in \mathbb{R}^{N_d \times N_d}, \quad \mathcal{S}_0 = (\inp{\bar{a} \eta_j}{\eta_k} )\in \mathbb{R}^{N_d \times N_d},$$ $$ \mathcal{S}_i = ( \inp{a_i \eta_j}{\eta_k})\in \mathbb{R}^{N_d \times N_d}, ~ ~ i=1,...,r, \quad \hat{f} = (\inp{f}{\eta_1} \cdots \inp{f}{\eta_{N_d}})^T \in \mathbb{R}^{N_d}.$$ Similarly, the variational form of \eqref{eqn:DyBO-gPC-U} becomes \begin{equation} \oM \frac{d\hat{U}_m}{dt} = - \oM\hat{U}_m \mD^T - \oS_i\hat{u}_0 \EEp{\xi_i \hmpn}\mA - \oS_0 \hat{U}_m - \oS_i \hat{U}_m \mA^T \EEp{\xi_i \hmpn^T \hmpn }\mA, \label{eq:DyBO-GMsFEM-U} \end{equation} where the 
Einstein notation is used. Next, we apply the implicit Euler method to approximate the time derivatives in \eqref{eq:DyBO-GMsFEM-ubar} and \eqref{eq:DyBO-GMsFEM-U}. Combining with the variational forms, we obtain the following algebraic equations at each fixed time $t = t_n = n\Delta t$, $n = 1,\cdots, N$ \begin{align} \oS_0 \hat{u}_0^n + c \oM \hat{u}_0^n & = \mathcal{G}_1^{n-1}, \label{eq:DyBO-GMsFEM-ubar-Euler} \\ \oS_0 \hat{U}_i^n + c \oM \hat{U}_i^n & = \mathcal{G}_2^{n-1}, \quad i=1,...,m, \label{eq:DyBO-GMsFEM-U-Euler} \end{align} where $ c = 1/\Delta t$ and the right hand sides $\mathcal{G}_1$ and $\mathcal{G}_2$ are defined as follows \begin{align*} \mathcal{G}_1^{n-1} & = c \oM \hat{u}_0^{n-1} - \oS_i \hat{U}_m^{n-1} \mA_{n-1}^T \EEp{\xi_i \hmpn^T} + \hat{f}, \\ \mathcal{G}_2^{n-1} & = c \oM \hat{U}_m^{n-1} - \oM \hat{U}_m^{n-1} \mD_{n-1}^T - \oS_i \hat{u}_0^{n-1}\EEp{\xi_i \hmpn}\mA_{n-1} -\oS_i \hat{U}_m^{n-1}\mA_{n-1}^T \EEp{\xi_i \hmpn^T \hmpn}\mA_{n-1}, \end{align*} where $\mA_{n-1} = \mA(t_{n-1})$, $\hat{U}_m^n = \hat{U}_m(t_n)$, $\hat{u}_0^n = \hat{u}_0(t_n)$, and $\mD_n = \mD(t_n)$. Using integration by parts, one simplifies the ODE system for \rev{$\mA(t) = (\mA_1(t), \cdots , \mA_m(t)) \in \mathbb{R}^{N_p \times m}$} as follows \begin{align} \frac{d\mA}{dt} = -\mA \mC^T - \big( \EEp{\xi_i \hmpn^T}\hat{u}_0^T \oS_i \hat{U}_m + \mA \hat{U}_m^T \oS_0 \hat{U}_m + \EEp{\xi_i \hmpn^T \hmpn} \mA \hat{U}_m^T \oS_i \hat{U}_m \big) \mLa_{\vU}^{-1}.
\label{eq:DyBO-GMsFEM-A} \end{align} \rev{Here, $\mA_i(t) = (\mA_{\minda i} (t) )_{\minda \in \sJ_r^p} \in \mathbb{R}^{N_p \times 1}$, $i = 1,\cdots, m$ represent the stochastic components of the solution, which change with respect to time.} Then, we use the implicit Euler scheme to approximate the time derivative and get \begin{align} \mA_{n} = \mA_{n-1} - \Delta t \big( \mA_{n-1} \mC_{n-1}^T + \mathcal{G}_3^{n-1}\big), \label{eq:DyBO-GMsFEM-A-Euler} \end{align} where $\mC_{n-1} = \mC(t_{n-1})$ and $$ \mathcal{G}_3^{n-1} =\Big( \EEp{\xi_i \hmpn^T}(\hat{u}_0^{n-1})^T \oS_i \hat{U}_m^{n-1} + \mA_{n-1}(\hat{U}_m^{n-1})^T \oS_0 \hat{U}_m^{n-1} + \EEp{\xi_i \hmpn^T \hmpn} \mA_{n-1} (\hat{U}_m^{n-1})^T \oS_i \hat{U}_m^{n-1} \Big) \mLa_{\vU}^{-1}.$$ \noindent Overall, we solve the following discrete system to obtain $\hat{u}_0^n$, $\hat{U}_m^n$, and $\mA_{n}$ at each time $t=t_n$, $n=1,\cdots,N$, \begin{align} \oS_0 \hat{u}_0^n + c \oM \hat{u}_0^n & = \mathcal{G}_1^{n-1}, \label{eqn:final_1st}\\ \oS_0 \hat{U}_m^n + c \oM \hat{U}_m^n & = \mathcal{G}_2^{n-1}, \label{eqn:final_2nd}\\ \mA_{n} & = \mA_{n-1} - \Delta t \big( \mA_{n-1} \mC_{n-1}^T + \mathcal{G}_3^{n-1}\big), \label{eqn:final_3rd} \end{align} where the matrices $\mC_{n-1}$ and $\mD_{n-1}$ in \eqref{eqn:final_1st}-\eqref{eqn:final_3rd} can be computed using the system \eqref{eq:CDSystem} with $G_{*}(\bar{u},\vU,\vY) = -\mLa_{\vU}^{-1} \big( \hat{U}_m^T \oS_i^T \hat{u}_0 \EEp{\xi_i \hmpn} + \hat{U}_m^T \oS_0 \hat{U}_m \mA^T + \hat{U}_m^T \oS_i^T \hat{U}_m \mA^T \EEp{\xi_i \hmpn^T \hmpn}\big) \mA $. To improve the accuracy of the spatial approximation, one may perform the online adaptive enrichment at each time level, adjusting the dimension of the multiscale space. See \cite{chung2015online} for more details of the online basis construction using GMsFEM. \bibliographystyle{plain} \bibliography{mypaperref} \end{document}
TITLE: Limit $\lim_{x\to\infty} x(\arctan(a^2x)-\arctan(ax))$ QUESTION [0 upvotes]: I have a limit $\lim_{x\to\infty} x(\arctan(a^2x)-\arctan(ax))$ and I know the solution $\frac{a-1}{a^2}$, but I don't have any idea how to calculate this limit, or at least how to start. Any idea? REPLY [2 votes]: I assume that we are near $+\infty$. If $ a=0 $, the limit is zero. If $ a<0 $, the limit is $$+\infty\Big(\frac{\pi}{2}-\big(-\frac{\pi}{2}\big)\Big)=+\infty$$ If $ a>0$, then we use the well-known identity, for $X>0 \;:$ $$\arctan(X)=\frac{\pi}{2}-\arctan(\frac 1X)$$ So, we want $$\lim_{x\to+\infty}x\Bigl(\arctan(\frac{1}{ax})-\arctan(\frac{1}{a^2x})\Bigr)$$ We use the fact that, near $+\infty$, $$\arctan(\frac 1X)=\frac 1X(1+\epsilon(X))$$ thus $$\arctan(\frac{1}{ax})-\arctan(\frac{1}{a^2x})=\frac 1x\Big(\frac 1a-\frac{1}{a^2}\Big)+\frac 1x\epsilon(x)$$ The limit is then $$\frac 1a-\frac{1}{a^2}=\frac{a-1}{a^2}$$
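A quick numerical sanity check of the claimed limit (not part of the original answer; the values of $a$ below are just examples):

```python
import math

def f(x, a):
    # the expression whose limit we take as x -> +infinity
    return x * (math.atan(a * a * x) - math.atan(a * x))

# claimed limit is (a - 1) / a^2: 1/4 for a = 2, 2/9 for a = 3
for a in (2.0, 3.0):
    print(a, f(1e6, a), (a - 1) / a**2)
```

Evaluating at $x = 10^6$ already agrees with $(a-1)/a^2$ to about six decimal places, consistent with the $\epsilon(x)/x$ error term in the answer.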
\section{Some Properties of the Tutte Polynomial} \label{sec:properties} There is a large and ever-growing body of information about properties of the Tutte polynomial. Here, we present some of them, again with an emphasis on illustrating general techniques for extracting information from a graph polynomial. \subsection{The Beta Invariant} Even a single coefficient of a graph polynomial can encode a remarkable amount of information. It may characterize entire classes of graphs and have a number of combinatorial interpretations. A noteworthy example is the $\beta$ invariant\index{beta invariant@$\beta$ invariant}, introduced (in the context of matroids) by Crapo in~\cite{Cra67}. \begin{definition} Let $G=(V,E)$ be a graph with at least two edges. The $\beta$ invariant of $G$ is \[ \beta \left( G \right) = \left( { - 1} \right)^{r\left( G \right)} \sum\limits_{A \subseteq E} {\left( { - 1} \right)^{\left| A \right|} r\left( A \right)}. \] \end{definition} The beta invariant is a deletion/contraction invariant, that is, it satisfies~(\ref{eq:TG_invariant_del_con}). However, the $\beta$ invariant is zero if and only if $G$ either has loops or is not two-connected. Thus, the $\beta$ invariant is not a Tutte-Grothendieck invariant in the sense of Section~\ref{sec:IV}. While the $\beta$ invariant may be defined to be 1 for a single edge or a single loop, it still will not satisfy~(\ref{eq:TG_invariant_prod}), and it is not multiplicative with respect to disjoint unions and one-point joins. Nevertheless, the $\beta$ invariant derives from the Tutte polynomial. \begin{theorem} If $G$ has at least two edges, and we write $T(G;x,y)$ in the form $\sum{t_{ij} x^i y^j}$, then $t_{0,1} = t_{1,0} $, and this common value is equal to the $\beta$ invariant. \end{theorem} \begin{proof} This can easily be proved by induction, using deletion/contraction for an ordinary edge, and otherwise noting that the $\beta$ invariant is zero if the graph has loops or is not two-connected.
\qed\end{proof} The $\beta$ invariant does not change with the insertion of parallel edges or edges in series. Thus, homeomorphic graphs have the same $\beta$ invariant. The $\beta$ invariant is also occasionally called the chromatic invariant, because $\chi '\left( {G;1} \right) = \left( { - 1} \right)^{r\left( G \right)} \beta \left( G \right)$, where $\chi(G;x)$ is the chromatic polynomial. \begin{definition} A series-parallel\index{graph!series-parallel} graph is a graph constructed from a digon (two vertices joined by two edges in parallel) by repeatedly adding an edge in parallel to an existing edge, or adding an edge in series with an existing edge by subdividing the edge. Series-parallel graphs are loopless multigraphs, and are planar. \end{definition} Brylawski, \cite{Bry71} and also~\cite{Bry82}, in the context of matroids, showed that the $\beta$ invariant completely characterizes series-parallel graphs. \begin{theorem} $G$ is a series-parallel graph if and only if $\beta \left( G \right) = 1$. \end{theorem} Using the deletion/contraction definition of the Tutte polynomial, it is quite easy to show that the $\beta$ invariant is unchanged by adding an edge in series or in parallel to another edge in the graph. This, combined with the $\beta$ invariant of a digon being one, suffices for one direction of the proof. The difficulty is in the reverse direction, and the proof is provided in~\cite{Bry72} by a set of equivalent characterizations for series-parallel graphs, one by excluded minors and another that the $\beta$ invariant is 1 for series-parallel graphs. For graphs, the excluded minor is $K_4$ (cf. Duffin~\cite{Duf65} and Oxley~\cite{Oxl82}). Succinct proofs may also be found in Zaslavsky~\cite{Zas87}. 
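As a computational aside (not part of the original development), the subset expansion in the definition is easy to check by brute force on small graphs. The sketch below, in Python with function names of our own choosing, confirms that $\beta=1$ for the triangle (a series-parallel graph), that $\beta=0$ for a path (which is not 2-connected), and that $\beta(K_4)=2$:

```python
from itertools import combinations

def rank(n, edges):
    # rank r(A) = n - (number of components of (V, A)), via union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return n - comps

def beta(n, edge_list):
    # beta(G) = (-1)^{r(G)} sum over A of (-1)^{|A|} r(A)
    total = sum((-1) ** k * rank(n, A)
                for k in range(len(edge_list) + 1)
                for A in combinations(edge_list, k))
    return (-1) ** rank(n, edge_list) * total

print(beta(3, [(0, 1), (1, 2), (0, 2)]))  # triangle: 1
print(beta(3, [(0, 1), (1, 2)]))          # path: 0
print(beta(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))  # K4: 2
```

The cost is exponential in the number of edges, so this is purely a verification device for small examples.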
The fundamental observation, which may be applied to other situations, is that there is a graphical element, here an edge which is in series or parallel with another edge, which behaves in a tractable way with respect to the computation methods of the polynomial. The $\beta$ invariant has been explored further, for example by Oxley in~\cite{Oxl82} and by Benashski, Martin, Moore and Traldi in~\cite{BMMT95}. Oxley characterized 3-connected matroids with $\beta \leqslant 4$, and a complete list of all simple 3-connected graphs with $\beta \leqslant 9$ is given in~\cite{BMMT95}. A wide variety of combinatorial interpretations have also been found for the $\beta$ invariant. Most interpretations involve objects other than graphs, but we give two graphical interpretations below. The first is due to Las Vergnas~\cite{Las84}. \begin{theorem} Let $G$ be a connected graph. Then $2\beta \left( G \right)$ gives the number of orientations of $G$ that have a unique source and sink, independent of their relative locations. \end{theorem} This result is actually a consequence of a more general theorem giving an alternative formulation of the Tutte polynomial, which will be discussed further in Subsection~\ref{subsec:coefficient_relations}. We also have the following result from~\cite{E-M04}. \begin{theorem} Let $G=(V,E)$ be a connected planar graph with at least two edges. Then \[ \beta= \frac{1}{2}\sum {\left( { - 1} \right)^{c\left( E\setminus P \right) + 1} }, \] where the sum is over all closed trails $P$ in $\vec G_m $ which visit all its vertices at least once. \end{theorem} Like the interpretations for $T(G;x,x)$ given in Subsection~\ref{subsec:x=y}, this result follows from the Tutte polynomial's relation to the Martin polynomial. Graphs in a given class may have $\beta$ invariants of a particular form. McKee~\cite{McK01} provides an example of this in dual-chordal graphs.
A \emph{dual-chordal graph}\index{graph!dual-chordal} is 2-connected, 3-edge-connected, such that every cut of size at least four creates a bridge. A \emph{$\theta$ graph} has two vertices with three edges in parallel between them. A dual-chordal graph has the property that it may be reduced to a $\theta$ graph by repeatedly contracting induced subgraphs of the following forms: digons, triangles, and $K_{2,3}$'s, where in all cases each vertex has degree 3 in $G$. \begin{theorem} If $G$ is a dual-chordal graph, then $\beta(G)=2^a5^b$. Here, $a$ is the number of triangles in $G$, where each vertex has degree 3 in $G$, that are contracted in reducing $G$ to a $\theta$ graph. Similarly, $b$ is the number of induced $K_{2,3}$ in $G$, again where each vertex has degree 3 in $G$, that are contracted in reducing $G$ to a $\theta$ graph. \end{theorem} The proof follows from considering the acyclic orientations of $G$ with unique source and sink and applying the results of Green and Zaslavsky~\cite{GZ83}. \subsection{Coefficient Relations} \label{subsec:coefficient_relations} After observing that $t_{1,0} = t_{0,1} $ in the development of the $\beta$ invariant, it is natural to ask if there are similar relations among the coefficients $t_{ij} $ of the Tutte polynomial $T(G;x,y) = \sum {t_{ij} x^i y^j } $ and whether there are combinatorial interpretations for these coefficients as well. The answer is yes, although less is known. The most basic fact, and one which is not obvious from the rank-nullity formulation of Definition~\ref{def:rank_generating_expansion}, is that all the coefficients of the Tutte polynomial are non-negative. That $t_{1,0} = t_{0,1} $ is one of an infinite family of relations among the coefficients of the Tutte polynomial. Brylawski~\cite{Bry82} has shown the following: \begin{theorem}\label{coeff_rel} If $G$ is a graph with at least $m$ edges, then \[ \sum_{i=0}^{k} {\sum_{j=0}^{k-i} {(-1)^j \binom{k-i}{j} t_{ij} }}= 0, \] for $k=0,1,\ldots, m-1$.
\end{theorem} Additionally, Las Vergnas in~\cite{Las84} found combinatorial interpretations in the context of oriented matroids for these coefficients by determining yet another generating function formulation for the Tutte polynomial. Gioan and Las Vergnas~\cite{GLV05} give the following specialization to orientations of graphs. \begin{theorem} Let $G$ be a graph with a linear ordering of its edges. Let $o_{i,j}$ be the number of orientations of $G$ such that the number of edges that are smallest on some consistently directed cocycle is $i$ and the number of edges that are smallest on a consistently directed cycle is $j$. Then \[ T(G;x,y) = \sum\limits_{i,j} {o_{i,j} 2^{-(i+j)} x^i y^j } , \] and thus $t_{ij} = o_{i,j}/(2^{i + j})$. \end{theorem} The proof is modeled on Tutte's proof that the $t_{ij}$'s are independent of the ordering of the edges by using deletion/contraction on the greatest edge in the ordering. Another natural question is to ask if these coefficients are unimodular or perhaps log concave, for example in either $x$ or $y$. This was originally conjectured to be true (see Seymour and Welsh~\cite{SW75}, Tutte~\cite{Tut84}), but Schw\"arzler~\cite{Sch93} found a counterexample, the graph in Fig.~\ref{Fig:counterexample}. This counterexample can be extended to an infinite family of counterexamples by increasing the number of edges parallel to $e$ or $f$. \begin{figure}[hbtp] \begin{center} \includegraphics[scale=0.75]{counterexample.pdf} \caption{A counterexample to the conjecture}\label{Fig:counterexample} \end{center} \end{figure} The unimodularity question for the chromatic polynomial, raised by Read in~\cite{Rea68}, is still unresolved. \subsection{Zeros of the Tutte Polynomial} \label{subsec:zeros} Because the Tutte polynomial is after all a polynomial, it is very natural to ask about its zeros and factorizations. The importance of its zeros is magnified by their interpretations.
For example, since $T(G;0,y)$ is essentially the flow polynomial, a root of the form $(0, 1-\lambda)$, for $\lambda$ a positive integer, means that $G$ does not have a nowhere zero flow for any Abelian group of order $\lambda$. Similarly, since $T(G;x,0)$ is essentially the chromatic polynomial, a root of the form $( 1-\lambda,0)$ with $\lambda$ a positive integer, means that $G$ cannot be properly colored with $\lambda$ colors. In particular, a direct proof of the four-color theorem would follow if it could be shown that the Tutte polynomial has no zero of the form $(-3,0)$ on the class of planar graphs. Of course, because of the duality between the flow and chromatic polynomials, results on the zeros of the one inform the other, and vice versa. Jackson~\cite{Jac03} surveys zeros of both the chromatic and the flow polynomials. As we will see in detail in the next chapter, the chromatic polynomial has an additional interpretation as the zero-temperature antiferromagnetic Potts model of statistical mechanics. In this context, its zeros correspond to numbers of spins for which the ground state degeneracy function may be nonanalytic. This has led to research into its zeros by theoretical physicists as well as mathematicians. Traditionally, the focus from a graph theory perspective was on positive integer roots of the chromatic polynomial, corresponding to a graph not being properly colorable with $q$ colors. In statistical mechanics, however, the relevant quantity involves the limit of an increasing family of graphs as the number, $n$, of vertices goes to infinity. This shifted the focus to the complex roots of the chromatic polynomial, since the sequence of complex roots as $n \to \infty $ may have an accumulation point on the real axis. Because of this, a significant body of work has emerged in recent years devoted to clearing regions of the complex plane (in particular regions containing intervals of the real axis) of roots of the chromatic polynomial.
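The coloring interpretation above is easy to test computationally on a small example. The sketch below (our own illustration, feasible only for tiny graphs) evaluates the chromatic polynomial through the standard translation $\chi(G;\lambda) = (-1)^{r(G)} \lambda^{c(G)} T(G;1-\lambda,0)$, with $T$ computed from the rank-nullity subset expansion, and compares it against a brute-force count of proper colorings of the triangle:

```python
from itertools import combinations, product

def rank(n, edges):
    # r(A) = n - (number of components of (V, A)), via union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return n - comps

def tutte(n, edge_list, x, y):
    # rank-nullity expansion: T(G;x,y) = sum_A (x-1)^{r(E)-r(A)} (y-1)^{|A|-r(A)}
    rE = rank(n, edge_list)
    return sum((x - 1) ** (rE - rank(n, A)) * (y - 1) ** (k - rank(n, A))
               for k in range(len(edge_list) + 1)
               for A in combinations(edge_list, k))

def chromatic(n, edge_list, lam):
    # chi(G;lam) = (-1)^{r(G)} lam^{c(G)} T(G; 1-lam, 0)
    rE = rank(n, edge_list)
    c = n - rE  # number of connected components
    return (-1) ** rE * lam ** c * tutte(n, edge_list, 1 - lam, 0)

def count_colorings(n, edge_list, lam):
    # brute-force count of proper colorings with lam colors
    return sum(all(col[u] != col[v] for u, v in edge_list)
               for col in product(range(lam), repeat=n))

tri = [(0, 1), (1, 2), (0, 2)]
print([chromatic(3, tri, lam) for lam in range(5)])  # [0, 0, 0, 6, 24]
```

The integer roots at $\lambda = 0, 1, 2$ correspond precisely to the triangle not being properly colorable with fewer than three colors.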
Results showing that certain intervals of the real axis and certain complex regions are free of zeros of chromatic polynomials include those of Woodall~\cite{Woo92}, Jackson~\cite{Jac93}, Shrock and Tsai~\cite{ST97a,ST97b}, Thomassen~\cite{Tho97}, Sokal \cite{Sok01b}, Procacci, Scoppola, and Gerasimov~\cite{PSG03}, Choe, Oxley, Sokal, and Wagner~\cite{COSW04}, Borgs~\cite{Bor06}, and Fernandez and Procacci~\cite{FP}. One particular question concerns the maximum magnitude of a zero of a chromatic polynomial and of zeros comprising region boundaries in the complex plane as the number of vertices $n \to \infty $. An upper bound is given in~\cite{Sok01b}, depending on the maximal vertex degree. There are, however, families of graphs where both of these magnitudes are unbounded (see Read and Royle~\cite{RR91}, Shrock and Tsai~\cite{ST97a,ST98}, Brown, Hickman, Sokal and Wagner~\cite{BHSW01}, and Sokal~\cite{Sok04}). For recent discussions of some relevant research directions concerning zeros of chromatic polynomials and properties of their accumulation sets in the complex plane, as well as approximation methods, see, e.g., Shrock and Tsai~\cite{ST97b}, Shrock~\cite{Shr01}, Sokal~\cite{Sok01a, Sok01b}, Chang and Shrock~\cite{CS01b}, Chang, Jacobsen, Salas, and Shrock~\cite{CJSS04}, Choe, Oxley, Sokal, and Wagner~\cite{COSW04}, Dong and Koh~\cite{DK04}, and more recently Royle~\cite{Roya, Royb}. If $G$ is a graph with chromatic number $k+1$, then $\chi(G;x)$ has integer roots at $0,1,\ldots, k$. Thus, the chromatic polynomial of $G$ can be written as \[ \chi(G;x)=x^{a_0}(x-1)^{a_1}\cdots(x-k)^{a_{k}}q(x), \] where $a_0,\ldots,a_{k}$ are integers and $q(x)$ is a polynomial with no integer roots in the interval $[0,k]$. In contrast to this we have the following result of Merino, de Mier and Noy~\cite{MMN01}. \begin{theorem}\label{irreducible} If $G$ is a 2-connected graph, then $T(G;x,y)$ is irreducible in $\mathbb{Z}[x,y]$. 
\end{theorem} The proof is quite technical; it relies heavily on Theorem~\ref{coeff_rel} and on the fact that $\beta(G)\neq 0$ if and only if $G$ has no loops and is 2-connected. If $G$ is not 2-connected, then $T(G;x,y)$ can be factored. From Proposition~\ref{product-formula} we get that if $G$ is a disconnected graph with connected components $G_1, \ldots, G_{\kappa}$, then $T(G;x,y)=\prod_{i=1}^{\kappa}T(G_i;x,y)$. So let us consider the case where $G$ is connected but not 2-connected. One of the basic properties mentioned in~\cite{BO92} is that $y^{s}|T(G;x,y)$ if and only if $G$ has $s$ loops. Thus, let us focus on loopless connected graphs that are not 2-connected. It is well-known that such graphs have a decomposition into their blocks, see for example~\cite{Bol98}. A \emph{block of} \index{graph!block of} a graph $G$ is either a bridge or a maximal 2-connected subgraph. If two blocks of $G$ intersect, they do so in a cut vertex. By Theorem~\ref{irreducible} and Proposition~\ref{product-formula} we get the following. \begin{corollary} If $G$ is a loopless connected graph that is not 2-connected with blocks $H_1,\ldots, H_p$, then the factorization of $T(G;x,y)$ in $\mathbb{Z}[x,y]$ is exactly \[ T(G;x,y)=T(H_1;x,y)\cdots T(H_p;x,y). \] \end{corollary} \subsection{Derivatives of the Tutte Polynomial} \label{subsec:derivatives} It is also natural to differentiate the Tutte polynomial and to ask for combinatorial interpretations of its derivatives. For example, Las Vergnas~\cite{Las} has found the following combinatorial interpretation of the derivatives of the Tutte polynomial. It first requires a slight generalization of the notions of internal and external activities given in Subsection~\ref{subsec:trees_expansion}. \begin{definition}\index{active edge!externally} \index{active edge!internally} Let $G=(V,E)$ be a graph with a linear order on its edges, and let $A \subseteq E$.
An edge $e \in A$ and a cut $C$ are internally active with respect to $A$ if $e \in C \subseteq \left( {E\backslash A} \right) \cup \{ e\} $ and $e$ is the smallest element in $C$. Similarly, an edge $e \in E\backslash A$ and a cycle $C$ are externally active with respect to $A$ if $e \in C \subseteq A \cup \{ e\} $ and $e$ is the smallest element in $C$. \end{definition} In the case that $A$ is a spanning tree, this reduces to the previous definitions of internally and externally active. \begin{theorem} Let $G$ be a graph with a linear ordering on its edges. Then \[ \frac{{\partial ^{p + q} }} {{\partial x^p \partial y^q }} T(G;x,y) = p!\,q!\,\sum {x^{\operatorname{in}(A)} y^{\operatorname{ex}(A)} }, \] where the sum is over all subsets $A$ of the edge set of $G$ such that $r\left( G \right) - r\left( A \right) = p$ and $\left| A \right| - r\left( A \right) = q$, and where $\operatorname{in}( A )$ is the number of internally active edges with respect to $A$, and $\operatorname{ex}(A)$ is the number of externally active edges with respect to $A$. \end{theorem} The proof begins by differentiating the spanning tree definition of the Tutte polynomial, Definition~\ref{def:trees_expansion}, which gives a sum over $i$ and $j$ restricted by $p$ and $q$. This is followed by showing that the coefficients of $x^{i - p} y^{j - q}$ enumerate the edge sets described in the theorem statement. The enumeration comes from examining, for each subset $A$ of $E$, the set of $e \in E\backslash A$ such that there is a cut-set of $G$ contained in $E\backslash A$ with $e$ as the smallest element (and dually for cycles). The Tutte polynomial along the line $x=y$ is a polynomial in one variable that, for planar graphs, is related to the Martin polynomial via a medial graph construction. From this relationship, \cite{E-M04b} derives an interpretation for the $n$-th derivative of this one variable polynomial evaluated at $(2,2)$ in terms of edge-disjoint closed trails in the oriented medial graph.
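One consequence of Las Vergnas' theorem that is easy to check by machine: at $x=y=1$ every admissible subset contributes $1$, so $\frac{\partial^{p+q}}{\partial x^p \partial y^q}T(G;x,y)\big|_{(1,1)} = p!\,q!\cdot\#\{A\subseteq E : r(G)-r(A)=p,\ |A|-r(A)=q\}$. The sketch below (our own, exponential-time) verifies this for the triangle, whose Tutte polynomial is $x^2+x+y$, by computing the coefficients $t_{ij}$ from the rank-nullity expansion and differentiating formally:

```python
from itertools import combinations
from math import comb, factorial

def rank(n, edges):
    # r(A) = n - (number of components of (V, A)), via union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return n - comps

def tutte_coeffs(n, edge_list):
    # coefficients t_{ij}, by expanding (x-1)^{corank} (y-1)^{nullity} per subset
    rE = rank(n, edge_list)
    t = {}
    for k in range(len(edge_list) + 1):
        for A in combinations(edge_list, k):
            a = rE - rank(n, A)   # corank
            b = k - rank(n, A)    # nullity
            for i in range(a + 1):
                for j in range(b + 1):
                    t[(i, j)] = t.get((i, j), 0) + \
                        comb(a, i) * (-1) ** (a - i) * comb(b, j) * (-1) ** (b - j)
    return t

def deriv_at_one(t, p, q):
    # d^{p+q} T / dx^p dy^q evaluated at (1, 1)
    return sum(c * (factorial(i) // factorial(i - p)) *
               (factorial(j) // factorial(j - q))
               for (i, j), c in t.items() if i >= p and j >= q)

tri = [(0, 1), (1, 2), (0, 2)]
t = tutte_coeffs(3, tri)
print({k: v for k, v in t.items() if v})  # {(2, 0): 1, (1, 0): 1, (0, 1): 1}
# three subsets (the singletons) have corank 1 and nullity 0, so the
# theorem predicts 1! * 0! * 3 = 3 for (p, q) = (1, 0):
print(deriv_at_one(t, 1, 0))  # 3
```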
\begin{definition} For an oriented graph $\vec{G}$, let $P_n(\vec{G})$ be the set of ordered $n$-tuples ${\bar p}:=(p_1 , \ldots ,p_n)$, where the $p_i$'s are consistently oriented edge-disjoint closed trails in $\vec{G}$. \end{definition} \begin{theorem} If $G$ is a connected planar graph with oriented medial graph $\vec{G}_m$, then, for all non-negative integers $n$, \[ \left. \frac{d^{n}}{dx^{n}} T(G;x,x)\right|_{x=2} = \sum_{k = 0}^n {(- 1)^{n - k} \frac{{n!}}{{k!}}\sum_{\bar p \in P_k \left( {\vec G_m } \right)} {2^{m\left( {\bar p} \right)} } }, \] where $m\left( {\bar p} \right)$ is the number of vertices of $\vec{G}_m$ not belonging to any of the trails in $\bar p$. \end{theorem} \subsection{Convolution and the Tutte Polynomial} \label{subsec:convolution} Since the Tutte polynomial can also be formulated as a generating function, the tools of generating functions, such as M\"obius inversion and convolution, are available to analyze it. A comprehensive treatment of convolution and M\"obius inversion can be found in Stanley~\cite{Sta96}. Convolution identities are valuable because they write a graph polynomial in terms of the polynomials of its substructures, thus facilitating induction techniques. We have the following result from Kook, Reiner, and Stanton~\cite{KRS99} using this approach. \begin{theorem} The Tutte polynomial can be expressed as \[ T(G;x, y) = \sum{T(G/A;x,0) T\left({\left. G \right|_A;0,y}\right)}, \] where the sum is over all subsets $A$ of the edge set of $G$, and where $\left. G \right|_A $ is the restriction of $G$ to the edges of $A$, i.e.\ $\left. G \right|_A=G\setminus (E\setminus A)$. \end{theorem} This result is particularly interesting in that it essentially writes the Tutte polynomial of a graph in terms of the chromatic and flow polynomials of its minors.
It may be proved in several ways, for example by induction using the deletion/contraction relation, or from the spanning trees expansion of the Tutte polynomial. However, we present the first proof from~\cite{KRS99} to illustrate the technique, which depends on results of Crapo~\cite{Cra69}. \begin{proof}[sketch] We begin with a convolution product of two functions on graphs into the ring $\mathbb{Z}[x,y]$ given by \index{convolution} $f * g$ $=$ $\sum_{A \subseteq E\left( G \right)} {f\left( {\left. G \right|_A } \right)} g\left( {G/A} \right)$. The identity for convolution is $\delta (G)$, which is 1 if $G$ is edgeless and 0 otherwise. From Crapo~\cite{Cra69}, we have that \[ T(G;x + 1,y + 1) = \left( {\zeta (1,y) * \zeta (x,1)} \right)\left( G \right), \] where $\zeta \left( {x,y} \right)\left( G \right) = x^{r\left( G \right)} y^{r\left( {G^ * } \right)} $. Kook, Reiner, and Stanton~\cite{KRS99} then show that $\zeta \left( {x,y} \right)^{ - 1} = \zeta \left( { - x, - y} \right)$. From this it follows that $T(G;x + 1,0) = ( \zeta (1, - 1) * \zeta (x,1))(G)$ and $T(G;0,y + 1) = (\zeta (1,y) * \zeta ( - 1,1))( G )$. Thus, $\sum T(\left. G \right|_A ;0,y + 1)\, T(G/A;x + 1,0) = (\zeta (1,y) * \zeta ( - 1,1)) * (\zeta (1, - 1) * \zeta(x,1))( G )$. By associativity, the last expression is the same as $(\zeta (1,y) * (\zeta ( - 1,1) * \zeta (1, - 1)) * \zeta (x,1))( G ) = (\zeta ( 1,y) * \zeta (x,1))( G )= T(G;x + 1,y + 1)$. \qed\end{proof} A formula with a similar flavor, known as Tutte's identity, exists for the chromatic polynomial. \begin{theorem} The chromatic polynomial can be expressed as \[ \chi(G;x + y) = \sum {\chi(\left. G \right|_A;x)\, \chi(\left. G \right|_{A^c};y)}, \] where the sum is over all subsets $A$ of the set of vertices of $G$, and where $\left. G \right|_A $ is the restriction of $G$ to the vertices of $A$.
\end{theorem} \begin{proof} Consider an $(m + n)$-coloring of $G$, and let $A$ be the set of vertices colored by the first $m$ colors. Then an $(m + n)$-coloring of $G$ decomposes into an $m$-coloring of $\left. G \right|_A $ using the first $m$ colors and an $n$-coloring of $\left. G \right|_{A^c } $ using the remaining colors. Thus, for any two non-negative integers $m$ and $n$, it follows that $\chi(G;m + n) = \sum {\chi(\left. G \right|_A ;m)\,\chi(\left. G \right|_{A^c } ;n )} $. Since both sides are polynomials that agree at all pairs of non-negative integers, the identity holds for indeterminates $x$ and $y$. \qed \end{proof}
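The block factorization $T(G;x,y)=T(H_1;x,y)\cdots T(H_p;x,y)$ stated above is easy to check on small graphs. Below is a minimal Python sketch (our own illustration, not part of the text; the function names are ours) of the standard deletion--contraction recursion: $T(G)=y\,T(G\setminus e)$ for a loop $e$, $T(G)=x\,T(G/e)$ for a bridge, and $T(G)=T(G\setminus e)+T(G/e)$ otherwise.

```python
import sympy as sp

x, y = sp.symbols('x y')

def connected(u, v, edges):
    """Is v reachable from u using the given edge list?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for z in adj.get(w, []):
            if z not in seen:
                seen.add(z)
                stack.append(z)
    return v in seen

def contract(e, edges):
    """Contract edge e = (u, v): merge v into u; parallel edges become loops."""
    u, v = e
    return tuple((u if a == v else a, u if b == v else b) for a, b in edges)

def tutte(edges):
    """Tutte polynomial of a multigraph given as a tuple of (u, v) pairs."""
    edges = tuple(edges)
    if not edges:
        return sp.Integer(1)
    e, rest = edges[0], edges[1:]
    u, v = e
    if u == v:                          # loop: T = y * T(G - e)
        return sp.expand(y * tutte(rest))
    if not connected(u, v, rest):       # bridge: T = x * T(G / e)
        return sp.expand(x * tutte(contract(e, rest)))
    return sp.expand(tutte(rest) + tutte(contract(e, rest)))
```

For a triangle this gives $x^2+x+y$, and for two triangles glued at a single cut vertex (whose blocks are the two triangles) it gives $(x^2+x+y)^2$, as the corollary predicts.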
{"config": "arxiv", "file": "0803.3079/7.properties.tex"}
TITLE: When does this integral converge? $\;\;\int_0^\infty {\frac{e^{-ax}}{x^2+1}}\,\mathrm{d}x$ QUESTION [2 upvotes]: $$\int_0^\infty \frac{e^{-ax}}{x^2+1}\,\mathrm{d}x$$ - $a$ real, for which $a$ does this converge? (The final answer is $a\ge 0$.) I've tried doing this by parts and it seems to work at first, but then everything cancels and I get $0=0$ at the second step. I can't come up with any manipulations, any hints? Thanks. REPLY [0 votes]: If $a<0$, then $\frac{e^{-ax}}{x^2+1} \to \infty$ as $x \to \infty$, so the integrand does not even tend to $0$ and the given integral diverges. When $a \geq 0$, $$\frac{e^{-ax}}{x^2+1} \leq \frac{1}{x^2+1}, \qquad 0 < x < \infty.$$ The integral of the right-hand side of the inequality converges (to $\pi/2$), and hence by comparison the given integral converges when $a \geq 0$.
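A quick numerical sanity check of the comparison argument (our addition, plain Python; `integral` is our own helper name): substituting $x=\tan\theta$ turns the integral into $\int_0^{\pi/2} e^{-a\tan\theta}\,d\theta$, whose integrand is bounded for $a\ge 0$, so a simple midpoint rule converges.

```python
import math

def integral(a, n=100_000):
    """Midpoint rule for I(a) = ∫_0^∞ e^{-ax}/(x²+1) dx after the
    substitution x = tan(θ), which maps [0, ∞) onto [0, π/2) and leaves
    the bounded integrand e^{-a tan θ} (valid only for a >= 0)."""
    h = (math.pi / 2) / n
    return h * sum(math.exp(-a * math.tan((k + 0.5) * h)) for k in range(n))
```

For $a=0$ this returns $\pi/2$, the value of $\int_0^\infty dx/(x^2+1)$, and the values decrease as $a$ grows; for $a<0$ the substituted integrand $e^{-a\tan\theta}$ blows up near $\theta=\pi/2$, matching the divergence argument in the answer.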
{"set_name": "stack_exchange", "score": 2, "question_id": 1078821}
{\bf Problem.} Factor $t^2-121$. {\bf Level.} Level 2 {\bf Type.} Algebra {\bf Solution.} We have $t^2 -121 = t^2 - 11^2 = \boxed{(t-11)(t+11)}$.
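As a quick symbolic check (our addition, assuming SymPy is available), the difference-of-squares factorization can be verified directly:

```python
import sympy as sp

t = sp.symbols('t')

# Difference of squares: t^2 - 121 = t^2 - 11^2 = (t - 11)(t + 11)
factored = sp.factor(t**2 - 121)
```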
{"set_name": "MATH"}
TITLE: Prove by induction on two variables that there are $n^m$ functions from $\{1, \ldots m\}$ to $\{1, \ldots, n\}$ QUESTION [1 upvotes]: I am trying to prove the following statement: $[m,n]$ is a set of functions defined as $f \in [m,n] \leftrightarrow f: \{1,...,m\} \rightarrow \{1,...,n\}$. The size of $[m,n]$ is $n^m$ for $m,n \in \mathbb{N}_{\gt0}$. I have tried to prove it but I am not entirely sure about its correctness: 1) For the basic step $m=n=1$. The size of $\{1\} \rightarrow \{1\}$ is $1$. And it equals $1^1 = 1$. 2) Then I assume that for some $m,n$ the size of $[m,n]$ is $n^m$. Now comes the first problem: should I be proving it for $[m, n+1],[m+1,n],[m+1,n+1]$, or is some of that redundant? When trying to prove $[m, n+1]$ I rewrite it as $[m,n+1] = (n+1) * (n+1) * (n+1) * ... * (n+1) = (n+1)^m$, but I don't use the induction assumption, so is that correct? Again $[m+1,n] = n*n*...*n = n^{m+1}$. Finally $[m+1,n+1] = (n+1)*(n+1)*...*(n+1) = (n+1)^{m+1}$. During the process I didn't really use my induction assumption, so I am worried that this wouldn't qualify as a proof by induction. So what would be the correct way to prove this? REPLY [0 votes]: Usually you let one of the variables stay a variable and do induction on the other variable. I will let your intuition tell you which should be the induction variable.
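To complement the induction, the count can be verified by brute force for small $m,n$ (our own sketch, not from the thread; `count_functions` is our name): a function $f:\{1,\ldots,m\}\rightarrow\{1,\ldots,n\}$ is determined by its tuple of values $(f(1),\ldots,f(m))$, so `itertools.product` enumerates $[m,n]$ exactly.

```python
from itertools import product

def count_functions(m, n):
    """Count all functions {1,...,m} -> {1,...,n} by listing each one
    as its value tuple (f(1), ..., f(m))."""
    return sum(1 for _ in product(range(1, n + 1), repeat=m))
```

Each of the $m$ positions independently takes one of $n$ values, which is exactly why the answer is $n^m$ and why the induction step on $m$ multiplies the count by $n$.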
{"set_name": "stack_exchange", "score": 1, "question_id": 635893}
TITLE: Grothendieck lemma for weakly compact sets QUESTION [4 upvotes]: Does anyone have a reference for the following result? I need to know the proof but I can't find it anywhere. Lemma (Grothendieck, for weakly compact sets). Let $X$ be a Banach space, and $K \subset X$. If $K$ is weakly closed and bounded, and for all $\varepsilon >0$ there exists a weakly compact set $K_\varepsilon$ such that $K \subset K_\varepsilon + \varepsilon B_X$, then $K$ is weakly compact. REPLY [3 votes]: This lemma is in Grothendieck's book Espaces Vectoriels Topologiques, page 401. The proof uses the bidual $X''$ whose unit ball is $\sigma(X'',X')$ compact by Alaoglu's theorem. Since $K$ is bounded its closure $\tilde K=\overline{K}^{\sigma(X'',X')}$ is $\sigma(X'',X')$-compact and it is enough to show $\tilde K\subseteq X$ (since the relative topology of $\sigma(X'',X')$ induced on $X$ is $\sigma(X,X')$ this then implies that $K$ is weakly relatively compact and hence weakly compact since $K$ was assumed to be weakly closed). For all $\varepsilon>0$ take a weakly compact $K_\varepsilon$ with $K\subseteq K_\varepsilon +\varepsilon B_X$. As the sum of a compact and a closed set is closed this implies $$\tilde K \subseteq K_\varepsilon +\varepsilon \overline{B_X}^{\sigma(X'',X')} \subseteq X+ \varepsilon B_{X''}.$$ Taking the intersection over all $\varepsilon>0$ yields that $\tilde K$ is contained in the norm-closure of $X$ in the bidual which is $X$ because $X$ is complete.
{"set_name": "stack_exchange", "score": 4, "question_id": 2206991}
TITLE: How do I solve a PDE with a Dirac Delta function? QUESTION [9 upvotes]: I have a PDE in the form of $$ \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} + u = \delta(x-1), $$ with initial condition $u(x,0)=100$. I'm trying to solve it numerically, but I have no idea which method I should use. Most of the examples that I refer to have no delta function in the PDE. Can anyone guide me on what I should do? Thank you very much. REPLY [2 votes]: Note that this PDE belongs to the class of first-order PDEs treated at http://eqworld.ipmnet.ru/en/solutions/fpde/fpde1302.pdf, so an exact solution can be found easily and you do not necessarily have to solve it numerically. The general solution is $u(x,t)=e^{-x}C(x-t)+e^{-x}\int\delta(x-1)e^x~dx=e^{-x}C(x-t)+e^{1-x}H(x-1)$. Imposing $u(x,0)=100$: $e^{-x}C(x)+e^{1-x}H(x-1)=100$, so $C(x)=100e^x-eH(x-1)$. $\therefore u(x,t)=e^{-x}(100e^{x-t}-eH(x-t-1))+e^{1-x}H(x-1)=100e^{-t}-e^{1-x}H(x-t-1)+e^{1-x}H(x-1)$
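The closed form can be checked symbolically (a sketch we add, assuming SymPy; note that $e^{1-x}\,\delta(x-1)=\delta(x-1)$ by the sifting property, so the residual below is exactly the source term):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
H = sp.Heaviside

# Solution derived above
u = 100*sp.exp(-t) - sp.exp(1 - x)*H(x - t - 1) + sp.exp(1 - x)*H(x - 1)

# Substitute into u_t + u_x + u: the delta terms along x = t + 1 cancel,
# and everything else cancels except the source delta at x = 1
residual = sp.diff(u, t) + sp.diff(u, x) + u
```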
{"set_name": "stack_exchange", "score": 9, "question_id": 251799}
\begin{document} \newcommand{\thmref}[1]{Theorem~\ref{#1}} \newcommand{\secref}[1]{Section~\ref{#1}} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \newcommand{\propref}[1]{Proposition~\ref{#1}} \newcommand{\corref}[1]{Corollary~\ref{#1}} \newcommand{\remref}[1]{Remark~\ref{#1}} \newcommand{\eqnref}[1]{(\ref{#1})} \newcommand{\exref}[1]{Example~\ref{#1}} \newcommand{\nc}{\newcommand} \nc{\Z}{{\mathbb Z}} \nc{\C}{{\mathbb C}} \nc{\N}{{\mathbb N}} \nc{\F}{{\mf F}} \nc{\Q}{\ol{Q}} \nc{\la}{\lambda} \nc{\ep}{\epsilon} \nc{\h}{\mathfrak h} \nc{\n}{\mf n} \nc{\G}{{\mathfrak g}} \nc{\DG}{\widetilde{\mathfrak g}} \nc{\SG}{\overline{\mathfrak g}} \nc{\D}{\mc D} \nc{\Li}{{\mc L}} \nc{\La}{\Lambda} \nc{\is}{{\mathbf i}} \nc{\V}{\mf V} \nc{\bi}{\bibitem} \nc{\NS}{\mf N} \nc{\dt}{\mathord{\hbox{${\frac{d}{d t}}$}}} \nc{\E}{\mc E} \nc{\ba}{\tilde{\pa}} \nc{\half}{\frac{1}{2}} \nc{\mc}{\mathcal} \nc{\mf}{\mathfrak} \nc{\hf}{\frac{1}{2}} \nc{\hgl}{\widehat{\mathfrak{gl}}} \nc{\gl}{{\mathfrak{gl}}} \nc{\hz}{\hf+\Z} \nc{\dinfty}{{\infty\vert\infty}} \nc{\SLa}{\overline{\Lambda}} \nc{\SF}{\overline{\mathfrak F}} \nc{\SP}{\overline{\mathcal P}} \nc{\U}{\mathfrak u} \nc{\SU}{\overline{\mathfrak u}} \nc{\ov}{\overline} \nc{\wt}{\widetilde} \nc{\osp}{\mf{osp}} \nc{\spo}{\mf{spo}} \nc{\hosp}{\widehat{\mf{osp}}} \nc{\hspo}{\widehat{\mf{spo}}} \nc{\I}{\mathbb{I}} \nc{\X}{\mathbb{X}} \nc{\Y}{\mathbb{Y}} \nc{\hh}{\widehat{\mf{h}}} \nc{\cc}{{\mathfrak c}} \nc{\dd}{{\mathfrak d}} \nc{\aaa}{{\mf A}} \nc{\xx}{{\mf x}} \nc{\wty}{\widetilde{\mathbb Y}} \nc{\ovy}{\overline{\mathbb Y}} \nc{\vep}{\bar{\epsilon}} \advance\headheight by 2pt \title[Super duality and ortho-symplectic Lie superalgebras] {Super duality and irreducible characters of ortho-symplectic Lie superalgebras} \author[Cheng]{Shun-Jen Cheng} \address{Institute of Mathematics, Academia Sinica, Taipei, Taiwan 10617} \email{chengsj@math.sinica.edu.tw} \author[Lam]{Ngau Lam} \address{Department of Mathematics, National Cheng-Kung University, 
Tainan, Taiwan 70101} \email{nlam@mail.ncku.edu.tw} \author[Wang]{Weiqiang Wang} \address{Department of Mathematics, University of Virginia, Charlottesville, VA 22904} \email{ww9c@virginia.edu} \begin{abstract} We formulate and establish a super duality which connects parabolic categories $O$ for the ortho-symplectic Lie superalgebras and classical Lie algebras of $BCD$ types. This provides a complete and conceptual solution of the irreducible character problem for the ortho-symplectic Lie superalgebras in a parabolic category $O$, which includes all finite dimensional irreducible modules, in terms of classical Kazhdan-Lusztig polynomials. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} \subsection{} \label{sec:LA} Finding the irreducible characters is a fundamental problem in representation theory. As a prototype of this problem, consider a complex semisimple Lie algebra $\G$. The problem is solved in two steps, following the historical development: \begin{enumerate} \item The category of finite dimensional $\G$-modules is semisimple and the corresponding irreducible $\G$-characters are given by Weyl's character formula. \item A general solution to the irreducible character problem in the BGG category $O$ was given much later by the Kazhdan-Lusztig (KL) polynomials (theorems of Beilinson-Bernstein and Brylinski-Kashiwara) \cite{KL, BB, BK}. \end{enumerate} \subsection{} \label{sec:LSA} The study of Lie superalgebras and their representations was largely motivated by the notion of supersymmetry in physics. A Killing-Cartan type classification of finite dimensional complex simple Lie superalgebras was achieved by Kac \cite{K1} in 1977. 
The most important subclass of the simple Lie superalgebras (called basic classical), including two infinite series of types $A$ and $\osp$, bears a close resemblance to the usual simple Lie algebras, so we can make sense of root systems, Dynkin diagrams, triangular decomposition, Cartan and Borel subalgebras, (parabolic) category $O$, and so on. However, the representation theory of Lie superalgebras $\SG$ has encountered several substantial difficulties, as made clear by numerous works over the last three decades (cf. \cite{BL, K2, KW, LSS, Pen, dHKT, dJ} as a sample of earlier literature, and some more recent references which can be found in the next paragraph): \begin{enumerate} \item There exist non-conjugate Borel subalgebras for a given Lie superalgebra $\SG$. \item The category $\mc F$ of finite dimensional $\SG$-modules is in general not semisimple. A uniform Weyl type finite dimensional character formula does not exist. \item One has a notion of a Weyl group associated to the even subalgebra of $\SG$; however, the linkage in the category $O$ (or in $\mc F$) of $\SG$-modules is not solely controlled by the Weyl group. \item A block in the category $O$ (or in $\mc F$) may contain infinitely many simple objects. \end{enumerate} The conventional wisdom of solving the irreducible character problem for Lie superalgebras has been to follow closely the two steps for Lie algebras in \ref{sec:LA}. As the problem is already very difficult in the category $\mc F$, there has been little attempt in understanding the category $O$. For type $A$ Lie superalgebra $\gl(m|n)$, there have been several different general approaches over the years. Serganova \cite{Se} in 1996 developed a mixed geometric and algebraic approach to solving the irreducible character problem in the category $\mc F$. Brundan in 2003 \cite{B} developed a new elegant purely algebraic solution to the same problem in $\mc F$ using Lusztig-Kashiwara canonical basis. 
Developing the idea of super duality \cite{CW2} (which generalizes \cite{CWZ}), which connects the categories $O$ for Lie superalgebras and Lie algebras of type $A$ for the first time, two of the authors \cite{CL} very recently established the super duality conjecture therein. In particular, they provided a complete solution to the irreducible character problem for a fairly general parabolic category $O$ (including $\mc F$ as a very special case) in terms of KL polynomials of type $A$. Independently, Brundan and Stroppel \cite{BS} proved the super duality conjecture in \cite{CWZ}, offering yet another solution of the irreducible character problem in $\mc F$. \subsection{} The goal of this paper is to formulate and establish a super duality which connects parabolic category $O$ for Lie superalgebras of type $\osp$ with parabolic category $O$ for classical Lie algebras of types $BCD$, vastly generalizing the type $A$ case of \cite{CWZ, CW2, CL}. In particular, it provides a complete solution of the irreducible character problem for $\osp$ in a suitable parabolic category $O$, which includes all finite dimensional irreducibles, in terms of parabolic KL polynomials of $BCD$ types (cf.~Deodhar \cite{Deo}). \subsection{} Before launching into a detailed explanation of our main ideas below, it is helpful to keep in mind the analogy that the ring of symmetric functions (or its super counterpart) in infinitely many variables carries more symmetries than in finitely many variables, and a truncation process can easily recover finitely many variables. The super duality can morally be thought of as a categorification of the standard involution $\omega$ on the ring of symmetric functions, and it only becomes manifest when the underlying Lie (super)algebras pass to infinite rank. Then truncation functors can be used to recover the finite rank cases in which we are originally interested.
Even though the finite dimensional Lie superalgebras of type $\osp$ depend on two integers $m$ and $n$, our view is to fix one and let the other, say $n$, vary, and so let us denote an $\osp$ Lie superalgebra by $\SG_n$. By choosing appropriately a Borel and a Levi subalgebra of $\SG_n$, we formulate a suitable parabolic category $\ov{\mc{O}}_n$ of $\SG_n$-modules. It turns out that there are four natural choices one can make here, which correspond to the four Dynkin diagrams $\mf{b,b^\bullet,c,d}$ in \ref{dynkin} (the type $\mf a$ case has been treated in \cite{CL}). There is a natural sequence of inclusions of Lie superalgebras: $$ \SG_1\subset \SG_2 \subset \ldots \subset \SG_n \subset \ldots. $$ Let $\SG :=\cup_{n=1}^\infty \SG_n$. A suitable category $\ov{\mc{O}}$ of $\SG$-modules can be identified with the inverse limit $\lim\limits_{\longleftarrow} \ov{\mc{O}}_n$. On the other hand, we introduce truncation functors $\mf{tr}^{k}_n:\ov{\mc{O}}_k\rightarrow\ov{\mc{O}}_n$ for $k>n$, as analogues of the truncation functors studied in the algebraic group setting (cf.~Donkin \cite{Don}). These truncation functors send parabolic Verma modules to parabolic Verma modules or to zero, and irreducibles to irreducibles or to zero. In particular, this allows us to derive the irreducible characters in $\ov{\mc{O}}_n$ once we know those in $\ov{\mc O}$. Corresponding to each of the above choices of $\SG_n$ and $\ov{\mc O}_n$, we have the Lie algebra counterparts $\G_n$ and parabolic categories ${\mc{O}}_n$ for positive integers $n$. Moreover, we have natural inclusions of Lie algebras $\G_n \subset \G_{n+1}$, for all $n$, which allow us to define the Lie algebra $\G :=\cup_n \G_n$ and the parabolic category ${\mc{O}}$ of $\G$-modules. Similarly, the category ${\mc{O}}$ can be identified with the inverse limit $\lim\limits_{\longleftarrow} {\mc{O}}_n$. In the main body of the paper, we actually replace $\G, \SG$ et cetera by their (trivial) central extensions.
The reason is that the truncation functors depend implicitly on a stabilization scalar, which is interpreted conceptually as a level of representations with respect to the central extensions. To establish a connection between $\ov{\mc{O}}$ and ${\mc{O}}$, we introduce another infinite rank Lie superalgebra $\wt \G$ and its parabolic category $\wt{\mc{O}}$. The Lie superalgebra $\wt \G$ contains $\G$ and $\SG$ as natural subalgebras (though not as Levi subalgebras), and this enables us to introduce two natural functors $T: \wt{\mc{O}} \rightarrow \mc O$ and $\ov{T}: \wt{\mc{O}} \rightarrow \ov{\mc O}$. Using the technique of odd reflections among others, we establish in \secref{sec:character} a key property that $T$ and $\ov T$ respect the parabolic Verma and irreducible modules, respectively. This result is already sufficient to provide a complete solution of irreducible $\osp$-characters in the category $\ov{\mc{O}}_n$ in the first half of the paper (by the end of Section~\ref{sec:T}). We remark that the idea of introducing an auxiliary Lie superalgebra $\wt \G$ and category $\wt{\mc O}$ has been used in the type $A$ superalgebra setting \cite{CL}. Recall that for the usual category $O$ of Lie algebras, the KL polynomials were interpreted by Vogan \cite{V} in terms of Kostant $\mf u$-homology groups. The $\mf u$-homology groups make perfect sense for Lie superalgebras, and we may take this interpretation as the definition for the otherwise undefined KL polynomials in category $O$ for Lie superalgebras (cf.~Section \ref{sec:LSA}~(3)), as Serganova \cite{Se} did in the category $\mc F$ of $\gl (m|n)$-modules. In Section~\ref{sec:homology}, we show that the functors $T$ and $\ov T$ match the corresponding $\mf u$-homology groups and hence the corresponding KL polynomials (compare \cite{CKW}). 
Actually the computation in \cite{CKW} of the $\mf u$-homology groups with coefficients in the Lie superalgebra oscillator modules via Howe duality was the first direct supporting evidence for the super duality for $\osp$ as formulated in this paper. Section~\ref{sec:category} of the paper is devoted to proving that both $T$ and $\ov T$ are indeed category equivalences. As a consequence, we have established that the categories $\mc O$ and $\ov{\mc O}$ are equivalent, which is called super duality. A technical difference here from \cite{CL} is that we need to deal with the fact that parabolic Verma modules in $\ov{\mc O}$ may not have finite composition series. An immediate corollary of the super duality is that any known BGG resolution in the category $\mc O$ gives rise to a BGG type resolution in the category $\ov{\mc O}$, and vice versa. \subsection{} The finite dimensional irreducible $\osp$-modules are of highest weight, and they are classified in terms of the Dynkin labels by Kac \cite{K2}. We note that the finite dimensional irreducible modules of non-integral highest weights are typical and so their characters are known \cite[Theorem 1]{K2}. It turns out that a more natural labeling of the remaining finite dimensional irreducible $\osp$-modules (of integral highest weights) is given in terms of Young diagrams just as for classical Lie algebras (see e.g. \cite{SW} for such a formulation and a new proof using odd reflections). As Borel subalgebras are not conjugate to each other, it becomes a nontrivial problem to find the extremal weights, i.e.~highest weights with respect to different Borel, of a given finite dimensional irreducible $\osp$-module. We provide an elegant and simple answer in terms of a combinatorial notion which we call {\em block Frobenius coordinates} associated to Young diagrams. We observe that our solution of the irreducible character problem in $\ov{\mc O}_n$ includes solutions to all finite dimensional irreducible $\osp$-characters. 
The category $\mc F$ for a general $\osp$ (with the exception of $\osp(2|2n)$) is not a highest weight category and does not admit an abstract KL theory in the sense of Cline, Parshall and Scott \cite{CPS}, as indicated in the case of $\osp(3|2)$ \cite[Section~2]{Ger}. For a completely independent and different approach to the finite dimensional irreducible $\osp$-characters in the category $\mc F$, see Gruson and Serganova \cite{GS}. The finite dimensional irreducible characters of $\osp(2|2n)$ were obtained in \cite{dJ}. The finite dimensional irreducible characters of $\osp(k|2)$ were also computed in \cite{SZ}. \subsection{} In hindsight, here is how our super duality approach overcomes the difficulties as listed in \ref{sec:LSA}. \begin{enumerate} \item The existence of non-conjugate Borel subalgebras for a Lie superalgebra is essential for establishing the properties of the functors $T$ and $\ov T$. Choices of suitable Borel subalgebras are crucial for a formulation of the compatible sequence of categories $\ov{\mc O}_n$ for $n>0$. \item The category $\mc F$ of finite dimensional $\SG_n$-modules does not play any special role in our approach. Even the ``natural" $\osp(M|2n)$-modules $\C^{M|2n}$ do not correspond well with each other under truncation functors, as they are natural with respect to the ``wrong" Borel. \item In the $n \to \infty$ limit, the linkage in the category $\ov{\mc O}$ of $\SG$-modules is completely controlled by the Weyl group of the corresponding Lie algebra $\G$ (which contains the even subalgebra of $\SG$ as a subalgebra). \item In the $n \to \infty$ limit, it is no surprise for a block to contain infinitely many simple objects. \end{enumerate} In the extreme cases described in \secref{sec:m=0}, we indeed obtain an equivalence of module categories between two classical (non-super!) Lie algebras of types $C$ and $D$ of infinite rank at opposite levels. 
If one is willing to regard $\osp(1|\infty)$ as classical (recall that the finite dimensional $\osp(1|2n)$-module category is semisimple), there is another equivalence of categories which relates $\osp(1|\infty)$ to the infinite rank Lie algebra of type $B$. In this sense, our super duality has a flavor of the Langlands duality. The super duality approach here can be further adapted to the setting of Kac-Moody superalgebras (including affine superalgebras) and this will shed new light on the irreducible character problem for these superalgebras. The details will appear elsewhere. It is well known that the proof of KL conjectures involves deep geometric machinery and results on $D$-modules of flag manifolds. The formulation of super duality suggests potential direct connections on the (super) geometric level behind the categories $\mc O$ and $\ov{\mc O}$, which will be very important to develop. \subsection{} The paper is organized as follows. In \secref{sec:superalgebras} the Lie superalgebras $\G$, $\SG$ and $\DG$ are defined, with their respective module categories ${\mc{O}}$, $\ov{\mc{O}}$ and $\wt{\mc{O}}$ introduced in \secref{sec:O}. In \secref{sec:character}, we provide a complete solution of the irreducible $\osp$ character problem in category $\ov{\mc O}_n$ for all $n$, including all finite dimensional irreducible $\osp$-characters, in terms of the KL polynomials of classical type. We establish in \secref{sec:category} equivalence of the categories $\mc O$ and $\ov{\mc O}$. \secref{finite:dim:repn} offers a diagrammatic description of the extremal weights of the finite dimensional irreducible $\osp$-modules. Throughout the paper the symbols $\Z$, $\N$, and $\Z_+$ stand for the sets of all, positive and non-negative integers, respectively. All vector spaces, algebras, tensor products, et cetera, are over the field of complex numbers $\C$. 
{\bf Acknowledgment.} The first author is partially supported by an NSC-grant and an Academia Sinica Investigator grant, and he thanks NCTS/TPE and the Department of Mathematics of University of Virginia for hospitality and support. The second author is partially supported by an NSC-grant and thanks NCTS/SOUTH. The third author is partially supported by NSF and NSA grants, and he thanks the Institute of Mathematics of Academia Sinica in Taiwan for hospitality and support. The results of the paper were announced by the first author in the AMS meeting at Raleigh in April 2009, and they were presented by the third author in conferences at Ottawa, Canada and Durham, UK in July 2009. \section{Lie superalgebras of infinite rank}\label{sec:superalgebras} In this section, we introduce infinite rank Lie (super)algebras $\G^\xx$, $\SG^\xx$ and $\wt{\G}^\xx$ associated to the $3$ Dynkin diagrams in \eqref{Dynkin:combined} below, where $\xx$ denotes one of the four types $\mf{b,b^\bullet,c,d}$. \subsection{Dynkin diagrams of $\G^\xx$, $\ov{\G}^\xx$ and $\wt{\G}^\xx$} \label{dynkin} Let $m\in\Z_+$. Consider the free abelian group with basis $\{\epsilon_{-m},\ldots,\epsilon_{-1}\}\cup\{\epsilon_{r}\vert r\in\hf\N\}$, with a symmetric bilinear form $(\cdot|\cdot)$ given by \begin{align*} (\epsilon_r|\epsilon_s)=(-1)^{2r}\delta_{rs}, \qquad r,s \in \{-m,\ldots,-1\} \cup \hf \N. 
\end{align*} We set \begin{align}\label{alpha:beta} &\alpha_{\times}:=\epsilon_{-1}-\epsilon_{1/2}, \quad\alpha_{j} :=\epsilon_{j}-\epsilon_{j+1},\quad -m\le j\le -2,\\ &\beta_{\times}:=\epsilon_{-1}-\epsilon_{1},\quad \alpha_{r}:=\epsilon_{r}-\epsilon_{r+1/2},\quad \beta_{r} :=\epsilon_{r}-\epsilon_{r+1},\quad r\in\hf\N.\nonumber \end{align} For $\xx =\mf{b,b^\bullet,c,d}$, we denote by $\mf{k}^\xx$ the contragredient Lie (super)algebras (\cite[Section 2.5]{K1}) whose Dynkin diagrams \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{k}^\xx$} together with certain distinguished sets of simple roots $\Pi(\mf{k^x})$ are listed as follows: \vspace{.3cm} \iffalse \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(6,2){\line(1,0){1.4}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(-.5,2){\makebox(0,0)[c]{$\mf{a}$:}} \put(5.5,1){\makebox(0,0)[c]{\tiny$\alpha_{-m}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\alpha_{-m+1}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\alpha_{-3}$}} \put(19.3,1){\makebox(0,0)[c]{\tiny$\alpha_{-2}$}} \end{picture} \end{center} \fi \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,2) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} 
\put(6,1.8){$\Longleftarrow$} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(-.5,2){\makebox(0,0)[c]{$\mf{b}$:}} \put(5.5,1){\makebox(0,0)[c]{\tiny$-\epsilon_{-m}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\alpha_{-m}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\alpha_{-3}$}} \put(19.3,1){\makebox(0,0)[c]{\tiny$\alpha_{-2}$}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,2) \put(5.6,2){\circle*{0.9}} \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.35,2){\line(1,0){1.5}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(6.8,2){\makebox(0,0)[c]{$\Longleftarrow$}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(-.2,2){\makebox(0,0)[c]{$\mf{b^\bullet}$:}} \put(5.5,1){\makebox(0,0)[c]{\tiny$-\epsilon_{-m}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\alpha_{-m}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\alpha_{-3}$}} \put(19.3,1){\makebox(0,0)[c]{\tiny$\alpha_{-2}$}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,2) \put(5.7,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(6.8,2){\makebox(0,0)[c]{$\Longrightarrow$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(-.5,2){\makebox(0,0)[c]{$\mf{c}$:}} \put(5.5,1){\makebox(0,0)[c]{\tiny$-2\epsilon_{-m}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\alpha_{-m}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\alpha_{-3}$}} 
\put(19.3,1){\makebox(0,0)[c]{\tiny$\alpha_{-2}$}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3.5) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(6,3.8){\makebox(0,0)[c]{$\bigcirc$}} \put(6,.3){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(7.6,2.2){\line(-1,1){1.3}} \put(7.6,1.8){\line(-1,-1){1.3}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(-.5,2){\makebox(0,0)[c]{$\mf{d}$:}} \put(3.3,0.3){\makebox(0,0)[c]{\tiny${-}\epsilon_{-m}{-}\epsilon_{-m+1}$}} \put(4.7,3.8){\makebox(0,0)[c]{\tiny$\alpha_{-m}$}} \put(8.2,1){\makebox(0,0)[c]{\tiny$\alpha_{-m+1}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\alpha_{-3}$}} \put(19.3,1){\makebox(0,0)[c]{\tiny$\alpha_{-2}$}} \end{picture} \end{center} According to \cite[Proposition 2.5.6]{K1} these Lie (super)algebras are $\mf{so}(2m+1)$, $\mf{osp}(1|2m)$, $\mf{sp}(2m)$ for $m\ge 1$ and $\mf{so}(2m)$ for $m \ge 2$, respectively. We will use the same notation \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{k}^\xx$} to denote the diagrams of all the degenerate cases for $m=0,1$ as well. (See Sections~\ref{sec:realize} and \ref{sec:m=0} below). We have used \makebox(15,5){\circle*{7}} to denote an odd non-isotropic simple root. So $\mf{osp}(1|2m)$ is actually a Lie superalgebra (instead of Lie algebra), but it is classical from the super duality viewpoint in this paper. 
For $n\in\N$ let \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{T}_n$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\ov{\mf{T}}_n$} and \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\wt{\mf{T}}_n$} denote the following Dynkin diagrams, where $\bigotimes$ denotes an odd isotropic simple root: \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(6,2){\line(1,0){1.4}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(0,1.2){{\ovalBox(1.6,1.2){$\mf{T}_n$}}} \put(5.5,1){\makebox(0,0)[c]{\tiny$\beta_{\times}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\beta_{1}$}} \put(10.3,1){\makebox(0,0)[c]{\tiny$\beta_{2}$}} \put(15,1){\makebox(0,0)[c]{\tiny$\beta_{n-2}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\beta_{n-1}$}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,2) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\makebox(0,0)[c]{$\bigotimes$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(6,2){\line(1,0){1.4}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(0,1.2){{\ovalBox(1.6,1.2){$\ov{\mf{T}}_n$}}} \put(5.5,1){\makebox(0,0)[c]{\tiny$\alpha_{\times}$}} \put(10.3,1){\makebox(0,0)[c]{\tiny$\beta_{3/2}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\beta_{1/2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny$\beta_{n-5/2}$}} \put(17.5,1){\makebox(0,0)[c]{\tiny$\beta_{n-3/2}$}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} 
\begin{picture}(24,2) \put(8,2){\makebox(0,0)[c]{$\bigotimes$}} \put(10.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(14.85,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(5.6,2){\makebox(0,0)[c]{$\bigotimes$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(6,2){\line(1,0){1.4}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(0,1.2){{\ovalBox(1.6,1.2){$\wt{\mf{T}}_n$}}} \put(5.5,1){\makebox(0,0)[c]{\tiny$\alpha_{\times}$}} \put(8,1){\makebox(0,0)[c]{\tiny$\alpha_{1/2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny$\alpha_{n-1}$}} \put(17.2,1){\makebox(0,0)[c]{\tiny$\alpha_{n-1/2}$}} \put(10.3,1){\makebox(0,0)[c]{\tiny$\alpha_{1}$}} \end{picture} \end{center} The Lie superalgebras associated with these Dynkin diagrams are $\gl(n+1)$, $\mf{gl}(1|n)$ and $\mf{gl}(n|n+1)$, respectively. In the limit $n\to\infty$, the associated Lie superalgebras are direct limits of these Lie superalgebras, and we will simply drop $\infty$ to write \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{T}$} = \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{T}_\infty$} and so on. 
Any of the {\em head} diagrams \makebox(23,0){$\oval(20,11)$}\makebox(-20,8){$\mf{k}^\xx$} may be connected with the {\em tail} diagrams \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{T}_n$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\ov{\mf{T}}_n$} and \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\wt{\mf{T}}_n$} to produce the following Dynkin diagrams ($n\in\N\cup\{\infty\}$): \begin{equation}\label{Dynkin:combined} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,1) \put(5.0,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\mf{k}^\xx$}}}} \put(5.8,0.5){\line(1,0){1.85}} \put(8.5,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\mf{T}_n$}}}} \put(15,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\mf{k}^\xx$}}}} \put(15.8,0.5){\line(1,0){1.85}} \put(18.5,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\ov{\mf{T}}_n$}}}} \put(25,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\mf{k}^\xx$}}}} \put(25.8,0.5){\line(1,0){1.85}} \put(28.5,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\wt{\mf{T}}_n$}}}} \end{picture} \end{equation} We will denote the sets of simple roots of the above diagrams accordingly by $\Pi_n^{\xx}$, $\ov{\Pi}_n^\xx$ and $\wt{\Pi}_n^\xx$. For $n=\infty$, we also denote the sets of positive roots by $\Phi_+^\xx$, $\ov{\Phi}^\xx_+$ and $\wt{\Phi}^\xx_+$, and the sets of roots by $\Phi^\xx$, $\ov{\Phi}^\xx$ and $\wt{\Phi}^\xx$, respectively. \subsection{Realization} \label{sec:realize} Let us denote the $3$ Dynkin diagrams of \eqnref{Dynkin:combined} at $n=\infty$ by \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\G^\xx$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\SG^\xx$} and \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\DG^\xx$}. We provide a realization for the corresponding Lie superalgebras. 
For $m\in\Z_+$ consider the following totally ordered set $\wt{\I}_m$ \begin{align*} \cdots <\ov{\frac{3}{2}} <\ov{1}<\ov{\hf}<\underbrace{\ov{-1}<\ov{-2} <\cdots<\ov{-m}}_m<\ov{0}<\underbrace{-m<\cdots<-1}_m <\hf<1<\frac{3}{2}<\cdots \end{align*} For $m\in\Z_+$ define the following subsets of $\wt{\mathbb I}_m$: \begin{align*} {{\mathbb I}}_m &:= \{\underbrace{\ov{-1},\ldots,\ov{-m}}_m,\ov{0}, \underbrace{-m,\ldots,-1}_m\}\cup \{\ov{1},\ov{2},\ov{3},\ldots\} \cup\{1,2,3,\ldots\},\\ \ov{{\mathbb I}}_m &:= \{\underbrace{\ov{-1},\ldots,\ov{-m}}_m,\ov{0}, \underbrace{-m,\ldots,-1}_m\}\cup \{\ov{\frac{1}{2}},\ov{\frac{3}{2}},\ov{\frac{5}{2}},\ldots\} \cup\{\hf,\frac{3}{2},\frac{5}{2},\ldots\}, \\ \wt{\I}^+_m &:=\{-m,\ldots,-1,\hf,1,\frac{3}{2},2,\ldots\}. \end{align*} For a subset $\X$ of $\wt{\I}_m$, define $\X^\times:=\X\setminus\{\ov{0}\},\quad \X^+ :=\X\cap\wt{\I}_m^+. $ \subsubsection{General linear Lie superalgebra} For a homogeneous element $v$ in a super vector space $V=V_{\bar{0}}\oplus V_{\bar{1}}$ we denote by $|v|$ its $\Z_2$-degree. For $m\in\Z_+$ consider the infinite dimensional super space $\wt{V}_{m}$ over $\C$ with ordered basis $\{v_i|i\in\wt{\I}_m\}$. We declare $|v_r|=|v_{\ov{r}}|=\bar{0}$, if $r\in\Z\setminus\{0\}$, and $|v_r|=|v_{\ov{r}}|=\bar{1}$, if $r\in\hf+\Z_+$. The parity of the vector $v_{\ov{0}}$ is to be specified. With respect to this basis a linear map on $\wt{V}_m$ may be identified with a complex matrix $(a_{rs})_{r,s\in\wt{\mathbb{I}}_m}$. The Lie superalgebra $\gl(\wt{V}_m)$ is the Lie subalgebra of linear transformations on $\wt{V}_m$ consisting of $(a_{rs})$ with $a_{rs}=0$ for all but finitely many $a_{rs}$'s. Denote by $E_{rs}\in\gl(\wt{V}_m)$ the elementary matrix with $1$ at the $r$th row and $s$th column and zero elsewhere. The vector spaces $V_m$ and $\ov{V}_m$ are defined to be subspaces of $\wt{V}_m$ with ordered basis $\{v_i\}$ indexed by $\I_m$ and $\ov{\I}_m$, respectively. 
The corresponding subspaces of $V_m$, $\ov{V}_m$ and $\wt{V}_m$ with basis vectors $v_i$, with $i$ indexed by $\I^\times_m$, $\ov{\I}^\times_m$ and $\wt{\I}^\times_m$, respectively, are denoted by $V^\times_m$, $\ov{V}^\times_m$ and $\wt{V}^\times_m$, respectively. This gives rise to Lie superalgebras $\gl(V_m)$, $\gl(\ov{V}_m)$, $\gl(V^\times_m)$, $\gl(\ov{V}^\times_m)$ and $\gl(\wt{V}^\times_m)$. Let $W$ be one of the spaces $\wt{V}_m,\wt{V}^\times_m, V_m, V_m^\times, \ov{V}_m$ or $\ov{V}_m^\times$. The standard Cartan subalgebra of $\gl(W)$ is spanned by the basis $\{E_{rr}\}$, with corresponding dual basis $\{\epsilon_{r}\}$, where $r$ runs over the index sets $\wt{\I}_m$, $\wt{\I}^\times_m$, ${\I}_m$, ${\I}^\times_m$, $\ov{\I}_m$, $\ov{\I}^\times_m$, respectively. \subsubsection{Skew-supersymmetric bilinear form on $W$} \label{sec:skewsym} In this subsection we set $|v_{\ov{0}}|=\bar{1}$. For $m\in\Z_+$ define a non-degenerate skew-supersymmetric bilinear form $(\cdot|\cdot)$ on $\wt{V}_m$ by \begin{align}\label{symp:bilinear:form} &(v_{r}|v_{{s}}) =(v_{\ov{r}}|v_{\ov{s}})=0,\quad (v_r|v_{\ov{s}}) =\delta_{rs}=-(-1)^{|v_r|\cdot|v_s|}(v_{\ov{s}}|v_r), \quad r,s\in\wt{\I}_m^+,\\ &(v_{\ov{0}}|v_{\ov{0}})=1, \quad (v_{\ov{0}}|v_{r})=(v_{\ov{0}}|v_{\ov{r}})=0,\quad r\in\wt{\I}_m^+.\nonumber \end{align} Restricting the form to $\wt{V}^\times_m$, $V_m$, $V^\times_m$, $\ov{V}_m$ and $\ov{V}^\times_m$ gives rise to non-degenerate skew-supersymmetric bilinear forms that will again be denoted by $(\cdot|\cdot)$. Let $W$ be as before. The Lie superalgebra $\spo(W)$ is the subalgebra of $\gl(W)$ preserving the bilinear form $(\cdot|\cdot)$. The standard Cartan subalgebra of $\spo(W)$ is spanned by the basis $\{E_r:=E_{rr}-E_{\ov{r},\ov{r}}\}$, with corresponding dual basis $\{\epsilon_r\}$. 
We have the following realizations of the corresponding Lie superalgebras for $\xx=\mf{b^\bullet},\cc$ and $m>0$: \begin{center} \begin{table}[ht] \caption{}\label{table1} \begin{tabular}{|c|c||c|c|} \hline Lie superalgebra & Dynkin diagram& Lie superalgebra & Dynkin diagram\\ \hline \hline $\spo(\wt{V}_m)$ &\makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\DG^{\mf{b^\bullet}}$} &$\spo(\wt{V}^\times_m)$ & \makebox(23,2){$\oval(20,12)$}\makebox(-20,8){$\DG^{\mf{c}}$}\\ $\spo(V_m)$ & \makebox(23,-2){$\oval(20,14)$}\makebox(-20,8){$\G^{\mf{b^\bullet}}$} &$\spo(V^\times_m)$ & \makebox(23,1){$\oval(20,12)$}\makebox(-20,8){$\G^{\mf{c}}$}\\ $\spo(\ov{V}_m)$ & \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\SG^{\mf{b^\bullet}}$} &$\spo(\ov{V}^\times_m)$ & \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\SG^{\mf{c}}$}\\ \hline \end{tabular} \end{table} \end{center} The sets $\Pi^{\xx}$, $\ov{\Pi}^\xx$ and $\wt{\Pi}^\xx$ give rise to the following sets of positive roots: \begin{align*} &\wt{\Phi}^{\mf b^\bullet}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\wt{\I}^+_m)\} \cup\{-2\epsilon_i\ (i\in\I^+_m)\}\cup\{-\epsilon_r\ (r\in\wt{\I}_m^+)\},\\ &\wt{\Phi}^{\mf c}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\wt{\I}^+_m)\} \cup\{-2\epsilon_i\ (i\in\I^+_m)\},\\ &{\Phi}^{\mf b^\bullet}_+ =\{\pm\epsilon_i-\epsilon_j|i< j\ (i,j\in{\I}^+_m)\} \cup\{-\epsilon_i,-2\epsilon_i\ (i\in\I^+_m)\},\\ &{\Phi}^{\mf c}_+ =\{\pm\epsilon_i-\epsilon_j|i< j\ (i,j\in{\I}^+_m)\} \cup\{-2\epsilon_i\ (i\in\I^+_m)\},\\ &\ov{\Phi}^{\mf b^\bullet}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\ov{\I}^+_m)\} \cup\{-2\epsilon_i\ (-m\le i\le -1)\}\cup\{-\epsilon_r\ (r\in\ov{\I}_m^+)\},\\ &\ov{\Phi}^{\mf c}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\ov{\I}^+_m)\} \cup\{-2\epsilon_i\ (-m\le i\le -1)\}. 
\end{align*} The corresponding subsets of simple roots can be read off from the corresponding diagrams in \eqref{Dynkin:combined} (here we recall the notation of roots $\alpha$'s and $\beta$'s from \eqref{alpha:beta}). \subsubsection{Supersymmetric bilinear form on $W$}\label{sec:symm} Let $W$ be as before. In this subsection we set $|v_{\ov{0}}|=\bar{0}$. Define a supersymmetric bilinear form $(\cdot|\cdot)$ on $\wt{V}_m$ by \begin{align}\label{sym:bilinear:form} &(v_{r}|v_{{s}})=(v_{\ov{r}}|v_{\ov{s}})=0,\quad (v_r|v_{\ov{s}})=\delta_{rs}=(-1)^{|v_r|\cdot|v_s|}(v_{\ov{s}}|v_r), \quad r,s\in\wt{\I}_m^+,\\ &(v_{\ov{0}}|v_{\ov{0}})=1,\quad (v_{\ov{0}}|v_{r}) =(v_{\ov{0}}|v_{\ov{r}})=0,\quad r\in\wt{\I}_m^+.\nonumber \end{align} Restricting the form to $\wt{V}^\times_m$, $V_m$, $V^\times_m$, $\ov{V}_m$ and $\ov{V}^\times_m$ gives respective non-degenerate supersymmetric bilinear forms that will also be denoted by $(\cdot|\cdot)$. The Lie superalgebra $\osp(W)$ is the subalgebra of $\gl(W)$ preserving the respective bilinear form determined by \eqnref{sym:bilinear:form}. The standard Cartan subalgebra of $\osp(W)$ is also spanned by the basis $\{E_r:=E_{rr}-E_{\ov{r},\ov{r}}\}$, with corresponding dual basis $\{\epsilon_r\}$. 
We have the following realizations of the corresponding Lie superalgebras for $\xx=\mf{b}, \dd$ and $m>0$: \begin{center} \begin{table}[ht] \caption{}\label{table2} \begin{tabular}{|c|c||c|c|} \hline Lie superalgebra & Dynkin diagram& Lie superalgebra & Dynkin diagram\\ \hline \hline $\osp(\wt{V}_m)$ & \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\DG^{\mf{b}}$} &$\osp(\wt{V}^\times_m)$ & \makebox(23,1){$\oval(20,14)$}\makebox(-20,8){$\DG^{\mf{d}}$}\\ $\osp(V_m)$ & \makebox(23,-2){$\oval(20,13)$}\makebox(-20,8){$\G^{\mf{b}}$}&$\osp(V^\times_m)$ & \makebox(23,0){$\oval(20,13)$}\makebox(-20,8){$\G^{\mf{d}}$}\\ $\osp(\ov{V}_m)$ & \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\SG^{\mf{b}}$}&$\osp(\ov{V}^\times_m)$ & \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\SG^{\mf{d}}$}\\ \hline \end{tabular} \end{table} \end{center} The sets $\Pi^{\xx}$, $\ov{\Pi}^\xx$ and $\wt{\Pi}^\xx$ give rise to the following sets of positive roots: \begin{align*} &\wt{\Phi}^{\mf b}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\wt{\I}^+_m)\} \cup\{-2\epsilon_s\ (s\in\ov{\I}^+_0)\}\cup\{-\epsilon_r\ (r\in\wt{\I}_m^+)\},\\ &\wt{\Phi}^{\mf d}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\wt{\I}^+_m)\} \cup\{-2\epsilon_s\ (s\in\ov{\I}^+_0)\},\\ &{\Phi}^{\mf b}_+ =\{\pm\epsilon_i-\epsilon_j|i< j\ (i,j\in{\I}^+_m)\} \cup\{-\epsilon_i\ (i\in\I^+_m)\},\\ &{\Phi}^{\mf d}_+ =\{\pm\epsilon_i-\epsilon_j|i< j\ (i,j\in{\I}^+_m)\},\\ &\ov{\Phi}^{\mf b}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\ov{\I}^+_m)\} \cup\{-2\epsilon_s\ (s\in\ov{\I}_0^+)\}\cup\{-\epsilon_r\ (r\in\ov{\I}_m^+)\},\\ &\ov{\Phi}^{\mf d}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\ov{\I}^+_m)\} \cup\{-2\epsilon_s\ (s\in\ov{\I}_0^+)\}. \end{align*} Again, the subsets of simple roots can be read off from the corresponding diagrams in \eqref{Dynkin:combined}. 
\subsection{The case $m=0$}\label{sec:m=0} The Dynkin diagrams of $\spo(W)$ with a distinguished set of simple roots, for $W=\wt{V}_0,\wt{V}_0^\times,V_0, V_0^\times, \ov{V}_0, \ov{V}_0^\times$ are listed in order as follows (see also \remref{rem:000} below): \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3) \put(8,2){\makebox(0,0)[c]{$\bigotimes$}} \put(10.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(14.85,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(19.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.8,2){\line(1,0){1.5}} \put(6,1.8){$\Longleftarrow$} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(22.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(8,1){\makebox(0,0)[c]{\tiny $\alpha_{1/2}$}} \put(10.5,1){\makebox(0,0)[c]{\tiny $\alpha_{1}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny $\alpha_{r-1/2}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\alpha_{r}$}} \put(19.8,1){\makebox(0,0)[c]{\tiny $\alpha_{r+1/2}$}} \put(5.4,1){\makebox(0,0)[c]{\tiny $-\epsilon_{1/2}$}} \put(0,1.2){{\ovalBox(1.6,1.2){$\DG^{\mf b^\bullet}$}}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(8,2){\makebox(0,0)[c]{$\bigotimes$}} \put(10.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(14.85,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(19.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(6,3.8){\makebox(0,0)[c]{$\bigotimes$}} \put(6,.3){\makebox(0,0)[c]{$\bigotimes$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.8,2){\line(1,0){1.5}} \put(7.5,2.2){\line(-1,1){1.3}} \put(7.6,1.8){\line(-1,-1){1.25}} \put(6,.8){\line(0,1){2.6}} 
\put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(22.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(8.1,1){\makebox(0,0)[c]{\tiny $\alpha_{1}$}} \put(10.9,1){\makebox(0,0)[c]{\tiny $\alpha_{3/2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny $\alpha_{r-1/2}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\alpha_{r}$}} \put(19.8,1){\makebox(0,0)[c]{\tiny $\alpha_{r+1/2}$}} \put(4.5,3.8){\makebox(0,0)[c]{\tiny $\alpha_{1/2}$}} \put(3.5,0.3){\makebox(0,0)[c]{\tiny $-\epsilon_{1/2}-\epsilon_{1}$}} \put(0,1.2){{\ovalBox(1.6,1.2){$\DG^{\mf c}$}}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\circle*{0.9}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.8,2){\line(1,0){1.5}} \put(6,1.8){$\Longleftarrow$} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(22.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(8,1){\makebox(0,0)[c]{\tiny $\beta_1$}} \put(10.5,1){\makebox(0,0)[c]{\tiny $\beta_{2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny $\beta_{n-1}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\beta_{n}$}} \put(19.8,1){\makebox(0,0)[c]{\tiny $\beta_{n+1}$}} \put(5.4,1){\makebox(0,0)[c]{\tiny $-\epsilon_{1}$}} \put(-1,1.2){{\ovalBox(1.6,1.2){$\G^{\mf b^\bullet}$}}} \put(1,1.5){=} \put(2,1.2){{\ovalBox(1.6,1.2){$\SG^{\mf b}$}}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} 
\put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.8,2){\line(1,0){1.5}} \put(6,1.8){$\Longrightarrow$} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(22.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(8,1){\makebox(0,0)[c]{\tiny $\beta_1$}} \put(10.5,1){\makebox(0,0)[c]{\tiny $\beta_{2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny $\beta_{n-1}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\beta_{n}$}} \put(19.8,1){\makebox(0,0)[c]{\tiny $\beta_{n+1}$}} \put(5.4,1){\makebox(0,0)[c]{\tiny $-2\epsilon_{1}$}} \put(-1,1.2){{\ovalBox(1.6,1.2){$\G^{\mf c}$}}} \put(1,1.5){=} \put(2,1.2){{\ovalBox(1.6,1.2){$\SG^{\mf d}$}}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,3) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.8,2){\line(1,0){1.5}} \put(6,1.8){$\Longleftarrow$} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(22.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(8,1){\makebox(0,0)[c]{\tiny $\beta_{1/2}$}} \put(10.5,1){\makebox(0,0)[c]{\tiny $\beta_{3/2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny $\beta_{r-1}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\beta_{r}$}} \put(19.8,1){\makebox(0,0)[c]{\tiny $\beta_{r+1}$}} \put(5.4,1){\makebox(0,0)[c]{\tiny $-\epsilon_{1/2}$}} \put(-1,1.2){{\ovalBox(1.6,1.2){$\SG^{\mf b^\bullet}$}}} \put(1,1.5){=} \put(2,1.2){{\ovalBox(1.6,1.2){$\G^{\mf b}$}}} \end{picture} \end{center} \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} 
\put(14.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(6,3.8){\makebox(0,0)[c]{$\bigcirc$}} \put(6,.3){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\line(1,0){1.55}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.8,2){\line(1,0){1.5}} \put(7.6,2.2){\line(-1,1){1.3}} \put(7.6,1.8){\line(-1,-1){1.3}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(22.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(8.1,1){\makebox(0,0)[c]{\tiny $\beta_{3/2}$}} \put(10.7,1){\makebox(0,0)[c]{\tiny $\beta_{5/2}$}} \put(14.8,1){\makebox(0,0)[c]{\tiny $\beta_{r-1}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\beta_{r}$}} \put(19.8,1){\makebox(0,0)[c]{\tiny $\beta_{r+1}$}} \put(4.5,3.8){\makebox(0,0)[c]{\tiny $\beta_{1/2}$}} \put(3.5,0.3){\makebox(0,0)[c]{\tiny $-\epsilon_{1/2}-\epsilon_{3/2}$}} \put(-1,1.2){{\ovalBox(1.6,1.2){$\SG^{\mf c}$}}} \put(1,1.5){=} \put(2,1.2){{\ovalBox(1.6,1.2){$\G^{\mf d}$}}} \end{picture} \end{center} For the sake of completeness we also list the corresponding sets of positive roots. \begin{align*} &\wt{\Phi}^{\mf b^\bullet}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\wt{\I}^+_0)\} \cup\{-2\epsilon_i\ (i\in\I^+_0)\}\cup\{-\epsilon_r\ (r\in\wt{\I}_0^+)\},\\ &\wt{\Phi}^{\mf c}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\wt{\I}^+_0)\} \cup\{-2\epsilon_i\ (i\in\I^+_0)\},\\ &{\Phi}^{\mf b^\bullet}_+ =\{\pm\epsilon_i-\epsilon_j|i< j\ (i,j\in{\I}^+_0)\} \cup\{-\epsilon_i,-2\epsilon_i\ (i\in\I^+_0)\},\\ &{\Phi}^{\mf c}_+ =\{\pm\epsilon_i-\epsilon_j|i< j\ (i,j\in{\I}^+_0)\} \cup\{-2\epsilon_i\ (i\in\I^+_0)\},\\ &\ov{\Phi}^{\mf b^\bullet}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\ov{\I}^+_0)\} \cup\{-\epsilon_r\ (r\in\ov{\I}_0^+)\},\\ &\ov{\Phi}^{\mf c}_+ =\{\pm\epsilon_r-\epsilon_s|r< s\ (r,s\in\ov{\I}^+_0)\}. 
\end{align*} \begin{rem}\label{rem:000} It is easy to see that we have the following isomorphisms of Lie superalgebras with identical Dynkin diagrams: $\osp(V_0)\cong\spo(\ov{V}_0)$, $\osp(\ov{V}_0) \cong\spo(V_0)$, $\osp(V^\times_0)\cong\spo(\ov{V}^\times_0)$, $\osp(\ov{V}^\times_0)\cong\spo(V^\times_0)$. \end{rem} \subsection{Central extensions} \label{sec:ext} We will replace the above matrix realization of the Lie superalgebras with Dynkin diagrams \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\G^\xx$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\SG^\xx$} and \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\DG^\xx$} by their central extensions, for $\xx=\mf{b, b^\bullet, c, d}$. These central extensions will be convenient and conceptual for the later formulation of truncation functors and super duality. Let $m\in\Z_+$. Consider the central extension $\widehat{\gl}(\wt{V}_m)$ of $\gl(\wt{V}_m)$ by the one-dimensional center $\C K$ determined by the $2$-cocycle \begin{align*} \tau(A,B):=\text{Str}([\mathfrak{J},A]B),\quad A,B\in\gl(\wt{V}_m), \end{align*} where $\mathfrak{J}=E_{\ov{0}\ov{0}}+\sum_{r\le \ov{\hf}}E_{rr}$ and $\text{Str}$ denotes the supertrace. Observe that the cocycle $\tau$ is a coboundary. Indeed, as a vector space, $\widehat{\gl}(\wt{V}_m) =\gl(\wt{V}_m) \oplus \C K$; for $X\in\gl(\wt{V}_m)$ we write $\widehat{X}$ when $X$ is regarded as an element of $\widehat{\gl}(\wt{V}_m)$. Then the map from $\widehat{\gl}(\wt{V}_m)$ to the direct sum of Lie superalgebras $\gl(\wt{V}_m) \oplus \C K$, which sends $\widehat{X}$ to ${X}':= X - \text{Str}(\mf{J}X) K,$ is an algebra isomorphism, i.e., $[{X}', {Y}']= {[X,Y]}' +\tau(X,Y) K$. For $W=\wt{V}^\times_m, V_m, V_m^\times, \ov{V}_m, \ov{V}_m^\times$ the restrictions of $\tau$ to the subalgebras $\gl(W)$ give rise to respective central extensions, which in turn induce central extensions on $\osp(W)$ and $\spo(W)$.
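The coboundary claim admits a one-line verification, recorded here for convenience (it uses only the supersymmetry of the supertrace, $\text{Str}(AB)=(-1)^{|A||B|}\text{Str}(BA)$, together with the fact that $\mathfrak{J}$ is even):

```latex
\begin{align*}
\tau(X,Y) &= \text{Str}([\mathfrak{J},X]Y)
  = \text{Str}(\mathfrak{J}XY)-\text{Str}(X\mathfrak{J}Y)\\
 &= \text{Str}(\mathfrak{J}XY)-(-1)^{|X||Y|}\text{Str}(\mathfrak{J}YX)
  = \text{Str}(\mathfrak{J}[X,Y]).
\end{align*}
```

Hence $\tau$ is the coboundary of the linear functional $X\mapsto\text{Str}(\mathfrak{J}X)$, which is precisely what the isomorphism $\widehat{X}\mapsto X'$ encodes.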
We denote such a central extension of $\spo(W)$ or $\osp(W)$ by $\G^\xx$ (respectively, $\SG^\xx$ and $\DG^\xx$) when it corresponds to the Dynkin diagram \makebox(23,0){$\oval(20,13)$}\makebox(-20,8){$\G^\xx$} in Tables \ref{table1} and \ref{table2} (respectively, \makebox(23,0){$\oval(20,13)$}\makebox(-20,8){$\SG^\xx$} and \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\DG^\xx$}). We make a trivial yet crucial observation that $\G^\xx$ and $\SG^\xx$ are naturally subalgebras of $\DG^\xx$. The standard Cartan subalgebras of $\G^\xx$, $\SG^\xx$ and $\DG^\xx$ will be denoted by $\h^\xx$, $\ov{\h}^\xx$ and $\wt{\h}^\xx$, respectively. $\h^\xx$, $\ov{\h}^\xx$ or $\wt{\h}^\xx$ has a basis $\{K,\widehat{E}_{r} \}$ with dual basis $\{\Lambda_0,\epsilon_r\}$ in the restricted dual $(\h^\xx)^*$, $(\ov{\h}^\xx)^*$ or $(\wt{\h}^\xx)^*$, where $r$ runs over the index sets $\I^+_m$, $\ov{\I}^+_m$ or $\wt{\I}^+_m$, respectively. Here $\La_0$ is defined by letting \begin{align*} \La_0(K)=1,\quad\La_0(\widehat{E}_{r})=0, \end{align*} for all relevant $r$ in each case. {\bf In the remainder of the paper we shall drop the superscript $\xx$.} For example, we write $\G$, $\SG$ and $\wt{\G}$ for $\G^\xx$, $\SG^\xx$ and $\wt{\G}^\xx$, with associated Dynkin diagrams \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\G$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\SG$} and \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\DG$}, respectively, where $\xx$ denotes a fixed type among $\mf{b,b^\bullet,c,d}$. \section{Categories $\mathcal{O}$, $\ov{\mathcal{O}}$ and $\wt{\mathcal{O}}$}\label{sec:O} In this section, we first introduce the categories $\mathcal{O}$, $\ov{\mathcal{O}}$ and $\wt{\mathcal{O}}$ of $\G$-modules, $\SG$-modules and $\wt{\G}$-modules, respectively. Then we study the truncation functors which relate $\SG$ to finite dimensional Lie superalgebras of $\osp$ type. Let $m\in\Z_+$ be fixed. 
\subsection{The weights} \label{sec:OBCD} We fix an arbitrary subset $Y_0$ of $\Pi({\mf k})$. Let $Y$, $\ov{Y}$ and $\wt{Y}$ be the union of $Y_0$ and the subset of simple roots of \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf T$}, \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\ov{\mf T}$} and \makebox(23,0){$\oval(20,13)$}\makebox(-20,8){$\wt{\mf T}$}, respectively, with the leftmost one removed. We have $Y_0=\emptyset$ for $m=0$. As $Y$, $\ov{Y}$ and $\wt{Y}$ are fixed, we will make the convention of suppressing them from notations below. Set $\mf{l}$, $\ov{\mf{l}}$ and $\wt{\mf l}$ to be the standard Levi subalgebras of $\G$, $\SG$ and $\DG$ corresponding to the subsets $Y$, $\ov{Y}$ and $\wt{Y}$, respectively. The Borel subalgebras of $\G$, $\SG$ and $\DG$, spanned by the central element $K$ and upper triangular matrices, are denoted by $\mf{b}$, $\ov{\mf{b}}$ and $\wt{\mf{b}}$, respectively. Let $\mf{p} =\mf{l} +\mf{b}$, $\ov{\mf p} =\ov{\mf{l}} + \ov{\mf{b}}$ and $\wt{\mf p} =\wt{\mf{l}} +\wt{\mf{b}}$ be the corresponding parabolic subalgebras with nilradicals $\mf{u}$, $\ov{\mf u}$ and $\wt{\mf u}$ and opposite nilradicals $\mf{u}_-$, $\ov{\mf u}_-$ and $\wt{\mf u}_-$, respectively. Given a partition $\mu=(\mu_1,\mu_2,\ldots)$, we denote by $\ell(\mu)$ the length of $\mu$ and by $\mu'$ its conjugate partition. We also denote by $\theta(\mu)$ the modified Frobenius coordinates of $\mu$: \begin{equation*} \theta(\mu) :=(\theta(\mu)_{1/2},\theta(\mu)_1,\theta(\mu)_{3/2},\theta(\mu)_2,\ldots), \end{equation*} where $$\theta(\mu)_{i-1/2}:=\max\{\mu'_i-i+1,0\}, \quad \theta(\mu)_i:=\max\{\mu_i-i, 0\}, \quad i\in\N. $$ Let $\la_{-m},\ldots,\la_{-1}\in\C$ and $\la^+$ be a partition. The tuple $(\la_{-m},\ldots,\la_{-1};\la^+)$ is said to satisfy a dominant condition if $\langle\sum_{i=-m}^{-1}\la_i\epsilon_i, h_\alpha \rangle\in\Z_+$ for all $\alpha\in Y_0$, where $h_\alpha$ denotes the coroot of $\alpha$. 
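As an illustration of the modified Frobenius coordinates (a worked example supplied here), take $\mu=(3,2,2)$, so that $\ell(\mu)=3$ and $\mu'=(3,3,1)$:

```latex
\begin{align*}
\theta(\mu)_{1/2}&=\max\{\mu'_1-1+1,0\}=3, & \theta(\mu)_{1}&=\max\{\mu_1-1,0\}=2,\\
\theta(\mu)_{3/2}&=\max\{\mu'_2-2+1,0\}=2, & \theta(\mu)_{2}&=\max\{\mu_2-2,0\}=0,
\end{align*}
```

and $\theta(\mu)_r=0$ for $r\ge\frac{5}{2}$, so $\theta(\mu)=(3,2,2,0,0,\ldots)$. Note that $\sum_r\theta(\mu)_r=|\mu|=7$: the coordinates $\theta(\mu)_{i-1/2}$ and $\theta(\mu)_i$ record the leg length plus one, respectively the arm length, of the $i$th diagonal box of $\mu$, and the diagonal hooks partition the boxes of $\mu$.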
Associated to such a dominant tuple and each $d \in \C$, we define the weights (which will be called {\em dominant}) \begin{align} {\la} &:=\sum_{i=-m}^{-1}\la_{i}\epsilon_{i} + \sum_{j\in\N}\la^+_{j}\epsilon_{j} + d\La_0\in {\h}^{*},\label{weight:Im}\\ \la^\natural &:=\sum_{i=-m}^{-1}\la_{i}\epsilon_{i} + \sum_{s\in\hf+\Z_+}(\la^+)'_{s+\hf}\epsilon_s + d\La_0\in \ov{\h}^{*}, \label{weight:ovIm}\\ \la^\theta &:=\sum_{i=-m}^{-1}\la_{i}\epsilon_{i} + \sum_{r\in\hf\N}\theta(\la^+)_r\epsilon_r + d\La_0\in \wt{\h}^{*}.\label{weight:wtIm} \end{align} We denote by $P^+\subset{\h}^*$, $\bar{P}^+\subset\ov{\h}^*$ and $\wt{P}^+\subset\wt{\h}^*$ the sets of all dominant weights of the form \eqnref{weight:Im}, \eqnref{weight:ovIm} and \eqnref{weight:wtIm} for all $d\in \C$, respectively. By definition we have bijective maps \begin{eqnarray*} \natural:P^+\longrightarrow\bar{P}^+, && \quad \la\mapsto \la^\natural, \\ \theta:P^+\longrightarrow\wt{P}^+, && \quad \la\mapsto \la^\theta. \end{eqnarray*} For $\mu\in P^+$, let $L(\mf{l},\mu)$ denote the highest weight irreducible $\mf{l}$-module of highest weight $\mu$. We extend $L(\mf{l},\mu)$ to a $\mf{p}$-module by letting $\mf{u}$ act trivially. Define as usual the parabolic Verma module $\Delta(\mu)$ and its irreducible quotient $L(\mu)$ over $\G$: \begin{align*} \Delta(\mu):=\text{Ind}_{\mf{p}}^{\G}L(\mf{l},\mu), \qquad \Delta(\mu) \twoheadrightarrow L(\mu). \end{align*} Similarly, for $\mu\in{P^+}$, we define the irreducible $\ov{\mf{l}}$-module $L(\ov{\mf{l}},\mu^\natural)$, the parabolic Verma $\SG$-module $\ov{\Delta}(\mu^\natural)$ and its irreducible $\SG$-quotient $\ov{L}(\mu^\natural)$, as well as the irreducible $\wt{\mf{l}}$-module $L(\wt{\mf{l}},\mu^\theta)$, the parabolic Verma $\DG$-module $\wt{\Delta}(\mu^\theta)$ and its irreducible $\DG$-quotient $\wt{L}(\mu^\theta)$. \subsection{The categories $\mc{O}, \ov{\mc O}$ and $\wt{\mc O}$} \begin{lem}\label{lem:paraverma} Let $\mu\in P^+$. 
\begin{itemize} \item[(i)] The restrictions to $\mf{l}$ of the $\G$-modules $\Delta(\mu)$ and $L(\mu)$ decompose into direct sums of $L(\mf{l},\nu)$ for $\nu\in P^+$. \item[(ii)] The restrictions to $\ov{\mf{l}}$ of the $\SG$-modules $\ov{\Delta}(\mu^\natural)$ and $\ov{L}(\mu^\natural)$ decompose into direct sums of $L(\ov{\mf{l}},\nu^\natural)$ for $\nu\in P^+$. \item[(iii)] The restrictions to $\wt{\mf{l}}$ of the $\DG$-modules $\wt{\Delta}(\mu^\theta)$ and $\wt{L}(\mu^\theta)$ decompose into direct sums of $L(\wt{\mf{l}},\nu^\theta)$ for $\nu\in P^+$. \end{itemize} \end{lem} \begin{proof} Part (i) is clear. The proofs of (ii) and (iii) are analogous, and so we shall only give the proof for (ii). The $\ov{\mf{l}}$-module $\ov{\mf{u}}_-$ is a direct sum of irreducible modules of the form $L(\ov{\mf{l}},\nu^\natural)$. Now the category of $\ov{\mf{l}}$-modules that have an increasing composition series with composition factors isomorphic to $L(\ov{\mf{l}},\nu^\natural)$, with $\nu\in P^+$, is a semi-simple tensor category \cite[Section 3.2]{CK}. Thus $\ov{\Delta}(\mu^\natural)\cong U{(}\ov{\mf{u}}_-{)}\otimes L(\ov{\mf{l}},\mu^\natural)$ also decomposes into a direct sum of $L(\ov{\mf{l}},\nu^\natural)$ with $\nu\in P^+$, and so does its irreducible quotient $\ov{L}(\mu^\natural)$. \end{proof} Let $\mc{O}$ be the category of $\G$-modules ${M}$ such that ${M}$ is a semisimple ${\h}$-module with finite dimensional weight subspaces $M_\gamma$, $\gamma\in {\h}^*$, satisfying \begin{itemize} \item[(i)] ${M}$ decomposes over ${\mf{l}}$ into a direct sum of $L({\mf{l}},\mu)$ for $\mu\in {P^+}$. \item[(ii)] There exist finitely many weights $\la_1,\la_2,\ldots,\la_k\in{P^+}$ (depending on ${M}$) such that if $\gamma$ is a weight in ${M}$, then $\gamma\in\la_i-\sum_{\alpha\in{\Pi}}\Z_+\alpha$, for some $i$. \end{itemize} The parabolic Verma modules $\Delta(\mu)$ and irreducible modules $L(\mu)$ for $\mu\in P^+$ lie in $\mc{O}$, by \lemref{lem:paraverma}. 
Analogously we define the categories $\ov{\mc{O}}$ and $\wt{\mc{O}}$ of $\SG$- and $\DG$-modules, respectively. They also contain suitable parabolic Verma and irreducible modules. The morphisms in $\mc{O}$, $\ov{\mc{O}}$ and $\wt{\mc{O}}$ are all (not necessarily even) $\G$-, ${\SG}$- and ${\DG}$-homomorphisms, respectively. \subsection{The Lie superalgebras $\G_n$, $\SG_n$ and $\DG_n$ of finite rank} \label{sec:finiterank} For $n\in\N$, recall the sets $\Pi_n, \ov{\Pi}_n, \wt{\Pi}_n$ of simple roots for the Dynkin diagrams \eqref{Dynkin:combined}. The associated Lie superalgebras $\G_n$, $\SG_n$ and $\DG_n$ can be identified naturally with the subalgebras of $\G$, $\SG$ and $\DG$ generated by $K$ and the root vectors of the corresponding Dynkin diagrams in (\ref{Dynkin:combined}), and moreover, $\G_n \subset \G_{n+1}, \SG_n \subset \SG_{n+1}$ for all $n$. Observe that the $\SG_n$'s (modulo the trivial central extensions) are exactly all the finite dimensional Lie superalgebras of $\osp$ type. Since $\SG =\cup_n \SG_n$, the standard Cartan subalgebra of $\SG_n$ equals $\ov{\h}_n=\ov{\h}\cap\SG_n$. Similarly, we use the notation ${\h}_n$ and $\wt{\h}_n$ for the standard Cartan subalgebras of $\G_n$ and $\DG_n$, respectively. Recall the notation $\la \in P^+$, $\la^\natural$, and $\la^\theta$ from \eqnref{weight:Im}, \eqnref{weight:ovIm} and \eqnref{weight:wtIm}. Given $\la\in P^+$ with $\la^+_j=0$ for $j>n$, we may regard it as a weight $\la_n \in \h^*_n$ in a natural way. Similarly, for $\la\in P^+$ with $(\la^+)'_j=0$ for $j>n$, we regard $\la^\natural$ as a weight $\la^\natural_n \in \ov{\h}^*_n$. Finally, for $\la\in P^+$ with $\theta(\la^+)_j=0$ for $j>n$, we regard $\la^\theta$ as a weight $\la_n^\theta \in \wt{\h}^*_n$. The subsets of such weights $\la_n, \la_n^\natural, \la_n^\theta$ in $\h^*_n$, $\ov{\h}^*_n$ and $\wt{\h}^*_n$ will be denoted by $P^+_n$, $\bar{P}^+_n$ and $\wt{P}^+_n$, respectively. 
The corresponding parabolic Verma and irreducible $\G_n$-modules are denoted by $\Delta_n(\mu)$ and $L_n(\mu)$, respectively, with $\mu\in P^+_n$, while the corresponding category of $\G_n$-modules is denoted by $\mc{O}_n$. Similarly, we introduce the self-explanatory notations $\ov{\Delta}_n(\mu^\natural)$, $\ov{L}_n(\mu^\natural)$, $\ov{\mc{O}}_{n}$, and $\wt{\Delta}_n(\mu^\theta)$, $\wt{L}_n(\mu^\theta)$, $\wt{\mc{O}}_{n}$ for $\SG_n$- and $\DG_n$-modules, respectively. \subsection{The truncation functors} Let $\infty\ge k>n$. For $M\in \mc{O}_k$, we can write $M=\bigoplus_{\gamma}M_\gamma$, where $\gamma$ runs over $\sum_{i=-m}^{-1}\C \epsilon_i+\sum_{0<j\le k}\C \epsilon_j+ \C\La_0$. The {\em truncation functor} $$ \mf{tr}^{k}_n:\mc{O}_k \rightarrow\mc{O}_n $$ is defined by sending $M$ to $\bigoplus_{\nu}M_\nu$, summed over $\nu\in\sum_{i=-m}^{-1}\C \epsilon_i+\sum_{0<j\le n}\C\epsilon_j+ \C\La_0$. When it is clear from the context we shall also write $\mf{tr}_n$ instead of $\mf{tr}^k_n$. Analogously, truncation functors $\mf{tr}^{k}_n:\ov{\mc{O}}_k\rightarrow\ov{\mc{O}}_n$ and $\mf{tr}^{k}_n:\wt{\mc{O}}_k\rightarrow\wt{\mc{O}}_n$ are defined. \begin{lem}\label{lem:trunc} Let $\infty\ge k>n$ and $X=L,\Delta$. \begin{itemize} \item[(i)] For $\mu\in P^+_k$ we have $\mf{tr}_n\big{(}X_k (\mu)\big{)} = \begin{cases} X_n (\mu),\quad\text{if } \langle\mu,\widehat{E}_j\rangle=0,\forall j>n,\\ 0,\quad\text{otherwise}. \end{cases}$ \item[(ii)] For $\mu\in\bar{P}^+_k$ we have $\mf{tr}_n\big{(}\ov{X}_k (\mu)\big{)}= \begin{cases} \ov{X}_n(\mu),\quad\text{if } \langle\mu,\widehat{E}_j\rangle=0,\forall j>n,\\ 0,\quad\text{otherwise}. \end{cases}$ \item[(iii)] For $\mu\in\wt{P}^+_k$ we have $\mf{tr}_n\big{(}\wt{X}_k(\mu)\big{)}= \begin{cases} \wt{X}_n(\mu),\quad\text{if }\langle\mu,\widehat{E}_j\rangle=0,\forall j>n,\\ 0,\quad\text{otherwise}. \end{cases}$ \end{itemize} \end{lem} \begin{proof} We will show (i) only. The proofs of (ii) and (iii) are similar.
Since $\mf{tr}^k_n\circ\mf{tr}^l_k=\mf{tr}^l_n$, it is enough to show (i) for $k=\infty$. Suppose that $\langle\mu,\widehat{E}_{j}\rangle=0$ for all $j>n$. Let $\mf{l}'$ denote the standard Levi subalgebra of $\G$ corresponding to the removal of the vertex $\beta_{n}$ of the Dynkin diagram of $\G$. Then $\mf{l}'\cong \G_n\oplus \mf{gl}(\infty)$. Now $L(\mu)$ is the unique irreducible quotient of the $\G$-module obtained via parabolic induction from the $\mf{l}'$-module $L_n(\mu)$ (where the $\G_n$-module $L_n(\mu)$ is extended to an $\mf{l}'$-module by a trivial action of $\mf{gl}(\infty)$). Our choice of the Levi subalgebra and of the opposite nilradical ensures that this parabolically induced module truncates to $L_n(\mu)$. Thus its irreducible quotient $L(\mu)$ also truncates to $L_n(\mu)$. The remaining case in (i) is clear. \end{proof} \begin{rem} The central extensions introduced in Section \ref{sec:ext} allow us to study, in a uniform fashion, modules whose weights stabilize at any $d\in\C$ (not just at $d=0$). For example, if $\mu$ is a weight with $\mu(E_{r})=d\not=0$, for $r\gg 0$, then, without central extensions, the usual truncation functors would always truncate an irreducible or parabolic Verma module with such a highest weight to zero. A way around central extensions is to define truncation functors depending on each $d\in\C$. This approach, although equivalent, looks less elegant. \end{rem} \section{The character formulas}\label{sec:character} In this section we introduce two functors $T: \wt{\mc O} \rightarrow \mc O$ and $\ov{T}: \wt{\mc O} \rightarrow \ov{\mc O}$, and then establish a fundamental property of these functors (Theorem~\ref{matching:modules}). As a consequence, we obtain an irreducible $\osp$-character formula in a parabolic category $O$ in terms of the KL polynomials of $BCD$ types (Theorem~\ref{character} and Remark~\ref{rem:sol}).
The Kazhdan-Lusztig polynomials in the categories $\mc O$, $\ov{\mc O}$ and $\wt{\mc O}$, defined in terms of Kostant $\mf u$-homology groups, are shown to match perfectly with one another (\thmref{matching:KLpol}). \subsection{Odd reflections}\label{odd} Let $\mc G$ be a Lie superalgebra with a Borel subalgebra $\mc{B}$, with corresponding sets of simple and positive roots $\Pi(\mc{B})$ and $\Phi_+(\mc{B})$, respectively. As usual, for a positive root $\beta$, we let $f_\beta$ denote a root vector associated to the root $-\beta$. Let $\alpha$ be an isotropic odd simple root in $\Pi(\mc{B})$ and let $h_\alpha$ be its corresponding coroot. The set $\Phi_+(\mc{B}^\alpha) :=\{-\alpha\}\cup\big{(}\Phi_+(\mc{B})\setminus\{\alpha\}\big{)}$ forms a new set of positive roots whose corresponding set of simple roots is \begin{align*} \Pi(\mc{B}^\alpha) =\{\beta\in\Pi(\mc{B})|\langle\beta,h_\alpha\rangle=0, \beta \neq \alpha\}\cup \{\beta+\alpha|\beta\in\Pi(\mc{B}),\langle\beta,h_\alpha \rangle\not=0\} \cup\{-\alpha\}. \end{align*} We shall denote by $\mc B^\alpha$ the corresponding new Borel subalgebra. The process of such a change of Borel subalgebras is referred to as {\em odd reflection} with respect to $\alpha$ \cite{LSS}. The following simple and fundamental lemma for odd reflections has been used by many authors (cf. e.g. \cite{KW}). \begin{lem} \label{hwt odd} Let $L$ be a simple $\mc G$-module of $\mc B$-highest weight $\la$ and let $v$ be a $\mc B$-highest weight vector. Let $\alpha$ be a simple isotropic odd root in $\Pi(\mc{B})$. \begin{enumerate} \item If $\langle \la, h_\alpha \rangle = 0$, then $L$ is a $\mc G$-module of $\mc B^\alpha$-highest weight $\la$ and $v$ is a $\mc B^\alpha$-highest weight vector. \item If $\langle \la, h_\alpha \rangle \neq 0$, then $L$ is a $\mc G$-module of $\mc B^\alpha$-highest weight $\la -\alpha$ and $f_\alpha v$ is a $\mc B^\alpha$-highest weight vector.
\end{enumerate} \end{lem} \subsection{The Borel subalgebras $\wt{\mf{b}}^{c}(n)$ and $\wt{\mf{b}}^{s}(n)$} \label{oddreflection} Fix $n\in \N$. Starting with the third Dynkin diagram in \eqref{Dynkin:combined} associated to $\DG$, we apply the following sequence of $\frac{n(n+1)}{2}$ odd reflections. First we apply one odd reflection corresponding to $\alpha_{1/2}$, then we apply two odd reflections corresponding to $\alpha_{3/2}$ and $\alpha_{1/2}+\alpha_1+\alpha_{3/2}$. After that we apply three odd reflections corresponding to $\alpha_{5/2}$, $\alpha_{3/2}+\alpha_{2}+\alpha_{5/2}$, and $\alpha_{1/2}+\alpha_1+\alpha_{3/2}+\alpha_{2}+\alpha_{5/2}$, et cetera, until finally we apply $n$ odd reflections corresponding to $\alpha_{n-1/2},\alpha_{n-3/2}+\alpha_{n-1} +\alpha_{n-1/2},\ldots,\sum_{i=1}^{2n-1}\alpha_{i/2}$. The resulting new Borel subalgebra for $\DG$ will be denoted by $\wt{\mf{b}}^{c}(n)$ and its corresponding simple roots are listed in the following Dynkin diagram: \begin{center} \hskip -6cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(15.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(21.9,2){\makebox(0,0)[c]{$\bigcirc$}} \put(24.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(26.8,2){\makebox(0,0)[c]{$\bigotimes$}} \put(13.2,2){\line(1,0){1.45}} \put(15.7,2){\line(1,0){1.25}} \put(17.8,2){\line(1,0){0.9}} \put(20.1,2){\line(1,0){1.4}} \put(22.35,2){\line(1,0){1.6}} \put(24.9,2){\line(1,0){1.5}} \put(27.3,2){\line(1,0){1.5}} \put(19.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(29.7,1.95){\makebox(0,0)[c]{$\cdots$}} \put(15.2,3){\makebox(0,0)[c]{\tiny $-\sum_{i=1}^{2n-1}\alpha_{i/2}$}} \put(17.8,1){\makebox(0,0)[c]{\tiny $\beta_{1/2}$}} \put(22,1){\makebox(0,0)[c]{\tiny $\beta_{n-1/2}$}} \put(24.4,1){\makebox(0,0)[c]{\tiny $\alpha_{n+1/2}$}} \put(27,1){\makebox(0,0)[c]{\tiny $\alpha_{n+1}$}} \put(8.0,2){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\mf{k}$}}}} \put(8.8,2){\line(1,0){1.7}} 
\put(11.8,2){\makebox(0,0)[c]{{\ovalBox(2.6,1.2){$\mf{T}_{n}$}}}} \end{picture} \end{center} The crucial point here is that the subdiagram to the left of the first $\bigotimes$ is the Dynkin diagram of $\G_{n}$. On the other hand, starting with the third Dynkin diagram in \eqref{Dynkin:combined} associated to $\DG$, we apply the following new sequence of $\frac{n(n+1)}{2}$ odd reflections. First we apply one odd reflection corresponding to $\alpha_{1}$, then we apply two odd reflections corresponding to $\alpha_{2}$ and $\alpha_{1}+\alpha_{3/2}+\alpha_2$. After that we apply three odd reflections corresponding to $\alpha_{3}$, $\alpha_{2}+\alpha_{5/2}+\alpha_3$, and $\alpha_1+\alpha_{3/2}+\alpha_{2}+\alpha_{5/2}+\alpha_3$, et cetera, until finally we apply $n$ odd reflections corresponding to $\alpha_{n},\alpha_{n-1}+\alpha_{n-1/2} +\alpha_{n},\ldots,\sum_{i=2}^{2n}\alpha_{i/2}$. The resulting new Borel subalgebra for $\DG$ will be denoted by $\wt{\mf{b}}^{s}(n)$ and its corresponding simple roots are listed in the following Dynkin diagram: \begin{center} \hskip -6cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(15.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(21.9,2){\makebox(0,0)[c]{$\bigcirc$}} \put(24.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(26.8,2){\makebox(0,0)[c]{$\bigotimes$}} \put(13.28,2){\line(1,0){1.45}} \put(15.7,2){\line(1,0){1.25}} \put(17.8,2){\line(1,0){0.9}} \put(20.1,2){\line(1,0){1.4}} \put(22.35,2){\line(1,0){1.6}} \put(24.9,2){\line(1,0){1.5}} \put(27.3,2){\line(1,0){1.5}} \put(19.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(29.7,1.95){\makebox(0,0)[c]{$\cdots$}} \put(15.5,3){\makebox(0,0)[c]{\tiny $-\sum_{i=2}^{2n}\alpha_{i/2}$}} \put(17.8,1){\makebox(0,0)[c]{\tiny $\beta_{1}$}} \put(22,1){\makebox(0,0)[c]{\tiny $\beta_{n}$}} \put(24.4,1){\makebox(0,0)[c]{\tiny $\alpha_{n+1}$}} \put(27,1){\makebox(0,0)[c]{\tiny $\alpha_{n+3/2}$}} \put(8.0,2){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$\mf{k}$}}}} 
\put(8.8,2){\line(1,0){1.7}} \put(11.8,2){\makebox(0,0)[c]{{\ovalBox(2.6,1.2){$\ov{\mf{T}}_{n+1}$}}}} \end{picture} \end{center} We remark that the subdiagram to the left of the odd simple root $-\sum_{i=2}^{2n}\alpha_{i/2}$ above becomes the Dynkin diagram of $\SG_{n+1}$. \subsection{Highest weights with respect to $\wt{\mf{b}}^{c}(n)$ and $\wt{\mf{b}}^{s}(n)$} Recall the standard Levi subalgebra $\wt{\mf{l}}$ of $\DG$ with nilradical $\wt{\mf{u}}$ and opposite nilradical $\wt{\mf{u}}_-$ (see Section~\ref{sec:OBCD}). \begin{lem}\label{lem:uinvar} The sequences of odd reflections in Section~\ref{oddreflection} leave the sets of roots of $\wt{\mf{u}}$ and $\wt{\mf{u}}_-$ invariant. \end{lem} \begin{proof} This follows from the fact that the simple roots used in the sequences of odd reflections in Section~\ref{oddreflection} are all roots of $\wt{\mf{l}}$. \end{proof} We denote by $\wt{\mf{b}}^{c}_{\wt{\mf{l}}}(n)$ and $\wt{\mf{b}}^{s}_{\wt{\mf{l}}}(n)$ the Borel subalgebras of $\wt{\mf{l}}$ corresponding to the sets of simple roots $\wt{\Pi}^{c}(n)\cap\sum_{\alpha\in\wt{Y}}\Z\alpha$ and $\wt{\Pi}^{s}(n)\cap\sum_{\alpha\in\wt{Y}}\Z\alpha$, respectively. The sequences of odd reflections in Section~\ref{oddreflection} only affect the tail diagram \makebox(20,0){$\oval(20,14)$}\makebox(-20,8){$\wt{\mf{T}}$} and leave the head diagram \makebox(20,0){$\oval(20,12)$}\makebox(-20,7){$\mf{k}$} invariant. Since the tail diagram is of type $A$, the proofs of \cite[Lemma 3.2]{CL} and \cite[Corollary 3.3]{CL} can be adapted in a straightforward way to prove the following (where Lemma~\ref{lem:uinvar} is used). \begin{prop}\label{prop:change} Let $\la\in P^+$ and $n\in\N$. \begin{itemize} \item[(i)] Suppose that $\ell(\la_+)\le n$. Then the highest weight of $L(\wt{\mf{l}},\la^\theta)$ with respect to the Borel subalgebra $\wt{\mf{b}}^{c}_{\wt{\mf{l}}}(n)$ is $\la$.
Furthermore, $\wt{\Delta}(\la^\theta)$ and $\wt{L}(\la^\theta)$ are highest weight $\DG$-modules of highest weight $\la$ with respect to the new Borel subalgebra $\wt{\mf{b}}^{c}(n)$. \item[(ii)] Suppose that $\ell(\la_+')\le n$. Then the highest weight of $L(\wt{\mf{l}},\la^\theta)$ with respect to the Borel subalgebra $\wt{\mf{b}}^{s}_{\wt{\mf{l}}}(n)$ is $\la^\natural$. Furthermore, $\wt{\Delta}(\la^\theta)$ and $\wt{L}(\la^\theta)$ are highest weight $\DG$-modules of highest weight $\la^\natural$ with respect to the new Borel subalgebra $\wt{\mf{b}}^{s}(n)$. \end{itemize} \end{prop} \subsection{The functors $T$ and $\ov{T}$}\label{Tfunctors} \label{sec:T} By definition, $\G$ and $\SG$ are naturally subalgebras of $\DG$, $\mf l$ and $\ov{\mf{l}}$ are subalgebras of $\wt{\mf{l}}$, while $\h$ and $\ov{\h}$ are subalgebras of $\wt{\h}$. Also, we may regard ${\h}^* \subset \wt{\h}^*$ and $\ov{\h}^* \subset \wt{\h}^*$. Given a semisimple $\wt{\h}$-module $\wt{M}=\bigoplus_{\gamma\in\wt{\h}^*}\wt{M}_\gamma$, we define \begin{align*} T(\wt{M}):= \bigoplus_{\gamma\in{{\h}^*}}\wt{M}_\gamma,\qquad \hbox{and}\qquad \ov{T}(\wt{M}):= \bigoplus_{\gamma\in{\ov{\h}^*}}\wt{M}_\gamma. \end{align*} Note that $T(\wt{M})$ is an ${\h}$-submodule of $\wt{M}$, and $\ov{T}(\wt{M})$ is an $\ov{\h}$-submodule of $\wt{M}$. One checks that if $\wt{M}=\bigoplus_{\gamma\in\wt{\h}^*}\wt{M}_\gamma$ is an $\wt{\mf{l}}$-module, then $T(\wt{M})$ is an ${\mf{l}}$-submodule of $\wt{M}$ and $\ov{T}(\wt{M})$ is an $\ov{\mf{l}}$-submodule of $\wt{M}$. Furthermore, if $\wt{M}=\bigoplus_{\gamma\in\wt{\h}^*}\wt{M}_\gamma$ is a $\DG$-module, then $T(\wt{M})$ is a ${\G}$-submodule of $\wt{M}$ and $\ov{T}(\wt{M})$ is a $\ov{\G}$-submodule of $\wt{M}$.
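Concretely, under the identification ${\h}^*\subset\wt{\h}^*$ a weight $\gamma\in\wt{\h}^*$ belongs to ${\h}^*$ precisely when it vanishes on the half-integer degrees, so that, heuristically,
\begin{align*}
T(\wt{M})=\bigoplus_{\substack{\gamma\in\wt{\h}^*\\ \langle\gamma,\widehat{E}_r\rangle=0,\,\forall r\in 1/2+\Z_+}}\wt{M}_\gamma,
\end{align*}
while $\ov{T}$ retains precisely the weight spaces whose weights vanish on the integer degrees $\widehat{E}_s$, $s\in\N$.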
The direct sum decomposition in $\wt{M}$ gives rise to the natural projections \begin{eqnarray*} \CD T_{\wt{M}} : \wt{M} @>>>T(\wt{M}) \qquad \hbox{and}\qquad \ov{T}_{\wt{M}} : \wt{M} @>>>\ov{T}(\wt{M}) \endCD \end{eqnarray*} that are ${\h}$- and $\ov{\h}$-module homomorphisms, respectively. If $\wt{f}:\wt{M}\rightarrow \wt{N}$ is an $\wt{\h}$-homomorphism, then the induced maps \begin{eqnarray*} \CD T[\wt{f}] : T(\wt{M}) @>>>T(\wt{N}) \qquad \hbox{and}\qquad \ov{T}[\wt{f}] : \ov{T}(\wt{M}) @>>>\ov{T}(\wt{N}) \endCD \end{eqnarray*} are also ${\h}$- and $\ov{\h}$-module homomorphisms, respectively. Also, if $\wt{f}:\wt{M}\rightarrow \wt{N}$ is a ${\wt{\G}}$-homomorphism, then $T_{\wt{M}}$ and $T[\wt{f}]$ (respectively, $\ov{T}_{\wt{M}}$ and $\ov{T}[\wt{f}]$) are ${\G}$- (respectively, $\ov{\G}$-) homomorphisms. \begin{lem}\label{lmod2lmod} For $\la\in P^+$, we have $T\big{(}L(\wt{\mf{l}},\la^\theta)\big{)} = L({\mf{l}},\la)$, and $\ov{T}\big{(}L(\wt{\mf{l}},\la^\theta)\big{)}= L(\ov{\mf{l}},\la^\natural)$. \end{lem} \begin{proof} We shall prove the first formula using a character argument; the second one can be proved similarly. For partitions $\nu \subset \la$, we denote by $s_\la(x_1,x_2,\ldots)$ and $s_{\la/\nu}(x_1,x_2,\ldots)$ the Schur and skew Schur functions in the variables $x_1,x_2,\ldots$. The hook Schur function associated to $\la$ is defined to be (cf.~\cite{S, BR}) \begin{equation}\label{hookschur:char} hs_{\la}(x_{1/2},x_1,x_{3/2},x_2,\ldots) :=\sum_{\mu\subset\la}s_{\mu}(x_{1/2},x_{3/2},\ldots) s_{\la'/\mu'}(x_1,x_2,\ldots). \end{equation} For a dominant tuple $(\la_{-m},\ldots,\la_{-1};\la^+)$, we have (cf. \cite{CK}) \begin{equation} \label{eqn:hookschur} {\rm ch}L(\wt{\mf{l}},\la^\theta)={\rm ch}L(\wt{\mf{l}}\cap\mf{k},\la|_{\mf k}) \, hs_{\la_+'}(x_{1/2},x_1,x_{3/2},x_2,\ldots).
\end{equation} Here $x_r:=e^{\epsilon_r}$ for each $r$, and $L(\wt{\mf{l}}\cap\mf{k},\la|_{\mf k})$ denotes the irreducible $\wt{\mf{l}}\cap\mf{k}$-module of highest weight $\la|_{\mf k}=\sum_{i=-m}^{-1}\la_i\epsilon_i$. Note that $\wt{\mf{l}}\cap\mf{k} ={\mf{l}}\cap\mf{k}$. As an $\mf{l}$-module, $L(\wt{\mf{l}},\la^\theta)$ is completely reducible. On the character level, applying $T$ to $L(\wt{\mf{l}},\la^\theta)$ corresponds to setting $x_{1/2},x_{3/2},x_{5/2},\ldots$ in the character formula \eqnref{eqn:hookschur} to zero. By \eqnref{hookschur:char}, $T\big{(}L(\wt{\mf{l}},\la^\theta)\big{)}$ is an $\mf{l}$-module with character ${\rm ch}L(\wt{\mf{l}}\cap\mf{k},\la|_{\mf k}) \, s_{\la_+}(x_1,x_2,\ldots)$, which is precisely the character of $L({\mf{l}},\la)$. This proves the formula. \end{proof} \begin{cor}\label{functor} $T$ and $\ov{T}$ define exact functors from $\wt{\mc{O}}$ to $\mc{O}$ and from $\wt{\mc{O}}$ to $\ov{\mc{O}}$, respectively. \end{cor} The following theorem can be regarded as a weak version of the super duality which is to be established in Theorem~\ref{thm:equivalence}. \begin{thm}\label{matching:modules} Let $\la\in P^+$. If $\wt{M}$ is a highest weight $\wt{\G}$-module of highest weight $\la^\theta$, then $T(\wt{M})$ and $\ov{T}(\wt{M})$ are highest weight ${\G}$- and $\ov{\G}$-modules of highest weights $\la$ and $\la^\natural$, respectively. Furthermore, we have \begin{align*} T\big{(}\wt{\Delta}(\la^\theta)\big{)} =\Delta(\la),\quad &T\big{(}\wt{L}(\la^\theta)\big{)}=L(\la); \\ \ov{T}\big{(}\wt{\Delta}(\la^\theta)\big{)} =\ov{\Delta}(\la^\natural),\quad &\ov{T}\big{(}\wt{L}(\la^\theta)\big{)}=\ov{L}(\la^\natural). \end{align*} \end{thm} \begin{proof} We will prove only the statements involving $T$; the statements involving $\ov{T}$ can be proved in the same way. By \propref{prop:change}, $\wt{M}$ contains a $\wt{\mf b}^c(n)$-highest weight vector $v_\la$ of highest weight $\la$ for $n \gg 0$.
The vector $v_\la$ clearly lies in $T(\wt{M})$, and it is a ${\mf b}$-singular vector since $\mf{b}=\G\cap\wt{\mf b}^c(n)$. Now $T(\wt{M})$ is completely reducible over ${\mf l}$ with all highest weights of its irreducible summands lying in $P^+$. Thus to show that $T(\wt{M})$ is a highest weight $\G$-module it remains to show that any vector in $T(\wt{M})$ of weight in $P^+$ is contained in ${U}(\G)v_\la$. This follows by the same argument in \cite[Lemma 3.5]{CL}, which only relies on the $A$-type tail diagram of $\DG$. Let us write $\Delta(\la) =U(\mf u_-) \otimes_\C L({\mf{l}},\la)$ and $\wt{\Delta}(\la^\theta) =U(\wt{\mf u}_-) \otimes_\C L(\wt{\mf{l}},\la^\theta)$. We observe that all the weights in $U(\mf u_-)$, $L({\mf{l}},\la)$, $U(\wt{\mf u}_-)$, and $L(\wt{\mf{l}},\la^\theta)$ are of the form $\sum_{j<0}a_j\ep_j+\sum_{r>0}b_r\ep_r$ with $b_r\in\Z_+$. Since also $T(U(\wt{\mf u}_-)) =U(\mf u_-)$, it follows by \lemref{lmod2lmod} that $\text{ch}T\big{(}\wt{\Delta}(\la^\theta)\big{)} =\text{ch}\Delta(\la)$. Since $T\big{(}\wt{\Delta}(\la^\theta)\big{)}$ is a highest weight module of highest weight $\la$, we have $T\big{(}\wt{\Delta}(\la^\theta)\big{)} =\Delta(\la)$. To show that $T$ sends irreducibles to irreducibles we show that $T(\wt{L}(\la^\theta))$ has no singular vector apart from the scalar multiples of a highest weight vector. We argue by assuming otherwise and derive a contradiction. If we have another singular vector of weight different from $\la$, then we can show, following the second part of the proof of \cite[Theorem 3.6]{CL}, that we also have a singular vector in $\wt{L}(\la^\theta)$ of weight different from $\la^\theta$. The argument there is applicable here, since it again only depends on the tail diagram, which is of type $A$. \end{proof} \begin{rem} It can be shown that tilting modules exist in categories $\mc O, \ov{\mc O}, \wt{\mc O}$ (cf. \cite{CWZ, CW2} for type $A$) and that the functors $T$ and $\ov T$ respect the tilting modules. 
We choose not to develop the details in order to keep the paper to a reasonable length. \end{rem} By standard arguments, \thmref{matching:modules} implies the following character formula. \begin{thm}\label{character} Let $\la\in P^+$, and write ${\rm ch}L(\la)=\sum_{\mu\in P^+}a_{\mu\la}{\rm ch}\Delta(\mu)$, $a_{\mu\la}\in\Z$. Then \begin{itemize} \item[(i)] ${\rm ch}\ov{L}(\la^\natural)=\sum_{\mu\in P^+}a_{\mu\la}{\rm ch}\ov{\Delta}(\mu^\natural)$, \item[(ii)] ${\rm ch}\wt{L}(\la^\theta)=\sum_{\mu\in P^+}a_{\mu\la}{\rm ch}\wt{\Delta}(\mu^\theta)$. \end{itemize} \end{thm} \begin{rem} \label{rem:sol} The transition matrix $(a_{\mu\la})$ in \thmref{character} is known according to the Kazhdan-Lusztig theory. This is because the Kazhdan-Lusztig polynomials in the BGG category ${O}$ also determine the composition factors of generalized Verma modules in the corresponding parabolic subcategory (see e.g.~\cite[p.~445 and Proposition 7.5]{So}). Hence \thmref{character} and \lemref{lem:trunc} provide a complete solution to the irreducible character problem in the category $\ov{\mc O}_n$ for the ortho-symplectic Lie superalgebras. \end{rem} \subsection{Kostant type homology formula}\label{sec:homology} For a precise definition of homology groups of Lie superalgebras with coefficients in a module and a precise formula for the boundary operator we refer the reader to \cite[Section 4]{CL} or \cite{T}. For $\wt{M}\in\wt{\mc{O}}$ we write $M=T(\wt{M})\in{\mc{O}}$ and $\ov{M}=\ov{T}(\wt{M})\in\ov{\mc{O}}$. Furthermore, let $\wt{d}:\Lambda(\wt{\mf{u}}_-)\otimes {\wt{M}}\rightarrow\Lambda(\wt{\mf{u}}_-)\otimes {\wt{M}}$ be the boundary operator of the complex of $\wt{\mf{u}}_-$-homology groups with coefficients in $\wt{M}$, regarded as a $\wt{\mf{u}}_-$-module. The map $\wt{d}$ is an $\wt{\mf{l}}$-module homomorphism and hence the homology groups $H_n(\wt{\mf{u}}_-;\wt{M})$ are $\wt{\mf{l}}$-modules, for $n\in\Z_+$.
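For the reader's convenience we recall the shape of the boundary operator in the purely even setting (in the super case additional signs enter; see \cite[Section 4]{CL}). For an ordinary Lie algebra $\mf{a}$ and an $\mf{a}$-module $M$, the boundary operator $\partial$ on $\La(\mf{a})\otimes M$ is given, up to standard sign conventions, by
\begin{align*}
\partial(x_1\wedge\cdots\wedge x_n\otimes v)=&\sum_{1\le i<j\le n}(-1)^{i+j}[x_i,x_j]\wedge x_1\wedge\cdots\widehat{x}_i\cdots\widehat{x}_j\cdots\wedge x_n\otimes v\\
&+\sum_{i=1}^{n}(-1)^{i+1}x_1\wedge\cdots\widehat{x}_i\cdots\wedge x_n\otimes x_iv,
\end{align*}
where $\widehat{x}_i$ indicates that the factor $x_i$ is omitted.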
Accordingly we let $d:\Lambda({\mf{u}}_-)\otimes {M}\rightarrow \Lambda({\mf{u}}_-)\otimes {M}$ and $\ov{d}:\Lambda({\ov{\mf{u}}}_-)\otimes \ov{M}\rightarrow \Lambda({\ov{\mf{u}}}_-)\otimes \ov{M}$ stand for the boundary operator of the complex of $\mf{u}_-$-homology with coefficients in $M$ and the boundary operator of the complex of ${\ov{\mf{u}}}_-$-homology with coefficients in $\ov{M}$, respectively. Similarly, ${d}$ and $\ov{d}$ are ${\mf{l}}$- and ${\ov{\mf{l}}}$-homomorphisms, respectively. \begin{lem}\label{boundary} For $\wt{M}\in{\wt{\mc{O}}}$ and $\la\in P^+$, we have \begin{itemize}\label{lem:aux3} \item[(i)] $T\big{(}\La(\wt{\mf{u}}_-) \otimes\wt{M}\big{)}=\La({\mf{u}}_-)\otimes{M}$, and thus $T\big{(}\La(\wt{\mf{u}}_-)\otimes\wt{L}(\la^\theta)\big{)} = \La({\mf{u}}_-)\otimes L(\la).$ Moreover, $T[\wt{d}]=d$. \item[(ii)] $\ov{T}\big{(}\La(\wt{\mf{u}}_-)\otimes\wt{M}\big{)} =\La({\ov{\mf{u}}}_-)\otimes{\ov{M}}$, and thus $\ov{T}\big{(}\La(\wt{\mf{u}}_-)\otimes \wt{L}(\la^\theta)\big{)} = \La({\ov{\mf{u}}}_-)\otimes\ov{L}(\la^\natural).$ Moreover, $\ov{T}[\wt{d}]=\ov{d}$. \end{itemize} \end{lem} \begin{proof} We will prove (i) only. It follows by definition of $T$ and $\wt{\mf{u}}_-$ that $T\big{(}\Lambda(\wt{\mf{u}}_-)\big{)}=\Lambda({\mf{u}}_-)$. Now, since all modules involved have weights of the form $\sum_{i<0}a_i\ep_i+\sum_{r>0}b_r\ep_r$ with $b_r\in\Z_+$, it follows that $T\big{(}\La(\wt{\mf{u}}_-)\otimes\wt{M}\big{)}$ and $\La({\mf{u}}_-)\otimes{M}$ have the same character. Complete reducibility of the $\mf{l}$-modules $T\big{(}\La(\wt{\mf{u}}_-)\otimes\wt{M}\big{)}$ and $\La({\mf{u}}_-)\otimes{M}$ implies that $T\big{(}\La(\wt{\mf{u}}_-)\otimes\wt{M}\big{)}=\La({\mf{u}}_-)\otimes{M}$ as $\mf{l}$-modules. \thmref{matching:modules} completes the proof of the first part of (i). The second part of (i) follows from the definitions of $\wt{d}$ and $d$ (see e.g.~\cite[(4.1)]{CL}). 
\end{proof} By \lemref{boundary} we have the following commutative diagram. \begin{eqnarray}\label{compare-complexes} \CD \cdots @>\wt{d}>> \La^{n+1}(\wt{\mf{u}}_-) \otimes\wt{M} @>\wt{d}>>\La^{n}(\wt{\mf{u}}_-) \otimes\wt{M} @>\wt{d}>>\La^{n-1}(\wt{\mf{u}}_-)\otimes\wt{M}\cdots \\ @. @VVT_{\La^{n+1}(\wt{\mf{u}}_-) \otimes\wt{M}}V @VVT_{\La^{n}(\wt{\mf{u}}_-) \otimes\wt{M}}V @VVT_{\La^{n-1}(\wt{\mf{u}}_-)\otimes\wt{M}}V\\ \cdots @>d>> \La^{n+1}({\mf{u}}_-) \otimes {M} @>{d}>>\La^{n}({\mf{u}}_-) \otimes{M} @>{d}>>\La^{n-1}({\mf{u}}_-)\otimes{M}\cdots \\ \endCD \end{eqnarray} Thus $T$ induces an $\mf{l}$-homomorphism from $H_n(\wt{\mf{u}}_-;\wt{M})$ to $H_n({\mf{u}}_-;{M})$. Similarly, $\ov{T}$ induces an ${\ov{\mf{l}}}$-homomorphism from $H_n(\wt{\mf{u}}_-;\wt{M})$ to $H_n({\ov{\mf{u}}}_-;\ov{M})$. As an ${\wt{\mf{l}}}$-module, $\La(\wt{\mf{u}}_-)$ is a direct sum of $L({\wt{\mf{l}}},\mu^\theta)$, $\mu\in P^+$, each appearing with finite multiplicity (\cite[Section 3.2.3]{CK}). By \cite[Theorem~3.2]{CK}, $\Lambda(\wt{\mf{u}}_-)\otimes\wt{M}$ as an ${\wt{\mf{l}}}$-module is completely reducible. Write $\Lambda(\wt{\mf{u}}_-)\otimes\wt{M}\cong\bigoplus_{\mu\in P^+}L({\wt{\mf{l}}},\mu^\theta)^{m(\mu)}$ as ${\wt{\mf{l}}}$-modules. It follows by Lemmas \ref{lmod2lmod} and \ref{lem:aux3} that $\Lambda({\mf{u}}_-)\otimes{M}\cong \bigoplus_{\mu\in P^+}L({\mf{l}},\mu)^{m(\mu)}$, as ${\mf{l}}$-modules. Similarly, $\Lambda({\ov{\mf{u}}}_-)\otimes\ov{M}\cong\bigoplus_{\mu\in P^+}L({\ov{\mf{l}}},\mu^\natural)^{m(\mu)}$, as ${\ov{\mf{l}}}$-modules. The commutativity of \eqnref{compare-complexes} and \lemref{boundary} now allow us to adapt the proof of \cite[Theorem ~4.4]{CL} to prove the following. \begin{thm}\label{matching:KL} We have for $n\ge 0$ \begin{itemize} \item[(i)] $T(H_n(\wt{\mf{u}}_-;\wt{M}))\cong H_n({\mf{u}}_-;{M})$, as $\mf{l}$-modules. \item[(ii)] $\ov{T}(H_n(\wt{\mf{u}}_-;\wt{M}))\cong H_n({\ov{\mf{u}}}_-;\ov{M})$, as ${\ov{\mf{l}}}$-modules. 
\end{itemize} \end{thm} Setting $\wt{M}=\wt{L}(\la^\theta)$ in \thmref{matching:KL} and using \thmref{matching:modules} we obtain the following. \begin{cor}\label{matching:KL1} For $\la\in P^+$ and $n\ge 0$, we have \begin{itemize} \item[(i)] $T\big{(}H_n(\wt{\mf{u}}_-;\wt{L}(\la^\theta))\big{)}\cong H_n({\mf{u}}_-;L(\la))$, as $\mf{l}$-modules. \item[(ii)] $\ov{T}\big{(}H_n(\wt{\mf{u}}_-;\wt{L}(\la^\theta))\big{)}\cong H_n({\ov{\mf{u}}}_-;\ov{L}(\la^\natural))$, as ${\ov{\mf{l}}}$-modules. \end{itemize} \end{cor} We define parabolic Kazhdan-Lusztig polynomials in the categories $\mc O$, $\ov{\mc O}$ and $\wt{\mc O}$ for $\mu,\la\in P^+$ by letting \begin{align*} {\ell}_{\mu\la}(q) :=\sum_{n=0}^\infty\text{dim}_\C \Big{(}\text{Hom}_{{\mf{l}}}\big{[} L({\mf{l}},\mu), H_n\big{(}{{\mf{u}}}_-;{L}(\la)\big{)} \big{]}\Big{)} (-q)^{-n},\\ \ov{\ell}_{\mu^\natural\la^\natural}(q) :=\sum_{n=0}^\infty\text{dim}_\C\Big{(}\text{Hom}_{\ov{\mf{l}}}\big{[} L(\ov{\mf{l}},\mu^\natural), H_n\big{(}{\ov{\mf{u}}}_-;\ov{L}(\la^\natural)\big{)} \big{]}\Big{)} (-q)^{-n},\\ \wt{\ell}_{\mu^\theta\la^\theta}(q) :=\sum_{n=0}^\infty\text{dim}_\C\Big{(}\text{Hom}_{\wt{\mf{l}}}\big{[} L(\wt{\mf{l}},\mu^\theta), H_n\big{(}{\wt{\mf{u}}}_-;\wt{L}(\la^\theta)\big{)} \big{]}\Big{)} (-q)^{-n}. \end{align*} By Vogan's homological interpretation of the Kazhdan-Lusztig polynomials \cite[Conjecture 3.4]{V} and the Kazhdan-Lusztig conjectures \cite{KL}, proved in \cite{BB, BK}, $\ell_{\mu\la}(q)$ coincides with the original definition and moreover $\ell_{\mu\la}(1) =a_{\mu\la}$ (cf. \thmref{character}). The following reformulation of \corref{matching:KL1} is a generalization of \thmref{character}. \begin{thm} \label{matching:KLpol} For $\la,\mu\in P^+$ we have $ \ell_{\mu\la}(q) =\wt{\ell}_{\mu^\theta\la^\theta}(q) = \ov{\ell}_{\mu^\natural\la^\natural}(q). $ \end{thm} \section{Equivalences of categories}\label{sec:category} \subsection{} The following is standard (see, for example, \cite[Lemma 2.1.10]{Ku}).
\begin{prop}\label{Ku} Let ${M}\in {\mc{O}}$. Then there exists a (possibly infinite) increasing filtration $0={M}_0\subset {M}_1\subset{M}_2\subset\cdots$ of ${\G}$-modules such that \begin{itemize} \item[(i)] $\bigcup_{i\ge 0}{M}_i={M}$, \item[(ii)] ${M}_i/{M}_{i-1}$ is a highest weight module of highest weight $\nu_i$ with $\nu_i\in{P^+}$, for $i\ge 1$, \item[(iii)] the condition $\nu_i-\nu_j\in\sum_{\alpha\in{\Pi}}\Z_+\alpha$ implies that $i<j$, \item[(iv)] for any weight $\mu$ of ${M}$, there exists an $r\in \N$ such that $({M}/{M}_{r})_\mu=0$. \end{itemize} Similar statements hold for $\ov{M}\in {\ov{\mc{O}}}$ and $\wt{M}\in \wt{{\mc{O}}}$. \end{prop} Let $\wt{\mc{O}}^f$ denote the full subcategory of $\wt{\mc{O}}$ consisting of finitely generated ${U}(\wt{\G})$-modules. The categories $\ov{\mc{O}}^f$ and $\mc{O}^f$ are defined in a similar fashion. \propref{Ku} implies the following. \begin{prop}\label{filtration} Let ${M}\in {{\mc{O}}}$. Then ${M}\in {{\mc{O}^f}}$ if and only if there exists a finite increasing filtration $0={M}_0\subset {M}_1\subset{M}_2\subset\cdots\subset{M}_k={M}$ of ${\G}$-modules such that ${M}_i/{M}_{i-1}$ is a highest weight module of highest weight $\nu_i$ with $\nu_i\in{{P^+}}$, for $1\le i\le k$. Similar statements hold for $\ov{M}\in {\ov{\mc{O}}}$ and $\wt{M}\in \wt{{\mc{O}}}$. \end{prop} The following proposition is the converse to \thmref{matching:modules}. \begin{prop}\label{hw-onto} \begin{itemize} \item[(i)] If ${V}(\la)$ is a highest weight ${\G}$-module of highest weight $\la\in{P^+}$, then there is a highest weight ${\wt{\G}}$-module $\wt{V}(\la^\theta)$ of highest weight $\la^\theta$ such that $T(\wt{V}(\la^\theta))={V}(\la)$. \item[(ii)] If $\ov{V}(\la^\natural)$ is a highest weight $\ov{\G}$-module of highest weight $\la^\natural$ with $\la\in P^+$, then there is a ${\wt{\G}}$-module $\wt{V}(\la^\theta)$ of highest weight $\la^\theta$ such that $\ov{T}(\wt{V}(\la^\theta))=\ov{V}(\la^\natural)$. 
\end{itemize} \end{prop} \begin{proof} We shall only prove (i), as (ii) is similar. We let $W$ be the kernel of the natural projection from $\Delta(\la)$ to ${V}(\la)$. Now \thmref{matching:modules} says that $T(\wt{\Delta}(\la^\theta))=\Delta(\la)$. Thus, by the exactness of the functor $T$, it suffices to prove that $W$ lifts to a submodule $\wt{W}$ of $\wt{\Delta}(\la^\theta)$ such that $T(\wt{W})=W$. There is an increasing filtration $0=W_0\subset W_1\subset W_2\subset\cdots$ of $\G$-modules for $W$ satisfying the properties of \propref{Ku}. For each $i>0$, let $v_i$ be a weight vector in $W_i$ such that $v_i+W_{i-1}$ is a non-zero highest weight vector of $W_i/W_{i-1}$. Observe that $\wt{\Delta}(\la^\theta)=\bigoplus_{\mu\in P^+}L({\wt{\mf{l}}},\mu^\theta)^{m(\mu)}$ and ${\Delta}(\la)=\bigoplus_{\mu\in P^+}L({{\mf{l}}},\mu)^{m(\mu)}$ are completely reducible $\wt{\mf{l}}$- and $\mf{l}$-modules, respectively. Then, for each $i>0$, there is a highest weight vector $\wt{v}_i$ of the $\wt{\mf{l}}$-module $U(\wt{\mf{l}})v_i$ with respect to the Borel subalgebra $\wt{\mf{b}}\cap\wt{\mf{l}}$. Let $\wt{W}_i$ be the submodule of $\wt{\Delta}(\la^\theta)$ generated by $\wt{v}_1, \wt{v}_2,\ldots, \wt{v}_i$ and set $\wt{W}_0=0$. It is easy to see that $\wt{v}_i$ is a highest weight vector of the $\wt{\G}$-module $\wt{W}_i/\wt{W}_{i-1}$. Let $\wt{W}=\bigcup_{i\ge 1}\wt{W}_i$. It is clear that $T(\wt{W}_i/\wt{W}_{i-1})\cong {W}_i/{W}_{i-1}$ for all $i$. This implies $T(\wt{W}_i)=W_i$ for all $i$ and hence $T(\wt{W})=W$. \end{proof} \subsection{The categories $\wt{\mc{O}}^{f,\bar{0}}$ and $\ov{\mc{O}}^{f,\bar{0}}$} \label{sec:51} Define an equivalence relation $\sim$ on $\wt{\h}^*$ by letting $\mu \sim \nu$ if and only if $\mu -\nu$ lies in the root lattice $\Z \wt \Phi$ of $\DG$. For each such equivalence class $[\mu]$, fix a representative $[\mu]^o \in \wt{\h}^*$ and declare $[\mu]^o$ to have $\Z_2$-grading $\bar 0$.
For $\epsilon=\bar{0},\bar{1}$, set (cf.~\cite[\S4-e]{B} and \cite[Section~2.5]{CL} for type $A$) \begin{eqnarray*} {\wt{\h}^*}_\epsilon &=& \Big \{\mu\in \wt{\h}^*\mid \sum_{r\in {1/ 2}+\Z_+}(\mu-[\mu]^o) (\widehat{E}_{r})\equiv {\epsilon} \,\,(\text{mod }2) \Big \}, \text{ for } \xx =\mf{b, c, d}, \\ {\wt{\h}^*}_\epsilon &=& \Big \{\mu\in \wt{\h}^*\mid \sum_{i=1}^m (\mu-[\mu]^o)(\widehat{E}_{-i}) + \sum_{r\in \N}(\mu-[\mu]^o)(\widehat{E}_{r})\equiv {\epsilon} \,\,(\text{mod }2) \Big \}, \text{ for } \xx =\mf b^\bullet. \end{eqnarray*} Recall that $\wt{V} \in \wt{\mc O}$ is a semisimple $\wt{\h}$-module with $\wt{V}=\bigoplus_{\gamma\in\wt{\h}^*}\wt{V}_\gamma$. Then $\wt{V}$ acquires a natural $\Z_2$-grading $\wt{V}=\wt{V}_{\bar{0}}\bigoplus \wt{V}_{\bar{1}}$ given by \begin{equation}\label{wt-Z2-gradation} \wt{V}_{\ep} :=\bigoplus_{\mu\in{\wt{\h}^*}_{\ep}}\wt{V}_{\mu}, \qquad \ep =\bar{0},\bar{1}, \end{equation} which is compatible with the $\Z_2$-grading on $\DG$. We define $\wt{\mc{O}}^{\bar{0}}$ and $\wt{\mc{O}}^{f,\bar{0}}$ to be the full subcategories of $\wt{\mc{O}}$ and $\wt{\mc{O}}^f$, respectively, consisting of objects with $\Z_2$-gradation given by \eqnref{wt-Z2-gradation}. Note that the morphisms in $\wt{\mc{O}}^{\bar{0}}$ and $\wt{\mc{O}}^{f,\bar{0}}$ are of degree $\bar{0}$. For $\wt{M}\in\wt{\mc{O}}$, let $\widehat{\wt{M}}\in\wt{\mc{O}}^{\bar{0}}$ denote the $\wt{\G}$-module $\wt{M}$ equipped with the $\Z_2$-gradation given by \eqnref{wt-Z2-gradation}. It is clear that $\widehat{\wt{M}}$ is isomorphic to $\wt{M}$ in $\wt{\mc{O}}$. Thus $\wt{\mc{O}}$ and $\wt{\mc{O}}^{\bar{0}}$ have isomorphic skeletons and hence they are equivalent categories. Similarly, $\wt{\mc{O}}^f$ and $\wt{\mc{O}}^{f,\bar{0}}$ are equivalent categories. 
Analogously define ${\mc{O}}^{\bar{0}}$, ${\mc{O}}^{f,\bar{0}}$, $\ov{\mc{O}}^{\bar{0}}$ and $\ov{\mc{O}}^{f,\bar{0}}$ to be the respective full subcategories of ${\mc{O}}$, ${\mc{O}}^f$, $\ov{\mc{O}}$ and $\ov{\mc{O}}^f$ consisting of objects with $\Z_2$-gradation given by \eqnref{wt-Z2-gradation}. Similarly, ${\mc{O}}^{\bar{0}}\cong{\mc{O}}$, $\ov{\mc{O}}^{\bar{0}}\cong\ov{\mc{O}}$, and also ${\mc{O}}^{f,\bar{0}}\cong{\mc{O}}^f$, $\ov{\mc{O}}^{f,\bar{0}}\cong\ov{\mc{O}}^f$. (In case of $\mc O$ and $\mc O^f$, these remarks are trivial except for the type $\mf b^\bullet$ which corresponds to a Lie superalgebra). \subsection{Equivalence of the categories} Recall the functors $T$ and $\ov{T}$ from \secref{Tfunctors}. The following is the main result of this section. \begin{thm}\label{thm:equivalence} \begin{itemize} \item[(i)] $T:\wt{\mc{O}}\rightarrow{\mc{O}}$ is an equivalence of categories. \item[(ii)] $\ov{T}:\wt{\mc{O}}\rightarrow\ov{\mc{O}}$ is an equivalence of categories. \end{itemize} Hence, the categories $\mc{O}$ and $\ov{\mc{O}}$ are equivalent. \end{thm} Since $\ov{\mc{O}}^{\bar{0}}\cong \ov{\mc{O}}$ and $\wt{\mc{O}}^{\bar{0}}\cong \wt{\mc{O}}$ it is enough to prove \thmref{thm:equivalence} for $\ov{\mc{O}}^{\bar{0}}$ and $\wt{\mc{O}}^{\bar{0}}$. In order to keep notation simple we will from now on drop the superscript $\bar{0}$ and use $\ov{\mc{O}}$, $\wt{\mc{O}}$, $\ov{\mc{O}}^f$ and $\wt{\mc{O}}^f$ to denote the respective categories $\ov{\mc{O}}^{\bar{0}}$, $\wt{\mc{O}}^{\bar{0}}$, $\ov{\mc{O}}^{f,\bar{0}}$ and $\wt{\mc{O}}^{f,\bar{0}}$ for the remainder of Section \ref{sec:category}. Henceforth, when we write $\wt{\Delta}(\la^\theta), \wt{L}(\la^\theta)\in\wt{\mc{O}}^f$, $\la\in P^+$, we will mean the corresponding modules equipped with the $\Z_2$-gradation \eqnref{wt-Z2-gradation}. Similar convention applies to $\ov{\Delta}(\la^\natural)$ and $\ov{L}(\la^\natural)$. 
For ${M},{N}\in{\mc{O}}$ and $i\in \N$ the $i$th extension group ${\rm Ext}_{{\mc{O}}}^i({M},{N})$ can be understood in the sense of Baer-Yoneda (see e.g.~\cite[Chapter VII]{M}) and ${\rm Ext}_{{\mc{O}}}^0({M},{N}):={\rm Hom}_{{\mc{O}}}({M},{N})$. Extensions in $\ov{\mc{O}}$ and $\wt{\mc{O}}$ are interpreted in a similar way. From this viewpoint the exact functors $T$ and $\ov{T}$ induce natural maps on extensions by taking the projection of the corresponding exact sequences. \begin{thm}\label{thm:equivalence-f.g.} We have the following. \begin{itemize} \item[(i)] $T:\wt{\mc{O}}^f\rightarrow{\mc{O}}^f$ is an equivalence of categories. \item[(ii)] $\ov{T}:\wt{\mc{O}}^f\rightarrow\ov{\mc{O}}^f$ is an equivalence of categories. \item[(iii)] The categories ${\mc{O}}^f$ and $\ov{\mc{O}}^f$ are equivalent. \end{itemize} \end{thm} \thmref{thm:equivalence-f.g.} can be proved following a strategy similar to the one used to prove \cite[Theorem 5.1]{CL}. To avoid repeating similar arguments at great length we will just point out the main differences between the proofs. In \cite[Section 5]{CL} the main point is to prove that the functor $T$ induces isomorphisms $\text{Hom}_{\wt{\mc O}}(\wt{M},\wt{N})\cong \text{Hom}_{{\mc O}}({M},{N})$ and $\text{Ext}^1_{\wt{\mc O}}(\wt{L},\wt{N})\cong \text{Ext}^1_{{\mc O}}({L},{N})$, for $\wt{L}$ irreducible, and $\wt{M},\wt{N}$ having finite composition series. From this \cite[Theorem 5.1]{CL} is derived easily. As the isomorphism of the $\text{Hom}$ spaces implies the isomorphism of the $\text{Ext}^1$ spaces \cite[Lemma 5.12]{CL}, we are reduced to establishing the isomorphism of the $\text{Hom}$ spaces. To prove the isomorphism of the $\text{Hom}$ spaces therein, the idea is to prove this isomorphism first for $\wt{M}$ irreducible, and then to use induction on the length of the composition series of $\wt{M}$ to establish the general case.
Now, thanks to \propref{hw-onto}, we can proceed similarly to \cite[Section 5]{CL} to prove \thmref{thm:equivalence-f.g.}. For this purpose we replace $\wt{L}$ by a highest weight module, and $\wt{M}$ and $\wt{N}$ by finitely generated modules. As finitely generated modules possess finite filtrations whose subquotients are highest weight modules (cf. \propref{filtration}), we can now borrow the same type of induction arguments from \cite{CL}, now inducting on the length of such a filtration instead of the length of a composition series. Therefore, the proof of the isomorphisms is again reduced to a special case, namely when $\wt{M}$ is a highest weight module. This case can then be proved using arguments similar to the ones given in the proof of \cite[Lemma~5.8]{CL}. The case of $\ov{T}$ is completely analogous. Having \thmref{thm:equivalence-f.g.} at our disposal we can now prove \thmref{thm:equivalence}. \begin{proof}[Proof of \thmref{thm:equivalence}] Since the proofs of (i) and (ii) are similar, we shall only prove (i). The last assertion follows from (i) and (ii). For every $M\in\mc{O}$, there is an increasing filtration $0=M_0\subset M_1\subset M_2\subset\cdots$ of $\G$-modules for $M$ with $M_i\in{\mc{O}}^f$ satisfying the properties of \propref{Ku}. The filtration $\{M_i\}$ of $M$ lifts to a filtration $\{\wt{M}_i\}$ with $\wt{M}_i \in\wt{\mc{O}}^f$ such that $T(\wt{M}_i)\cong M_i$ by \thmref{thm:equivalence-f.g.}. It is clear that we have $\wt{M}:=\bigcup_{i\ge 0}\wt{M}_i\in\wt{\mc{O}}$ and $T(\wt{M})\cong M$. It is well known that a full and faithful functor $F: \mc{C}\rightarrow \mc{C'}$, satisfying the property that for every $M'\in \mc{C}'$ there exists $M\in \mc{C}$ with $F(M)\cong M'$, is an equivalence of categories (see e.g.~\cite[Proposition 1.5.2]{P}). Therefore it remains to show that $T$ is full and faithful.
By \propref{Ku}, for $\wt{M} \in\wt{\mc{O}}$, we may choose an increasing filtration of $\DG$-modules $0=\wt{M}_0\subset \wt{M}_1\subset\wt{M}_2\subset\cdots$ such that $\bigcup_{i\ge 0}\wt{M}_i=\wt{M}$ and $\wt{M}_i/\wt{M}_{i-1}$ is a highest weight module of highest weight $\nu^\theta_i$ with $\nu_i\in{P^+}$, for $i\ge 1$. Then the direct limit of $\wt{M}_i$ is $\underrightarrow{\lim}\,\wt{M}_i \cong \wt{M}$ and ${\rm Hom}_{\wt{\mc{O}}}(\wt{M},\wt{N})\cong\underleftarrow{\lim}\, {\rm Hom}_{\wt{\mc{O}}}(\wt{M}_i,\wt{N})$ for every $\wt{N} \in\wt{\mc{O}}$. Similarly we have $\underrightarrow{\lim}\,{M}_i \cong {M}$ and ${\rm Hom}_{{\mc{O}}}({M},{N})\cong\underleftarrow{\lim}\,{\rm Hom}_{{\mc{O}}}({M}_i,{N})$ for $N=T(\wt{N})$. Furthermore, we have the following commutative diagram (where $\varphi =\underleftarrow{\lim} T_{\wt{M}_i,\wt{N}}$): \begin{eqnarray*} \CD {\rm Hom}_{\wt{\mc{O}}}\big{(}\wt{M},\wt{N}\big{)} @>\cong>>\underleftarrow{\lim}\, {\rm Hom}_{\wt{\mc{O}}}\big{(}\wt{M}_i,\wt{N}\big{)} \\ @VVT_{\wt{M},\wt{N}}V @VV\varphi V \\ {\rm Hom}_{{\mc{O}}}\big{(}{M},N\big{)} @>\cong>>\underleftarrow{\lim}\,{\rm Hom}_{{\mc{O}}}\big{(}{M}_i,N\big{)}\\ \endCD \end{eqnarray*} Using a similar argument as the one given in \cite[Lemma 5.10]{CL}, where we replace the induction on the length of composition series therein by induction on the length of finite increasing filtration $0=\wt{M}_0\subset \wt{M}_1\subset\wt{M}_2\subset\cdots\subset\wt{M}_i$, we show that $T_{\wt{M}_i,\wt{N}}:{\rm Hom}_{\wt{\mc{O}}}(\wt{M}_i,\wt{N})\rightarrow {\rm Hom}_{{\mc{O}}}({M}_i,N)$ are isomorphisms for each $i$. Therefore $\varphi$ is an isomorphism and hence $T_{\wt{M},\wt{N}}$ is an isomorphism. This completes the proof. \end{proof} \section{Finite dimensional representations}\label{finite:dim:repn} The main purpose of this section is to determine the extremal weights of finite dimensional irreducible modules over the ortho-symplectic Lie superalgebras with integral highest weights. 
It follows that all such finite dimensional irreducible modules for the ortho-symplectic Lie superalgebras are in the category $\ov{\mc O}_n$. We note that the finite dimensional irreducible modules of non-integral highest weights are typical and so their characters are known \cite[Theorem 1]{K2}. \subsection{Extremal weights for $\osp(2m+1|2n)$} \label{sec:type B} Let us denote the weights of the natural $\osp(2m+1|2n)$-module $\C^{2n|2m+1}$ by $\pm \delta_i, 0, \pm \vep_j$ for $1\le i\le n, 1\le j \le m$. We call a weight {\em integral} if it lies in the $\Z$-span of the $\delta_i$'s and $\vep_j$'s. The {\em standard} Borel subalgebra $\mc B^{\text{st}}$ of $\osp(2m+1|2n)$ is the one associated to the following set of simple roots \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(12.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(15.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(21.9,2){\makebox(0,0)[c]{$\bigcirc$}} \put(24.2,2){\makebox(0,0)[c]{$\bigcirc$}} \put(6.4,2){\line(1,0){1.55}} \put(8.82,2){\line(1,0){0.8}} \put(11.2,2){\line(1,0){1.2}} \put(13.28,2){\line(1,0){1.45}} \put(15.7,2){\line(1,0){1.25}} \put(17.8,2){\line(1,0){1.0}} \put(20.3,2){\line(1,0){1.2}} \put(22.3,1.8){$\Longrightarrow$} \put(5.6,1){\makebox(0,0)[c]{\tiny $\delta_1-\delta_2$}} \put(8.4,1){\makebox(0,0)[c]{\tiny $\delta_2-\delta_3$}} \put(12.8,1){\makebox(0,0)[c]{\tiny $\delta_{n-1}-\delta_n$}} \put(15.15,3){\makebox(0,0)[c]{\tiny $\delta_n-\vep_{1}$}} \put(17.4,1){\makebox(0,0)[c]{\tiny $\vep_{1}-\vep_{2}$}} \put(21.8,1){\makebox(0,0)[c]{\tiny $\vep_{m-1} -\vep_{m}$}} \put(24.5,1){\makebox(0,0)[c]{\tiny $\vep_{m}$}} \end{picture} \end{center} An arbitrary Dynkin diagram for $\osp(2m+1|2n)$ always has a type $A$ end while the other end is a short
(even or odd) root. Starting from the type $A$ end, the simple roots for a Borel subalgebra $\mc B$ of $\osp(2m+1|2n)$ give rise to a sequence of $d_1$ $\delta$'s, $e_1$ $\vep$'s, $d_2$ $\delta$'s, $e_2$ $\vep$'s, $\ldots, d_r$ $\delta$'s, $e_r$ $\vep$'s and sequences of $\pm 1$'s: $(\xi_i)_{1\le i \le n} \cup (\eta_j)_{1\le j \le m}$ (all the $d_i$ and $e_j$ are positive except possibly $d_1 =0$ or $e_r =0$). Note that a Dynkin diagram contains a short {\em odd} root exactly when $e_r =0$. Let $$ \texttt d_u =\sum_{a=1}^u d_a, \quad \texttt e_u =\sum_{a=1}^u e_a $$ for $u=1, \ldots, r$, and let $\texttt d_0 =\texttt e_0 =0$. Note $\texttt d_r=n, \texttt e_r=m$. More precisely, there exist a permutation $s$ of $\{1, \ldots, n\}$ and a permutation $t$ of $\{1, \ldots, m\}$, so that the simple roots for $\mc B$ are given by \begin{align*} \xi_i \delta_{s(i)} -\xi_{i+1} \delta_{s(i+1)},& \quad 1\le i \le n,\; i \not \in \{\texttt d_u | u=1,\ldots, r\}; \\ \eta_j \vep_{t(j)} -\eta_{j+1} \vep_{t(j+1)},& \quad 1 \le j \le m, \; j \not \in \{\texttt e_u | u=1,\ldots, r\}; \\ \xi_{\texttt d_u} \delta_{s({\texttt d_u})} - \eta_{1+\texttt e_{u-1}} \vep_{t({1+\texttt e_{u-1}})}, &\quad \text{for } 1\le u \le r \text{ if } e_r>0\;\; (\text{or } 1\le u< r \text{ if } e_r=0); \\ \eta_{\texttt e_u} \vep_{t({\texttt e_u})} -\xi_{1+\texttt d_u} \delta_{s({1+\texttt d_u})}, & \quad u=1,\ldots, r-1; \\ \eta_{\texttt e_r} \vep_{t({\texttt e_r})}, &\quad \text{ if } e_r>0 \quad \quad (\text{or } \xi_{\texttt d_r} \delta_{s({\texttt d_r})} \text{ if } e_r=0). \end{align*} Recall a partition $\la=(\la_1,\la_2,\ldots)$ is called an $(n|m)$-{\em hook partition}, if $\la_{n+1}\le m$ (cf. \cite{BR, S}). For such a $\la$, we define $$ \la^\# =(\la_1,\ldots, \la_n, \nu_1, \ldots, \nu_m), $$ where $(\nu_1, \ldots, \nu_m)$ is the conjugated partition of $(\la_{n+1}, \la_{n+2}, \ldots)$. 
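The passage from $\la$ to $\la^\#$ is purely combinatorial and can be sketched in a few lines of Python (an illustration of ours, not part of the paper; the function names are hypothetical): keep the first $n$ rows of $\la$ and append the conjugated partition of the remaining rows, padded to length $m$.

```python
def conjugate(la):
    """Conjugated (transposed) partition of a weakly decreasing list of parts."""
    return [sum(1 for p in la if p >= j) for j in range(1, (la[0] if la else 0) + 1)]

def hook_sharp(la, n, m):
    """la^# for an (n|m)-hook partition la (i.e. la_{n+1} <= m):
    the first n rows of la, followed by the conjugate of (la_{n+1}, la_{n+2}, ...),
    padded with zeros to n + m entries."""
    la = list(la)
    assert len(la) <= n or la[n] <= m, "not an (n|m)-hook partition"
    head = (la + [0] * n)[:n]
    nu = conjugate(la[n:])
    return head + nu + [0] * (m - len(nu))
```

For instance, with $n=2$, $m=3$ the hook partition $\la=(5,4,3,1)$ gives $\la^\#=(5,4,2,1,1)$, and the $(5|4)$-hook partition $\la=(14,11,8,8,7,4,3,2)$ appearing below gives $\la^\#=(14,11,8,8,7,3,3,2,1)$.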
\begin{lem} \cite{K2} \label{kac hwt} The irreducible $\osp(2m+1|2n)$-module of integral highest weight $\sum_{i=1}^n\la_i\delta_i + \sum_{j=1}^m\ov{\la}_j\vep_j$ with respect to the standard Borel subalgebra is finite dimensional if and only if $(\la_1,\ldots,\la_n,\ov{\la}_1,\ldots,\ov{\la}_m)=\la^\#$ for some $(n|m)$-hook partition $\la$. \end{lem} We denote by $L'(\osp(2m+1|2n),\la^\#)$ these irreducible $\osp(2m+1|2n)$-modules with respect to the standard Borel subalgebra, to distinguish them from the earlier notation used for irreducible modules with respect to a different Borel subalgebra. Actually, the finite dimensionality criterion was given in \cite{K2} in terms of Dynkin labels, which is known to be equivalent to the more natural labeling above in terms of $(n|m)$-hook partitions (cf. \cite{SW}). The same remark applies to the finite dimensionality criterion for $\osp(2m|2n)$ in \lemref{hwt:standard:spo} below. \begin{example} \label{diag 9|10} Suppose that the Dynkin diagram corresponding to a Borel subalgebra of $\osp(9|10)$ is as follows: \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(10.5,1.95){\makebox(0,0)[c]{$\bigcirc$}} \put(12.85,2){\makebox(0,0)[c]{$\bigotimes$}} \put(15.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(17.4,2){\makebox(0,0)[c]{$\bigotimes$}} \put(19.6,1.95){\makebox(0,0)[c]{$\bigcirc$}} \put(21.9,2){\makebox(0,0)[c]{$\bigotimes$}} \put(24.2,2){\circle*{0.9}} \put(6.4,2){\line(1,0){1.55}} \put(8.82,2){\line(1,0){1.3}} \put(10.8,2){\line(1,0){1.6}} \put(13.28,2){\line(1,0){1.45}} \put(15.7,2){\line(1,0){1.25}} \put(17.8,2){\line(1,0){1.4}} \put(19.9,2){\line(1,0){1.5}} \put(22.3,1.8){$\Longrightarrow$} \put(5.8,3){\makebox(0,0)[c]{\tiny $\delta_2+\delta_1$}} \put(8.4,1){\makebox(0,0)[c]{\tiny $-\delta_1+\vep_1$}} \put(10.4,3){\makebox(0,0)[c]{\tiny $-\vep_1-\vep_2$}} \put(12.8,1){\makebox(0,0)[c]{\tiny $\vep_2-\delta_3$}}
\put(15.15,3){\makebox(0,0)[c]{\tiny $\delta_3-\delta_4$}} \put(17.4,1){\makebox(0,0)[c]{\tiny $\delta_4-\vep_4$}} \put(19.4,3){\makebox(0,0)[c]{\tiny $\vep_4-\vep_3$}} \put(21.8,1){\makebox(0,0)[c]{\tiny $\vep_3+\delta_5$}} \put(24,1){\makebox(0,0)[c]{\tiny $-\delta_5$}} \end{picture} \end{center} We read off from the above a signed sequence with indices $\delta_2(-\delta_1)(-\vep_1)\vep_2\delta_3\delta_4\vep_4\vep_3(-\delta_5)$. In particular, we obtain a sequence $\delta\delta\vep\vep\delta\delta\vep\vep\delta$ by ignoring the signs and indices. In this case, $d_1=d_2=2,d_3=1$, and $e_1=e_2=2$. Furthermore, the sequences $(\xi_i)_{1\le i \le 5}$ and $(\eta_j)_{1\le j\le 4}$ are $(1,-1,1,1,-1)$ and $(-1,1,1,1)$, respectively. \end{example} Define the {\em block Frobenius coordinates} $(p_i|q_j)$ of an $(n|m)$-hook partition $\la$ associated to $\mc B$ as follows. For $1\le i \le n, 1\le j \le m$, let \begin{eqnarray*} p_i &= \max \{\la_i - \texttt e_u, 0\}, &\text{ if }\texttt d_u < i \leq \texttt d_{u+1} \text{ for some } 0\le u\le r-1, \\ q_j &= \max \{\la_j' - \texttt d_{u+1}, 0\}, & \text{ if } \texttt e_u < j \leq \texttt e_{u+1} \text{ for some } 0\le u\le r-1. \end{eqnarray*} It is elementary to read off the block Frobenius coordinates of $\la$ from the Young diagram of $\la$ in general, as illustrated by the next example. \begin{example}\label{frob:block} Consider the $(5|4)$-hook partition $\la =(14,11,8,8,7,4,3,2).$ The block Frobenius coordinates associated with $\mc B$ from Example \ref{diag 9|10} for $\la$ are: $$p_1=14, p_2=11, p_3=p_4=6, p_5=3;\quad q_1=q_2=6, q_3=3, q_4=2.
$$ These are read off from the Young diagram of $\la$ by following the $\vep\delta$ sequence $\delta\delta\vep\vep\delta\delta\vep\vep\delta$ as follows: \begin{center} \hskip 1cm \setlength{\unitlength}{0.25in} \begin{picture}(15,9) \put(0,6){\line(0,1){2}} \put(0,8){\line(1,0){14}} \put(14,8){\line(0,-1){1}} \put(14,7){\line(-1,0){3}} \put(11,7){\line(0,-1){1}} \put(11,6){\line(-1,0){11}} \put(0,6){\line(0,-1){6}} \put(0,0){\line(1,0){2}} \put(2,0){\line(0,1){6}} \put(2,4){\line(1,0){6}} \put(8,4){\line(0,1){2}} \put(2,1){\line(1,0){1}} \put(3,1){\line(0,1){1}} \put(3,2){\line(1,0){1}} \put(4,2){\line(0,1){2}} \put(4,3){\line(1,0){3}} \put(7,3){\line(0,1){1}} \put(5.5,7.5){\makebox(0,0)[c]{$\leftarrow p_1\rightarrow$}} \put(5.5,6.5){\makebox(0,0)[c]{$\leftarrow p_2\rightarrow$}} \put(5.5,5.5){\makebox(0,0)[c]{$\leftarrow p_3\rightarrow$}} \put(5.5,4.5){\makebox(0,0)[c]{$\leftarrow p_4\rightarrow$}} \put(5.5,3.5){\makebox(0,0)[c]{$\leftarrow p_5\rightarrow$}} \put(0.5,3.4){\makebox(0,0)[c]{$\uparrow$}} \put(0.5,2){\makebox(0,0)[c]{$\downarrow$}} \put(0.5,2.7){\makebox(0,0)[c]{$q_1$}} \put(1.5,3.4){\makebox(0,0)[c]{$\uparrow$}} \put(1.5,2){\makebox(0,0)[c]{$\downarrow$}} \put(2.5,3.4){\makebox(0,0)[c]{$\uparrow$}} \put(2.5,2){\makebox(0,0)[c]{$\downarrow$}} \put(3.5,3.4){\makebox(0,0)[c]{$\uparrow$}} \put(3.5,2.3){\makebox(0,0)[c]{$\downarrow$}} \put(1.5,2.7){\makebox(0,0)[c]{$q_2$}} \put(2.5,2.7){\makebox(0,0)[c]{$q_3$}} \put(3.5,2.8){\makebox(0,0)[c]{$q_4$}} \put(4,3){\linethickness{1pt}\line(0,-1){3}} \put(4,3){\linethickness{1pt}\line(1,0){10}} \put(-0.1,3){\linethickness{1pt}\line(1,0){0.2}} \put(4,7.9){\linethickness{1pt}\line(0,1){0.2}} \put(-1.7,2.8){$n=5$} \put(3.2,8.3){$m=4$} \end{picture} \vskip 0.5cm \end{center} \end{example} \begin{thm} \label{hwt change} Let $\la$ be an $(n|m)$-hook partition. Let $\mc B$ be a Borel subalgebra of $\osp(2m+1|2n)$ and retain the above notation. 
Then, the $\mc B$-highest weight of the simple $\osp(2m+1|2n)$-module $L'(\osp(2m+1|2n),\la^\#)$ is $$\la^{\mc B} :=\sum_{i=1}^n \xi_i p_i \delta_{s(i)} +\sum_{j=1}^m \eta_j q_j \vep_{t(j)}.$$ \end{thm} \begin{proof} Let us consider an odd reflection that changes a Borel subalgebra $\mc B_1$ to $\mc B_2$. Assume the theorem holds for $\mc B_1$. We observe by Lemma~\ref{hwt odd} that the statement of the theorem for $\mc B_2$ follows from the validity of the theorem for $\mc B_1$. The statement of the theorem is clearly consistent with a change of Borel subalgebras induced from a real reflection, and all Borel subalgebras are linked by a sequence of real and odd reflections. Hence, once we know the theorem holds for one particular Borel subalgebra, it holds for all. We finally note that the theorem holds for the standard Borel subalgebra $\mc B^{\text{st}}$, which corresponds to the sequence of $n$ $\delta$'s followed by $m$ $\vep$'s with all signs $\xi_i$ and $\eta_j$ being positive, i.e., $\la^{\mc B^{\text{st}}} =\la^\#$. \end{proof} \begin{example} With respect to the Borel $\mc B$ of $\osp(9|10)$ as in Example~\ref{diag 9|10}, the $\mc B$-extremal weight of $L'(\osp(9|10),\la^\#)$ for $\la$ as in Example~\ref{frob:block} equals \begin{align*} -11\delta_1+14\delta_2+6\delta_3+6\delta_4-3\delta_5 -6\vep_1+6\vep_2+2\vep_3+3\vep_4. \end{align*} \end{example} \begin{cor} Every finite dimensional irreducible $\osp(2m+1|2n)$-module of integral highest weight is self-contragradient. \end{cor} \begin{proof} Denote by $\mc B^{\text{op}}$ the opposite Borel to the standard one $\mc B^{\text{st}}$. It follows by Theorem~\ref{hwt change} that the $\mc B^{\text{op}}$-extremal weight of the module $L'(\osp(2m+1|2n) ,\la^\#)$ is $-\la^\#$.
\end{proof} Recall that the following Dynkin diagram of $\osp(2m+1|2n)$ and of its (trivial) central extension $\SG^\mf{b}_n$ has been in use from the point of view of super duality and it is opposite to the one associated to the standard Borel $\mc B^{\text{st}}$. \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4) \put(5.6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8,2){\makebox(0,0)[c]{$\bigcirc$}} \put(10.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(14.85,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.25,2){\makebox(0,0)[c]{$\bigcirc$}} \put(19.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(23.5,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.35,2){\line(1,0){1.5}} \put(10.82,2){\line(1,0){0.8}} \put(13.2,2){\line(1,0){1.2}} \put(15.28,2){\line(1,0){1.45}} \put(17.7,2){\line(1,0){1.25}} \put(19.81,2){\line(1,0){0.9}} \put(22,2){\line(1,0){1}} \put(6.8,2){\makebox(0,0)[c]{$\Longleftarrow$}} \put(12.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(21.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(5.4,1){\makebox(0,0)[c]{\tiny $-\epsilon_{-m}$}} \put(7.8,1){\makebox(0,0)[c]{\tiny $\alpha_{-m}$}} \put(10.4,1){\makebox(0,0)[c]{\tiny $\alpha_{-m+1}$}} \put(14.7,1){\makebox(0,0)[c]{\tiny $\alpha_{-1}$}} \put(17.15,1){\makebox(0,0)[c]{\tiny $\beta_{1/2}$}} \put(19.5,1){\makebox(0,0)[c]{\tiny $\beta_{3/2}$}} \put(23.5,1){\makebox(0,0)[c]{\tiny $\beta_{n-3/2}$}} \put(0,1.2){{\ovalBox(1.8,1.4){${\SG}^\mf{b}_n$}}} \end{picture} \end{center} Setting $\vep_{j} =\ep_{-m+j-1}$ and $\delta_i =\ep_{n-i+1/2}$ to match the notation in this section with the one used earlier, we have the following immediate corollary of \lemref{kac hwt} and \thmref{hwt change}. 
\begin{cor}\label{aux:finite1} An irreducible integral highest weight $\osp(2m+1|2n)$-module with respect to the Borel subalgebra corresponding to \makebox(22,0){$\oval(22,15)$}\makebox(-22,8){${\SG}_n^\mf{b}$} is finite dimensional if and only if the highest weight is of the form \begin{equation}\label{B-type:finite:hw2} -\sum_{j=1}^{m}\max\{\la'_{j}-n, 0\} \, \epsilon_{-j} -\sum_{i=1}^n\la_{n-i+1}\epsilon_{i-1/2}, \end{equation} where $\la=(\la_1,\la_2,\ldots)$ is an $(n|m)$-hook partition. \end{cor} \subsection{Extremal weights for $\osp(2m|2n)$} Let us denote the weights of the natural $\osp(2m|2n)$-module $\C^{2n|2m}$ by $\pm \delta_i, \pm \vep_j$ for $1\le i\le n, 1\le j \le m$. The {\em standard} Borel subalgebra $\mc B^{\text{st}}$ of $\osp(2m|2n)$ is the one associated to the following set of simple roots \begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,4.5) \put(6,2){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(12.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(15.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(22,2){\makebox(0,0)[c]{$\bigcirc$}} \put(24,3.8){\makebox(0,0)[c]{$\bigcirc$}} \put(24,.3){\makebox(0,0)[c]{$\bigcirc$}} \put(6.4,2){\line(1,0){1.55}} \put(8.82,2){\line(1,0){0.8}} \put(11.2,2){\line(1,0){1.2}} \put(13.28,2){\line(1,0){1.45}} \put(15.7,2){\line(1,0){1.25}} \put(17.8,2){\line(1,0){0.9}} \put(20.1,2){\line(1,0){1.4}} \put(22.4,2){\line(1,1){1.4}} \put(22.4,2){\line(1,-1){1.4}} \put(10.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(19.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(6.1,1){\makebox(0,0)[c]{\tiny $\delta_1-\delta_2$}} \put(8.9,1){\makebox(0,0)[c]{\tiny $\delta_2-\delta_3$}} \put(12.8,1){\makebox(0,0)[c]{\tiny $\delta_{n-1}-\delta_{n}$}} \put(15.15,3){\makebox(0,0)[c]{\tiny $\delta_n-\vep_1$}} \put(17.4,1){\makebox(0,0)[c]{\tiny $\vep_1-\vep_2$}} \put(21.3,1){\makebox(0,0)[c]{\tiny $\vep_{m-2}-\vep_{m-1}$}} 
\put(26.5,3.8){\makebox(0,0)[c]{\tiny $\vep_{m-1}-\vep_{m}$}} \put(26.5,0.3){\makebox(0,0)[c]{\tiny $\vep_{m-1}+\vep_{m}$}} \end{picture} \end{center} There are two kinds of Dynkin diagrams and corresponding Borel subalgebras for $\osp(2m|2n)$: \begin{enumerate} \item[(i)] Diagrams of $|$-shape, i.e., Dynkin diagrams with a long simple root $\pm 2\delta_i$. \item[(ii)] Diagrams of {\Large $\Ydown$}-shape, i.e., Dynkin diagrams with no long simple root. \end{enumerate} We will follow the notation for $\osp(2m+1|2n)$ in Subsection~\ref{sec:type B} for sets of simple roots in terms of signed $\vep\delta$ sequences, so we have permutations $s, t$, and signs $\xi_i, \eta_j$. We fix the ambiguity in the choice of the sign $\eta_m$ associated to a Borel $\mc B$ of {\Large $\Ydown$}-shape by demanding that the total number of negative signs among the $\eta_j$ $(1\le j \le m)$ be even. Let $\la$ be an $(n|m)$-hook partition, and let the block Frobenius coordinates $(p_i|q_j)$ be as defined in Subsection~\ref{sec:type B}. Introduce the following weights: \begin{eqnarray*} \la^{\mc B} &:=& \sum_{i=1}^n \xi_i p_i \delta_{s(i)} +\sum_{j=1}^m \eta_j q_j \vep_{t(j)}, \\ \la^{\mc B}_- &:=& \sum_{i=1}^n \xi_i p_i \delta_{s(i)} +\sum_{j=1}^{m-1} \eta_j q_j \vep_{t(j)} -\eta_m q_m \vep_{t(m)}. \end{eqnarray*} The weight $\la^{\mc B}_-$ will only be used for a Borel $\mc B$ of {\Large $\Ydown$}-shape. Note that $\la^{\mc B^{\text{st}}} =\la^\#$ and we shall denote $\la^{\#}_- :=\la^{\mc B^{\text{st}}}_-$. Given a Borel $\mc B$ of $|$-shape, we define $\text{s}(\mc B)$ to be the sign of $\prod_{j=1}^m \eta_j$.
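The block Frobenius coordinates $(p_i|q_j)$ of Subsection~\ref{sec:type B} can be computed mechanically from the block sizes $d_a$, $e_a$; the following Python sketch (ours, for illustration only; function names are hypothetical, and we use the conventions $\texttt d_0=\texttt e_0=0$) reproduces the coordinates of Example~\ref{frob:block}.

```python
def conjugate(la):
    """Conjugated (transposed) partition of a weakly decreasing list of parts."""
    return [sum(1 for p in la if p >= j) for j in range(1, (la[0] if la else 0) + 1)]

def block_frobenius(la, d, e, n, m):
    """Block Frobenius coordinates (p_i | q_j) of an (n|m)-hook partition la
    for block sizes d = (d_1, ..., d_r), e = (e_1, ..., e_r) read off a Borel.
    D, E hold the partial sums d_0, d_1, d_1+d_2, ... and likewise for e."""
    la = list(la) + [0] * n          # pad so la[i-1] is defined for i <= n
    lap = conjugate(la) + [0] * m    # conjugated partition, padded
    D, E = [0], [0]
    for da, ea in zip(d, e):
        D.append(D[-1] + da)
        E.append(E[-1] + ea)
    # p_i = max(la_i - E_u, 0)       where D_u < i <= D_{u+1}
    p = [max(la[i - 1] - E[max(u for u in range(len(D) - 1) if D[u] < i)], 0)
         for i in range(1, n + 1)]
    # q_j = max(la'_j - D_{u+1}, 0)  where E_u < j <= E_{u+1}
    q = [max(lap[j - 1] - D[max(u for u in range(len(E) - 1) if E[u] < j) + 1], 0)
         for j in range(1, m + 1)]
    return p, q
```

With $\la=(14,11,8,8,7,4,3,2)$, $d=(2,2,1)$ and $e=(2,2,0)$ as in Examples~\ref{diag 9|10} and \ref{frob:block}, this returns $p=(14,11,6,6,3)$ and $q=(6,6,3,2)$.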
\begin{lem}\label{hwt:standard:spo} \cite{K2} The irreducible $\osp(2m|2n)$-module of integral highest weight of the form $\sum_{i=1}^n\mu_i\delta_i + \sum_{j=1}^m\ov{\mu}_j\vep_j$ with respect to the standard Borel subalgebra is finite dimensional if and only if $(\mu_1,\ldots,\mu_n,\ov{\mu}_1,\ldots,\ov{\mu}_m)$ is either $\la^\#$ or $\la^{\#}_-$ for some $(n|m)$-hook partition $\la$. \end{lem} We shall denote these irreducible $\osp(2m|2n)$-modules with respect to the standard Borel by $L'(\osp(2m|2n),\la^\#)$ and $L'(\osp(2m|2n),\la^\#_-)$. By an argument similar to the one for Theorem~\ref{hwt change}, we establish the following. \begin{thm}\label{hwt change 2} Let $\la$ be an $(n|m)$-hook partition. \begin{enumerate} \item Assume $\mc B$ is of {\Large $\Ydown$}-shape. Then, \begin{itemize} \item[(i)] $\la^{\mc B}$ is the $\mc B$-extremal weight for the module $L'(\osp(2m|2n), \la^\#)$. \item[(ii)] $\la^{\mc B}_-$ is the $\mc B$-extremal weight for the module $L'(\osp(2m|2n), \la^{\#}_-)$. \end{itemize} \item Assume $\mc B$ is of $|$-shape. Then, \begin{itemize} \item[(i)] $\la^{\mc B}$ is the $\mc B$-extremal weight for $L'(\osp(2m|2n), \la^\#)$ if $\text{s}(\mc B) =+$. \item[(ii)] $\la^{\mc B}$ is the $\mc B$-extremal weight for $L'(\osp(2m|2n), \la^{\#}_-)$ if $\text{s}(\mc B) =-$. \end{itemize} \end{enumerate} \end{thm} \begin{cor} For $m$ even, every finite dimensional irreducible $\osp(2m|2n)$-module of integral highest weight is self-contragradient. \end{cor} \begin{rem} The remaining $\mc B$-extremal weights for the modules $L'(\osp(2m|2n), \la^\#)$ when $\text{s}(\mc B) =-$ or for the modules $L'(\osp(2m|2n), \la^\#_-)$ when $\text{s}(\mc B) =+$ are rather complicated and do not seem to afford a uniform simple answer. \end{rem} The following Dynkin diagram of $\osp(2m|2n)$ or $\SG_n^\dd$, which has been in use for super duality, is opposite to the one associated to the standard Borel $\mc B^{\text{st}}$.
\begin{center} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,5) \put(6,0.3){\makebox(0,0)[c]{$\bigcirc$}} \put(6,3.8){\makebox(0,0)[c]{$\bigcirc$}} \put(8.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(12.85,2){\makebox(0,0)[c]{$\bigcirc$}} \put(15.25,2){\makebox(0,0)[c]{$\bigotimes$}} \put(17.4,2){\makebox(0,0)[c]{$\bigcirc$}} \put(21.9,2){\makebox(0,0)[c]{$\bigcirc$}} \put(6.4,0.4){\line(1,1){1.6}} \put(6.4,3.7){\line(1,-1){1.6}} \put(8.82,2){\line(1,0){0.8}} \put(11.2,2){\line(1,0){1.2}} \put(13.28,2){\line(1,0){1.45}} \put(15.7,2){\line(1,0){1.25}} \put(17.8,2){\line(1,0){0.9}} \put(20.1,2){\line(1,0){1.4}} \put(10.5,1.95){\makebox(0,0)[c]{$\cdots$}} \put(19.6,1.95){\makebox(0,0)[c]{$\cdots$}} \put(3,0.3){\makebox(0,0)[c]{\tiny $-\epsilon_{-m}-\epsilon_{-m+1}$}} \put(4.5,3.8){\makebox(0,0)[c]{\tiny $\alpha_{-m}$}} \put(8.9,1){\makebox(0,0)[c]{\tiny $\alpha_{-m+1}$}} \put(12.8,1){\makebox(0,0)[c]{\tiny $\alpha_{-2}$}} \put(15.15,1){\makebox(0,0)[c]{\tiny $\alpha_{-1}$}} \put(17.8,1){\makebox(0,0)[c]{\tiny $\beta_{1/2}$}} \put(22,1){\makebox(0,0)[c]{\tiny $\beta_{n-3/2}$}} \put(0,1.2){{\ovalBox(1.8,1.4){$\SG_n^\dd$}}} \end{picture} \end{center} Setting $\vep_{j} =\ep_{-m+j-1}$ and $\delta_i =\ep_{n-i+1/2}$ to match notations, we record the following corollary of \lemref{hwt:standard:spo} and \thmref{hwt change 2}. \begin{cor}\label{aux:finite2} An irreducible integral highest weight $\osp(2m|2n)$-module with respect to the Borel subalgebra corresponding to \makebox(24,0){$\oval(24,14)$}\makebox(-24,8){${\SG}_n^\dd$} is finite dimensional if and only if the highest weight is of the form \begin{equation}\label{B-type:finite:hw1} \pm\max \{\la'_{m}-n, 0\} \,\epsilon_{-m} -\sum_{j=1}^{m-1}\max\{\la'_{j}-n, 0\} \,\epsilon_{-j} -\sum_{i=1}^n\la_{n-i+1}\epsilon_{i-1/2}, \end{equation} where $\la=(\la_1,\la_2,\ldots)$ is an $(n|m)$-hook partition. 
\end{cor} \begin{rem}\label{finite:dim:character} From Corollaries \ref{aux:finite1} and \ref{aux:finite2} it follows that, after passing to the central extension $\SG_n$ on which the center $K$ acts as scalar multiplication by $d\in\Z$, the weights in \eqnref{B-type:finite:hw2} and \eqnref{B-type:finite:hw1} lie in $\bar{P}^+_n$ whenever $d\le -\la_1$. Hence, \thmref{character} and \lemref{lem:trunc} provide a complete solution to the {\em finite dimensional} irreducible character problem for the ortho-symplectic Lie superalgebras. \end{rem} \begin{rem} Recall \cite{BR,S} that finite dimensional irreducible {\em polynomial} $\gl(n|m)$-modules are exactly the highest weight modules $L'(\gl(n|m), \la^\#)$ with respect to the standard Borel subalgebra parametrized by $(n|m)$-hook partitions $\la$. One can assign to any Borel subalgebra $\mc{B}$ of $\gl(n|m)$ an $\vep\delta$ sequence as in \secref{sec:type B}, but now with $\xi_i=\eta_j=1$, for all $i,j$. By the same argument as for Theorem~\ref{hwt change}, we can show that the highest weights of the polynomial representations of $\gl(n|m)$ with respect to $\mc{B}$ are given by $\la^{\mc B} =\sum_{i=1}^n p_i \delta_{s(i)} +\sum_{j=1}^m q_j \vep_{t(j)}. $ \end{rem} \subsection{} \label{opposite:borel} By flipping the Dynkin diagram of \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{k}^\xx$} from left to right and changing all the simple roots therein to their opposites, we obtain a Dynkin diagram\ \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$^\texttt{o}\mf{k}^\xx$} corresponding to the opposite Borel subalgebras, where $\mf{x=b,b^\bullet,c,d}$.
Similarly, by flipping the Dynkin diagrams \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$\mf{T}_n$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$\ov{\mf{T}}_n$} and \makebox(23,0){$\oval(20,15)$}\makebox(-20,8){$\wt{\mf{T}}_n$} and changing all signs of the simple roots for $n\in\N\cup\{\infty\}$, we obtain the Dynkin diagrams \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$^\texttt{o}\mf{T}_n$}, \makebox(23,0){$\oval(20,14)$}\makebox(-20,8){$^\texttt{o}\ov{\mf{T}}_n$} and \makebox(23,0){$\oval(20,15)$}\makebox(-20,8){$^\texttt{o}\wt{\mf{T}}_n$}, respectively, of the opposite Borel subalgebras. We form the diagrams corresponding to the Borel subalgebras opposite to \eqnref{Dynkin:combined} as follows: \begin{equation*}\label{Dynkin:combined:op} \hskip -3cm \setlength{\unitlength}{0.16in} \begin{picture}(24,1) \put(5.0,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$^\texttt{o}\mf{T}_n$}}}} \put(5.8,0.5){\line(1,0){1.85}} \put(8.5,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$^\texttt{o}\mf{k}^\xx$}}}} \put(15,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$^\texttt{o}\ov{\mf{T}}_n$}}}} \put(15.8,0.5){\line(1,0){1.85}} \put(18.5,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$^\texttt{o}\mf{k}^\xx$}}}} \put(25,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$^\texttt{o}\wt{\mf{T}}_n$}}}} \put(25.8,0.5){\line(1,0){1.85}} \put(28.5,0.5){\makebox(0,0)[c]{{\ovalBox(1.6,1.2){$^\texttt{o}\mf{k}^\xx$}}}} \end{picture} \end{equation*} The corresponding Lie superalgebras are again $\G$, $\SG$ and $\DG$, respectively. The arguments in Sections \ref{sec:O} and \ref{sec:character} can be adapted easily to allow us to compare correspondingly defined parabolic categories $^\texttt{o}\mc{O}$, $^\texttt{o}\ov{\mc{O}}$ and $^\texttt{o}\wt{\mc{O}}$ using these opposite Borel subalgebras, whose precise definitions are evident. 
We note that for the corresponding set of weights $^\texttt{o}P^+$ of the form \begin{align*} \sum_{i=1}^{m}\la_{i}\epsilon_{-i} - \sum_{j\in\N}\la^+_{j}\epsilon_{j} + d\La_0,\quad d\in\C, \end{align*} to satisfy the corresponding {dominant condition} we require, besides the obvious dominant condition on the standard Levi subalgebra of \makebox(23,0){$\oval(20,12)$}\makebox(-20,8){$^\texttt{o}\mf{k}^\xx$}, also that $\la^+=(\la^+_1,\la^+_2,\ldots)$ is a partition. This allows us to prove an analogous version of \thmref{character} and thus to compute irreducible characters of Lie superalgebras in terms of irreducible characters of Lie algebras. The results in Sections~\ref{sec:homology} and \ref{sec:category} also have fairly straightforward analogues in $^\texttt{o}\mc{O}$, $^\texttt{o}\ov{\mc{O}}$ and $^\texttt{o}\wt{\mc{O}}$. In particular, we can prove equivalences of the corresponding finitely generated module subcategories following the strategy of \secref{sec:category}. Besides its own interest, another virtue of this opposite version of super duality lies in the ease of calculation of finite dimensional irreducible characters of modules over the finite dimensional ortho-symplectic Lie superalgebras. As the highest weight modules over $\SG$ in this setup already have highest weights with respect to the standard Borel subalgebras, the knowledge of extremal weights for finite dimensional irreducible modules is no longer needed to conclude that the solution of the irreducible character problem in the categories $^\texttt{o}\ov{\mc{O}}$ and $^\texttt{o}\ov{\mc{O}}_n$ also solves the finite dimensional irreducible character problem. \bigskip \frenchspacing
\begin{document} \maketitle \begin{abstract} In this paper we consider a constrained parabolic optimal control problem. The cost functional is quadratic: it combines the distance of the trajectory of the system from the desired evolution profile with the cost of a control. The constraint is given by a term measuring the distance between the final state and the desired state towards which the solution should be steered. The control enters the system through the initial condition. We present a geometric analysis of this problem and provide a closed-form expression for the solution. This approach allows us to present the sensitivity analysis of this problem based on the resolvent estimates for the generator of the system. The numerical implementation is performed by exploiting efficient rational Krylov approximation techniques that allow us to approximate a complex function of an operator by a series of linear problems. Our method does not depend on the actual choice of discretization. The main approximation task is to construct an efficient rational approximation of a generalized exponential function. It is well known that this class of functions allows exponentially convergent rational approximations, which, combined with the sensitivity analysis of the closed-form solution, allows us to present a robust numerical method. Several case studies are presented to illustrate our results. \end{abstract} \begin{keywords} optimal control, parabolic equations, convex optimization, Krylov spaces, functions of operators, spectral calculus \end{keywords} \begin{AMS} 49N05, 49K20, 49M41, 65F60 \end{AMS} \tableofcontents \section{Introduction} In this paper we consider an optimal control problem for a general linear parabolic equation governed by a self-adjoint operator on an abstract Hilbert space.
The task consists in identifying a control (entering the system through the initial condition) that minimizes a given cost functional, while steering the final state at time $T>0$ close to the given target. The functional comprises the control norm and an additional term penalizing the distance of the state from the desired trajectory. This can be considered as an inverse problem (of initial source identification) for parabolic equations from the optimal control viewpoint. It is an important, but also numerically challenging issue due to the dissipative nature of such equations. It has been addressed by different methods, some including optimization and optimal control techniques \cite{FPZ95,MV07,LOT14,CVZ15}. Optimal control problems with control in initial conditions are less investigated than distributed or boundary control problems. The latter contain controls acting along the whole time interval $[0, T]$. Such a setting is not the subject of this paper, but we refer the interested reader to \cite{T10}, which contains a quite clear and detailed exposition of the topic. Our problem can be treated by exploiting the Fenchel-Rockafellar duality for convex optimisation (cf.\ \cite[Section 3.6]{Peypouquet-15}). If the cost functional consists of the control cost only, the problem reduces to the classical minimal norm control problem, which can be treated by the Hilbert uniqueness method. In the seminal work \cite{CGL94} this approach is used to transform boundary control problems into identification problems for initial data of the adjoint heat equation, better suited to numerical methods than the original problems. A more recent paper \cite{FM14} generalizes this method by considering a cost functional that includes the state in addition to the control. In order to explore efficient optimization methods, in both papers the authors approximate the original problem, with a constraint on the final state, by an unconstrained one containing a penalisation term.
The solution is then obtained by letting the penalisation constant blow up. Similar techniques are applied in \cite{B13,FZ00}. However, this approach does not provide an a-priori estimate on the deviation of the final state from the given target. In order to numerically recover the control minimising the functional of interest, most of the authors involve finite difference and/or finite element discretisation and employ some iterative scheme (e.g. conjugate gradient), usually including the dual problem. Classical convex optimization techniques in Hilbert spaces (e.g. \cite{Bauschke17,Peypouquet-15}) also provide iterative methods that can be applied to our problem. Of course, these iterative techniques come with a significant computational cost, which increases with the system dimension. In this paper we propose a different approach based on the spectral calculus for self-adjoint operators and a geometrical representation of the problem. First, we obtain a closed-form expression for the control solution as a function of the self-adjoint operator governing the dynamics of the system. This expression is almost explicit, up to a scalar factor ensuring that the deviation of the final state from the given target is within the prescribed tolerance. Once the equation for this scalar unknown is solved, the method provides a direct, one-shot formula for the solution. Its numerical computation is achieved by exploiting efficient rational Krylov approximation techniques for resolvents from \cite{BK17}, by which one constructs a rational approximant of the aforementioned function of the operator. The proposed method and the obtained formula are given in the abstract Hilbert space framework and can be applied to optimal control problems for a large class of linear parabolic PDEs for which there exist efficient resolvent approximation algorithms. To illustrate our methods we treat optimization problems for 1D and 2D heat equations.
Our approach is an extension of the result from \cite{LazarMP-17}, where the authors explore the spectral representation of the solution by eigenfunctions of the operator governing the system dynamics. This eventually leads to an explicit expression (up to a scalar factor) of the optimal final state and the optimal control. The obtained formulae are spectrally decoupled, meaning that the $n$-th Fourier coefficient is fully expressed by the corresponding coefficients of the given data: the final target and the desired trajectory. However, the practical implementation of the algorithm is constrained by the availability of the spectral decomposition of the operator. For general PDE operators with variable coefficients and/or acting on irregular domains the decomposition is in general not available or hard to construct. Moreover, this construction requires costly computations that can exceed the gain provided by the efficiency of the obtained formula. On the other hand, the method proposed in this paper is applicable to more complex settings. It allows us to efficiently treat PDEs with variable coefficients defined on complicated domains. In addition, it is robust with respect to small perturbations of both the system and the cost functional, and we provide estimates on the deviation of the original solution from the perturbed one. This is quite important in applications, as in practice the models of interest are often not completely determined, being subject to unknown or uncertain parameters of either deterministic or stochastic nature. Furthermore, an expansion of a state in eigenfunctions typically converges rather slowly except in very specific cases. In comparison, representations of solutions using Krylov subspaces are much more efficient in the number of required terms. The paper is organised as follows. In the next section we formulate the problem and state the main result (Theorem \ref{thm:solution_final_state}).
In Section~\ref{sec:sensitivity_analysis} we present the sensitivity analysis which justifies a finite-dimensional approximation of the problem. In Section~\ref{Sec:rat} we present the rational function approximation theory and discuss the stability of the finite element approximation of the problem. Further, we discuss the relationship between numerical rational function calculus as realized by the \texttt{rkfit} algorithm \cite{BK17} and the approximation problem for the generalized exponential functions which are central to the study of the concrete numerical examples. In Section~\ref{sec:examples} we present 1D and 2D numerical examples which are outside the scope of the original eigendecomposition method from \cite{LazarMP-17}. Within the concluding remarks we discuss the efficiency of the introduced method, open perspectives and a comparison with other approaches. \section{Setting of the problem and characterisation of the solution} Let $\cH$ be a Hilbert space and $A$ be an upper-bounded self-adjoint operator in $\cH$ with an upper bound $\kappa$, i.e.\ $\max \sigma (A)\le \kappa$. We denote by $(S_t)_{t \geq 0}$ the semigroup generated by $A$. We consider for $f\in L^2((0,\infty);\cH)$ and $u \in \cH$ the Cauchy problem \begin{equation} \label{eq:Cauchy} \begin{cases} y' (t) = A y (t) + f(t), & t > 0 , \\ y (0) = u . \end{cases} \end{equation} Note that the mild solution of \eqref{eq:Cauchy} is given by \[ y(t) = S_t u + \int_0^t S_{\tau}f(t-\tau)\drm \tau ,\quad t \geq 0, \] and is an element of $L^2((0,\infty);\cH)$, see e.g.\ \cite{EngelN1999}. We say that the system \eqref{eq:Cauchy} is controllable to a target state $y^* \in \cH$ in time $T > 0$ if there is $u \in \cH$ such that \[ S_T u + \int_0^T S_{\tau}f(T-\tau)\drm \tau = y^* .
\] We say that the system \eqref{eq:Cauchy} is approximately controllable in time $T > 0$ if for all $y^* \in \cH$ and all $\epsilon > 0$ there exists $u \in \cH$ such that \begin{equation} \label{eq:approx} \Bigl\lVert S_T u + \int_0^T S_{\tau}f(T-\tau)\drm \tau - y^* \Bigr\rVert \leq \epsilon . \end{equation} \begin{remark} \label{St-properties} Let us note that for any $T>0$ the operator $S_T$ is injective with a dense range. For the reader's convenience, we present a short proof. For the injectivity assume that there exists $0 \ne x \in \cH$ such that $S_T x = 0$. The semigroup property immediately implies that $S_t x = 0$ for all $t > T$. Let $0 < t < T$ be arbitrary. Then there exists $k\in \mathbb{N}$ such that $2^k t > T$ and hence $S_{2^k t} x = 0 $. Since $S_t$ is a non-negative operator, we have $0 = \langle S_{2^k t} x, x\rangle = \lVert S_{2^{k-1}t} x\rVert^2 $, hence $S_{2^{k-1}t} x = 0$. Now by induction it follows that $S_t x = 0$. Since $( S_t )_{t\geq 0}$ is a $C_0$-semigroup, we obtain $0 = \lim_{t\to 0} S_t x = x$, a contradiction. Note that this also implies that $\ran(S_T)$ is a dense subspace of $\cH$ as $S_T$ is a self-adjoint operator. \end{remark} \medskip Since the range of $S_T$ is dense in $\cH$, the system \eqref{eq:Cauchy} is indeed approximately controllable in any time $T > 0$. In the class of initial values satisfying \eqref{eq:approx} for a given target $y^* \in \cH$ and time $T > 0$, we are looking for those with minimal cost.
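The mild solution formula above can be sanity-checked on a finite-dimensional stand-in for \eqref{eq:Cauchy}: a symmetric negative definite matrix plays the role of the generator $A$, so the semigroup is simply the matrix exponential. The sketch below is illustrative only (all variable names and the particular source term are our own choices, not taken from the paper) and assumes NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Finite-dimensional stand-in: a symmetric negative definite matrix A,
# so the semigroup S_t = exp(t A) is the matrix exponential.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = -(M @ M.T) - np.eye(5)            # self-adjoint, max sigma(A) < 0
u = rng.standard_normal(5)            # initial state
f = lambda t: np.sin(t) * np.ones(5)  # smooth source term
T = 1.0

# Mild solution y(T) = S_T u + int_0^T S_tau f(T - tau) d tau,
# with the integral evaluated by the trapezoid rule.
taus = np.linspace(0.0, T, 2001)
vals = np.array([expm(tau * A) @ f(T - tau) for tau in taus])
y_mild = expm(T * A) @ u + trapezoid(vals, taus, axis=0)

# Cross-check against a direct solve of y' = A y + f(t), y(0) = u.
sol = solve_ivp(lambda t, y: A @ y + f(t), (0.0, T), u,
                rtol=1e-10, atol=1e-12)
assert np.allclose(y_mild, sol.y[:, -1], atol=1e-5)
```

The agreement of the two computations is exactly the statement that the variation-of-constants formula solves the Cauchy problem.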
More precisely, for $\epsilon,T > 0$ and $y^* \in \cH$ we introduce the problem \begin{equation} \label{eq:problem} \min_{u \in \cH} \left\{ J(u) \colon \Bigl\lVert S_T u + \int_0^T S_{\tau}f(T-\tau)\drm \tau - y^* \Bigr\rVert \leq \epsilon \right\} \end{equation} where \[ J (u) = \frac{\alpha}{2} \lVert u \rVert^2 + \frac{1}{2} \int_0^T \beta (t) \Bigl\lVert S_t u + \int_0^t S_{\tau}f(t-\tau)\drm \tau - w(t) \Bigr\rVert^2 \drm t , \] $\alpha > 0$ and $\beta \in L^\infty ((0,T) ; [0,\infty))$ are weights of the cost, and $w \in L^2 ((0,T) ; \cH)$ is the target trajectory. Of course, the notation used can be somewhat simplified by substituting $S_t u + \int_0^t S_{\tau}f(t-\tau)\drm \tau $ with $y(t)$, but we want to keep the formulation that explicitly shows the dependence of the problem on the given data $f, y^*$ and $w$, and the unknown control $u$. For $\epsilon > 0$ and $x \in \cH$ we denote by $B_\epsilon (x) = \{ y \in \cH \colon \lVert y-x \rVert \leq \epsilon \}$ the closed ball of radius $\epsilon$ and center $x$. Our problem \eqref{eq:problem} can be restated as \begin{equation} \label{p2} \min_{u\in \cH} \left\{J\left(u\right) + I_{B_\epsilon (y^*)}\left(S_T u+ \int_0^T S_{\tau}f(T-\tau)\drm \tau \right) \right\}, \end{equation} where $I_{B_\epsilon (y^*)}$ is the corresponding \emph{indicator function} defined as \begin{equation*} I_{B_\epsilon (y^*)} \left(y\right)= \begin{cases} 0 &\mbox{if } y\in B_\epsilon (y^*), \\ +\infty &\mbox{else}. \end{cases} \end{equation*} Since the function $u \mapsto J(u)+I_{B_\epsilon (y^*)}\bigl(S_T u+ \int_0^T S_{\tau}f(T-\tau)\drm \tau\bigr) $ is proper, strongly convex and lower-semicontinuous, problem \eqref{eq:problem} has a unique solution, which we denote by $\uhat $ (see, for instance, \cite[Corollary 2.20]{Peypouquet-15}).
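A low-dimensional analogue of \eqref{eq:problem} can be solved directly with a generic constrained optimizer, which is a useful baseline to keep in mind. The sketch below is illustrative only: it takes $f=0$, $w=0$, $\beta\equiv 1$, uses SciPy's SLSQP solver rather than the operator-theoretic approach of this paper, and all names are our own.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid
from scipy.optimize import minimize, NonlinearConstraint

# Low-dimensional stand-in for problem (eq:problem), with f = 0 and
# target trajectory w = 0, solved by a generic SQP method.
rng = np.random.default_rng(2)
n, T, alpha, eps = 4, 1.0, 1e-2, 0.2
M = rng.standard_normal((n, n))
A = -(M @ M.T) / 4.0 - 0.5 * np.eye(n)   # self-adjoint, negative definite
ystar = rng.standard_normal(n)           # target state y*

ts = np.linspace(0.0, T, 201)
Ss = [expm(t * A) for t in ts]           # semigroup snapshots S_t
ST = Ss[-1]

def J(u):
    # J(u) = alpha/2 ||u||^2 + 1/2 int_0^T ||S_t u||^2 dt  (beta = 1, w = 0)
    vals = np.array([np.dot(S @ u, S @ u) for S in Ss])
    return 0.5 * alpha * np.dot(u, u) + 0.5 * trapezoid(vals, ts)

# constraint ||S_T u - y*|| <= eps, written in smooth (squared) form
con = NonlinearConstraint(
    lambda u: np.dot(ST @ u - ystar, ST @ u - ystar), 0.0, eps**2)
res = minimize(J, np.zeros(n), method="SLSQP",
               constraints=[con], options={"ftol": 1e-9, "maxiter": 1000})
uhat = res.x

assert res.success
# The unconstrained minimizer here is u = 0; it is infeasible since
# ||y*|| > eps, so the optimizer ends up with an active constraint:
assert np.linalg.norm(ystar) > eps
assert abs(np.linalg.norm(ST @ uhat - ystar) - eps) < 1e-3
```

The fact that the computed final state lands on the boundary of the target ball is precisely the behaviour characterised analytically later in this section.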
Moreover, we define \[ \utilde = \argmin_{u \in \cH} J (u) , \] as the solution to the corresponding unconstrained problem, while by $\ytilde = S_T \utilde + \int_0^T S_{\tau}f(T-\tau)\drm \tau$ and $\yhat = S_T \uhat + \int_0^T S_{\tau}f(T-\tau)\drm \tau$ we denote the corresponding optimal final states obtained from $\utilde $ and $\uhat $, respectively. \begin{remark} Regarding the results we use from \cite{Peypouquet-15}, we note that they are stated in the case of real Hilbert spaces only. However, they carry over to the complex case by realifying the Hilbert space $\cH$ and taking the real part of the inner product instead of the (complex) inner product. \end{remark} \medskip The problem \eqref{p2} has the form of a composite optimization problem \cite{Peypouquet-15}. That is to say, the target functional is a sum of a quadratic function and a ``simple'' function, e.g. a composition with an indicator function. Such problems -- when posed in the correct abstract setting -- are typically solved by methods based on the proximal operator. Instead, we use the spectral calculus to explicitly construct an operator theoretic representation of the trajectories, cf.\ Remark~\ref{rem:2.8} and Section~\ref{Sec:rat}. Interestingly, the structure of the abstract composite optimization problem is still preserved in the solution formula. We will comment on this explicitly in Remark~\ref{rem:proximal} after the statement of the main theorem. \par We define $\yhoms = y^* - \int_0^T S_{\tau}f(T-\tau)\drm \tau$ and $\whom = w - \int_0^{\boldsymbol{\cdot}} S_{\tau}f(\boldsymbol{\cdot} - \tau)\drm \tau$. Then our problem~\eqref{eq:problem} can be written as \begin{equation} \label{eq:problemh} \min_{u \in \cH} \left\{ J(u) \colon \lVert S_T u - \yhoms \rVert \leq \epsilon \right\}, \end{equation} where \[ J (u) = \frac{\alpha}{2} \lVert u \rVert^2 + \frac{1}{2} \int_0^T \beta (t) \lVert S_t u - \whom(t) \rVert^2 \drm t .
\] Note that $S_t u$ is the solution of the corresponding homogeneous Cauchy problem with $f=0$. \par If $\epsilon \geq \lVert \ytilde - y^* \rVert$, it follows that the solution of \eqref{eq:problemh} (and hence also of \eqref{eq:problem}) satisfies $\uhat = \utilde$. The following theorem covers the non-trivial case $0 < \epsilon < \lVert \ytilde - y^* \rVert$ as well. \begin{theorem} \label{thm:solution_final_state} Let $T,\epsilon > 0$ and $y^* \in \cH$. Then the optimal initial state $\uhat$ is given by \begin{equation} \label{sol_form} \uhat = ( \muhat S_{2T} + \opb)^{-1} (\muhat S_T \yhoms + \vb) , \end{equation} where \[ \opb = \alpha \Id + \int_0^T \beta (t) S_{2t} \drm t, \quad \vb = \int_0^T \beta (t) S_{t} \whom(t) \drm t , \] and $\muhat \ge 0$ is the unique solution of $\Phi (\mu) = \epsilon$ if $\epsilon < \lVert \ytilde - y^* \rVert = \lVert \Psi^{-1}S_T \psi - \yhoms \rVert $, and zero otherwise. Here $\Phi \colon [0,\infty) \to [0,\infty)$ is the function defined by \begin{equation} \label{eq:Phi} \Phi(\mu) = \lVert \yhoms - (\mu S_{2T} + \opb)^{-1} (\mu S_{2T} \yhoms + S_T\vb) \rVert . \end{equation} \end{theorem} \begin{remark} \label{u_min} Since $\Psi$ is positive definite, we indeed have that $\Psi$ and $\mu S_{2T} + \Psi$, $\mu \geq 0$, are invertible. Moreover, for the functional $J$ we have $\nabla J(u)= \Psi u - \psi$. As $\utilde$ is its global minimizer, it immediately follows that $\utilde = \opb^{-1}\vb$, and thus $\lVert \Psi^{-1}S_T \psi - \yhoms \rVert = \lVert \ytilde - y^* \rVert$. \end{remark} \medskip \begin{remark} For $\epsilon < \lVert \ytilde - y^* \rVert$ we obtain from \eqref{sol_form} and \eqref{eq:Phi} \[ \Phi(\muhat) = \lVert y^* - \yhat \rVert = \epsilon. \] In other words, if the global minimizer $\utilde$ of the unconstrained problem does not drive the system to the target ball $B_\epsilon (y^*)$, then the optimal final state lies on the boundary of this ball, cf.\ Lemma~\ref{lem:yopt}.
This is in accordance with previous results on similar problems (e.g. \cite[Proposition 2.1]{LazarMP-17} and \cite[Theorem 2.4]{CVZ15}) that provide the same characterisation of the optimal solution. \end{remark} \medskip \begin{remark} Let $\phi(\mu) = \yhoms - (\mu S_{2T}+ \opb)^{-1} (\mu S_{2T} \yhoms + S_T\vb)$, hence $\Phi(\mu) = \lVert \phi(\mu) \rVert $. Then $\phi(\mu) = \yhoms - x$, where $x$ is the solution of the equation \begin{equation} \label{eq:linsym-mu} \left(\mu S_{2T} + \opb \right) x = \mu S_{2T }\yhoms + S_T \vb, \end{equation} hence the calculation of $\Phi(\mu)$ reduces to solving a linear equation. Note also that the optimal initial state $\uhat$ is the solution of the equation \begin{equation} \label{eq:linsym-final} \left(\muhat S_{2T} +\opb \right) x = \muhat S_{T} \yhoms + \vb. \end{equation} \end{remark} \medskip \begin{remark} \label{rem:proximal} In order to give some geometrical intuition for the constrained optimization problem that we solve in the Hilbert space setting, let us observe a formal similarity between the result of Theorem \ref{thm:solution_final_state} and the known finite dimensional result \cite{ProximalOP}. Let $\lambda>0$ and let $A\in\mathbb{R}^{m\times n}$ be of full row rank, so that $AA^*$ is invertible. Then for a given $x \in \RR^n$ \[ u_x = \argmin_{u \in \RR^n}\left\{\lambda\lVert Au \rVert +\frac{1}{2} \lVert u-x \rVert^2\right\} \] is given by the formula \[ u_x = \begin{cases} x-A^*(AA^*)^{-1}Ax,& \text{if} \ \lVert (AA^*)^{-1}Ax \rVert \leq\lambda, \\ x-A^*(AA^*+\alpha^* \Id)^{-1}Ax,& \text{if} \ \lVert (AA^*)^{-1}Ax \rVert >\lambda. \end{cases} \] Here $\alpha^*$ is the unique positive root of the decreasing function \[ \phi(\alpha)= \lVert (AA^*+\alpha \Id)^{-1}Ax \rVert^2-\lambda^2 . \] The function $\phi$ takes the role of $\Phi$ from \eqref{eq:Phi}.
\end{remark} \medskip \begin{remark}\label{rem:2.8} To calculate $\opb$ we will use the fact that it can be written as a function of $A$ by using \[ \int_0^T \beta(t) S_{2t} \drm t = \int_{- \infty}^{\infty} \int_0^T \beta(t) \exp(2 t \lambda) \drm t \, \drm (E(\lambda)) = \tilde \beta_0(A), \] where $\tilde \beta_0$ is the function given by $\tilde \beta_0(\lambda) = \int_{0}^{T} \beta(t) \exp(2 t \lambda) \drm t$ and $E$ is the spectral measure of $A$. For instance, for a constant weight $\beta(t)\equiv\beta_0$ one obtains $\tilde \beta_0(\lambda) = \beta_0 \bigl(\exp(2T\lambda)-1\bigr)/(2\lambda)$, with the value $\beta_0 T$ at $\lambda = 0$. However, such an approach does not work directly for the other term entering the formula for the solution \eqref{sol_form}. Namely, it is not possible to find a nice closed formula for $\vb$ except in special situations. But we can always find a good approximant for $\vb$. Let $\tilde w (t) = \sum_{i=1}^N w_i \chi_{[t_{i-1},t_i]} $ be an approximation of $w$, where $0 = t_0 < t_1 < \cdots < t_N = T$, $w_i \in \cH$, $i=1,\ldots, N$, and $\chi_S$ is the characteristic function of the set $S$. Then \[ \tilde\vb = \sum_{i=1}^N \tilde \beta_i(A) w_i, \text{ where } \tilde \beta_i (\lambda) = \int_{t_{i-1}}^{t_i} \beta(t) \exp(t \lambda) \drm t, \] is an approximation of $\vb$. If the function $\beta$ is such that we cannot explicitly calculate $\tilde \beta_i$, $i = 0,\ldots,N$, we can still find appropriate approximations of $\opb$ and $\vb$ by finding appropriate approximations of $\tilde \beta_i$, $i = 0,\ldots,N$. \end{remark} \medskip Before proving Theorem \ref{thm:solution_final_state} we provide two auxiliary results. \begin{lemma} \label{lem:yopt} Let $\epsilon > 0$ and assume $\lVert \ytilde-y^* \rVert > \eps$. Then the optimal final state satisfies $\lVert y^* - \yhat \rVert = \epsilon$, i.e. $\yhat$ lies on the boundary of the target ball. \end{lemma} \begin{proof} Let us suppose the contrary, i.e.\ that $\yhat$ lies in the interior of $B_{\eps}(y^*)$.
Then there exists $\eta>0$ such that $B_{\eta}(\yhat )\subset B_{\eps}(y^*)$ and, by continuity of $S_T $, a $\delta>0$ such that $S_T B_{\delta} (\uhat)+ \int_0^T S_{\tau}f(T-\tau)\drm \tau \subset B_{\eta}(\yhat ) \subset B_{\eps}(y^*)$. In particular, every $u\in B_{\delta} (\uhat)$ is a feasible control for the problem \eqref{eq:problem}. As $\uhat$ is the solution of this problem, it holds that $J(\uhat)\leq J(u)$ for every $u\in B_{\delta} (\uhat)$. But, by the convexity of $J$, a local minimizer is also global. Then $\uhat$ is a solution of the unconstrained problem, which contradicts the assumption $\epsilon < \lVert \ytilde - y^* \rVert$. \end{proof} \begin{lemma} \label{lem:Phi} The function $\Phi$ has the following properties: \begin{enumerate}[(a)] \item If $y^* \ne \ytilde$, then $\Phi$ is a strictly decreasing function, \item $\lim_{\mu\to \infty}\Phi (\mu) = 0$, \item $ \Phi (0) = \lVert y^* -\ytilde \rVert$. \end{enumerate} \end{lemma} \begin{proof} Note that the function $\Phi$ can be rewritten as \[\Phi(\mu) = \lVert ( \mu S_{2T} + \opb)^{-1} (\opb \yhoms - S_T\vb) \rVert.\] For the derivative of $\mu \mapsto \Phi^2(\mu)$ we have \begin{align*} (\Phi^2)' (\mu) &= -2 \bigl\langle S_{2T} (\mu S_{2T} + \opb)^{-2} ( \opb \yhoms - S_T\vb) , (\mu S_{2T} + \opb)^{-1} ( \opb \yhoms - S_T\vb) \bigr\rangle \\ &= -2 \bigl\lVert S_T (\mu S_{2T} + \opb)^{-3/2} ( \opb \yhoms - S_T\vb) \bigr\rVert^2 \leq 0 . \end{align*} As $(\Phi^2)' = 2 \Phi \Phi'$ and $\Phi$ is a nonnegative function, it follows that $\Phi' (\mu) \le 0$ for all $\mu > 0$, with strict negativity if $ \opb \yhoms \ne S_T\vb$. Recall that $ \opb \utilde =\vb$ (cf. Remark \ref{u_min}), hence $ \opb \ytilde =S_T \vb$. Now $ \opb \yhoms = S_T\vb$ would imply $ \opb\ytilde = \opb y^*$, a contradiction with the assumption and the invertibility of $ \opb$. We now prove (b).
First note \[ y - (\mu S_{2T} + \opb)^{-1} \mu S_{2T} y = y - \mu (\mu + S_{2T}^{-1} \opb )^{-1}y = (\mu + S_{2T}^{-1} \opb)^{-1} S_{2T}^{-1} \opb y \] for all $y\in \dom(S_{2T}^{-1} \opb )$, which, in view of $\lVert (\mu + S_{2T}^{-1} \opb)^{-1} \rVert = \mathop{\mathrm{dist}}(\mu, -\sigma (S_{2T}^{-1} \opb))^{-1}\le \mu^{-1} $, implies that $ \lim_{\mu\to \infty} \bigl( y - \mu (\mu + S_{2T}^{-1} \opb )^{-1}y \bigr) = 0 $ for all $y\in \dom(S_{2T}^{-1} \opb )$. Let $(y_n)$ be a sequence from $\dom(S_{2T}^{-1} \opb )$ which converges to $\yhoms$. Let $n\in \mathbb{N}$ be arbitrary. Using the bound $\lVert (\mu + S_{2T}^{-1} \opb)^{-1} \rVert \le \mu^{-1} $ again, it follows \begin{align*} &\lim_{\mu\to \infty} \lVert \yhoms - (\mu S_{2T} + \opb)^{-1} \mu S_{2T} \yhoms \rVert \\ &= \lim_{\mu\to \infty} \lVert \yhoms - y_n - \mu (\mu + S_{2T}^{-1} \opb)^{-1}(\yhoms - y_n) + y_n - \mu (\mu + S_{2T}^{-1} \opb)^{-1}y_n \rVert \\ &\le 2\lVert \yhoms - y_n \rVert + \lim_{\mu\to \infty} \lVert y_n - \mu (\mu + S_{2T}^{-1} \opb)^{-1}y_n \rVert = 2\lVert \yhoms - y_n \rVert , \end{align*} and taking the limit $n\to \infty$ we obtain \[ \lim_{\mu\to \infty} \lVert \yhoms - (\mu S_{2T} + \opb)^{-1} \mu S_{2T} \yhoms \rVert = 0. \] Hence to prove (b) we only have to show \begin{equation} \label{eq:b-limit} \lim_{\mu\to \infty} \lVert (\mu S_{2T} + \opb)^{-1} S_T\vb \rVert = \lim_{\mu\to \infty} \lVert S_T^{-1} (\mu + S_{2T}^{-1} \opb)^{-1} \vb \rVert = 0. \end{equation} Since $\ran (S_T)$ is dense in $\cH$ (cf. Remark \ref{St-properties}), there exists a sequence $(\vb_m)_{m \in \NN}$ in $\ran (S_T)$ such that $\lim_{m\to \infty}\vb_m =\vb$, and let $v_m\in \cH$ be such that $\vb_m = S_T v_m$.
Then $\lim_{\mu \to \infty}S_T^{-1} (\mu + S_{2T}^{-1} \opb)^{-1}\vb_m = \lim_{\mu \to \infty}(\mu + S_{2T}^{-1} \opb)^{-1} v_m =0$ for all $m\in \mathbb{N}$ and for $\vb$ we have the following estimate \begin{equation} \label{eq:bn} \lVert S_T^{-1} (\mu + S_{2T}^{-1} \opb)^{-1} \vb \rVert \leq \lVert S_T (\mu S_{2T} + \opb)^{-1} \rVert \lVert\vb -\vb_m \rVert + \lVert S_T^{-1} (\mu + S_{2T}^{-1} \opb)^{-1}\vb_m \rVert . \end{equation} By differentiating one can show that the mapping $[0,\infty) \ni \mu \mapsto \lVert (\mu S_{2T} + \opb)^{-1}x \rVert^2 $ is a decreasing function for all $x\in\cH$, so in particular $[0,\infty) \ni \mu \mapsto \lVert S_T (\mu S_{2T} + \opb)^{-1} \rVert$ is a bounded function. Thus we can pass to the limit in \eqref{eq:bn}, from which we obtain \eqref{eq:b-limit}. Finally, (c) follows from $ \Phi (0) = \lVert \yhoms - \opb^{-1}S_T\vb \rVert = \lVert y^* - \ytilde \rVert$. \end{proof} Now we are ready to provide the proof of Theorem \ref{thm:solution_final_state}. \begin{proof}[Proof of Theorem~\ref{thm:solution_final_state}] Note that the case $\epsilon \ge \lVert \ytilde - y^* \rVert$ is covered trivially. Indeed, by choosing $\muhat = 0$ in \eqref{sol_form} we obtain $\uhat = \Psi^{-1}\psi = \utilde$. By the assumption on $\epsilon$, the unconstrained minimizer $\utilde$ is admissible, and clearly optimal. For the rest of the proof we consider the case $0 < \epsilon < \lVert \ytilde - y^* \rVert$. We fix $T,\epsilon > 0$ and $y^* \in \cH$. Based on Lemma \ref{lem:yopt}, our problem \eqref{eq:problem} (or its equivalent form \eqref{eq:problemh}) can be restated as \begin{equation*} \min_{u \in \cH} \left\{ J(u) \colon \lVert S_T u - \yhoms \rVert =\epsilon \right\}, \end{equation*} whose associated Lagrange functional reads \[ L(u, \mu)= J(u) + \frac{\mu}{2} \left(\lVert S_T u - \yhoms \rVert^2 - \epsilon^2\right). \] Its (global) minimizer corresponds to the unique solution of our problem.
As $J$ is a differentiable function, so is the Lagrangian $L$. Thus it attains its minimum at the point $(\uhat, \muhat)$ satisfying \[ \nabla_{u, \mu} L (\uhat, \muhat)= 0. \] By exploiting the relation $\nabla J(u)= \Psi u - \psi$ we get \[ \nabla_{u} L (\uhat, \muhat) = \Psi \uhat - \psi + \muhat (S_{2T } \uhat - S_T \yhoms) =0, \] which directly leads to the formula \eqref{sol_form}. In order to determine the optimal value of the Lagrange multiplier, we use Lemma \ref{lem:yopt}, providing $ \lVert S_T \uhat - \yhoms \rVert =\eps $. Plugging in the expression for the optimal control $\uhat$ we obtain \[ \Phi(\muhat)=\lVert \yhoms - (\muhat S_{2T} + \opb)^{-1} (\muhat S_{2T} \yhoms + S_T\vb) \rVert =\eps. \] Lemma \ref{lem:Phi} and the assumption $\epsilon < \lVert \ytilde - y^* \rVert$ ensure the existence of the unique value $\muhat$ satisfying the last relation, which completes the proof. \end{proof} For the rest of this section we provide a geometric interpretation of the final optimal state. For $x \in \cH$ and $W\subset \cH$ closed and convex, we denote by $\Pi_W (x)$ the unique projection of $x$ onto the set $W$. For $c \in \mathbb{R}$ we denote by $\Gamma_c (g)$ the $c$-sublevel set of a function $g \colon \dom (g)\subset \cH \to \RR$ defined by \[ \Gamma_c (g) = \{y \in \dom(g) \colon g (y) \leq c\}. \] Note that the sublevel set of a convex function is convex, see, e.g., Remark~2.10 in \cite{Peypouquet-15}. In particular, we have the following result. \begin{lemma}\label{lemma:projection} Let $g : \cH \to \RR$ be a differentiable convex function, $c > \inf g$, and $y_1 \in \cH \setminus \Gamma_c (g)$. Denote by $y_0 \in \Gamma_c (g)$ the unique projection of $y_1$ onto $\Gamma_c (g)$. Then there is $\gamma > 0$ such that \[ y_1 - y_0 = \gamma \nabla g(y_0) .
\] \end{lemma} \begin{proof} First note that the projection $y_0$ is the solution of the constrained minimisation problem \[ \min_{y\in \cH} \Big\{ \frac 12 \lVert y-y_1 \rVert^2: \; y \in \partial\Gamma_c (g)\Big\}, \] whose Lagrangian is given by the relation \[ L(y,\lambda)=\frac 12 \lVert y-y_1 \rVert^2 + \lambda (g(y)-c). \] Since $g$ is a differentiable, real-valued function, the associated Lagrangian is differentiable as well. Therefore its unique minimizer $(y_0, \gamma)$ satisfies $\nabla L(y_0, \gamma) =0$, which is exactly the formula appearing in the statement of the lemma. The sign of $\gamma$ results from the fact that $g(y_1) > g(y_0)$ (as $y_1 \in \cH \setminus \Gamma_c (g))$. \end{proof} We introduce the functional $h: \ran(S_T) \to \mathbb{R}$, defined by $h(y)=J(S_T^{-1} y)$. For each $c \in \RR$ we define the corresponding sublevel set \begin{align*} W_{c} &:=\{S_T u \colon u \in\cH \ \text{with}\ J (u) \leq c\} = \Gamma_c (h) . \end{align*} With the notation $c_{\mathrm{min}} = J (\utilde)$ we have that $W_{c} = \emptyset$ if and only if $c < c_{\mathrm{min}}$. In order to show that the sets $W_c$ are closed we will use the fact that the weak and the strong closure of a convex set agree. If $c < c_{\mathrm{min}}$ the set $W_c$ is empty and thus closed. Assume now $c \geq c_{\mathrm{min}}$ and let $(y_n)_{n \in \NN}$ be a (strongly) convergent sequence in $W_c \subset \cH$. We denote by $y = \lim_{n\to \infty} y_n$ its limit. If we show that $y\in W_c$, i.e.\ $h (y) \leq c$, we are done. For $n \in \NN$ let $x_n \in \cH$ be such that $y_n = S_T x_n$. As $J(x_n)\le c$ for all $n \in \NN$, it follows that $(x_n)_{n \in \NN}$ is a bounded sequence. Hence there exists a weakly convergent subsequence, still denoted by $(x_n)_{n \in \NN}$. We denote by $x = \wlim_{n\to \infty}x_n$ its weak limit.
For all $z\in \cH$ we have $ \langle S_T x_n -y, z \rangle = \langle x_n, S_T z \rangle - \langle y, z \rangle \to \langle x, S_T z \rangle - \langle y, z \rangle $. Hence $S_T x = y$. Since $J$ is convex and continuous, it is weakly lower semicontinuous, so from $h (y_n) \leq c$ for each $n \in \NN$ we conclude \[ h (y) = J (x) \leq \liminf_{n \to \infty} J (x_n) = \liminf_{n \to \infty} h (y_n) \leq c . \] If we formally differentiate the function $h$, for the gradient of $h$ we obtain \begin{equation} \begin{aligned} \label{eq:h-gradient} \nabla h(y)&= S_{2T}^{-1} \left( \alpha y + \int_0^T \beta (t) S_{2t}y \,\mathrm{d}t \right) - S_{T}^{-1} \int_0^T \beta(t) S_{t} \whom(t) \,\mathrm{d}t \\ &= S_{2T}^{-1} \Psi y - S_{T}^{-1} \psi. \end{aligned} \end{equation} Below we comment on how to tackle the case in which $h$ is not differentiable in $\cH$. Denoting $\chat=J(\uhat) $, from Lemma \ref{lem:yopt} it follows that the optimal final state belongs to $W_{\chat}\cap \partial B(y^*, \eps)$. In particular, it equals the projection $ \Pi_{W_\chat} (y^*)$ of the target state onto the sublevel set $W_{\chat}$ (Figure \ref{fig:illustrationWc}). By putting $c = \chat$ and $y_1 = y^*$ into Lemma \ref{lemma:projection}, we obtain that there exists $\gammahat > 0$ such that \begin{equation*} y^* - \yhat = \gammahat \nabla h (\yhat) . \end{equation*} By acting with $S_T$ on the last equality, taking into account that $\yhat = S_T \uhat+ \int_0^T S_{\tau}f(T-\tau)\drm \tau$ and using \eqref{eq:h-gradient} we obtain \begin{equation*} S_T \yhoms - S_{2T} \uhat = \gammahat (\Psi \uhat - \psi) , \end{equation*} which leads us again to the solution formula \eqref{sol_form} with $\muhat= 1/\gammahat$. One can bypass the problem of non-differentiability of $h$ by taking a sequence of approximate functionals $h_n(y) = J(\mathrm{e}^{-TA_n}y)$ for $n$ large enough, where $A_n$ are the Yosida approximations of the operator $A$. Then $h_n:\cH \to \mathbb{R}$ are differentiable functions which converge to $h$.
Instead of \eqref{eq:problemh}, for all $n$ large enough we study the problem \begin{equation} \label{eq:n-constr_problem} \min_{y\in \cH} \left\{ h_n(y)\colon \lVert y-y^* \rVert \leq \epsilon \right\}. \end{equation} Then one can prove the corresponding version of Lemma \ref{lem:yopt} and use Lemma \ref{lemma:projection} to show that for all $n$ large enough we have \begin{equation*} \mathrm{e}^{TA_n} y^* - \mathrm{e}^{2TA_n} \uhat_n = \gammahat_n (\Psi \uhat_n - \psi) , \end{equation*} where $\uhat_n$ is the unique solution of \eqref{eq:n-constr_problem}. Finally, by proving $\lim_{n\to \infty}\uhat_n = \uhat$ and $\lim_{n\to \infty}\gammahat_n = 1/\muhat$, one recovers \eqref{sol_form}. We skip the details. \begin{figure}[ht] \centering \begin{tikzpicture} \draw (0,0) ellipse (2cm and 1cm); \draw (-2,0) node[left] {$W_{\chat }$}; \draw (1.5,1.76) circle (1cm); \filldraw (1.5,1.76) circle (2pt) node[right] {$y^*$}; \filldraw (1.2,0.8) circle (2pt); \draw (1.4,1) node[right] {$\yhat$}; \draw (1.5,1.76)--(0.5,1.76); \draw (1,1.76) node[above] {$\epsilon$}; \filldraw (0,0) circle (2pt) node[below] {$\ytilde$}; \end{tikzpicture} \caption{Illustration of the optimal final state. It equals the projection of the target state onto the sublevel set $W_{\chat}$.} \label{fig:illustrationWc} \end{figure} \section{Sensitivity analysis} \label{sec:sensitivity_analysis} In this section we show that the solution of \eqref{eq:problem} is stable in the sense that if the parameters $\alpha$, $f$, $\beta$, $w$, $y^*$ and $A$ are perturbed by a small perturbation, then the solution of the perturbed problem is likewise a small perturbation of the solution of the unperturbed problem.
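Before setting up the perturbed problem, the closed-form solution \eqref{sol_form} of Theorem \ref{thm:solution_final_state} can be exercised in finite dimensions, where every function of $A$ acts diagonally in an eigenbasis; re-solving with slightly perturbed data also gives a first numerical taste of the stability discussed here. The sketch below is illustrative only (constant $\beta$, $f=0$, constant target trajectory, all names ours) and finds $\muhat$ by bisection, which is legitimate since $\Phi$ is strictly decreasing with limit $0$:

```python
import numpy as np

# Finite-dimensional sketch of formula (sol_form): A = diag(lam) is
# self-adjoint, so every function of A acts componentwise in the
# eigenbasis; all vectors below are eigen-coordinates.
rng = np.random.default_rng(1)
n = 8
lam = -np.linspace(0.5, 6.0, n)          # eigenvalues of A (negative)
T, alpha, beta0, eps = 1.0, 1e-2, 1.0, 0.1

c_y = rng.standard_normal(n)             # target y* (f = 0, so b = y*)
c_w = rng.standard_normal(n)             # constant target trajectory w0

ST, S2T = np.exp(T * lam), np.exp(2 * T * lam)
# Psi = alpha*Id + int_0^T beta0 S_{2t} dt, psi = int_0^T beta0 S_t w0 dt,
# both in closed form for constant beta:
Psi = alpha + beta0 * np.expm1(2 * T * lam) / (2 * lam)
psi = beta0 * np.expm1(T * lam) / lam * c_w

def J(c_u):
    # J(u) = alpha/2 ||u||^2 + 1/2 int_0^T beta0 ||S_t u - w0||^2 dt
    quad = (np.expm1(2 * T * lam) / (2 * lam) * c_u**2
            - 2 * np.expm1(T * lam) / lam * c_u * c_w + T * c_w**2)
    return 0.5 * alpha * c_u @ c_u + 0.5 * beta0 * quad.sum()

def solve(c_y):
    Phi = lambda mu: np.linalg.norm(
        c_y - (mu * S2T * c_y + ST * psi) / (mu * S2T + Psi))
    if Phi(0.0) <= eps:                   # trivial case: mu_hat = 0
        return psi / Psi
    lo, hi = 0.0, 1.0
    while Phi(hi) > eps:                  # Phi decreases to 0, so this stops
        hi *= 2.0
    for _ in range(200):                  # bisection for Phi(mu) = eps
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi(mid) > eps else (lo, mid)
    mu = 0.5 * (lo + hi)
    return (mu * ST * c_y + psi) / (mu * S2T + Psi)   # formula (sol_form)

assert np.linalg.norm(ST * (psi / Psi) - c_y) > eps   # constraint active here
c_uhat = solve(c_y)
# Optimal final state sits on the boundary of the target ball ...
assert abs(np.linalg.norm(ST * c_uhat - c_y) - eps) < 1e-8
# ... with cost between the unconstrained minimum and exact steering:
assert J(psi / Psi) - 1e-6 <= J(c_uhat) <= J(c_y / ST) + 1e-6
# Stability: a tiny perturbation of y* perturbs the control only slightly.
c_uhat_d = solve(c_y + 1e-6 * rng.standard_normal(n))
assert np.linalg.norm(c_uhat_d - c_uhat) < 1e-2
```

In the abstract setting the componentwise divisions become resolvent-type operator functions, which is exactly what the rational Krylov machinery of Section~\ref{Sec:rat} approximates.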
Let $0 < \nu <1 $ and \begin{enumerate}[(i)] \item $\lvert \delta\alpha \rvert < \nu$ such that $\alpha + \delta\alpha > 0$, \item $\delta f \in L^2 ((0,\infty) ; \cH)$ such that $\lVert \delta f \rVert_{L^2 ((0,\infty) ; \cH)} < \nu$ , \item $\delta \beta \in L^\infty ((0,T); \RR)$ such that $\lVert \delta \beta \rVert_{L^{\infty}((0,T);\mathbb{R})} < \nu $ and $\beta + \delta \beta \in L^\infty ((0,T) ; [0,\infty))$, \item $\delta w \in L^2 ((0,T);\cH)$ such that $\lVert \delta w\rVert_{L^2((0,T);\cH )} < \nu$, \item $\delta y^* \in \cH$ such that $\lVert \delta y^* \rVert < \nu$. \item For the perturbation $\delta A$ of the operator $A$ we assume that $\delta A$ is a symmetric linear operator in $\cH$, $A + \delta A$ is an upper bounded self-adjoint operator in $\cH$, and there exist $\zeta > \max \{\max \sigma (A) , \max \sigma (A + \delta A)\}$ and $R > 0$ such that for all $s \in \RR$ we have \begin{equation} \label{eq:deltaAcondition} \lVert (\zeta + \mathrm{i} s- A - \delta A)^{-1} - (\zeta + \mathrm{i} s -A)^{-1} \rVert < \nu \begin{cases} 1, & |s|\le R, \\ s^{-2}, & |s| > R. \end{cases} \end{equation} \end{enumerate} We denote by $(S_t^\delta)_{t \geq 0}$ the semigroup generated by $A + \delta A$, and introduce the shorthand notation $\alpha_\delta = \alpha + \delta \alpha$, $f_\delta = f + \delta f$, $\beta_\delta = \beta + \delta \beta$, $w_\delta = w + \delta w$ and $y^*_\delta = y^* + \delta y^*$. We will use the estimate \eqref{eq:deltaAcondition} to obtain an upper bound for the perturbation of the semigroup, i.e.\ for $\lVert S_t^\delta - S_t \rVert $.
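For intuition, assumption \eqref{eq:deltaAcondition} can be probed numerically in a finite dimensional surrogate. The sketch below (a minimal Python check, with small random symmetric matrices standing in for $A$ and $\delta A$, and an arbitrary grid of sample points $s$) verifies the stronger bound $\nu/(1+s^2)$ for a bounded perturbation with $\lVert \delta A \rVert < \nu$, which is the situation discussed in the remark following the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 6, 1e-3

B = rng.standard_normal((n, n))
A = -(B @ B.T)                          # symmetric, upper bounded (surrogate for A)
P = rng.standard_normal((n, n))
dA = P + P.T
dA *= 0.9 * nu / np.linalg.norm(dA, 2)  # bounded symmetric perturbation, ||dA|| < nu

top = max(np.linalg.eigvalsh(A).max(), np.linalg.eigvalsh(A + dA).max())
zeta = top + 1.0                        # zeta exceeds both spectra by at least 1

def res(M, s):
    """Resolvent (zeta + i s - M)^{-1}."""
    return np.linalg.inv((zeta + 1j * s) * np.eye(n) - M)

# check || R_{A+dA}(zeta+is) - R_A(zeta+is) || * (1 + s^2) < nu on a grid of s
worst = max(
    np.linalg.norm(res(A + dA, s) - res(A, s), 2) * (1 + s**2)
    for s in np.linspace(-50.0, 50.0, 201)
)
print(worst < nu)
```

Since each resolvent has norm at most $(1+s^2)^{-1/2}$ by the choice of $\zeta$, the second resolvent identity gives the bound directly; the sampled check simply confirms this.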
Now we introduce the perturbed problem \begin{equation} \label{eq:problem-perturbed} \min_{u \in \cH} \left\{ J_\delta(u) \colon \lVert S_T^\delta u + \int_0^T S_{\tau}^\delta f_\delta (T-\tau)\drm \tau - y^*_\delta\rVert \leq \epsilon \right\} \end{equation} where \[ J_\delta (u) = \frac{\alpha_\delta}{2} \lVert u \rVert^2 + \frac{1}{2} \int_0^T \beta_\delta (t) \left\lVert S_t^\delta u + \int_0^t S_{\tau}^\delta f_\delta(t-\tau)\drm \tau - w_\delta(t) \right\rVert^2 \drm t . \] Let $ \yhoms_{\delta} = y_{\delta}^* - \int_0^T S^{\delta}_{\tau}f_{\delta}(T-\tau)\drm \tau$, $\whom_{\delta} = w_{\delta} - \int_0^{\boldsymbol{\cdot}} S^{\delta}_{\tau}f_{\delta}(\boldsymbol{\cdot} - \tau)\drm \tau$, $\delta \yhoms = \yhoms_{\delta} - \yhoms$ and $\delta \whom = \whom_{\delta} - \whom$. We denote the unique solution of the perturbed problem \eqref{eq:problem-perturbed} by $\uhat_\delta$, and recall that $\uhat$ is the unique solution of the unperturbed problem~\eqref{eq:problem}. \begin{theorem} \label{thm:sensitivity} Under the above assumptions we have \[ \lVert \uhat_\delta - \uhat \rVert < C \nu \] for $\nu$ small enough, where $C$ is a constant that does not depend on $\nu$. \end{theorem} \begin{remark} Let us discuss certain situations where assumption \eqref{eq:deltaAcondition} is satisfied. \begin{enumerate}[(i)] \item Let $\lVert \delta A \rVert < \nu$, $\zeta = \max \{\max \sigma (A) , \max \sigma (A + \delta A)\} + 1$ and $R = 1$. 
Then, by the second resolvent identity and $\lVert (z-T)^{-1} \rVert = 1 / \operatorname{dist} (z,\sigma (T))$ for self-adjoint $T$ and $z \in \rho (T)$, we conclude for all $s \in \RR$ \begin{align*} \lVert (\zeta + \mathrm{i} s - A_\delta)^{-1} - (\zeta + \mathrm{i} s - A)^{-1} \rVert &= \lVert (\zeta + \mathrm{i} s - A_\delta)^{-1} \delta A (\zeta + \mathrm{i} s - A)^{-1} \rVert \\ &< \frac{\nu}{\operatorname{dist}(\zeta + \mathrm{i} s , \sigma (A_\delta)) \cdot \operatorname{dist}(\zeta + \mathrm{i} s , \sigma (A))} \\ & \leq \frac{\nu}{1+s^2} . \end{align*} Hence, \eqref{eq:deltaAcondition} is satisfied if $\delta A$ is a bounded operator with norm smaller than $\nu$. \item Let $\delta A$ be relatively bounded with respect to $A$ with a relative bound smaller than $\nu$, i.e.\ $ \lVert \delta A x \rVert \le a \lVert x \rVert + b \lVert Ax \rVert $ for all $x\in \dom(A)$ with $0\leq b < \nu$ and a nonnegative constant $a$. Let $R=1$. Then $A + \delta A$ is an upper bounded self-adjoint operator, see e.g.\ \cite[Theorem V.4.3, Theorem V.4.11]{Kato1995}. Let $\zeta > \max \sigma (A) =: \kappa$ be arbitrary. Then we have $\lVert A (\zeta - A)^{-1} \rVert = \sup_{\lambda \in \sigma(A)} \frac{\lvert \lambda \rvert }{\zeta - \lambda} $, hence for all $\delta > 0$ and all $\zeta > \frac{2 + \delta}{1 + \delta}\kappa$ we have $\lVert A (\zeta - A)^{-1} \rVert < 1 + \delta$. Let $\delta = (\nu -b)/(a+b) $ and let $\zeta > \frac{2 + \delta}{1 + \delta}\kappa$ be such that $\lVert (\zeta - A)^{-1} \rVert < \delta $. Let $x\in\dom (A)$ be arbitrary and let $y = (\zeta -A)x$. Then \[ \lVert \delta A x \rVert \le a \lVert (\zeta -A)^{-1}y \rVert + b \lVert A (\zeta - A)^{-1}y \rVert < \nu \lVert (\zeta - A)x \rVert . \] Let $\chi\ge \zeta$ be arbitrary.
Then from $\langle x + (\chi - \zeta) (\zeta -A)^{-1}x, x \rangle \ge 1$ for all $x\in \cH$ such that $\lVert x \rVert = 1 $ it follows $\lVert (I + (\chi - \zeta)(\zeta -A)^{-1})^{-1} \rVert \le 1$ and hence \[ \lVert \delta A (\chi -A)^{-1} \rVert = \lVert \delta A (\zeta -A)^{-1} (I + (\chi - \zeta)(\zeta -A)^{-1})^{-1} \rVert < \nu. \] This implies $\chi \in \rho (A +\delta A)$, i.e.\ $\max \sigma (A + \delta A) < \zeta$. By the second resolvent identity we have for all $s\in \mathbb{R}$ \begin{align*} \lVert (\zeta + \mathrm{i} s - A_\delta)^{-1} - (\zeta + \mathrm{i} s - A)^{-1} \rVert & = \lVert (\zeta + \mathrm{i} s - A_\delta)^{-1} \delta A (\zeta + \mathrm{i} s - A)^{-1} \rVert \\ &\le \lVert (\zeta + \mathrm{i} s - A_\delta)^{-1} \rVert \lVert \delta A (\zeta + \mathrm{i} s - A)^{-1} \rVert \\ & = \operatorname{dist}(\zeta + \mathrm{i} s , \sigma (A_\delta))^{-1}\lVert \delta A (\zeta + \mathrm{i} s - A)^{-1} \rVert. \end{align*} For all $s\ne 0$ we have \begin{align*} \lVert \delta A (\zeta + \mathrm{i} s - A)^{-1} \rVert &= \lVert \delta A \left( (I + \mathrm{i} s (\zeta - A)^{-1}) (\zeta - A) \right)^{-1}\rVert \\ & = \lVert \delta A (\zeta - A)^{-1} \left( I + \mathrm{i}s (\zeta - A)^{-1} \right)^{-1} \rVert \\ &< \frac{\nu}{\lvert s \rvert} \left\lVert \left( \frac{1}{\mathrm{i}s} I + (\zeta -A)^{-1} \right)^{-1} \right\rVert \\ & = \frac{\nu}{\lvert s \rvert} \operatorname{dist}\left( -\frac{1}{\mathrm{i} s}, \left( \zeta - \sigma(A) \right)^{-1} \right)^{-1} \le \frac{\nu}{\sqrt{s^2 +1}} \end{align*} but note that the final estimate holds also for $s=0$. Hence we finally obtain for all $s\in \mathbb{R}$ and $\xi = \zeta +1$ \[ \lVert (\xi + \mathrm{i} s - A_\delta)^{-1} - (\xi + \mathrm{i} s - A)^{-1} \rVert < \frac{\nu}{1 + s^2}, \] and hence the perturbation $\delta A$ satisfies the assumption (vi). \end{enumerate} \end{remark} \medskip \begin{proof}[Proof of Theorem~\ref{thm:sensitivity}] We first estimate the perturbation bound for $\opb$.
We define $\Xi := \left\{ \zeta + i s \colon s \in \mathbb{R} \right\}$, where $\zeta$ is the value from assumption (vi). Then $\Xi$ is contained in the resolvent sets of both $A$ and $A + \delta A$. Using the spectral calculus for generators of $C_0$-semigroups (see, for example, \cite{EngelN1999}), we obtain \begin{align*} \opb &= \alpha I + \frac{1}{2\pi i} \int_0^T \beta (t) \int_\Xi \euler^{2t \lambda}(\lambda - A)^{-1}\drm \lambda \, \drm t,\\ \opb + \delta \opb &= (\alpha + \delta \alpha) I + \frac{1}{2\pi i} \int_0^T (\beta (t) + \delta \beta (t)) \int_\Xi \euler^{2t \lambda}(\lambda - A - \delta A )^{-1}\drm \lambda \, \drm t. \end{align*} Hence, using Fubini's theorem and the second resolvent identity, we obtain \begin{multline} \label{eq:3_terms} \delta \opb = \delta \alpha I + \frac{1}{2\pi i} \int_\Xi (\lambda - A - \delta A )^{-1}\delta A (\lambda - A)^{-1} \int_0^T \beta(t) \euler^{2t \lambda} \drm t \, \drm \lambda \\ + \frac{1}{2\pi i} \int_0^T \delta \beta (t) \int_\Xi \euler^{2 t \lambda} (\lambda -A - \delta A)^{-1} \drm \lambda \, \drm t. \end{multline} From $\lvert \int_0^T \beta(t) \euler^{2t \lambda} \drm t\rvert \le \lVert \beta \rVert \frac{1}{2 \zeta } ( \euler^{2 T \zeta} -1 ) $ for $\lambda \in \Xi$, the norm of the second term of $\delta \opb$ can be estimated from above by \begin{align*} &\frac{ \euler^{2 T \zeta} -1 }{4 \zeta \pi} \lVert \beta \rVert \int_\Xi \lVert(\lambda - A - \delta A )^{-1} - (\lambda - A)^{-1} \rVert \drm \lambda \\ &= \frac{ \euler^{2 T \zeta} -1 }{4 \zeta \pi} \lVert \beta \rVert \left( \int_{\zeta - i R}^{\zeta + iR} + \int_{\Xi \setminus [\zeta-iR, \zeta + iR]}\right) \lVert(\lambda - A - \delta A )^{-1} - (\lambda - A)^{-1} \rVert \drm \lambda \\ & < \frac{\nu ( \euler^{2 T \zeta} - 1)}{2 \zeta \pi} \lVert \beta \rVert (R + R^{-1}) , \end{align*} where in the last inequality we have used \eqref{eq:deltaAcondition}.
From \begin{align*} \left\lVert \frac{1}{2\pi i} \int_0^T \delta \beta (t) \int_\Xi \euler^{2 t \lambda} (\lambda -A - \delta A)^{-1} \drm \lambda \, \drm t \right\rVert &= \left\lVert \int_0^T \delta \beta (t) S_{2t}^{\delta} \drm t \right\rVert \le \int_0^T \lvert \delta \beta (t) \rvert \left\lVert S_{2t}^{\delta} \right\rVert \drm t \\ &< \nu \int_0^T \euler^{2 t \zeta} \drm t = \nu \frac{\euler^{2 T \zeta} - 1}{2 \zeta}, \end{align*} where we used $\lVert S_{2t}^{\delta} \rVert \le \euler^{2 t \zeta}$, which follows from $\max \sigma (A + \delta A) < \zeta$, we obtain that the third term in \eqref{eq:3_terms} has the upper bound $\nu ( \euler^{2 T \zeta} - 1) / (2 \zeta)$. Altogether we obtain \begin{equation*} \lVert \delta \opb \rVert < \nu \left( 1 + \frac{\euler^{2 T \zeta} - 1}{4 \zeta \pi} \left( 2 \lVert \beta \rVert (R + R^{-1}) + 2\pi \right) \right). \end{equation*} To obtain an upper bound for $\lVert \delta\vb \rVert $, we use the same steps as for $\delta \opb$, pulling out the $L^2$-functions using H\"older's inequality. For $t \ge 0$ we define the operator function \[ D(t) = \frac{1}{2\pi i}\int_\Xi \euler^{t \lambda} \left( (\lambda - A - \delta A)^{-1} - (\lambda - A)^{-1} \right)\drm \lambda. \] Note that $D(t) = S_t^{\delta} - S_t$, hence $D(t)$ is precisely the perturbation of $S_t$. We calculate \[ \lVert D(t) \rVert < \nu \pi^{-1} \mathrm{e}^{\zeta t} (R + R^{-1}) \text{ for all } t \ge 0. \] We first estimate \[ \delta \whom = \delta w - \int_0^{\boldsymbol{\cdot}} D(\tau) f(\tau) \drm \tau - \frac{1}{2\pi i}\int_0^{\boldsymbol{\cdot}} \int_\Xi \euler^{\tau \lambda} (\lambda - A - \delta A)^{-1} \delta f(\tau) \drm \lambda \, \drm \tau .
\] By using H\"older's inequality we estimate \begin{align*} \lVert \delta \whom(t) \rVert &\le \lVert \delta w(t)\rVert + \left\lVert \int_0^t D(\tau) f(\tau) \drm \tau \right\rVert \\ &\qquad + \frac{1}{2\pi}\left\lVert \int_0^t \int_\Xi \euler^{\tau \lambda} (\lambda - A - \delta A)^{-1} \delta f(\tau) \drm \lambda \, \drm \tau \right\rVert \\ &< \lVert \delta w(t) \rVert + \frac{ \nu}{\pi} (R + R^{-1}) \lVert f \rVert \sqrt{\frac{ \mathrm{e}^{2 t \zeta }-1}{2 \zeta}} + \frac{\nu}{2 \pi} \sqrt{\frac{ \mathrm{e}^{2 t \zeta}-1}{2 \zeta}} . \end{align*} This implies \[ \lVert \delta \whom \rVert < \sqrt{2}\nu \left( 1 + \frac{1}{8 \zeta \pi^2}\left( 2 (R + R^{-1} + 1) \right)^2 \lVert f \rVert \left( \frac{\mathrm{e}^{2T \zeta}-1}{2 \zeta} - T \right) \right)^{1/2} . \] Now we are in a position to estimate $\delta\vb$. Since \begin{multline*} \delta\vb = \int_0^T \beta(t) D(T+t) \whom(t)\drm t + \int_0^T \beta(t) D(T+t) \delta \whom(t)\drm t \\ + \int_0^T \delta\beta(t) D(T+t) \whom(t)\drm t + \int_0^T \delta\beta(t) D(T+t) \delta \whom(t)\drm t \\ + \frac{1}{2 \pi i} \int_0^T \beta(t) \int_\Xi \euler^{(T+t) \lambda} (\lambda - A)^{-1} \delta \whom(t)\drm \lambda \,\drm t \\ + \frac{1}{2 \pi i} \int_0^T \delta\beta(t) \int_\Xi \euler^{(T+t) \lambda} (\lambda - A)^{-1} \whom(t)\drm \lambda \,\drm t \\ + \frac{1}{2 \pi i} \int_0^T \delta\beta(t) \int_\Xi \euler^{(T+t) \lambda} (\lambda - A)^{-1} \delta \whom(t)\drm \lambda \,\drm t \end{multline*} we can again estimate $\lVert \delta\vb \rVert$ using the techniques from above and obtain \begin{align*} \lVert \delta\vb \rVert &\le C \nu, \end{align*} where $C$ is a constant which does not depend on $ \nu$ and which may change from line to line. Similarly we obtain \[ \lVert \delta \yhoms \rVert \le C \nu. \] We have thus shown that for $\nu < 1$ each of $\lVert \delta \opb \rVert $, $\lVert \delta\vb \rVert $ and $\lVert \delta \yhoms \rVert $ admits an upper bound of the form $C \nu$.
We have also proved $\lVert D(t) \rVert < C(t) \nu$. As the solution is given in terms of the linear systems \eqref{eq:linsym-mu} and \eqref{eq:linsym-final}, to prove the claim of the theorem it is sufficient to show that the solutions of these systems are stable under perturbations. First note that for a chosen $\mu$ the operator on the left hand side of \eqref{eq:linsym-mu} and \eqref{eq:linsym-final} is bounded and strictly positive, and that the same holds for its perturbation. Moreover, from the estimates obtained above, we see that the perturbation of the left hand side of \eqref{eq:linsym-mu} is given by \[ \mu D(2T) + \delta \opb \] and the perturbation of the right hand side of \eqref{eq:linsym-mu} is given by \[\mu S_{2T} \delta \yhoms + \mu D(2T)(\yhoms + \delta \yhoms) + S_T \delta\vb + D(T)(\vb + \delta\vb). \] Hence the norms of the perturbations of both the left and the right hand side of \eqref{eq:linsym-mu} are smaller than $C \nu$ if $\nu$ is small enough. This allows us to apply the standard perturbation-theoretic results for the solutions of linear systems (\cite{Nashed1976}, see also \cite[Proposition 4.2]{ChenXue1997}) and conclude that the perturbed system \eqref{eq:linsym-mu} has the solution $x + \delta x$ with $\delta x$ satisfying $\lVert \delta x\rVert < C \nu $. Let $\Phi_{\delta}$ be the perturbed function $\Phi$, $\delta \Phi = \Phi_{\delta} - \Phi$, and let $\mu_{\epsilon}^{\delta}$ be the solution of the equation $\Phi_{\delta}(\mu) = \epsilon$. Then $\Phi_{\delta} (\mu_{\epsilon}^{\delta}) = \lVert \yhoms + \delta\yhoms - x_{\epsilon} - \delta x_{\epsilon} \rVert $, where $x_{\epsilon} + \delta x_{\epsilon}$ is the solution of the perturbed system \eqref{eq:linsym-mu} with $\mu = \mu_{\epsilon}^{\delta}$. Using the obtained bounds on the perturbations, it follows that $ \lvert \delta\Phi (\mu_{\epsilon}^{\delta}) \rvert < C \nu $.
Hence \[ \lvert \Phi (\mu_{\epsilon}^{\delta}) - \Phi (\mu_{\epsilon}) \rvert = \lvert \Phi_{\delta} (\mu_{\epsilon}^{\delta}) - \delta \Phi (\mu_{\epsilon}^{\delta}) - \epsilon \rvert = \lvert \delta \Phi (\mu_{\epsilon}^{\delta}) \rvert < C \nu. \] Since $\Phi$ is a continuous and monotone function, it follows that $\Phi^{-1}$ is continuous, hence we obtain $\lvert \delta \muhat\rvert < C \nu$. \end{proof} \section{Approximations of semigroups and related operator functions}\label{Sec:rat} In this section we review rational approximation methods for a semigroup $S_t$ whose generator $A$ is a self-adjoint operator with upper bound $\kappa \le 0$. Our approach to constructing numerical approximations can be applied to non-stable systems (those for which $\kappa > 0$) as well, but such systems are not included among our examples; we just note that in the case of a non-stable system the estimates include a multiplicative constant which grows exponentially with $\kappa$. We say that the function $r$ is a type $(n,m)$ rational function, where $n$ and $m$ are nonnegative integers, if there are polynomials $p$ and $q$ of degrees at most $n$ and $m$, respectively, such that $r=p/q$. Here the degrees $n$ and $m$ need not be optimal. Given a rational function $r$ we have \begin{equation}\label{sesq} \aligned (v,S_t v)-(v,r(A)v)&=\int^{\kappa}_{-\infty}\big(\mathrm{e}^{t\lambda}- r(\lambda)\big)\,\mathrm{d}(E(\lambda)v,v), \endaligned \end{equation} where $E(\cdot)$ denotes the spectral measure of the self-adjoint operator $A$ \cite{ReedSimonII,Kato1995}. The support of the spectral measure of any of the operators $tA$, for $t>0$, is contained in $(-\infty, 0]$ and, computing as in \eqref{sesq}, we obtain the estimate \begin{equation}\label{SC} \lVert g(A)v-r(A)v \rVert \leq \lVert g-r \rVert_{L^\infty(-\infty, 0]} \lVert v \rVert , \end{equation} where the function $g$ is measurable with respect to the spectral measure of $A$.
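In finite dimensions the estimate \eqref{SC} can be observed directly. The sketch below uses a random symmetric negative semidefinite matrix for $A$, the symbol $g(\lambda)=\mathrm{e}^{\lambda}$, and, purely for illustration, the $(1,1)$ Pad\'e approximant of the exponential as $r$ (an arbitrary choice, not the optimal rational approximation discussed next); $r(A)v$ is evaluated with a single linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
B = rng.standard_normal((n, n))
A = -(B @ B.T)                  # symmetric with spectrum in (-infty, 0]
lam, Q = np.linalg.eigh(A)
v = rng.standard_normal(n)

def g(x):                       # semigroup symbol e^{t x} with t = 1
    return np.exp(x)

def r(x):                       # (1,1) Pade approximant of e^x (illustrative only)
    return (1 + x / 2) / (1 - x / 2)

gA_v = Q @ (g(lam) * (Q.T @ v))                             # g(A)v by spectral calculus
rA_v = np.linalg.solve(np.eye(n) - A / 2, v + (A @ v) / 2)  # r(A)v via one resolvent solve

# the scalar sup error over the spectrum controls the operator error
sup_err = np.max(np.abs(g(lam) - r(lam)))
err = np.linalg.norm(gA_v - rA_v)
print(err <= sup_err * np.linalg.norm(v) + 1e-12)
```

The solve computes $(I - A/2)^{-1}(I + A/2)v$, which equals $r(A)v$ since rational functions of $A$ commute.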
When considering numerical efficiency, a key question is whether there exists a rational approximation of $g$ with $n$ small. For the case in which $g(z)=\mathrm{e}^{z}$, it is known, see \cite{CODY196950,Trefethen2006}, that for each $n$ there exists a unique type $(n,n)$ rational function $r^*_n$ which minimizes $\lVert g-\cdot \rVert_{L^\infty(-\infty,0]}$. Furthermore, there exists a constant $C>0$, independent of $n$, such that $r^*_n$ satisfies \begin{equation}\label{Halphen} \aligned \lVert g-&r^*_n \rVert_{L^\infty(-\infty,0]}\\&=\min\{ \lVert g-r \rVert_{L^\infty(-\infty,0]}~:~r \text{ is a type } (n,n) \text{ rational function} \}\\&\leq\frac{C}{H^n}\leq\frac{C}{9.28903^n} . \endaligned\end{equation} The number $H$ is known under the name of the Halphen constant, see \cite{STAHL2009821}. Further, $r^*_n$ does not have zero-pole pairs appearing on the negative real axis. For the error analysis of approximations of semigroups it is particularly convenient if the rational function is representable in the partial fractions form. For constants $r_0$ and $r_i$, $\zeta_i$, $i=1,\cdots,d$, the expression \[ \widehat{r}(z)=r_0+\frac{r_1}{z-\zeta_{1}}+\cdots+\frac{r_d}{z-\zeta_d} \] is a partial fractions expansion of the rational function $\widehat{r}$. It has been shown that for $g(z)=\mathrm{e}^{z}$ one can construct, see \cite{Trefethen2006}, a partial fractions expansion of a type $(n,n)$ rational function $\widehat{r}_n$ such that $\lVert g-\widehat{r}_n \rVert_{L^\infty(-\infty,0]}\leq C 3.2^{-n}$. The constant $C>0$ is independent of $n$. The poles $\zeta_i$ are contained on a hyperbola in the complex plane and the weights are defined by applying an $n$ point quadrature rule to the Cauchy integral representation of the exponential function with this hyperbola as the contour.
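As a small concrete illustration of the partial fractions form, the $(1,1)$ Pad\'e approximant of $\mathrm{e}^z$ admits the hand-computed expansion $(2+z)/(2-z) = -1 - 4/(z-2)$, i.e.\ $d=1$, $r_0=-1$, $r_1=-4$, $\zeta_1=2$ (the approximants of \cite{Trefethen2006} are instead obtained by contour quadrature, with poles on a hyperbola):

```python
import numpy as np

def r_ratio(z):
    """(1,1) Pade approximant of e^z written as a ratio p/q."""
    return (1 + z / 2) / (1 - z / 2)

def r_pf(z):
    """The same function in partial fractions (pole-residue) form:
    (2 + z)/(2 - z) = -1 - 4/(z - 2), so r0 = -1, r1 = -4, zeta1 = 2."""
    return -1 - 4 / (z - 2)

z = np.linspace(-40.0, 0.0, 401)
print(np.allclose(r_ratio(z), r_pf(z)))   # the two forms agree on (-infty, 0]
```

Applied to an operator, the pole-residue form turns the evaluation of $\widehat{r}(A)v$ into $d$ mutually independent resolvent solves, one per pole.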
\subsection{Rational function fitting} \label{sect:rff} To approximate solutions of the constrained parabolic control problem, we will need rational approximations of slightly more general functions. Let us first note that the optimal approximation result \eqref{Halphen} can be extended, see \cite{STAHL2009821}, in a slightly modified form to the class of perturbed exponential functions $g$ which can be represented as \begin{equation*} g(x)=u_0(x)+u_1(x)\mathrm{e}^{ax} \end{equation*} where $u_0$ and $u_1\neq 0$ are arbitrary rational functions and $a<0$. According to \cite[Theorem 1]{STAHL2009821}, for any $n\in\NN$ and a chosen but fixed integer $k$ such that $n-k\geq 0$ there exists a unique rational function $r_{n,n+k}^*$ such that \[ r_{n,n+k}^*=\text{arg min}\{\lVert g-r \rVert_{L^\infty (-\infty , 0]}~:~ r~\text{ is a rational function of type } (n,n+k)\} \] and $\lVert g-r^*_{n,n+k} \rVert_{L^\infty(-\infty,0]}\leq C 9.28903^{-n}$. Further, the results of \cite{CODY196950,STAHL2009821} are purely existential. A way to construct a rational approximation satisfying \eqref{Halphen} is to transform the interval $(-1,1]$ to $(-\infty,0]$ and then apply the contour integration technique to the transformed problem, see \cite{Trefethen2006}. This can be achieved by the Moebius transformation $m(z)=9(z-1)/(z+1)$, see \cite{Trefethen2006}. The inverse transformation to $m$ is given by the formula $m^{-1}(z)=-(z+9)/(z-9)$ and it maps $\soi{-\infty}{0}$ to $\soi{-1}{1}$. The function to approximate is then $g(z)=\mathrm{e}^{m(z)}:(-1,1]\to\mathbb{R}$, and the rational function which approximates $\mathrm{e}^z$ is obtained by composing the rational approximant of $g$ with the inverse Moebius transformation. We first choose a finite contour looping around the interval $(-1,1]$; its points are then mapped by the Moebius transformation onto a curve looping around the infinite interval $(-\infty,0]$. Let now $g_1$ and $g_2$ be perturbed exponential functions.
We are interested in finding type $(n,n)$ rational approximations of functions of the form $g_1+g_2$, $g_1 g_2$, $g_1/g_2$ and $g_i\circ m$. Obviously, combining rational approximations $r_i$ of $g_i$ is a natural first idea. However, the rational functions $r_\diamond=r_1+r_2$, $r_\diamond=r_1 r_2$ or $r_\diamond=r_1/r_2$ will in general be of a different (component-wise larger) type. We can however use an approximation approach to truncate the product, sum or quotient of two rational functions of the type $(n,n)$ to a rational function $\widetilde{r}_\diamond$ of the type $(n,n)$ which for given $\texttt{tol}>0$ and an interval $\ci{a}{b}$ satisfies the estimate $\lVert\widetilde{r}_\diamond-r_\diamond \rVert_{L^2\ci{a}{b}}\leq \texttt{tol} \lVert r_\diamond \rVert_{L^2\ci{a}{b}}$. To this end we use the award-winning \texttt{rkfit} algorithm from \cite{BK17}. This is the rational Krylov function fitting algorithm which implements the calculus of rational functions by working with a representation of a rational function as the transfer function of a pencil of Hessenberg matrices. It performs all of the aforementioned operations (addition, division, multiplication and composition with a Moebius transformation) stably using only floating point arithmetic. According to \cite{BK17}, given a tolerance $\mathtt{tol}$ and a perturbed exponential function $g(x)=u_0(x)+u_1(x)\mathrm{e}^{ax}$, $a<0$, such that $g\in L^2(-\infty,0]$, the \texttt{rkfit} algorithm produces a rational function \begin{equation}\label{pf} r_{RK}(x)=r_0+\frac{r_1}{x-\tilde{\zeta}_1}+\cdots+\frac{r_d}{x-\tilde{\zeta}_d} , \end{equation} in the pole residue form, such that \begin{equation}\label{rkf} \lVert r_{RK}-g \rVert_{L^\infty(-\infty,0]}\leq\mathtt{tol} \lVert g \rVert_{L^2(-\infty,0]} .
\end{equation} We can now construct the operator $r_{RK}(A):=r_0 I+\sum_{i=1}^dr_i(A-\tilde{\zeta}_i)^{-1}$ such that \[ \lVert g(A)-r_{RK}(A) \rVert_{L(\cH)}\leq\texttt{tol} \lVert g \rVert_{L^2(-\infty,0]} . \] \subsection{Galerkin resolvent estimates} Computing the action of a function of an operator on a vector, for example $g(A)v=S_tv$, involves two steps. First, we approximate the function $g$ by a rational function on an interval containing the spectrum of the self-adjoint operator $A$. Second, we sample the resolvent $(z-A)^{-1}v$ at the poles of the rational function $r$. In what follows we restrict our considerations to operators of divergence type posed on a compact polygonal domain $\Omega\subset\R^2$. Many statements are algebraic in nature and hold in a more general setting. However, the interpolation results for piecewise polynomial functions and the regularity results for the domain of the operator are specific to the aforementioned class of operators. We approximate the action of the resolvent by selecting a finite dimensional subspace $\mathcal{V}_h\subset\dom(A^{1/2})$ and then forming the Galerkin projection of $A$ onto $\mathcal{V}_h$. According to \cite[Section 5]{Lasiecka1}, the Galerkin projection $A_h:\mathcal{V}_h\to\mathcal{V}_h$ is given by the formula \[ A_h=(A^{1/2}P_h)^*(A^{1/2}P_h), \] where $P_h$ is the orthogonal projection onto $\mathcal{V}_h$. Let $\mathcal{V}_h$ be the space of continuous, piecewise linear functions on $\Omega$ associated with a given triangular tessellation of $\Omega$. The resolvent estimate for $A$ using the Galerkin projection $A_h$ reads (see e.g. \cite{MR4011540} for technical details) \begin{equation}\label{res_est} \lVert (z-A)^{-1}v-(z-A_h)^{-1}v \rVert_{L^2(\Omega)}\leq Ch^{2\nu} \lVert v \rVert_{L^2(\Omega)} , \end{equation} for $h<h_0$ and $v\in \mathcal{V}_h$.
Here $\nu>0$ is a parameter depending on the regularity of the functions in $\dom(A)$, $h$ is the maximal diameter of a triangle in the chosen tessellation of $\Omega$, and $h_0$ denotes the minimal level of refinement from which the estimate holds. Note that the constants $C$ and $h_0$ do depend on $z$ in an explicit way but do not depend on $v$, see \cite{MR4011540}. We will, however, need this estimate only at the at most $d$ poles $\tilde{\zeta}_i$, $i=1,\cdots,d$, of the rational function $r_{RK}$ from \eqref{pf}, and so \[ \lVert r_{RK}(A)v-r_{RK}(A_h)v \rVert_{L^2(\Omega)}\leq d~C~h^{2\nu} \lVert v \rVert_{L^2(\Omega)} . \] Finally, let $g(x)=u_0(x)+u_1(x)\mathrm{e}^{ax}$ be a perturbed exponential function. Based on \eqref{SC}, for a given rational function $r_{RK}$ and $v\in \mathcal{V}_h$ we have the estimate \begin{align*} \lVert g(A)v &- r_{RK}(A_h)v \rVert_{L^2(\Omega)} \\ &\leq \lVert g(A)v-r_{RK}(A)v \rVert_{L^2(\Omega)}+ \lVert r_{RK}(A)v-r_{RK}(A_h)v \rVert_{L^2(\Omega)}\\ &\leq \lVert g-r_{RK} \rVert_{L^{\infty}(-\infty,0]} \lVert v \rVert_{L^2(\Omega)} +dCh^{2\nu} \lVert v \rVert_{L^2(\Omega)} . \end{align*} By choosing suitable $r_{RK}$ and $h$, the last estimate ensures a good approximation of $g(A)v$ based on a finite dimensional approximation of the operator $A$. \section{Numerical examples} \label{sec:examples} In this section we consider several constrained optimization problems in 1D and 2D. The problems are academic and are primarily chosen to test the efficiency of the developed approach. We compare our results with those obtained by other, already existing methods where such a comparison is possible. We also report timings as a means of getting an intuition about the efficiency of the implementation. The timings are reported for a workstation running an Intel Core i5 8600K at 3.60 GHz with 24 GB of DDR4 RAM. In all examples we take the weight function of the form $\beta=\Eins_{[T/3,2T/3]}$, while the desired trajectory $w$ is assumed to be time independent.
This implies that we want the optimal state to be close to $w$ for times $t$ between $T/3$ and $2T/3$, while no desired trajectory is prescribed outside this interval. With this setting and under the additional assumption that the operator $A$ is strictly negative, the operator $\opb$ and the vector $\vb$ from the main theorem can be computed explicitly as \begin{align*} \opb &= \alpha I - \frac{1}{2} A^{-1} S_{2T/3}(I - S_{2T/3}), \quad \vb = -A^{-1} S_{T/3} (I - S_{T/3})w . \end{align*} We can now use spectral calculus to represent, for example, the operator $\Psi$ as \[\aligned \Psi &= \alpha I + \int_{\RR}\mathrm{e}^{2T\lambda/3}~g(\lambda)\,\mathrm{d} E(\lambda)\\ &= \alpha I +\int_{\RR} \mathrm{e}^{2T\lambda/3} \frac{\mathrm{e}^{2T\lambda/3}-1}{2\lambda} \,\mathrm{d} E(\lambda) . \endaligned \] The function $\lambda \mapsto (\mathrm{e}^{2T\lambda/3}-1)/(2\lambda)$ is obviously a perturbed exponential function for which the rational approximation theory holds (there exists a small degree rational approximation). We can likewise use rational approximation theory to compute the vector $\vb$. In the numerical procedure the first step is to determine $\mu_\eps$, the solution to the equation $\Phi(\mu)=\eps$ (cf. \eqref{eq:Phi}). Taking into account the properties of $\Phi$ (given in Lemma \ref{lem:Phi}), the equation has a unique solution for every $\eps\in (0,\Phi(0))$. Any root finding algorithm based only on function evaluation can be used to robustly approximate the root $\mu_\eps$. We use Brent's method as implemented in Matlab's procedure \texttt{fzero}. We keep the convergence criterion for the root finding procedure below the discretization error for the finite element approximation. According to the resolvent analysis, the error in the approximation by a rational function is a lower order perturbation of the system, as compared to the discretization error.
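The root finding step can be sketched as follows. Here $\Phi$ is a toy monotone decreasing stand-in (evaluating the true $\Phi$ requires the operator computations described next), and plain bisection replaces Brent's method purely to keep the sketch dependency-free; the initial value matches the $\Phi(0)$ reported below for the 1D example.

```python
PHI0 = 1.0374                    # Phi(0) for the 1D example below (illustrative)

def Phi(mu):
    """Toy stand-in: continuous, strictly decreasing, Phi(0) = PHI0, -> 0."""
    return PHI0 / (1.0 + mu)

def solve_mu(eps, lo=0.0, hi=1.0, tol=1e-12):
    """Find mu with Phi(mu) = eps by bisection; Brent's method (fzero)
    plays this role in the actual computations."""
    while Phi(hi) > eps:         # grow the bracket: Phi is decreasing
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = 0.5 * PHI0
mu_eps = solve_mu(eps)
print(abs(Phi(mu_eps) - eps) < 1e-9)
```

Only function evaluations of $\Phi$ are used, which is exactly the property exploited above: each evaluation is itself a rational-approximation computation.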
The value of the function $\Phi(\mu)$ is computed by using the rational approximation and the spectral calculus \[\aligned (\mu S_{2T} &+ \opb)^{-1} \mu S_{2T} \yhoms \\ &=\int_{-\infty}^0\frac{\mu \mathrm{e}^{2T\lambda}}{\mu \mathrm{e}^{2T\lambda} +\alpha + \mathrm{e}^{2T\lambda/3}(\mathrm{e}^{2T\lambda/3}-1)/(2\lambda)} \,\mathrm{d} E(\lambda)\yhoms\\ &\approx r_0\yhoms +\sum_{i=1}^dr_i(\tilde{\zeta}_i-A)^{-1}\yhoms . \endaligned\] The function \[ g(\lambda)=\frac{\mu \mathrm{e}^{2T\lambda}}{\mu \mathrm{e}^{2T\lambda} +\alpha + \mathrm{e}^{2T\lambda/3}(\mathrm{e}^{2T\lambda/3}-1)/(2\lambda)} \] is approximated on $(-\infty,0]$ by a rational function $r$ with $18$ pole residue pairs. The approximation $r$ satisfies \eqref{rkf} with $\mathtt{tol}=10^{-15}$, which is the default for \texttt{rkfit}. The function $g$ is a quotient of perturbed exponential functions, for which we know that there is a high quality low degree rational approximation. We could compute a rational approximation of $g$ as a quotient of rational approximations; however, this rational function could have, in the worst case, double the degree of the best rational approximations of the numerator and denominator. Instead, as discussed in Section \ref{sect:rff}, we choose to approximate the function $g$ directly, as a means of keeping the degree of the approximating rational function low. Note that these approximations are obviously independent of $A$, and hold on $(-\infty,0]$. Once the equation for $\mu_\eps$ is solved, the optimal initial control $\uhat$ follows by \eqref{sol_form}. For its computation we employ the same procedure as the one used for the calculation of $\Phi(\mu)$. \subsection{1D heat equation} As the first test of the proposed method we consider the heat equation (with a variable coefficient) on $\Omega=[0,\pi]$, accompanied by homogeneous Dirichlet boundary conditions. The operator $A$ is taken of the form \[ A=-\partial_x((1+a \Eins_{\left[\gamma,\pi\right]})\partial_x) \] with $\gamma=2.2$.
The parameter $\gamma$ determines the contact point of two materials with different diffusivity coefficients. We consider two cases: \begin{enumerate} \item $a=0$, with $A$ being the isotropic Laplace operator; \item $a=-0.8$, resulting in a discontinuity of the diffusion coefficient at the point $\gamma$. \end{enumerate} The operator $A$ is discretized by conforming linear finite elements with $h=1/20$, and we use the lumped mass discretization in order to be able to utilize the optimized \texttt{rkfit} library. Besides the function $\beta$ determined at the beginning of this section, for this example we choose \begin{itemize} \item $\alpha = 10^{-4}$, \item final time $T=0.01$, \item desired trajectory $\omega = \Eins_{[\pi/5,2 \pi/5]}$, \item final target $y^* = \Eins_{[3 \pi/5, 4 \pi/5]}$, \item $f=0$ (homogeneous equation). \end{itemize} The choice of $\omega$ stimulates the state trajectory to be concentrated on the left part of the domain during the central time period, while at the final time the target $y^*$ requires it to be supported on the right hand side, at least for small values of the tolerance $\eps$. For the isotropic case $a=0$, the above setting coincides with Example 4.1 from \cite{LazarMP-17}. In this way we are able to compare our results with those obtained by a different method, based on the spectral decomposition of the Laplace operator. The example is performed for three values of the final tolerance \[\eps \in \{0.2, 0.5, 0.9\} \cdot \Phi(0),\] depicted in Figure \ref{Phi}, together with the corresponding values $\mu_\eps$ (solutions to equation \eqref{eq:Phi}) and the graph of the function $\Phi$ (in $\log-\log$ scale). The figure confirms the properties of the function $\Phi$ provided by Lemma \ref{lem:Phi}. In particular, its initial value $\Phi(0)$ coincides with \[\lVert \ytilde - y^* \rVert =1.0374,\] where $\ytilde$ is the optimal final state of the unconstrained problem.
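The discretization used in this example can be sketched as follows: a minimal Python assembly of the P1 stiffness matrix and lumped (diagonal) mass matrix for $A=-\partial_x((1+a \Eins_{[\gamma,\pi]})\partial_x)$ on a uniform grid (the grid size below and the eigenvalue check are illustrative; the actual computations combine such a discretization with \texttt{rkfit}). In the isotropic case $a=0$ the generalized eigenvalues of the pair approximate the eigenvalues $k^2$ of the Dirichlet Laplacian on $[0,\pi]$.

```python
import numpy as np

def assemble(a, n=200, gamma=2.2):
    """P1 stiffness K and lumped mass M (diagonal, as a vector) for
    A = -d/dx((1 + a*1_[gamma,pi]) d/dx) on [0, pi], homogeneous
    Dirichlet conditions, uniform grid with n subintervals."""
    h = np.pi / n
    x_mid = (np.arange(n) + 0.5) * h
    coeff = 1.0 + a * (x_mid >= gamma)          # piecewise diffusion coefficient
    K = np.zeros((n - 1, n - 1))
    for e in range(n):                          # element [x_e, x_{e+1}]
        k_loc = coeff[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for i in (0, 1):
            for j in (0, 1):
                gi, gj = e - 1 + i, e - 1 + j   # interior node indices
                if 0 <= gi < n - 1 and 0 <= gj < n - 1:
                    K[gi, gj] += k_loc[i, j]
    M = h * np.ones(n - 1)                      # lumped (diagonal) mass matrix
    return K, M

# isotropic case: -A is the Dirichlet Laplacian with eigenvalues k^2
K, M = assemble(a=0.0)
S = K / np.sqrt(M)[:, None] / np.sqrt(M)[None, :]   # M^{-1/2} K M^{-1/2}
lam = np.sort(np.linalg.eigvalsh(S))
print(lam[:3])   # close to [1, 4, 9]
```

The diagonal mass matrix makes $M^{-1/2} K M^{-1/2}$ symmetric, so the discrete operator stays self-adjoint and the spectral-calculus machinery above applies verbatim.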
The corresponding initial value $\utilde$, which is just the minimizer of the functional $J$, can also be obtained by standard methods of convex analysis. \begin{remark} Note that a gradient method for computing the minimizer of $J$ requires a solution of the forward problem for the parabolic equation. A basic step of any implicit method for the solution of a parabolic equation is the evaluation of a resolvent-like function. The convergence of a gradient method with Nesterov's acceleration is at best $1/n^2$, where $n$ is the number of forward problem solves. On the other hand, our method is based on utilizing functional calculus and the best rational function approximations of operator functions. In consequence, these operator functions can be approximated with a method that converges at the rate of at least $9^{-n}$, where $n$ is the number of the resolvent evaluations. This is a very crude comparison. However, under the very modest assumption that we need at least one evaluation of a resolvent function per forward problem solve, we clearly see a potential advantage of the rational function approach. This is the reason why such methods are becoming methods of choice for the solution of parabolic problems and also for numerically inverting the Laplace transform in the solution of inverse problems, see \cite{STAHL2009821}. \end{remark} \medskip The elapsed time to produce the plot, which included sampling $\Phi$ at $350$ points, was $12.97$ seconds, and it took $0.36$ seconds to compute $\Phi(0)$ alone. \begin{figure}[ht] \centering \subfloat[{Anisotropic diffusion}]{\includegraphics[width = 5cm]{lloglog_09_uinsotropic.png}} \subfloat[{Isotropic diffusion}]{\includegraphics[width = 5cm]{lloglog_09_isotropic.png}} \caption{The function $\Phi$, the chosen values of $\eps$ and the corresponding $\mu_\eps$. } \label{Phi} \end{figure} The results in the isotropic case for the prescribed values of $\eps$ are presented in Figure \ref{Fig1D}.
\begin{figure}[ht] \centering \includegraphics[width = 4cm]{t_01_00_lap.png} \includegraphics[width = 4cm]{t_01_05_lap.png} \includegraphics[width = 4cm]{t_01_T_lap.png}\\ \includegraphics[width = 4cm]{t_05_00_lap.png} \includegraphics[width = 4cm]{t_05_05_lap.png} \includegraphics[width = 4cm]{t_05_T_lap.png}\\ \includegraphics[width = 4cm]{t_09_00_lap.png} \includegraphics[width = 4cm]{t_09_05_lap.png} \includegraphics[width = 4cm]{t_09_T_lap.png}\\ \caption{Example 5.1, isotropic case. The initial control $u=y(0)$ (left), the computed solution at time $t=T/2$ compared with the desired trajectory $\omega$ (middle), and the optimal final state at $t=T$ compared with the target $y^*$ (right) for three different values of the tolerance $\epsilon$.} \label{Fig1D} \end{figure} When $\eps$ is small, the initial mass is concentrated on the support of the target $y^*$, in order to steer the system close to it at the final time. In contrast, for large values of $\eps$, the initial control is concentrated on the left. In this way the solution stays close to the desired trajectory $\omega$ in the middle part of the time interval, during which the distributed cost $\beta$ is active. Finally, the intermediate value of $\eps$ is a trade-off between the optimisation of the cost functional $J$ and the requirement to hit the final target with the given tolerance. Besides agreeing with the intuition, the results completely coincide with those obtained in \cite[Example 4.1]{LazarMP-17}. This provides the first confirmation of the method proposed in this article. Furthermore, we note that for increased tolerance $\eps$ the optimal control resembles the solution of the unconstrained problem $\utilde$ (Figure \ref{sol-unconstr}), where the latter is just a minimizer of the functional $J$. This is expected, as for large values of $\eps$ the solution is less affected by the prescribed target $y^*$, while the unconstrained problem is completely independent of it.
\begin{figure}[ht] \centering \includegraphics[width = 6cm]{umin_inhomg.png} \caption{Example 5.1, anisotropic case. The plot of $\utilde$ and $y^*$.} \label{sol-unconstr} \end{figure} The results for the discontinuous diffusion and for the same range of the final tolerance are presented in Figure \ref{Fig1D-disc}. \begin{figure}[ht] \centering \includegraphics[width = 4cm]{t_01_00_unisotropic.png} \includegraphics[width = 4cm]{t_01_05_unisotropic.png} \includegraphics[width = 4cm]{t_01_T_unisotropic.png}\\ \includegraphics[width = 4cm]{t_05_00_unisotropic.png} \includegraphics[width = 4cm]{t_05_05_unisotropic.png} \includegraphics[width = 4cm]{t_05_T_unisotropic.png}\\ \includegraphics[width = 4cm]{t_09_00_unisotropic.png} \includegraphics[width = 4cm]{t_09_05_unisotropic.png} \includegraphics[width = 4cm]{t_09_T_unisotropic.png}\\ \caption{Example 5.1, discontinuous diffusion. The initial control $u=y(0)$ (left), the computed solution at time $t=T/2$ compared with the desired trajectory $\omega$ (middle), and the optimal final state at $t=T$ compared with the target $y^*$ (right) for three different values of the tolerance $\epsilon$.} \label{Fig1D-disc} \end{figure} In their main features, the results coincide with those obtained in the case of the constant diffusion coefficient. The novelty is the broken symmetry of the solution in the right part of the domain, where the discontinuity occurs. As a consequence, the center of initial mass is slightly shifted rightward, where the diffusion processes are slower. This is logical, bearing in mind that in this region the initial mass can better approximate the characteristic function (of the support of $y^*$) over a longer period of time, as the small diffusion rate will not modify its form significantly. \subsection{2D heat equation on an irregular domain} In the next example we repeat the same calculation in the 2D setting.
To this end we use the Dirichlet Laplace operator defined on the L-shaped domain $\Omega=\left[-1,1\right]^2\setminus(\left[-1,0\right]\times\left[0,1\right])$. We use Lagrange P1 elements (i.e.\ we approximate using piecewise linear and continuous functions). We have used a shape regular mesh with $h=1/30$, again with the lumped mass discretization of the semigroup, and we choose $T=1/20$. Due to the reentrant corner of the L-shaped domain we have a loss of regularity of the functions in $\dom(A)$, and so the resolvent estimate \eqref{res_est} holds with some $\nu$, $0<\nu<1$. In the case of $H^2$-regular solutions we would have $\nu=1$. For the target data we choose \begin{itemize} \item $\omega(x)=\Eins_{\lVert x-x_0 \rVert_1\leq 0.2}$, \item $y^*(x)=\mathrm{e}^{-20 \lVert x-x_1 \rVert^2} + \mathrm{e}^{-20 \lVert x-x_2 \rVert^2} + \mathrm{e}^{-30 \lVert x-x_3 \rVert^2}$, \end{itemize} with $x_0=(-0.5,-0.5)$, $x_1=(0.5,0.5)$, $x_2=(0.6,0.1)$ and $x_3=(0.8,0.4)$ (Figure \ref{2D-tar-fig}). The other parameters are the same as in the previous example. \begin{figure}[ht] \centering \includegraphics[width = 6cm]{eps_0_T.png} \caption{Example 5.2. The prescribed final target $y^*$.} \label{2D-tar-fig} \end{figure} The results for the three values of the final tolerance $\eps=[0.1, 0.5, 0.9] \Phi(0)$ are displayed in Figure \ref{2D-fig}. We show snapshots of the solutions at $t=0, T/2, T$.
\begin{figure}[ht] \centering \includegraphics[width = 4.2cm]{2D_01_00.png} \begin{tikzpicture} \node[inner sep=0pt] (initial) at (0,0) {\includegraphics[width = 4.2cm]{2D_01_05.png}}; \draw[red,thick,dashed] (-0.9,-0.77) circle (0.31cm); \end{tikzpicture} \includegraphics[width = 4.2cm]{2D_01_T.png}\\ \includegraphics[width = 4.2cm]{2D_05_00.png} \begin{tikzpicture} \node[inner sep=0pt] (initial) at (0,0) {\includegraphics[width = 4.2cm]{2D_05_05.png}}; \draw[red,thick,dashed] (-0.9,-0.77) circle (0.31cm); \end{tikzpicture} \includegraphics[width = 4.2cm]{2D_05_T.png}\\ \includegraphics[width = 4.2cm]{2D_09_00.png} \begin{tikzpicture} \node[inner sep=0pt] (initial) at (0,0) {\includegraphics[width = 4.2cm]{2D_09_05.png}}; \draw[red,thick,dashed] (-0.9,-0.77) circle (0.31cm); \end{tikzpicture} \includegraphics[width = 4.2cm]{2D_09_T.png}\\ \caption{Example 5.2. The initial control $u=y(0)$ (left), the computed solution at time $t=T/2$ compared with the desired trajectory $\omega$ (middle), and the optimal final state (right) at $t=T$ for three different values of the tolerance $\epsilon$. The red dashed circle marks the constraint $\omega$ on the trajectory.} \label{2D-fig} \end{figure} The first row depicts the evolution of the state for a small tolerance $\eps$. The initial control steers the system close to the prescribed target $y^*$ (cf. Figure \ref{2D-tar-fig}) at the final time, while there is no coincidence with $\omega$ in the intermediate period. For the tolerance equal to $0.5$ of the range of $\Phi$, equal importance is assigned to both $\omega$ and the final state. The largest tolerance allows the solution to optimize the given cost functional almost independently of the prescribed target $y^*$. Essentially, the results exhibit the same behavior as those obtained in the previous examples. The elapsed time for computing $\Psi(0)$ -- the unconstrained problem -- is $0.9828$ seconds.
This demonstrates the efficiency and flexibility of the method in 2D and, in particular, in the case of irregular domains. We chose to report the timing for $\Psi(0)$ as an example, since in this case it is possible to compute the value of $\Psi$ with other methods, such as those based on gradient optimization. \section{Conclusion} In this paper we have constructed and implemented a numerical algorithm for a constrained optimal control problem. The problem consists of identifying an initial datum that minimizes a given cost functional and steers the system at the final time to within a prescribed distance from the target. The algorithm results in an (almost explicit) formula for the solution, expressed in terms of the operator governing the system. The formula itself was derived previously (cf. \cite{LazarMP-17}), but its implementation was based on spectral decomposition, which requires knowledge or construction of eigenfunctions of the operator. The main novelty of this article is twofold. Firstly, we provide a complete quantified sensitivity analysis of the solution with respect to all the data entering the problem. In particular, it implies a good approximation of the solution in cases where the operator or the external source are not completely determined. Secondly, for the numerical implementation we explore efficient Krylov subspace techniques that allow us to approximate a complex function of an operator by a series of linear problems. We provide a priori estimates for the approximation that are not sensitive to any particular spatial discretization, nor to a matrix representation of the operator $A$. The theoretical results are confirmed by numerical examples. The first and simplest example coincides with the one analysed in \cite{LazarMP-17}, and the results obtained with the two approaches are in complete agreement.
The subsequent, more complex examples confirm the good performance of the algorithm in the case of operators with variable coefficients and acting on irregular domains. The proposed approach can be generalised to other optimal control problems. The first step in this direction would be to consider a distributed control problem, i.e.\ one in which a control enters the equation through a non-homogeneous term and is active along the entire time frame. This would also allow for boundary control problems which, by using the classical approach of Fattorini \cite{F68}, can be expressed as distributed ones. The second generalisation would consider different norms entering the cost functional. In particular, it would be tempting to include $L^1$-terms in the cost, since these introduce sparsity into the control. Of course, such a generalisation requires a more subtle theoretical analysis, as the cost functional is not differentiable in this case. This approach would also make it possible to consider other non-smooth, convex functionals. \section*{Acknowledgment} The work of L.G.\ has been supported by Hrvatska Zaklada za Znanost (Croatian Science Foundation) under the grant IP-2019-04-6268 -- Randomized low rank algorithms and applications to parameter dependent problems. The work of M.L.\ and I.N.\ has been supported by Hrvatska Zaklada za Znanost (Croatian Science Foundation) under the grant IP-2016-06-2468 -- Control of Dynamical Systems.
TITLE: Clarification in Covering Radius Problem QUESTION [0 upvotes]: I am having difficulty understanding this particular definition of the Covering Radius Problem - "Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice". Can anyone please explain the meaning of "largest distance from any vector to the lattice"? How is the distance of a vector calculated from a lattice rather than from another vector? REPLY [1 votes]: If you have a subset $A\subseteq \mathbb R^n$ and a point $x\in \mathbb R^n$, you can define $d(x,A)$ as $\inf\{d(x,a) \mid a\in A\}$. Since lattices are closed sets, the infimum is attained, so we can consider $\min\{d(x,a) \mid a\in A\}$. The covering radius is then the largest such distance, $\sup_x d(x,A)$, taken over all points $x$ of the ambient space.
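The point-to-lattice distance in the reply can be illustrated numerically. Below is a minimal sketch (not part of the original post): `dist_to_lattice` and its `radius` parameter are hypothetical names, and the brute-force search over a bounded box of integer coefficients is only a toy stand-in for real closest-vector algorithms.

```python
# Toy sketch: distance from a point x to the lattice L(B) = {sum_i k_i b_i : k_i integer}.
# d(x, L) = min over lattice points v of ||x - v||. We brute-force integer
# coefficients in [-radius, radius]^n, which is only valid when x is near the origin.
import itertools
import math

def dist_to_lattice(x, basis, radius=3):
    """Approximate d(x, L) by searching integer combinations in a bounded box."""
    n = len(basis)
    best = math.inf
    for k in itertools.product(range(-radius, radius + 1), repeat=n):
        # lattice point v = sum_i k_i * b_i, coordinate by coordinate
        v = [sum(k[i] * basis[i][j] for i in range(n)) for j in range(len(x))]
        best = min(best, math.dist(x, v))
    return best

# Example: for the integer lattice Z^2, the "deep hole" (0.5, 0.5) realizes the
# covering radius sqrt(2)/2 -- no point of the plane is farther from the lattice.
print(dist_to_lattice([0.5, 0.5], [[1, 0], [0, 1]]))  # → 0.7071067811865476
```

The covering radius itself would then be the supremum of this quantity over all points `x`, which by periodicity can be searched over one fundamental domain.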
TITLE: A convergence proof, trouble parsing the assignment QUESTION [1 upvotes]: The assignment Let $A \subset \mathbb C$. Show that $z \in A^c$ (the closure of $A$), if and only if there exists a sequence $(z_n)$, so that $z_n \in A$ for all $n \geq 1$ and $z_n \to z$. The problem My issue with this assignment is that it's so confusingly written that I'm not sure what I'm supposed to prove or allowed to assume going in either direction. For example, going from left to right I know that I can assume that $z \in A^c$, but I can't really prove anything simply based on that, and there is clearly stuff on the other side of the equivalence that I probably need to complete the proof. So, what am I supposed to prove and what am I allowed to assume? REPLY [1 votes]: Here is a rough outline for you to fill the details in: Let $z \in A^c$ (the smallest closed subset of $\mathbb C$ that contains $A$). Then we have, for any $n \in \mathbb N$, that $B_{1/n}(z) \cap A \neq \emptyset$ (here $B_\epsilon(a)$ is the open ball with centre $a$ and radius $\epsilon$). (Why is this true?) We may now pick $z_n \in B_{1/n}(z) \cap A$. Then the sequence $(z_n \mid n \in \mathbb N)$ consists of points in $A$ and converges to $z$. (Again, you need to justify this.) Conversely, suppose $(z_n \mid n \in \mathbb N)$ is a sequence of points in $A$ that converges to $z$. We need to show that $z \in A^c$. Suppose not. Then there is some open set $O \subseteq \mathbb C$ such that $z \in O$ and $O \cap A = \emptyset$. (Why?) But we have that for sufficiently large $n$, $z_n \in O \cap A$ (Why?). Contradiction!
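The first half of the outline ("pick $z_n \in B_{1/n}(z) \cap A$") can be made concrete with a toy computation. This is an illustrative sketch only, not part of the proof; `pick_in_ball_and_A` is a hypothetical helper for the specific case where $A$ is the open unit disk in $\mathbb C$ and $z = 1$ is a boundary point.

```python
# For A = {|w| < 1} and z = 1, produce z_n in B_{1/n}(z) ∩ A: walk from z toward 0
# by 1/(2n), which stays inside the open disk and within distance 1/n of z.
def pick_in_ball_and_A(z, n):
    """Return a point of the open unit disk within distance 1/n of z = 1."""
    return z * (1 - 1 / (2 * n))

z = 1 + 0j
seq = [pick_in_ball_and_A(z, n) for n in range(1, 6)]
assert all(abs(w) < 1 for w in seq)   # each z_n lies in A
assert all(abs(w - z) < 1 / n for n, w in enumerate(seq, start=1))  # z_n → z
```

The sequence converges to $z$, witnessing $z \in A^c$ exactly as the first direction of the outline describes.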
TITLE: Finding $x$ coordinates on a rectangle if Rectangle $a$ was scaled up to Rectangle $b$ QUESTION [0 upvotes]: I wasn't too sure how to explain the question in the title, so I drew up my problem that I am trying to solve: http://i.imgur.com/PyWMh6f.jpg Basically I choose a point on rectangle $A$ and then find where this point should be if I were to scale rectangle $A$ up to rectangle $B$'s size. Sorry, I had a bit of trouble explaining the problem; I'm just not sure how to go about this. If someone could just explain how this could be achieved, that'd be great. REPLY [0 votes]: Here's a quick geometric sketch. I found the answer by using the similar triangles shown in the sketch. You can use the relationships between the lines as ratios: the height of the base of the triangle to the total height, and the length of the side to the total length. Then apply the same ratios to the new triangle.
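The ratio idea in the answer amounts to preserving the fractional position of the point along each side of the rectangle. A minimal sketch, assuming axis-aligned rectangles sharing a corner at the origin; `scale_point` and its parameter names are illustrative, not from the original post.

```python
# Map a point (px, py) inside an a_w x a_h rectangle A to the corresponding point
# of a b_w x b_h rectangle B by keeping the fractional position along each axis.
def scale_point(px, py, a_w, a_h, b_w, b_h):
    """Return (px, py) rescaled from rectangle A's dimensions to rectangle B's."""
    # px / a_w is the fraction of the width; reapply it to B's width (same for height)
    return (px / a_w * b_w, py / a_h * b_h)

# The centre of a 4x2 rectangle maps to the centre of an 8x6 rectangle.
print(scale_point(2, 1, 4, 2, 8, 6))  # → (4.0, 3.0)
```

If the rectangles do not share an origin, subtract A's corner before dividing and add B's corner after multiplying.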
TITLE: Intuition behind equation for finding arc length in polar coordinate QUESTION [2 upvotes]: I know how to derive the equation for finding arc length in polar coordinates, but I don't understand this: Given a parametric equation, let $L$ be the length of the arc from point $t = a$ to $t = b$; we have: $L = \int_{a}^{b} \sqrt{dx^2 + dy^2}\frac{dt}{dt} = \int_{a}^{b} \sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2}\,dt$. To turn this equation into the polar coordinate version we can assume the parametric equation is $x = r\cos\theta$ and $y = r\sin\theta$ (so $t = \theta$), and after substitution we should get the equation $L = \int_{\theta_{1}}^{\theta_{2}}\sqrt{r^2 +r'^2}\, d\theta$. So what I don't understand is the "intuition" behind why $dx^2 + dy^2 \neq dr^2$. Clearly, if $dx^2 + dy^2 = dr^2$, then $L$ would equal $\int_{\theta_{1}}^{\theta_{2}}|r'| \,d\theta$, which "intuitively" can't be correct (length cannot be just a function of slope). I'm not entirely sure how to explain what I mean by "intuition", but what I'm hoping to figure out is what I'm missing that makes me think $dx^2 + dy^2 = dr^2$ should be correct, or why it is in fact incorrect. REPLY [0 votes]: $(dx,dy)$ describes a displacement in an arbitrary direction. It can be decomposed into a radial displacement ($dr$) and a tangential one ($r\,d\theta$), and we have the identity $$dx^2+dy^2=dr^2+r^2d\theta^2.$$ We can check this by direct computation by expanding $$dx^2+dy^2=(dr\cos\theta-r\sin\theta\,d\theta)^2+(dr\sin\theta+r\cos\theta\,d\theta)^2.$$ In other words, $$\frac{ds}{dt}=\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}=\sqrt{\left(\frac{dr}{dt}\right)^2+r^2\left(\frac{d\theta}{dt}\right)^2}$$ and $$\frac{ds}{d\theta}=\sqrt{\left(\frac{dr}{d\theta}\right)^2+r^2}.$$
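The identity in the reply can be sanity-checked numerically: for a sample curve, the parametric integrand $\sqrt{(dx/dt)^2+(dy/dt)^2}$ with $t=\theta$ agrees pointwise with the polar integrand $\sqrt{r^2+r'^2}$. A hedged sketch (the function names and the midpoint-rule integrator are illustrative, not from the original answer):

```python
# Check on the Archimedean spiral r(theta) = theta over [0, 2*pi]: the parametric
# and polar arc-length integrands give the same length.
import math

def arc_length(f, a, b, n=10000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

r = lambda t: t        # r(theta) = theta
dr = lambda t: 1.0     # r'(theta)

def parametric(t):
    # differentiate x = r cos(theta), y = r sin(theta) with respect to theta
    dx = dr(t) * math.cos(t) - r(t) * math.sin(t)
    dy = dr(t) * math.sin(t) + r(t) * math.cos(t)
    return math.hypot(dx, dy)

polar = lambda t: math.sqrt(r(t) ** 2 + dr(t) ** 2)

L1 = arc_length(parametric, 0, 2 * math.pi)
L2 = arc_length(polar, 0, 2 * math.pi)
print(abs(L1 - L2) < 1e-9)  # → True
```

Replacing `polar` with the incorrect candidate `abs(dr(t))` would instead integrate to $2\pi$, far off from the true length, which is the asker's point that $dx^2+dy^2 \neq dr^2$.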
\begin{document} \date{\today} \title{Circle Decompositions of Surfaces} \author{G\'abor Moussong and N\'andor Sim\'anyi} \thanks{The first author was supported by the Hungarian Nat.\ Sci.\ Found.\ (OTKA) grant No.\ T047102, the second author was supported by the National Science Foundation, grant DMS-0800538.} \address[G\'abor Moussong]{Inst.\ of Mathematics\\ E\"otv\"os University\\ P\'azm\'any P\'eter S\'et\'any 1/C\\ Budapest 1117, Hungary} \email{mg@math.elte.hu} \address[N\'andor Sim\'anyi]{The University of Alabama at Birmingham\\ Department of Mathematics\\ 1300 University Blvd., Suite 452\\ Birmingham, AL 35294 U.S.A.} \email{simanyi@math.uab.edu} \subjclass{57N05, 54F99} \keywords{Topological surfaces, circle partitions, Jordan-Sch\"onflies Theorem, upper semicontinuous decompositions, circle foliations.} \begin{abstract} We determine which connected surfaces can be partitioned into topological circles. There are exactly seven such surfaces up to homeomorphism: those of finite type, of Euler characteristic zero, and with compact boundary components. As a byproduct, we get that any circle decomposition of a surface is upper semicontinuous. \end{abstract} \maketitle \parskip=0pt plus 2.5pt \section{Introduction} \label{sec_intr} In what follows by a surface we shall mean a second countable, Hausdorff, connected, two-dimensional, topological manifold, possibly with boundary. By a circle in a surface $S$ we shall mean a closed Jordan curve, i.e., any subset of $S$ homeomorphic to the standard unit circle. A circle decomposition of $S$ is a partition of $S$ into circles. Our goal is to show that circle decompositions only exist for a very limited range of surfaces. The main result in this note is Corollary~\ref{allowable} below stating that any surface with a circle decomposition is homeomorphic to either a torus, a Klein bottle, an annulus, a M\"obius band, an open annulus, a half-open annulus, or an open M\"obius band. 
For short, these seven topological types will be called allowable surfaces. One may observe that these allowable surfaces are precisely those of finite type (i.e., with finitely generated homology), with zero Euler characteristic, and with all boundary components homeomorphic to a circle. It is clear by straightforward geometric constructions that all allowable surfaces admit circle decompositions. Moreover, such constructions can be carried out in the smooth category resulting in smooth foliations with circles. It is well known that any surface foliated by circles has zero Euler characteristic, therefore, circle foliations can only exist for allowable surfaces. However, as the following example shows, a circle decomposition need not be a topological foliation. The authors are indebted to Lex Oversteegen for pointing out the existence of such examples. \begin{E}\label{non-foliation} Construct first of all a smooth, centrally symmetric partition $\mathcal J$ of the square $Q=[-1,1]\times [-1,1]$ into Jordan arcs $J$ such that \begin{enumerate} \item $\{ -1\}\times [-1,1]\in\mathcal J$ and $\{ 1\}\times [-1,1]\in\mathcal J$, \item every $J\in\mathcal J$, other than the two curves listed in (1), has only its endpoints on the boundary $\partial Q$, one on the bottom, one on the top side of $Q$, and \item the element of $\mathcal J$ that contains the origin is $$J^{\ast} =\left\{ (x,y)\in Q:-\frac{1}{2}\le x\le\frac{1}{2},\; y=12x^3-x\,\right\}.$$ \end{enumerate} Next we horizontally shrink by a factor of $1/2$ the square $Q$ along with its smooth partition $\mathcal J$, and insert the arising block in the left half $[-1,0]\times [-1,1]$ of $Q$. Then we horizontally shrink the square $Q$, along with its smooth partition $\mathcal J$, by a factor of $1/4$, and insert the arising block in the rectangle $[0,1/2]\times [-1,1]$. 
After this we again horizontally shrink the square $Q$, along with its smooth partition $\mathcal J$, by a factor of $1/8$, and insert the arising block in the rectangle $[1/2,3/4]\times [-1,1]$, etc. Finally, we include the right-hand edge of $Q$ to complete a partition $\mathcal P$ of the square $Q$ into Jordan arcs. It is clear that the partition elements of $\mathcal P$ can be parametrized as $$\mathcal P=\{J_t:0\le t\le 1\}$$ in accordance with the horizontal linear order among the curves $J_t\in\mathcal P$. This parametrization is continuous with respect to the Hausdorff metric on the set of compacta in $Q$. Thus, the projection map, which takes all elements of $J_t$ to $t$, is an open quotient map from $Q$ to $[0,1]$. Yet the partition fails to be topologically equivalent to the canonical partition of $Q$ into vertical line segments. Indeed, the partition $\mathcal P$ does not even possess any transversal curve emanating from the point $(1,y)\in Q$ if $-1/9<y<1/9$ ($\pm 1/9$ being the local maximum and minimum of $y$-values along $J^{\ast}$).\hfill\qed \end{E} Squares partitioned as in this example can clearly be involved in circle decompositions of surfaces. So, circle decompositions in general are not foliations. There is, however, a weaker property of decompositions, namely, upper semicontinuity (see Section~\ref{sec_usc}) which, as we show in Corollary~\ref{usc} and the subsequent remark, is shared by all circle decompositions of surfaces. By the classical Jordan--Sch\"onflies theorem, circles in surfaces have strong local separation properties, which forces circle decompositions to be upper semicontinuous. Our main theorem will follow relatively easily from this fact in Section~\ref{sec_proof}. It should be noted that many circle decompositions of $3$-manifolds exist (for instance, Euclidean $3$-space can be foliated by circles, see \cite{V}) which in general are not upper semicontinuous.
\section{Preliminary lemmas} \begin{Le}\label{nodisk} A closed disk admits no circle decomposition. \end{Le} \begin{proof} Suppose that $\mathcal C$ is a circle decomposition of the closed disk $S$. Any member $C\in\mathcal C$ has a well-defined interior; namely, the connected component of $S-C$ which does not contain $\partial S$. For $C_1,C_2\in\mathcal C$ call $C_1<C_2$ if $C_1$ is contained in the interior of $C_2$. One readily checks that $<$ is a partial order relation on the set $\mathcal C$. Compactness of $S$ implies that any ordered chain in $\mathcal C$ has a lower bound. Then by Zorn's lemma there exists at least one minimal element in $\mathcal C$. But no minimal elements can exist since the interior of any circle must contain further circles. \end{proof} \begin{C}\label{circle_bounds_no_disk} If $\mathcal C$ is a circle decomposition of the surface $S$, then no element of $\mathcal C$ bounds a disk in $S$.\hfill\qed \end{C} It follows, for instance, that neither an open disk nor a two-sphere admits a circle decomposition. \begin{Le}\label{circle_in_annulus} Let $C_0$ and $C_1$ be the two boundary circles of an annulus $A$. If $D\subseteq A$ is a circle with $D\ne C_0$, then there exists at least one connected component $U$ of $A-D$ with $U\cap C_1=\emptyset$. If, further, $D\cap C_0\ne\emptyset$, then any circle in such a component $U$ bounds a disk in $A$. \end{Le} \begin{proof} The set $A-D$ is disconnected unless $D=C_1$. Therefore, the connected set $C_1$ cannot intersect all connected components of $A-D$. This implies the first statement. Let $U$ be a connected component of $A-D$ with $U\cap C_1=\emptyset$. Suppose now that $C\subseteq U$ is a circle which is not nullhomotopic in $A$. Then $C$ and $C_1$ bound an annulus which contains $D$. Therefore, both $D$ and $C_1$ lie on the same side of $C$ in $A$, which implies that $D$ cannot intersect $C_0$. This proves the last statement.
\end{proof} \begin{Le}\label{annulus_nice} Suppose that $\mathcal C$ is a circle decomposition of an annulus $A$. Then both boundary circles of $A$ belong to $\mathcal C$. If $C\in\mathcal C$ is contained in the interior of $A$, then $C$ cuts $A$ into two annuli (both of which inherit circle decompositions from $\mathcal C$). \end{Le} \begin{proof} Assume that $C\in\mathcal C$ is different from both boundary circles. If $C$ is not contained entirely in the interior of $A$, then Lemma~\ref{circle_in_annulus} applied to $D=C$ implies that some elements of $\mathcal C$ bound disks, contradicting Corollary~\ref{circle_bounds_no_disk}. This proves the first statement of the lemma. By Corollary~\ref{circle_bounds_no_disk} only homotopically nontrivial circles can belong to~$\mathcal C$, which implies the last statement. \end{proof} The following lemma reduces the main question to the case when the surface is orientable and has no boundary. Recall from the introduction that we call a surface allowable if it is homeomorphic to one from the following list: torus, Klein bottle, annulus, M\"obius band, open annulus, half-open annulus, open M\"obius band. \begin{Le}\label{reduction} Suppose that the surface $S$ is not allowable, and that $S$ admits a circle decomposition. Then there exists an orientable surface $\widetilde{S}$ with $\partial\widetilde{S}=\emptyset$ which is not allowable and admits a circle decomposition. \end{Le} \begin{proof} First we eliminate boundary by attaching parts with circle decompositions to each boundary component of $S$. Let $U$ denote a half-open annulus with a circle decomposition of its interior, and glue a copy of $U$ along $\partial U$ to each compact component of $\partial S$. Let $V$ be a closed half-plane with an interior point removed, and equip the interior of $V$ (an open annulus) with a circle decomposition. Glue a copy of $V$ along its boundary to each noncompact component of $\partial S$. 
The surface $S'$ obtained through all these gluings has $\partial S'=\emptyset$, and inherits a circle decomposition from $S$ and from the attached parts. Next, if $S'$ is orientable, put $\widetilde{S}=S'$, if not, then define $\widetilde{S}$ as the orientable double covering of $S'$. The circle decomposition of $S'$ clearly lifts to that of $\widetilde{S}$. It remains to be shown that $\widetilde{S}$ is not allowable. By inspection of the list of allowable surfaces it is clear that a surface is allowable if and only if any double covering of it is allowable. Therefore, it suffices to check that $S'$ is not allowable. We know that $S$ is not allowable. If $S$ is not of finite type, then neither is $S'$. So suppose that $S$ has finite type. If all boundary components of $S$ are compact, then $S'$ is homotopy equivalent to $S$, so it is not allowable. Suppose now that $S$ has at least one boundary component homeomorphic to the real line. Then $\pi_1(S')$ is the free product of $\pi_1(S)$ with at least one copy of $\mathbf Z$, therefore, it can only be isomorphic to the fundamental group of one of the allowable surfaces if $S$ is simply connected. But then $S$ cannot have a circle decomposition by Corollary~\ref{circle_bounds_no_disk}. So $S'$ is not allowable in this case either. \end{proof} \section{Upper semicontinuity} \label{sec_usc} Suppose that a Hausdorff topological space $X$ is decomposed to a family $\mathcal F$ of pairwise disjoint compact sets. Recall that $\mathcal F$ is an upper semicontinuous decomposition if for every $F\in\mathcal F$ and for every neighborhood $U$ of $F$ there exists a smaller neighborhood $V$ of $F$ such that $G\subseteq U$ whenever $G\in\mathcal F$ and $G\cap V\ne\emptyset$. \medskip We shall prove that all circle decompositions of surfaces are upper semicontinuous, see Corollary~\ref{usc} below. First we use embedded annuli to characterize upper semicontinuous circle decompositions of orientable surfaces with empty boundary. 
Let $S$ be a surface (not necessarily orientable), and $C$ be a circle in $S$. Assume that either \begin{enumerate} \item $C$ is contained in the interior of $S$, and is two-sided in $S$, or \item $C$ is a component of $\partial S$. \end{enumerate} \noindent By an annular neighborhood of $C$ we mean any annulus embedded in $S$ for which \begin{enumerate} \item $C$ is the image of the middle circle of the annulus under the embedding in the first case, or \item $C$ is one of the two boundary circles in the second case, respectively. \end{enumerate} \noindent It follows from the classical Jordan--Sch\"onflies theorems that annular neighborhoods exist for $C$, and, consequently, they form a basis of neighborhoods for $C$ in $S$. \medskip Suppose now that a circle decomposition $\mathcal C$ is given on $S$. An embedded annulus in $S$ will be called a $\mathcal C$-annulus if both boundary circles belong to $\mathcal C$. Accordingly, annular neighborhoods of circles will be called $\mathcal C$-annular neighborhoods if they are $\mathcal C$-annuli. \begin{Le}\label{annulus_usc} Any circle decomposition of an annulus is upper semicontinuous. \end{Le} \begin{proof} Let $C_0$ and $C_1$ denote the boundary circles of $A$. By Lemma~\ref{annulus_nice} both belong to $\mathcal C$. For any two further $C,C'\in\mathcal C$ write $C<C'$ if $C$ separates $C_0$ from $C'$ (or, equivalently, if $C'$ separates $C$ from $C_1$). Extend relation $<$ to $C_0$ and $C_1$ by making them smallest and largest, respectively. Then Lemma~\ref{annulus_nice} implies that the set $\mathcal C$ is linearly ordered by relation $<\,$, and that this ordering is dense. It follows now that any $C\in\mathcal C$ equals the intersection of its $\mathcal C$-annular neighborhoods. Indeed, if $x$ is an arbitrary point of $A$ not in $C$, then $x\in C'$ with some $C'\in\mathcal C$ for which we may assume $C<C'$. 
Pick $C''\in\mathcal C$ with $C<C''<C'$, then $C_0$ and $C''$ bound a $\mathcal C$-annular neighborhood of $C$ not containing $x$. The family of $\mathcal C$-annular neighborhoods of $C$ is closed under finite intersections. Therefore, by compactness, any neighborhood of $C$ contains a $\mathcal C$-annular neighborhood. Now upper semicontinuity follows immediately. \end{proof} \begin{R} It is well known that upper semicontinuity is equivalent to the Hausdorff property of the quotient space. It is easy to see that under the assumptions of Lemma~\ref{annulus_usc} the quotient space is homeomorphic to $[0,1]$. \end{R} \begin{Le}\label{usc=ann_nbhds} Let $S$ be an orientable surface without boundary, and let $\mathcal C$ be a circle decomposition of $S$. Then the following two conditions are equivalent: \begin{enumerate} \item $\mathcal C$ is upper semicontinuous. \item Every $C\in\mathcal C$ admits a $\mathcal C$-annular neighborhood. \end{enumerate} \end{Le} \begin{proof} (1)$\Rightarrow$(2): If $\mathcal C$ is assumed upper semicontinuous and $C\in\mathcal C$ is given, choose a neighborhood $V$ of $C$ such that all members of $\mathcal C$ that meet $V$ stay in the interior of a fixed annular neighborhood $A$ of $C$. Pick two such members $C_1$ and $C_2$ on either side of $C$. By Lemma~\ref{annulus_nice} $C_1$ and $C_2$ bound an annulus within $A$, which therefore is a $\mathcal C$-annular neighborhood of $C$. \noindent (2)$\Rightarrow$(1): Immediate consequence of Lemma~\ref{annulus_usc}. \end{proof} \begin{T}\label{main} Let $\mathcal C$ be a circle decomposition of the surface $S$, and let $C_0$ be a circle in $S$. Assume that either \begin{enumerate} \item $C_0$ is a component of $\partial S$, or \item $C_0\in\mathcal C$ and $C_0$ is a two-sided circle in the interior of $S$. \end{enumerate} \noindent Then $C_0$ has a $\mathcal C$-annular neighborhood. In particular, $C_0$ belongs to $\mathcal C$ in both cases. 
\end{T} \begin{proof} It will suffice to prove the theorem in case (1). Indeed, the other case follows if we cut $S$ along $C_0$, find $\mathcal C$-annular neighborhoods on both sides and reglue them. Fix an arbitrary annular neighborhood $A$ for $C_0$ in $S$. Then $C_0$ is one of the boundary circles of the annulus $A$. We shall prove that there exists a circle $D\in\mathcal C$ different from $C_0$, and contained in $A$. If such a $D$ is found, then Lemma~\ref{circle_in_annulus} and Corollary~\ref{circle_bounds_no_disk} imply that $C_0\cap D=\emptyset$, and that $D$ is not nullhomotopic in $A$. Then $C_0$ and $D$ bound an annulus which inherits a circle decomposition from $\mathcal C$. Lemma~\ref{annulus_nice} applied to this annulus yields $C_0\in\mathcal C$. So, $C_0$ and $D$ bound a $\mathcal C$-annular neighborhood for $C_0$ in $S$. By way of contradiction, for the rest of the proof we assume that no circle $D\in\mathcal C$ exists with $D\ne C_0$ and $D\subseteq A$. Now we introduce some further notation. For concreteness, let us fix a homeomorphism $h:C_0\times [0,1]\to A$ with $h(x,0)=x$ for $x\in C_0$. For any parameter $t\in (0,1]$ define the following sets: \begin{displaymath} \begin{aligned} C^t= & h(C_0\times\{t\}), \\ A^t= & h(C_0\times [0,t]), \\ U^t= & A^t-C^t=h(C_0\times [0,t)), \\ \mathcal D^t= &\{D\in\mathcal C: D\ne C_0\;\text{and}\;D\cap U^t\ne\emptyset\}. \end{aligned} \end{displaymath} \noindent Then $A^t$ is an annulus bounded by $C^0=C_0$ and $C^t$; in particular, $A^1=A$. (We shall only use these sets for $t=1$, $t=1/2$, and $t=1/3$, that is, for the annulus $A$ and for its half and third.) By our assumption for any $D\in\mathcal D^t$ the set $D\cap U^t$ is a disjoint union of open arcs in $D$. Let the closures of all such arcs (with fixed $t$ and variable $D$) form the set $\mathcal E^t$. Any element $E$ of $\mathcal E^t$ is a Jordan arc in $A^t$ connecting two distinct points of $C^t$. 
One side of this arc in the half-open annulus $U^t$ is an open disk $V_E$. Let $K_E$ denote the closure of $U^t-V_E$ in $A^t$, then $K_E$ is compact and connected. Two distinct elements $E_1,E_2\in\mathcal E^t$ cannot intersect one another in $U^t$. This implies that $V_{E_1}$ and $V_{E_2}$ are either disjoint, or one is contained in the other. Moreover, if $E_1\ne E_2 $ and $V_{E_1}\subseteq V_{E_2}$, then $E_1\cap U^t\subseteq V_{E_2}$. Therefore, for any finite number of elements $E_1$, $\ldots$, $E_k\in\mathcal E^t$ the set $K_{E_1}\cap\ldots\cap K_{E_k}$ is still connected. If in a family of continua all finite subfamilies have connected intersection, then the intersection of the whole family is a continuum. This implies that the set $K^t=\bigcap\{K_E:E\in\mathcal E^t\}$ is connected. Our goal is to show that $K^t=C_0$. Clearly $K^{t_1}\subseteq K^{t_2}$ whenever $t_1\le t_2$. The relation $K^t=C_0$ is actually true for all $t$, but for our purposes it will suffice to prove this for one particular value $t<1$. For concreteness, let us select $t=1/2$ and put $K=K^{1/2}$. We claim now that $K=C_0$. To this end consider first the family $\mathcal F=\{E\cap K: E\in\mathcal E^1, E\cap K\ne\emptyset\}$ of compact subsets of the circles in the decomposition $\mathcal C$. Since no two distinct arcs in $\mathcal E^1$ can intersect in $U^1$, all elements of $\mathcal F$ are pairwise disjoint. Each member $F$ of $\mathcal F$ is contained in a unique arc $E(F)\in\mathcal E^1$. The correspondence $F\mapsto E(F)$ is clearly injective. Consider the sets of the form $V_{E(F)}$ for $F\in\mathcal F$; these are all nonempty open sets in $S$. We claim that they are pairwise disjoint. Indeed, if $V_{E(F_1)}$ and $V_{E(F_2)}$ intersect, then one is a subset of the other, say, $V_{E(F_1)}\subseteq V_{E(F_2)}$. Now if $F_1\ne F_2$, then $E(F_1)\ne E(F_2)$, and by our previous arguments $E(F_1)\cap U^1\subseteq V_{E(F_2)}$. 
But this is impossible since $E(F_1)\cap K\ne\emptyset$ while $V_{E(F_2)}$ is disjoint from $K^1$ and $K^1\supseteq K$. It follows that $\mathcal F$ is countable. Now if $C_0\in\mathcal C$, then $K=C_0\cup\bigcup\mathcal F$, and if $C_0\notin\mathcal C$, then $K=\bigcup\mathcal F$. In both cases the continuum $K$ is decomposed into a countable family of pairwise disjoint closed subsets. By a theorem of Sierpi\'nski (\cite{S}) this is only possible if the family consists of a single set. Clearly $K\notin\mathcal F$ since $C_0\subseteq K$. Therefore, only the case $C_0\in\mathcal C$ and $K=C_0$ is possible, and our claim is proved. \smallskip Finally, consider the parallel circle $C^{1/3}$ of the annulus $A$. Since it is disjoint from $K$, the set $C^{1/3}$ is covered by the family of open disks $V_E$ for $E\in\mathcal E^{1/2}$. By compactness a finite number of these disks cover $C^{1/3}$. If two of these disks are not disjoint, then one is contained in the other, therefore a minimal such covering can only consist of a single set $V_E$. This is clearly impossible if $E$ is a Jordan arc connecting two points of $C^{1/2}$ in $A^{1/2}$. This contradiction proves the theorem. \end{proof} \begin{C}\label{usc} If $\mathcal C$ is a circle decomposition of a surface with empty boundary, then $\mathcal C$ is upper semicontinuous. \end{C} \begin{proof} If the surface is orientable, then Lemma~\ref{usc=ann_nbhds} combined with Theorem~\ref{main} gives the result. In the non-orientable case the circle decomposition lifts to a circle decomposition of the orientable double covering. Upper semicontinuity of the latter obviously implies upper semicontinuity of the former. \end{proof} \begin{R} It is also true that all circle decompositions of surfaces are upper semicontinuous, that is, in Corollary~\ref{usc} the surface $S$ may have boundary. If all connected components of $\partial S$ are circles, then this follows from Theorem~\ref{main}.
One may prove directly that if $S$ has a circle decomposition, then none of the boundary components can be homeomorphic to the real line. We omit this proof since this fact will follow from Corollary~\ref{allowable}. \end{R} \section{Proof of the main theorem} \label{sec_proof} \begin{T}\label{torus_annulus} Let $S$ be an orientable surface without boundary. If there exists a circle decomposition of $S$, then $S$ is either a torus or an open annulus. \end{T} \begin{proof} Let $\mathcal C$ be a circle decomposition of $S$. Call two members of $\mathcal C$ equivalent if they are equal or bound a $\mathcal C$-annulus in $S$. This is clearly an equivalence relation, and Theorem~\ref{main} implies that the union of each equivalence class is open in $S$. Since $S$ is connected, there is a single class. Observe that if two $\mathcal C$-annuli in $S$ are not disjoint, then their union is either again a $\mathcal C$-annulus, or else is a torus which equals $S$. If $K\subseteq S$ is any connected compact set, then repeated application of this last observation shows that either $S$ is a torus, or $K$ is covered by a single $\mathcal C$-annulus. So, if $S$ is compact, then it is a torus. If $S$ is not compact, then one can exhaust $S$ by an increasing sequence of connected compact subsets, therefore, $S$ can be exhausted by a strictly increasing sequence of $\mathcal C$-annuli. The union of such a sequence is an open annulus, so in the noncompact case $S$ is an open annulus. \end{proof} \begin{C}\label{allowable} If a surface $S$ admits a circle decomposition, then $S$ is allowable. \end{C} \begin{proof} If $S$ were not allowable, then Lemma~\ref{reduction} would produce $\widetilde{S}$, orientable without boundary, still not allowable, and still admitting a circle decomposition. But Theorem~\ref{torus_annulus} implies that such an $\widetilde{S}$ must be a torus or an open annulus, both of which are allowable, a contradiction. \end{proof}
TITLE: Finding the elementary and power sum symmetric polynomial given complete symmetric polynomial QUESTION [2 upvotes]: I have been given the complete symmetric polynomials $h_n$ as $$h_n = n, \ \ \forall n \geq 1$$ I have to show that the sequence of elementary symmetric polynomials $(e_n)$ and the sequence of power sum symmetric polynomials $(p_n)$ are periodic with periods $3$ and $6$ respectively. (Note some unknown values of $x_1,x_2,\cdots$ have been plugged into $h_n,e_n,p_n$.) We denote the generating functions of $h_n, \ p_n, \ e_n$ as $H(t),\ P(t),\ E(t)$ respectively, and $$E(t)H(-t) = 1$$ $$P(t) = \frac{H'(t)}{H(t)}$$ Therefore, I get $$H(t)=\sum_{n \geq 0} h_nt^n = h_0 +\sum_{n \geq 1} nt^n = 1+\frac{t}{(1-t)^2} = \frac{1+t^2 - t}{(1-t)^2}$$ $$\implies E(t)=\frac{(1+t)^2}{1+t^2 + t}=\frac{(1-t)(1+t)^2}{1-t^3}$$ $$\implies P(t)=\frac{1+t}{(1-t)(1+t^2 - t)}=\frac{(1+t)^2}{(1-t)(1+t^3)}$$ Any kind of help will be appreciated! Addendum: This is a question that appears in Macdonald's Symmetric Functions and Hall Polynomials; all notations are borrowed from that text. You can find an image of the original question below: REPLY [1 votes]: I was able to solve the question, and I thought I should post the answer. Sequence of Elementary Symmetric Functions $$E(t) = \frac{(1-t)(1+t)^2}{1-t^3} = \frac{1-t^3+t-t^2}{1-t^3}=1+\frac{t-t^2}{1-t^3}$$ $$\text{Therefore, } e_n= \begin{cases} 1 & n = 3k+1\\ -1 & n = 3k+2\\ 0 & n = 3k+3 \end{cases} \ \ \ \ \ \ \forall k \geq 0$$ Sequence of Power Sum Symmetric Functions $$P(t) = \frac{(1+t)^2}{(1-t)(1+t^3)}=\frac{(1+t+t^2)(1+t)^2}{(1-t^3)(1+t^3)}=\frac{1 + 3t + 4t^2 + 3t^3 + t^4}{1-t^6}$$ $$\implies P(t) = 1+\frac{3t + 4t^2 + 3t^3 + t^4 + t^6}{1-t^6}$$ $$\text{Therefore, } p_n= \begin{cases} 3 & n = 6k+1\\ 4 & n = 6k+2\\ 3 & n = 6k+3\\ 1 & n = 6k+4\\ 0 & n = 6k+5\\ 1 & n = 6k+6 \end{cases} \ \ \ \ \ \ \forall k \geq 0$$
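As a numerical cross-check (not part of the original answer), the Taylor coefficients of $E(t)$ and $P(t)$ can be generated by power-series long division in plain Python, and the claimed periods $3$ and $6$ read off directly from the output:

```python
def series_coeffs(num, den, n_terms):
    """Taylor coefficients of num(t)/den(t) via power-series long division.

    num and den are coefficient lists in increasing powers of t; den[0] must be 1.
    """
    c = []
    for n in range(n_terms):
        acc = num[n] if n < len(num) else 0
        for k in range(1, min(n, len(den) - 1) + 1):
            acc -= den[k] * c[n - k]
        c.append(acc)
    return c

# E(t) = (1 - t)(1 + t)^2 / (1 - t^3) = (1 + t - t^2 - t^3) / (1 - t^3)
e = series_coeffs([1, 1, -1, -1], [1, 0, 0, -1], 12)
# P(t) = (1 + t)^2 / ((1 - t)(1 + t^3)) = (1 + 2t + t^2) / (1 - t + t^3 - t^4)
p = series_coeffs([1, 2, 1], [1, -1, 0, 1, -1], 13)

print(e)  # [1, 1, -1, 0, 1, -1, 0, 1, -1, 0, 1, -1]: period 3 from n = 1
print(p)  # [1, 3, 4, 3, 1, 0, 1, 3, 4, 3, 1, 0, 1]: period 6 from n = 1
```

The constant terms are the $n=0$ coefficients of the generating functions; the periodic patterns $1,-1,0$ and $3,4,3,1,0,1$ start at $n=1$, matching the case analysis above.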
\begin{document} \begin{center} {\bf THE FERMAT-TORRICELLI PROBLEM IN THE LIGHT OF\\ CONVEX ANALYSIS}\\[2ex] Nguyen Mau Nam\footnote{Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, Portland, OR 97202, United States (mau.nam.nguyen@pdx.edu). The research of Nguyen Mau Nam was partially supported by the Simons Foundation under grant \#208785.} \end{center} \small{\bf Abstract.} In the early 17th century, Pierre de Fermat proposed the following problem: given three points in the plane, find a point such that the sum of its Euclidean distances to the three given points is minimal. This problem was solved by Evangelista Torricelli and was named the {\em Fermat-Torricelli problem}. A more general version of the Fermat-Torricelli problem asks for a point that minimizes the sum of the distances to a finite number of given points in $\Bbb R^n$. This is one of the main problems in location science. In this paper, we revisit the Fermat-Torricelli problem from both theoretical and numerical viewpoints using some ingredients of convex analysis and optimization. \medskip \vspace*{0.05in} \noindent {\bf Key words.} Fermat-Torricelli problem, Weiszfeld's algorithm, Kuhn's proof, convex analysis, subgradients \noindent {\bf AMS subject classifications.} 49J52, 49J53, 90C31. \newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Example}[Theorem]{Example} \renewcommand{\theequation}{\thesection.\arabic{equation}} \normalsize \section{Introduction} The Fermat-Torricelli problem asks for a point that minimizes the sum of the distances to three given points in the plane. This problem was proposed by Fermat and solved by Torricelli.
Torricelli's solution can be stated as follows: if one of the angles of the triangle formed by the three given points is greater than or equal to $120^\circ$, the corresponding vertex is the solution of the problem. Otherwise, the solution is the unique point inside the triangle formed by the three points from which each side is seen at an angle of $120^\circ$. The first numerical algorithm for solving the general Fermat-Torricelli problem was introduced by Weiszfeld in 1937 \cite{w}. The assumptions that guarantee the convergence, along with the proof, were given by Kuhn in 1972. Kuhn also pointed out an example in which Weiszfeld's algorithm fails to converge; see \cite{k}. The Fermat-Torricelli problem has attracted great attention from many researchers, not only because of its mathematical beauty, but also because of its important applications to location science. Many generalized versions of the Fermat-Torricelli problem have been studied, and several new algorithms have been introduced to deal with these generalizations as well as to improve Weiszfeld's algorithm; see, e.g., \cite{b1,n2,mns,mv,ul,vz}. The problem has also been revisited several times from different viewpoints; see, e.g., \cite{ck,d,p,wf} and the references therein. Against this background, our goal is not to produce any new result, but to provide easy access to the problem from both theoretical and numerical aspects using some tools of convex analysis. These tools are presented in the paper with elementary proofs that are understandable for students with a basic background in analysis. The paper is organized as follows. In Section 2, we prove the existence and uniqueness of the optimal solution. We also present proofs of properties of the optimal solution as well as its construction using the \emph{convex subdifferential}.
The advantage of using convex analysis when solving the Fermat-Torricelli problem has been observed in many books on convex and nonsmooth analysis; see, e.g., \cite{clsw,gn,r1} and the references therein. Section 3 is devoted to revisiting Kuhn's proof of the convergence of Weiszfeld's algorithm. In this section we follow the scheme of the convergence proof given by Kuhn \cite{k}, but we include some ingredients from convex analysis in place of some technical tools in order to make the proof clearer. Throughout the paper, $\B$ denotes the closed unit ball of $\R^n$; $\B(\ox; r)$ denotes the closed ball with center $\ox$ and radius $r$. \section{Elements of Convex Analysis and Properties of Solutions} In this section, we review important concepts of convex analysis to study the classical Fermat-Torricelli problem as well as the problem in the general form. We also present elementary proofs for some properties of optimal solutions of the problem. More details of convex analysis can be found, for instance, in \cite{r}. Let $\|\cdot\|$ be the Euclidean norm in $\Bbb R^n$. Given a finite number of points $a_i$ for $i=1,\ldots,m$ in $\Bbb R^n$, define \begin{equation}\label{cost} \ph(x):=\sum_{i=1}^m \|x-a_i\|. \end{equation} Then the mathematical model of the Fermat-Torricelli problem is \begin{equation}\label{ft} \mbox{\rm minimize }\ph(x)\; \mbox{\rm subject to }x\in \Bbb R^n. \end{equation} The \emph{weighted version} of the problem can be formulated and treated in a similar way. Let $f\colon\R^n\to\R$ be a real-valued function. The {\em epigraph} of $f$ is a subset of $\R^n\times\R$ defined by \begin{equation*} \mbox{epi }f:=\big\{(x,\alpha)\in\R^{n+1}\;\big|\;x\in\R^n \;\mbox{ and }\;\alpha\ge f(x)\big\}. \end{equation*} The function $f$ is called {\em convex} if \begin{equation*} f\big(\lm x+(1-\lm)y\big)\le\lm f(x)+(1-\lm)f(y)\;\mbox{ for all }\;x, y\in \R^n\;\mbox{ and }\;\lm\in (0,1).
\end{equation*} If this inequality is strict whenever $x\neq y$ and $\lm\in (0,1)$, we say that $f$ is \emph{strictly convex}. We can prove that $f$ is a convex function on $\Bbb R^n$ if and only if its epigraph is a convex set in $\Bbb R^{n+1}$. It is clear that the function $\ph$ given by (\ref{cost}) is a convex function. \begin{Proposition} Let $f: \R^n\to \R$ be a convex function. Then $f$ has a local minimum at $\bar x$ if and only if $f$ has an absolute minimum at $\bar x$. \end{Proposition} {\bf Proof:} We only need to prove the forward implication since the converse is trivial. Suppose that $f$ has a local minimum at $\bar x$. Then there exists $\delta>0$ with \begin{equation*} f(u)\geq f(\bar x) \mbox{ for all }u\in \B(\bar x ;\delta). \end{equation*} For any $x\in \R^n$, one has that $x_k=(1-\dfrac{1}{k})\bar x+\dfrac{1}{k}x \to \bar x$. Thus, $x_k\in \B(\bar x; \delta)$ when $k$ is sufficiently large. It follows that \begin{equation*} f(\bar x)\leq f(x_k)\leq (1-\dfrac{1}{k}) f(\bar x)+\dfrac{1}{k} f(x). \end{equation*} This implies \begin{equation*} \dfrac{1}{k}f(\bar x)\leq \dfrac{1}{k} f(x), \end{equation*} and hence $f(\bar x)\leq f(x)$. Therefore, $f$ has an absolute minimum at $\bar x$. $\h$ \begin{Proposition} The solution set of the Fermat-Torricelli problem {\rm (\ref{ft})} is nonempty. \end{Proposition} {\bf Proof:} Let $m:=\inf\{\ph(x)\; |\; x\in \R^n\}$. Then $m$ is a nonnegative real number. Let $(x_k)$ be a sequence such that \begin{equation*} \lim_{k\to\infty}\ph(x_k)=m. \end{equation*} By definition, there exists $k_0\in \Bbb N$ satisfying \begin{equation*} \|x_k-a_1\|\leq \ph(x_k)\leq m+1\; \mbox{\rm for all }k\geq k_0. \end{equation*} This implies $\|x_k\|\leq m+1+\|a_1\|$. Thus, $(x_k)$ is a bounded sequence, so it has a subsequence $(x_{k_\ell})$ that converges to $\ox\in \R^n$. Since $\ph$ is a continuous function, \begin{equation*} \ph(\ox)=\lim_{\ell\to\infty}\ph(x_{k_\ell})=m. \end{equation*} Therefore, $\ox$ is an optimal solution of the problem.
$\h$ For two different points $a, b\in \R^n$, the line containing $a$ and $b$ is the following set: \begin{equation*} \mathcal{L}(a, b):=\{ta+(1-t)b\; |\; t\in \R\}. \end{equation*} \begin{Proposition} Suppose that $a_i$ for $i=1,\ldots,m$ do not lie on the same line (not collinear). Then the function $\ph$ given by {\rm (\ref{cost})} is strictly convex and the Fermat-Torricelli problem {\rm (\ref{ft})} has a unique solution. \end{Proposition} {\bf Proof: }Define $\ph_i(x):=\|x-a_i\|$ for $i=1,\ldots,m$. Then $\ph=\sum_{i=1}^m\ph_i$. For any $x, y\in \R^n$ and $\lambda\in (0, 1)$, one has \begin{equation*} \ph_i(\lambda x+(1-\lambda)y)\leq \lambda \ph_i(x)+(1-\lambda)\ph_i(y)\; \mbox{\rm for }i=1,\ldots,m. \end{equation*} This implies \begin{equation}\label{convexity} \ph(\lambda x+(1-\lambda)y)\leq \lambda \ph(x)+(1-\lambda)\ph(y). \end{equation} By way of contradiction, suppose $\ph$ is not strictly convex. This means that there exist $\ox, \oy\in \R^n$ with $\ox\neq\oy$ and $\lambda\in (0,1)$ for which (\ref{convexity}) holds as equality. Then \begin{equation*} \ph_i(\lambda \ox+(1-\lambda)\oy)= \lambda \ph_i(\ox)+(1-\lambda)\ph_i(\oy)\; \mbox{\rm for }i=1,\ldots,m. \end{equation*} Thus, \begin{equation*} \|\lambda (\ox-a_i)+(1-\lambda)(\oy-a_i)\|= \|\lambda(\ox-a_i)\|+\|(1-\lambda)(\oy-a_i)\|\; \mbox{\rm for }i=1,\ldots,m. \end{equation*} If $\ox\neq a_i$ and $\oy\neq a_i$, then there exists $t_i>0$ such that \begin{equation*} t_i\lambda (\ox-a_i)=(1-\lambda)(\oy-a_i). \end{equation*} Thus, $\ox-a_i=\gamma_i (\oy-a_i)$, where $\gamma_i:=\dfrac{1-\lambda}{t_i\lambda}$. Since $\ox\neq \oy$, one has $\gamma_i\neq 1$, and \begin{equation*} a_i=\dfrac{1}{1-\gamma_i}\ox-\dfrac{\gamma_i}{1-\gamma_i}\oy\in \mathcal{L}(\ox,\oy). \end{equation*} In the case where $\ox=a_i$ or $\oy=a_i$, it is obvious that $a_i\in \mathcal{L}(\ox, \oy)$. We have proved that $a_i\in \mathcal{L}(\ox,\oy)$ for $i=1,\ldots,m$, which contradicts the assumption that the points are not collinear.
$\h$ An element $v\in\R^n$ is called a {\em subgradient} of a convex function $f\colon\R^n\to\R$ at $\ox\in\R^n$ if it satisfies the inequality \begin{equation*}\label{convex subdifferential} f(x)\geq f(\ox)+\la v, x-\ox\ra \;\mbox{ for all }\;x\in\R^n, \end{equation*} where $\la\cdot,\cdot\ra$ stands for the usual scalar product in $\R^n$. The set of all subgradients of $f$ at $\ox$ is called the \emph{subdifferential} of the function at $\ox$ and is denoted by $\partial f(\ox)$. Directly from the definition, one has the following subdifferential Fermat rule: \begin{equation}\label{fermat} \mbox{\rm $f$ has an absolute minimum at $\ox$ if and only if $0\in \partial f(\ox)$.} \end{equation} The proposition below shows that the subdifferential of a convex function at a given point reduces to the gradient at that point when the function is differentiable. \begin{Proposition}\label{dd1} Suppose that $f:\R^n\to \R$ is convex and Fr\'echet differentiable at $\ox$. Then \begin{equation}\label{d1} \la\nabla f(\ox), x-\ox\ra \leq f(x)-f(\ox) \; \mbox{ for all }x\in \R^n. \end{equation} Moreover, $\partial f(\ox)=\{\nabla f(\ox)\}$. \end{Proposition} {\bf Proof: }Since $f$ is Fr\'echet differentiable at $\ox$, by definition, for any $\epsilon>0$, there exists $\delta>0$ such that \begin{equation*}\label{d2} -\epsilon \|x-\ox\|\leq f(x)-f(\ox)-\la \nabla f(\ox), x-\ox\ra \leq \epsilon \|x-\ox\|\; \mbox{\rm whenever }\|x-\ox\|<\delta. \end{equation*} Define \begin{equation*} \psi(x)=f(x)-f(\ox)-\la \nabla f(\ox), x-\ox\ra +\epsilon \|x-\ox\|. \end{equation*} Then $\psi(x)\geq \psi(\ox)=0$ for all $x\in \B(\ox; \delta)$. Since $\psi$ is a convex function, $\psi(x)\geq \psi(\ox)$ for all $x\in \R^n$. Thus, \begin{equation*} \la \nabla f(\ox), x-\ox\ra \leq f(x)-f(\ox)+\epsilon \|x-\ox\| \; \mbox{for all }x\in \R^n. \end{equation*} Letting $\epsilon\to 0$, one obtains (\ref{d1}). Inequality (\ref{d1}) implies that $\nabla f(\ox)\in \partial f(\ox)$.
Take any $v\in \partial f(\ox)$, one has \begin{equation*} \la v, x-\ox\ra \leq f(x)-f(\ox)\; \mbox{\rm for all }x\in \R^n. \end{equation*} The Fr\'echet differentiability of $f$ also implies that for any $\epsilon>0$, there exists $\delta>0$ such that \begin{equation*} \la v-\nabla f(\ox), x-\ox\ra \leq \epsilon \|x-\ox\|\; \mbox{ whenever }\|x-\ox\|<\delta. \end{equation*} It follows that $\|v-\nabla f(\ox)\|\leq \epsilon$, which implies $v=\nabla f(\ox)$ since $\epsilon>0$ is arbitrary. Therefore, $\partial f(\ox)=\{\nabla f(\ox)\}$. $\h$ The subdifferential formula for the norm function in the next example plays a crucial role in our subsequent analysis to solve the Fermat-Torricelli problem. \begin{Example} {\rm Let $p(x)=\|x\|$, the Euclidean norm function on $\R^n$. Then \begin{equation*} \partial p(x)=\begin{cases} \B &\text{if }\;x=0, \\ \Big\{\dfrac{x}{\|x\|}\Big\}& \text{otherwise}. \end{cases} \end{equation*} Since the function $p$ is Fr\'echet differentiable with $\nabla p(x)=\dfrac{x}{\|x\|}$ for $x\neq 0$, it suffices to prove the formula for $x=0$. By definition, an element $v\in \R^n$ is a subgradient of $p$ at $0$ if and only if \begin{equation*} \la v, x\ra =\la v, x-0\ra \leq p(x)-p(0)=\|x\|\; \mbox{\rm for all }x\in \R^n. \end{equation*} For $x:=v\in \R^n$, one has $\la v, v\ra \leq \|v\|$. This implies $\|v\|\leq 1$ or $v\in \B$. Moreover, if $v\in \B$, by the Cauchy-Schwarz inequality, \begin{equation*} \la v, x-0\ra=\la v, x\ra \leq \|v\| \|x\|\leq \|x\|=p(x)-p(0)\; \mbox{\rm for all }x\in \R^n. \end{equation*} It follows that $v\in \partial p(0)$. Therefore, $\partial p(0)=\B$. } \end{Example} Solving the Fermat-Torricelli problem involves using the following simplified subdifferential rule for the sum of a nondifferentiable function and a differentiable function. A more general formula holds true when all of the functions involved are nondifferentiable. 
\begin{Proposition}\label{sr} Suppose that $f_i: \R^n\to \R$ for $i=1,2$ are convex functions and $f_2$ is differentiable at $\ox$. Then \begin{equation}\label{srl} \partial (f_1+f_2)(\ox)=\partial f_1(\ox)+\nabla f_2(\ox). \end{equation} \end{Proposition} {\bf Proof: }Fix any $v\in \partial (f_1+f_2)(\ox)$. For any $x\in \R^n$, one has \begin{equation*} \la v, x-\ox\ra \leq f_1(x)-f_1(\ox)+f_2(x)-f_2(\ox)=f_1(x)-f_1(\ox)+\la \nabla f_2(\ox), x-\ox\ra +o(\|x-\ox\|). \end{equation*} For any $\epsilon>0$, there exists $\delta>0$ such that \begin{equation*} 0\leq \la \nabla f_2(\ox)-v, x-\ox\ra +f_1(x)-f_1(\ox)+\epsilon\|x-\ox\|\; \mbox{\rm for }x\in \B(\ox; \delta). \end{equation*} The convexity of $f_1$ implies that this is true for all $x$. Letting $\epsilon\to 0$, one has \begin{equation*} 0\leq \la \nabla f_2(\ox)-v, x-\ox\ra +f_1(x)-f_1(\ox)\; \mbox{\rm for }x\in \R^n. \end{equation*} By definition, $v-\nabla f_2(\ox)\in \partial f_1(\ox)$, and hence $v\in \partial f_1(\ox)+\nabla f_2(\ox).$ We have proved the inclusion $\subseteq$. The opposite inclusion follows from the definition. $\h$ Let us now use subgradients of the norm function to derive Torricelli's solution for the Fermat-Torricelli problem. Given two nonzero vectors $u$ and $v$, define \begin{equation*} \cos (u, v)=\dfrac{\la u, v\ra}{\|u\| \|v\|}. \end{equation*} For $\ox\neq a_i$, let \begin{equation*}\label{v} v_i=\dfrac{\ox-a_i}{\|\ox-a_i\|}, \; i=1,2,3. \end{equation*} Geometrically, $v_i$ is the unit vector pointing in the direction from the vertex $a_i$ to $\ox$. Observe that the classical Fermat-Torricelli problem always has a unique solution even if three given points are on the same line. In the latter case, the middle point is the solution of the problem. \begin{Proposition}\label{3point} Consider the Fermat-Torricelli problem given by three points $a_1, a_2, a_3$. \\ {\rm (i)} Suppose $\ox\notin\{a_1, a_2, a_3\}$. 
Then $\ox$ is the solution of the problem if and only if \begin{equation*} \cos(v_1, v_2)=\cos(v_2, v_3)=\cos(v_3, v_1)=-1/2. \end{equation*} {\rm (ii)} Suppose $\ox\in \{a_1, a_2, a_3\}$, say $\ox=a_1$. Then $\ox$ is the solution of the problem if and only if \begin{equation*} \cos(v_2, v_3)\leq -1/2. \end{equation*} {\bf Proof: }(i) In this case, the function $\ph$ given by (\ref{cost}) is Fr\'echet differentiable at $\ox$. Since $\ph$ is convex, $\ox$ is the solution of the Fermat-Torricelli problem if and only if \begin{equation*} \nabla \ph(\ox)=v_1+v_2+v_3=0. \end{equation*} Since $\|v_i\|=1$ for $i=1,2,3$, one has \begin{align*} &\la v_1, v_2\ra +\la v_1, v_3\ra=-1\\ &\la v_2, v_1\ra +\la v_2, v_3\ra =-1\\ &\la v_3, v_1\ra +\la v_3, v_2\ra =-1. \end{align*} Solving this system of equations yields \begin{equation*} \la v_i, v_j\ra =\cos(v_i,v_j)=-1/2\; \mbox{\rm for } i\neq j, i,j\in \{1,2,3\}. \end{equation*} Moreover, if $\la v_i, v_j\ra =-1/2$ for $i\neq j$, $i,j\in \{1,2,3\}$, then \begin{equation*} \|v_1+v_2+v_3\|^2=\sum_{i=1}^3\|v_i\|^2+\sum_{i,j=1, i\neq j}^3\la v_i, v_j\ra =0. \end{equation*} It follows that $v_1+v_2+v_3=0$. \\[1ex] (ii) By the subdifferential Fermat rule (\ref{fermat}) and the subdifferential sum rule (\ref{srl}), $\ox=a_1$ is the solution of the Fermat-Torricelli problem if and only if \begin{equation*} 0\in \partial \ph(a_1)=\B+v_2+v_3. \end{equation*} This is equivalent to $\|v_2+v_3\|^2\leq 1$ or $\|v_2\|^2+\|v_3\|^2+2\la v_2, v_3\ra\leq 1$. Since $v_2$ and $v_3$ are unit vectors, we obtain \begin{equation*} \la v_2, v_3\ra =\cos(v_2, v_3)\leq -1/2. \end{equation*} The proof is now complete. $\h$ \begin{Example}{\rm Let us discuss the construction of the solution of the Fermat-Torricelli problem in the plane. Consider the Fermat-Torricelli problem given by three points $A$, $B$, and $C$ as in the figure below.
If one of the angles of the triangle $ABC$ is greater than or equal to $120^\circ$, then the corresponding vertex is the solution of the problem by Proposition \ref{3point} (ii). Let us consider the case where none of the angles of the triangle is greater than or equal to $120^\circ$. Construct two equilateral triangles $ABD$ and $ACE$ and let $S$ be the intersection of $DC$ and $BE$ as in the figure. Two quadrilaterals $ADBC$ and $ABCE$ are convex, and hence $S$ lies inside the triangle $ABC$. It is clear that two triangles $DAC$ and $BAE$ are congruent (SAS). A rotation of $60^\circ$ about $A$ maps the triangle $DAC$ to the triangle $BAE$. The rotation maps $CD$ to $BE$, so $\angle DSB=60^\circ$. Let $T$ be the image of $S$ through this rotation. Then $T$ belongs to $BE$. It follows that $\angle AST=\angle ASE=60^\circ$. Moreover, $\angle DSA=60^\circ$, and hence $\angle BSA=120^\circ$. It is now clear that $\angle ASC=120^\circ$ and $\angle BSC=120^\circ$. By Proposition \ref{3point} (i), the point $S$ is the solution of the problem.} \begin{center} \includegraphics[scale=0.70]{Fig1.eps} \end{center} \end{Example} \section{Weiszfeld's Algorithm} In this section, we revisit Kuhn's proof \cite{k} of the convergence of Weiszfeld's algorithm \cite{w} for solving the Fermat-Torricelli problem (\ref{ft}). With some additional ingredients of convex analysis, we are able to provide a clearer picture of Kuhn's proof. Throughout this section, we assume that $a_i$ for $i=1,\ldots,m$ are not collinear. The gradient of the function $\ph$ given by (\ref{cost}) is \begin{equation*} \nabla \ph(x)=\sum_{i=1}^m \dfrac{x-a_i}{\|x-a_i\|}, \; x\notin\{a_1, a_2, \ldots, a_m\}. \end{equation*} Solving the equation $\nabla \ph(x)=0$ gives \begin{equation*} x=\dfrac{\sum_{i=1}^m \dfrac{a_i}{\|x-a_i\|}}{\sum_{i=1}^m\dfrac{1}{\|x-a_i\|}}=:F(x). \end{equation*} For continuity, define $F(x):=x$ for $x\in \{a_1, a_2, \ldots, a_m\}$.
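Numerically, the map $F$ just defined can be iterated to a fixed point. The following Python sketch is our own illustration (the function name, starting point, tolerance, and stopping rule are choices not made in the paper); for an equilateral triangle the iterates approach the centroid, which is the Torricelli point in that case.

```python
import math

def weiszfeld(points, x0, tol=1e-10, max_iter=10_000):
    """Iterate x_{k+1} = F(x_k), where F is the algorithm map of the paper."""
    x = list(x0)
    for _ in range(max_iter):
        num = [0.0] * len(x)
        den = 0.0
        for a in points:
            d = math.dist(x, a)
            if d == 0.0:          # x hit a vertex: F(x) = x by convention
                return x
            w = 1.0 / d           # weight 1 / ||x - a_i||
            den += w
            num = [s + w * ai for s, ai in zip(num, a)]
        x_new = [s / den for s in num]
        if math.dist(x_new, x) < tol:
            return x_new
        x = x_new
    return x

# Equilateral triangle: the Fermat-Torricelli point is the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
sol = weiszfeld(pts, x0=(0.2, 0.1))
print(sol)  # close to the centroid (0.5, 0.288675...)
```

The `d == 0.0` branch mirrors the convention $F(a_i):=a_i$; as Kuhn's example shows, an iterate landing exactly on a non-optimal vertex is precisely the situation in which the method can stall.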
Weiszfeld introduced the following algorithm: choose a starting point $x_0\in \R^n$ and define \begin{equation*} x_{k+1}=F(x_k)\; \mbox{\rm for }k\in \N. \end{equation*} He also claimed that if $x_0\notin \{a_1, a_2, \ldots, a_m\}$, where $a_i$ for $i=1,\ldots,m$ are not collinear, then $(x_k)$ converges to the unique optimal solution of the problem. A correct statement and the proof of the convergence were given by Kuhn in 1972. The proposition below guarantees that the function value decreases after each iteration; see \cite[Subsection 3.1]{k}. \begin{Proposition}\label{dp} If $F(x)\neq x$, then $\ph(F(x))<\ph(x)$. \end{Proposition} {\bf Proof: }It is clear that $x$ is not a vertex, since otherwise, $F(x)=x$. Moreover, $F(x)$ is the unique minimizer of the following strictly convex function: \begin{equation*} g(z)=\sum_{i=1}^m \dfrac{\|z-a_i\|^2}{\|x-a_i\|}. \end{equation*} Since $F(x)\neq x$, one has $g(F(x))<g(x)=\ph(x)$. Moreover, \begin{align*} g(F(x))&=\sum_{i=1}^m\dfrac{\|F(x)-a_i\|^2}{\|x-a_i\|}\\ &=\sum_{i=1}^m \dfrac{(\|x-a_i\|+\|F(x)-a_i\|-\|x-a_i\|)^2}{\|x-a_i\|}\\ &=\ph(x)+2(\ph(F(x))-\ph(x))+\sum_{i=1}^m\dfrac{(\|F(x)-a_i\|-\|x-a_i\|)^2}{\|x-a_i\|}. \end{align*} It follows that \begin{equation*} 2\ph(F(x))+\sum_{i=1}^m\dfrac{(\|F(x)-a_i\|-\|x-a_i\|)^2}{\|x-a_i\|}<2\ph(x). \end{equation*} Therefore, $\ph(F(x))<\ph(x)$. $\h$ The next two propositions show the behavior of the \emph{algorithm mapping} $F$ near a vertex and deal with the case where a vertex is the solution of the problem. Let us first present a necessary and sufficient condition for a vertex to be the optimal solution of the problem. It can be used to easily derive the result in \cite[Subsection 2.1]{k}. Define \begin{equation*} R_k:=\sum_{i=1, i\neq k}^m\dfrac{a_i-a_k}{\|a_i-a_k\|}.
\end{equation*} \begin{Proposition}\label{lm2} The vertex $a_k$ is the optimal solution of the problem if and only if $$\|R_k\|\leq 1.$$ \end{Proposition} {\bf Proof: }By the subdifferential Fermat rule (\ref{fermat}) and the subdifferential sum rule from Proposition \ref{sr}, the vertex $a_k$ is the optimal solution of the problem if and only if \begin{equation*} 0\in \partial \ph (a_k)=-R_k+\B. \end{equation*} This is equivalent to $\|R_k\|\leq 1$. $\h$ Proposition \ref{lm2} allows us to simplify the proof of the following result in \cite[Subsection 3.2]{k}. \begin{Proposition}\label{k} Suppose that $a_k$ is not the optimal solution. Then there exists $\delta>0$ such that $0<\|x-a_k\|\leq \delta$ implies that there exists a positive integer $s$ with \begin{equation*} \|F^s(x)-a_k\|>\delta\; \mbox{\rm and } \|F^{s-1}(x)-a_k\|\leq \delta. \end{equation*} \end{Proposition} {\bf Proof: }For any $x$ that is not a vertex, one has \begin{align*} F(x)=\dfrac{\sum_{i=1}^m\dfrac{a_i}{\|x-a_i\|}}{\sum_{i=1}^m\dfrac{1}{\|x-a_i\|}}. \end{align*} Then \begin{equation*} F(x)-a_k=\dfrac{\sum_{i=1, i\neq k}^m\dfrac{a_i-a_k}{\|x-a_i\|}}{\sum_{i=1}^m\dfrac{1}{\|x-a_i\|}}. \end{equation*} Thus, \begin{equation*} \lim_{x\to a_k}\dfrac{F(x)-a_k}{\|x-a_k\|}=\lim_{x\to a_k}\dfrac{\sum_{i=1, i\neq k}^m\dfrac{a_i-a_k}{\|x-a_i\|}}{1+\sum_{i=1, i\neq k}^m\dfrac{\|x-a_k\|}{\|x-a_i\|}}=R_k. \end{equation*} By Proposition \ref{lm2}, \begin{equation*} \lim_{x\to a_k}\dfrac{\|F(x)-a_k\|}{\|x-a_k\|}=\|R_k\|>1. \end{equation*} Thus, there exist $\epsilon>0$ and $\delta>0$ such that \begin{equation*} \dfrac{\|F(x)-a_k\|}{\|x-a_k\|}>1+\epsilon \; \mbox{\rm whenever }0<\|x-a_k\|<\delta, \end{equation*} and such that $a_i\notin \B(a_k; \delta)$ for $i\neq k$. The conclusion then follows easily. $\h$ We finally present Kuhn's statement and proof for the convergence of Weiszfeld's algorithm; see \cite[Subsection 3.4]{k}. \begin{Theorem}Let $(x_k)$ be the sequence generated by Weiszfeld's algorithm.
Suppose that $x_k\notin\{a_1, a_2, \ldots, a_m\}$ for $k\geq 0$. Then $(x_k)$ converges to the optimal solution $\ox$ of the problem. \end{Theorem} {\bf Proof: }In the case where $x_k=x_{k+1}$ for some $k=k_0$, one has that $(x_k)$ is constant for $k\geq k_0$. Thus, it converges to $x_{k_0}$. Since $F(x_{k_0})=x_{k_0}$ and $x_{k_0}$ is not a vertex, $x_{k_0}$ is the solution of the problem. So we can assume that $x_{k+1}\neq x_k$ for every $k$. By Proposition \ref{dp}, the sequence $(\ph(x_k))$ is nonnegative and decreasing, so it converges. It follows that \begin{equation}\label{dc} \lim_{k\to\infty} (\ph(x_k)-\ph(x_{k+1}))=0. \end{equation} By definition, for $k\geq 1$, $x_k\in \mbox{\rm co }\{a_1, \ldots, a_m\}$, which is a compact set. Then $(x_k)$ has a subsequence $(x_{k_\ell})$ converging to a point $\oz$. It suffices to prove that $\oz=\ox$. By (\ref{dc}), \begin{equation*} \lim_{\ell\to\infty}(\ph(x_{k_\ell})-\ph(F(x_{k_\ell})))=0. \end{equation*} By continuity, $\ph(\oz)=\ph(F(\oz))$, which implies $F(\oz)=\oz$. If $\oz$ is not a vertex, then $\oz$ is the solution of the problem, so $\oz=\ox$. Let us consider the case where $\oz$ is a vertex, say $a_1$. Suppose by contradiction that $\oz\neq \ox$. Choose $\delta$ sufficiently small such that the property in Proposition \ref{k} holds and $\B(a_1; \delta)$ contains neither $\ox$ nor $a_i$ for $i=2,\ldots,m$. Since $x_{k_\ell}\to a_1=\oz$, we can assume without loss of generality that the sequence is contained in $\B(a_1; \delta)$. For $x=x_{k_1}$, choose $q_1$ such that $x_{q_1}\in \B(a_1; \delta)$ and $F(x_{q_1})\notin \B(a_1; \delta)$. Choosing an index $k_\ell>q_1$ and applying Proposition \ref{k}, we find $q_2>q_1$ such that $x_{q_2}\in \B(a_1; \delta)$ and $F(x_{q_2})\notin \B(a_1; \delta)$. Repeating this procedure, we find $(x_{q_\ell})$ with $x_{q_\ell}\in \B(a_1; \delta)$ and $F(x_{q_\ell})\notin \B(a_1; \delta)$. Extracting a further subsequence, we can assume that $x_{q_\ell}\to \bar q$.
By construction, one has $F(\bar q)=\bar q$. If $\bar q$ is not a vertex, then it is the solution, which is a contradiction because the solution $\ox$ is not in $\B(a_1; \delta)$. Thus, $\bar q$ is a vertex, which must be $a_1$. Then \begin{equation*} \lim_{\ell\to\infty}\dfrac{\|F(x_{q_\ell})-a_1\|}{\|x_{q_\ell}-a_1\|}=\infty. \end{equation*} This contradicts Proposition \ref{k}. $\h$\\[1ex] {\bf Acknowledgement.} The author would like to thank Prof. Joel Shapiro for comments that helped improve the presentation of the paper.
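For the reader who wants to experiment, the iteration $x_{k+1}=F(x_k)$ studied above can be run numerically. The sketch below assumes the objective is the Fermat-Torricelli function $\ph(x)=\sum_{i=1}^m\|x-a_i\|$ and takes for $F$ the classical Weiszfeld operator; both concrete choices are illustrative assumptions, not statements quoted from the theorem.

```python
import math

def weiszfeld(points, x0, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration x_{k+1} = F(x_k) for minimizing
    phi(x) = sum_i ||x - a_i||.  Here F is the classical Weiszfeld
    operator (an assumed concrete form of the operator F above)."""
    x = x0
    for _ in range(max_iter):
        # The analysis above excludes the case of landing on a vertex a_i.
        if any(math.dist(x, a) == 0.0 for a in points):
            break
        # Weiszfeld map: average of the a_i weighted by 1/||x - a_i||.
        wsum = sum(1.0 / math.dist(x, a) for a in points)
        nxt = tuple(sum(a[j] / math.dist(x, a) for a in points) / wsum
                    for j in range(len(x)))
        if math.dist(nxt, x) < tol:
            return nxt
        x = nxt
    return x

# For an equilateral triangle the Fermat-Torricelli point is the centroid.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
fermat = weiszfeld(tri, x0=(0.2, 0.1))
```

The decreasing-$\ph$ property proved in Proposition \ref{dp} is what guarantees that such an iteration cannot cycle.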
TITLE: Using Monte Carlo to determine uncertainty. QUESTION [0 upvotes]: In my thesis there are some uncertain inputs (for example in the geometry: the diameter of a cylinder, the height, ..., or the temperature of the inlet fluid) and I want to know their effect on my answer. It seems that a simple approach could be Monte Carlo. I tried to learn about it, but the Monte Carlo methods in textbooks seem irrelevant to my aim. Assume there are two uncertain inputs varying over [-5, 5] and I have a code that can calculate the output for each input. I want to know why we don't use an organized sample instead of a random one; that is, why we don't use sample 1 instead of sample 2 in the figure below. I would also be grateful if you could suggest a source to study for using Monte Carlo in my thesis. Thanks. REPLY [0 votes]: I don't think you want sample 1 or sample 2. Do you have a uniform error on [-5,5], or is the error a Gaussian distribution? If it is Gaussian, you want to sample a Gaussian distribution, not a random or uniform one. Note that this method of quantifying the error is often called a form of Uncertainty Quantification, or UQ for short.
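To make the reply concrete, here is a minimal Monte Carlo uncertainty-propagation sketch in Python. The cylinder-volume model and all parameter values are illustrative assumptions, not taken from the question: sample each uncertain input from a Gaussian, push every sample through the model, and report the mean and standard deviation of the output.

```python
import math
import random

def cylinder_volume(d, h):
    """Model under study (illustrative): volume of a cylinder with
    diameter d and height h."""
    return math.pi * (d / 2.0) ** 2 * h

def monte_carlo_uq(model, nominal, sigma, n=100_000, seed=0):
    """Propagate Gaussian input uncertainty through `model`.

    `nominal` and `sigma` are dicts of parameter means and standard
    deviations; returns (mean, std) of the model output over n samples.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        draw = {k: rng.gauss(mu, sigma[k]) for k, mu in nominal.items()}
        samples.append(model(**draw))
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)

# 1% relative uncertainty on both inputs (assumed values).
mean, std = monte_carlo_uq(cylinder_volume,
                           nominal={"d": 10.0, "h": 50.0},
                           sigma={"d": 0.1, "h": 0.5})
```

The output standard deviation is the propagated uncertainty; a structured (grid) sample would weight the tails of the input distribution incorrectly, which is why the reply recommends sampling the actual error distribution.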
\section{Bases of Finitely Generated Free Module have Equal Cardinality} Tags: Module Theory \begin{theorem} Let $R$ be a [[Definition:Commutative Ring with Unity|commutative ring with unity]]. Let $M$ be a [[Definition:Free Module|free]] $R$-[[Definition:Module|module]]. Let $M$ be [[Definition:Finitely Generated Module|finitely generated]]. Let $B$ and $C$ be [[Definition:Basis of Module|bases]] of $M$. Then $B$ and $C$ are [[Definition:Finite|finite]] and have the same [[Definition:Cardinality of Finite Set|cardinality]]. \end{theorem}
TITLE: If the work done by a force along a closed path is zero, is it necessarily conservative? QUESTION [1 upvotes]: I just had a simple doubt. If a force is conservative, we know that the work done by it around a closed path is zero. I believe the converse should also be true. I can't think of any counterexamples in which the work done by the force along a closed path is zero but the force is non-conservative. However, when asked by my professor how to check whether a force is conservative or not, I suggested the method of checking the work done along a closed path, but he rejected it, saying it wasn't always true. Can you give me a counterexample, or explain why it may not always be true? REPLY [5 votes]: A field is conservative if and only if the work around any closed path is $0$. Therefore, if a field is conservative then the work around a single chosen path is guaranteed to be $0$, but this does not mean that if we have a field and a single path has a work of $0$ then the field is conservative, as we have only checked one path, not all paths$^*$. A simple yet contrived example is a field described by $$\mathbf F(x,y)= \begin{cases} F\,\hat y, & \text{for $x\geq0$} \\ -F\,\hat y, & \text{for $x<0$} \end{cases}$$ You could look at the work done around a closed path where the sign of $x$ does not change and find that the work is $0$. However, if you look at the work done along a closed path where the sign of $x$ does change, then you could get paths where the work is not $0$. An example of such a path would be a square path that is bisected by the $x=0$ line. Since we have found a closed path where the work is not $0$, the field is not conservative, even though there do exist closed paths where the work is $0$. $^*$Of course, there are other ways to check if a field is conservative besides explicitly checking the work along every possible path.
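The counterexample above is easy to verify numerically. The sketch below (function names are mine, not from the answer) approximates the line integral $\oint \mathbf F\cdot d\boldsymbol\ell$ with a midpoint rule along polygonal paths, taking $F=1$: it gives $0$ for a square in the region $x>0$ and $4F$ for a square bisected by the line $x=0$.

```python
def F(x, y):
    """Piecewise field: +y-hat for x >= 0, -y-hat for x < 0 (with F = 1)."""
    return (0.0, 1.0) if x >= 0 else (0.0, -1.0)

def work(vertices, n=1000):
    """Work of F around the closed polygon `vertices` (midpoint rule)."""
    total = 0.0
    m = len(vertices)
    for i in range(m):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % m]
        dx, dy = (x1 - x0) / n, (y1 - y0) / n
        for k in range(n):
            # Evaluate the field at the midpoint of each sub-segment.
            xm, ym = x0 + (k + 0.5) * dx, y0 + (k + 0.5) * dy
            fx, fy = F(xm, ym)
            total += fx * dx + fy * dy
    return total

# Square entirely in x > 0: the two vertical sides cancel, so the work is 0.
w_right = work([(1, -1), (3, -1), (3, 1), (1, 1)])
# Square bisected by x = 0: both vertical sides add, giving work 4F != 0.
w_bisected = work([(-1, -1), (1, -1), (1, 1), (-1, 1)])
```

Only the vertical sides contribute, since the field has no $x$-component; the sign flip across $x=0$ is exactly what stops the left and right sides from cancelling on the bisected square.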
TITLE: Closure of the image of an annulus under a non-polynomial entire function QUESTION [0 upvotes]: Let $f:\mathbb{C}\rightarrow\mathbb{C}$ be an entire function. How can one show that if $f$ is not a polynomial, then $\overline{f(A)}=\mathbb{C}$, where $A$ is the annulus $\{z\in\mathbb{C}: |z|>r\}$, for every $r>0$? If $f$ is a polynomial, then the function $g:\mathbb{C}\backslash\{0\}\rightarrow \mathbb{C}$, given by $z\mapsto g(z)=f(1/z)$, has a pole at $z=0$ (and vice versa). Does this help? REPLY [1 votes]: Hints: Suppose $f(A)$ is not dense. Then there exists an open disk $B(w,s)$ disjoint from $f(A)$. Note that $|\frac 1{f(z)-w}| \leq \frac 1 s$ for $|z| >r$. Let $z_1,z_2,...,z_n$ be the zeros of $f(z)-w$ in $\{z:|z| \leq r\}$, counted according to multiplicities. Let $g(z)=\frac {\prod_j (z-z_j)} {f(z)-w}$. Then $g$ is an entire function and there exist finite constants $C,D$ such that $|g(z)| \leq C|z|^{n}+D$. This implies that $g$ is a polynomial of degree at most $n$. Can you finish?
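As a quick numerical illustration of the claim (not part of the proof), take the non-polynomial entire function $f(z)=e^z$: every target value $w\neq 0$ has preimages of arbitrarily large modulus, so every annulus $\{|z|>r\}$ already maps onto a dense subset of $\mathbb{C}$. A minimal Python sketch:

```python
import cmath
import math

def preimage_in_annulus(w, r):
    """For f(z) = exp(z) and a target w != 0, return a z with |z| > r and
    f(z) = w.  The solutions z_k = Log(w) + 2*pi*i*k have |z_k| -> infinity,
    so such a z exists for every radius r."""
    k = 0
    while True:
        z = cmath.log(w) + 2j * math.pi * k
        if abs(z) > r:
            return z
        k += 1

# A preimage of w = 2 - 3i inside the annulus {|z| > 100}.
z = preimage_in_annulus(w=2 - 3j, r=100.0)
```

The same picture holds for any transcendental entire function; the polynomial-growth bound on $g$ in the hint is what rules out the non-dense case in general.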
\begin{document} \title[Third kind elliptic integrals and 1-motives] {Third kind elliptic integrals and 1-motives} \author{Cristiana Bertolin} \address{Dipartimento di Matematica, Universit\`a di Torino, Via Carlo Alberto 10, Italy} \email{cristiana.bertolin@unito.it} \subjclass[2010]{11J81, 11J95, 11G99} \keywords{1-motives, periods, third kind integrals} \dedicatory{with a letter of Y. Andr\'e \\ and an appendix by M. Waldschmidt } \begin{abstract} In \cite{B02} we showed that the Generalized Grothendieck's Period Conjecture applied to 1-motives, whose underlying semi-abelian variety is a product of elliptic curves and of tori, is equivalent to a transcendental conjecture involving elliptic integrals of the first and second kind, and logarithms of complex numbers. In this paper we investigate the Generalized Grothendieck's Period Conjecture in the case of 1-motives $M$ whose underlying semi-abelian variety is a \textit{non trivial extension} of a product of elliptic curves by a torus. This will imply the introduction of \textit{elliptic integrals of the third kind} for the computation of the periods of $M$, and therefore the Generalized Grothendieck's Period Conjecture applied to $M$ will be equivalent to a transcendental conjecture involving elliptic integrals of the first, second and third kind. \end{abstract} \maketitle \tableofcontents \section*{Introduction} Let $\cE$ be an elliptic curve defined over $\CC$ with Weierstrass coordinate functions $x$ and $y$.
On $\cE$ we have the differential of the first kind $\omega = \frac{dx}{y},$ which is holomorphic, the differential of the second kind $ \eta = -\frac{xdx}{y},$ which has a double pole with residue zero at each point of the lattice $\HH_1(\cE(\CC),\ZZ)$ and no other pole, and the differential of the third kind \[ \xi_Q = \frac{1}{2} \frac{y-y(Q)}{x - x(Q)} \frac{dx}{y}, \] for any point $Q $ of $ \cE(\CC), Q \not=0,$ whose residue divisor is $D=-(0)+(-Q).$ Let $\gamma_1, \gamma_2$ be two closed paths on $\cE(\CC)$ which build a basis for the lattice $\HH_1(\cE(\CC),\ZZ)$. In his Peccot lecture at the Coll\`ege de France in 1977, M. Waldschmidt observed that the periods of the Weierstrass $\wp$-function (\ref{eq:periods-wp}) are the elliptic integrals of the first kind $ \int_{\gamma_i} \omega = \omega_i$ $(i=1,2)$, the quasi-periods of the Weierstrass $\zeta$-function (\ref{eq:periods-zeta}) are the elliptic integrals of the second kind $ \int_{\gamma_i} \eta = \eta_i$ $(i=1,2)$, but \textit{there is no function whose quasi-quasi-periods are elliptic integrals of the third kind}. J.-P.~Serre answered this question by furnishing the function \[ f_q(z)= \frac{\sigma(z+q)}{\sigma(z) \sigma(q)} e^{-\zeta(q) z } \qquad \mathrm{with}\; q \in \CC \setminus \Lambda \] whose \textit{quasi-quasi periods} (\ref{eq:periods-fq}) are \textit{the exponentials of the elliptic integrals of the third kind} $ \int_{\gamma_i} \xi_Q = \eta_i q - \omega_i \zeta(q)$ $(i=1,2),$ where $q$ is an elliptic logarithm of the point $Q$. Consider now an extension $G$ of $\cE$ by $\GG_m$ parameterized by the divisor $D=(-Q)-(0)$ of $\mathrm{Pic}^0(\cE) \cong \cE^* = \underline{\Ext}^1(\cE,\GG_m)$. Since the three differentials $\{\omega, \eta,\xi_Q\}$ build a basis of the De Rham cohomology $\HH^1_{\dR}(G)$ of the extension $G$, elliptic integrals of the third kind play a role in the Generalized Grothendieck's Period Conjecture (\ref{eq:GCP}).
The aim of this paper is to understand this role applying the Generalized Grothendieck's Period Conjecture to 1-motives whose underlying semi-abelian variety is a \textit{non trivial extension} of a product of elliptic curves by a torus. At the end of this paper the reader can find \begin{itemize} \item an appendix by M. Waldschmidt in which he quotes transcendence results concerning elliptic integrals of the third kind; \item a letter of Y. Andr\'e containing an overview of Grothendieck's Period Conjecture and its generalization. \end{itemize} A 1-motive $M=[u:X \rightarrow G]$ over a sub-field $K$ of $\CC$ consists of a finitely generated free $\ZZ$-module $X$, an extension $G$ of an abelian variety by a torus, and a homomorphism $u:X \to G(K)$. Denote by $M_\CC $ the 1-motive defined over $\CC$ obtained from $M$ extending the scalars from $K$ to $\CC$. In \cite{D75} Deligne associates to the 1-motive $M$ \begin{itemize} \item its De Rham realization $\T_{\dR}(M)$: it is the finite dimensional $K$-vector space $\Lie (G^\natural)$, with $ M^\natural =[u:X \rightarrow G^\natural]$ the universal extension of $M$ by the vector group \par\noindent $\Hom(\Ext^1(M,\GG_a),\GG_a)$, \item its Hodge realization $\T_{\QQ}(M_\CC)$: it is the finite dimensional $\QQ$-vector space $\T_{\ZZ}(M_\CC) \otimes_\ZZ \QQ$, with $\T_{\ZZ}(M_\CC)$ the fibered product of $\Lie (G)$ and $X$ over $G$ via the exponential map $ \exp : \Lie (G) \to G$ and the homomorphism $u:X \to G.$ The $\ZZ$-module $\T_{\ZZ}(M_\CC)$ is in fact endowed with a structure of $\ZZ$-mixed Hodge structure, without torsion, of level $\leq 1$, and of type $\{(0,0),(0,-1),(-1,0), (-1,-1)\}.$ \end{itemize} Since the Hodge realizations attached to 1-motives are mixed Hodge structures, 1-motives are mixed motives. In particular they are the mixed motives coming geometrically from varieties of dimension $\leq 1$. 
In \cite[(10.1.8)]{D75}, Deligne shows that the De Rham and the Hodge realizations of $M$ are isomorphic \begin{equation} \label{eq:betaM} \beta_M: \T_{\dR}(M) \otimes_K \CC \longrightarrow \T_{\QQ}(M_\CC)\otimes_\QQ \CC. \end{equation} The \textit{periods of $M$} are the coefficients of the matrix which represents this isomorphism with respect to $K$-bases. By Nori's and Ayoub's works (see \cite{Ay14} and \cite{N00}), it is possible to endow the category of 1-motives with a tannakian structure with rational coefficients, and therefore to define the motivic Galois group \[\Galmot (M)\] of a 1-motive $M$ as the fundamental group of the tannakian sub-category $< M>^\otimes$ generated by $M$ (see \cite[Def 6.1]{D89} or \cite[Def 8.13]{D90}). Applying the Generalized Grothendieck's Period Conjecture proposed by Andr\'e (see conjecture (?!) of Andr\'e's letter) to 1-motives we get \begin{conjecture}[Generalized Grothendieck's Period Conjecture for 1-motives] Let $M$ be a 1-motive defined over a sub-field $K$ of $\CC$, then \begin{equation} \mathrm{tran.deg}_{\QQ}\, K (\mathrm{periods}(M)) \geq \dim \Galmot (M) \label{eq:GCP} \end{equation} where $K (\mathrm{periods}(M))$ is the field generated over $K$ by the periods of $M$. \end{conjecture} In \cite{B02} we showed that the conjecture (\ref{eq:GCP}) applied to a 1-motive of type \[ M=[ u:\ZZ^{r} \, \longrightarrow \,\Pi^n_{j=1} {\cE}_j \times {\GG}_m^s] \] is equivalent to the elliptico-toric conjecture (see \cite[1.1]{B02}) which involves elliptic integrals of the first and second kind and logarithms of complex numbers.
Consider now the 1-motive \begin{equation}\label{eq:M} M=[ u:\ZZ^{r} \, \longrightarrow \, G] \end{equation} where $G$ is a \textit{non trivial} extension of a product $\Pi^n_{j=1} \cE_j $ of pairwise not isogenous elliptic curves by the torus $\GG_m^s.$ In this paper we introduce \textit{the 1-motivic elliptic conjecture} (\S \ref{conjecture}) which involves elliptic integrals of the first, second and third kind. Our main Theorem is that this 1-motivic elliptic conjecture is equivalent to the Generalized Grothendieck's Period Conjecture applied to the 1-motive (\ref{eq:M}) (Theorem \ref{thmMain}). The presence of elliptic integrals of the third kind in the 1-motivic elliptic conjecture corresponds to the fact that the extension $G$ underlying $M$ is not trivial. If in the 1-motivic elliptic conjecture we assume that the points defining the extension $G$ are trivial, then this conjecture coincides with the elliptico-toric conjecture stated in \cite[1.1]{B02} (see Remarks \ref{Rk1}). Observe that the 1-motivic elliptic conjecture also contains the Schanuel conjecture (see Remarks \ref{Rk2}). In Section \ref{EllipticIntegral} we recall basic facts about differential forms on elliptic curves. In Section \ref{periods} we study the short exact sequences which ``d\'evissent'' the Hodge and De Rham realizations of 1-motives and which are induced by the weight filtration of 1-motives. In Lemma \ref{lem:decomposition} we prove that instead of working with the 1-motive (\ref{eq:M}) we can work with a direct sum of 1-motives having $r=n=s=1$. Using Deligne's construction of a 1-motive starting from an open singular curve, in \cite[\S 2]{Ber08} D. Bertrand has computed the periods of the 1-motive (\ref{eq:M}) with $r=n=s=1.$ Putting together Lemma \ref{lem:decomposition} and Bertrand's calculation of the periods in the case $r=n=s=1$, we compute explicitly the periods of the 1-motive (\ref{eq:M}) (see Proposition \ref{proof-periods}).
In Section \ref{motivicGaloisgroup}, which is the most technical one, we study the motivic Galois group of 1-motives. We will follow neither Nori and Ayoub's theories nor Grothendieck's theory involving mixed realizations, but we will work in a completely geometrical setting using \textit{algebraic geometry on tannakian categories}. In Theorem \ref{eq:dimUR} we compute explicitly the dimension of the unipotent radical of the motivic Galois group of an arbitrary 1-motive over $K$. Then, as a corollary, we calculate explicitly the dimension of the motivic Galois group of the 1-motive (\ref{eq:M}) (see Corollary \ref{eq:dimGalMot}). For this last result, we restrict ourselves to a 1-motive whose underlying extension $G$ involves a product of elliptic curves, because only in this case we know explicitly the dimension of the reductive part of the motivic Galois group (in general, the dimension of the motivic Galois group of an abelian variety is not known). In Section \ref{conjecture} we state the 1-motivic elliptic conjecture and we prove our main Theorem \ref{thmMain}. In Section \ref{lowDim} we compute explicitly the Generalized Grothendieck's Period Conjecture in the low dimensional case, that is assuming $r=n=s=1$ in (\ref{eq:M}). In particular we investigate the cases where $\mathrm{End}(\cE) \otimes_\ZZ \QQ$-linear dependence and torsion properties affect the dimension of the unipotent radical of $\Galmot (M)$. \section*{Acknowledgements} I want to express my gratitude to M. Waldschmidt for pointing out to me the study of third kind elliptic integrals and for his appendix. I am very grateful to Y. Andr\'e for his letter and for the discussions we had about the motivic Galois group. I also thank D. Bertrand and P. Philippon for their comments on an earlier version of this paper. This paper was written during a two-month stay at the IHES. The author thanks the Institute for the wonderful work conditions.
\section*{Notation} Let $K$ be a sub-field of $\CC$ and denote by $\overline{K}$ its algebraic closure. A 1-motive $M=[u:X \rightarrow G]$ over $K$ consists of a group scheme $X$ which is locally for the \'etale topology a constant group scheme defined by a finitely generated free $\ZZ \,$-module, an extension $G$ of an abelian variety $A$ by a torus $T$, and a homomorphism $u:X \to G(K)$. In this paper we will consider above all 1-motives in which $X= \ZZ^r$ and $G$ is an extension of a finite product $\Pi^n_{j=1} \cE_j $ of elliptic curves by the torus $\GG_m^s$ (here $r,n$ and $s$ are integers greater than or equal to 0). There is a more symmetrical definition of 1-motives. In fact to have the 1-motive $M=[u:\ZZ^r \rightarrow G]$ is equivalent to have the 7-tuple $(\ZZ^r,\ZZ^s, \Pi^n_{j=1} \cE_j ,\Pi^n_{j=1} \cE_j^*, v ,v^*,\psi)$ where \begin{itemize} \item $\ZZ^s$ is the character group of the torus $\GG_m^s$ underlying the 1-motive $M$. \item $v:\ZZ^r \rightarrow \Pi^n_{j=1} \cE_j$ and $v^*:\ZZ^s \rightarrow \Pi^n_{j=1} \cE_j^*$ are two morphisms of $K$-group varieties (here $\cE_j^* := \underline{\Ext}^1(\cE_j,\GG_m)$ is the Cartier dual of the elliptic curve $\cE_j$). To have the morphism $v$ is equivalent to have $r$ points $P_k=(P_{1k}, \ldots, P_{nk})$ of $ \Pi^n_{j=1} \cE_j(K)$ with $k=1, \ldots, r$, whereas to have the morphism $v^*$ is equivalent to have $s$ points $Q_i=(Q_{1i}, \ldots, Q_{ni})$ of $ \Pi^n_{j=1} \cE_j^*(K)$ with $i=1, \ldots, s.$ Via the isomorphism $\underline{\Ext}^1(\Pi^n_{j=1}\cE_j,\GG_m^s) \cong (\Pi_{j=1}^n \cE_j^*)^s ,$ to have the $s$ points $Q_i=(Q_{1i}, \ldots, Q_{ni})$ is equivalent to have the extension $G$ of $\Pi^n_{j=1} \cE_j$ by $\GG_m^s$. \item $\psi$ is a trivialization of the pull-back $(v,v^*)^*\mathcal{P}$ via $(v,v^*)$ of the Poincar\'e biextension $\mathcal{P}$ of $(\Pi^n_{j=1} \cE_j,\Pi^n_{j=1} \cE_j^*)$ by $\GG_m$.
To have this trivialization $\psi$ is equivalent to have $r$ points $R_k \in G(K)$ with $k=1, \ldots, r$ such that the image of $R_k$ via the projection $G \to \Pi^n_{j=1} \cE_j$ is $P_k=(P_{1k}, \ldots, P_{nk})$, and so finally to have the morphism $u:\ZZ^r \rightarrow G. $ \end{itemize} The index $k$, $0 \leq k \leq r,$ is related to the lattice $\ZZ^r$, the index $j$, $0 \leq j \leq n,$ is related to the elliptic curves, and the index $i$, $0 \leq i \leq s,$ is related to the torus $\GG_m^s$. For $j=1, \ldots, n$, we index with a $j$ all the data related to the elliptic curve $\cE_j$: for example we denote by $\wp_j(z)$ the Weierstrass $\wp$-function of $\cE_j$, by $\omega_{j1}, \omega_{j2}$ its periods, ... On any 1-motive $M=[u:X \rightarrow G]$ an increasing filtration $\W_{\bullet}$, called the \textit{weight filtration} of $M$, is defined: $\W_{0}(M)=M, \W_{-1}(M)=[0 \to G], \W_{-2}(M)=[0 \to T].$ If we set ${\Gr}_{n}^{\W} := \W_{n} / \W_{n-1},$ we have ${\Gr}_{0}^{\W}(M)= [ X \to 0], {\Gr}_{-1}^{\W}(M)= [0 \to A]$ and $ {\Gr}_{-2}^{\W}(M)= [0 \to T].$ Two 1-motives $M_i=[u_i:X_i \rightarrow G_i]$ over $K$ (for $i=1,2$) are isogenous if there exists a morphism of complexes $(f_X,f_G):M_1 \to M_2$ such that $f_X:X_1 \to X_2$ is injective with finite cokernel, and $f_G:G_1 \to G_2$ is surjective with finite kernel. Since \cite[Thm (10.1.3)]{D75} is true modulo isogenies, two isogenous 1-motives have the same periods. Moreover, two isogenous 1-motives generate the same tannakian category and so they have the same motivic Galois group. Hence in this paper \textit{we can work modulo isogenies}. In particular the elliptic curves $\cE_1, \dots, \cE_n$ will be pairwise not isogenous. \section{Elliptic integrals of the third kind}\label{EllipticIntegral} Let $\cE$ be an elliptic curve defined over $\CC$ with Weierstrass coordinate functions $x$ and $y$. Set $\Lambda := \HH_1(\cE(\CC),\ZZ).
$ Let $\wp(z)$ be the Weierstrass $\wp$-function relative to the lattice $\Lambda$: it is a meromorphic function on $\CC$ having a double pole with residue zero at each point of $\Lambda$ and no other poles. Consider the elliptic exponential \begin{align} \nonumber \exp_{\cE}: \CC & \longrightarrow \cE(\CC) \subseteq \PP^2(\CC)\\ \nonumber z & \longmapsto \exp_{\cE}(z)=[\wp(z),\wp'(z),1] \end{align} whose kernel is the lattice $\Lambda.$ In particular the map $\exp_{\cE}$ induces a complex analytic isomorphism between the quotient $\CC / \Lambda$ and the $\CC$-valued points of the elliptic curve $\cE$. In this paper, we will use small letters for elliptic logarithms of points on elliptic curves which are written with capital letters, that is $\exp_{\cE}(p)=P \in \cE (\CC)$ for any $p \in \CC$. Let $ \sigma(z)$ be the Weierstrass $\sigma$-function relative to the lattice $\Lambda$: it is a holomorphic function on all of $\CC$ and it has simple zeros at each point of $\Lambda$ and no other zeros. Finally let $\zeta (z)$ be the Weierstrass $\zeta$-function relative to the lattice $\Lambda$: it is a meromorphic function on $\CC$ with simple poles at each point of $\Lambda$ and no other poles. We have the well-known equalities \[ \frac{d}{dz} \log \sigma(z)= \zeta(z) \quad \mathrm{and} \quad \frac{d}{dz} \zeta(z)= -\wp(z). \] Recall that a meromorphic differential 1-form is of the \emph{first kind} if it is holomorphic everywhere, of the \emph{second kind} if the residue at any pole vanishes, and of the \emph{third kind} in general. On the elliptic curve $\cE$ we have the following differential 1-forms: \begin{enumerate} \item the differential of the first kind \begin{equation}\label{eq:diffFirstk} \omega = \frac{dx}{y}, \end{equation} which has neither zeros nor poles and which is invariant under translation. We have that $\exp_{\cE}^{*}(\omega) = dz.$ \item the differential of the second kind \begin{equation}\label{eq:diffSecondk} \eta = -\frac{xdx}{y}.
\end{equation} In particular $\exp_{\cE}^{*}(\eta) = -\wp(z) dz$ which has a double pole with residue zero at each point of $\Lambda$ and no other poles. \item the differential of the third kind \begin{equation}\label{eq:diffThirdk} \xi_Q = \frac{1}{2} \frac{y-y(Q)}{x - x(Q)} \frac{dx}{y} \end{equation} for any point $Q $ of $ \cE(\CC), Q \not=0.$ The residue divisor of $\xi_Q$ is $-(0)+(-Q).$ If we denote $q \in \CC$ an elliptic logarithm of the point $Q$, that is $\exp_{\cE}(q)=Q$, we have that \[\exp_{\cE}^{*}(\xi_Q) = \frac{1}{2} \frac{\wp'(z)- \wp'(q)}{\wp(z) - \wp(q)} dz, \] which has residue -1 at each point of $\Lambda$. \end{enumerate} The 1-dimensional $\CC$-vector space of differentials of the first kind is $\HH^0(\cE, \Omega^1_\cE).$ The 1-dimensional $\CC$-vector space of differentials of the second kind modulo holomorphic differentials and exact differentials is $\HH^1(\cE, \mathcal{O}_\cE).$ In particular the first De Rham cohomology group $\HH^1_\dR(\cE)$ of the elliptic curve $\cE$ is the direct sum $\HH^0(\cE, \Omega^1_\cE) \oplus \HH^1(\cE, \mathcal{O}_\cE)$ of these two spaces and it has dimension 2. The $\CC$-vector space of differentials of the third kind is infinite dimensional. The inverse map of the complex analytic isomorphism $\CC / \Lambda \to \cE(\CC)$ induced by the elliptic exponential is given by the integration $\cE(\CC) \to \CC / \Lambda, P \to \int^{P}_{O} \omega \quad \mathrm{mod} \Lambda$, where O is the neutral element for the group law of the elliptic curve. Let $\gamma_1, \gamma_2$ be two closed paths on $\cE(\CC)$ which build a basis of $\HH_1(\cE_\CC,\QQ)$. Then \textit{the elliptic integrals of the first kind} $ \int_{\gamma_i} \omega = \omega_i$ $(i=1,2)$ are \textit{the periods of the Weierstrass $\wp$-function}: \begin{equation}\label{eq:periods-wp} \wp(z+\omega_i)= \wp(z) \quad \quad \mathrm{for} \; i=1,2. 
\end{equation} Moreover \textit{the elliptic integrals of the second kind} $ \int_{\gamma_i} \eta = \eta_i$ $(i=1,2)$ are \textit{the quasi-periods of the Weierstrass $\zeta$-function}: \begin{equation}\label{eq:periods-zeta} \zeta(z+\omega_i)= \zeta(z) + \eta_i \quad \quad \mathrm{for} \; i=1,2. \end{equation} Consider Serre's function \begin{equation}\label{eq:def-fq} f_q(z)= \frac{\sigma(z+q)}{\sigma(z) \sigma(q)} e^{-\zeta(q) z } \qquad \mathrm{with}\; q \in \CC \setminus \Lambda \end{equation} whose logarithmic differential is \begin{equation}\label{eq:expEXiq} \frac{f_q'(z)}{f_q(z)} dz = \frac{1}{2} \frac{\wp'(z)- \wp'(q)}{\wp(z) - \wp(q)} dz =\exp_{\cE}^{*}(\xi_Q) \end{equation} (see \cite{W84} and \cite[\S 2]{Ber08}). \textit{The exponentials of the elliptic integrals of the third kind} $ \int_{\gamma_i} \xi_Q = \eta_i q - \omega_i \zeta(q)$ $(i=1,2)$ are \textit{the quasi-quasi periods} of the function $f_q(z):$ \begin{equation}\label{eq:periods-fq} f_q(z+ \omega_i)= f_q(z) e^{\eta_i q - \omega_i \zeta(q)} \quad \quad \mathrm{for} \; i=1,2. \end{equation} As observed in \cite{W84}, we have that \begin{equation} \frac{f_q(z_1+ z_2)}{f_q(z_1)f_q( z_2)}= \frac{\sigma(q+z_1+z_2)\sigma(q) \sigma(z_1)\sigma(z_2)}{\sigma(q+z_1)\sigma(z_1+z_2)\sigma(q+z_2)}. \label{eq:fq-sigma} \end{equation} Consider now an extension $G$ of our elliptic curve $\cE$ by $\GG_m, $ which is defined over $\CC$. Via the isomorphism $\mathrm{Pic}^0(\cE) \cong \cE^* = \underline{\Ext}^1(\cE,\GG_m)$, to have the extension $G$ is equivalent to have a divisor $D=(-Q)-(0)$ of $\mathrm{Pic}^0(\cE) $ or a point $-Q$ of $ \cE^*(\CC)$. In this paper we identify $\cE$ with $\cE^*$. A basis of the first De Rham cohomology group $\HH^1_\dR(G)$ of the extension $G$ is given by $\{\omega, \eta, \xi_Q \}$. 
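Let us note in passing (a routine verification, included for the reader's convenience) that the quasi-quasi-period relation (\ref{eq:periods-fq}) follows directly from the quasi-periodicity law $\sigma(z+\omega_i)=-\sigma(z)\, e^{\eta_i (z+\omega_i/2)}$ of the Weierstrass $\sigma$-function: indeed
\[
f_q(z+\omega_i)
= \frac{\sigma(z+q+\omega_i)}{\sigma(z+\omega_i)\,\sigma(q)}\, e^{-\zeta(q)(z+\omega_i)}
= \frac{-\sigma(z+q)\, e^{\eta_i(z+q+\omega_i/2)}}{-\sigma(z)\, e^{\eta_i(z+\omega_i/2)}\,\sigma(q)}\, e^{-\zeta(q)z}\, e^{-\zeta(q)\omega_i}
= f_q(z)\, e^{\eta_i q-\omega_i \zeta(q)}.
\]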
Consider the semi-abelian exponential \begin{equation}\label{eq:semiablog} \exp_{G}: \CC^2 \longrightarrow G(\CC) \subseteq \PP^4(\CC) \end{equation} \[ (w,z) \longmapsto \exp_{G}(w,z)=\sigma(z)^3 \Big[\wp(z),\wp'(z),1, e^{w} f_q(z), e^{w} f_q(z) \Big( \wp(z) + \frac{\wp'(z)- \wp'(q)}{\wp(z)- \wp(q)} \Big) \Big] \] whose kernel is $\HH_1(G(\CC),\ZZ)$. A basis of the Hodge realization $\HH_1(G(\CC),\QQ)$ of the extension $G$ is given by a closed path $\delta_{Q}$ around $Q$ on $G(\CC)$ and two closed paths $\tilde{\gamma}_1, \tilde{\gamma}_2$ on $G(\CC)$ which lift a basis $\{\gamma_1, \gamma_2\}$ of $\HH_1(\cE_\CC,\QQ)$ via the surjection $ \HH_1(G_\CC,\QQ) \rightarrow \HH_1(\cE_\CC,\QQ).$ We have that \begin{equation}\label{eq:expGXiq} \exp_{G}^{*}(\xi_Q) = dw + \frac{f_q'(z)}{f_q(z)} dz. \end{equation} \section{Periods of 1-motives involving elliptic curves}\label{periods} Let $M=[u:X \to G]$ be a 1-motive over $K$ with $G$ an extension of an abelian variety $A$ by a torus $T$. As recalled in the introduction, to the 1-motive $M_{\CC}$ obtained from $M$ extending the scalars from $K$ to $\CC$, we can associate its Hodge realization ${\T}_{\QQ}(M_\CC)= (\Lie(G_\CC)\times_G X) \otimes_\ZZ \QQ $ which is endowed with the weight filtration (defined over the integers) ${\W}_{0}{\T}_{\ZZ}(M_\CC) =\Lie(G_\CC)\times_G X, {\W}_{-1}{\T}_{\ZZ}(M_\CC) = {\HH}_1(G_\CC,\ZZ), {\W}_{-2}{\T}_{\ZZ}(M_\CC) = {\HH}_1( T_\CC,\ZZ).$ In particular we have that ${\Gr}_0^{\W}{\T}_{\ZZ}(M_\CC)\cong X, {\Gr}_{-1}^{\W}{\T}_{\ZZ}(M_\CC) \cong {\HH}_{1}(A_\CC,\ZZ)$ and $ {\Gr}_{-2}^{\W}{\T}_{\ZZ}(M_\CC) \cong {\HH}_{1}(T_\CC,\ZZ).$ Moreover to $M$ we can associate its De Rham realization ${\T}_{\dR}(M) = \Lie (G^\natural)$, where $M^\natural=[X \rightarrow G^\natural]$ is the universal vectorial extension of $M$, which is endowed with the Hodge filtration ${\F}^0{\T}_{\dR}(M)= \ker \big( \Lie ( G^\natural) \rightarrow \Lie ( G) \big).$ The weight filtration induces for the Hodge realization
the short exact sequence \begin{equation}\label{eq:Hodge} 0 \longrightarrow \HH_1(G_\CC,\ZZ) \longrightarrow \T_{\ZZ} ( M_\CC) \longrightarrow \T_{\ZZ} (X) \longrightarrow 0 \end{equation} which is not split in general. On the other hand, for the De Rham realization we have that \begin{lemma} The short exact sequence, induced by the weight filtration, \begin{equation}\label{eq:DRham0} 0 \longrightarrow {\T}_{\dR}(G) \longrightarrow {\T}_{\dR}(M) \longrightarrow {\T}_{\dR}(X) \longrightarrow 0 \end{equation} is canonically split. \end{lemma} \begin{proof} Consider the short exact sequence $0 \to G \to M \to X[1] \to 0$. Applying $\Hom(-,\GG_a)$ we get the short exact sequence of finite-dimensional $K$-vector spaces \[ 0 \longrightarrow \Hom (X,\GG_a) \longrightarrow \Ext^1(M,\GG_a) \longrightarrow \Ext^1(G,\GG_a) \longrightarrow 0 \] Taking the dual we obtain the short exact sequence \[ 0 \longrightarrow \Hom (\Ext^1(G,\GG_a),\GG_a) \longrightarrow \Hom (\Ext^1(M,\GG_a),\GG_a) \longrightarrow X \longrightarrow 0 \] which is split since $\Ext^1(X, \GG_a)=0$. Now consider the composite of the section \pn $X \to \Hom (\Ext^1(M,\GG_a),\GG_a)$ with the inclusion $\Hom (\Ext^1(M,\GG_a),\GG_a) \to G^\natural$. Recalling that ${\F}^0{\T}_{\dR}(M) \cong \Hom (\Ext^1(M,\GG_a),\GG_a)$, if we take Lie algebras we get the arrow $\T_{\dR} (X) = X \otimes K \to {\F}^0{\T}_{\dR}(M) \to {\T}_{\dR}(M) = \Lie (G^\natural)$ which is a section of the exact sequence (\ref{eq:DRham0}). \end{proof} Denote by $\HH_{\dR}(M) $ the dual $K$-vector space of ${\T}_{\dR}(M)$. By the above Lemma we have that \begin{equation}\label{eq:DRham} \HH_{\dR}(M) = \HH_{\dR}^1(G) \oplus \HH_{\dR}^1(X) . \end{equation} Consider now a 1-motive $M=[u:\ZZ^r \rightarrow G]$ defined over $K$, where $G$ is an extension of a finite product $\Pi^n_{j=1} \cE_j $ of elliptic curves by the torus $\GG_m^s$.
Let $\{ z_k \}_{k=1, \dots, r}$ be a basis of $\ZZ^r$ and let $\{ t_i \}_{i=1, \dots, s}$ be a basis of the character group $\ZZ^s$ of $\GG_m^s$. For the moment, in order to simplify notation, denote by $A$ the product of elliptic curves $\Pi^n_{j=1} \cE_j$. Denote by $G_i$ the push-out of $G$ by $t_i: \GG_m^s \to \GG_m$, which is the extension of $A$ by $\GG_m$ parametrized by the point $v^*(t_i)=Q_i=(Q_{1i}, \dots, Q_{ni})$, and by $R_{ik}$ the $K$-rational point of $G_i$ above $v(z_k)=P_k=(P_{1k}, \dots, P_{nk})$. Consider the 1-motive defined over $K$ \[M_{ik}= [u_{ik}:z_k \ZZ \rightarrow G_i]\] with $u_{ik}(z_k)= R_{ik} $ for $i=1, \dots, s$ and $k=1, \dots, r$. In \cite[Thm 1.7]{B02-2} we have proved geometrically that the 1-motives $M=[u:\ZZ^r \rightarrow G]$ and $\oplus_{i=1}^s \oplus_{k=1}^r M_{ik}$ generate the same tannakian category. Via the isomorphism $\underline{\Ext}^1(\Pi^n_{j=1}\cE_j,\GG_m) \cong \Pi_{j=1}^n \underline{\Ext}^1(\cE_j,\GG_m) ,$ the extension $G_i$ of $A$ by $\GG_m$ parametrized by the point $v^*(t_i)=Q_i=(Q_{1i}, \dots, Q_{ni})$ corresponds to the product of extensions $G_{1i} \times G_{2i} \times \dots \times G_{ni}$ where $ G_{ji}$ is an extension of $\cE_j$ by $\GG_m$ parametrized by the point $Q_{ji}$, and the $K$-rational point $R_{ik}$ of $G_i$ living above $P_k=(P_{1k}, \dots, P_{nk})$ corresponds to the $K$-rational points $(R_{1ik}, \dots, R_{nik})$ with $R_{jik} \in G_{ji}(K) $ living above $P_{jk} \in \cE_j (K).$ Consider the 1-motive defined over $K$ \begin{equation}\label{eq:jik} M_{jik}= [u_{jik}:z_k \ZZ \rightarrow G_{ji}] \end{equation} with $u_{jik}(z_k)= R_{jik} $ for $i=1, \dots, s$, $k=1, \dots, r$ and $j=1, \dots, n.$ Let $(l_{jik},p_{jk}) \in \CC^2$ be a semi-abelian logarithm (\ref{eq:semiablog}) of $R_{jik},$ that is \begin{equation}\label{eq:l} \exp_{G_{ji}} (l_{jik},p_{jk}) = R_{jik}.
\end{equation} \begin{lemma}\label{lem:decomposition} The 1-motives $M$ and $\oplus_{i=1}^s \oplus_{k=1}^r \oplus_{j=1}^n M_{jik}$ generate the same tannakian category. \end{lemma} \begin{proof} As in \cite[Thm 1.7]{B02-2} we will work geometrically and because of loc. cit. it is enough to show that the 1-motives $\oplus_{i=1}^s \oplus_{k=1}^r M_{ik}$ and $\oplus_{i=1}^s \oplus_{k=1}^r \oplus_{j=1}^n M_{jik}$ generate the same tannakian category. Clearly \[ \oplus_{j=1}^n \Big( \oplus_{i=1}^s \oplus_{k=1}^r M_{ik} \big/ [0 \to \Pi_{1 \leqslant l \leqslant n \atop l \not= j} G_{li} ] \Big) = \oplus_{i=1}^s \oplus_{k=1}^r \oplus_{j=1}^n M_{jik} \] and so $< \oplus_{i=1}^s \oplus_{k=1}^r \oplus_{j=1}^n M_{jik}>^\otimes \; \; \subset \; \; < \oplus_{i=1}^s \oplus_{k=1}^r M_{ik}>^\otimes.$ On the other hand, if $\mathrm{d}_\ZZ: \ZZ \to \ZZ^n$ is the diagonal morphism, for fixed $i$ and $k$ we have that \[ \oplus_{j=1}^n M_{jik} \big/ [ \ZZ^n / \mathrm{d}_\ZZ(\ZZ) \to 0] = [\small{\Pi}_{j} u_{jik} : \mathrm{d}_\ZZ(\ZZ) \longrightarrow G_{1i} \times G_{2i} \times \dots \times G_{ni}] = [u_{ik} :\ZZ \longrightarrow G_i] =M_{ik} \] and so \[\oplus_{i=1}^s \oplus_{k=1}^r \Big( \oplus_{j=1}^n M_{jik} \big/ [ \ZZ^n / \mathrm{d}_\ZZ(\ZZ) \to 0] \Big) = \oplus_{i=1}^s \oplus_{k=1}^r M_{ik} \] that is $ < \oplus_{i=1}^s \oplus_{k=1}^r M_{ik}>^\otimes \; \; \subset \; \; < \oplus_{i=1}^s \oplus_{k=1}^r \oplus_{j=1}^n M_{jik}>^\otimes .$ \end{proof} The matrix which represents the isomorphism (\ref{eq:betaM}) for the 1-motive $M=[u:{\ZZ}^r \to G]$, where $G$ is an extension of $\Pi^n_{j=1} \cE_j $ by $\GG_m^s$, is a huge matrix difficult to write down. 
The above Lemma implies that, instead of studying this huge matrix, it is enough to study the $rsn$ matrices which represent the isomorphism (\ref{eq:betaM}) for the $rsn$ 1-motives $M_{jik}= [u_{jik}:z_k \ZZ \rightarrow G_{ji}].$ Following \cite[\S 2]{Ber08}, we now compute explicitly the periods of the 1-motive $M=[u:\ZZ \to G]$, where $G$ is an extension of one elliptic curve $\cE$ by the torus $\GG_m.$ We need Deligne's construction of $M$ starting from an open singular curve (see \cite[(10.3.1)-(10.3.2)-(10.3.3)]{D75}), which we recall briefly. Via the isomorphism $\mathrm{Pic}^0(\cE) \cong \cE^* = \underline{\Ext}^1(\cE,\GG_m)$, to have the extension $G$ of $\cE$ by $\GG_m$ underlying the 1-motive $M$ is equivalent to having the divisor $D=(-Q)-(0)$ of $\mathrm{Pic}^0(\cE)$ or the point $-Q$ of $\cE^*$. We assume $Q$ to be a non-torsion point. According to \cite[page 227]{M74}, to have the point $u(1)=R \in G(K)$ is equivalent to having a pair \[(P,g_R) \in \cE(K) \times K(\cE)^*\] where $\pi(R)=P \in \cE(K)$ (here $\pi: G \to \cE$ is the surjective morphism of group varieties underlying the extension $G$), and where $g_R: \cE \to \GG_m, x \mapsto R+ \rho(x) -\rho(x+P)$ (here $\rho: \cE \to G$ is a section of $\pi$), is a rational function on $\cE$ whose divisor is $T^{*}_{P}D-D=(-Q+P)-(P)-(-Q)+(0)$ (here $T_P: \cE \to \cE$ is the translation by the point $P$). We also assume $R$ to be a non-torsion point. Now pinch the elliptic curve $\cE$ at the two points $-Q$ and $0$, and puncture it at two $K$-rational points $P_2$ and $P_1$ whose difference (according to the group law of $\cE$) is $P$, that is $P=P_2-P_1.$ The motivic $\HH^1$ of the open singular curve obtained in this way from $\cE$ is the 1-motive $M=[u:\ZZ \rightarrow G]$, with $u(1)=R$.
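For concreteness, let us record a classical explicit formula for such a function (standard Weierstrass theory, not needed in the sequel): if $p$ and $q$ are elliptic logarithms of the points $P$ and $Q$, a rational function on $\cE$ with divisor $T^{*}_{P}D-D=(-Q+P)-(P)-(-Q)+(0)$ is given, in terms of the Weierstrass $\sigma$-function, by \[ g_R(z) \, = \, c \, \frac{\sigma(z+q-p)\, \sigma(z)}{\sigma(z-p)\, \sigma(z+q)}, \qquad c \in \CC^*, \] which is indeed elliptic, since the sum of its zeros $p-q$ and $0$ equals the sum of its poles $p$ and $-q$, so that the quasi-periodicity factors of the $\sigma$-functions cancel.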
We will apply Deligne's construction to each 1-motive $M_{jik}= [u_{jik}:z_k \ZZ \rightarrow G_{ji}]$ with $u_{jik}(z_k)= R_{jik}.$ \begin{proposition}\label{proof-periods} Choose the following basis of the $\QQ$-vector space $\T_{\QQ}(M_{jik \; \CC}):$ \begin{itemize} \item two closed paths $\tilde{\gamma}_{j1}, \tilde{\gamma}_{j2}$ on $G_{ji}(\CC)$ which lift the basis $\{\gamma_{j1}, \gamma_{j2}\}$ of $\HH_1(\cE_{j \;\CC},\QQ)$ via the surjection $ \HH_1(G_{ji \; \CC},\QQ) \rightarrow \HH_1(\cE_{j \;\CC},\QQ)$; \item a closed path $\delta_{Q_{ji} }$ around $-Q_{ji}$ on $G_{ji}(\CC)$ (here we identify $G_{ji}$ with the pinched elliptic curve $\cE_j$); and \item a closed path $\beta_{R_{jik}}$, which lifts the basis $\{z_k\}$ of $\T_{\QQ}(z_k \ZZ)$ via the surjection $ \T_{\QQ} ( M_{jik \; \CC}) \rightarrow \T_{\QQ} (z_k \ZZ) $, and whose restriction to $\HH_1(G_{ji \; \CC},\QQ)$ is a closed path $\beta_{R_{jik}|G_{ji}}$ on $G_{ji}(\CC)$ having the following properties: $\beta_{R_{jik}|G_{ji}}$ lifts a path $\beta_{P^1_{jk}P^2_{jk}}$ on $\cE_{j }(\CC) $ from $P^1_{jk}$ to $P^2_{jk}$ (with $P^2_{jk}-P^1_{jk}=P_{jk}$) via the surjection $ \HH_1(G_{ji \; \CC},\QQ) \rightarrow \HH_1(\cE_{j \;\CC},\QQ)$, and its restriction to $\HH_1(\GG_m,\QQ)$ is a path $\beta_{jik}$ on $\GG_m(\CC)=\CC^* $ from $1$ to $l_{jik}$ (\ref{eq:l}); \end{itemize} and the following basis of the $K$-vector space $\HH_{\dR}(M_{jik}):$ \begin{itemize} \item the differentials of the first kind $\omega_j=\frac{dx_j}{y_j}$ (\ref{eq:diffFirstk}) and of the second kind $\eta_j=-\frac{x_jdx_j}{y_j}$ (\ref{eq:diffSecondk}) of $\cE_j$; \item the differential of the third kind $\xi_{Q_{ji} }= \frac{1}{2} \frac{y_j-y_j(Q_{ji})}{x_j - x_j(Q_{ji})} \frac{dx_j}{y_j}$ (\ref{eq:diffThirdk}) of $\cE_j$, whose residue divisor is $D=(-Q_{ji})-(0)$ and which lifts the basis $\{\frac{dt_i}{t_i}\}$ of $\HH_{\dR}^1(\GG_m)$ via the surjection $\HH_{\dR}^1(G_{ji}) \rightarrow \HH_{\dR}^1(\GG_m)$; \item the
differential $df_j$ of a rational function $f_j$ on $\cE_j$ such that $f_j(P^2_{jk})$ differs from $f_j(P^1_{jk})$ by 1. \end{itemize} The periods of the 1-motive $M=[u:{\ZZ}^r \to G]$, where $G$ is an extension of $\Pi^n_{j=1} \cE_j $ by $\GG_m^s$, are then \[1, \omega_{j1},\omega_{j2},\eta_{j1}, \eta_{j2}, p_{jk},\zeta_j(p_{jk}), \eta_{j1} q_{ji} - \omega_{j1} \zeta_j(q_{ji}),\eta_{j2} q_{ji} - \omega_{j2} \zeta_j(q_{ji}) , \log f_{q_{ji}}(p_{jk}) + l_{jik}, 2i \pi \] with $e^{l_{jik}} \in K^*,$ for $j=1, \ldots, n$, $k=1, \ldots, r$ and $i=1, \ldots, s.$ \end{proposition} \begin{proof} By Lemma \ref{lem:decomposition}, the 1-motives $M=[u:\ZZ^r \rightarrow G]$ and $\oplus_{i=1}^s \oplus_{k=1}^r \oplus_{j=1}^n [u_{jik}:z_k \ZZ \rightarrow G_{ji}]$ have the same periods and therefore we are reduced to the case $r=n=s=1$. Consider the 1-motive $M=[u: z\ZZ \to G],$ where $G$ is an extension of an elliptic curve $\cE$ by $\GG_m$ parametrized by $v^*(t)=-Q \in \cE(K)$, and $u(z)=R$ is a point of $G(K)$ living over $v(z)=P \in \cE(K).$ Let $(l,p) \in \CC^2$ be a semi-abelian logarithm of $R,$ that is \[\exp_G (l,p) = R. \] Let $P_2$ and $P_1$ be two $K$-rational points whose difference is $P$. Because of the weight filtration of $M$, we have the non-split short exact sequence \[ 0 \longrightarrow \HH_{\dR}^1(\cE) \longrightarrow \HH_{\dR}^1(G) \longrightarrow \HH_{\dR}^1(\GG_m) \longrightarrow 0 \] As $K$-basis of $\HH_{\dR}^1(G)$ we choose the differentials of the first kind $\omega $ and of the second kind $\eta$ of $\cE,$ and the differential of the third kind $\xi_Q$, which lifts the only element $\frac{dt}{t}$ of the basis of $\HH_{\dR}^1(\GG_m)$. Because of the decomposition (\ref{eq:DRham}), we complete the basis of $\HH_{\dR}(M)$ with the differential $df$ of a rational function $f$ on $\cE$ such that $f(P_2)$ differs from $f(P_1)$ by 1.
\par\noindent Again because of the weight filtration of $M$, we have the non-split short exact sequence \[ 0 \longrightarrow \HH_1(\GG_m,\ZZ) \longrightarrow \HH_1(G_\CC,\ZZ) \longrightarrow \HH_1(\cE_\CC,\ZZ) \longrightarrow 0 \] As $\QQ$-basis of $ \HH_1(G_\CC,\QQ)$ we choose two closed paths $\tilde{\gamma}_1, \tilde{\gamma}_2$ which lift the basis $\{\gamma_1, \gamma_2\}$ of $\HH_1(\cE_\CC,\QQ)$ and a closed path $\delta_{Q}$ around $-Q$. Because of the non-split exact sequence (\ref{eq:Hodge}), we complete the basis of $\T_{\QQ}(M)$ with a closed path $\beta_R$, which lifts the only element $z$ of the basis of $\T_{\QQ}(z \ZZ) = \ZZ \otimes \QQ$ via the surjection $ \T_{\QQ} ( M_{ \CC}) \rightarrow \T_{\QQ} (z \ZZ) $, and whose restriction to $\HH_1(G_{ \CC},\QQ)$ is a closed path $\beta_{R|G}$ on $G(\CC)$ having the following properties: $\beta_{R|G}$ lifts a path $\beta_{P_1P_2}$ on $\cE(\CC) $ from $P_1$ to $P_2$, and its restriction to $\HH_1(\GG_m,\QQ)$ is a path $\beta_l$ on $\GG_m(\CC)=\CC^* $ from $1$ to $l.$ With respect to these bases of $\T_{\QQ}(M)$ and $\HH_{\dR}(M)$, the matrix which represents the isomorphism (\ref{eq:betaM}) for the 1-motive $M=[u: z \ZZ \to G]$ is \begin{equation}\label{eq:matrix-integrales} \left( {\begin{array}{cccc} \int_{\beta_R} df &\int_{\beta_{P_1P_2}} \omega & \int_{\beta_{P_1P_2}} \eta &\int_{\beta_{R|G}} \xi_Q \\ \int_{\tilde{\gamma}_1}df &\int_{\gamma_1} \omega & \int_{\gamma_1} \eta &\int_{\tilde{\gamma}_1} \xi_Q \\ \int_{\tilde{\gamma}_2}df &\int_{\gamma_2} \omega & \int_{\gamma_2} \eta &\int_{\tilde{\gamma}_2} \xi_Q \\ \int_{\delta_{Q}}df &\int_{\delta_{Q}}\omega & \int_{\delta_{Q}}\eta & \int_{\delta_{Q}} \xi_Q \\ \end{array} } \right) \end{equation} Recalling that $\exp_{\cE}^{*}(\omega) = dz, \exp_{\cE}^{*}(\eta) = d \zeta(z)$, (\ref{eq:expEXiq}) and (\ref{eq:expGXiq}) we can now compute explicitly all these integrals: \begin{itemize} \item $\int_{\beta_R} df= f(P_2)-f(P_1)=1,$ \item
$\int_{\tilde{\gamma}_1}df=\int_{\tilde{\gamma}_2}df = \int_{\delta_{Q}}df =0$ because of the decomposition (\ref{eq:DRham}), \item $\int_{\beta_{P_1P_2}} \omega= \int_{p_1}^{p_2} dz= p_2 - p_1 =p,$ \item $\int_{\gamma_i} \omega= \int_{0}^{\omega_i} dz= \omega_i$ for $i=1,2,$ \item $\int_{\delta_{Q}}\omega = \int_{\delta_{Q}}\eta= 0$ since the image of $\delta_{Q}$ via $\HH_1(G_\CC,\QQ) \to \HH_1(\cE_\CC,\QQ)$ is zero, \item $\int_{\gamma_i} \eta = \int_{0}^{\omega_i} d \zeta= \zeta(\omega_i) - \zeta(0) =\eta_i$ for $i=1,2,$ \item $\int_{\beta_{P_1P_2}} \eta= \int_{p_1}^{p_2} d \zeta(z) = \zeta(p_2) - \zeta(p_1). $ \end{itemize} By the pseudo-addition formula for the Weierstrass $\zeta$-function (see \cite[Example 2, p 451]{WW}), $\zeta(z+y) - \zeta(z)- \zeta(y) = \frac{1}{2} \frac{\wp'(z)-\wp'(y)}{\wp(z)-\wp(y)} \in K(\cE)$, and so there exists a rational function $g$ on $\cE$ such that $g(p_2)-g(p_1)= - \zeta(p + p_1) + \zeta(p)+ \zeta(p_1).$ Since the differential of the second kind $\eta$ lives in the quotient space $\HH^1(\cE, \mathcal{O}_\cE),$ we can add to the class of $\eta $ the exact differential $dg$, getting \begin{itemize} \item $\int_{\beta_{P_1P_2}}( \eta + dg) = \int_{p_1}^{p_2}( d \zeta(z) + dg) = \zeta(p_2) - \zeta(p_1) +g(p_2) -g(p_1) = \zeta(p),$ \item $\int_{\beta_{R|G}} \xi_Q= \int_0^l dw +\int_{p_1}^{p_2} \frac{f_q'(z)}{f_q(z)} dz = l+ \int_{p_1}^{p_2} d \log f_q(z) = l + \log \frac{f_q(p_2)}{f_q(p_1)} .$ \end{itemize} Since by \cite[20-53]{WW} the quotient of $\sigma$-functions is a rational function on $\cE$, from the equality (\ref{eq:fq-sigma}) there exists a rational function $g_q(z)$ on $\cE$ such that $ \frac{g_q(p_2)}{g_q(p_1)} = (\frac{f_q(p+p_1)}{f_q(p)f_q(p_1)})^{-1}$, and therefore we get \begin{itemize} \item $\int_{\beta_{R|G}} ( \xi_Q + d \log g_q(z)) = \int_0^l dw+ \int_{p_1}^{p_2}( d \log f_q(z) + d \log g_q(z)) = l+ \log \big( \frac{f_q(p_2)}{f_q(p_1)} \frac{g_q(p_2)}{g_q(p_1)} \big) = \\ l+ \log \big( \frac{f_q(p_2)}{f_q(p_1)}
\frac{f_q(p)f_q(p_1)}{f_q(p_2)} \big) = l+ \log( f_q(p) ),$ with $e^l \in K^*$, \item $\int_{\tilde{\gamma}_i} \xi_Q = \int_{0}^{\omega_i} \frac{f_q'(z)}{f_q(z)} dz = \int_{0}^{\omega_i} d \log f_q(z) = \log \frac{f_q(\omega_i)}{f_q(0)} =\eta_i q - \omega_i \zeta(q) $ by (\ref{eq:periods-fq}) for $i=1,2,$ \item $\int_{\delta_{Q}} \xi_Q = 2i \pi \mathrm{Res}_{-Q} \xi_Q = 2 i \pi.$ \end{itemize} The addition of the differential $d \log g_q(z)$ to the differential of the third kind $\xi_Q$ will modify the last two integrals by an integral multiple of $2 i \pi$ (see \cite[Thm 10-7]{S}) and this is irrelevant for the computation of the field generated by the periods of $M.$ Explicitly the matrix (\ref{eq:matrix-integrales}) becomes \begin{equation}\label{eq:matrix-periods} \left( {\begin{array}{cccc} 1 &p & \zeta(p) &\log f_q(p) + l \\ 0 & \omega_1 & \eta_1 &\eta_1 q - \omega_1 \zeta(q) \\ 0 & \omega_2 & \eta_2 &\eta_2 q - \omega_2 \zeta(q) \\ 0 &0 & 0 & 2 i \pi\\ \end{array} } \right), \end{equation} with $e^l \in K^*,$ and so the periods of the 1-motive $M=[u:z \ZZ \to G], u(z)=R,$ are $1, \omega_1,\omega_2, \eta_1, \eta_2, p, \zeta(p) , \log f_q(p) + l , \eta_1 q - \omega_1 \zeta(q), \eta_2 q - \omega_2 \zeta(q), 2 i \pi.$ \end{proof} \begin{remark} The determinations of the complex and elliptic logarithms, which appear in the first line of the matrix (\ref{eq:matrix-periods}), are not well-defined since they depend on the lifting $\beta_{P_1P_2}$ of the basis of $\T_{\QQ}(z\ZZ)$ (recall that the short exact sequence (\ref{eq:Hodge}) is not split). Nevertheless, the field $K (\mathrm{periods}(M))$, which is involved in the Generalized Grothendieck's Period Conjecture, is totally independent of these choices since it contains $2i \pi$, the periods of the Weierstrass $\wp$-function, the quasi-periods of the Weierstrass $\zeta$-function, and finally the quasi-quasi-periods of Serre's function $f_q(z)$ (\ref{eq:def-fq}).
\end{remark} We finish this section with an example: Consider the 1-motive $M=[u:{\ZZ}^2 \to G]$, where $G$ is an extension of $\cE_1 \times \cE_2 $ by $\GG_m^3$ parametrized by the $K$-rational points $Q_1=(Q_{11},Q_{21}), Q_2=(Q_{12},Q_{22}), Q_3=(Q_{13},Q_{23})$ of $\cE_1^* \times \cE_2^* $, and the morphism $u$ corresponds to two $K$-rational points $R_1,R_2$ of $G$ living over two points $P_1=(P_{11},P_{21}), P_2=(P_{12},P_{22})$ of $\cE_1 \times \cE_2. $ The most compact way to write down the matrix which represents the isomorphism (\ref{eq:betaM}) for our 1-motive $M=[u:{\ZZ}^2 \to G]$ is to consider the 1-motive \[ M'= M/ [0 \longrightarrow \cE_1] \oplus M/ [0 \longrightarrow \cE_2], \] that is, with the above notation, $M'=[u_1:{\ZZ}^2 \to \Pi_{i=1}^3 G_{1i} ] \oplus [u_2:{\ZZ}^2 \to \Pi_{i=1}^3 G_{2i} ] $ with $u_1$ corresponding to two $K$-rational points $(R_{111},R_{121},R_{131} )$ and $(R_{112},R_{122},R_{132})$ of $\Pi_{i=1}^3 G_{1i}$ living over $P_{11}$ and $P_{12}$, and $u_2$ corresponding to two $K$-rational points $(R_{211},R_{221},R_{231} )$ and $(R_{212},R_{222},R_{232})$ of $\Pi_{i=1}^3 G_{2i}$ living over $P_{21}$ and $P_{22}$. The 1-motives $M$ and $M'$ generate the same tannakian category: in fact, it is clear that $ <M'>^\otimes \; \; \subset \; \; < M>^\otimes $ and on the other hand $M= M' / [ \ZZ^2 / \mathrm{d}_\ZZ(\ZZ) \to 0]$.
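As a quick dimension count (ours, with the notation of this example): here $r=2$, $n=2$ and $s=3$, so the matrix representing the isomorphism (\ref{eq:betaM}) for $M'$ is a square matrix of size \[ rn+2n+s = 2 \cdot 2 + 2 \cdot 2 + 3 = 11, \] its three groups of rows corresponding to the weight $0$, $-1$ and $-2$ pieces of $M'$.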
The matrix representing the isomorphism (\ref{eq:betaM}) for the 1-motive $M'$ with respect to the $K$-bases chosen in the above Proposition is $$ \left(\begin{matrix} & &\scriptstyle{ p_{11} } &\scriptstyle{ \zeta_1(p_{11})} & & 0 & 0 & \scriptstyle{\log f_{q_{11}}(p_{11})+l_{111}}& \scriptstyle{\log f_{q_{12}}(p_{11})+l_{121}} & \scriptstyle{\log f_{q_{13}}(p_{11})+l_{131}} \cr \scriptstyle{ {\rm Id}_{4 \times 4} } & & \scriptstyle{ p_{12} } & \scriptstyle{ \zeta_1(p_{12})} & & 0 &0 & \scriptstyle{\log f_{q_{11}}(p_{12})+l_{112}} & \scriptstyle{\log f_{q_{12}}(p_{12})+l_{122}} &\scriptstyle{\log f_{q_{13}}(p_{12})+l_{132}} \cr & &0 &0 & &\scriptstyle{ p_{21} } & \scriptstyle{ \zeta_2(p_{21})} & \scriptstyle{\log f_{q_{21}}(p_{21})+l_{211}}& \scriptstyle{\log f_{q_{22}}(p_{21})+l_{221}} & \scriptstyle{\log f_{q_{23}}(p_{21})+l_{231}} \cr & & 0 & 0 & & \scriptstyle{ p_{22} } &\scriptstyle{ \zeta_2(p_{22})} & \scriptstyle{\log f_{q_{21}}(p_{22})+l_{212}} & \scriptstyle{\log f_{q_{22}}(p_{22})+l_{222}} &\scriptstyle{\log f_{q_{23}}(p_{22})+l_{232}} \cr & & \scriptstyle{ { \omega}_{11}} &\scriptstyle{ { \eta}_{11}}& & & & \scriptstyle{ { \eta}_{11} q_{11}-{ \omega}_{11} \zeta_1(q_{11}) }& \scriptstyle{ { \eta}_{11} q_{12}-{ \omega}_{11} \zeta_1(q_{12}) }& \scriptstyle{ { \eta}_{11} q_{13}-{ \omega}_{11} \zeta_1(q_{13}) }\cr & & \scriptstyle{ { \omega}_{12}} &\scriptstyle{ { \eta}_{12}}& & & & \scriptstyle{ { \eta}_{12} q_{11}-{ \omega}_{12} \zeta_1(q_{11})}& \scriptstyle{ { \eta}_{12} q_{12}-{ \omega}_{12} \zeta_1(q_{12})}& \scriptstyle{ { \eta}_{12} q_{13}-{ \omega}_{12} \zeta_1(q_{13})} \cr & & & & &\scriptstyle{ { \omega}_{21}} & \scriptstyle{ { \eta}_{21} } & \scriptstyle{ { \eta}_{21} q_{21}-{ \omega}_{21} \zeta_2(q_{21})}& \scriptstyle{ { \eta}_{21} q_{22}-{ \omega}_{21} \zeta_2(q_{22})}& \scriptstyle{ { \eta}_{21} q_{23}-{ \omega}_{21} \zeta_2(q_{23})} \cr & & & & &\scriptstyle{ { \omega}_{22}} & \scriptstyle{ { \eta}_{22}}& \scriptstyle{ { \eta}_{22} q_{21}-{ \omega}_{22} \zeta_2(q_{21})} & \scriptstyle{ { \eta}_{22} q_{22}-{ \omega}_{22} \zeta_2(q_{22})} & \scriptstyle{ { \eta}_{22} q_{23}-{ \omega}_{22} \zeta_2(q_{23})} \cr & & & & & & & & \scriptstyle{ 2i \pi {\rm Id}_{3 \times 3} }& \end{matrix} \right). $$ In general, for a 1-motive of the kind $M=[u:\ZZ^r \rightarrow G]$, where $G$ is an extension of a finite product $\Pi^n_{j=1} \cE_j $ of elliptic curves by the torus $\GG_m^s$, we will consider the 1-motive \[ M'= \oplus_{j=1}^n \big( M/ [0 \longrightarrow \Pi_{1 \leq l \leq n \atop l \not =j }\cE_l] \big) \] whose matrix representing the isomorphism (\ref{eq:betaM}) with respect to the $K$-bases chosen in the above Proposition is $$ \left(\begin{matrix} A&B&C \cr 0 & D&E \cr 0 &0 & F \end{matrix} \right) $$ with $A= {\rm Id}_{rn \times rn}$, $B$ the $rn \times 2n$ matrix involving the periods coming from the morphism $v: \ZZ^r \to \Pi^n_{j=1} \cE_j $, $C$ the $rn \times s$ matrix involving the periods coming from the trivialization $\Psi$ of the pull-back via $(v,v^*)$ of the Poincar\'e biextension $\mathcal{P}$ of $(\Pi^n_{j=1}\cE_j, \Pi^n_{j=1}\cE_j^*)$ by $\GG_m$, $D$ the $2n \times 2n$ matrix having on the diagonal the period matrices of the elliptic curves $\cE_j$, $E$ the $2n \times s$ matrix involving the periods coming from the morphism $v^*: \ZZ^s \to \Pi^n_{j=1} \cE_j^* $, and finally $ F = 2i \pi{\rm Id}_{s \times s}$ the period matrix of $\GG_m^s$. \section{Dimension of the unipotent radical of the motivic Galois group of a 1-motive}\label{motivicGaloisgroup} Denote by $\mathcal{MM}_{\leq 1}(K)$ the category of 1-motives defined over $K$.
Using Nori's and Ayoub's works (see \cite{Ay14} and \cite{N00}), it is possible to endow the category of 1-motives with a \textit{tannakian structure with rational coefficients} (roughly speaking a tannakian category $\mathcal{T}$ with rational coefficients is an abelian category with a functor $\otimes :\mathcal{T} \times \mathcal{T} \to \mathcal{T}$ defining the tensor product of two objects of $\mathcal{T} $, and with a fibre functor over $\mathrm{Spec}(\QQ)$ - see \cite[2.1, 1.9, 2.8]{D90} for details). We work in a completely geometrical setting using algebraic geometry on tannakian categories and defining as one goes along the objects, the morphisms and the tensor products that we will need (essentially we tensor motives with pure motives of weight 0, and as morphisms we use projections and biextensions). The unit object of the tannakian category $\mathcal{MM}_{\leq 1}(K)$ is the 1-motive $\ZZ(0)= [ \ZZ \to 0]$. In this section we use the notation $Y(1)$ for the torus whose cocharacter group is $Y$. In particular $\ZZ(1)= [ 0 \to \GG_m]$. If $M$ is a 1-motive, we denote by $M^\vee \cong \uHom (M, \ZZ(0))$ its dual and by $ev_M : M \otimes M^\vee \to \ZZ(0), \delta_M: \ZZ(0) \to M^\vee \otimes M$ the arrows of $\mathcal{MM}_{\leq 1}(K)$ characterizing this dual. The Cartier dual of $M$ is $M^*= M^\vee \otimes \ZZ(1)$. If $M_1,M_2,M_3$ are 1-motives, we set \begin{equation}\label{eq:BiextHom} \Hom_{\mathcal{MM}_{\leq 1}(K)}(M_1 \otimes M_2, M_3):= \mathrm{Biext}^1 (M_1,M_2; M_3) \end{equation} where $\mathrm{Biext}^1 (M_1,M_2;M_3)$ is the abelian group of isomorphism classes of biextensions of $(M_1,M_2)$ by $M_3$.
In particular the isomorphism class of the Poincar\'e biextension $\mathcal{P}$ of $(A,A^*)$ by $\GG_m$ is the Weil pairing $P_\mathcal{P} : A \otimes A^* \to \ZZ(1)$ of $A.$ The tannakian sub-category $<M>^\otimes$ generated by the 1-motive $M$ is the full sub-category of $\mathcal{MM}_{\leq 1}(K)$ whose objects are sub-quotients of direct sums of $M^{\otimes \; n} \otimes M^{\vee \; \otimes \; m}$, and whose fibre functor is the restriction of the fibre functor of $\mathcal{MM}_{\leq 1}(K)$ to $<M>^\otimes$. Because of the tensor product of $<M>^\otimes$, we have the notion of commutative Hopf algebra in the category $\Ind <M>^\otimes$ of Ind-objects of $<M>^\otimes$, and this allows us to define the category of affine $<M>^\otimes$-group schemes, just called \textit{motivic affine group schemes}, as the opposite of the category of commutative Hopf algebras in $\Ind <M>^\otimes.$ The Lie algebra of a motivic affine group scheme is a pro-object $\rm L$ of $\langle M \rangle^\otimes$ endowed with a Lie algebra structure, i.e. $\rm L$ is endowed with an anti-symmetric map $[\, , \,]: {\rm L} \otimes {\rm L} \to {\rm L}$ satisfying the Jacobi identity. The \textit{motivic Galois group} $\Galmot (M)$ of $M$ is the fundamental group of the tannakian category $< M >^\otimes$ generated by $M$, i.e. the motivic affine group scheme ${\rm Sp}( \Lambda),$ where $ \Lambda$ is the commutative Hopf algebra of $<M>^\otimes$ which is universal for the following property: for any object $X$ of $<M>^\otimes,$ there exists a morphism \begin{equation}\label{eq:lambdaX} \lambda_X: X^{\vee} \otimes X \longrightarrow \Lambda \end{equation} functorial in $X$, i.e.
such that for any morphism $f: X \to Y$ in $<M>^\otimes$ the diagram \[ \begin{matrix} Y^{\vee} \otimes X&{\buildrel f^t \otimes 1 \over \longrightarrow}& X^{\vee} \otimes X \cr {\scriptstyle 1 \otimes f}\downarrow \quad \quad & & \quad \quad \downarrow {\scriptstyle \lambda_X}\cr Y^{\vee} \otimes Y & {\buildrel \lambda_Y \over \longrightarrow} & \Lambda \end{matrix} \] is commutative. The universal property of $\Lambda$ is that for any object $U$ of $<M>^\otimes$, the map \begin{align} \nonumber {\Hom}(\Lambda, U) & \longrightarrow \big\{ u_X: X^{\vee} \otimes X \to U, ~~ {\rm {functorial~ in~}} X \big\} \\ \nonumber f & \longmapsto f \circ \lambda_X \end{align} is bijective. The morphisms (\ref{eq:lambdaX}), which can be rewritten as $ X \to X \otimes \Lambda$, define the action of the motivic Galois group $\Galmot (M)$ on each object $X$ of $<M>^\otimes$. If $\omega_\QQ$ is the Hodge realization fibre functor of the tannakian category $<M>^\otimes$, $\omega_\QQ (\Lambda)$ is the Hopf algebra whose spectrum ${\rm Spec} (\omega_\QQ (\Lambda))$ is the $\QQ$-group scheme $ {\underline {\rm Aut}}^{\otimes}_\QQ(\omega_\QQ)$, i.e. the Mumford-Tate group $\mathrm{MT}(M)$ of $M$. In other words, the motivic Galois group of $M$ is \textit{the geometric interpretation} of the Mumford-Tate group of $M$. By \cite[Thm 1.2.1]{A19} these two group schemes coincide, and in particular they have the same dimension \begin{equation}\label{dimGalMT} \dim \Galmot (M) = \dim \mathrm{MT}(M). \end{equation} Let $M=[u:X \to G]$ be a 1-motive defined over $K$, with $G$ an extension of an abelian variety $A$ by a torus $T$.
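To illustrate the equality (\ref{dimGalMT}) in the simplest non-trivial case (these are standard facts on Mumford-Tate groups of elliptic curves, recalled without proof): for an elliptic curve $\cE$ without complex multiplication one has \[ \mathrm{MT}(\cE)= \mathrm{GL}_2, \qquad \dim \Galmot (\cE) = \dim \mathrm{MT}(\cE)=4, \] while for $\cE$ with complex multiplication by an imaginary quadratic field $F$ the group $\mathrm{MT}(\cE)$ is the torus $\mathrm{Res}_{F/\QQ} \GG_m$, so that $\dim \Galmot (\cE) =2$. Let us now return to the 1-motive $M=[u:X \to G]$ above.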
The weight filtration $\W_{\bullet}$ of $M$ induces a filtration on its motivic Galois group $\Galmot (M)$ (\cite[Chp IV \S 2]{S72}): \vskip 0.3 true cm \par\noindent $ \W_{0}(\Galmot(M))=\Galmot(M) $ \vskip 0.3 true cm \par\noindent $ \W_{-1}(\Galmot(M))= \big\{ g \in \Galmot(M) \, \, \vert \, \, (g - id)M \subseteq \W_{-1}(M) ,(g - id) \W_{-1}(M) \subseteq \W_{-2}(M),$ \par\noindent $ (g - id) \W_{-2}(M)=0 \big\} , $ \vskip 0.3 true cm \par\noindent $ \W_{-2}(\Galmot(M))=\big\{ g \in \Galmot(M) \, \, \vert \, \, (g - id) M \subseteq \W_{-2}(M), (g - id) \W_{-1}(M) =0 \big\}, $ \vskip 0.3 true cm \par\noindent $ \W_{-3}(\Galmot(M))=0.$ \vskip 0.3 true cm \par\noindent Clearly $ \W_{-1}(\Galmot(M))$ is unipotent. Denote by $\UR (M)$ the unipotent radical of $\Galmot(M)$. Consider the graded 1-motive \[\widetilde{M}= \Gr_*^\W (M) = X+A+T \] associated to $M$ and let $<\widetilde{M}>^\otimes$ be the tannakian sub-category of $<M>^\otimes$ generated by $\widetilde{M}$. The functor ``take the graded'' $\Gr_*^\W : <M>^\otimes \twoheadrightarrow <\widetilde{M}>^\otimes$, which is a projection, induces the inclusion of motivic affine group schemes \begin{equation} \label{eq:Gr_0} \Galmot(\widetilde{M}) \hookrightarrow \Gr_*^\W \Galmot(M). \end{equation} \begin{lemma}\label{eq:dimGr0} Let $M=[u:X \to G]$ be a 1-motive defined over $K$, with $G$ an extension of an abelian variety $A$ by a torus $T$. The quotient ${\Gr}_{0}^{\W}(\Galmot(M))$ is reductive and the inclusion of motivic group schemes (\ref{eq:Gr_0}) identifies $\Galmot(\widetilde{M}) $ with this quotient. Moreover, if $X= \ZZ^r$ and $T= \GG_m^s$ \[ \dim {\Gr}_{0}^{\W}\big(\Galmot(M)\big)= \dim\Galmot (\widetilde{M}) = \begin{cases} \dim \Galmot (A) & \mbox{if } A \not= 0,\\ 1 & \mbox{if } A=0, T \not=0 ,\\ 0 & \mbox{if } A=T=0.
\end{cases} \] \end{lemma} \begin{proof} By a motivic analogue of \cite[\S 2.2]{By83}, ${\Gr}_{0}^{\W}(\Galmot(M))$ acts via $\gal$ on ${\Gr}_{0}^{\W}(M)$, by homotheties on ${\Gr}_{-2}^{\W}(M)$, and its image in the group of automorphisms of ${\Gr}_{-1}^{\W}(M)$ is the motivic Galois group $\Galmot(A)$ of the abelian variety $A $ underlying $M$. Therefore ${\Gr}_{0}^{\W}(\Galmot(M))$ is reductive, and via the inclusion (\ref{eq:Gr_0}) it coincides with $\Galmot(\widetilde{M}) .$ To conclude, observe that $\Galmot(\GG_m)= \GG_m$, which has dimension 1, and $\Galmot(\ZZ)= \mathrm{Sp}(\ZZ(0))$, which has dimension 0. \end{proof} The inclusion $<\widetilde{M}>^\otimes \hookrightarrow <M>^\otimes $ of tannakian categories induces the following surjection of motivic affine group schemes \begin{equation} \label{eq:RestrictionGr_0} \Galmot (M) \twoheadrightarrow \Galmot (\widetilde{M}) \end{equation} which is the restriction $g \mapsto g_{|\widetilde{M} }.$ As an immediate consequence of the above Lemma we have \begin{corollary}\label{eq:DecomDim} Let $M=[u:X \to G]$ be a 1-motive defined over $K$. Then \[\W_{-1}(\Galmot(M))=\ker \big[\Galmot (M) \twoheadrightarrow \Galmot (\widetilde{M}) \big]. \] In particular, $\W_{-1}(\Galmot(M))$ is the unipotent radical $ \UR (M)$ of $\Galmot(M)$ and \[ \dim \Galmot (M) = \dim \Galmot (\widetilde{M}) + \dim \UR (M). \] \end{corollary} Observe that we can prove the equality $ \W_{-1}(\Galmot(M))=\ker \big[\Galmot (M) \twoheadrightarrow \Galmot (\widetilde{M}) \big]$ directly using the definition of the weight filtration: \[ \begin{aligned} g \in \W_{-1}(\Galmot(M)) & \Longleftrightarrow (g - id) {\Gr}_{0}^{\W}(M) =0, (g - id) {\Gr}_{-1}^{\W}(M) =0,(g - id) {\Gr}_{-2}^{\W}(M) =0 \\ & \Longleftrightarrow g_{| {\Gr}_{*}^{\W}(M) } = \id, \; \; \mathrm{i.e.} \; \; g= \id \;\; \mathrm{in}\; \; \Galmot(\widetilde{M}).
\end{aligned}\] The inclusion $<M + M^\vee /\W_{-2} (M + M^\vee) >^\otimes \hookrightarrow <M>^\otimes $ of tannakian categories induces the following surjection of motivic affine group schemes \begin{equation} \label{eq:Gr_1} \Galmot (M) \twoheadrightarrow \Galmot \big(M + M^\vee /\W_{-2} (M + M^\vee)\big) \end{equation} which is the restriction $g \mapsto g_{|M + M^\vee /\W_{-2} (M + M^\vee) }.$ \begin{lemma}\label{eq:DecomRU} Let $M=[u:X \to G]$ be a 1-motive defined over $K$. Then \[\W_{-2}(\Galmot(M))=\ker \big[\Galmot (M) \twoheadrightarrow \Galmot (M + M^\vee /\W_{-2} (M + M^\vee)) \big]. \] In particular, the quotient ${\Gr}_{-1}^{\W}(\Galmot(M))$ of the unipotent radical $ \UR (M)$ is the unipotent radical $\W_{-1} \big( \Galmot (M + M^\vee /\W_{-2} (M + M^\vee))\big)$ of $\Galmot\big( M + M^\vee /\W_{-2} (M + M^\vee)\big)$. \end{lemma} \begin{proof} Using the definition of the weight filtration, we have: \[ \begin{aligned} g \in \W_{-2}(\Galmot(M)) & \Longleftrightarrow (g - id) M/\W_{-2}(M) =0,\; (g - id)\W_{-1}(M) =0 \\ & \Longleftrightarrow g_{| M/\W_{-2}(M) } = \id, \; g_{| M^\vee/\W_{-2}(M^\vee) } = \id \\ & \Longleftrightarrow g= \id \; \; \mathrm{in}\;\; \Galmot (M + M^\vee /\W_{-2} (M + M^\vee)). \end{aligned}\] Since the surjection of motivic affine group schemes (\ref{eq:Gr_1}) respects the weight filtration, $\W_{-2}(\Galmot(M))$ is in fact the kernel of $\W_{-1} (\Galmot (M)) \twoheadrightarrow \W_{-1} (\Galmot (M + M^\vee /\W_{-2} (M + M^\vee))) .$ Hence we get the second statement. \end{proof} \par\noindent From the definition of weight filtration, we observe that \[ \W_{-2}(\Galmot(M)) \subseteq \uHom (X,Y(1)) \cong X^\vee \otimes Y (1). \] By the above Lemma, we have that \[{\Gr}_{-1}^{\W}(\Galmot(M)) \subseteq \uHom (X+ Y^\vee,A+A^*) \cong X^\vee \otimes A+Y \otimes A^*. \] In order to compute the dimension of the unipotent radical $\UR (M) $ of $\Galmot (M )$ we use notations of \cite[\S 3]{B03} that we recall briefly. 
Let $(X,Y^\vee, A,A^*, v:X \to A, v^*:Y^\vee \to A^*, \psi:X \otimes Y^\vee \to (v \times v^*)^* \mathcal{P})$ be the 7-tuple defining the 1-motive $M=[u:X \to G]$ over $K$, where $G$ is an extension of $A$ by the torus $Y(1)$. Let \[E=\W_{-1}( {\underline {\End}}(\widetilde{M})).\] It is the direct sum of the pure motives $E_{-1}= X^\vee \otimes A + A^\vee \otimes Y(1)$ and $E_{-2}= X^\vee \otimes Y(1)$ of weights $-1$ and $-2$. As observed in \cite[\S 3]{B03}, the composition of endomorphisms endows $E$ with a ring structure given by the arrow $P: E \otimes E \to E$ of $\langle M \rangle^\otimes$ whose only non trivial component is $$E_{-1} \otimes E_{-1} \longrightarrow (X^\vee \otimes A) \otimes (A^* \otimes Y) \longrightarrow {\ZZ}(1) \otimes X^\vee \otimes Y = E_{-2},$$ where the first arrow is the projection from $ E_{-1} \otimes E_{-1}$ to $(X^\vee \otimes A) \otimes (A^* \otimes Y) $ and the second arrow is the Weil pairing $P_{\mathcal{P}}: A \otimes A^* \to \ZZ(1)$ of $A.$ Because of the definition (\ref{eq:BiextHom}) the product $P:E_{-1} \otimes E_{-1} \to E_{-2}$ defines a biextension $\mathcal{B}$ of $(E_{-1},E_{-1})$ by $E_{-2} $, whose pull-back $ d^* \mathcal{B}$ via the diagonal morphism $d: E_{-1} \to E_{-1} \times E_{-1}$ is a $\Sigma - X^\vee \otimes Y (1)$-torsor over $E_{-1}$. By \cite[Lem 3.3]{B03} this $\Sigma - X^\vee \otimes Y (1)$-torsor $ d^* \mathcal{B}$ induces a Lie bracket $[\, ,\,]: E \otimes E \to E$ on $E$ which becomes therefore a Lie algebra.
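Note that, as follows at once from the description above (the only non-trivial component of the product $P$ is $E_{-1} \otimes E_{-1} \to E_{-2}$, while $E_{-2}$ multiplies trivially with everything), the Lie algebra $E$ is two-step nilpotent: \[ [E,E] \subseteq E_{-2} \qquad \mathrm{and} \qquad \big[ [E,E],E \big] \subseteq [E_{-2},E]=0, \] i.e. $E$ is a Lie algebra of Heisenberg type, with centre containing $E_{-2}$.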
The action of $E=W_{-1}( {\underline {\End}}(\widetilde{M}))$ on $\widetilde{M}$ is given by the arrow $ E \otimes \widetilde{M} \to \widetilde{M}$ of $\langle M \rangle^\otimes$ whose only non trivial components are \begin{align} \label{eq:alpha1} \alpha_1:& (X^\vee \otimes A) \otimes X \longrightarrow A \\ \nonumber \alpha_2:& (A^* \otimes Y) \otimes A \longrightarrow Y(1) \\ \nonumber \gamma :& (X^\vee \otimes Y(1)) \otimes X \longrightarrow Y(1), \end{align} where the first and the last arrows are induced by $ev_{X^\vee}: X^\vee \otimes X \to \ZZ(0)$, while the second one is $\rk (Y)$-copies of the Weil pairing $P_{\mathcal{P}}: A \otimes A^* \to \ZZ(1)$ of $A$. By \cite[Lem 3.3]{B03}, via the arrow $(\alpha_1, \alpha_2, \gamma ): E \otimes \widetilde{M} \to \widetilde{M}$, the 1-motive $\widetilde{M}$ is in fact a $(E,[,])$-Lie module. As observed in \cite[Rem 3.4 (3)]{B03} $E$ acts also on the Cartier dual $\widetilde{M}^*= Y^\vee + A^* + X^\vee(1)$ of $\widetilde{M}$ and this action is given by the arrows \begin{align} \label{eq:alpha2*} \alpha_2^*:& (A^* \otimes Y) \otimes Y^\vee \longrightarrow A^* \\ \nonumber\alpha_1^*:& (X^\vee \otimes A) \otimes A^* \longrightarrow X^\vee(1) \\ \nonumber \gamma^* :& (X^\vee \otimes Y(1)) \otimes Y^\vee \longrightarrow X^\vee(1), \end{align} where $\alpha_2^*$ and $\gamma^*$ are projections, while $\alpha_1^*$ is $\rk (X^\vee)$-copies of the Weil pairing $P_{\mathcal{P}}: A \otimes A^* \to \ZZ(1)$ of $A$. Via the arrows $\delta_{ X^\vee}: \ZZ(0) \to X \otimes X^\vee$ and $\delta_{ Y}: \ZZ(0) \to Y^\vee \otimes Y$, to have the morphisms $v: X \to A$ and $v^*: Y^\vee \to A^*$ underlying the 1-motive $M$ is the same as having the morphisms $V: \ZZ(0) \to A \otimes X^\vee $ and $V^*: \ZZ(0) \to A^* \otimes Y,$ i.e. a point \[b=(b_1,b_2) \in E_{-1}(K)= A \otimes X^\vee(K)+A^* \otimes Y(K). \] Fix now an element $(x,y^\vee)$ in the character group $X \otimes Y^\vee$ of the torus $X^\vee \otimes Y(1)$.
By construction of the point $b$, there exists an element $(s,t) \in X \otimes Y^\vee (K)$ such that \begin{align} \nonumber v(x) &= \alpha_1(b_1,s) \in A(K) \\ \nonumber v^*(y^\vee)& =\alpha_2^*(b_2,t)\in A^*(K). \end{align} Let $i^*_{x,y^\vee} d^*\mathcal{B}$ be the pull-back of $d^* \mathcal{B}$ via the inclusion $i_{x,y^\vee}: \{ (v(x),v^*(y^\vee) )\} \hookrightarrow E_{-1}$ into $E_{-1}$ of the abelian sub-variety generated by the point $(v(x),v^*(y^\vee))$. The push-down $(x,y^\vee)_*i^*_{x,y^\vee} d^*\mathcal{B}$ of $i^*_{x,y^\vee} d^*\mathcal{B}$ via the character $(x,y^\vee):X^\vee \otimes Y(1)\to \ZZ(1)$ is a $\Sigma-\ZZ(1)$-torsor over $ \{ (v(x),v^*(y^\vee))\}: $ \[\begin{matrix} (x,y^\vee)_*i^*_{x,y^\vee} d^*\mathcal{B}& \longleftarrow & i^*_{x,y^\vee} d^*\mathcal{B} & \longrightarrow & d^* \mathcal{B} \\ \downarrow & &\downarrow & & \downarrow \\ \{ (v(x),v^*(y^\vee)) \} & = & \{ (v(x),v^*(y^\vee)) \} & {\buildrel i_{x,y^\vee} \over \longrightarrow} & E_{-1} \end{matrix} \] To have the point $\psi(x,y^\vee)$ is equivalent to having a point $(\widetilde{b})_{x,y^\vee}$ of $(x,y^\vee)_*i^*_{x,y^\vee} d^*\mathcal{B}$ over $ (v(x),v^*(y^\vee))$, and so to have the trivialization $\psi$ is equivalent to having a point \[\widetilde{b} \in (d^*\mathcal{B})_{b}\] in the fibre of $d^*\mathcal{B}$ over $b=(b_1,b_2).$ Consider now the following pure motives: \begin{enumerate} \item Let $B$ be the \textit{smallest} abelian sub-variety (modulo isogenies) of $X^\vee \otimes A+A^* \otimes Y$ which contains the point $b=(b_1,b_2) \in X^\vee \otimes A (K) + A^* \otimes Y (K) $. The pull-back $i^*d^* \mathcal{B}$ of $d^* \mathcal{B}$ via the inclusion $i: B \hookrightarrow E_{-1}$ of $B$ in $E_{-1}$, is a $\Sigma-X^\vee \otimes Y(1)$-torsor over $B$. \item Let $Z_1$ be the \textit{smallest} $\gal$-sub-module of $X^\vee \otimes Y$ such that the torus $Z_1(1)$ contains the image of the Lie bracket $[\, ,\,]: B \otimes B \to X^\vee \otimes Y(1)$.
The push-down $p_*i^*d^* \mathcal{B}$ of the $\Sigma-X^\vee \otimes Y(1)$-torsor $i^*d^* \mathcal{B}$ via the projection $p:X^\vee \otimes Y(1) \twoheadrightarrow (X^\vee \otimes Y/ Z_1)(1)$ is the trivial $\Sigma-(X^\vee \otimes Y/ Z_1)(1)$-torsor over $B$, i.e. \[p_*i^*d^* \mathcal{B}= B \times (X^\vee \otimes Y/ Z_1)(1).\] Denote by $\pi: p_*i^*d^* \mathcal{B} \twoheadrightarrow (X^\vee \otimes Y/ Z_1)(1)$ the canonical projection. We still denote by $\widetilde{b}$ the points of $i^*d^* \mathcal{B}$ and of $p_*i^*d^* \mathcal{B}$ living over $b \in B$. \item Let $Z$ be the \textit{smallest} $\gal$-sub-module of $X^\vee \otimes Y$ containing $Z_1$ and such that the sub-torus $(Z/ Z_1)(1)$ of $(X^\vee \otimes Y/ Z_1)(1)$ contains $\pi (\widetilde{b}) $. \end{enumerate} Let $A_\CC$ be the abelian variety defined over $\CC$ obtained from $A$ by extending the scalars from $K$ to $\CC$. Denote by $g$ the dimension of $A$. Consider the abelian exponential \[ \exp_{A}: \Lie A_\CC \longrightarrow A_\CC \] whose kernel is the lattice $\HH_1(A_\CC(\CC),\ZZ),$ and denote by $\log_A$ an abelian logarithm of $A$, that is a choice of an inverse map of $\exp_{A}$. Consider the composite \[ P_\mathcal{P} \circ (v \times v^*): X \otimes Y^\vee \longrightarrow \ZZ(1)\] where $P_\mathcal{P}: A \otimes A^* \to \ZZ(1)$ is the Weil pairing of $A.$ Since we work modulo isogenies, we identify the abelian variety $A$ with its Cartier dual $A^*$. Let $\omega_1, \dots , \omega_g $ be differentials of the first kind which build a basis of the $K$-vector space $\HH^0(A, \Omega^1_A)$ of holomorphic differentials, and let $\eta_1, \dots , \eta_g $ be differentials of the second kind which build a basis of the $K$-vector space $\HH^1(A, \mathcal{O}_A)$ of differentials of the second kind modulo holomorphic differentials and exact differentials.
As in the case of elliptic curves, the first de Rham cohomology group $\HH^1_\dR(A)$ of the abelian variety $A$ is the direct sum $\HH^0(A, \Omega^1_A) \oplus \HH^1(A, \mathcal{O}_A)$ of these two vector spaces and it has dimension $2g$. Let $\gamma_1, \dots, \gamma_{2g}$ be closed paths which build a basis of the $\QQ$-vector space $\HH_1(A_\CC,\QQ)$. For $n=1, \dots, g$ and $m=1,\dots, 2g$, the abelian integrals of the first kind $ \int_{\gamma_m} \omega_n = \omega_{nm}$ are the \textit{periods} of the abelian variety $A$, and the abelian integrals of the second kind $ \int_{\gamma_m} \eta_n = \eta_{nm}$ are the \textit{quasi-periods} of $A$. \begin{theorem}\label{eq:dimUR} Let $M=[u:X \to G]$ be a 1-motive defined over $K$, with $G$ an extension of an abelian variety $A$ by a torus $Y(1)$. Denote by $ F=\End( A) \otimes_\ZZ \QQ$ the field of endomorphisms of the abelian variety $A.$ Let $x_1, \dots, x_{\mathrm{rk}(X)}$ be generators of the character group $X$ and let $y^\vee_1, \dots , y^\vee_{\mathrm{rk}(Y^\vee)}$ be generators of the character group $Y^\vee.$ Then \[ \dim_{\QQ} \UR( M)=\] \[ 2 \dim_{F} \mathcal{A}b \mathcal{L}og \; \im (v,v^*) + \dim_{\QQ} \mathcal{L}og \;\im (P_\mathcal{P} \circ (v \times v^*)) + \dim_{\QQ} \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))}) \] where \begin{itemize} \item $ \mathcal{A}b\mathcal{L}og \; \im(v,v^*)$ is the $F$-sub-vector space of $\CC / (\sum_{n=1, \dots, g \atop m=1, \dots, 2g}F \, \omega_{nm}) $ generated by the abelian logarithms $\{ \log_A v(x_k), \log_A v^*(y^\vee_i) \}_{ k=1, \ldots, \mathrm{rk}(X) \atop i=1, \ldots, \mathrm{rk}(Y^\vee)}$ ; \item $ \mathcal{L}og \; \im ( P_\mathcal{P} \circ (v \times v^*))$ is the $\QQ\,$-sub-vector space of $\CC / 2 i \pi \QQ$ generated by the logarithms \par\noindent $\{ \log P_\mathcal{P}(v(x_k),v^*(y^\vee_i) ) \}_{ k=1, \ldots, \mathrm{rk}(X) \atop i=1, \ldots, \mathrm{rk}(Y^\vee)}$; \item $ \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ 
(v \times v^*))})$ is the $\QQ\,$-sub-vector space of $\CC / 2 i \pi \QQ$ generated by the logarithms $\{ \log \psi(x_{k'},y^\vee_{i'} ) \}_{(x_{k'},y^\vee_{i'} ) \in \ker (P_\mathcal{P} \circ (v \times v^*)) \atop 1\leq {k'} \leq \mathrm{rk}(X), \; 1\leq {i'} \leq \mathrm{rk}(Y^\vee) } .$ \end{itemize} \end{theorem} \begin{proof} By the main theorem of \cite[Thm 0.1]{B03}, the unipotent radical $W_{-1} (\Lie {\Galmot}(M))$ is the semi-abelian variety extension of $B$ by $Z(1)$ defined by the adjoint action of the Lie algebra $(B,Z(1), [\, , \,])$ over $B+Z(1).$ Since the tannakian category $\langle M \rangle^\otimes$ has rational coefficients, we have that $ \dim_{\QQ} W_{-1} ( {\Galmot}(M)) = 2 \dim B + \dim Z(1) $. Concerning the abelian part \[\dim B= \dim_F \mathcal{A}b\mathcal{L}og \; \im(v,v^*).\] On the other hand, for the toric part $\dim Z(1) = \dim (Z/ Z_1)(1) + \dim Z_1(1)$ by construction. Because of the explicit description of the Lie bracket $[\, ,\,]: B \otimes B \to X^\vee \otimes Y(1)$ given in \cite[(2.8.4)]{B03}, \[ \dim Z_1(1) = \dim_{\QQ} \mathcal{L}og \;\im (P_\mathcal{P} \circ (v \times v^*)). 
\] Finally by construction \[ \dim(Z/ Z_1)(1) = \dim_{\QQ} \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))}) .\] \end{proof} \begin{remark} The dimension of the quotient ${\Gr}_{-1}^{\W}(\Galmot(M))$ of the unipotent radical $ \UR( M)$ is twice the dimension of the abelian sub-variety $B$ of $X^\vee \otimes A+A^* \otimes Y,$ that is \[ \dim_\QQ {\Gr}_{-1}^{\W}(\Galmot(M)) = 2 \dim_F \mathcal{A}b\mathcal{L}og \; \im(v,v^*).\] The dimension of $ W_{-2} ( {\Galmot}(M))$ is the dimension of the sub-torus $Z(1)$ of $X^\vee \otimes Y(1)$, that is \[ \dim_{\QQ} W_{-2} ( {\Galmot}(M)) = \dim_{\QQ} \mathcal{L}og \;\im (P_\mathcal{P} \circ (v \times v^*)) + \dim_{\QQ} \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))}) \] \end{remark} \begin{remark} A 1-motive $M=[u:X \to G]$ defined over $K$ is said to be \textit{deficient} if $ \W_{-2}(\Galmot(M))=0.$ In \cite{JR} Jacquinot and Ribet construct such a 1-motive in the case $\mathrm{rk}(X)=\mathrm{rk}(Y^\vee)=1$. By the above Theorem we have that $M$ is deficient if and only if for any $(x,y^{\vee}) \in X \otimes Y^{\vee}$, \[ P_\mathcal{P} (v(x), v^*(y^{\vee})) =1 \quad \mathrm{and} \quad \psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))}(x,y^{\vee}) =1,\] that is if and only if the two arrows $P_\mathcal{P} \circ (v \times v^*): X \otimes Y^{\vee} \to \ZZ(1)$ and $\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))} : X \otimes Y^{\vee} \to \ZZ(1)$ are the trivial arrow. \end{remark} Now let $M=[u:{\ZZ}^r \to G]$ be a 1-motive defined over $K$, with $G$ an extension of a product $\Pi^n_{j=1} \cE_j $ of pairwise not isogenous elliptic curves by the torus $\GG_m^s.$ We go back to the notation used in Section \ref{periods}. 
Denote by $\pr_h:\Pi^n_{j=1} \cE_j \to \cE_h$ and $\pr_h^*:\Pi^n_{j=1} \cE_j^* \to \cE_h^*$ the projections into the $h$-th elliptic curve and consider the composites $ v_h= \pr_h \circ v:\ZZ^r \rightarrow \cE_h$ and $v^*_h=\pr_h^* \circ v^* :\ZZ^s \rightarrow \cE_h^*.$ Let $\mathcal{P}$ be the Poincar\'e biextension of $(\Pi^n_{j=1} \cE_j, \Pi^n_{j=1} \cE_j^*)$ by $\GG_m$ and let $\mathcal{P}_j$ be the Poincar\'e biextension of $( \cE_j, \cE_j^*)$ by $\GG_m$. The category of biextensions is additive in each variable, and so we have that $P_\mathcal{P} = \Pi^n_{j=1}P_{\mathcal{P}_j}$, where $P_{\mathcal{P}_j}: \cE_j \otimes \cE_j^*\to \ZZ(1)$ is the Weil pairing of the elliptic curve $\cE_j$. \begin{corollary}\label{eq:dimGalMot} Let $M=[u:{\ZZ}^r \to G]$ be a 1-motive defined over $K$, with $G$ an extension of a product $\Pi^n_{j=1} \cE_j $ of pairwise not isogenous elliptic curves by the torus $\GG_m^s.$ Denote by $ k_j=\End( \cE_j) \otimes_\ZZ \QQ$ the field of endomorphisms of the elliptic curve $\cE_j$ for $j=1, \dots, n.$ Let $x_1, \dots, x_r$ be generators of the character group ${\ZZ}^r$ and let $y^\vee_1, \dots , y^\vee_{s}$ be generators of the character group ${\ZZ}^{s}.$ Then \[ \dim_\QQ \Galmot(M) = 4 \sum_{j=1}^n (\dim_{\QQ} k_j)^{-1}-n+1 + \sum_{j=1}^n 2 \dim_{k_j} \mathcal{A}b \mathcal{L}og \; \im (v_j,v^*_j) + \] \[ \dim_{\QQ} \mathcal{L}og \;\im (P_\mathcal{P} \circ (v \times v^*)) + \dim_{\QQ} \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))}) \] \begin{itemize} \item $\mathcal{A}b \mathcal{L}og \; \im (v_j,v^*_j)$ is the $k_j$-sub-vector space of $\CC / k_j \, \omega_{j1}+ k_j \, \omega_{j2}$ generated by the elliptic logarithms $\{ p_{jk}, q_{ji} \}_{ k=1, \ldots, r \atop i=1, \ldots, s}$ of the points $\{ P_{jk}, Q_{ji} \}_{ k=1, \ldots, r \atop i=1, \ldots, s}$ for $j=1, \ldots, n;$ \item $ \mathcal{L}og \; \im ( P_\mathcal{P} \circ (v \times v^*))$ is the $\QQ\,$-sub-vector space of $\CC / 2 i \pi \QQ$ generated by the 
logarithms \par\noindent $\{ \log P_{\mathcal{P}_j}(P_{jk}, Q_{ji} ) \}_{ k=1, \ldots,r, \; \; i=1, \ldots, s \atop j=1, \ldots, n}$; \item $ \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))})$ is the $\QQ\,$-sub-vector space of $\CC / 2 i \pi \QQ$ generated by the logarithms $\{ \log \psi(x_{k'} ,y^\vee_{i'} ) \}_{(x_{k'} ,y^\vee_{i'} ) \in \ker (P_{\mathcal{P}_j} \circ (v_j\times v^*_j)) \atop 1\leq {k'} \leq r, \; 1\leq {i'} \leq s, \; j=1, \ldots, n } .$ \end{itemize} \end{corollary} \begin{proof} Since the elliptic curves are pairwise not isogenous, by \cite[\S 2]{Moonen} and (\ref{dimGalMT}) we have that \[\dim \Galmot\big(\Pi_{j=1}^{n} \cE_j \big)=4 \; \sum_{j=1}^n (\dim_\QQ k_j)^{-1}-n+1.\] Therefore putting together Corollary \ref{eq:DecomDim}, Lemma \ref{eq:dimGr0} and Theorem \ref{eq:dimUR} we can conclude. \end{proof} \begin{remark} We can express the dimension of the motivic Galois group of a product of elliptic curves also as $ 3 n_1 +n_2 +1,$ where $n_1$ is the number of elliptic curves without complex multiplication and $n_2$ is the number of elliptic curves with complex multiplication. Therefore \[ \dim \Galmot(M) =\dim\UR (M) + 3 n_1 +n_2 +1 \] \end{remark} \section{The 1-motivic elliptic conjecture}\label{conjecture} \pn \textbf{The 1-motivic elliptic conjecture} \par\noindent Consider \begin{itemize} \item $n$ elliptic curves $\cE_1, \dots, \cE_n$ pairwise not isogenous. For $j=1, \ldots, n,$ denote by $ k_j=\End( \cE_j) \otimes_\ZZ \QQ$ the field of endomorphisms of $\cE_j$ and let $g_{2j}=60 \; \mathrm{G}_{4j}$ and $g_{3j}=140 \; \mathrm{G}_{6j}$, where $\mathrm{G}_{4j}$ and $\mathrm{G}_{6j}$ are the Eisenstein series relative to the lattice $ \HH_1(\cE_j(\CC),\ZZ)$ of weight 4 and 6 respectively; \item $s$ points $Q_i= (Q_{1i},\dots,Q_{ni})$ of $\Pi^n_{j=1} \cE_j^*(\CC)$ for $i=1,\ldots, s$. These points determine an extension $G$ of $\Pi^n_{j=1} \cE_j $ by $ {\GG}_m^s$; \item $r$ points $R_1,\dots,R_r $ of $G(\CC)$. 
Denote by $(P_{1k},\dots,P_{nk}) \in \Pi^n_{j=1} \cE_j(\CC)$ the projection of the point $R_k$ on $\Pi^n_{j=1} \cE_j$ for $k=1, \ldots, r.$ \end{itemize} Then \[ \mathrm{tran.deg}_{\QQ}\, \QQ \Big(2i \pi, g_{2j},g_{3j}, Q_{ji},R_{k},\omega_{j1},\omega_{j2},\eta_{j1}, \eta_{j2}, p_{jk},\zeta_j(p_{jk}),\] \[ \eta_{j1} q_{ji} - \omega_{j1} \zeta_j(q_{ji}),\eta_{j2} q_{ji} - \omega_{j2} \zeta_j(q_{ji}) , \log f_{q_{ji}}(p_{jk}) +l_{jik} {\Big)}_{j=1,\ldots,n \;\; i=1,\ldots,s \atop k=1,\dots,r } \geq \] \[ 4 \sum_{j=1}^n (\dim_{\QQ} k_j)^{-1}-n+1 + \sum_{j=1}^n 2 \dim_{k_j} \mathcal{A}b \mathcal{L}og \; \im (v_j,v^*_j) + \] \[ \dim_{\QQ} \mathcal{L}og \;\im (P_\mathcal{P} \circ (v \times v^*)) + \dim_{\QQ} \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))}) \] where \begin{itemize} \item $\mathcal{A}b \mathcal{L}og \; \im (v_j,v^*_j)$ is the $k_j$-sub-vector space of $\CC / k_j \, \omega_{j1}+ k_j \, \omega_{j2}$ generated by the elliptic logarithms $\{ p_{jk}, q_{ji} \}_{ k=1, \ldots, r \atop i=1, \ldots,s}$ of the points $\{ P_{jk}, Q_{ji} \}_{ k=1, \ldots, r \atop i=1, \ldots, s}$ for $j=1, \ldots, n;$ \item $ \mathcal{L}og \; \im ( P_\mathcal{P} \circ (v \times v^*))$ is the $\QQ\,$-sub-vector space of $\CC / 2 i \pi \QQ$ generated by the logarithms \par\noindent $\{ \log P_{\mathcal{P}_j}(P_{jk}, Q_{ji} ) \}_{ k=1, \ldots, r, \; \; i=1, \ldots, s \atop j=1, \ldots, n}$; \item $ \mathcal{L}og \; \im (\psi_{| \ker (P_\mathcal{P} \circ (v \times v^*))})$ is the $\QQ\,$-sub-vector space of $\CC / 2 i \pi \QQ$ generated by the logarithms $\{ \log \psi(x,y^\vee ) \}_{(x,y^\vee ) \in \ker (P_{\mathcal{P}_j} \circ (v_j\times v^*_j)) \atop (x,y^\vee ) \in \ZZ^r \otimes \ZZ^s} .$ \end{itemize} Because of Proposition \ref{proof-periods} and Corollary \ref{eq:dimGalMot}, we can conclude that \begin{theorem}\label{thmMain} Let $M=[u:{\ZZ}^r \to G]$ be a 1-motive defined over $K$, with $G$ an extension of a product $\Pi^n_{j=1} \cE_j $ of pairwise not 
isogenous elliptic curves by the torus $\GG_m^s.$ Then the Generalized Grothendieck's Period Conjecture applied to $M$ is equivalent to the 1-motivic elliptic conjecture. \end{theorem} \begin{remark} \label{Rk1} If $Q_{ji}=0$ for $j=1, \dots,n$ and $i=1, \dots,s$, the above conjecture is the elliptic-toric conjecture stated in \cite[1.1]{B02}, which is equivalent to the Generalized Grothendieck's Period Conjecture applied to the 1-motive $M=[u: \Pi_{k=1}^r z_k {\ZZ} \to \GG_m^s \times \Pi^n_{j=1} \cE_j]$ with $u(z_k) = (R_{1k}, \dots,R_{sk} , P_{1k}, \dots , P_{nk}) \in \GG_m^s (K) \times \Pi^n_{j=1} \cE_j(K).$ \end{remark} \begin{remark} \label{Rk2} If $Q_{ji}=P_{ij}=\cE_j=0$ for $j=1, \dots,n$ and $i=1, \dots,s$, the above conjecture is equivalent to the Generalized Grothendieck's Period Conjecture applied to the 1-motive $M=[u: \Pi_{k=1}^r z_k {\ZZ} \to \GG_m^s]$ with $u(z_k) = (R_{1k}, \dots,R_{sk}) \in \GG_m^s (K) $, which in turn is equivalent to the Schanuel conjecture (see \cite[Cor 1.3 and \S 3]{B02}). \end{remark} \section{Low dimensional case: $r=n=s=1$}\label{lowDim} In this section we work with a 1-motive $M=[ u:\ZZ \rightarrow G], u(1)=R,$ defined over $K$ in which $G$ is an extension of just one elliptic curve $\cE$ by the torus $\GG_m$, i.e. $r=s=n=1$. Let $g_2=60 \; \mathrm{G}_4$ and $g_3=140 \; \mathrm{G}_6$ with $\mathrm{G}_4$ and $\mathrm{G}_6$ the Eisenstein series relative to the lattice $\Lambda := \HH_1(\cE(\CC),\ZZ)$ of weight 4 and 6 respectively. The field of definition $K$ of the 1-motive $M=[u:\ZZ \rightarrow G], u(1)=R$ is \[\QQ \big( g_2, g_3, Q,R \big).\] By Proposition \ref{proof-periods}, the field $K (\mathrm{periods}(M))$ generated over $K$ by the periods of $M$, which are the coefficients of the matrix (\ref{eq:matrix-periods}), is \[\QQ \Big(g_2, g_3, Q, R, 2 i \pi, \omega_1,\omega_2,\eta_1,\eta_2, p,\zeta(p), \eta_1 q - \omega_1 \zeta(q),\eta_2 q - \omega_2 \zeta(q) , \log f_q(p) +l \Big). 
\] $\End(\cE) \otimes_\ZZ \QQ$-linear dependence between the points $P$ and $Q$ and torsion properties of the points $P, Q,R$ affect the dimension of the unipotent radical of $\Galmot (M)$. By Corollary \ref{eq:dimGalMot} we have the following table concerning the dimension of the motivic Galois group $\Galmot (M)$ of $M$: \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $\dim \UR (M)$ & $\dim \Galmot (M) $ & $\dim \Galmot (M) $ & $M$ \\ & & $\cE$ CM & $\cE$ not CM & \\ \hline Q, R torsion & 0 & 2& 4 & $M=[u:\ZZ \rightarrow \cE \times \GG_m]$ \\ ($\Rightarrow$ P torsion) & & & & $ u(1)=(0,1)$ \\ \hline P,Q torsion &1 & 3& 5 & $M=[u:\ZZ \rightarrow \cE \times \GG_m] $\\ (R not torsion) & & & & $u(1)=(0,R) $ \\ \hline R torsion &2 & 4& 6 & $M=[u:\ZZ \rightarrow G]$\\ ($\Rightarrow$ P torsion) & & & & $ u(1)=0$\\ \hline Q torsion &3 &5 & 7 & $M=[u:\ZZ \rightarrow \cE \times \GG_m]$\\ (P and R not torsion) & & & & $u(1)=(P,R) $ \\ \hline P torsion &3 &5 &7 & $M=[u:\ZZ \rightarrow \cE^* \times \GG_m] $ \\ (R and Q not torsion) & & & & $u(1)=(Q,R) $ \\ \hline P,Q &5 &7 & 9 & $M=[u:\ZZ \rightarrow G]$\\ $\End( \cE) \otimes_\ZZ \QQ$-lin indep & & & &$ u(1)=R$\\ \hline \end{tabular} \end{center} We can now state explicitly the Generalized Grothendieck's Period Conjecture (\ref{eq:GCP}) for the 1-motives involved on the above table: \begin{itemize} \item $R$ and $Q$ are torsion: We work with the 1-motive $M=[u:\ZZ \rightarrow \cE \times \GG_m], u(1)=(0,1)$ or $M=[0 \to \cE].$ If $\cE$ is not CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2 \Big)\geq 4\] that is 4 at least of the 6 numbers $ g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2$ are algebraically independent over $\QQ$. 
If $\cE$ is CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\eta_1 \Big)\geq 2\] that is 2 at least of the 4 numbers $g_2,g_3,\omega_1,\eta_1$ are algebraically independent over $\QQ.$ If we assume $g_2,g_3 \in \overline{\QQ},$ we get the Chudnovsky Theorem: $\mathrm{tran.deg}_{\QQ}\, \QQ (\omega_1,\eta_1 )=2.$ \item $P$ and $Q$ are torsion: We work with the 1-motive $M=[u:\ZZ \rightarrow \cE \times \GG_m], u(1)=(0,R)$ (this case was studied in the author's Ph.D thesis, see \cite{B02}). If $\cE$ is not CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\omega_2,\eta_1,\eta_2,R, \log(R) \Big)\geq 5\] that is 5 at least of the 8 numbers $g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2,R, \log(R)$ are algebraically independent over $\QQ$. If $\cE$ is CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\eta_1,R, \log(R) \Big)\geq 3\] that is 3 at least of the 6 numbers $g_2,g_3,\omega_1,\eta_1,R, \log(R)$ are algebraically independent over $\QQ$. \item $R$ is torsion: We work with the 1-motive $M=[u:\ZZ \rightarrow G], u(1)=0$ or $M=[v^*:\ZZ \to \cE^*], v^*(1)=Q.$ If $\cE$ is not CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2,Q,q, \zeta(q) \Big)\geq 6\] that is 6 at least of the 9 numbers $g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2,Q,q, \zeta(q)$ are algebraically independent over $\QQ$. If $\cE$ is CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\eta_1,Q, q, \zeta(q) \Big)\geq 4\] that is 4 at least of the 7 numbers $ g_2,g_3,\omega_1,\eta_1,Q, q, \zeta(q)$ are algebraically independent over $\QQ$. \item $Q$ is torsion: We work with the 1-motive $M=[u:\ZZ \rightarrow \cE \times \GG_m], u(1)=(P,R)$ (this case was considered in the author's Ph.D thesis, see \cite{B02}). 
If $\cE$ is not CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2, P,R,p,\zeta(p), \log(R) \Big)\geq 7\] that is 7 at least of the 11 numbers $g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2, P,R,p,\zeta(p), \log(R) $ are algebraically independent over $\QQ$. If $\cE$ is CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big(g_2,g_3,\omega_1,\eta_1, P,R,p,\zeta(p), \log(R) \Big)\geq 5\] that is 5 at least of the 9 numbers $g_2,g_3,\omega_1,\eta_1, P,R,p,\zeta(p), \log(R) $ are algebraically independent over $\QQ$. \item $P$ is torsion: We work with the 1-motive $M=[u:\ZZ \rightarrow G], u(1)=R \in \GG_m(K)$ or $M=[u:\ZZ \rightarrow \cE^* \times \GG_m], u(1)=(Q,R).$ If $\cE$ is not CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2,Q,R, q, \zeta(q),\log (R) \Big)\geq 7\] that is 7 at least of the 11 numbers $ g_2,g_3,\omega_1,\omega_2,\eta_1, \eta_2,Q,R, q, \zeta(q),\log (R) $ are algebraically independent over $\QQ$. If $\cE$ is CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2,g_3,\omega_1,\eta_1,Q,R, q, \zeta(q),\log (R) \Big)\geq 5\] that is 5 at least of the 9 numbers $ g_2,g_3,\omega_1,\eta_1,Q,R, q, \zeta(q),\log (R) $ are algebraically independent over $\QQ$. \item $P,Q,R$ are not torsion and $P,Q$ are $\End( \cE) \otimes_\ZZ \QQ$-linearly independent: We work with the 1-motive $M=[u:\ZZ \rightarrow G], u(1)=R \in G(K).$ If $\cE$ is not CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2, g_3, Q, R, \omega_1,\omega_2,\eta_1,\eta_2, p,\zeta(p),q, \zeta(q), \eta_1 q - \omega_1 \zeta(q),\eta_2 q - \omega_2 \zeta(q) , \log f_q(p)+l \Big)\geq 9\] that is 9 at least of the 15 numbers $ g_2, g_3, Q, R, \omega_1,\omega_2,\eta_1,\eta_2, p,\zeta(p),q, \zeta(q), \eta_1 q - \omega_1 \zeta(q),\eta_2 q - \omega_2 \zeta(q) , \log f_q(p) $ are algebraically independent over $\QQ$. 
If $\cE$ is CM, \[\mathrm{tran.deg}_{\QQ}\, \QQ \Big( g_2, g_3, Q, R, \omega_1,\eta_1, p,\zeta(p),q, \zeta(q), \eta_1 q - \omega_1 \zeta(q),\eta_2 q - \omega_2 \zeta(q) , \log f_q(p) +l \Big)\geq 7\] that is 7 at least of the 13 numbers $ g_2, g_3, Q, R, \omega_1,\eta_1, p,\zeta(p),q, \zeta(q), \eta_1 q - \omega_1 \zeta(q),\eta_2 q - \omega_2 \zeta(q) , \log f_q(p) $ are algebraically independent over $\QQ$. \end{itemize} \bibliographystyle{plain}
{"config": "arxiv", "file": "1905.07247/1.tex"}
TITLE: Definite Trig Integrals: Changing Limits of Integration QUESTION [1 upvotes]: $$\int_0^{\pi/4} \sec^4 \theta \tan^4 \theta\; d\theta$$ I used the substitution: let $u = \tan \theta$ ... then $du = \sec^2 \theta \; d\theta$. I know that now I have to change the limits of integration, but am stuck as to how I should proceed. Should I sub the original limits into $\tan \theta$ or should I let $\tan \theta$ equal the original limits and then get the new limits? And if it helps, the answer to the definite integral is supposed to be $0$. Thanks in advance. REPLY [1 votes]: \begin{align} I &= \int_{0}^{\pi/4} \sec^{4}(\theta) \tan^{4}(\theta) \ d\theta \\ &= \int_{0}^{\pi/4} \sec^{2}(\theta) ( 1 + \tan^{2}(\theta)) \tan^{4}(\theta) \ d\theta \\ &= \int_{0}^{\pi/4} \tan^{4}(\theta) \ d(\tan\theta) + \int_{0}^{\pi/4} \tan^{6}(\theta) \ d(\tan\theta) \\ &= \left[ \frac{1}{5} \tan^{5}(\theta) \right]_{0}^{\pi/4} + \left[ \frac{1}{7} \tan^{7}(\theta) \right]_{0}^{\pi/4} \\ &= \frac{1}{5} + \frac{1}{7} = \frac{12}{35}. \end{align}
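As a quick numerical sanity check (my addition, not part of the original thread), one can approximate the integral with a composite Simpson's rule and compare it against $12/35$, which also shows the claimed answer of $0$ cannot be right:

```python
import math

def integrand(t):
    # sec^4(t) * tan^4(t)
    return (1 / math.cos(t)) ** 4 * math.tan(t) ** 4

def simpson(g, a, b, n=10000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

val = simpson(integrand, 0.0, math.pi / 4)
print(val, 12 / 35)  # both ≈ 0.342857...
```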
{"set_name": "stack_exchange", "score": 1, "question_id": 799711}
TITLE: Deriving the joint probability distribution of a transformed rv QUESTION [2 upvotes]: Suppose that $A$ and $B$ are two random variables with joint probability distribution given by $f_{AB}(a,b) = -\frac{1}{2}(\ln(a)+\ln(b))$, if $0 \lt a \lt 1$ and $0 \lt b \lt 1$, and 0 otherwise. Define $X = A+B$ and $Y = \frac{A}{B}$. How do I derive the joint distribution $f_{XY}(x,y)$? Progress: I know that $X$ will take values $0 \lt x \lt 2$, and the Jacobian $= 2y$. Any help will be much appreciated. REPLY [2 votes]: The Jacobian I obtain is $$|J|=\frac{x}{(y+1)^2}$$ and after standard calculations the joint density I obtain is $$f_{XY}(x,y)=\frac{-x}{2(y+1)^2}\log\frac{x^2y}{(y+1)^2}\cdot\left[\mathbb{1}_{(0;1]}(x)\mathbb{1}_{(0;+\infty)}(y)+\mathbb{1}_{(1;2)}(x)\mathbb{1}_{\left(x-1;\frac{1}{x-1}\right)}(y)\right]$$ This is because when $x\in(0;1]$ there are no problems, but when $1<x<2$, in order to have $$0<\frac{xy}{y+1}<1$$ and $$0<\frac{x}{y+1}<1$$ you need to have $$x-1<y<\frac{1}{x-1}$$ Alternative notation: the joint support can alternatively be expressed by $$0<x<\min\left(\frac{y+1}{y};y+1\right)$$ $$0<y<+\infty$$ To understand this alternative notation it is enough to draw the support region.
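A quick way to double-check the Jacobian (my addition, not part of the answer) is to invert the transformation, $a = xy/(y+1)$ and $b = x/(y+1)$, and compare a central-difference Jacobian determinant against the closed form $x/(y+1)^2$ at an arbitrary sample point:

```python
# Inverse transformation: (x, y) -> (a, b), where x = a + b and y = a / b
def a_of(x, y): return x * y / (y + 1)
def b_of(x, y): return x / (y + 1)

def jac_det(x, y, h=1e-6):
    """Central-difference approximation of the Jacobian determinant d(a,b)/d(x,y)."""
    da_dx = (a_of(x + h, y) - a_of(x - h, y)) / (2 * h)
    da_dy = (a_of(x, y + h) - a_of(x, y - h)) / (2 * h)
    db_dx = (b_of(x + h, y) - b_of(x - h, y)) / (2 * h)
    db_dy = (b_of(x, y + h) - b_of(x, y - h)) / (2 * h)
    return da_dx * db_dy - da_dy * db_dx

x, y = 0.7, 1.3                      # arbitrary point inside the support
numeric = abs(jac_det(x, y))
closed_form = x / (y + 1) ** 2
print(numeric, closed_form)          # both ≈ 0.1323
```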
{"set_name": "stack_exchange", "score": 2, "question_id": 4292633}
TITLE: If f is continuous on [a,b] and f(a)=f(b) then show that there exists x,y in (a,b) such that f(x)=f(y) QUESTION [1 upvotes]: If $f$ is continuous on $[a,b]$ and $f(a)=f(b)$, show that there exist $x,y \in (a,b)$ with $x \neq y$ such that $f(x)=f(y)$. It looks obvious if I imagine the graph, but I am not able to prove it. I am trying to employ the intermediate value property, but I'm not able to reach the conclusion. REPLY [1 votes]: Without using the IVP: from the statement it's clear that $f$ cannot be strictly monotone on $(a,b)$. If for all $x,y \in (a,b)$ we have $x\neq y \Rightarrow f(x) \neq f(y)$ (otherwise we are done), then that tells us $f$ is injective and continuous on $(a,b)$. This link says that a continuous and injective function is strictly monotone, yielding a contradiction.
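The reply outsources the key lemma to a link; for completeness, here is a short sketch of that standard fact (my addition, using the intermediate value theorem):

```latex
\textbf{Lemma (sketch).} If $f$ is continuous and injective on an interval $I$,
then $f$ is strictly monotone on $I$. \emph{Proof idea:} if not, there exist
$a<b<c$ in $I$ with either $f(b)>\max\{f(a),f(c)\}$ or $f(b)<\min\{f(a),f(c)\}$.
In the first case pick $\lambda$ with $\max\{f(a),f(c)\}<\lambda<f(b)$; by the
IVT, $\lambda=f(x_1)=f(x_2)$ for some $x_1\in(a,b)$ and $x_2\in(b,c)$,
contradicting injectivity. The second case is symmetric. \qed
```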
{"set_name": "stack_exchange", "score": 1, "question_id": 4026149}
TITLE: How can a conditional law of a random variable following a Poisson law be a Binomial law? QUESTION [3 upvotes]: Let us say that $X_1,\dots ,X_m$ are independent random variables following Poisson laws with parameters $\lambda_1,\dots, \lambda_m$. I'm looking for the conditional law of $X_1$ conditioned on $\{X_1+X_2=k\}$, which is said to be a Binomial law $B(k,\frac{\lambda_1}{\lambda_1+\lambda_2})$. Yet, $\sum\limits_{k\in X_2(\Omega)}{P(X_2=k-j)}$ makes sense only when $k-j\ge 0$. Thus \begin{align*} \sum_{k\in X_2(\Omega)}{P(X_2=k-j)}&=\sum\limits_{k-j\ge 0}{\frac{\lambda_2^{k-j}e^{-\lambda_2}}{(k-j)!}}\\[10pt] &= e^{-2\lambda_2} \end{align*} Therefore, we can calculate $P(X_1=j\mid X_1+X_2=k)$ \begin{align*} P(X_1=j\mid X_1+X_2=k)&=\frac{\frac{\lambda_1^je^{-\lambda_1-2\lambda_2}}{j!}}{\frac{(\lambda_1+\lambda_2)^ke^{-(\lambda_1+\lambda_2)}}{k!}} \end{align*} Here I'm stuck; I don't think I can find a Binomial law from here... Do you have any hint? Did I make a mistake? REPLY [2 votes]: \begin{align} & \Pr(X_1 = j\mid X_1+X_2=k) = \frac{\Pr(X_1=j\ \&\ X_1+X_2=k)}{\Pr(X_1+X_2=k)} \\[12pt] = {} & \frac{\Pr(X_1=j\ \&\ X_2=k-j)}{\Pr(X_1+X_2=k)} = \frac{\Pr(X_1=j)\Pr(X_2 = k-j)}{\Pr(X_1+X_2=k)} \\[12pt] = {} & \frac{\left( \dfrac{\lambda_1^j e^{-\lambda_1}}{j!} \cdot \dfrac{\lambda_2^{k-j} e^{-\lambda_2}}{(k-j)!} \right)}{\left( \dfrac{(\lambda_1+\lambda_2)^k e^{-(\lambda_1+\lambda_2)}}{k!} \right)} = \frac{k!}{j!(k-j)!} \cdot \left(\frac{\lambda_1}{\lambda_1+\lambda_2}\right)^j \cdot \left(\frac{\lambda_2}{\lambda_1+\lambda_2}\right)^{k-j} \\[15pt] = {} & \binom k j \left(\frac{\lambda_1}{\lambda_1+\lambda_2}\right)^j \cdot \left( 1 - \frac{\lambda_1}{\lambda_1+\lambda_2}\right)^{k-j} \end{align}
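To see numerically that the reply's identity really does produce a Binomial law (a quick check of my own, not part of the original answer), one can compare the conditional probabilities with the binomial pmf for some sample parameters:

```python
import math

def pois(lam, n):
    """Poisson pmf with mean lam."""
    return lam**n * math.exp(-lam) / math.factorial(n)

lam1, lam2, k = 1.5, 2.5, 7
p = lam1 / (lam1 + lam2)

for j in range(k + 1):
    # P(X1 = j | X1 + X2 = k) computed as joint / marginal,
    # using that X1 + X2 is Poisson with mean lam1 + lam2
    cond = pois(lam1, j) * pois(lam2, k - j) / pois(lam1 + lam2, k)
    # Binomial(k, lam1/(lam1+lam2)) pmf
    binom = math.comb(k, j) * p**j * (1 - p) ** (k - j)
    assert abs(cond - binom) < 1e-12
print("conditional law matches Binomial(k, p) for all j")
```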
{"set_name": "stack_exchange", "score": 3, "question_id": 1507995}
TITLE: Quadratics Word Problem QUESTION [1 upvotes]: The path of a football flying through the air can be modelled by a quadratic equation. The football reaches the ground after 12 seconds in flight and is kicked from a height of 1 meter. The parabola has undergone a vertical reflection and a vertical compression by a factor of 1/6. a) Write an equation to represent the path of the football. b) Does the football reach a height of 8 meters? Please, if you can, explain in simple language and step by step. REPLY [2 votes]: This is probably what they mean: A parabola is characterized by 3 coefficients, so you need 3 pieces of information to determine a parabola. Two of the pieces of information are given directly, as $f(0) = 1$ and $f(12) = 0$. For the third, it seems they are attempting to say that a unit parabola was shifted so that the leading coefficient was scaled from $A=1$ to $A=-1/6$. So you have: $$f(t) = At^2 + Bt + C \tag{1}$$ $$\begin{cases} A=-\frac 16 \\ f(0) = 1 \\ f(12) = 0\end{cases}$$ Can you take it from here? You know that $A = -1/6$, so (1) becomes: $$f(t) = -\frac16t^2 + Bt + C \tag{2}$$ Now you know that $f(0) = 1$, so (2) becomes $$f(0) = -\frac16\cdot 0^2 + B\cdot 0 + C$$ $$1 = 0 + 0 + C$$ $$1 = C$$ So $$f(t) = -\frac16t^2 + Bt + 1\tag{3}$$ Now you just have to find out the value of $B$, so use $f(12) = 0$ in (3): $$f(12) = -\frac16\cdot 12^2 + B\cdot12 + 1$$ $$0 = -\frac{144}{6} + 12 B + 1$$ $$23 = 12 B$$ $$\frac{23}{12} = B$$ So $$f(t) = -\frac16 t^2 + \frac{23}{12} t + 1$$ Now you know the equation of the height of the football. Then the question becomes, does it ever reach a height of 8 m? So set: $$8 = -\frac16 t^2 + \frac{23}{12} t + 1$$ and solve for $t$ using the quadratic formula. You want to check if there is a positive real number that solves the equation.
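The finished model can be checked exactly with rational arithmetic (my addition, not part of the original answer); the vertex shows the maximum height is $625/96 \approx 6.51$ m, so the football never reaches $8$ m:

```python
from fractions import Fraction as Fr

A, B, C = Fr(-1, 6), Fr(23, 12), Fr(1)
f = lambda t: A * t * t + B * t + C

assert f(0) == 1 and f(12) == 0     # kicked from 1 m, lands after 12 s

t_vertex = -B / (2 * A)             # time of maximum height: 23/4 s
h_max = f(t_vertex)                 # 625/96 m, about 6.51 m
print(t_vertex, h_max, h_max >= 8)  # 23/4 625/96 False
```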
{"set_name": "stack_exchange", "score": 1, "question_id": 873784}
TITLE: How many right angled triangles with co-prime integer sides and base of length $28cm$ are there? QUESTION [1 upvotes]: How many right angled triangles with co-prime integer sides and base of length $28cm$ are there? Please help me. My working: I tried taking the base side as $2m = 28$, since $2m, m^2-1, m^2+1$ form a Pythagorean triple. I got one possible set, which was $28, 195$ and $197$. Please help me to find the other sets. REPLY [0 votes]: We begin with Euclid's formula shown here as $$ \quad A=m^2-k^2 \qquad B=2mk \qquad C=m^2+k^2\quad$$ We can find all triples with $B=28$ by solving the $B$-function for $k$ and testing a defined range of $m$-values to see which, if any, yield integers. \begin{equation} B=2mk\implies k=\frac{B}{2m}\qquad\text{for}\qquad \bigg\lfloor \frac{1+\sqrt{2B+1}}{2}\bigg\rfloor \le m \le \frac{B}{2} \end{equation} The lower limit ensures $m>k$ and the upper limit ensures $m\ge 2$ $$B=28\implies\bigg\lfloor \frac{1+\sqrt{56+1}}{2}\bigg\rfloor =4 \le m \le \frac{28}{2}=14\\ \quad \text{and we find} \quad m\in\{7,14\}\implies k\in\{2,1\}$$ $$F(7,2)=(45,28,53)\qquad \qquad F(14,1)=(195,28,197)\qquad $$ If we go with non-primitives for side-$B$, we find $$7\times F(2,1)=7\times(3,4,5)=(21,28,35)$$ All $B$-values are multiples of $4$ so there are no other triples. A similar technique for side-$A$ shows just one non-primitive triple $4\times F(4,3)=4\times(7,24,25)\space = \space F(8,6)=(28,96,100)$
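As an independent sanity check (my addition), a brute-force search over the other leg confirms that $(28,45,53)$ and $(28,195,197)$ are the only right triangles with co-prime integer sides having $28$ as a leg:

```python
import math

base = 28
# If 28 is a leg, then 28**2 = (hyp - other)(hyp + other) >= 2*other + 1,
# so the other leg is at most (28**2 - 1) // 2 = 391; 400 is exhaustive.
found = []
for other in range(1, 400):
    hyp2 = other * other + base * base
    hyp = math.isqrt(hyp2)
    is_square = hyp * hyp == hyp2
    coprime = math.gcd(math.gcd(other, base), hyp) == 1
    if is_square and coprime:
        found.append((base, other, hyp))

print(found)  # [(28, 45, 53), (28, 195, 197)]
```

The non-coprime solutions $(21,28,35)$ and $(28,96,100)$ from the answer show up in the same search once the `coprime` filter is dropped.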
{"set_name": "stack_exchange", "score": 1, "question_id": 4017534}
TITLE: Second partial derivatives of thermodynamics potentials at the critical point QUESTION [2 upvotes]: I'm trying to understand the physics of phase transitions, especially at the critical point, but I find myself stuck. For a hydrostatic system, I studied the stability conditions, which lead to (in the case of the Helmholtz free energy) $$ \left(\frac{\partial^{2} F}{\partial T^{2}}\right)_{V} \leq 0, \qquad \left(\frac{\partial^{2} F}{\partial V^{2}}\right)_{T} \geq 0 $$ where we are assuming that the second derivatives are not null. If they are, we have to extend the analysis to the first derivative that is different from zero. In class, we used the second condition to prove (not in a fully formal way) that in a P-V diagram, the critical point has to be an inflection point. I leave here the detailed "proof": Outside the critical point, we have: \begin{equation} dF = -SdT - p dV \quad\rightarrow\quad \left( \frac{\partial F }{\partial V} \right)_{T} = - p \end{equation} At the critical point, we know that the second derivative of any thermodynamic potential has to be zero, so: $$ \left( \frac{\partial^2 F }{\partial V^2} \right)_{T} = 0 \quad\rightarrow\quad \left( \frac{\partial p }{\partial V} \right)_{T} = 0 $$ The third derivative will be 0 as well, as the free energy has to be a minimum (and not an inflection point). Hence, $$ \left( \frac{\partial^4 F }{\partial V^4} \right)_{T} \geq 0 \quad\rightarrow\quad - \left( \frac{\partial^3 p }{\partial V^3} \right)_{T} \geq 0 $$ and we conclude that, indeed, in the P-V plane the critical point is an inflection point. Well, everything would be fine except that the teacher didn't explain why the second derivative of any thermodynamic potential has to be zero at the critical point. I have searched all over the internet and have not found any satisfactory answer. I have checked Callen, but he deduces this idea differently. I would appreciate it if someone could help me. 
EDIT: I define the critical point as the end point of a phase equilibrium curve. P.S.: Sorry for my bad English :) REPLY [0 votes]: Let me start by saying that the statement "at the critical point, we know that the second derivative of any thermodynamic potential has to be zero" is not unconditionally true. It is enough to think about the behavior of the second derivative of the Helmholtz free energy with respect to temperature. It is proportional to the constant-volume specific heat, which does not vanish at the critical point. I add that providing an answer to this question has been an interesting challenge, and I have reached my conclusion only after trying different solution strategies and searching the literature. The short answer is that we cannot prove the vanishing of the second derivative with respect to the volume of the Helmholtz free energy at the critical point if the definition is just the endpoint of the coexistence region. Indeed, the relevant thermodynamic potential, as a function of the pressure, should be an increasing concave function. For all the temperatures $T$ along the coexistence curve, the corresponding $g(T,p)$ will have a point with different left and right derivatives. There is no discontinuity of the derivative at and above the critical temperature. Corresponding to this picture in the pressure domain, the Legendre transformation of $g(T,p)$, say $f(T, V)$, is a convex function of the volume $V$. Above and at the critical temperature, such a function has a continuous second derivative. Below the critical temperature, there is an interval of volumes where the graph is a straight line. The extension of such a straight line (the affine part of the convex function) vanishes at the critical point. The key point is that such a general picture of the behavior of the function $f(T,V)$ does not imply the vanishing of the second derivative $-\frac{\partial^2{f}}{\partial{V}^2}$ automatically.
Therefore, the vanishing of such a second derivative (i.e., the horizontal inflection point of the critical isotherm) can be obtained only by adding another condition beyond the constraints originating from the principles of Thermodynamics. Support for such a position comes also from a recent paper by J. C. Obeso-Jureidini, D. Olascoaga, and V. Romero-Rochín. The authors show that an additional hypothesis is needed to prove the divergence of the isothermal compressibility at the critical point. A better approach to the problem would be to start with a more physically based definition of the critical point. Such a choice does not imply renouncing mathematical rigor. I would suggest defining the critical point as the point in thermodynamic space where the thermodynamic potential $f(T, V)$ ceases to be a strictly convex function of the volume, due to the appearance of a concave intruder which is eventually suppressed by the phase separation, restoring convexity in the form of a finite interval of rectilinear behavior. In a way, such a choice is equivalent to defining the critical point in terms of the horizontal inflection point. This way, emphasis is put on the physical mechanism (the incipient instability of the one-phase region) at the basis of both the critical behavior and the coexistence region.
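To make the horizontal inflection conditions concrete, it may help to check them on an explicit model (this is an illustration, not part of the general argument above). For the van der Waals equation of state $$ p=\frac{RT}{V-b}-\frac{a}{V^{2}} $$ the two conditions $$ \left(\frac{\partial p}{\partial V}\right)_{T}=-\frac{RT}{(V-b)^{2}}+\frac{2a}{V^{3}}=0, \qquad \left(\frac{\partial^{2} p}{\partial V^{2}}\right)_{T}=\frac{2RT}{(V-b)^{3}}-\frac{6a}{V^{4}}=0 $$ are solved simultaneously by $V_{c}=3b$ and $RT_{c}=8a/27b$, which gives $p_{c}=a/27b^{2}$. So in this model the critical isotherm does have a horizontal inflection point, but, as argued above, this follows from the model input rather than from the principles of Thermodynamics alone.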
{"set_name": "stack_exchange", "score": 2, "question_id": 710499}
TITLE: Prove $\{nx \space | \space n \in \mathbb{N} \}$ has no least upper bound. QUESTION [1 upvotes]: Question: Let $F$ be an ordered field. (a) Suppose $S$ is a subset of $F$ and $y$ an element of $F$, and let $T = \{ s + y \space| \space s \in S \}$. Show that if $S$ has a least upper bound, $\sup S$, then $T$ also has a least upper bound, namely $\sup S + y$. (b) Deduce from (a) that if $x$ is a nonzero element of $F$ and we let $S = \{nx \space |\space n \in \mathbb{N} \}$, then $S$ has no least upper bound. My proof for part (a): (a) For some $\alpha \in F$, $\sup S = \alpha$, i.e. for all $s \in S$, $\alpha \ge s$, and for any upper bound $\gamma \in F$ of $S$, $\gamma \ge \alpha$. For any $y \in F$ and all $s\in S$, $\alpha + y \ge s + y$, so $\alpha + y$ is an upper bound of $T$. Furthermore, for all upper bounds $\gamma$ of $S$, $\gamma + y$ is an upper bound of $T$, and since $\gamma + y \ge \alpha + y$ we conclude $\alpha + y = \sup S + y$ is the least upper bound of $T$. Firstly, in (a), I assume that for all upper bounds $\beta$ of $T$, there exists some upper bound $\gamma$ of $S$ s.t. $\beta = \gamma + y$, but in my proof, I use the fact that for all $s \in S$, $\gamma \ge s \implies \gamma + y \ge s + y$, and this just implies that $\gamma + y$ is an upper bound of $T$, not the first statement. Secondly, in part (b), isn't the question assuming that $\mathbb{N} \subset F$, otherwise for any $x \in F$, $nx \notin F$? I tried to prove (b) but I am not sure if it is correct: (b) Suppose $S$ has a least upper bound, let's call it $\alpha$. $S = \{nx \space | \space n\in \mathbb{N} \} = \{(n-1)x + x \space|\space n \in \mathbb{N}\}$. $\sup(\{(n-1)x + x \space|\space n \in \mathbb{N}\}) = \sup(\{(n-1)x\space | \space n \in \mathbb{N}\})+x = \alpha + x$. This is a contradiction, thus $\sup S$ doesn't exist. Is this second proof correct? 
In the proof I assumed that $\sup(\{(n-1)x\space | \space n \in \mathbb{N}\}) = \sup (\{nx \space | \space n\in \mathbb{N} \})$ since for all $n \in \mathbb{N}$, $n+1 \in \mathbb{N}$. REPLY [2 votes]: The proof of (a) can be better exposed. Suppose that $b$ is a least upper bound of $S$. You want to prove that $b+y$ is the least upper bound of $T$. For every $s\in S$, it is true that $s\le b$; hence, for every $s\in S$ it is true that $s+y\le b+y$. Hence $b+y$ is an upper bound of $T$. Suppose $c$ is an upper bound of $T$. Hence $s+y\le c$, for every $s\in S$, but then also $s\le c-y$ for every $s\in S$. Since $b$ is the least upper bound of $S$, we conclude that $b\le c-y$, which implies $b+y\le c$. Therefore $b+y$ is the least upper bound of $T$. For (b) there is no need to assume that the natural numbers are in $F$, but it's not restrictive to assume so, because any ordered field has characteristic zero, so it contains a unique copy of the ring of integers, namely the subring generated by $1_F$. However, you must assume $x>0$, not just $x\ne0$, because $x$ is the least upper bound of $S=\{nx:n\in\mathbb{N}\}$ when $x<0$ (if your natural numbers contain $0$, the least upper bound is $0$). I'll work under the assumption that the natural numbers don't contain $0$ (which is contrary to my usual convention, but seems the one you're using). Suppose $x>0$ and $S=\{nx:n\in\mathbb{N}\}$ has an upper bound. Then you can see that $S=\{(n-1)x:n\in\mathbb{N},n>1\}$. Now, with $y=x$, you have $$ T=\{s+x:s\in S\}=\{(n-1)x+x:n>1\}=\{nx:n>1\}\subset S $$ On the other hand, $S=T\cup\{x\}$ and $x<2x\in T$, so $T$ and $S$ share the least upper bound, if it exists. But this contradicts (a), because if $b$ is the least upper bound, then $b=b+x$, so $x=0$.
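As a quick sanity check of (a) in the familiar ordered field $\mathbb{R}$ (an illustration only, since the problem concerns an arbitrary ordered field): take $S=\{1-\frac{1}{n} \space | \space n \in \mathbb{N}\}$ and $y=2$. Then $\sup S=1$, $T=\{3-\frac{1}{n} \space | \space n \in \mathbb{N}\}$ and $$\sup T = 3 = \sup S + y$$ as (a) predicts. For (b), note that in $\mathbb{R}$ the set $\{nx \space | \space n \in \mathbb{N}\}$ with $x>0$ has no upper bound at all, by the Archimedean property; the content of (b) is that even in a non-Archimedean ordered field, where $S$ may be bounded above, it still cannot have a least upper bound.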
{"set_name": "stack_exchange", "score": 1, "question_id": 4498228}
TITLE: Are uniformly equivalent metrics with the same bounded sets strongly equivalent? QUESTION [0 upvotes]: Let $d_1$ and $d_2$ be two metrics on the same set $X$. Then $d_1$ and $d_2$ are uniformly equivalent if the identity maps $i:(X,d_1)\rightarrow(X,d_2)$ and $i^{-1}:(X,d_2)\rightarrow(X,d_1)$ are uniformly continuous. And $d_1$ and $d_2$ are strongly equivalent if there exist constants $\alpha,\beta>0$ such that $\alpha d_1(x,y)\leq d_2(x,y)\leq\beta d_1(x,y)$ for all $x,y\in X$. Now if $d_1$ and $d_2$ are strongly equivalent, then they are uniformly equivalent and they have the same bounded sets. My question is, is the converse true? That is, if $d_1$ and $d_2$ are uniformly equivalent and have the same bounded sets, then are they strongly equivalent? If not, is there an example of metrics which are uniformly equivalent and have the same bounded sets but are not strongly equivalent? REPLY [3 votes]: Try $[0,1]$ with $d_1(x,y) = |x-y|$ and $d_2(x,y) = |x^2 - y^2|$.
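To spell out why the suggested example works (details the answer leaves to the reader): on $X=[0,1]$, $$d_2(x,y)=|x^2-y^2|=|x-y|\,|x+y|\leq 2\,d_1(x,y)$$ so $i$ is Lipschitz, hence uniformly continuous. Conversely, since $|x-y|\leq x+y$ for $x,y\geq 0$, $$d_1(x,y)^2=|x-y|^2\leq|x-y|\,|x+y|=d_2(x,y)$$ so $d_1\leq\sqrt{d_2}$ and $i^{-1}$ is uniformly continuous as well. Both metrics are bounded on $[0,1]$, so every subset is bounded in each, and the bounded sets trivially coincide. But strong equivalence fails: $$\frac{d_2(x,0)}{d_1(x,0)}=x\to 0 \quad\text{as } x\to 0^{+}$$ so there is no constant $\alpha>0$ with $\alpha d_1\leq d_2$ on all of $[0,1]$.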
{"set_name": "stack_exchange", "score": 0, "question_id": 3050402}
\begin{document} \begin{abstract} Let $\g$ be a symmetrisable \KM algebra and $V$ an integrable $\g$--module in category $\O$. We show that the monodromy of the (normally ordered) rational Casimir connection on $V$ can be made equivariant \wrt the Weyl group $W$ of $\g$, and therefore defines an action of the braid group $B_W$ of $W$ on $V$. We then prove that this action is uniquely equivalent to the quantum Weyl group action of $B_W$ on a quantum deformation of $V$, that is an integrable, category $\O$--module $\V$ over the quantum group $\Uhg$ such that $\V/\hbar \V$ is isomorphic to $V$. This extends a result of the second author which is valid for $\g$ semisimple. \end{abstract} \Omit{arXiv abstract: Let g be a symmetrisable Kac-Moody algebra and V an integrable g-module in category O. We show that the monodromy of the (normally ordered) rational Casimir connection on V can be made equivariant with respect to the Weyl group W of g, and therefore defines an action of the braid group B_W of W on V. We then prove that this action is uniquely equivalent to the quantum Weyl group action of B_W on a quantum deformation of V, that is an integrable, category O-module V_h over the quantum group U_h(g) such that V_h/hV_h is isomorphic to V. This extends a result of the second author which is valid for g semisimple. } \Omit{ variant: Let $\g$ be a symmetrisable \KM algebra, and $W$ its Weyl group. The monodromy of the normally ordered Casimir connection of $\g$ defines an action of the pure braid group $P_W$ of $W$ on any category $\O$--module $V$. When $V$ is integrable, we show that this monodromy can be made equivariant \wrt $W$, and therefore defines an action of the braid group $B_W$ of $W$ on $V$. We then prove that this action is uniquely equivalent to the quantum Weyl group action of $B_W$ on any quantum deformation of $V$, that is an integrable, category $\O$--module $\V$ over the quantum group $\Uhg$ such that $\V/\hbar \V$ is isomorphic to $V$. 
This extends a result of the second author which is valid for $\g$ semisimple. } \maketitle \setcounter{tocdepth}{1} \tableofcontents \newpage \newcommand {\wick}[1]{:\negthinspace\negthinspace #1\negthinspace\negthinspace:} \newcommand {\Oh}{\mathcal O_\hbar} \newcommand {\Ohint}{\O_\hbar\int} \newcommand {\sfh}{\mathsf h} \newcommand {\wtnablak}{\wt{\nabla}_\kappa} \newcommand {\inv}[1]{t_{#1}} \section{Introduction} \subsection{} Let $\g$ be a complex, semisimple Lie algebra, $(\cdot,\cdot)$ an invariant inner product on $\g$, $\h\subset\g$ a Cartan subalgebra, and $\sfR\subset\h^*$ the corresponding root system. Set $\hreg=\h\setminus\bigcup_{\alpha\in\sfR}\Ker (\alpha)$, and let $V$ be a \fd representation of $\g$. The Casimir connection of $\g$ is the flat connection on the holomorphically trivial vector bundle $\IV$ over $\hreg$ with fibre $V$ given by \begin{equation}\label{eq:nablak} \nablak=d- \half{\sfh}\negthickspace\sum_{\alpha\in\sfR_+}\frac{d\alpha}{\alpha}\cdot\calkalpha \end{equation} Here, $\sfh$ is a complex deformation parameter, $\sfR_+\subset\sfR$ a chosen system of positive roots,\footnote{$\nablak$ is in fact independent of the choice of $\sfR_+$} and $\calkalpha\in U\g$ the truncated Casimir operator of the three--dimensional subalgebra $\sl{2}^\alpha\subset\g$ corresponding to $\alpha$ given by \[\calkalpha=x_\alpha x_{-\alpha}+x_{-\alpha} x_\alpha\] where $x_{\pm\alpha}\in\g_{\pm\alpha}$ are root vectors such that $(x_\alpha, x_{-\alpha})=1$ \cite{vtl-2,MTL,DC,FMTV}. Although the Weyl group $W$ of $\g$ does not act on $V$ in general, the action of the Tits extension of $W$ on $V$ can be used to twist the vector bundle $(\IV, \nablak)$ so that it becomes a $W$--equivariant, flat vector bundle $(\wt{\IV}, \wtnablak)$. Its monodromy gives rise to a one--parameter family of actions $\mu_\sfh$ of the braid group $B_W=\pi_1(\hreg/W)$ on $V$ \cite{vtl-2,MTL}. 
\subsection{} A theorem of the second author asserts that the monodromy of $\wtnablak$ is described by the quantum group $\Uhg$, with deformation parameter given by $\hbar=\pi\iota\sfh$ \cite{vtl-4,vtl-6}. Specifically, if $\V$ is a quantum deformation of $V$, that is a $\Uhg$--module which is topologically free over $\IC\fml$ and such that $\V/\hbar\V\cong V$ as $U\g$--modules, the action of $B_W$ on $V\fml$ given by the formal Taylor series of $\mu_\sfh$ at $\sfh=0$ is equivalent to that on $\V$ given by the quantum Weyl group operators of $\Uhg$. \subsection{} The goal of the present paper is to extend the description of the monodromy of the Casimir connection in terms of quantum groups to the case of an arbitrary symmetrisable \KM algebra $\g$. This extension requires several new ideas, which we discuss below. When the root system of $\g$ is infinite, the sum in \eqref{eq:nablak} does not converge. This is easily overcome, however, by replacing each Casimir operator by its normally ordered version \[\wick{\calkalpha}= 2x_{-\alpha}^{(i)}x_\alpha^{(i)}\] where $\{x_{\pm\alpha}^{(i)}\}_{i=1}^{\dim\g_\alpha}$ are dual bases of the root spaces $\g_{\pm\alpha}$, $\alpha$ is assumed positive, and summation over the repeated index $i$ is implicit. Although still infinite, the sum in \begin{equation}\label{eq:nablank} \wick{\nablak}= d-\sfh\negthickspace\sum_{\alpha\in\sfR_+}\frac{d\alpha}{\alpha}\cdot x_{-\alpha}^{(i)}x_\alpha^{(i)} \end{equation} is now locally finite, provided the representation $V$ lies in category $\O$. Moreover, the connection $\wick{\nablak}$ is flat \cite{FMTV} (we give an alternative proof of this, along the lines of its \fd counterpart, in Section \ref{s:Casimir}). \subsection{} Although it restores convergence, the normal ordering in \eqref{eq:nablank} breaks the $W$--equivariance of the connection $\wick{\nablak}$. 
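In rank one, the discrepancy introduced by the normal ordering is easy to make explicit. If $\alpha$ is a real root, so that $\dim\g_{\pm\alpha}=1$, and $x_{\pm\alpha}\in\g_{\pm\alpha}$ are normalised by $(x_\alpha,x_{-\alpha})=1$, then $[x_\alpha,x_{-\alpha}]=\nu^{-1}(\alpha)$, where $\nu:\h\to\h^*$ is the identification determined by $(\cdot,\cdot)$, and therefore \[\calkalpha-\wick{\calkalpha}= x_\alpha x_{-\alpha}-x_{-\alpha}x_\alpha=\nu^{-1}(\alpha)\] so that the normal ordering modifies each such summand of \eqref{eq:nablak} by a Cartan--valued term only.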
The lack of equivariance of its monodromy is measured by a 1--cocycle $\A=\{\A_w\}$ on $W$, given by the monodromy of the abelian connection $d-a_w$, where \[a_w=w_*\wick{\nablak}-\wick{\nablak} =-\sfh\negthickspace\negthickspace\negthickspace\sum_{\alpha\in\sfR_+\cap w\sfR_-} \frac{d\alpha}{\alpha}\cdot\nu^{-1}(\alpha)\] and $\nu:\h\to\h^*$ is the identification given by the bilinear form $(\cdot, \cdot)$. To rectify this, we prove in Section \ref{s:Casimir} that $\A$ is the coboundary of an explicit abelian cochain $\B$. As a consequence, the monodromy of $\wick{\nablak}$ multiplied by $\B$ gives rise to a 1--parameter family of actions of $B_W$ on any integrable, category $\O$--module $V$. When $\g$ is finite--dimensional, $\B$ is (essentially) the monodromy of the abelian connection \[d- \half{\sfh}\sum_{\alpha\in\sfR_+}\frac{d\alpha}{\alpha}\cdot\left(\calkalpha-\wick{\calkalpha}\right) = d-\half{\sfh}\sum_{\alpha\in\sfR_+}\frac{d\alpha}{\alpha}\nu^{-1}(\alpha)\] In Appendix \ref{s:coda}, which may be of independent interest, we show that the latter expression can effectively be resummed when $\g$ is affine, thus giving an alternative construction of the cochain $\B$ in this case. Our construction is based on the well--known resummation of the series $\sum_{n\geq 0}(z+n)^{-1}$ via the logarithmic derivative $\Psi$ of the Gamma function, through its expansion \[\Psi(z)=-\gamma-\frac{1}{z}+\sum_{n\geq 1}\left(\frac{1}{n}-\frac{1}{z+n}\right)\] where $\gamma$ is the Euler--Mascheroni constant. \subsection{} The main result of this paper is that the ($W$--equivariant) monodromy of $\wick{\nablak}$ is described by the quantum Weyl group operators acting on integrable, category $\O$--representations of the quantum group $\Uhg$.\bluecomment{state this as a highlighted theorem.} Our strategy is patterned on that of \cite{vtl-4,vtl-6}, and hinges on the notion of braided \qc category. 
Such a category is, informally speaking, a braided tensor category which carries commuting actions of Artin's braid groups $B_n$, and of a given generalised braid group $B_W$, on the tensor powers of its objects. For $\Uhg$, this structure arises on the category $\Ohint$ of integrable, highest weight modules via the $R$--matrices of all Levi subalgebras of $\Uhg$, and its quantum Weyl group operators. For the category $\Oint$ of integrable, highest weight $\g$--modules, we adapt the argument of \cite{vtl-6} to show that such a structure arises from monodromy of the KZ equations of all Levi subalgebras of $\g$ and that of its Casimir connection. The description of the monodromy of the Casimir connection in terms of quantum Weyl group operators is then deduced by proving that $\Ohint$ and $\Oint$ are equivalent as \qc categories. \subsection{} Such a statement naturally presupposes that $\Ohint$ and $\Oint$ are equivalent as abelian categories. When $\g$ is finite--dimensional, this follows from the fact that $U\g\fml$ and $\Uhg$ are isomorphic as algebras. This is no longer true for a general $\g$, but an equivalence of categories can be obtained via the \nEK quantisation functor \cite{ek-1,ek-2}, which realises $\Uhg$ as (a subalgebra of) the endomorphisms of a suitable fiber functor on $\O$. The EK equivalence, however, is not compatible with the inclusion of Levi subalgebras, something required by the yoga of \qc categories. In \cite{ATL1}, we modified this equivalence by constructing a relative version of the \nEK functor which takes as input an inclusion of Lie bialgebras. The main result of \cite{ATL1} is that the \qc structure on $\Oh$ can be transferred to one on $\O$. \subsection{} We proved in \cite{ATL2} that the corresponding \qc structure on $\O$ is extremely rigid, namely that it is unique up to a unique twist. 
Thus, the stated equivalence follows once it is proved that the monodromy of the KZ equations for all Levi subalgebras of $\g$ and that of the Casimir equations arise from a braided \qc structure on $\O$. This is proved in \cite{vtl-6} for a finite--dimensional Lie algebra $\g$, but the proof carries over to an arbitrary $\g$, provided one considers both the KZ and Casimir equations with values in a double holonomy algebra which contains the Lie algebra of the pure Artin braid groups and of the generalised pure braid group $P_W$ corresponding to $W$. The proof is then completed by noticing that this holonomy algebra naturally maps to the universal PROPic algebra introduced in \cite{ATL2}, which contains the data underlying the \qc structure of the transported $\Ohint$. This is similar to the fact that the Lie algebra of the pure braid groups $P_n$ maps to the Hochschild complex of the universal enveloping algebra $U\g$. \Omit{ Would be useful to reiterate that we obtain far strongert rigidity results than in my own work, even for g finite-dimensional Should we mention in the main text of the introduction (as opposed to the Outline that DCP technology goes into proving that the Casimir connection arises from a qC algebra?) Probably yes. Might be in fact be better to have a longer paragraph saying that the differential braided qC structure arises from a) showing that KZ arises from a braided structure (obvious/Drinfeld) b) Casimir from qC structure (DCP tech) and c) welding/fusion operator.} \subsection{Outline of the paper} In Section \ref{s:Casimir}, we review the definition of the (normally ordered) Casimir connection of a symmetrisable \KM algebra $\g$, give an alternative proof of its flatness, and prove that its monodromy can be modified by an abelian cochain so as to define representations of the braid group $B_W$ of the Weyl group $W$ of $\g$. 
In Section \ref{s:monodromy}, we show that the (appropriately modified) monodromy of the Casimir connection arises from a \qc structure on integrable, category $\O$ representations of $\g$. This follows by a straightforward adaptation of the \DCP construction of fundamental solutions of holonomy equations to the case of an infinite hyperplane arrangement. In Section \ref{s:diff-braid-qC}, we review the definition of braided \qc category, and introduce a double holonomy algebra containing the coefficients of the universal version of both the KZ and Casimir connections. In Section \ref{s:difftwist-fusion}, we adapt the construction of the fusion operator of \cite{vtl-6} so that it takes values in our double holonomy algebra, and show that it gives rise to a braided \qc structure on $\Oint$ which recovers the monodromy of the KZ and Casimir equations. In Section \ref{se:Uhg}, we describe the transfer of braided \qc structure from $\Ohint$ to $\O$, following \cite{ATL1} and \cite{ATL2}. Section \ref{se:main} contains our main result about the equivalence of braided \qc categories between $\Ohint$ and $\O$. Appendix \ref{s:props} reviews the basic definitions of PROPs, Lie bialgebras and Drinfeld--Yetter modules. Finally, in Appendix \ref{s:coda} we show that, in the case of an affine \KM algebra $\g$, the normally ordered Casimir connection can be modified by adding an explicit closed, $\h$--valued one-form such that the sum of the two is equivariant under $W$. \Omit{This form is a resummation of the formal sum $\sum_{\alpha>0}d\log\alpha\, h_\alpha$ and its construction is, in spirit at least, similar to Kac's construction of the half--sum of the positive roots of a \KM algebra.} \subsection{Acknowledgments} It is a pleasure, as ever, to thank Pavel Etingof for his insightful comments and his interest in the present work. 
\Omit{ Section 1: The Casimir connection of a Kac-Moody algebra - Definition (oriented, truncated Casimir, fiber= rep in cat O int) - Flatness (reduction to Kohno's lemma) - van der Lek theorem - Goal: use the Casimir connection to obtain monodromy representations of B_W - Twisting the bundle so that it becomes W-equivariant (up and down construction) - Twisting the (monodromy of the) connection so that it becomes W-equivariant (cocycle, reduction to rank 2) - Thm1: There is a monodromic action of B_g on the representations in cat O int } \section{The Casimir connection of a symmetrisable Kac--Moody algebra}\label{s:Casimir} \subsection{The Casimir connection} Let $\g$ be a symmetrisable \KM algebra with Cartan subalgebra $\h\subset \g$, $\sfR=\{\alpha\}\subset\h^*$ the root system of $\g$ and set\optfootnote {or, more generally, a central extension of a \gKM algebra.} $$\hreg=\h\setminus\bigcup_{\alpha\in\sfR}\Ker(\alpha)$$ Let $\sfR_+\subset\sfR$ be the set of positive roots of $\g$. For any $\alpha \in\sfR_+$, let $\g_{\pm\alpha}\subset\g$ be the root subspaces corresponding to $\pm\alpha$ and let $\{e_{\pm\alpha}^{(i)}\}_{i=1}^{\dim\g_\alpha}$ be bases of $\g_{\pm\alpha}$ which are dual to each other \wrt the invariant inner product $(\cdot,\cdot)$ on $\g$. Set $$\Ku{\alpha}{+}=2\sum_{i=1}^{\dim\g_\alpha}e_{-\alpha}^{(i)}e_\alpha^{(i)}$$ Let $V$ be a $\g$--module lying in category $\O$\optfootnote{it would be better to replace this by the assumption that $V$ is a smooth module in fact, then one could consider the Casimir connection of $\widehat{\g}$ with values in ${\mathbb I}_\chi$... However if one wants to bring in the Weyl group, it will also be necessary to assume integrability which should restrict one to category $\O$} and let $\IV=V\times\hreg$ be the holomorphically trivial vector bundle over $\hreg$ with fibre $V$. Finally, let $\nablah\in\IC$ be a complex parameter. 
\begin{definition} The Casimir connection of $\g$ is the connection on $\IV$ given by \begin{equation}\label{eq:Casimir} \nablak= d-\frac{\nablah}{2}\sum_{\alpha\in\sfR_+}\frac{d\alpha}{\alpha}\cdot\Ku{\alpha}{+} \end{equation} \end{definition} \noindent The Casimir connection for a semisimple Lie algebra was introduced, and shown to be flat by De Concini \cite{DC} and, independently by Millson--Toledano Laredo \cite{vtl-2,MTL} and Felder--Markov--Tarasov--Varchenko \cite{FMTV}. In \cite{FMTV}, the case of an arbitrary symmetrisable \KM algebra was considered. We give an alternative proof of flatness in this more general case, along the lines of \cite{vtl-2,MTL} in Section \ref{ss:flatness}. \subsection{Local finiteness} Note that the sum in \eqref{eq:Casimir} is locally finite even if $\sfR$ is infinite since, for any $v\in V$, $\Ku{\alpha}{+} v=0$ for all but finitely many $\alpha\in\sfR_+$. Differently said, let $\Pi=\{\alpha_i\}\subset \sfR_+$ be the set of simple roots of $\g$, $\deg:\Pi\rightarrow\IZ_+$ a function such that $\deg^{-1}(n)$ is finite for any $n\in\IZ_+$, and extend $\deg$ to a $\IZ_+$--valued function on $\sfR_+$ by linearity. Let $\lambda_1,\ldots,\lambda_p\in\h^*$ be such that the set of weights of $V$ is contained in the finite union $\bigcup_{i=1}^p D(\lambda_i)$ where $D(\lambda_i)=\{\mu\in\h^*|\mu\leq\lambda_i\}$. Set, for any $n\in\IZ_+$, $$V^n= \bigoplus_{\substack{ \mu\in\h^*:\\[.3 ex] \deg(\lambda_i-\mu)\leq n,\\[.3 ex] \forall i:\medspace \mu\in D(\lambda_i)}} V[\mu]$$ where $V[\mu]$ is the weight space of $V$ of weight $\mu$. Then, $\displaystyle{V=\lim_{\longrightarrow}V^n}$, each $V^n$ is invariant under the operators $\Ku{\alpha}{+}$ and $\Ku{\alpha}{+}$ acts as zero on $V^n$ if $\deg(\alpha)>n$. 
Thus, if $\IV^n=V^n\times\hreg$ is the trivial vector bundle over $\hreg$ with fibre $V^n$, then $\displaystyle{\IV=\lim_{\longrightarrow}\IV^n}$ and $\displaystyle {\nablak=\lim_{\longrightarrow}\nablak^n}$ where \begin{equation}\label{eq:nablak n} \nablak^n= d-\frac{\nablah}{2}\sum_{\alpha\in\sfR_+^n} \frac{d\alpha}{\alpha}\cdot\Ku{\alpha}{+} \end{equation} and $\sfR_+^n$ is the finite set \begin{equation}\label{eq:Phi n} \sfR_+^n=\{\alpha\in\sfR_+|\thinspace\deg(\alpha)\leq n\} \end{equation} Note also that the pair $(\IV^n,\nablak^n)$ descends to a (trivial) vector bundle with connection on the complement $\h^n\reg$ of the hyperplanes $\Ker(\alpha)$, $\alpha\in\sfR_+^n$, in the \fd vector space \begin{equation}\label{eq:h n} \h^n=\h/(\sfR_+^n)^\perp \end{equation} Finally note that, due to the existence of proportional roots, the forms $\frac{d\alpha}{\alpha}$ need not be pairwise distinct. For example, all positive imaginary roots $m\delta$, $m\in\IZ_+$, of an affine \KM algebra give rise to the same one--form $d\delta/ \delta$. \subsection{Flatness}\label{ss:flatness} \begin{theorem} The connection $\nablak$ is flat for any $\nablah\in\IC$. \end{theorem} \begin{proof} It suffices to prove that the connection $\nablak^n$ defined by \eqref{eq:nablak n} is flat for any $n$. 
Since $\nablak^n$ is pulled back from the \fd vector space $\h^n$ \eqref{eq:h n}, Kohno's lemma \cite{Kh} implies that the flatness of $\nablak^n$ is equivalent to proving that, for any two--dimensional subspace $U\subset\h^*$ spanned by a subset of $\sfR_+^n$, the following holds on $V^n$ for any $\alpha\in U\cap\sfR_+^n$ $$[\Ku{\alpha}{+},\sum_{\beta\in U\cap\sfR_+^n}\Ku{\beta}{+}]=0$$ Since $\Ku{\beta}{+}$ acts as $0$ on $V^n$ if $\deg(\beta)>n$, this amounts to proving that, on $V^n$, $$[\Ku{\alpha}{+},\sum_{\beta\in U\cap\sfR_+}\Ku{\beta}{+}]=0$$ Let $$\g_U=\h\oplus\bigoplus_{\alpha\in U\cap\sfR}\g_\alpha$$ be the subalgebra spanned by $\h$ and the root subspaces corresponding to the elements of $U\cap\sfR$. Then $\g_U$ is a generalized Kac--Moody algebra\optfootnote{I am grateful to V. Kac for a clarification on this point} and, modulo terms in $U\h$, the operator $\sum_{\beta\in U\cap\sfR_+}\Ku{\beta}{+}$ is its Casimir operator. Since any element in $U\h$ commutes with $\Ku{\alpha}{+}$, the above commutator is therefore zero. \optfootnote{what if $\h$ is infinite--dimensional?} \end{proof} \Omit{ \remark For any $\chi\in\hreg$ and $t\in\h$, let $T_t(\chi)\in\End(V)$ be defined by \begin{equation}\label{eq:Hamiltonian} T_t(\chi)= \sum_{\alpha\in\sfR_+}\frac{\alpha(t)}{\alpha(\chi)}\Ku{\alpha}{+} \end{equation} The flatness of $\nablak$ readily implies, and is in fact equivalent to the fact that, for any fixed $\chi$ and any $t,t'\in\h$, one has $$[T_t(\chi),T_{t'}(\chi)]=0$$ } \Omit{ The above operators were written down by Felder \etal in \cite {FMTV} but not shown, nor in fact claimed to commute there. The proof given in \cite{FMTV} in the case of a \fd Lie algebra $\g$ does not immediately carry over verbatim to the general case since due to the potential presence of proportional roots, the commutator $[T_t(\chi),T_{t'}(\chi)]$, when expressed as in equation \cite{FMTV}[eq. 
(5), page 143] possesses poles of order 2 in $\chi$ so that the residue argument of \cite{FMTV} is not sufficient. Moreover, the extension from a sum over positive roots to a sum over all roots made at the beginning of the proof of theorem 2.1 in \cite{FMTV} leads to infinities. } \Omit{ \subsection{Examples} \subsubsection{Affine Kac--Moody algebras} Altering our notation slightly, assume now that $\wh{\g}$ is the affine \KM algebra corresponding to a \fd semi--simple Lie algebra $\g$. If $t\in\h\subset\wh{\h}=\h\oplus\IC c\oplus\IC d$, the Hamiltonian given by \eqref{eq:Hamiltonian} reads $$T_t(\chi)= \sum_{\alpha\in\wh{\sfR}_{+}\re} \frac{\alpha(t)}{\alpha(\chi)}\Ku{\alpha}{+}$$ where the sum is now restricted to the set $\wh{\sfR}_{+}\re=\sfR_+\sqcup\sfR\times\{\IZ_+\cdot\delta\}$ of positive real roots of $\wh{\g}$ since $t$ is orthogonal to the imaginary roots $m\delta$, $m\in\IZ_+$. Unlike the terms of the original sum, those in the sum above are not singular on the hyperplane $\delta(\chi)=0$. Thus, one may take $\chi\in\hreg\subset\h$ and get, for varying $t\in \h$ the commuting Hamiltonians \begin{equation*} \begin{split} T_t(\chi) &= \sum_{\alpha\in\wh{\sfR}_{+}\re} \frac{\alpha(t)}{\alpha(\chi)}\Ku{\alpha}{+}\\ &= \sum_{\alpha\in\sfR_+}\frac{\alpha(t)}{\alpha(\chi)} \frac{(\alpha,\alpha)}{2}\left( f_\alpha e_\alpha+ \sum_{n>0}f_\alpha(-n)e_\alpha(n)+e_\alpha(-n)f_\alpha(n) \right) \end{split} \end{equation*} where $e_\alpha\in\g_\alpha,f_\alpha\in\g_{-\alpha}$ are such that $[e_\alpha,f_\alpha]=h_\alpha$ so that $(e_\alpha,f_\alpha)=2/(\alpha,\alpha)$. The above operators were written down by Feigin and Frenkel in \cite{FF}, and shown to commute there, in analogy with those written down in \cite{vtl-2,MTL}. } \subsection{Fundamental group of root system arrangements} Let $\h\ess=\h/\z(\g)$ be the \emph{essential} Cartan. 
Let $\C_{\IR}$ be the fundamental Weyl chamber in $\h\ess_{\IR}$, and let \[ {\sf Y}_{\IR}=\bigcup_{w\in W}w(\ol{\C}_{\IR}) \] be the real Tits cone. Set ${\sf Y}=\mathring{{\sf Y}}_{\IR}+i\h\ess_{\IR}$. The Weyl group $W$ acts properly discontinuously on ${\sf Y}$ \cite{Lo,V1}. The regular points of this action are the points of \[ \sfX={\sf Y}\setminus\bigcup_{\alpha\in\sfR_+}\Ker(\alpha) \] The action on $\sfX$ is now proper and free, and the space $\sfX/W$ is a complex manifold. Let $p:\sfX\to\sfX/W$ be the canonical projection. Fix a point $x_0\in\imath\C$, and set $x=p(x_0)$ as a base point in $\sfX/W$. For $i=1,\dots, n$, define the curve $\gamma_i:x_0\to s_i(x_0)$ in $\sfX$ by \[ \gamma_i(t)=\nu(t)+(1-t)x_0+t\,s_i(x_0) \] where $\nu:[0,1]\to\h$, with $\nu(0)=0=\nu(1)$. Finally, set $S_i=p\circ\gamma_i\in\pi_1(\sfX/W,x)$. The following result is due to van der Lek \cite{vdL}, and generalises Brieskorn's Theorem \cite{Br} to the case of an arbitrary Weyl group. \begin{theorem} The fundamental group of $\sfX/W$ is the Artin braid group $B_W$. More precisely, $\pi_1(\sfX/W)$ is presented on the generators $S_1,\dots, S_l$, with relations for any $i,j\in I$ with $m_{ij}<+\infty$ given by \begin{equation} \underbrace{S_iS_jS_i\cdots}_{m_{ij}}=\underbrace{S_jS_iS_j\cdots}_{m_{ij}} \end{equation} \end{theorem} \subsection{Twisting of $(\IV,\nablak)$} Let $W\subset GL(\h)$ be the Weyl group of $\g$. It is well known that $W$ does not act on $V$ in general, but that the triple exponentials \begin{equation}\label{eq:triple} \wt{s}_i =\exp(e_{\alpha_i})\exp(-f_{\alpha_i})\exp(e_{\alpha_i}) \end{equation} corresponding to the simple roots $\alpha_1,\ldots,\alpha_r$ in $\sfR_+$ and a choice of $\sl{2}$--triples $e_{\alpha_i},f_{\alpha_i},h_{\alpha_i}\in\sl{2}^{\alpha_i}$ give rise to an action of an extension $\wt{W}$ of $W$ by the sign group $\IZ_2^r$ \cite{Ti}. 
The flat vector bundle $(\IV,\nablak)$ is equivariant under $\wt{W}$, and may be twisted into a $W$--equivariant, flat vector bundle $(\wt{\IV},\wt{\nabla}_{\kappa})$ on $\hreg$ as follows \cite{MTL}. Let $\wt{\h}\reg\stackrel{p}{\rightarrow}\hreg$ be the universal cover of $\hreg$ (and hence of $\hreg/W$). Since the Tits extension $\wt{W}$ is a quotient of the braid group $B_W= \pi_1(\hreg/W)$, the latter acts on the flat vector bundle $p^*(\IV,\nablak)$ on $\wt{\h}\reg$. By definition, $(\wt{\IV}, \wt{\nabla}_{\kappa})$ is the quotient $p^*(\IV,\nablak)/PB_W$, where $PB_W=\pi_1(\hreg)$ is the pure braid group corresponding to $W$, and carries a residual action of $W=B_W/PB_W$. \subsection{Monodromy representations} The monodromy of $(\wt{\IV},\wt{\nabla}_{\kappa})$, which we shall abusively refer to as the monodromy of the Casimir connection, yields a one--parameter family of representations $\mu_V^\nablah$ of the pure braid group $PB_W$ on $V$ which is obtained as follows. Fix a base point $\wt{x}_0\in\wt{\h}\reg$, and let $x_0,[x_0]$ be its images in $\hreg$ and $\hreg/W$ respectively. The braid group $\pi_1(\hreg/W;[x_0])$ acts on fundamental solutions $\Psi:\wt{\h}\reg\to GL(V)$ of $p^*\nablak$ by $b\bullet\Psi(\wt{x})=b\cdot\Psi(\wt{x}\cdot b)$. If $\Psi$ is a given fundamental solution, then $\mu_\Psi^\nablah (b)=\Psi^{-1}\cdot b\bullet\Psi$ is a locally constant function with values in $GL(V)$, and is the required monodromy. \subsection{Equivariant extensions}\label{ss:cocycle} As pointed out, the Casimir connection $\nablak$ is not $W$--equivariant. We will prove below, however, that its monodromy can be corrected so as to become $W$--equivariant. Let $x_0$ be a fixed basepoint in the fundamental chamber $\C$ in ${\sf X}$. 
For each $i\in I$, let $\gamma_i$ be a fixed elementary path from $x_0$ to $s_i(x_0)$ around the wall $\alpha_i=0$.\bluecomment{should say above the wall} For any $i\in I,w\in W$, define the path $\gamma_{w,i}$ by \[ \gamma_{w,i}=w(\gamma_i): w(x_0)\to (ws_i)(x_0) \] Let $\sfP(\sfX)$ be the groupoid with objects $\{w(x_0)\}_{w\in W}$. The morphisms are defined as follows. For any object $w_1(x_0)$ of $\sfP(\sfX)$, the morphisms $\sfP(\sfX)(x_0,w_1(x_0))$ are given by all paths from $x_0$ to $w_1(x_0)$ obtained by composition of the elements $\gamma_{w,i}$. We then define \[ \sfP(\sfX)(w_2(x_0), w_1(x_0))=w_2\sfP(\sfX)(x_0,w_2^{-1}w_1(x_0)) \] Finally, let $\sfH(\sfX)$ be the groupoid with the same objects as $\sfP(\sfX)$ and morphisms \[ \sfH(\sfX)(x,y)=\sfP(\sfX)(x,y)/\sim \] where $\gamma\sim\gamma'$ if $\gamma,\gamma'$ are homotopic in $\sfX$. \begin{theorem} The monodromy representation $\mu$ extends to a representation of $B_{W}\simeq\pi_1({\sf X}/W)$. \bluecomment{Not true that $\mu$ extends actually.} In particular, there exists a morphism $\pi_1({\sf X}/W)\to GL(V)$ such that \[ \xymatrix{ \pi_1({\sf X}) \ar[r]^{\mu}\ar[d] & GL(V)\\ \pi_1({\sf X}/W)\ar[ur] & } \] is commutative. \end{theorem} \begin{proof} The monodromy of the Casimir connection $\nabla$ provides a homomorphism \[ \mu:\sfH(\sfX)\to GL(V) \] Let $\mu_w$ denote the monodromy representation of the connection $w^*\nabla$. Let \[ \alpha_w:\sfH(\sfX)\to GL(V) \] be the map $\alpha_w=\mu\mu_w^{-1}$, which measures the lack of $W$--equivariance of $\nabla$ and describes the monodromy of the abelian connection \[ \nabla^{\h}_w=d-\sum_{\substack{\alpha>0\\ w\alpha<0}}\inv{\alpha}d\log(\alpha) \] where $t_{\alpha}=\nu^{-1}(\alpha)$. 
In order to define a morphism $\pi_1(\sfX/W)\to GL(V)$ satisfying the required property, it is enough to construct a morphism $\beta:\sfH(\sfX)\to GL(V)$ such that \begin{equation}\label{eq:monococycle} \alpha_w=\beta_w\beta^{-1} \end{equation} where $\beta_w(\gamma)=w^{-1}(\beta(w(\gamma)))$.\\ The choice of elements $\{\beta(\gamma_i)\}_{i\in I}$ determines automatically a morphism $\beta: \sfP(\sfX)\to GL(V)$. The result then follows from \begin{lemma}\label{lem:beta} There exists a choice of the elements $\{\beta(\gamma_i)\}_{i\in I}$ such that $\beta: \sfP(\sfX)\to GL(V)$ preserves the homotopy relations in $\sfP(\sfX)$ and descends to a morphism $\beta: \sfH(\sfX)\to GL(V)$. In particular, the choice (for any $a,b\in\IC$) \begin{equation}\label{eq:betachoice} \beta(\gamma_i)=e^{ah_i+bh_i^2} \end{equation} is allowed. \end{lemma} \end{proof} \subsubsection{Proof of Lemma \ref{lem:beta}} For any reduced expression $\ul{s}=({i_1},\cdots, {i_n})$ of $w\in W$, there is a canonical element $\gamma_{\ul{s}}\in\sfP(\sfX)$ from $x_0$ to $w(x_0)$ \[ \gamma_{\ul{s}}=\prod^{\leftarrow}_{k=1,\dots, n} s_{i_1}\cdots s_{i_{k-1}}(\gamma_{i_{k}}) \] and \begin{align*} \beta(\gamma_{\ul{s}})=&\prod_{k=1}^n\beta(s_{i_1}\cdots s_{i_{k-1}}(\gamma_{i_k}))=\\=& \left(\prod_{k=1}^ns_{i_1}\cdots s_{i_{k-1}}(\alpha_{s_{i_1}\cdots s_{i_{k-1}}})(\gamma_{i_k})\right)\left(\prod_{k=1}^ns_{i_1}\cdots s_{i_{k-1}}(\beta(\gamma_{i_k}))\right) \end{align*} We have to show that the map $\beta$ defined by the choice \eqref{eq:betachoice} preserves the homotopy relations in $\sfP(\sfX)$. Following \cite{vdL}, it is enough to prove that $\beta(\gamma_{\ul{s}})$ is independent of the reduced expression $\ul{s}$. It is easy to see that, if we choose $\beta(\gamma_i)$ as in \eqref{eq:betachoice}, \[ \prod_{k=1}^ns_{i_1}\cdots s_{i_{k-1}}(\beta(\gamma_{i_k}))=\prod_{\substack{\alpha>0\\ w^{-1}\alpha<0}}e^{ah_{\alpha}+bh_{\alpha}^2} \] and it is therefore independent of the reduced expression of $w$.
Set $w_{k-1}=s_{i_1}\cdots s_{i_{k-1}}$. Then \[ w_{k-1}(\alpha_{w_{k-1}}(\gamma_{i_k}))=\prod_{\substack{\alpha>0\\ w_{k-1}(\alpha)<0}}\alpha(x_0)^{-\inv{w_{k-1}(\alpha)}}\cdot s_{i_k}(\alpha)(x_0)^{\inv{w_{k-1}(\alpha)}} \] Set $I_k=\{\alpha>0\;|\; w_k(\alpha)<0\}$. It follows that \footnote{ It is enough to observe that $I_k\setminus s_{i_k}I_{k-1}=\{\alpha_{i_k}\}$. } \begin{align*} \prod_{k=1}^nw_{k-1}(\alpha_{w_{k-1}}(\gamma_{i_k}))=&\prod_{k=1}^{n-1}\alpha_{i_k}(x_0)^{\inv{-w_k(\alpha_{i_k})}}\cdot\prod_{\alpha\in I_{n-1}}s_{i_n}(\alpha)(x_0)^{-\inv{-w_{n-1}(\alpha)}}=\\ =&\prod_{k=1}^n\alpha_{i_k}(x_0)^{\inv{-w_k(\alpha_{i_k})}}\cdot\prod_{\substack{\alpha>0\\ w\alpha<0}}\alpha(x_0)^{-\inv{-w(\alpha)}} \end{align*} Therefore it remains to show that \[ A_{\ul{s}}=\prod_{k=1}^n\alpha_{i_k}(x_0)^{\inv{-w_k(\alpha_{i_k})}} \] is independent of the choice of the reduced expression $\ul{s}$. Let $\mathbf{R}^w$ be the diagram with vertices the reduced expressions of $w$. Two reduced expressions, $\ul{s}=(i_1,\dots, i_n)$ and $\ul{s}'=({j_1},\dots, {j_n})$, are connected by an edge if $\ul{s}$ can be obtained from $\ul{s}'$ by replacing a sequence of $m$ indices $i,j,i,\dots$ with $m$ indices $j,i,j,\dots$, where $m=m_{ij}<\infty$. The diagram $\mathbf{R}^w$ is connected and it is enough to show that $A_{\ul{s}}=A_{\ul{s}'}$ whenever $(\ul{s},\ul{s}')$ is an edge. To see this, it suffices to consider the four cases of rank $2$, with $w=w_{ij}$, \ie the longest elements of the subgroups $W_{ij}$ generated by the simple reflections $s_i,s_j$.
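Since the exponents $\inv{\,\cdot\,}$ occurring in $A_{\ul{s}}$ are residues of an abelian connection and therefore commute, the equality $A_{\ul{s}}=A_{\ul{s}'}$ in each of the four rank $2$ cases below reduces to checking that, for each simple root appearing as a base, the total exponent attached to it is the same in both products. A minimal sketch of this bookkeeping (the encoding of roots as coefficient vectors is ours, purely for illustration):

```python
# For each rank-2 case, check that the two reduced expressions attach the
# same total exponent to each base alpha_1, alpha_2; since the exponents
# commute, this gives A_s = A_s'.  Roots are encoded as pairs (m1, m2),
# meaning m1*alpha1 + m2*alpha2.

def totals(word):
    """Sum the exponent roots attached to each base in a product
    prod_k alpha_{i_k} ^ {exponent_k}."""
    acc = {1: (0, 0), 2: (0, 0)}
    for base, (m1, m2) in word:
        a, b = acc[base]
        acc[base] = (a + m1, b + m2)
    return acc

cases = {
    # (i) A1 x A1: s = (1,2), s' = (2,1)
    "A1xA1": ([(1, (1, 0)), (2, (0, 1))],
              [(2, (0, 1)), (1, (1, 0))]),
    # (ii) A2: s = (1,2,1), s' = (2,1,2)
    "A2": ([(1, (1, 0)), (2, (1, 1)), (1, (0, 1))],
           [(2, (0, 1)), (1, (1, 1)), (2, (1, 0))]),
    # (iii) B2: s = (1,2,1,2), s' = (2,1,2,1)
    "B2": ([(1, (1, 0)), (2, (2, 1)), (1, (1, 1)), (2, (0, 1))],
           [(2, (0, 1)), (1, (1, 1)), (2, (2, 1)), (1, (1, 0))]),
    # (iv) G2: s = (1,2,1,2,1,2), s' = (2,1,2,1,2,1)
    "G2": ([(1, (1, 0)), (2, (3, 1)), (1, (2, 1)),
            (2, (3, 2)), (1, (1, 1)), (2, (0, 1))],
           [(2, (0, 1)), (1, (1, 1)), (2, (3, 2)),
            (1, (2, 1)), (2, (3, 1)), (1, (1, 0))]),
}

for name, (s, s_prime) in cases.items():
    assert totals(s) == totals(s_prime), name
print("A_s = A_s' in all four rank-2 cases")
```

The exponent data above is read off directly from the case-by-case computations for $\sfA_1\times\sfA_1$, $\sfA_2$, $\sfB_2$ and $\sfG_2$.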
\begin{itemize} \item[(i)] For $\sfA_1\times\sfA_1$, $\ul{s}=(1,2)$, $\ul{s}'=(2,1)$, and\footnote{in this paragraph only, $\alpha$ stands for $\alpha(x_0)$ for simplicity.} \[ A_{\ul{s}}=\alpha_1^{\inv{\alpha_1}}\alpha_2^{\inv{\alpha_2}}= \alpha_2^{\inv{\alpha_2}}\alpha_1^{\inv{\alpha_1}}=A_{\ul{s}'} \] \item[(ii)] For $\sfA_2$, $\ul{s}=(1,2,1), \ul{s}'=(2,1,2)$ and \[ A_{\ul{s}}= \alpha_1^{\inv{\alpha_1}} \alpha_2^{\inv{\alpha_1+\alpha_2}} \alpha_1^{\inv{\alpha_2}} = \alpha_2^{\inv{\alpha_2}} \alpha_1^{\inv{\alpha_1+\alpha_2}} \alpha_2^{\inv{\alpha_1}} = A_{\ul{s}'} \] \item[(iii)] For $\sfB_2$, $\ul{s}=(1,2,1,2), \ul{s}'=(2,1,2,1)$ and \[ A_{\ul{s}}= \alpha_1^{\inv{\alpha_1}} \alpha_2^{\inv{2\alpha_1+\alpha_2}} \alpha_1^{\inv{\alpha_1+\alpha_2}} \alpha_2^{\inv{\alpha_2}} = \alpha_2^{\inv{\alpha_2}} \alpha_1^{\inv{\alpha_1+\alpha_2}} \alpha_2^{\inv{2\alpha_1+\alpha_2}} \alpha_1^{\inv{\alpha_1}} = A_{\ul{s}'} \] \item[(iv)] For $\sfG_2$, $\ul{s}=(1,2,1,2,1,2), \ul{s}'=(2,1,2,1,2,1)$ and \begin{align*} A_{\ul{s}}=& \alpha_1^{\inv{\alpha_1}} \alpha_2^{\inv{3\alpha_1+\alpha_2}} \alpha_1^{\inv{2\alpha_1+\alpha_2}} \alpha_2^{\inv{3\alpha_1+2\alpha_2}} \alpha_1^{\inv{\alpha_1+\alpha_2}} \alpha_2^{\inv{\alpha_2}}= \\=& \alpha_2^{\inv{\alpha_2}} \alpha_1^{\inv{\alpha_1+\alpha_2}} \alpha_2^{\inv{3\alpha_1+2\alpha_2}} \alpha_1^{\inv{2\alpha_1+\alpha_2}} \alpha_2^{\inv{3\alpha_1+\alpha_2}} \alpha_1^{\inv{\alpha_1}} = A_{\ul{s}'} \end{align*} \end{itemize} \Omit{ Section 2: A differential quasi-Coxeter structure on Oint - Intro: the goal of this section is to show that the monodromy of the Casimir connection arises from/defines a quasi-Coxeter structure on category Oint. 
- Definition of quasi-Coxeter category - Holonomy algebra (and, I suppose, its map to End(f), f=forgetful on O) - Monodromy à la DCP - Thm2: The monodromy of the (equivariant) Casimir connection defines a quasi-Coxeter structure on Oint } \section{A differential quasi--Coxeter structure on category $\O$}\label{s:monodromy} In this section, we review the definition of \qc category following \cite{ATL1}. We then prove that the monodromy of the Casimir connection defines such a structure on the category $\Oint$ of integrable, highest weight $\g$--modules. This structure is universal, in that it is constructed on the holonomy algebra $\DCPHA{\sfR}$ of the root arrangement in $\h$, and then transferred to $\O$ via a morphism $\DCPHA{\sfR}\to\End(\mathsf f)$, where $\mathsf f$ is the forgetful functor to vector spaces. The differential \qc structure on $\DCPHA{\sfR}$ is obtained by adapting the \DCP construction of the fundamental solution of the holonomy equations \cite{DCP} to the case of an infinite hyperplane arrangement. \subsection{Quasi--Coxeter categories} \subsubsection{Diagrams and nested sets}\label{ss:diagrams} The terminology in \ref{ss:diagrams}--\ref{ss:quotient} is taken from \cite[Part I]{vtl-4} and \cite[Sec. 2]{ATL1}, to which we refer for more details. A {\it diagram} is a nonempty undirected graph $D$ with no multiple edges or loops. We denote the set of vertices of $D$ by $\sfV(D)$. A {\it subdiagram} $B\subseteq D$ is a full subgraph of $D$, that is, a graph consisting of a (possibly empty) subset of vertices of $D$, together with all edges of $D$ joining any two elements of it. Two subdiagrams $B_1,B_2\subseteq D$ are {\it orthogonal} if they have no vertices in common and no two vertices $i\in B_1$, $j\in B_2$ are joined by an edge in $D$. $B_1$ and $B_2$ are {\it compatible} if either one contains the other or they are orthogonal. 
A {\it nested set} on a diagram $D$ is a collection $\H$ of pairwise compatible, connected subdiagrams of $D$ which contains the connected components $D_1,\dots,D_r$ of $D$. Let $\N_D$ be the partially ordered set of nested sets on $D$, ordered by reverse inclusion. $\N_D$ has a unique maximal element ${\bf 1}= \{D_i\}_{i=1}^r$ and its minimal elements are the maximal nested sets. We denote the set of maximal nested sets on $D$ by $\Mns{D}$. Every nested set $\H$ on $D$ is uniquely determined by a collection $\{\H_i\}_{i=1}^r$ of nested sets on the connected components of $D$. We therefore obtain canonical identifications \[\N_D=\prod_{i=1}^r \N_{D_i}\aand \Mns{D}=\prod_{i=1}^r\Mns{D_i}\] \subsubsection{Quotient diagrams}\label{ss:quotient} Let $B\subsetneq D$ be a proper subdiagram with connected components $B_1,\dots, B_m$. \begin{definition} The set of vertices of the {\it quotient diagram} $D/B$ is $\sfV(D)\setminus\sfV(B)$. Two vertices $i\neq j$ of $D/B$ are linked by an edge if and only if the following holds in $D$ \[i\not\perp j\qquad\text{or}\qquad i,j\not\perp B_k\qquad\text{for some }k=1,\dots, m\] \end{definition} For any connected subdiagram $C\subseteq D$ not contained in $B$, we denote by $\ol{C}\subseteq D/B$ the connected subdiagram with vertex set $\sfV(C)\setminus\sfV(B)$. For any $B\subset B'$, we denote by $\Mns{B',B}$ the collection of maximal nested sets on $B'/B$.
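As a concrete illustration (ours, not part of the text), the nested-set combinatorics can be enumerated mechanically for the type $\sfA_3$ diagram with vertices $1,2,3$ and edges $1\text{--}2$, $2\text{--}3$: the connected subdiagrams are the six intervals, and a brute-force search over pairwise compatible families finds five maximal nested sets.

```python
from itertools import combinations

# Diagram A3: vertices 1, 2, 3, edges 1-2 and 2-3.
vertices = {1, 2, 3}
edges = {frozenset({1, 2}), frozenset({2, 3})}

def connected(s):
    """Connectedness test for a subdiagram of a path: its sorted
    vertices must be joined by consecutive edges (general diagrams
    would need a BFS instead)."""
    s = sorted(s)
    return bool(s) and all(frozenset({a, b}) in edges
                           for a, b in zip(s, s[1:]))

def orthogonal(b1, b2):
    # No common vertices and no edge between the two subdiagrams.
    return not (b1 & b2) and not any(
        frozenset({i, j}) in edges for i in b1 for j in b2)

def compatible(b1, b2):
    # Compatible: one contains the other, or they are orthogonal.
    return b1 <= b2 or b2 <= b1 or orthogonal(b1, b2)

subdiagrams = [frozenset(c)
               for r in range(1, 4)
               for c in combinations(vertices, r)
               if connected(c)]

# Nested sets: pairwise compatible families of connected subdiagrams
# containing the (single) connected component D itself.
D = frozenset(vertices)
nested = [set(f) | {D}
          for r in range(len(subdiagrams) + 1)
          for f in combinations(subdiagrams, r)
          if all(compatible(a, b) for a, b in combinations(f, 2))]
nested = [set(t) for t in {frozenset(n) for n in nested}]  # dedupe
maximal = [n for n in nested if not any(n < m for m in nested)]
print(len(maximal))  # the maximal nested sets on A3
```

Each maximal nested set found consists of $D$ together with two compatible proper connected subdiagrams, e.g. $\{D,\{1,2\},\{1\}\}$ or $\{D,\{1\},\{3\}\}$.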
For any $B\subset B' \subset B''$, there is an embedding \[\cup:\Mns{B'',B'}\times\Mns{B',B}\to\Mns{B'',B}\] such that, for any $\F\in\Mns{B'',B'},\G\in\Mns{B',B}$, \[(\F\cup\G)_{B'/B}=\G\] \subsubsection{$D$--categories} Recall \cite[Section 3]{vtl-4} that, given a diagram $D$, a $D$--algebra is a pair $(A,\{A_B\}_{B\subset D})$, where $A$ is an associative algebra and $\{A_B\}_{B\subset D}$ is a collection of subalgebras indexed by subdiagrams of $D$, satisfying \begin{equation*} A_B\subseteq A_{B'}\quad\mbox{ if }B\subseteq B' \qquad\text{and}\qquad [A_B,A_{B'}]=0\quad\mbox{ if }B\perp B' \end{equation*} The following rephrases the notion of $D$--algebras in terms of their category of representations. \begin{definition} A \emph{$D$--category} \[\C=(\{\C_{B}\}, \{F_{BB'}\})\] is the datum of \begin{itemize} \item a collection of $\sfk$--linear \abelian categories $\{\C_{B}\}_{B\subseteq{D}}$ \item for any pair of subdiagrams $B\subseteq B'$, an \exact $\sfk$--linear functor $F_{BB'}:\C_{B'}\to\C_{B}$\footnote{When $B=\emptyset$ we will omit the index $B$.} \item for any $B\subset B'$, $B'\perp B''$, $B',B''\subset B'''$, a homomorphism of $\sfk$--algebras \[ \eta:\sfEnd{F_{BB'}}\to\sfEnd{F_{(B\cup B'')B'''}} \] \end{itemize} satisfying the following properties \begin{itemize} \item For any $B\subseteq D$, $F_{BB}=\id_{\C_B}$. \item For any $B\subseteq B'\subseteq B''$, $F_{BB'}\circ F_{B'B''}=F_{BB''}$.
\item For any $B\subset B'$, $B'\perp B''$, $B',B''\subset B'''$, the following diagram of algebra homomorphisms commutes: \[ \xymatrix@C=.1cm@R=.3cm{ & \sfEnd{F_{BB'}}\ar[rd]^{\id_{F_{B(B\cup B'')}}\ten\eta} \ar[dl]_{\id\ten\id_{F_{B'B'''}}} &\\ \sfEnd{F_{BB'}}\ten\sfEnd{F_{B'B'''}}\ar[dr]_{\circ} & & \sfEnd{F_{B(B\cup B'')}}\ten\sfEnd{F_{(B\cup B'')B'''}} \ar[ld]^{\circ}\\ & \sfEnd{F_{BB'''}} & } \] \end{itemize} \end{definition} \begin{rem} It may seem more natural to replace the equality of functors $F_{BB'}\circ F_{B'B''} =F_{BB''}$ by the existence of invertible natural transformations $\alpha_{BB''}^{B'}: F_{BB'}\circ F_{B'B''}\Rightarrow F_{BB''}$ for any $B\subseteq B'$ satisfying the associativity constraints $\alpha_{BB'''}^{B'}\circ F_{BB'}(\alpha_{B'B'''}^{B''})=\alpha_{BB'''}^{B''}\circ(\alpha_{BB''}^{B'})_{F_{B''B'''}}$ for any $B\subseteq B'\subseteq B'' \subseteq B'''$. A simple coherence argument shows, however, that this leads to a notion of $D$--category which is equivalent to the one given above. \end{rem} \begin{rem} We will usually think of $\C_{\emptyset}$ as a base category and of the functors $F$ as forgetful functors. The family of algebras $\sfEnd{F_B}$ then defines, through the morphisms $\eta$, a structure of $D$--algebra on $\sfEnd{F_D}$. Conversely, every $D$--algebra $A$ admits such a description by setting $\C_B=\Rep A_B$ for $B\neq\emptyset$ and $\C_\emptyset=\vect_k$, $F_{BB'} =i_{B'B}^*$, where $i_{B'B}:A_B\subset A_{B'}$ is the inclusion. \end{rem} \begin{rem} The conditions satisfied by the maps $\eta$ imply that, given $B=\bigsqcup_{j=1}^rB_j$, with $B_j\subset D$ connected and pairwise orthogonal, the images in $\sfEnd{F_B}$ of the maps \[\sfEnd{F_{B_j}}\to\sfEnd{F_{B_j}F_{B_jB}}=\sfEnd{F_B}\] pairwise commute.
This condition rephrases for the endomorphism algebras the $D$--algebra axiom \[[A_{B'},A_{B''}]=0 \qquad\forall\quad B'\perp B''\] which is equivalent to the condition, for any $B\supset B',B''$, \[A_{B'}\subset A^{B''}_B\] \end{rem} \subsubsection{Labelled diagrams and Artin braid groups} \begin{definition} A {\it labelling} of the diagram $D$ is the assignment of an integer $m_{ij}\in \{2,3,\dots, \infty\}$ to any pair $i,j$ of distinct vertices of $D$ such that $m_{ij}=m_{ji}$, and $m_{ij}=2$ if and only if $i\perp j$. \end{definition} Let $D$ be a labelled diagram. \begin{definition} The \emph{Artin braid group} $B_D$ is the group generated by elements $S_i$ labelled by the vertices $i\in D$ with relations \[\underbrace{S_iS_j\cdots}_{m_{ij}}=\underbrace{S_jS_i\cdots}_{m_{ij}}\] for any $i\neq j$ such that $m_{ij}<\infty$. We shall also refer to $B_D$ as the braid group corresponding to $D$. \end{definition} \subsubsection{Quasi--Coxeter categories} \begin{definition} A \emph{quasi--Coxeter category of type $D$} \[\C=(\{\C_B\},\{F_{BB'}\},\{\DCPA{\F}{\G}\}, \{S_i\})\] is the datum of \begin{itemize} \item a $D$--category $\C=(\{\C_B\},\{F_{BB'}\})$ \item for any elementary pair $(\F,\G)$ in $\Mns{B,B'}$, a natural transformation \[\DCPA{\F}{\G}\in\sfAut{F_{BB'}}\] \item for any vertex $i\in \mathsf{V}(D)$, an element \[S_{i}\in\sfAut{F_i}\] \end{itemize} satisfying the following conditions \begin{itemize} \item {\bf Orientation.} For any $\F,\G$, \begin{equation*} \DCPA{\F}{\G}=\DCPA{\G}{\F}^{-1} \end{equation*} \item {\bf Transitivity.} For any $\F,\G,\H$,
\begin{equation*} \DCPA{\F}{\G}\DCPA{\G}{\H}=\DCPA{\F}{\H} \end{equation*} \Omit{ \item {\bf Coherence.} For any elementary sequences $\H_1,\dots,\H_m$ and $\K_1,\dots, \K_l$ in $\Mns{B,B'}$ such that $\H_1=\K_1$ and $\H_m=\K_l$, \begin{equation*} \Phi_{\H_{m-1}\H_{m}}\cdots \Phi_{\H_1\H_2}=\Phi_{\K_{l-1}\K_l}\cdots\Phi_{\K_1\K_2} \end{equation*} } \item {\bf Factorization.} The assignment \[\DCPA{}{}:\Mns{B,B'}^2\to{\sf Aut}(F_{B'B}) \] is compatible with the embedding \[\cup:\Mns{B,B'}\times\Mns{B',B''}\to\Mns{B,B''}\] for any $B''\subset B'\subset B$, \ie the diagram \[ \xymatrix@C=.8in{ \Mns{B,B'}^2\times\Mns{B',B''}^2 \ar[d]_{\cup}\ar[r]^{\DCPA{}{}\times\DCPA{}{}} & \sfAut{F_{B''B'}}\times\sfAut{F_{B'B}} \ar[d]^{\circ}\\ \Mns{B,B''}^2\ar[r]^{\DCPA{}{}} & \sfAut{F_{B''B}} } \] is commutative. \item {\bf Braid relations.} For any pairs $i,j$ of distinct vertices of $B$, such that $2< m_{ij}<\infty$, and elementary pair $(\F,\G)$ in $\Mns{B}$ such that $i\in\F,j\in\G$, the following relations hold in $\sfEnd{F_B}$ \begin{equation*} \sfAd{\DCPA{\G}{\F}}(S_i)\cdot S_j \cdots = S_j\cdot\sfAd{\DCPA{\G}{\F}}(S_i)\cdots \end{equation*} where, by abuse of notation, we denote by $S_i$ its image in $\sfEnd{F_B}$ and the number of factors in each side equals $m_{ij}$. \end{itemize} \end{definition} \Omit{ \begin{rem} It is clear that the factorization property implies the support and forgetful properties as stated in \cite[Def. 3.12]{vtl-4}. 
\begin{itemize} \item {\bf Support.} For any elementary pair $(\F,\G)$ in $\Mns{B,B'}$, let $S=\supp(\F,\G), Z=\z\supp(\F,\G)\subseteq D$ and \[\wt{\F}=\F|^{\supp(\F,\G)}_{\z\supp(\F,\G)}\qquad \wt{\G}=\G|^{\supp(\F,\G)}_{\z\supp(\F,\G)}\] Then \begin{equation*} \DCPA{\F}{\G}=\id_{BZ}\circ\DCPA{\wt{\F}}{\wt{\G}}\circ\id_{B'S} \end{equation*} where the expression above denotes the composition of natural transformations \begin{equation*} \xymatrix{ \C_{B'} \ar@/_15pt/[dddd]_{F_{BB'}}="F" \ar@/^15pt/[dddd]^{F_{BB'}}="G" \ar@{=>}"F";"G"^{\DCPA{\G}{\F}} & &\C_{B'} \ar[d]^{F_{SB'}} \\ &&\C_{S} \ar@/_15pt/[dd]_{F_{ZS}}="tF" \ar@/^15pt/[dd]^{F_{ZS}}="tG" \ar@{=>}"tF";"tG"^{\DCPA{\wt{\F}}{\wt{\G}}} \\ &=&\\ &&\C_Z \ar[d]^{F_{BZ}}\\ \C_{B} &&\C_{B} } \end{equation*} \item {\bf Forgetfulness.} For any equivalent elementary pairs $(\F,\G),(\F',\G')$ in $\Mns{B,B'}$ \begin{equation*} \DCPA{\F}{\G}=\DCPA{\F'}{\G'} \end{equation*} \end{itemize} \end{rem} } \subsection{Extended Kac--Moody algebras} \subsubsection{Motivation} A quasi--Coxeter category has an underlying structure of $D$--category, which generalises the notion of $D$--algebra. A first example is given by semisimple Lie algebras. Let $\g$ be a semisimple Lie algebra with Dynkin diagram $D$ and Chevalley generators $\{e_i,f_i,h_i\}_{i\in D}$. The collection of diagrammatic subalgebras $\{\g_B\}_{B\subset D}$, where $\g_B$ is generated by $\{e_i,f_i,h_i\}_{i\in B}$, defines a $D$--algebra structure on $\g$. This is not always true for symmetrisable Kac--Moody algebras. Indeed, it is easy to show that the Kac--Moody algebra $\g(\sfA)$ corresponding to the symmetric irreducible Cartan matrix \[\sfA=\left[ \begin{array}{rrrr} 2&-1&0&0\\ -1&2&-2&0\\ 0&-2&2&-1\\ 0&0&-1&2 \end{array} \right]\] does not admit any $D_{\g}$--algebra structure \cite{ATL1}. \subsubsection{Extended Kac--Moody algebras} Following the suggestion of P.
Etingof, we give a modified definition of $\g$, along the lines of \cite{FZ}, characterized by a bigger Cartan subalgebra. Let $\GCM{A}=(a_{ij})_{i,j\in I}\in M_n(\IC)$ be a symmetrisable generalised Cartan matrix. \begin{definition} The \emph{extended Kac--Moody algebra} of $\GCM{A}$ is the $\sfk$--algebra $\ol{\g}=\ol{\g}(\GCM{A})$ with generators $e_i,f_i, h_i, \fcw_i$, $i\in I$, and defining relations \begin{itemize} \item $[h_i,h_j]=[\fcw_i,\fcw_j]=[h_i,\fcw_j]=0$ \item $[h_i,e_j]=a_{ij}e_j, [h_i,f_j]=-a_{ij}f_j$ \item $[\fcw_i,e_j]=\delta_{ij}e_j, [\fcw_i,f_j]=-\delta_{ij}f_j$ \item $[e_i,f_j]=\delta_{ij}h_i$ \item $\mathsf{ad}(x_i)^{1-a_{ij}}(x_j)=0$ for $i\neq j$ and $x=e,f$ \end{itemize} \end{definition} \begin{proposition} Let $\GCM{A}$ be a symmetrisable generalized Cartan matrix of rank $l$, $\Dyn{D}$ its Dynkin diagram, $\g$ the corresponding Kac--Moody algebra. Let $\{d_{r}\}_{r=l+1}^{n}$ be the completion of $\{h_i\}_{i=1}^n$ to a basis of $\h\subset\g$ defined by the relations \[ [d_r,d_s]=0=[d_r,h_i]\qquad [d_r,e_i]=\delta_{ir} e_i\qquad [d_r,f_i]=-\delta_{ir}f_i \] \begin{itemize} \item[(i)] $\ol{\g}=\ol{\g}_{\Dyn{D}}$ has a canonical structure of $\Dyn{D}$--algebra, given by the collection of subalgebras $\{\ol{\g}_{\Dyn{B}}\}_{\Dyn{B\subset D}}$, where \[ \ol{\g}_{\Dyn{B}}=\langle e_i,f_i,h_i,\fcw_i\;|\; i\in\Dyn{B}\rangle \] \item[(ii)] There is a canonical embedding $\g\subset\ol{\g}$ mapping \[ e_i,f_i,h_i,d_{r}\mapsto e_i,f_i,h_i, \fcw_{r} \] $i=1,\dots, n$, $r=l+1,\dots, n$. At the level of Cartan subalgebras the inclusion is compatible with the symmetric, invariant, non--degenerate bilinear forms on $\h,\ol{\h}$. \end{itemize} \end{proposition} Henceforth, we omit the adjective \emph{extended}. \subsection{The holonomy algebra}\label{ss:holonomy1} Let $\g$ be a symmetrisable Kac--Moody algebra with root system $\sfR$ and Dynkin diagram $D$. Let $\sfF_{\rootsys}$ be the free algebra generated by the symbols $\Kh{\alpha}$, $\alpha\in\Rp$.
For any $k\in\IZ_{\geq 0}$ set \[ B_k=\{\alpha\;|\; \mathsf{ht}(\alpha)\leq k\} \] and let $J_k$ be the two sided ideal in $\sfF_{\rootsys}$ generated by the symbols $\Kh{\alpha}$, $\alpha\not\in B_k$. Set \[ \ol{\sfF}_{\rootsys}=\lim_k\sfF_{\rootsys}/J_k \] \begin{definition} The holonomy algebra $\DCPHA{\sfR}$ is the quotient of $\ol{\sfF}_{\rootsys}$ by the relations \[ [\Kh{\alpha}, \sum_{\beta\in\Psi}\Kh{\beta}]=0 \] where $\Psi\subset\rootsys$ is any rank two subsystem. \end{definition} Let $\wt{J}_k$ be the two sided ideal generated by $J_k$ and \[ [\Kh{\alpha}, \sum_{\beta\in\Psi\cap B_k}\Kh{\beta}] \] where $\Psi$ is as before. \begin{lemma} There is a canonical isomorphism of algebras \[ \DCPHA{\rootsys}\simeq\lim_k\sfF_{\rootsys}/\wt{J}_k \] \end{lemma} \begin{proof} The canonical projections $\DCPHA{\sfR}\to\sfF_{\sfR}/\wt{J}_k$ induce a surjective homomorphism \[ \DCPHA{\sfR}\to\lim_k\sfF_{\sfR}/\wt{J}_k \] with trivial kernel. \end{proof} \subsubsection{Holonomy algebra and category $\O$} Let $\O$ be the category of deformation highest weight integrable $\g$--modules, $\A$ the category of topologically free $\IC[[\nablah]]$--modules, and $\mathsf{f}:\O\to\A$ the forgetful functor. \begin{proposition} The linear map $\xi_{\sfR}:\DCPHA{\sfR}\to\CU$ defined by \[ \xi_{\sfR}(\Kh{\alpha})=\frac{\nablah}{2}\Ku{\alpha}{+} \] where $\Ku{\alpha}{+}$ is the Wick--ordered (truncated) Casimir operator of $\sl{2}^{\alpha}$, is a morphism of algebras, compatible with the natural gradation on $\DCPHA{\sfR}$. \end{proposition} \begin{proof} This follows from the commutation relations proved in \ref{ss:flatness}. \end{proof} \subsubsection{Weak quasi--Coxeter structures on $\DCPHA{\sfR}$} Hereafter, unless otherwise stated, we will adopt the same notation $\DCPHA{\sfR}$ to denote the holonomy algebra and its completion with respect to the grading.
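The defining relation $[\Kh{\alpha},\sum_{\beta\in\Psi}\Kh{\beta}]=0$ can be sanity-checked numerically through images of the $\Kh{\alpha}$ by Casimir operators of root $\sl{2}$--subalgebras. The sketch below is ours: it uses the full $\sl2$--Casimirs $ef+fe+h^2/2$ acting in the adjoint representation of $\mathfrak{sl}_3$ (one standard normalization, chosen purely for illustration; it is not the text's truncated operator), and verifies the rank $2$ relation for $\Psi$ of type $\sfA_2$:

```python
import numpy as np

def E(i, j):
    """Elementary 3x3 matrix E_ij."""
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

# (e, f) pairs for the three positive roots of sl3:
# alpha1, alpha2 and theta = alpha1 + alpha2.
triples = [
    (E(0, 1), E(1, 0)),   # alpha1
    (E(1, 2), E(2, 1)),   # alpha2
    (E(0, 2), E(2, 0)),   # theta
]

def ad(x):
    """ad(x) as a 9x9 matrix acting on gl3, identified with C^9."""
    return np.kron(x, np.eye(3)) - np.kron(np.eye(3), x.T)

def casimir(e, f):
    """kappa = ef + fe + h^2/2 of the sl2-triple, in the adjoint rep."""
    h = e @ f - f @ e
    ae, af, ah = ad(e), ad(f), ad(h)
    return ae @ af + af @ ae + ah @ ah / 2

ks = [casimir(e, f) for e, f in triples]
total = sum(ks)

# The individual Casimirs do not commute with each other...
assert np.linalg.norm(ks[0] @ ks[1] - ks[1] @ ks[0]) > 1e-8
# ...but each commutes with their sum over the rank-2 subsystem.
for k in ks:
    assert np.linalg.norm(k @ total - total @ k) < 1e-10
print("rank-2 commutation relations hold in ad(sl3)")
```

The same check can be run in other small representations; only the commutation with the *sum* over the subsystem holds, which is exactly what the quotient by $[\Kh{\alpha},\sum_{\beta\in\Psi}\Kh{\beta}]$ encodes.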
For any subdiagram $B\subset D$, we denote by $\sfR_B$ the corresponding root subsystem, by $\DCPHA{B}$ the holonomy algebra $\DCPHA{\sfR_B}$. For any pair of subdiagrams $B\subset B'\subset D$, we denote by $\DCPHA{B'}^{B}$ the subalgebra of $\DCPHA{B}$--invariant elements in $\DCPHA{B'}$. \begin{definition}\label{def:wqCha} A \emph{weak quasi--Coxeter structure} on $\DCPHA{\sfR}$ is a collection of elements $\DCPA{\F}{\G}\in\DCPHA{\sfR}$, referred to as \emph{De Concini--Procesi associators}, for any $B'\subseteq B$ and for any pair of maximal nested sets $\F,\G\in\Mns{B,B'}$, such that \begin{equation*} \DCPA{\F}{\G}=1\mod(\DCPHA{B})_{\geq 1} \end{equation*} and satisfying the following properties\\ \begin{itemize} \item {\bf Orientation:} for any $\F,\G\in\Mns{B,B'}$ \[ \DCPA{\F}{\G}=\DCPA{\G}{\F}^{-1} \] \item {\bf Transitivity:} for any $\F,\G,\H\in\Mns{B,B'}$ \[ \DCPA{\H}{\F}=\DCPA{\H}{\G}\DCPA{\G}{\F} \] \item {\bf Factorisation:} \[ \DCPA{(\F_1\cup\F_2)}{(\G_1\cup\G_2)}=\DCPA{\F_1}{\G_1}\DCPA{\F_2}{\G_2} \] for any $B''\subseteq B'\subseteq B$, $\F_1,\G_1\in\Mns{B,B'}$ and $\F_2,\G_2\in\Mns{B',B''}$. \end{itemize} \end{definition} \subsection{Monodromy of the Casimir connection} \subsubsection{The Casimir connection}\label{ss:casconn} Let \[ \c=\{h\in\h\;|\; (\alpha_i,h)=0\;\forall i\in D\} \] be the center of $\g$ and set \[ \h\reg=\h/\c\setminus\bigcup_{\alpha\in\Rp}\ker(\alpha) \] \begin{definition} The Casimir connection is the flat connection on $\hreg$ with values in $\DCPHA\rootsys$ \begin{equation*} \nabla=d-\sum_{\alpha\in\Rp}\frac{d\alpha}{\alpha}\Kh{\alpha} \end{equation*} \end{definition} The flatness of $\nabla$ is proved as in \ref{ss:flatness}. \subsubsection{Holomorphic functions}\label{ss:holonomy2} In order to perform the necessary analysis we consider the algebra $\wh{\DCPHA{}}_{\rootsys}$, completion of $\DCPHA{\rootsys}$ with respect to the natural gradation $\deg(\Kh{\alpha})=1$. 
A solution of the \emph{holonomy equation} \begin{equation}\label{eq:holoeq} dG=\sum_{\alpha\in\Rp}\frac{d\alpha}{\alpha}\Kh{\alpha}G \end{equation} is a holomorphic function (on its domain of definition) with values in $\wh{\DCPHA{}}_{\rootsys}$. The analytic computations performed with functions with values in $\wh{\DCPHA{}}_{\rootsys}$ are justified by the fact that $\wh{\DCPHA{}}_{\rootsys}$ is the inverse limit of the finite dimensional algebras $\sfF_{\rootsys}/J_{k, n}$, where $J_{k,n}$ is the ideal of the elements of degree $\geq n$ in $\sfF_{\rootsys}/\wt{J}_k$. A function is therefore determined by a sequence of compatible functions with values in the finite dimensional algebras $\sfF_{\rootsys}/J_{{k}, n}$. \subsubsection{Blow--up coordinates} Let $\F\in\Mns{D}$ be a maximal nested set on $D$. For any $\alpha\in\Rs{}$, let $p_{\F}(\alpha)$ be the minimal element $B\in\F$ such that $\supp(\alpha)\subseteq B$. Then $p_{\F}$ establishes a one--to--one correspondence between the simple roots $\{\alpha_1,\dots, \alpha_n\}$ and the subdiagrams in $\F$. For any $B\in\F$, we denote by $\alpha_B$ the simple root corresponding to $B$ under $p_{\F}$. For any $B\in\F$, we denote by $c_{\F}(B)$ the minimal element in $\F$ which properly contains $B$. When no confusion is possible, we will omit the index $\F$ and write $p(\alpha)$ and $c(B)$ in place of $p_{\F}(\alpha)$, $c_{\F}(B)$. For any $B\in\F$ set \[ x_B=\sum_{i\in B}\alpha_i \] \bluecomment{in the affine case, we may want to impose $x_D=\delta=\sum_i a_i\alpha_i$.} Then $\{x_B\}_{B\in\F}$ defines a set of coordinates on $\hess$. Let $U=\IC^{\F}$ with coordinates $\{u_B\}_{B\in\F}$. Let $\rho:U\to\h\reg$ be the map defined on the coordinates $\{x_B\}$ by \[ x_B=\prod_{B\subseteq C\in\F}u_C \] Then $\rho$ is birational with inverse \[ u_B=\left\{ \begin{array}{cl} x_B & \mbox{if $B$ is maximal in $\F$} \\ {x_B}/{x_{c(B)}}& \mbox{otherwise} \end{array} \right.
\] For any $\alpha\in\Rp$ set \[ P_{\alpha}=\frac{\alpha}{x_{p(\alpha)}} \] \begin{lemma}\hfill \begin{itemize} \item[(i)] For any simple root $\alpha$ such that $B=p(\alpha)$, we have \begin{equation}\label{eq:poly1} \alpha=x_B-\sum_{\substack{C\in\F\\c(C)=B}}x_C \qquad\mbox{and}\qquad P_{\alpha}=1-\sum_{\substack{C\in\F\\c(C)=B}}u_C \end{equation} \item[(ii)] If $\gamma\in\Rp$ is the sum of two positive roots $\alpha,\beta\in\Rp$ with $p(\alpha)=A$, $p(\beta)=B$, then for one of these, say $\alpha$, one has $p(\gamma)=p(\alpha)$, so that $p(\beta)=B\subsetneq p(\gamma)=A$, and \begin{equation}\label{eq:poly2} P_{\gamma}=P_{\alpha}+P_{\beta}\prod_{B\subseteq C\subsetneq A}u_C \end{equation} More precisely, if $\gamma=\sum_{C\subseteq p(\gamma)}m_C\alpha_C$, then \begin{equation}\label{eq:poly3} P_{\gamma}=\sum_{C\subset p(\gamma)}m_C\prod_{C\subseteq E\subsetneq p(\gamma)}u_EP_{\alpha_C} \end{equation} \end{itemize} \end{lemma} \begin{proof} $(i)$ The formula \eqref{eq:poly1} follows by direct computation. Namely, for any simple root $\alpha$, \begin{align*} x_{p(\alpha)}P_{\alpha}=\alpha=x_{p(\alpha)}-\sum_{\substack{C\in\F\\c(C)=B}}x_C \end{align*} Since $u_C=x_C/x_{c(C)}$, one has \begin{align*} P_{\alpha}=1-\sum_{\substack{C\in\F\\c(C)=B}}u_C \end{align*} $(ii)$ One has \begin{align*} P_{\gamma}&=\frac{\gamma}{x_{p(\gamma)}}=\sum_{C\subseteq p(\gamma)}m_C\alpha_C\frac{1}{x_{p(\gamma)}}=\sum_{C\subseteq p(\gamma)}m_C\frac{\alpha_C}{x_C}\frac{x_C}{x_{p(\gamma)}} \end{align*} Formula \eqref{eq:poly3} then follows from \[ P_{\alpha_C}=\frac{\alpha_C}{x_{p(\alpha_C)}}=\frac{\alpha_C}{x_C}\qquad\mbox{and}\qquad \frac{x_C}{x_{p(\gamma)}}=\prod_{C\subseteq E\subsetneq p(\gamma)}u_E \] The proof of \eqref{eq:poly2} is similar.
\end{proof} \subsubsection{De Concini--Procesi fundamental solutions}\label{ss:DCPsol} For any $\F\in\Mns{D}$ and $B\in\F$, set \[ R_{B}=\sum_{\substack{\alpha\in\Rp\\ p(\alpha)=B}}\Kh{\alpha} \qquad\mbox{and}\qquad \Kh{B}=\sum_{\substack{\alpha\in\Rp\\ \supp(\alpha)\subseteq B}}\Kh{\alpha}=\sum_{\substack{C\in\F\\C\subseteq B}}R_C \] and hence \begin{equation}\label{eq:residues1} \sum_{B\in\F}R_Bd\log(x_B)=\sum_{B\in\F}\Kh{B}d\log(u_B) \end{equation} \begin{equation}\label{eq:residues2} \prod_{B\in\F}u_B^{\Kh{B}}=\prod_{B\in\F}u_B^{\sum_{C\subseteq B}R_C}=\prod_{C\in\F}\prod_{B\supseteq C}u_B^{R_C} =\prod_{C\in\F}x_C^{R_C} \end{equation} \begin{theorem}\label{thm:DCPsol} For any $\F\in\Mns{D}$, let $\U_{\F}\subset U$ be the complement of the zeros of the polynomials $P_{\alpha}$, $\alpha\in\sfR$, and $\B\subset\U_{\F}$ a simply connected set containing the point $P_{\F}=\cap_{B\in\F}\{u_B=0\}$. There exists a unique holomorphic function $H_{\F}$ on $\B$, such that $H_{\F}(P_{\F})=1$ and, for every determination of $\log(x_B)$, $B\in\F$, the multivalued function \[ G_{\F}=H_{\F}\prod_{B\in\F}x_B^{R_B}=H_{\F}\prod_{B\in\F}u_B^{\Kh{B}} \] is a solution of the holonomy equation $dG=AG$, where $A=\sum_{\alpha}\Kh{\alpha}d\log(\alpha)$. $H_{\F}$ is the unique solution of the differential equation \begin{equation}\label{eq:holoform} dH=\left[A_{\F},H\right]+B_{\F}H \end{equation} with the given initial condition, where \begin{equation}\label{eq:mnsform} A_{\F}=\sum_{B\in\F}R_Bd\log(x_B)=\sum_{B\in\F}\Kh{B}d\log(u_B) \end{equation} and \begin{equation}\label{eq:mnsrest} B_{\F}=A-A_{\F}=\sum_{\alpha\in\Rp}\Kh{\alpha}d\log(P_{\alpha}) \end{equation} which is holomorphic in $\B$. \bluecomment{The theorem is not stated precisely.
In order to avoid problems with the possibly infinite collection $\{P_{\alpha}=0\}$ we construct the solution $H_{\F}$ by iterative approximations $H_{\F}^{(m)}$ in the open sets $\U_{\F}^{(m)}$ defined as the complement in $U$ of the union of $P_{\alpha}=0$, $\alpha\in B_m$.} \end{theorem} \subsubsection{}\label{lem:holo} We start the proof of the theorem with the following \begin{lemma} For any $B\in\F$, \[ [\Kh{B},B_{\F}]_{u_B=0}=0 \] \end{lemma} \begin{proof} For any $\alpha\in\Rp$, \[ \alpha=x_{p(\alpha)}P_{\alpha}=\prod_{B\supseteq p(\alpha)}u_BP_{\alpha} \] It follows that \[ d\log(\alpha)=\sum_{B\supseteq p(\alpha)}\frac{du_B}{u_B}+d\log(P_{\alpha}) \] One has \begin{align*} A&=\sum_{\alpha\in\Rp}\Kh{\alpha}d\log(\alpha)=\\ &=\sum_{B\in\F}\sum_{\substack{\alpha\in\Rp\\p(\alpha)\subseteq B}}\Kh{\alpha}\frac{du_B}{u_B}+B_{\F}=\\ &=\sum_{B\in\F}\frac{\Kh{B}}{u_B}du_B+B_{\F} \end{align*} Write $B_{\F}=\sum_{B\in\F}P_Bdu_B$. Then $A\wedge A=0$ implies \begin{equation}\label{eq:flatness1} [\frac{\Kh{B}}{u_B}+P_B,\frac{\Kh{C}}{u_C}+P_C]=0 \end{equation} Multiplying by $u_B$, and then setting $u_B=0$, one gets \begin{equation}\label{eq:flatness2} [\Kh{B},\frac{\Kh{C}}{u_C}+P_C]_{u_B=0}=0 \end{equation} Since, for every $B,C\in\F$, $[\Kh{B},\Kh{C}]=0$, \eqref{eq:flatness2} implies \begin{equation} [\Kh{B},P_C]_{u_B=0}=0\qquad\mbox{and}\qquad [\Kh{B},B_\F]_{u_B=0}=0 \end{equation} \end{proof} \subsubsection{Proof of Theorem \ref{thm:DCPsol}} It is enough to prove that there exists a unique solution $H$ of the equation \eqref{eq:holoform} with initial condition $H(P_{\F})=1$.\\ For $m=0$, $\U^{(0)}_{\F}=U$, $H^{(0)}$ is the constant function $1$, and we can choose $\B^{(0)}=U$. Now let $m>0$ and let $\U^{(m)}_{\F}$ be the complement of $\bigcup_{\alpha\in B_m}\{P_{\alpha}=0\}$ in $U$. Let $\B^{(m)}$ be any simply connected open set in $\B^{(m-1)}\cap\U^{(m)}_{\F}$ containing $P_{\F}$.
\bluecomment{For fundamental maximal nested sets, let $\C$ be the complexification of the fundamental Weyl chamber. Let $\B$ be a simply connected open set in $\U_{\F}\cap\C$ such that $P_{\F}\in\ol{\B}$ (this is always possible). Then we define $H_{\F}$ as a holomorphic function on $\B$ with a boundary condition on $P_{\F}$.} We assume that $H^{(m)}=\sum_{k\geq0}H_{k}$, $H_{k}$ being of degree $k$ in $\DCPHA\rootsys$, with $H_{0}(P_{\F})=1$ and $H_{k}(P_{\F})=0$, $k>0$. Equation \eqref{eq:holoform} is then equivalent to the recursive system \begin{equation}\label{eq:recursiveH1} dH_{k+1}=[A_{\F},H_{k}]+B_{\F}H_{k} \end{equation} and $dH_{0}=0$, with the given initial conditions. $dH_{0}=0$ implies $H_{0}=1$ and $dH_{1}=B_{\F}H_{0}$. The form $B_{\F}$ is holomorphic around $P_{\F}$. Assume by induction that $[A_{\F},H_{k}]+B_{\F}H_{k}$ is holomorphic. We claim that, choosing the solution $H_{k+1}$ of \eqref{eq:recursiveH1} with $H_{k+1}(P_{\F})=0$, $[A_{\F},H_{k+1}]$ is again holomorphic. To see this, it is enough to show that $[\Kh{B},H_{k+1}]$ vanishes on $u_B=0$. Assume by induction that $[\Kh{B}, H_{k}]_{u_B=0}=0$ for all $B\in\F$. Then we have \begin{align*} [\Kh{B}, dH_{k+1}]=[\Kh{B},[A_{\F},H_{k}]]+[\Kh{B},B_{\F}]H_{k}+B_{\F}[\Kh{B},H_{k}] \end{align*} By induction and Lemma \ref{lem:holo}, all terms vanish on $u_B=0$, so that $[A_{\F},H_{k+1}]$ is again holomorphic, and the result follows. \subsubsection{Properties of the De Concini--Procesi associators}\label{ss:DCP} Let $\F,\G\in\Mns{D}$ be two fundamental maximal nested sets and let $G_{\F}, G_{\G}$ be the two associated solutions. By the general theory, comparing $G_{\F}, G_{\G}$ on the fundamental Weyl chamber, there is an invertible multiplicative constant, which we denote by $\DCPA{\F}{\G}$, such that \[ G_{\G}=G_{\F}\DCPA{\F}{\G} \] We refer to the constants $\DCPA{\F}{\G}$ as the De Concini--Procesi associators.
\bluecomment{One should consider the functions $H_{\F}, H_{\G}$ as holomorphic functions on a simply connected set in the fundamental Weyl chamber, with a \emph{boundary} condition on $P_{\F}$.} \begin{theorem} The De Concini--Procesi associators satisfy the following properties: \begin{itemize} \item[(i)] {\bf Orientation:} $\DCPA{\F}{\G}=\DCPA{\G}{\F}^{-1}$.\\ \item[(ii)] {\bf Transitivity:} $\DCPA{\F}{\G}=\DCPA{\F}{\H}\DCPA{\H}{\G}$.\\ \item[(iii)] {\bf Support:} $\DCPA{\F}{\G}\in\DCPHA{\supp(\F,\G)}$.\\ \item[(iv)] {\bf Central Support:} $\DCPA{\F}{\G}$ commutes with $\{\Kh{\alpha}\;|\; \alpha\in\sfR_{\zsupp(\F,\G)}\}$.\\ \item[(v)] {\bf Forgetfulness:} For any equivalent elementary pairs $(\F,\G), (\F',\G')$, $\DCPA{\F}{\G}=\DCPA{\F'}{\G'}$. \end{itemize} \end{theorem} \begin{proof} $(i)-(ii)$ The properties of orientation and transitivity follow directly from the definition of $\DCPA{\F}{\G}$. $(iii)-(iv)$ It is enough to prove the properties of support and central support for all $\F,\G$ which differ by only one element. We may assume that there are a subdiagram $B=\supp(\F,\G)$ and two vertices $i,j\in B$ such that $\zsupp(\F,\G)=B\setminus\{i,j\}$, \ie $\F=\U\cup\{B\setminus\{i\}\}$ and $\G=\U\cup\{B\setminus\{j\}\}$, where $\U=\F\cap\G$. Let $\{u_C^{\F}\}_{C\in\F}, \{u_C^{\G}\}_{C\in\G}$ be the two sets of coordinates and $R_C^{\F}, R_C^{\G}$ the corresponding residues. Then $u_C^{\F}=u_C^{\G}$ and $R_C^{\F}=R_C^{\G}$ for every $C\in\U\setminus\{B\}$, and \[ R_{B\setminus\{i\}}^{\F}+R_B^{\F}=R_{B\setminus\{j\}}^{\G}+R_B^{\G}=:K \] since this equals the sum of all $\Kh{\alpha}$ such that $\supp(\alpha)$ is contained in $B$ but not in $B\setminus\{i,j\}$. Set $R_i=R_{B\setminus\{i\}}^{\F}$, $R_j=R_{B\setminus\{j\}}^{\G}$, $x_i=x_{B\setminus\{i\}}$, $x_j=x_{B\setminus\{j\}}$, $u_i=u^{\F}_{B\setminus\{i\}}$, and $u_j=u_{B\setminus\{j\}}^{\G}$.
Then we have \begin{align*} G_{\F}&=H_{\F}\prod_{C\in\F}x_C^{R^{\F}_C}= H_{\F}\prod_{C\in\U\setminus B}x_C^{R_C}x_B^{R_B^{\F}}x_i^{R_i}= H_{\F}\prod_{C\in\U\setminus B}x_C^{R_C}x_B^Ku_i^{R_i}\\ G_{\G}&=H_{\G}\prod_{C\in\G}x_C^{R^{\G}_C}= H_{\G}\prod_{C\in\U\setminus B}x_C^{R_C}x_B^{R_B^{\G}}x_j^{R_j}= H_{\G}\prod_{C\in\U\setminus B}x_C^{R_C}x_B^Ku_j^{R_j} \end{align*} The functions $H_{\F}u_i^{R_i}$, $H_{\G}u_j^{R_j}$ satisfy the same equation \begin{equation*} dF=AF-FA_{\z} \qquad\mbox{where}\qquad A_{\z}=Kd\log(x_B)+\sum_{C\in\U\setminus\{B\}}R_Cd\log(x_C) \end{equation*} Restricted to $\{u_C=0\}_{C\in\U}$, the two functions satisfy the same holonomy equation, and therefore they differ by a constant $\DCPA{}{}$ in the algebra generated by the residues of this equation, which commute with the elements $K$ and $\{R_C\}_{C\in\U\setminus\{B\}}$. It follows that the same is true for $G_{\F}$ and $G_{\G}$, and $\DCPA{\F}{\G}=\DCPA{}{}$. \end{proof} \begin{corollary}\label{cor:HAqC} The collection of De Concini--Procesi associators defines a weak quasi--Coxeter structure on the holonomy algebra $\DCPHA{\sfR}$. \end{corollary} \subsection{The differential quasi--Coxeter structure}\label{sss:DCP qC} The notion of weak quasi--Coxeter structure is similarly defined for the algebra $\CU$ and the category $\O$. Moreover, any weak quasi--Coxeter structure on the holonomy algebra $\DCPHA{\sfR}$ induces a weak quasi--Coxeter structure on $\CU$ and $\O$ by means of the morphism \[ \xi_{\sfR}:\DCPHA{\sfR}\to\CU \] The correction of the monodromy performed in \ref{ss:cocycle} allows one to extend the induced structure to a quasi--Coxeter structure. \begin{theorem}\label{pr:QKnabla} Set $\hbar=\pi\iota\nablah$.
Then, \begin{enumerate} \item The associators $\DCPA{\F}{\G}$ and local monodromies \begin{equation}\label{eq:Snablak} S_{i,C}^\nabla= \wt{s}_i\cdot \exp\left(\hbar/2\cdot\sfC_{\alpha_i}\right) \end{equation} endow the category $\O$ with a quasi--Coxeter structure $\Qnablak$ of type $D$. \item For any $V\in\O$ and \mns $\F$, the representation \[\pi_\F:B_W\to GL(V\fml)\] obtained from the \qc structure $\Qnablak$ coincides with the monodromy of $(\wt{\IV},\wt{\nabla}_\kappa)$ expressed in the fundamental solution $\Psi_\F$. \end{enumerate} \end{theorem} \Omit{ Section 3: A differential braided quasi-Coxeter structure on Oint - Intro: we show in this section that the quasi-Coxeter structure on Oint constructed in Section 2 is braided, where the braiding arises from the (monodromy of) the KZ equations for all Levi subalgebras of g. - Definition of braided quasi-Coxeter category - Preamble: KZ equations and braided tensor structure on Oint - Double Holonomy algebra - Thm3: There exists a braided quasi-Coxeter structure on Oint interpolating the monodromy of the KZ and the Casimir connection } \section{A differential braided quasi--Coxeter structure on category $\O$}\label{s:diff-braid-qC} In this section, we review the notion of {\it braided} \qc category \cite{ATL1}. We will prove in Section \ref{s:difftwist-fusion} that the \qc structure on $\Oint$ which arises from the monodromy of the Casimir connection is part of a braided \qc one which also gives rise to the monodromy of the KZ equations for $\g$ and all its Levi subalgebras. Similarly to its non--braided counterpart, the latter structure is universal in that it can be defined on a double holonomy algebra $\DBLHA{\sfR}{n}$ generated by the algebra $\DCPHA{\sfR}$ introduced in \ref{ss:holonomy1} and the holonomy algebra $\DKHA{n}$ of the pure braid group on $n$ strands. The algebra $\DBLHA{\sfR}{n}$ is introduced and studied in this section.
\subsection{Braided quasi--Coxeter categories} \subsubsection{Strict $D$--monoidal categories} \begin{definition} A \emph{strict $D$--monoidal category} $\C=(\{\C_B\}, \{F_{BB'}\}, \{J_{BB'}\})$ is a $D$--category $\C=(\{\C_B\}, \{F_{BB'}\})$ where \begin{itemize} \item for any $B\subseteq D$, $(\C_B,\ten_B)$ is a strict monoidal category \item for any $B\subseteq B'$, the functor $F_{BB'}$ is endowed with a tensor structure $J_{BB'}$ \end{itemize} with the additional condition that, for every $B\subseteq B'\subseteq B''$, $J_{BB'}\circ J_{B'B''}=J_{BB''}$. \end{definition} \subsubsection{$D$--monoidal categories} \begin{definition}\label{def:D-qba} A \emph{$D$--monoidal category} \[\C=(\{(\C_B,\ten_B,\Phi_B)\}, \{F_{BB'}\}, \{J^{\F}_{BB'}\})\] is the datum of \begin{itemize} \item A $D$--category $(\{\C_B\}, \{F_{BB'}\})$ such that each $(\C_B,\ten_B,\Phi_B)$ is a tensor category, with $\C_{\emptyset}$ a strict tensor category, \ie $\Phi_{\emptyset} =\id$. \item for any pair $B\subseteq B'$ and $\F\in\Mns{B,B'}$, a tensor structure $J^{\F}_{BB'}$ on the functor $F_{BB'}:\C_{B'}\to\C_{B}$ \end{itemize} with the additional condition that, for any $B\subseteq B'\subseteq B''$, $\F\in\Mns{B'',B'}$, $\G\in\Mns{B',B}$, \[J_{BB'}^\G\circ J_{B'B''}^\F=J_{BB''}^{\F\cup\G}\] \end{definition} \subsubsection{Braided $D$--monoidal categories} \begin{definition} A \emph{braided $D$--monoidal category} \[\C=(\{(\C_B,\ten_B,\Phi_B,\beta_B)\}, \{(F_{BB'}, J_{BB'}^{\F})\})\] is the datum of \begin{itemize} \item a $D$--monoidal category $(\{(\C_B,\ten_B,\Phi_B)\},\{(F_{BB'}, J_{BB'}^{\F})\})$ \item for every $B\subseteq D$, a commutativity constraint $\beta_B$ in $\C_B$, defining a braiding in $(\C_B,\ten_B,\Phi_B)$. \end{itemize} \end{definition} \begin{rem} Note that the tensor functors $(F_{BB'},J_{BB'}^\F):\C_{B'}\to\C_B$ are \emph{not} assumed to map the commutativity constraint $\beta_{B'}$ to $\beta_B$.
\end{rem} \subsubsection{Braided quasi--Coxeter categories} \begin{definition}\label{def:qc-cat} A \emph{quasi--Coxeter braided monoidal category of type $D$} \[\C=(\{(\C_B,\ten_B, \Phi_B,\beta_B)\}, \{(F_{BB'}, J^{\F}_{BB'})\},\{\DCPA{\F}{\G}\},\{S_i\})\] is the datum of \begin{itemize} \item a quasi--Coxeter category of type $D$, \[\C=(\{\C_B\},\{F_{BB'}\}, \{\DCPA{\F}{\G}\}, \{S_i\})\] \item a braided $D$--monoidal category \[\C=(\{(\C_B,\ten_B,\Phi_B,\beta_B)\}, \{(F_{BB'}, J^{\F}_{BB'})\})\] \end{itemize} satisfying the following conditions \begin{itemize} \item for any $B\subseteq B'$, and $\G,\F\in\Mns{B,B'}$, the natural transformation $\DCPA{\F}{\G}\in\sfAut{F_{BB'}}$ determines an isomorphism of tensor functors $(F_{BB'},J_{BB'}^\G)\to(F_{BB'},J_{BB'}^\F)$, that is, for any $V,W\in\C_{B'}$, \[(\DCPA{\G}{\F})_{V\ten W}\circ (J_{BB'}^{\F})_{V,W}=(J^{\G}_{BB'})_{V,W} \circ((\DCPA{\G}{\F})_V\ten(\DCPA{\G}{\F})_W)\] \item for any $i\in D$, the following holds: \begin{equation*} \Delta_{J_i}(S_i)=(R_i)^{21}_{J_i}\cdot(S_i\ten S_i) \end{equation*} \end{itemize} \end{definition} \subsection{The KZ--holonomy algebra} \subsubsection{Cosimplicial structure on $\CU$}\label{sss:cosimpUg} As before, let $\O$ be the category of deformation highest weight integrable $\g$--modules, $\A$ the category of topologically free $\IC[[\nablah]]$--modules, and $\mathsf{f}:\O\to\A$ the forgetful functor. Set $\CU^n=\sfEnd{\sff^{\boxtimes n}}$, where $\sff^{\boxtimes 1}=\sff$, and $\sff^{\boxtimes n}:\O^n\to\A$, $\sff^{\boxtimes n}(V_1,\dots, V_n)=V_1\ten\cdots\ten V_n$.
The tower of algebras $\{\CU^n\}_{n\geq0}$ is a cosimplicial complex of algebras \[\xymatrix@C=.5cm{\sfk\ar@<-2pt>[r]\ar@<2pt>[r] & \sfEnd{\ff} \ar@<3pt>[r] \ar@<0pt>[r] \ar@<-3pt>[r] & \sfEnd{\ff^{\boxtimes 2}} \ar@<-6pt>[r]\ar@<-2pt>[r]\ar@<2pt>[r]\ar@<6pt>[r] & \sfEnd{\ff^{\boxtimes 3}} \quad\cdots}\] with face morphisms $d_n^i:\sfEnd{\ff^{\boxtimes n}}\to\sfEnd{\ff^{\boxtimes n+1}}$, $i=0,\dots, n+1$, given by \[(d_0^0 \varphi)_X: \xymatrix{ \ff(X)\ar[r] & \ff(X)\ten{\bf 1} \ar[r]^{1\ten\varphi} & \ff(X)\ten{\bf 1} \ar[r] & \ff(X) } \] \[(d_0^1 \varphi)_X: \xymatrix{ \ff(X)\ar[r] & {\bf 1}\ten\ff(X) \ar[r]^{\varphi\ten1} & {\bf 1}\ten\ff(X) \ar[r] & \ff(X) } \] where ${\bf 1}$ is the trivial module, $X\in\O$, $\varphi\in\sfk$, and \[ (d_n^i \varphi)_{X_1,\dots, X_{n+1}}= \left\{ \begin{array}{ll} \id\ten \varphi_{X_2,\dots, X_{n+1}} & i=0\\ \varphi_{X_1,\dots, X_i\ten X_{i+1},\dots X_{n+1}} & 1\leq i \leq n\\ \varphi_{X_1,\dots, X_n}\ten\id & i=n+1 \end{array} \right. \] for $\varphi\in\sfEnd{\ff^{\boxtimes n}}$, $X_i\in\O$, $i=1,\dots, n+1$. The degeneration homomorphisms $s_n^i:\sfEnd{\ff^{\boxtimes n}}\to\sfEnd{\ff^{\boxtimes n-1}}$, for $i=1,\dots, n$, are \[(s_n^i \varphi)_{X_1, \dots, X_{n-1}}=\varphi_{X_1,\dots, X_{i-1}, {\bf 1}, X_{i}, \dots, X_{n-1} }\] \Omit{ The morphisms $\{s_n^i\}, \{d_n^i\}$ satisfy the standard relations \[ \begin{array}{lr} d_{n+1}^jd_n^i=d^i_{n+1}d_n^{j-1} & i<j \\ s_n^js_{n+1}^i=s_n^is_{n+1}^{j+1} & i\leq j \end{array} \] \[ s_{n+1}^jd_n^i=\left\{ \begin{array}{lr} d_{n-1}^is_n^{j-1} & i<j\\ \id & i=j,j+1\\ d_{n-1}^{i-1}s_n^j & i>j+1 \end{array} \right. \] } \subsubsection{The holonomy algebra of the KZ--connection} Let $\DKHA{n}$ be the algebra generated over $\IC$ by the elements $\{\Th{ij}\}$, $1\leq i<j\leq n$ with relations \[ [\Th{ij},\Th{ik}+\Th{jk}]=0\qquad [\Th{ij},\Th{kl}]=0 \] for any $i,j,k,l$ such that $\{i,j\}\cap\{k,l\}=\emptyset$. 
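To fix ideas, the smallest non--trivial case can be spelled out explicitly; this is merely an unwinding of the definition above, with no additional assumptions. For $n=3$ there are no pairs $\{i,j\}, \{k,l\}$ with $\{i,j\}\cap\{k,l\}=\emptyset$, so $\DKHA{3}$ is generated by $\Th{12}, \Th{13}, \Th{23}$ subject only to

```latex
% Relations in \DKHA{3}: only the first family of relations occurs
[\Th{12},\Th{13}+\Th{23}]=0 \qquad
[\Th{13},\Th{12}+\Th{23}]=0 \qquad
[\Th{23},\Th{12}+\Th{13}]=0
% Equivalently, \Th{12}+\Th{13}+\Th{23} is central in \DKHA{3}.
% The first relation of the second family, [\Th{12},\Th{34}]=0, occurs for n=4.
```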
\subsubsection{The cosimplicial structure on $\{\DKHA{n}\}_{n\geq0}$}\label{sss:cosimp1} The algebras $\DKHA{n}$ are naturally endowed with a cosimplicial structure. The insertion coproduct maps \[ d_n^k:\DKHA{n}\to\DKHA{n+1}\qquad k=0,1,\dots, n+1 \] are defined by \[ d_n^k(\Th{ij})=\delta_{ki}(\Th{ij}+\Th{i+1,j})+\delta_{kj}(\Th{ij}+\Th{i,j+1})\qquad k=1,\dots, n \] and \[ d_n^0(\Th{ij})=\Th{i+1,j+1}\qquad d_n^{n+1}(\Th{ij})=\Th{ij} \] The degeneration homomorphisms $s_n^k:\DKHA{n}\to\DKHA{n-1}$, $k=1,\dots, n$ are \[ s_n^k(\Th{ij})=(1-\delta_{ki}-\delta_{kj})\Th{ij} \] \subsubsection{The holonomy algebra and category $\O$} Let $r\in\CU^2$ be the classical $r$--matrix. The following proposition is well-known \cite{e3}. \begin{proposition} The linear map $\xi_n: \DKHA{n}\to\CU^n$ defined by \[ \xi_n(\Th{ij})=\nablah\cdot(r^{ij}+r^{ji}) \] is a morphism of algebras, compatible with the cosimplicial structure and the natural gradation on $\DKHA{n}$\footnote{As in $\DCPHA{\sfR}$, $\deg\Th{ij}=1$.}. \end{proposition} \subsection{The double holonomy algebra}\label{ss:doubleholo} \subsubsection{An $\sfR$--refinement of $\DKHA{n}$} We now define an extended version of the holonomy algebra, which naturally embeds into $\CU^n$. \begin{definition} Let $\sfR$ be a fixed root system. The $\sfR$--holonomy algebra $\DBLHA{\sfR}{n}^\prime$ is the algebra over $\IC$ generated by the symbols $\{\Tdh{ij}{}, \Tdh{ij}{\alpha}\}$, $1\leq i<j\leq n, \alpha\in\rootsys\cup\{0\}$ with relations \begin{equation}\label{eq:Omega} [\Tdh{ij}{},\Tdh{ik}{}+\Tdh{jk}{}]=0\qquad [\Tdh{ij}{},\Tdh{kl}{}]=0 \end{equation} and \begin{equation}\label{eq:Omegaalpha} [\Tdh{ij}{\alpha}, \Tdh{kl}{\beta}]=0\qquad\Tdh{ij}{}=\sum_{\alpha\in\sfR\cup\{0\}} \Tdh{ij}{\alpha} \end{equation} for any $i,j,k,l$ such that $\{i,j\}\cap\{k,l\}=\emptyset$ and $\alpha,\beta\in\rootsys\cup\{0\}$.
\footnote{If $|\sfR|=+\infty$, then the relation \eqref{eq:Omegaalpha} is to be interpreted as in \ref{ss:holonomy1}.} Finally, we assume $[\Tdh{ij}{0},\Tdh{kl}{0}]=0$ for every $i,j,k,l$. \end{definition} The tower of algebras $\{\DBLHA{\sfR}{n}^\prime\}_{n\geq0}$ is naturally endowed with a cosimplicial structure, which extends the cosimplicial structure on $\DKHA{n}$. In particular, \[ d_n^k(\Tdh{ij}{\alpha})=\delta_{ki}(\Tdh{ij}{\alpha}+\Tdh{i+1,j}{\alpha})+ \delta_{kj}(\Tdh{ij}{\alpha}+\Tdh{i,j+1}{\alpha})\qquad k=1,\dots, n \] Similarly for the degeneration maps. \begin{proposition} The linear map $\xi_{\sfR,n}^{\prime}: \DBLHA{\sfR}{n}^\prime\to\CU^n$ defined by \[ \xi_{\sfR,n}^{\prime}(\Tdh{ij}{})=\nablah\cdot r^{ij}\qquad \xi_{\sfR,n}^{\prime}(\Tdh{ij}{\alpha})=\nablah\cdot r^{ij}_{\alpha} \] is a morphism of algebras, compatible with the cosimplicial structure and the natural grading on $\DBLHA{\sfR}{n}^{\prime}$. \end{proposition} Concretely, let $\g$ be the Kac--Moody algebra associated with $\rootsys$. Then \[ r^{ij}_{\alpha}=(e_{\alpha})_{a}^{(i)}\ten (e_{-\alpha})^{a, (j)} \] where $\{(e_{\alpha})_{a}\},\{(e_{-\alpha})^{a}\}$ are dual bases of $\g_{\alpha}$, $\g_{-\alpha}$, and \[ r^{ij}_{0}=x_a^{(i)}\ten x^{a,(j)} \] where $\{x_a\},\{x^a\}$ are dual bases of $\h$.\\ There is a natural action of $\h^{\ten n}$ on $\DBLHA{\sfR}{n}^{\prime}$, \ie for any $h\in\h$ \[ h^{(k)}\cdot\Tdh{ij}{\alpha}=(\delta_{ki}-\delta_{kj})\alpha(h)\Tdh{ij}{\alpha} \] \subsubsection{Double holonomy algebra}\label{sss:double-holonomy} We now extend the set of generators in order to describe simultaneously the $n$--fold Casimir connection.
\begin{definition} Let $\DBLHA{\sfR}{n}$ be the algebra generated over $\IC$ by the elements $\{\Tdh{ij}{},\Tdh{ij}{\alpha}\}$ with the relations \eqref{eq:Omega}, \eqref{eq:Omegaalpha}, and by the elements $\Kdh{\alpha}{(n)}$ and $\Kdh{\alpha,k}{}$, $\alpha>0$, $k=1,\dots, n$, with the relations \[ [\Kdh{\alpha}{(n)}, \sum_{\beta\in\Psi}\Kdh{\beta}{(n)}]=0 \] for any subsystem $\Psi\subset\rootsys$, $\rk(\Psi)=2$, and $\alpha\in\Psi$. Finally, we add the relations \[ [\Tdh{ij}{},\Kdh{\alpha}{(n)}]=0 \] for all $1\leq i,j\leq n$, $\alpha\in\sfR_+$, \[ [\Tdh{ij}{\alpha}, \Kdh{\beta,k}{}]=0 \] if $k\not\in\{i,j\}$, and \begin{equation}\label{eq:Kdecomp} \Kdh{\alpha}{(n)}=\sum_{i<j}\Tdh{ij}{\alpha}+\Tdh{ij}{-\alpha}+\sum_{k=1}^n\Kdh{\alpha,k}{} \end{equation} \end{definition} \subsubsection{Weight decomposition} The elements $\Kdh{\alpha}{(n)}$ should be thought of concretely in $\CU^n$ as $\frac{\nablah}{2}\Delta^{(n)}(\Ku{\alpha}{+})$ where $\Ku{\alpha}{+}$ is the Wick--ordered (truncated) Casimir element. The action of $\h^{\ten n}$ on $\DBLHA{\sfR}{n}$ extends to the elements $\Kdh{\alpha}{(n)}$. Namely, in $\CU^n$, we have \begin{align*} \Delta^{(n)}(\Ku{\alpha}{+})&=2\Delta^{(n)}(e_{-\alpha}e_{\alpha})=\\ &=2\left(\sum_{i<j}\Omega_{ij}^{\alpha}+\Omega_{ij}^{-\alpha}+\Ku{\alpha}{0}\right) \end{align*} where $\Ku{\alpha}{0}=\sum_{k=1}^n(1^{\ten k-1}\ten\Ku{\alpha}{+}\ten 1^{\ten n-k})$ is a weight zero element. Therefore we define the action of $\h$ by \[ h^{(k)}\cdot\Kdh{\alpha, l}{}=0 \] and \[ h^{(k)}\cdot\Kdh{\alpha}{(n)}=h^{(k)}\cdot\left(\sum_{i<j}\Tdh{ij}{\alpha}+\Tdh{ij}{-\alpha}\right) \] consistently with the relation \eqref{eq:Kdecomp}. \subsubsection{The cosimplicial structure on $\{\DBLHA{\sfR}{n}\}_{n\geq0}$}\label{sss:cosimp2} The algebras $\DBLHA{\sfR}{n}$ are naturally endowed with a cosimplicial structure.
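For $n=2$, the relation \eqref{eq:Kdecomp} involves a single pair $(i,j)=(1,2)$; we record this instance explicitly, since it is the case relevant to the differential twist equations below:

```latex
% n = 2 instance of \eqref{eq:Kdecomp}: the only pair i<j is (1,2)
\Kdh{\alpha}{(2)}
  = \Tdh{12}{\alpha}+\Tdh{12}{-\alpha}
  + \Kdh{\alpha,1}{}+\Kdh{\alpha,2}{}
```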
Set \[ \Kdh{\alpha,k}{(m)}=\sum_{k\leq i<j\leq m+k-1}\Tdh{ij}{\alpha}+\Tdh{ij}{-\alpha}+ \sum_{l=k}^{m+k-1}\Kdh{\alpha,l}{} \] so that $\Kdh{\alpha}{(n)}=\Kdh{\alpha,1}{(n)}$ and $\Kdh{\alpha,i}{}=\Kdh{\alpha,i}{(1)}$. The insertion coproduct maps \[ d_n^k:\DBLHA{\sfR}{n}\to\DBLHA{\sfR}{n+1}\qquad k=0,1,\dots, n+1 \] are defined on $\Tdh{ij}{}, \Tdh{ij}{\alpha}$ as in the case of $\DBLHA{\sfR}{n}^{\prime}$, and on $\Kdh{\alpha,i}{(m)}$ by \[ d_n^k(\Kdh{\alpha,i}{(m)})= \left\{ \begin{array}{lcl} \Kdh{\alpha,i+1}{(m)} & \mbox{if} & k<i\\ \Kdh{\alpha,i}{(m+1)} & \mbox{if} & k=i,\dots, m+i\\ \Kdh{\alpha,i}{(m)} & \mbox{if} & k=m+i+1,\dots, n \end{array} \right. \] for $k=1,\dots, n$, and \[ d_n^0(\Kdh{\alpha,i}{(m)})=\Kdh{\alpha,i+1}{(m)} \qquad d_n^{n+1}(\Kdh{\alpha,i}{(m)})=\Kdh{\alpha,i}{(m)} \] The degeneration homomorphisms $s_n^k:\DBLHA{\sfR}{n}\to\DBLHA{\sfR}{n-1}$, $k=1,\dots, n$ are similarly defined. In particular, \[ s_n^k(\Kdh{\alpha,i}{(m)})= \left\{ \begin{array}{lcl} \Kdh{\alpha,i-1}{(m)} & \mbox{if} & k<i\\ \Kdh{\alpha,i}{(m-1)} & \mbox{if} & k=i,\dots, m+i\\ \Kdh{\alpha,i}{(m)} & \mbox{if} & k=m+i+1,\dots, n \end{array} \right. \] \subsubsection{Double holonomy algebra and category $\O$} \begin{proposition} The linear map $\xi_{\sfR,n}: \DBLHA{\sfR}{n}\to\CU^n$ defined by \begin{align*} \xi_{\sfR,n}(\Tdh{ij}{\alpha})=\nablah\cdot r_{\alpha}^{ij} \qquad \xi_{\sfR,n}(\Tdh{ij}{-\alpha})=\nablah\cdot r_{\alpha}^{ji} \end{align*} and \begin{align*} \xi_{\sfR,n}(\Kdh{\alpha}{(n)})=\frac{\nablah}{2}\cdot\Delta^{(n)}(\Ku{\alpha}{+}) \qquad \xi_{\sfR,n}(\Kdh{\alpha,k}{})=\frac{\nablah}{2}\cdot(1^{\ten k-1}\ten\Ku{\alpha}{+}\ten 1^{\ten n-k}) \end{align*} is a morphism of cosimplicial algebras, compatible with the action of $\h$ and the natural gradation on $\DBLHA{\sfR}{n}$ \footnote{All generators have $\deg=1$.}.
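As a consistency check on the notation $\Kdh{\alpha,k}{(m)}$ (a direct specialisation of the definition above, with no extra assumptions), setting $k=1$, $m=n$ recovers \eqref{eq:Kdecomp}, while $m=1$ leaves only the local generator:

```latex
\Kdh{\alpha,1}{(n)}
  =\sum_{1\leq i<j\leq n}\Tdh{ij}{\alpha}+\Tdh{ij}{-\alpha}
  +\sum_{l=1}^{n}\Kdh{\alpha,l}{}
  =\Kdh{\alpha}{(n)}
\qquad
\Kdh{\alpha,k}{(1)}=\Kdh{\alpha,k}{}
% for m=1 the range k\leq i<j\leq k is empty, so only \Kdh{\alpha,k}{} survives
```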
In particular, \[ \xi_{\sfR,n}(\Kdh{\alpha,k}{(m)})= \frac{\nablah}{2}(1^{\ten k-1}\ten\Delta^{(m)}(\Ku{\alpha}{+})\ten 1^{\ten n-m-k+1}) \] \end{proposition} \begin{proof} The relations satisfied by $\Tdh{ij}{}, 1\leq i<j\leq n$, follow from the commutativity of the diagram \[ \xymatrix{\DBLHA{\sfR}{n} \ar[r]^{\xi} & \CU^n\\ \DKHA{n} \ar[u] \ar[ur]_{\xi} &} \] The $tt$--relations \[ [\Kdh{\alpha}{(n)}, \sum_{\beta\in\Psi}\Kdh{\beta}{(n)}]=0 \] are satisfied by the elements $\Ku{\alpha}{+}\in\CU^n$ as in Prop.~\ref{prop:comm-rel}. The commutativity relations \[ [\Tdh{ij}{},\Kdh{\alpha}{(n)}]=0 \] follow from the $\g$--invariance of $r^{ij}+r^{ji}$ in $\CU^n$. The last statement is clear. \end{proof} \subsection{The differential braided quasi--Coxeter structure} \begin{definition} A \emph{weak braided quasi--Coxeter structure} on $\DCPHA{\sfR}$ is the datum of \begin{enumerate} \item for each connected subdiagram $B\subseteq D$, an $R$--matrix $R_B\in\DBLHA{B}{2}$ and an associator $\Phi_B\in\DBLHA{B}{3}$, which are of the following form \[R_B=\exp\left(\frac{\Tdh{B}{}}{2}\right) \aand \Phi_B=\Phi'_B(\Tdh{B,12}{},\Tdh{B,23}{})\] where $\Phi'_B$ is a Lie associator. \item for each pair of subdiagrams $B^\prime\subseteq B\subseteq D$ and maximal nested set $\F\in\Mns{B^\prime,B}$, a relative twist $J^{BB^\prime}_{\F}\in\DBLHA{B}{2}^{B'}$, satisfying \begin{equation} J^{BB'}_{\F}=1\mod(\DBLHA{B}{2})_{\geq1} \end{equation} \begin{equation} \left(\Phi_B\right)_{J^{BB^\prime}_{\F}}=\Phi_{B^\prime} \end{equation} \footnote{For any element $\Phi\in\DBLHA{\sfR}{3}$ and $J\in\DBLHA{\sfR}{2}$, we set \[ \Phi_J=d_2^3(J)\cdot d_2^1(J)\cdot\Phi\cdot d_2^2(J)^{-1}d_2^0(J)^{-1} \] } and satisfying the \emph{factorization property} \[ J_{\F_1\cup\F_2}^{BB^{\prime\prime}}= J_{\F_1}^{BB^\prime}\cdot J_{\F_2}^{B^\prime B^{\prime\prime}} \] where $B''\subseteq B'\subseteq B$, $\F_1\in\Mns{B,B^\prime}$ and $\F_2\in\Mns{B^\prime,B''}$.
\item for any $B'\subseteq B$ and pair of maximal nested sets $\F,\G\in\Mns{B,B'}$, an element $\DCPA{\F}{\G}\in\DCPHA{B}$ such that \begin{equation*} \DCPA{\F}{\G}=1\mod(\DCPHA{B})_{\geq 1} \end{equation*} and satisfying the properties of \ref{def:wqCha}. \end{enumerate} \end{definition} In the next section, we will prove the following \begin{theorem} The weak quasi--Coxeter structure on $\DCPHA{\sfR}$ extends to a weak braided quasi--Coxeter structure. \end{theorem} This induces a weak braided quasi--Coxeter structure on $\CU$, and therefore on $\Oint$, which can be completed into a braided quasi--Coxeter one. In particular, we get \begin{theorem} There exists a structure of braided quasi--Coxeter category on $\Oint$, which interpolates the monodromy of the KZ and the Casimir connections. \end{theorem} \Omit{ Section 4: Proof of Thm 3: Differential twists and the fusion operator - Differential Twist - Braided quasi-Coxeter structure defined by a differential twist - Dynamical KZ, the joint system KZ-C - The fusion operator - Thm4: Reg lim of fusion operator defines a differential twist } \section{Differential twists and the fusion operator}\label{s:difftwist-fusion} This section follows \cite[Sec. 3--7]{vtl-6} closely, and adapts the construction of the differential twist and the fusion operator to the double holonomy algebra $\DBLHA{\sfR}{2}$ introduced in \ref{ss:doubleholo}. In particular, we introduce the notion of differential twist with values in $\DBLHA{\sfR}{2}$, and show that it induces a weak quasi--Coxeter structure on $\DCPHA{\sfR}$. We then show that a differential twist can be obtained by constructing an appropriate fusion operator with values in $\DBLHA{\sfR}{2}$. \subsection{Differential twist} Let $\C_\IR=\{t\in\h|\,\alpha_i(t)>0,\,\,\forall i\in\bfI\}\subset\h\ess_\IR$ be the fundamental chamber of $\h$, and set $\C=\C_\IR+i\h\ess_\IR$.
Let $\DBLHA {\sfR}{2}$ be the double holonomy algebra, and define $\wt{r}\in\DBLHA{\sfR}{2}$ by \[ \wt{r}=\frac{1}{2}\sum_{\alpha\in\sfR_+}\Tdh{}{\alpha}-\Tdh{}{-\alpha} \] \begin{definition} A {\it differential twist} of type $\sfR$ is a holomorphic map $F=F_{\sfR}:\C\to\DBLHA{\sfR}{2}$ such that \begin{enumerate} \item\label{it:veps} $s_2^1(F)=1=s_2^2(F)$. \item\label{it:norm} $F=1+f\mod(\DBLHA{\sfR}{2})_{\geq 2}$, where $f\in(\DBLHA{\sfR}{2})_1$ satisfies $\Alt_2 f=\wt{r}$. \footnote{There is an action of the symmetric group $\SS_n$ on $\DBLHA{\sfR}{n}$ defined by $(k,k+1)\Kdh{\alpha}{(n)}=\Kdh{\alpha}{(n)}$ and \begin{align*} (k,k+1)\Kdh{\alpha,i}{}&= \left\{ \begin{array}{lcl} \Kdh{\alpha,i}{} & \mbox{if} & i\neq k,k+1\\ \Kdh{\alpha,i+1}{} & \mbox{if} & i=k\\ \Kdh{\alpha,i-1}{} & \mbox{if} & i=k+1\\ \end{array} \right. \\ (k,k+1)\Tdh{ij}{\alpha}&= \left\{ \begin{array}{lcl} \Tdh{ij}{\alpha} & \mbox{if} & \{i,j\}\cap\{k,k+1\}=\emptyset\\ \Tdh{ij}{-\alpha} & \mbox{if} & \{i,j\}=\{k,k+1\}\\ \Tdh{i+1,j}{\alpha} & \mbox{if} & i=k\\ \Tdh{i-1,j}{\alpha} & \mbox{if} & i=k+1\\ \Tdh{i,j+1}{\alpha} & \mbox{if} & j=k\\ \Tdh{i,j-1}{\alpha} & \mbox{if} & j=k+1\\ \end{array} \right. \end{align*} } \item\label{it:Phi} In $\DBLHA{\sfR}{3}$, $(\Phi\KKZ)_F=1^{\otimes 3}$, where \[ (\Phi\KKZ)_F=d_2^3(F)\cdot d_2^1(F)\cdot\Phi\KKZ\cdot d_2^2(F)^{-1}d_2^0(F)^{-1} \] \item\label{it:PDE} F satisfies \begin{equation}\label{eq:2-Casimir} d F=\sum_{\alpha\in\Rp}\frac{d\alpha}{\alpha} \Bigl((\Kdh{\alpha,1}{}+\Kdh{\alpha,2}{})\cdot F-F\cdot \Kdh{\alpha}{(2)}\Bigr) \end{equation} \end{enumerate} \end{definition} \subsubsection{Compatibility with De Concini--Procesi associators}\label{ss:diffl DCP} Fix henceforth a positive, adapted family $\beta=\{x_B\}_{B\subseteq D} \subset\h^*$. Let $D$ be the Dynkin diagram corresponding to the root system $\sfR$. 
For any \mns $\F\in\Mns{D}$, let $\Psi_\F:\C\to\DCPHA{\sfR}$ be the fundamental solution of $\nablak$ corresponding to $\F$, and $\DCPA{\G}{\F}=\Psi_\G^{-1}\cdot\Psi_\F$ the corresponding De Concini--Procesi associator. Let $F_{\sfR}:\C\to\DBLHA{\sfR}{2}$ be a differential twist for $\sfR$, and set \[F_\F=(\Psi_\F^{\otimes 2})^{-1}\cdot F\cdot d_1^1(\Psi_\F)\] The following is straightforward. \begin{lemma}\label{le:diffl DCP} The following holds \begin{enumerate} \item $F_\F$ is constant on $\C$. \item $s_2^1(F_\F)=1=s_2^2(F_\F)$. \item $(\Phi\KKZ)_{F_\F}=1^{\otimes 3}$. \item $F_\F=\Phi_{\F\,\G}^{\otimes 2}\cdot F_\G\cdot d_1^1(\Phi_{\G\,\F})^{-1}$. \end{enumerate} \end{lemma} \subsubsection{Notation}\label{ss:preamble} Fix $i\in\bfI$, let $\ol{\sfR}\subset\sfR$ be the root system generated by the simple roots $\{\alpha_j\}_{j\neq i}$, $\olg\subset\g$ the subalgebra spanned by the root vectors and coroots $\{x_\alpha,\alpha^\vee\}_{\alpha\in\ol{\sfR}}$, $\olh\ess\subset\h\ess$ its essential Cartan subalgebra, and $\oll=\olg+\h\ess$ the corresponding Levi subalgebra of $\g$. Similarly, we denote by $\DCPHA{\ol{\sfR}}\subset\DCPHA{\sfR}$ the holonomy algebra of $\ol{\sfR}$. The inclusion of root systems $\ol{\sfR}\subset\sfR$ gives rise to a projection $\pi:\h\ess\to\olh\ess$ determined by the requirement that $\alpha(\pi(t))=\alpha(t)$ for any $\alpha\in\ol{\sfR}$. The kernel of $\pi$ is the line $\IC\cow{i}$ spanned by the $i$th fundamental coweight of $\h$. We shall coordinatise the fibres of $\pi$ by restricting the simple root $\alpha_i$ to them. This amounts to trivialising the fibration $\pi:\h\ess\to\ol{\h}\ess$ as $\h\ess\simeq\IC \times\ol{\h}\ess$ via $(\alpha_i,\pi)$.
The inverse of this isomorphism is given by $(w,\olmu)\to w\cow{i}+\imath (\olmu)$, where $\imath:\ol\h\to\h$ is the embedding with image $\Ker(\alpha_i)$ given by \begin{equation}\label{eq:emb i} \imath(\ol{t})=\ol{t}-\alpha_i(\ol{t})\cow{i} \end{equation} \Omit{Note that $\imath$ maps the fundamental coweights $\{\lambda_j^{\olg,\vee}\}$ of $\olg$ to their counterparts $\{\lambda_j^{\g,\vee}\}$ for $\g$, and that it differs from the embedding $\olh\subset\h$ given by the inclusion $\olg\subset\g$, which maps the coroots $\alpha^\vee$ of $\olg$ to the corresponding ones of $\g$.} Denote by \begin{equation}\label{eq:K olK} \Kdh{}{}=\sum_{\alpha\in\sfR_+}\Kdh{\alpha}{}\aand \ol{\Kdh{}{}}=\sum_{\alpha\in\ol{\sfR}_+}\Kdh{\alpha}{} \end{equation} the Casimir operators of $\g$ and $\ol{\g}$ in $\DCPHA{\sfR}$ and $\DCPHA{\ol{\sfR}}$, respectively. \subsubsection{Asymptotics of the Casimir connection for $\alpha_i\to\infty$}\label{se:Fuchs infty} Fix a vertex $i\in D$, and set $\ol{D}=D\setminus\{\alpha_i\}$. Fix $\ol{\mu}\in\ol{\h}\ess=\h_{\ol{D}}\ess$, and consider the fiber of $\pi:\h\ess\to\olh\ess$ at $\olmu$. Since the restriction of $\alpha\in\sfR$ to $\pi^{-1}(\olmu)$ is equal to $\alpha(\cow{i})\alpha_i+\alpha(\imath(\olmu))$, the restriction of the Casimir connection $\nabla$ to $\pi^{-1}(\olmu)$ is equal to \[\nabla_{i,\olmu}= d-\sum_{\alpha\in\sfR_+\setminus\ol{\sfR}} \frac{d\alpha_i}{\alpha_i-w_\alpha}\Kdh{\alpha}{}\] where $w_\alpha=-\alpha(\imath(\olmu))/\alpha(\cow{i})$. 
Set \begin{equation}\label{eq:Rmu} R_\olmu=\max\{|w_\alpha|\}_{\alpha\in\sfR\setminus\ol{\sfR}} \end{equation} \begin{proposition}\label{pr:Fuchs infty}\hfill \begin{enumerate} \item For any $\olmu\in\olh\ess$, there is a unique holomorphic function \[H_\infty:\{w\in \IP^1|\,|w|>R_\olmu\}\to\DCPHA{\sfR}\] such that $H_\infty(\infty)=1$ and, for any determination of $\log(\alpha_i)$, the function $\Upsilon_\infty=H_\infty(\alpha_i) \cdot \alpha_i^{\Kdh{}{}-\ol{\Kdh{}{}}}$ satisfies \[\left(d-\sum_{\alpha\in\sfR_+\setminus\ol{\sfR}} \frac{d\alpha_i}{\alpha_i-w_\alpha}\Kdh{\alpha}{}\right)\Upsilon_\infty =\Upsilon_\infty\,d\] \item The function $H_\infty(\alpha_i,\olmu)$ is holomorphic on the simply--connected domain $\D_\infty\subset\IP^1\times\ol{\h}$ given by \begin{equation}\label{eq:D infty} \D_\infty=\{(w,\ol\mu)|\,|w|>R_\olmu\} \end{equation} and, as a function on $\D_\infty$, $\Upsilon_\infty$ satisfies \[\left(d-\sum_{\alpha\in\sfR_+}\frac{d\alpha}{\alpha} \Kdh{\alpha}{}\right)\Upsilon_\infty= \Upsilon_\infty \left(d-\sum_{\alpha\in\ol{\sfR}_+}\frac{d\alpha}{\alpha} \Kdh{\alpha}{}\right)\] \end{enumerate} \end{proposition} \subsubsection{Asymptotics for $\alpha_i\to\infty$ and fundamental De Concini--Procesi solutions} Let $\F$ be a \mns on $D$, set $\olcalF=\F\setminus\{D\}$ and $\alpha_i=\alpha^D_\F$. Let \[\Psi_\F:\C\to\DCPHA{\sfR} \aand \Psi_{\olcalF}:\olC\to\DCPHA{\ol{\sfR}}\] be the fundamental solutions of the Casimir connection for $\sfR$ and $\ol{\sfR}=\sfR_{D\setminus\alpha_i}$ corresponding to $\F$, $\olcalF$ respectively, and a positive, adapted family $\{x_B\}_{B\subseteq D}$. Regard $\Psi_{\olcalF}$ as being defined on $\C$ via the projection $\pi:\h\ess\to\olh\ess$. The result below expresses $\Psi_\F$ in terms of $\Psi_{\olcalF}$ and the solution $\Upsilon_\infty$ given by Proposition \ref {pr:Fuchs infty}. 
\begin{proposition}\label{pr:infty factorisation} The following holds \[\Psi_\F= \Upsilon_\infty\cdot\Psi_{\olcalF}\cdot x_D(\cow{i})^{\Kdh{}{}-\ol{\Kdh{}{}}}\] \end{proposition} \subsection{Differential twists and braided quasi--Coxeter structures} \subsubsection{Relative twists}\label{ss:relative twist} Let $F=F_{\sfR}$ be a differential twist for $\sfR$, $\alpha_i\in D$ a simple root, and $\Upsilon_\infty$ the solution of the Casimir equations given by Proposition \ref{pr:Fuchs infty}, where we are using the standard determination of $\log$. Define $F_\infty:\C\to\DBLHA{\sfR}{2}$ by \[F_\infty=(\Upsilon_\infty^{\otimes 2})^{-1}\cdot F\cdot d_1^1(\Upsilon_\infty)\] Then, $F_\infty$ satisfies \begin{enumerate} \item $s_2^1(F_\infty)=1=s_2^2(F_\infty)$. \item \[ F_\infty=1+f\mod(\DBLHA{\sfR}{2})_{\geq 2} \] where $f\in(\DBLHA{\sfR}{2})_1$ satisfies $\Alt_2 f=\ol{r}$. \footnote{$\ol{r}=\sum_{\alpha\in\ol{\sfR}_+}\Tdh{}{\alpha}-\Tdh{}{-\alpha}$.} \item $(\Phi\KKZ)_{F_\infty}=1^{\otimes 3}$. \item \[d F_\infty=\sum_{\alpha\in\ol{\sfR}_+}\frac{d\alpha}{\alpha} \Bigl((\Kdh{\alpha,1}{}+\Kdh{\alpha,2}{})\cdot F_\infty-F_\infty\cdot \Kdh{\alpha}{(2)}\Bigr)\] \end{enumerate} Let $\olC$ be the complexified chamber of $\olg$, and $\olF=F_{\ol{\sfR}}: \olC\to\DBLHA{\ol{\sfR}}{2}$ a differential twist for $\ol{\sfR}$. Since the projection $\pi:\h\ess\to\olh\ess$ maps $\C$ to $\olC$, we may regard $\olF$ as a function on $\C$, and define $F'_{(D;\alpha_i)}:\C\to\DBLHA{\sfR}{2}$ by \[F'_{(D;\alpha_i)}=\olF^{-1}\cdot F_\infty\] \begin{proposition}\label{pr:relative twist} $F'_{(D;\alpha_i)}$ satisfies the following properties \begin{enumerate} \item $s_2^1(F'_{(D;\alpha_i)})=1=s_2^2(F'_{(D;\alpha_i)})$. \item \[ F'_{(D;\alpha_i)}=1+f\mod(\DBLHA{\sfR}{2})_{\geq 2} \] where $f\in(\DBLHA{\sfR}{2})_1$ satisfies $\Alt_2 f=\wt{r}_{D}-\wt{r}_{D\setminus\alpha_i}$ \footnote{For every $B\subset D$, we set $\wt{r}_B=\sum_{\alpha\in\sfR_B}\Tdh{}{\alpha}-\Tdh{}{-\alpha}$.}. 
\item $(\Phi\KKZ)_{F'_{(D;\alpha_i)}}=\Phi_{D\setminus\alpha_i}$. \item \[d F'_{(D;\alpha_i)}=\sum_{\alpha\in\ol{\sfR}_+}\frac{d\alpha}{\alpha} [\Kdh{\alpha}{(2)},F'_{(D;\alpha_i)}]\] In particular, if $F'_{(D;\alpha_i)}$ is invariant under $\DCPHA{\ol{\sfR}}$, then it is constant on $\C$. \end{enumerate} \end{proposition} \subsubsection{Centraliser property} Let $\{F_B\}$ be a collection of differential twists for the subsystems $\sfR_B \subset\sfR$, where $B$ is a subdiagram of $D$, such that if $B$ has connected components $\{B_i\}$, then $F_B=\prod_i F_{B_i}$. \begin{definition} The collection $\{F_B\}$ has the {\it centraliser property} if, for any $\alpha \in B\subseteq D$, the relative twist $F'_{(B;\alpha)}$ defined in \ref{ss:relative twist} is invariant under $\DCPHA{B\setminus\alpha}$ (and in particular constant). \end{definition} \subsubsection{Factorisation}\label{ss:factorisation} Let $\{F_B\}_{B\subseteq D}$ be a collection of differential twists with the centraliser property. For any $\alpha_i\in B\subseteq D$, set \begin{equation}\label{eq:reltwist} F_{(B;\alpha_i)}= \left(x_B(\cow{i})^{-(\Kdh{B}{}-\Kdh{B\setminus\alpha_i}{})}\right)^{\otimes 2} \cdot F'_{(B;\alpha_i)}\cdot d_1^1\left(x_B(\cow{i})^{\Kdh{B}{}-\Kdh{B\setminus\alpha_i}{}}\right) \end{equation} where $F'_{(B;\alpha_i)}\in\DBLHA{B}{2}$ is the relative twist defined in \ref{ss:relative twist}, and $\{x_B\}_{B\subseteq D}$ is a positive, adapted family. The (constant) twist $F_{(B;\alpha_i)}$ is invariant under $\DCPHA{B\setminus\alpha_i}$, and has the properties (1)--(4) given in Proposition \ref{pr:relative twist}. The following lemma is a direct consequence of Proposition \ref{pr:infty factorisation}. \begin{lemma}\label{le:factorisation} Let $\F$ be a \mns on $D$, and $F_\F$ the twist defined in \ref{ss:diffl DCP}.
Then, the following holds \[F_\F=\stackrel{\longrightarrow}{\prod_{B\in\F}}F_{(B;\alpha^B_\F)}\] where the product is taken with $F_{(B;\alpha^B_\F)}$ to the right of $F_{(C;\alpha^C_\F)}$ if $B\supset C$. \end{lemma} \subsubsection{Weak braided quasi--Coxeter structure} The following is a direct consequence of Proposition \ref{pr:relative twist}, Lemma \ref{le:factorisation}, and \ref{ss:diffl DCP}. \begin{proposition} Let $\{F_B:\C\to\DBLHA{B}{2}\}$ be a collection of differential twists satisfying the centraliser property. Then the elements $\{F_{(B;\alpha)}\}$ defined in \eqref{eq:reltwist} satisfy \begin{enumerate} \item $s_2^1(F_{(B;\alpha_i)})=1=s_2^2(F_{(B;\alpha_i)})$. \item \[ F_{(B;\alpha_i)}=1+f\mod(\DBLHA{\sfR}{2})_{\geq 2} \] where $f\in(\DBLHA{\sfR}{2})_1$ satisfies $\Alt_2 f=\wt{r}_{B}-\wt{r}_{B\setminus\alpha_i}$. \item $(\Phi\KKZ)_{F_{(B;\alpha_i)}}=\Phi_{B\setminus\alpha_i}$. \item For any two $\F,\G\in\Mns{D}$, \[ F_{\F}=\DCPA{\F}{\G}^{\ten 2}\cdot F_{\G}\cdot d_1^1(\DCPA{\G}{\F}) \] where $F_{\F}, F_{\G}$ are defined as in Lemma \ref{le:factorisation}. \end{enumerate} In particular, the twists $\{F_{(B;\alpha_i)}\}$ define a weak braided quasi--Coxeter structure on $\DCPHA{\sfR}$. \end{proposition} \subsection{The fusion operator} \subsubsection{Dynamical KZ equation}\label{sss:DKZ2} \Omit{ Let $\gX_2=\IC^2\setminus\{z_1=z_2\}$ be the configuration space of two points in $\IC$.
Fix $z_2=0$, set $z=z_1$, and consider the following connection on the trivial bundle over $\IC^{\times}$ with fiber $\DBLHA{\sfR}{2}$} The dynamical KZ equation is the connection on the trivial bundle over $\IC^{\times}$ with fiber $\DBLHA{\sfR}{2}$ given by \begin{equation}\label{eq:DKZ2} \nabla_z=d-\left(\frac{\Tdh{}{}}{z}+\sfad{\mu^{(1)}}\right)dz \end{equation} \subsubsection{The joint KZ--Casimir system} The following connection on the trivial bundle over $\IC^{\times} \times\h\reg\ess$ with fiber $\DBLHA{\sfR}{2}$ gives a non--trivial coupling of the KZ equations and the Casimir connection. \[\nabla=d-\left(\frac{\Tdh{}{}}{z}+\sfad{\mu^{(1)}}\right)dz -\sum_{\alpha\in\sfR_+}\Kdh{\alpha}{(2)}\frac{d\alpha}{\alpha} -z\sfad{d\mu^{(1)}}\] The (exact) coupling term $\sfad{\mu^{(1)}}dz+z\sfad{d\mu^{(1)}}= d(z\sfad{\mu^{(1)}})$ was introduced in \cite{FMTV} for equations with values in $U\g^{\otimes 2}$, and the corresponding connection was shown there to be flat. \subsubsection{Fundamental solution of the dynamical KZ equation near $z=0$} The following is straightforward.
\begin{proposition}\label{pr:Fuchs 0}\hfill \begin{enumerate} \item For any $\mu\in\h$, there is a unique holomorphic function $H_0:\IC\to\DBLHA{\sfR}{2}$ such that $H_0(0,\mu)\equiv 1$ and, for any determination of $\log(z)$, the $\sfEnd{\DBLHA{\sfR}{2}}$--valued function $\Upsilon_0(z,\mu)=e^{z\sfad\muone}\cdot H_0(z,\mu)\cdot z^{\Tdh{}{}}$ satisfies \[\left(d_z-\left(\frac{\Tdh{}{}}{z}+\ad\muone\right) dz \right)\Upsilon_0=\Upsilon_0\,d_z\] \item $H_0$ and $\Upsilon_0$ are holomorphic functions in $\mu$, and $\Upsilon_0$ satisfies \[\left(d_\h-\sum_{\alpha\in\Phi_+}\frac{d\alpha}{\alpha} \Kdh{\alpha}{(2)}-z\sfad(d\muone)\right)\Upsilon_0= \Upsilon_0\left(d_\h-\sum_{\alpha\in\Phi_+}\frac{d\alpha}{\alpha} \Kdh{\alpha}{(2)}\right)\] \end{enumerate} \end{proposition} \subsubsection{Fundamental solutions of the dynamical KZ equations near $z=\infty$}\label{ss:fund solution} The connection \eqref{eq:DKZ2} has an irregular singularity of Poincar\'e rank one at $z=\infty$. Let $\IH_\pm=\{z\in\IC|\,\Ima(z)\gtrless 0\}$. Let \[\C=\{t\in\h_\IR|\,\alpha(t)>0\text{ for any }\alpha\in\Rp\}\] be the fundamental Weyl chamber of $\h$. Set $\imath=\sqrt{-1}$. \begin{theorem}[\cite{vtl-6}]\label{th:Hn} For any $\mu\in\C$, there is a unique holomorphic function \[H_\pm:\IH_\pm\to\DBLHA{\sfR}{2}\] such that $H_\pm(z)$ tends to $1$ as $z\to\infty$ in any sector of the form $|\arg(z)|\in(\delta,\pi-\delta)$, $\delta>0$ and, for any determination of $\log(z)$, the $\sfEnd{\DBLHA {\sfR}{2}}$--valued function \[\Psi_\pm(z)=H_\pm(z)\cdot e^{z\sfad(\mu^{(1)})}\cdot z^{\Tdh{}{0}}\] is a fundamental solution of the dynamical KZ connection $\nabla_z$. 
\end{theorem} \subsubsection{From the fusion operator to the differential twist}\label{ss:diffqC} Let $\Psi_{\pm}$ be the fundamental solutions of $\nabla_z$ given by Theorem \ref{ss:fund solution}, and define the {\it fusion operators} $J_{\pm}:\IH_{\pm} \times\C\to \DBLHA{\sfR}{2}$ by \[J_{\pm}(z,\mu)=\Psi_{\pm}(z,\mu)(1)\] \begin{theorem}[\cite{vtl-6}] Each of the holomorphic maps $\ol{J}_{\pm}:\C\to \DBLHA{\sfR}{2}$ defined by \[\ol{J}_{\pm}=\Upsilon_0^{-1}\cdot J_{\pm}\] defines a differential twist for $\sfR$ satisfying the centraliser property. In particular, there is a weak quasi--Coxeter structure on $\DCPHA{\sfR}$ interpolating the monodromy of the KZ--Casimir system. \end{theorem} \Omit{ Section 5: Universal quasi-Coxeter structures and quantum groups - Braided quasi-Coxeter structure on Uhg - A generalized EK equivalence - The universal Casimir algebra - Universal quasi-Coxeter structure - Thm 5: The qC structure on Uhg is universal. } \section{Quantum groups and universal quasi--Coxeter structures}\label{se:Uhg} In this section, we point out that the universal $R$--matrix of the Levi subalgebras of the quantum group $\Uhg$ and its quantum Weyl group operators give rise to a braided \qc structure on the category $\Ohint$ of integrable, highest weight modules. We then review the fact that this structure can be transferred to $\Oint$ following \cite{ATL1}. The corresponding structure is universal, although in a different sense than that of Section \ref{s:diff-braid-qC}. It arises from a PROP introduced in \cite{ATL2} which describes a bialgebra endowed with a root decomposition, together with a collection of Drinfeld--Yetter modules. The two different notions of universality will be related in Section \ref{se:main}. \subsection{$\PROP$s}\label{s:props} We briefly review the notion of \emph{product-permutation category} ($\PROP$), Drinfeld--Yetter modules of Lie bialgebras, and their \propic description. 
For more details, we refer the reader to \cite{ek-2, ATL2}. Let $\sfk$ be a field of characteristic zero. A $\PROP$ $(\C,S)$ is the datum of \begin{itemize} \item a strict, symmetric monoidal $\sfk$--linear category $\C$ whose objects are the non--negative integers, such that $[n]\ten[m]=[n+m]$. In particular, $[n]=[1]^{\ten n}$ and $[0]$ is the unit object; \item a bigraded set $S=\bigsqcup_{m,n\in\IZ_{\geq0}}S_{nm}$ of morphisms of $\C$, with \[S_{nm}\subset\C([n],[m])\] \end{itemize} such that any morphism in $\C$ can be obtained by composition, tensor product, or linear combination over $\sfk$ of the morphisms in $S$ and the permutation maps $\sfk\SS_n\subset{\C}([n],[n])$. Every $\PROP$ $(\C,S)$ has a presentation in terms of generators and relations. Let $\F_S$ be the $\PROP$ freely generated over $S$. There is a unique symmetric tensor functor $\F_S\to\C$, and $\C$ has the form $\F_S/\I$, where $\I$ is a tensor ideal in $\F_S$ (\ie a collection of subspaces $\I_{nm}\subset{\C}([n],[m])$, such that the composition or tensor product (in any order) of any morphism in $\C$ with any morphism in $\I$ is still in $\I$).
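Concretely, and in our own phrasing (this unpacking is standard, but the notation $G$, $V$ is not taken from the above), a $\sfk$--linear symmetric tensor functor $G:\F_S/\I\to\vect_{\sfk}$ amounts to a vector space $V=G([1])$ together with linear maps
\[
G(s)\colon V^{\ten n}\longrightarrow V^{\ten m},\qquad s\in S_{nm}
\]
such that the maps obtained from the morphisms in $\I$ vanish. In this sense, a module over a $\PROP$ is a vector space on which the generating operations act subject to the defining relations.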
\subsubsection{Examples} \begin{itemize} \item {\bf Associative algebras.} Let $\preAlg$ be the $\PROP$ $\F_S/\I$, where the set $S$ consists of two elements $\iota\in S_{0,1}$ (the unit) and $m\in S_{2,1}$ (the multiplication), and $\I$ is the ideal generated by the relations \begin{align*} &m\circ(m\ten\id)=m\circ(\id\ten m)\\ &m\circ(\iota\ten\id)=\id=m\circ(\id \ten\iota) \end{align*} \item {\bf Lie algebras.} Let $\preLA$ be the $\PROP$ generated by the set $S$ consisting of one element $\mu\in S_{2,1}$ (the bracket) subject to the relations \begin{equation}\label{eq:bracket} \mu+\mu\circ(21)=0\qquad \mu\circ(\mu\ten\id)\circ((123)+(312)+(231))=0 \end{equation} \end{itemize} \subsubsection{The $\PROP$s $\LCA$ and $\LBA$}\label{ss:exa} The $\PROP$ of Lie coalgebras $\LCA$ is generated by a morphism $\delta$ in bidegree $(1,2)$ with relations dual to \eqref{eq:bracket}, namely \begin{equation}\label{eq:cobracket} \delta+(21)\circ\delta=0 \aand ((123)+(312)+(231))\circ(\delta\ten\id)\circ \delta=0 \end{equation} The $\PROP$ of Lie bialgebras $\LBA$ is generated by $\mu$ in bidegree $(2,1)$ and $\delta$ in bidegree $(1,2)$ satisfying \eqref{eq:bracket}, \eqref{eq:cobracket}, and the \emph{cocycle condition} \begin{equation}\label{eq:cocycon} \delta\circ\mu=(\id-(21))\circ\id\ten\mu\circ\delta\ten\id \circ(\id-(21)) \end{equation} \subsubsection{Drinfeld doubles} Let $(\a,[\cdot,\cdot]_{\a},\delta_{\a})$ be a Lie bialgebra over $\sfk$. The Drinfeld double $\ga$ of $\a$ is the Lie algebra defined as follows. As a vector space, $\ga=\a\oplus\a^*$. The pairing $\iip{\cdot}{\cdot}:\a\ten\a^*\to\sfk$ extends uniquely to a symmetric, non--degenerate bilinear form on $\ga$, such that $\a,\a^*$ are isotropic subspaces. The Lie bracket on $\ga$ is then defined as the unique bracket compatible with $\iip{\cdot} {\cdot}$, \ie such that \[ \iip{[x,y]}{z}=\iip{x}{[y,z]} \] for all $x,y,z\in\ga$.
It coincides with $[\cdot,\cdot]_{\a}$ on $\a$, and with the bracket induced by $\delta_{\a}$ on $\a^*$. The mixed bracket for $a\in\a,b\in\a^*$ is equal to \[ [a,b]=\sfad^*(b)(a)-\sfad^*(a)(b) \] where $\sfad^*$ denotes the coadjoint action of $\a^*$ on $\a$ and of $\a$ on $\a^*$, respectively. The Lie algebra $\ga$ is a (topological) quasitriangular Lie bialgebra, with cobracket $\delta=\delta_{\a}\oplus(-\delta_{\a^*})$, where $\delta_{\a^*}$ is the (topological) cobracket on $\a^*$ induced by $[\cdot,\cdot]_{\a}$, and $r$--matrix $r\in\ga\ctp\ga$ corresponding to the identity in $\End(\a)\simeq\a\ctp\a^*\subset\ga\ctp\ga$. Explicitly, if $\{a_i\}_{i\in I},\{b^i\}_{i\in I}$ are dual bases of $\a$ and $\a^*$ respectively, then $r=\sum_{i\in I}a_i\ten b^i\in\a\ctp\a^*$. \subsubsection{Drinfeld--Yetter modules}\label{sss:DYEq} A Lie bialgebra $(\a,[\cdot,\cdot]_{\a}, \delta_{\a})$ has a natural category of representations called Drinfeld--Yetter modules, and denoted $\DrY{\a}$. A triple $(V,\pi,\pi^*)$ is a Drinfeld--Yetter module on $\a$ if $(V,\pi)$ is an $\a$--module, that is the map $\pi:\a\otimes V\to V$ satisfies \begin{equation}\label{eq:action} \pi\circ\mu=\pi\circ(\id\ten\pi)-\pi\circ(\id\ten\pi)\circ(21) \end{equation} $(V,\pi^*)$ is an $\a$--comodule, that is the map $\pi^*:V\to \a\otimes V$ satisfies \begin{equation}\label{eq:coaction} \delta\circ\pi^*=(21)\circ(\id\ten\pi^*)\circ\pi^*-(\id\ten\pi^*)\circ\pi^* \end{equation} and the maps $\pi,\pi^*$ satisfy the following compatibility condition in $\End(\a\ten V)$: \begin{equation}\label{eq:actcoact} \pi^*\circ\pi-\id\ten\pi\circ(12)\circ\id\ten\pi^*=[\cdot,\cdot]_{\a}\ten\id\circ\id\ten\pi^*- \id\ten\pi\circ\delta_{\a}\ten\id \end{equation} The category $\DrY{\a}$ is a symmetric tensor category, and is equivalent to the category $\Eq{\ga}$ of equicontinuous $\ga$--modules \cite{ek-1}. 
A $\ga$--module is equicontinuous if the action of the $b^i$'s is locally finite, \ie for every $v\in V$, $b^i\cdot v=0$ for all but finitely many $i\in I$. In particular, given $(V,\pi)\in\Eq{\ga}$, the coaction of $\a$ on $V$ is given by \[ \pi^*(v)=\sum_{i}a_i\ten b^i\cdot v\in\a\ten V \] where the equicontinuity condition ensures that the sum is finite. The action of the $r$--matrix of $\ga$ on the tensor product $V\ten W\in \Eq{\ga}$ corresponds, under the identification $\Eq{\ga}\simeq\DrY{\a}$, with the map $r_{VW}:V\ten W\to V\ten W$ given by \begin{equation}\label{eq:rmatrix} r_{VW}=\pi_V\ten\id\circ\,(12)\,\circ\id\ten\pi^*_W \end{equation} \subsection{The Casimir algebra}\label{s:casimiralg} We now review the construction of the \emph{universal Casimir algebra} $\mathfrak{U}_{\sfR}$ introduced in \cite{ATL2}. For some generalities on $\PROP$s and Lie bialgebras, we refer the reader to \ref{s:props}. \subsubsection{The Casimir category} Let $\sfR$ be a fixed root system of rank $l$ with simple roots $\{\alpha_1,\dots, \alpha_l\}$, and $\sfR_+\subset\sfR$ the corresponding positive roots.\\ Let $\wt{\LBA}_{\sfR}$ be the $\PROP$ with generators $\mu\in S_{21},\delta\in S_{12}, \pi_{\alpha}\in S_{11},\alpha\in\Rp$.
The morphisms $\mu$ and $\delta$ satisfy the relations \begin{equation}\label{eq:bracket2} \mu+\mu\circ(21)=0 \qquad \mu\circ(\mu\ten\id)\circ((123)+(312)+(231))=0 \end{equation} \begin{equation}\label{eq:cobracket2} \delta+(21)\circ\delta=0 \qquad ((123)+(312)+(231))\circ(\delta\ten\id)\circ \delta=0 \end{equation} and are related by the cocycle condition \begin{equation}\label{eq:cocycon2} \delta\circ\mu=(\id-(21))\circ\id\ten\mu\circ\delta\ten\id \circ(\id-(21)) \end{equation} The morphisms $\{\pi_{\alpha}\}_{\alpha\in\sfR_+}$ satisfy \begin{equation}\label{eq:idemR2} \pi_{\alpha}\circ\pi_{\beta}=\delta_{\alpha,\beta}\pi_{\alpha} \end{equation} The compatibility condition with $\mu$ and $\delta$ is given by the relations \begin{equation}\label{eq:projbraR2} \pi_{\alpha}\circ\mu=\sum_{\beta+\gamma=\alpha}\mu\circ(\pi_{\beta}\ten\pi_{\gamma}) \end{equation} and \begin{equation}\label{eq:projcobraR2} \delta\circ\pi_{\alpha}=\sum_{\beta+\gamma=\alpha}(\pi_{\beta}\ten\pi_{\gamma})\circ\delta \end{equation} We wish to impose the additional completeness relation \begin{equation}\label{eq:completeness2} \sum_{\alpha\in\Rp}\pi_\alpha=\id_{[1]} \end{equation} To this end, let $p\in\IN$, and denote by $\sfk[\Rp^p]\fin$ the algebra of functions on $\Rp^p$ with finite support. The vector space $\wt{\mathsf{LBA}}_{\sfR}([p],[q])$ is naturally a $(\sfk[\Rp^q]\fin,\sfk[\Rp^p]\fin)$--bimodule. More specifically, let $\ul{\alpha}=(\alpha_{i_1},\dots, \alpha_{i_p})\in\Rp^p$ and let $\delta_{\ul{\alpha}}\in\sfk[\Rp^p]\fin$ be the characteristic function of $\ul{\alpha}$.
Then $\delta_{\ul{\alpha}}$ acts on $\wt{\mathsf{LBA}}_{\sfR}([p],[q])$ by precomposition with the map \[ \pi_{\ul{\alpha}}=\bigotimes_{k=1}^p\pi_{\alpha_{i_k}}:[p]\to[p] \] Similarly for $\sfk[\sfR_+^q]$.\\ Let $\mathsf{LBA}_{\sfR}$ be the (topological) $\PROP$ with morphisms \begin{equation} \mathsf{LBA}_{\sfR}([p],[q])= \sfk[\Rp^p]\ten_{\sfk[\Rp^p]\fin}\wt{\mathsf{LBA}}_{\sfR}([p],[q]) \ten_{\sfk[\Rp^q]\fin}\sfk[\Rp^q] \end{equation} Then, the function ${\bf 1}\in\sfk[\Rp^p]$ acts on $[p]$ as $\id_{[p]}$. \begin{definition} The Casimir category $\LBA_{\sfR}$ is the Karoubi envelope of the (topological) $\PROP$ $\mathsf{LBA}_{\sfR}$. \end{definition} \begin{example} Let $\g$ be a Kac--Moody algebra with root system $\sfR$. Then the positive Borel subalgebra $\b=\h\oplus\bigoplus_{\alpha\in\sfR_+}\g_\alpha$ is an $\LBA_{\sfR}$--module. \end{example} \subsubsection{The Casimir algebra}\label{ss:dyclba} \begin{definition} The category $\DrY{\sfR}^n$, $n\geq0$, is the multicolored $\PROP$ generated by $n+1$ objects $\ACDY{1}$ and $\VCDY{k}$, $k=1,\dots,n$, and morphisms \[ \mu:\ACDY{2}\to\ACDY{1}\qquad\delta:\ACDY{1}\to\ACDY{2}\qquad \pi_{\alpha}:\ACDY{1}\to\ACDY{1}, \alpha\in\Rp \] \[ \pi_k:\ACDY{1}\ten \VCDY{k}\to \VCDY{k}\qquad \pi_k^*:\VCDY{k}\to\ACDY{1}\ten \VCDY{k} \] such that $(\ACDY{1},\mu,\delta,\pi_{\alpha})$ is an $\LBA_{\sfR}$--module in $\DrY{\sfR}^n$, and, for every $k=1,\dots, n$, $(\VCDY{k},\pi_k,\pi_k^*)$ is a Drinfeld--Yetter module over $\ACDY{1}$. \end{definition} \begin{definition} The $n$--Casimir algebra is the algebra of endomorphisms \[ \PCU{\sfR}{n}=\mathsf{End}_{\DrY{\sfR}^n}\left(\bigotimes_{k=1}^n\VCDY{k}\right) \] \end{definition} \subsubsection{Casimir algebra and completions}\label{sss:CAandcomp} Let $\g$ be a \KM algebra with root system $\sfR$, $\b=\h\oplus\bigoplus_{\alpha\in\sfR_+}\g_{\alpha}$ the corresponding positive Borel subalgebra, and $\gb=\g\oplus\h$ the Drinfeld double of $\b$.
Let $\DrY{\b}$ be the category of Drinfeld--Yetter modules over $\b$. For any $n$--tuple $\{V_k,\pi_k,\pi_k^*\}_{k=1}^n$ of Drinfeld--Yetter $\b$--modules, there is a unique tensor functor \[\G_{(\b,V_1,\dots, V_n)}:\DrY{\sfR}^{n}\longrightarrow\vect_{\sfk}\] such that $\ADY{1}\mapsto\b,\VDY{k}\mapsto V_k$. \begin{proposition}\label{pr:universal action} Let $\ff\hspace{-0.1cm}:\DrY{\b}\to\vect_{\sfk}$ be the forgetful functor, and $\U_{\b}^n=\sfEnd{\ff^{\boxtimes n}}$. The functors $\G_{(\b,V_1,\dots, V_n)}$ induce an algebra homomorphism \[ \rho^n_{\b}:\PCU{\sfR}{n} \to\U_{\b}^n \] \end{proposition} \subsubsection{Cosimplicial structure} The tower of algebras $\{\U_{\b}^n\}$ is naturally endowed with a cosimplicial structure, as described in \ref{sss:cosimpUg}. This can be lifted to the algebras $\PCU{\sfR}{n}$. For every $n\geq1$ and $i=0,1,\dots, n+1$, there are faithful functors \[ \D_n^{(i)}:\DY{n}\to\DY{n+1} \] given by \[ \D_n^{(i)}= \left\{ \begin{array}{ll} \G_{([1],\VDY{2},\dots,\VDY{n+1})} & i=0 \\ \G_{([1],\VDY{1},\dots, \VDY{i}\ten\VDY{i+1},\dots, \VDY{n+1})} & 1\leq i\leq n\\ \G_{([1], \VDY{1}, \dots, \VDY{n})} & i=n+1 \end{array} \right. \] These induce algebra homomorphisms \[ \Delta_n^{(i)}:\PCU{\sfR}{n}\to\PCU{\sfR}{n+1} \] which are universal analogues of the insertion/coproduct maps on $U\gb^{\otimes n}$. They give the tower of algebras $\{\PCU{\sfR}{n}\}_{n\geq0}$ a cosimplicial structure. The morphisms $\rho_{\b}^n:\PCU{\sfR}{n}\to\U_{\b}^n$ defined in \ref{sss:CAandcomp} are compatible with the face morphisms. \subsubsection{A basis of $\PCU{\sfR}{n}$} For any $p\in\IN$ and $\ul{p}=(p_1,\ldots,p_n)\in\IN^n$ such that $|\ul{p}| =p_1+\cdots+p_n=p$, define the maps \begin{equation}\label{eq:higher pi} \pi^{(\ul{p})}:[p]\ten\bigotimes_{k=1}^n\VDY{k}\to\bigotimes_{k=1}^n\VDY{k} \end{equation} as the ordered composition of $p_i$ actions on $\VDY{i}$. 
Similarly for \begin{equation}\label{eq:higher pi*} {\pi^*}^{(\ul{p})}:\bigotimes_{k=1}^n\VDY{k}\to[p]\ten\bigotimes_{k=1}^n\VDY{k} \end{equation} The following provides an explicit basis of the algebra $\PCU{\sfR}{n}=\pEnd{\DrY{\sfR}^n}{\ten \VCDY{k}}$. \begin{proposition} The set of endomorphisms of $\VDY{1}\ten\cdots\ten\VDY{n}$ \[ \colarch{\ul{N}}{\ul{\alpha}}{\sigma}{\ul{N}'}= \pi^{(\ul{N})}\circ\pi_{\ul{\alpha}}\ten\id\circ\;\sigma\ten\id\circ{\pi^*}^{(\ul{N}^\prime)} \] for $N\geq0$, $\sigma\in\SS_N$, $\ul{\alpha}\in\Rp^N$, $\ul{N},\ul{N}^\prime\in\IN^n$ such that $|\ul{N}|=N=|\ul{N}^\prime|$, is a basis of $\PCU{\sfR}{n}$. \end{proposition} \subsubsection{A diagrammatic representation of the morphisms in $\DrY{\sfR}^{n}$, $n=1$} The endomorphisms of $\VDY{1}\in\DrY{\sfR}^1$ given by \[ \colarch{N}{\ul{\alpha}}{\sigma}{N}=\pi^{(N)}\circ\pi_{\ul{\alpha}}\ten\id\circ \sigma\ten\id\circ\;{\pi^*}^{(N)} \] for $N\geq 0$ and $\sigma\in\SS_N$ form a basis of $\PCU{\sfR}{}=\pEnd{\DrY{\sfR}^1}{\VDY{1}}$.\\ We represent the morphisms $\mu,\delta,\pi,\pi^*$ in $\DY{1}$ with the oriented diagrams \[ \xy (0,0)*{ \xy (-3,0)*{\mu:}; (5,0);(10,0)**\dir{-}?(.5)*\dir{>}; (0,5);(5,0)**\dir{-}?(.5)*\dir{>}; (0,-5);(5,0)**\dir{-}?(.5)*\dir{>}; \endxy }; (30,0)*{ \xy (-3,0)*{\delta:}; (0,0);(5,0)**\dir{-}?(.5)*\dir{>}; (5,0);(10,5)**\dir{-}?(.5)*\dir{>}; (5,0);(10,-5)**\dir{-}?(.5)*\dir{>}; \endxy } \endxy \] and \[ \xy (30,5)*{ \xy (-3.5,.5)*{\pi^*:}; (0,0);(10,0)**\dir{-}; (5,0);(10,5)**\dir{-}?(.5)*\dir{>}; \endxy }; (0,5)*{ \xy (-3,0)*{\pi:}; (0,0);(10,0)**\dir{-}; (0,5);(5,0)**\dir{-}?(.5)*\dir{>}; \endxy } \endxy \] The morphisms $\pi_{\alpha}$ are represented by \[ \xy (-14,10)*{\pi_{\alpha}:}; (-10,10);(-2,10)**\dir{-}; (10,10);(2,10)**\dir{-}; (0,10)*{\alpha};(0,10)*\xycircle(2,2){-}="s"; \endxy \] A non--trivial endomorphism of $\VDY{1}$ is represented as a linear combination of oriented diagrams, necessarily starting with a coaction and ending with an action.
The compatibility relation \eqref{eq:actcoact} \begin{align*} &\\ &\actcoact\\ \end{align*} allows one to reorder $\pi$ and $\pi^*$. The cocycle condition \eqref{eq:cocycon} allows one to reorder brackets and cobrackets as in $\LBA$. Finally, the relations \eqref{eq:action}, \eqref{eq:coaction}\\ \begin{align*} &\module\\ &\comodule\\ \end{align*} allow one to remove every $\mu$ and every $\delta$ involved from the diagram. It follows that every endomorphism of $\VDY{1}$ is a linear combination of the elements $\colarch{N}{\ul{\alpha}}{\sigma}{N}$ \[ \arch{\varphi_{\ul{\alpha},\sigma}}{N}{25} \] where $\varphi_{\ul{\alpha},\sigma}=\sigma\circ\pi_{\ul{\alpha}}$, for some $N\geq 0$, $\sigma\in\SS_N$, and $\ul{\alpha}\in\sfR_+^N$. \subsubsection{The $r$--matrix and the Casimir element} The equivalence between the category of Drinfeld--Yetter $\b$--modules and the category of {equicontinuous} $\gb$--modules gives an isomorphism between the algebra $\U_{\b}^n$ and a completion of the universal enveloping algebra $U\gb^{\ten n}$. In particular, under this identification, the action of the classical $r$--matrix of $\gb$ on a tensor product $V\ten W$ of equicontinuous $\gb$--modules is identified with the endomorphism \[ \pi_V\ten\id\circ\sigma_{12}\circ\id\ten\pi^{*}_W \] and the action of the normally ordered Casimir on $V$ is identified with the endomorphism \[ \pi_V\circ\pi^*_V \] These special elements belong in fact to the image of $\rho^n_{\b}:\PCU{\sfR}{n}\to\U_{\b}^n$.
In particular, the $r$--matrix is the image of the element of $\PCU{\sfR}{2}$ \[ \xy (-5,-5)*{\VDY{1}}; (-5,-10)*{\VDY{2}}; (0,-5);(20,-5)**\dir{-}; (0,-10);(20,-10)**\dir{-}; (10,0)*{}="A"; (5,-10);(15,-5)**\crv{(10,5)}?(.65)*\dir{>}; \endxy \] and the normally ordered Casimir is the image of the element of $\PCU{\sfR}{}$ \[ \xy (-5,-5)*{\VDY{1}}; (0,-5);(20,-5)**\dir{-}; (10,0)*{}="A"; (5,-5);(15,-5)**\crv{(10,5)}?(.5)*\dir{>}; \endxy \] Moreover, the elements $\Tu{}{\alpha}\in\U_{\b}^2$, $\alpha\in\sfR\cup\{0\}$ correspond to the diagrams \[ \xy (-5,-5)*{\VDY{1}}; (-5,-10)*{\VDY{2}}; (0,-5);(20,-5)**\dir{-}; (0,-10);(20,-10)**\dir{-}; (9,2)*{\alpha};(9,2)*\xycircle(2,2){-}="A"; (5,-10);"A"**\crv{(5,2)}?(.5)*\dir{>}; (15,-5);"A"**\crv{(12,2)}?(.25)*\dir{<}; \endxy \] and the elements $\Ku{\alpha}{+}\in\U_{\b}$, $\alpha\in\sfR_+$ correspond to the diagrams \[ \xy (-5,-5)*{\VDY{1}}; (0,-5);(20,-5)**\dir{-}; (10,2)*{\alpha};(10,2)*\xycircle(2,2){-}="A"; (5,-5);"A"**\crv{(5,5)}?(.25)*\dir{>}; "A";(15,-5)**\crv{(15,5)}?(.75)*\dir{>}; \endxy \] \subsubsection{$D$--algebra structure} Let $D$ be the Dynkin diagram associated to the root system $\sfR$. We associate to any subdiagram $B\subset D$ the root subsystem $\sfR_{B}\subset\sfR$, and the corresponding subalgebra $\PCU{B}{}=\PCU{\sfR_B}{}\subset \PCU{\sfR}{}$. \begin{proposition} The collection of subalgebras $\{\PCU{B}{}\}_{B\subset D}$ defines a $D$--algebra structure on $\mathfrak{U}_{\sfR}$. \end{proposition} \subsubsection{Weak quasi--Coxeter structures on $\PCU{\sfR}{}$} \begin{definition} A \emph{weak quasi--Coxeter structure} on $\PCU{\sfR}{}$ is the datum of \begin{enumerate} \item for each connected subdiagram $B\subseteq D$, an $R$--matrix $R_B\in\PCU{B}{2}$ and an associator $\Phi_B\in\PCU{B}{3}$ of the form \[R_B=e^{\Tp{B}{}/2} \aand \Phi_B=\Phi'_B(\Tp{B,12}{},\Tp{B,23}{})\] where $\Phi'_B$ is a Lie associator.
\item for each pair of subdiagrams $B^\prime\subseteq B\subseteq D$ and maximal nested set $\F\in\Mns{B,B^\prime}$, a relative twist $J^{BB^\prime}_{\F}\in(\PCU{B}{2})^{B'}$, satisfying \begin{equation} J^{BB'}_{\F}=1\mod(\PCU{B}{2})^{B'}_{\geq1} \end{equation} and \begin{equation} \left(\Phi_B\right)_{J^{BB^\prime}_{\F}}=\Phi_{B^\prime} \end{equation} which is compatible with \emph{vertical decomposition} \[ J_{\F_1\cup\F_2}^{BB^{\prime\prime}}= J_{\F_1}^{BB^\prime}\cdot J_{\F_2}^{B^\prime B^{\prime\prime}} \] where $B''\subseteq B'\subseteq B$, $\F_1\in\Mns{B,B^\prime}$ and $\F_2\in\Mns{B^\prime,B''}$. \item for any $B'\subseteq B$ and pair of maximal nested sets $\F,\G\in\Mns{B,B'}$, a gauge transformation, referred to as a \emph{De Concini--Procesi associator}, $\DCPA{\F}{\G}\in\PCU{B}{B'}$, satisfying \begin{equation} \DCPA{\F}{\G}=1\mod(\PCU{B}{B'})_{\geq 1} \end{equation} \begin{equation} \gauge{\DCPA{\F}{\G}}{J_{\G}}=J_{\F} \end{equation} and such that the following hold \begin{itemize} \item {\bf Orientation:} for any $\F,\G\in\Mns{B,B'}$ \[ \DCPA{\F}{\G}=\DCPA{\G}{\F}^{-1} \] \item {\bf Transitivity:} for any $\F,\G,\H\in\Mns{B,B'}$ \[ \DCPA{\H}{\F}=\DCPA{\H}{\G}\DCPA{\G}{\F} \] \item {\bf Factorisation:} \[ \DCPA{(\F_1\cup\F_2)}{(\G_1\cup\G_2)}=\DCPA{\F_1}{\G_1}\DCPA{\F_2}{\G_2} \] for any $B''\subseteq B'\subseteq B$, $\F_1,\G_1\in\Mns{B,B'}$ and $\F_2,\G_2\in\Mns{B',B''}$. \end{itemize} \end{enumerate} \end{definition} \subsection{Universal Coxeter structures on Kac--Moody algebras} Let $\b$ be the positive Borel subalgebra of a Kac--Moody algebra with root system $\sfR$. Let $\DrY{\b}\int$ be the category of deformation integrable $\h$--diagonalisable Drinfeld--Yetter $\b$--modules, and let $\mathfrak{U}$ be the algebra of endomorphisms of the fiber functor from $\DrY{\b}\int$ to the category of $\IC[[\hbar]]$--modules.
We pointed out in \cite{ATL2} that a weak quasi--Coxeter structure on $\PCU{\sfR}{}$ induces a weak quasi--Coxeter structure on $\U_{\b}$ (called \emph{universal}) and therefore on the category $\DrY{\b}\int$, through the morphism of cosimplicial algebras $\PCU{\sfR}{n}\to\U_{\b}^n$. Let $\b_i$, $i\in D$, be the Borel subalgebras corresponding to the simple roots $\alpha_i, i\in D$, and $\U_i$ the corresponding completed cosimplicial algebras. \begin{definition} A \emph{universal Coxeter structure} on $\DrY{\b}^{\intm}$ is the datum of a universal weak quasi--Coxeter structure on $\U_{\b}$ and a collection of operators, called \emph{local monodromies}, $S_i\in\U_i$, $i\in D$, of the form \begin{equation}\label{eq:locmon} S_i=\wt{s}_i\cdot\ul{S}_i \end{equation} where $\ul{S}_i\in\U_i^{\h}$, $\ul{S}_i=1\mod\hbar$, and $\wt{s}_i=\exp(e_i)\exp(-f_i)\exp(e_i)$, satisfying the coproduct identity \begin{equation}\label{eq:coxcoprod} \Delta_{J_i}(S_i)=(R_i)_{J_i}^{21}(S_i\ten S_i) \end{equation} and the generalized braid relations of type $D$. Namely, for any pair $i,j$ of distinct vertices of $B\subseteq D$ such that $2< m_{ij}<\infty$, and any elementary pair $(\F,\G)$ in $\Mns{B}$ such that $i\in\F,j\in\G$, the following relations hold in $\U_{\b}$, \begin{equation}\label{eq:coxbraid} \sfAd{\DCPA{\G}{\F}}(S_i)\cdot S_j \cdots = S_j\cdot\sfAd{\DCPA{\G}{\F}}(S_i)\cdots \end{equation} where the number of factors on each side equals $m_{ij}$. \end{definition} \subsection{The standard quasi--Coxeter structure on $\DJ{\g}$}\label{ss:transfer} Let $\g$ be a symmetrisable Kac--Moody algebra with root system $\sfR$, Dynkin diagram $D$ and positive Borel subalgebra $\b$. Then, the collection of diagrammatic subalgebras $\{\g_B\}_{B\subset D}$ and $\{\b_B\}_{B\subset D}$ endow, respectively, $\g$ and $\b$ with a standard $D$--algebra structure.
Let $\DJ{\b_B}$ be the Drinfeld--Jimbo quantum group corresponding to $\b_B$, and $\DrY{\DJ{\b_B}}\int$ the category of $\h$--diagonalisable integrable Drinfeld--Yetter $\DJ{\b_B}$--modules.\\ The universal $R$--matrix and the quantum Weyl group operators induce on $\DrY{\DJ{\b}}\int$ a \emph{standard} braided quasi--Coxeter structure. More specifically, \begin{enumerate} \item the universal $R$--matrix $R_{B}^{\hbar}$ of $\DJ{\g_B}$ endows $\DrY{\DJ{\b_B}}\int$ with a braided tensor structure; \item the restriction functors $\mathsf{Res}_B^{B'}: \DrY{\DJ{\b_B}}\int\to\DrY{\DJ{\b_{B'}}}\int$ endow the collection of braided tensor categories $\DrY{\DJ{\b_B}}\int$ with a structure of braided $D$--category; \item finally, the quantum Weyl group operators $S_i^{\hbar}\in\DrY{\DJ{\b_i}}\int$ defined in \cite{l} satisfy the braid relations and the coproduct identity \[ \Delta_{\hbar}(S_i^{\hbar})=R_i^{21}\cdot(S_i^{\hbar}\ten S_i^{\hbar}) \] and endow $\DrY{\DJ{\b}}\int$ with a structure of (strict) braided quasi--Coxeter category. \end{enumerate} In \cite{ek-1,ek-2,ek-6}, Etingof and Kazhdan constructed an equivalence of braided tensor categories \[ \mathsf{EK}:\DrY{\b}\stackrel{\sim}{\longrightarrow}\DrY{\DJ{\b}} \] where $\DrY{\b}$ is the category of deformation Drinfeld--Yetter $\b$--modules. The functor $\mathsf{EK}$ preserves the weight decomposition, and provides an equivalence at the level of category $\O$ \[ \mathsf{EK}:\O\stackrel{\sim}{\longrightarrow}\O_{\hbar} \] In \cite{ATL1}, we generalised their construction and used the resulting equivalence to transfer the braided quasi--Coxeter structure of $\DJ{\b}$ to the category $\DrY{\b}\int$. \begin{theorem}\cite{ATL1}\hfill \begin{enumerate} \item There exists a structure of braided quasi--Coxeter category on $\DrY{\b}\int$ and an equivalence of quasi--Coxeter categories \[ \mathsf{\Psi}: \DrY{\b}\int\stackrel{\sim}{\longrightarrow}\DrY{\DJ{\b}}\int \] whose underlying tensor functors are $\mathsf{EK}_B$.
\item The equivalence $\mathsf{\Psi}$ descends to an equivalence of braided quasi--Coxeter categories \[ \mathsf{\Psi}: \O\int\stackrel{\sim}{\longrightarrow}\O_{\hbar}\int \] \item The braided quasi--Coxeter structure induced on $\DrY{\b}\int$ and $\O\int$ by the quantum group $\DJ{\g}$ is universal, \ie it comes from a weak quasi--Coxeter structure on $\PCU{\sfR}{}$. \end{enumerate} \end{theorem} \Omit{ Section 6: The monodromy theorem - Thm 6: The differential qC structure on Thm3 is universal (i.e. the map double holonomy ?> End(f) factorises through a map double holonomy ?> universal Casimir algebra). - Thm 7: There is an equivalence of qC categories between Oint and Oint_h, unique up a unique twist. (trivial=follows by uniqueness) - Thm 8: The quantum Weyl group operators describe the monodromy of the Casimir connection (=> defined over Q). } \section{The monodromy theorem}\label{se:main} This section contains the main result of this paper. We prove that the double holonomy algebras $\DBLHA{\sfR}{n}$, which underlie the braided \qc structure on the category $\Oint$, map to the Casimir algebras $\PCU{\sfR}{n}$, which underlie the braided \qc structure transferred from the quantum group $\Uhg$. We then use the rigidity statement of \cite{ATL2} to prove that these two structures are equivalent, and in particular that the monodromy of the Casimir connection of $\g$ is given by the quantum Weyl group operators of $\Uhg$. \subsection{The double holonomy algebra and the Casimir algebra} \subsubsection{Commutation statements in $\PCU{\sfR}{}$}\label{sss:commstat} Set $\Kp{}{}=\pi\circ\pi^*$ and $\Kp{\alpha}{}=\pi\circ\pi_{\alpha}\ten\id\circ\pi^*$, $\alpha\in\sfR_+\cup\{0\}$. The following is proved in \cite{ATL2}. \begin{proposition}\label{prop:comm-rel}\hfill \begin{itemize} \item[(i)] $\Kp{}{}$ is central in $\PCU{\sfR}{}$. \item[(ii)] For any $\alpha\in\Rp$, $[\Kp{0}{},\Kp{\alpha}{}]=0$.
\item[(iii)] For any $B\subseteq D$, $\alpha\in\Rs{B}$, \[ [\Kp{\alpha}{}, \sum_{\beta\in\Rs{B}}\Kp{\beta}{}]=0 \] \item[(iv)] For any subsystem $\Psi\subset\Rp$, $\rk(\Psi)=2$, $\alpha\in\Psi$, \[ [\Kp{\alpha}{}, \sum_{\beta\in\Psi}\Kp{\beta}{}]=0 \] \end{itemize} \end{proposition} \subsubsection{From the double holonomy algebra to the Casimir algebra} Let $\Tp{ij}{\alpha}$, $\alpha\in\sfR_+\cup\{0\}$, be the endomorphism of $\VDY{1}\ten\cdots\ten\VDY{n}$ in $\PCU{\sfR}{n}$ \[ \xy (-5,-10)*{\VDY{i}}; (-5,-20)*{\VDY{j}}; (0,-10);(20,-10)**\dir{-}; (0,-20);(20,-20)**\dir{-}; (10,-13.5)*{\vdots}; (9,2)*{\alpha};(9,2)*\xycircle(2,2){-}="A"; (5,-20);"A"**\crv{(5,2)}?(.5)*\dir{>}; (15,-10);"A"**\crv{(12,2)}?(.25)*\dir{<}; \endxy \] and set $\Tp{ij}{-\alpha}=\Tp{ji}{\alpha}$. Let $\Kp{\alpha,i}{}\in\PCU{\sfR}{n}$ be the endomorphism of $\VDY{1}\ten\cdots\ten\VDY{n}$ \[ \xy (-5,-10)*{\VDY{i}}; (0,-10);(20,-10)**\dir{-}; (10,5)*{\vdots}; (10,-13.5)*{\vdots}; (9,-2)*{\alpha};(9,-2)*\xycircle(2,2){-}="A"; (5,-10);"A"**\crv{(5,0)}?(.25)*\dir{>}; "A";(15,-10)**\crv{(15,0)}?(.75)*\dir{>}; \endxy \] \begin{proposition}\label{pr:holo-Casimir}\hfill \begin{enumerate} \item[(i)] The linear maps $\eta_{\sfR,n}:\DBLHA{\sfR}{n}\to\PCU{\sfR}{n}$ defined by \begin{equation*} \eta_{\sfR,n}(\Tdh{ij}{\alpha})=\Tp{ij}{\alpha} \qquad \eta_{\sfR,n}(\Kdh{\alpha}{(n)})=\Delta^{(n)}(\Kp{\alpha}{}) \qquad \eta_{\sfR,n}(\Kdh{\alpha,i}{})=\Kp{\alpha,i}{} \end{equation*} are morphisms of algebras, compatible with the cosimplicial structure and the natural gradation on $\DBLHA{\sfR}{n}, \PCU{\sfR}{n}$. \item[(ii)] The following \[ \xymatrix{ \DBLHA{\sfR}{n}\ar[r]^{\xi_{\sfR,n}}\ar[d]_{\eta_{\sfR,n}} & \U_{\b}^n\\ \PCU{\sfR}{n}\ar[ur]_{\rho_{\b}^n} } \] is a commutative diagram of morphisms of cosimplicial algebras. \end{enumerate} \end{proposition} \begin{proof} $(i)$ The result follows directly from the commutation relations of \ref{sss:commstat}.\\ $(ii)$ The commutativity follows by direct inspection.
\end{proof} \subsection{Equivalence of quasi--Coxeter structures}\label{s:equivalence} We now prove the main result of the paper. In the previous section we constructed a weak quasi--Coxeter structure on $\mathfrak{U}_{\sfR}$, and therefore on $\mathfrak{U}$, interpolating the monodromy of the KZ and the Casimir connection. \begin{theorem}\hfill \begin{enumerate} \item[(i)] The weak braided quasi--Coxeter structure on $\PCU{\sfR}{}$ coming from the quantum group $\DJ{\g}$ is equivalent to the one constructed in Sections \ref{s:monodromy}, \ref{s:diff-braid-qC} and \ref{s:difftwist-fusion}, induced by the KZ and Casimir connections $\nabla_{\scriptscriptstyle{\operatorname{KZ}}}, \nabla_{\scriptscriptstyle{\operatorname{C}}}$. \item[(ii)] The differential weak braided quasi--Coxeter structure on $\PCU{\sfR}{}$ extends uniquely to a braided quasi--Coxeter structure on $\U_{\b}$ with local monodromies \[ S_{i,C}=\wt{s}_ie^{\frac{\hbar}{2}\mathsf{C}_i}\qquad i\in D \] \item[(iii)] There exists an equivalence of quasi--Coxeter categories \begin{equation*} \Psi:\DrY{\b}^{\nabla\KKZ,\Cconn}\to\DrY{\DJ{\b}}^{R_{\hbar}, S_{\hbar}} \end{equation*} where $\DrY{\DJ{\b}}^{R_{\hbar}, S_{\hbar}}$ is the category $\DrY{\DJ{\b}}\int$ endowed with the standard quasi--Coxeter structure induced by the quantum group $\DJ{\g}$, and $\DrY{\b}^{\nabla\KKZ,\Cconn}$ is the category $\DrY{\b}\int$ endowed with the differential quasi--Coxeter structure induced by $\nabla\KKZ$ and $\Cconn$ and extended as in $(ii)$. \item[(iv)] The braid group representation defined by the action of the quantum Weyl group operators is equivalent to the monodromy representation of the Casimir connection defined in Section \ref{s:Casimir}. \end{enumerate} \end{theorem} \begin{proof} $(i)$ The result follows by Proposition \ref{pr:holo-Casimir} and by uniqueness of the weak quasi--Coxeter structure on the Casimir algebra.
Namely, we proved in \cite{ATL2} the following \begin{enumerate} \item[(a)] Up to a unique equivalence, there exists a unique weak braided quasi--Coxeter structure on $\PCU{\sfR}{}$. \item[(b)] A weak braided quasi--Coxeter structure on $\PCU{\sfR}{}$ can be completed to at most one braided quasi--Coxeter structure on $\U_{\b}$. \end{enumerate} $(ii)$ The operators $S_{i,C}$ satisfy the coproduct identity \eqref{eq:coxcoprod}. By $(b)$, they satisfy the braid relations \eqref{eq:coxbraid} and extend the weak structure to a braided quasi--Coxeter structure on $\U_\b$. $(iii)$ The equivalence follows from Theorem \ref{ss:transfer}, $(ii)$, and by $(b)$. $(iv)$ Clear. \end{proof} \appendix \section{The Casimir connection of an affine Kac--Moody algebra}\label{s:coda} In this section, we prove that every affine Lie algebra admits two equivariant extensions of the Casimir connection, $A_{\kappa}:=A+A_{\h}$ and $A_C:=A_{\kappa}+A_{S^2\h}$, where \[ A=\frac{\nablah}{2}\sum_{\alpha\in\sfR_+}\Ku{\alpha}{+}\frac{d\alpha}{\alpha} \] such that \begin{itemize} \item[(i)] the connections $\nabla=d-A_{\kappa}$ and $\nabla=d-A_{C}$ are flat and $W$--equivariant; \item[(ii)] $\Res_{\alpha_i=0}A_{\kappa}=\kappa_i/2$, $i\in I$; \item[(iii)] $\Res_{\alpha_i=0}A_C=C_i/2$, $i\in I$; \end{itemize} where $\kappa_i$ and $C_i$ are, respectively, the truncated and the full Casimir element of $\sl{2}^{\alpha_i}$. \subsection{The Dilogarithm} For any $\delta\in\IC^{\times}$, set \[ \Psi^{\pm}_{\delta}(x)=\sum_{n>0}\left(\frac{1}{\pm x+n\delta}-\frac{1}{n\delta}\right)=\Psi^{\mp}_{\delta}(-x) \] \begin{lemma}\hfill \begin{itemize} \item[(i)] $\Psi^{\pm}_{\delta}(x)$ is holomorphic on $\IC\setminus\IZ_{\neq0}\delta$ \item[(ii)] $\displaystyle\Psi^+_{\delta}(x+\delta)=\Psi^+_{\delta}(x)-\frac{1}{x+\delta}$ \item[(iii)] $\displaystyle\Psi^-_{\delta}(x+\delta)=\Psi^-_{\delta}(x)-\frac{1}{x}$ \end{itemize} \end{lemma} Set $\Psi^{\pm}=\Psi^{\pm}_1$ and $\Psi=\Psi^++\Psi^-$. 
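The functional equations (ii) and (iii) of the lemma can be sanity-checked numerically from the defining series. In the sketch below (with $\delta=1$), both sides are truncated at the same number of terms $N$, so their difference telescopes and the residual is of order $1/N$:

```python
# Numerical check of Lemma (ii)-(iii) for Psi^{+/-}_delta, with delta = 1.
# Truncating both series at the same N makes their difference telescope,
# leaving a residual of order 1/N.

def psi_plus(x, N=10**5):
    # Psi^+(x) = sum_{n>0} ( 1/(x+n) - 1/n ), truncated after N terms
    return sum(1.0/(x + n) - 1.0/n for n in range(1, N + 1))

def psi_minus(x, N=10**5):
    # Psi^-(x) = sum_{n>0} ( 1/(-x+n) - 1/n ) = Psi^+(-x)
    return psi_plus(-x, N)

x = 0.3
# (ii): Psi^+(x+1) = Psi^+(x) - 1/(x+1)
assert abs(psi_plus(x + 1) - (psi_plus(x) - 1.0/(x + 1))) < 1e-4
# (iii): Psi^-(x+1) = Psi^-(x) - 1/x
assert abs(psi_minus(x + 1) - (psi_minus(x) - 1.0/x)) < 1e-4
```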
\subsection{The form $A_{\h}$}\label{ss:form1} Let $\g$ be an affine Kac--Moody algebra of rank $l+1$ with root system $\sfR$, and let $\fdim{\g}$ be the corresponding finite dimensional Lie algebra with root system $\fdim{\sfR}$. Let $\h\subset\g$ be the unique realization of the associated Cartan matrix, and \[ \h=\fdim{\h}\oplus\IC c\oplus\IC d \] where $\fdim{\h}\subset\fdim{\g}$, $c=\sum_{i=0}^l a_i^{\vee}\alpha_i^{\vee}$ and $d$ is defined by $\alpha_i(d)=\delta_{i,0}$. Let $(-,-)$ be the normalized non--degenerate bilinear form on $\h$, and $\nu:\h\to\h^*$ the isomorphism induced by $(-,-)$ \cite{Ka}. Let $\delta=\sum_{i=0}^la_i\alpha_i$. Set \begin{equation} A_{\h}=\sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right) +B\frac{d\delta}{\delta} \end{equation} where \begin{equation} A_{\beta}\left(\frac{\beta}{\delta}\right)= \frac{1}{2}\left[ \left( \frac{\delta}{\beta}+\Psi\left(\frac{\beta}{\delta}\right) \right)h_{\beta}-\frac{\beta}{\delta}\left( 2+\Psi\left(\frac{\beta}{\delta}\right) \right)c \right]d\left(\frac{\beta}{\delta}\right) \end{equation} and $B=\rho^{\vee}\in\h$ is a fixed solution of \begin{equation}\label{eq:defB} \langle B, \alpha_i\rangle=1\qquad i=0,1,\dots, l \end{equation} \begin{theorem} The form $A+A_{\h}$ defines a flat and $W$--equivariant connection with residues \begin{equation*} \Res_{\alpha_i=0}A+A_{\h}=f_ie_i+\frac{1}{2}h_i=\frac{1}{2}\kappa_i \end{equation*} \end{theorem} \subsection{Proof of Theorem \ref{ss:form1}} \subsubsection{} Let us first consider the case $\g=\wh{\sl{2}}$. 
We first assume that $A_{\h}$ has the form \begin{equation*} A_{\h}=\left(\Sfd{\theta}h+\Tfd{\theta}c\right)\Diffd{\theta}+B(\delta)d\delta \end{equation*} Then $A_{\h}$ is closed and \begin{equation*} A_{\h}=\frac{1}{\delta}\left(\Sfd{\theta}h+\Tfd{\theta}c\right)d{\theta} -\frac{\theta}{\delta}\left(\Sfd{\theta}h+\Tfd{\theta}c\right)d{\delta}+B(\delta)d\delta \end{equation*} Let $W^{\scriptscriptstyle{\operatorname{ext}}}$ be the \emph{extended} Weyl group, \ie $W^{\scriptscriptstyle{\operatorname{ext}}}=W\rtimes\mathsf{Aut}(D_{\g})$. The $W^{\scriptscriptstyle{\operatorname{ext}}}$--equivariance of $A+A_{\h}$ is equivalent to the conditions \begin{eqnarray} \label{eq:equi1}s_1^*A_{\h}&=&A_{\h}-\frac{h}{\theta}d\theta\\ \label{eq:equi2}\gamma^*A_{\h}&=&A_{\h} \end{eqnarray} where $s_1$ is the simple reflection on $\theta$ and $\gamma$ is induced by the symmetry of the Dynkin diagram of $\wh{\sl{2}}$. In particular, \begin{equation} \begin{array}{lll} s_1(\theta)=-\theta & s_1(\delta)=\delta & s_1(\Lambda)=\Lambda\\ \gamma(\theta)=-\theta+\delta & \gamma(\delta)=\delta & \gamma(\Lambda)=\frac{\theta}{2}-\frac{c}{4}+\Lambda\\ \end{array} \end{equation} \subsubsection{} The conditions \eqref{eq:equi1},\eqref{eq:equi2} are equivalent to the system of equations ($z=\theta/\delta$) \begin{eqnarray} \label{eq:S1} S(-z)&=&S(z)-\frac{1}{z}\\ \label{eq:T1} -T(-z)&=&T(z)\\ \label{eq:S2} S(1-z)&=&S(z)\\ \label{eq:T2} T(z)+T(1-z)&=&-S(1-z) \end{eqnarray} and \begin{eqnarray} \label{eq:Sd1} \frac{z}{\delta}S(-z)+(s_1^*B(\delta))_{(h)}&=& -\frac{z}{\delta}S(z)+B(\delta)_{(h)}\\ \label{eq:Td1} \frac{z}{\delta}T(-z)+(s_1^*B(\delta))_{(c)}&=&-\frac{z}{\delta}T(z)+B(\delta)_{(c)}\\ \label{eq:Sd2} -\frac{z}{\delta}S(1-z)+(\gamma^*B(\delta))_{(h)}&=&-\frac{z}{\delta}S(z)+B(\delta)_{(h)}\\ \label{eq:Td2} \frac{z}{\delta}\left[S(1-z)+T(1-z)\right]+(\gamma^*B(\delta))_{(c)}&=&-\frac{z}{\delta}T(z)+ B(\delta)_{(c)} \end{eqnarray} \subsubsection{} Assume the existence of functions $S,T$ 
satisfying \eqref{eq:S1}, \eqref{eq:T1}, \eqref{eq:S2}, \eqref{eq:T2}. Then the function $B(\delta)$ should satisfy \begin{eqnarray} s_1^*B(\delta)&=&B(\delta)-\frac{h}{\delta}\\ \gamma^*B(\delta)&=&B(\delta) \end{eqnarray} The general solution is easily computed to be \begin{equation*}\label{eq:genB} B(\delta)=\frac{1}{\delta}(\frac{h}{2}+2d+f(\delta)c) \end{equation*} where $f(\delta)$ is any function in $\delta$. In particular, it satisfies the condition \eqref{eq:defB}. \subsubsection{} Assume now the existence of a function $S$ satisfying \eqref{eq:S1}, \eqref{eq:S2}. Then if we force $T$ to be of the form \begin{equation*} T(z)=p(z)S(z)+q(z) \end{equation*} where $p,q$ are two polynomials in $z$, then \eqref{eq:T1}, \eqref{eq:T2} are equivalent to the system \begin{eqnarray*} p(z)+p(-z)&=&0\\ q(z)+q(-z)&=&\frac{1}{z}p(-z)\\ p(z)+p(1-z)&=&-1\\ q(z)+q(1-z)&=&0 \end{eqnarray*} A solution is given by \begin{equation*} p(z)=-z\qquad\mbox{and}\qquad q(z)=\frac{1}{2}-z \end{equation*} It follows that the general solution for $T$ has the form \begin{equation*}\label{eq:genT} T(z)=-z\left(S(z)+1\right)+\frac{1}{2}+E(z) \end{equation*} where $E(z)$ is any function satisfying \begin{eqnarray*} E(-z)&=&-E(z)\\ E(z)&=&-E(1-z) \end{eqnarray*} \subsubsection{} Finally, we need to solve the equations \eqref{eq:S1} and \eqref{eq:S2}, which are equivalent to the system \begin{eqnarray*} S(-z)&=&S(z)-\frac{1}{z}\\ S(z+1)&=&S(z)-\frac{1}{z} \end{eqnarray*} A particular solution is given by \begin{equation*} S(z)=\frac{1}{2}\left(\frac{1}{z}+\Psi(z)\right) \end{equation*} It follows that the general solution to \eqref{eq:S1} and \eqref{eq:S2} is given by the formula \begin{equation*}\label{eq:gene} S(z)=\frac{1}{2}\left(\frac{1}{z}+\Psi(z)\right)+e(z) \end{equation*} where $e(z)$ is any function satisfying \begin{eqnarray*} e(-z)&=&e(z)\\ e(z+1)&=&e(z) \end{eqnarray*} \subsubsection{} Setting $e=E=f=0$, we get, for $\g=\wh{\sl{2}}$, \begin{equation*}\label{eq:sl2sol} 
A_{\h}=A_{\theta}= \frac{1}{2}\left[ \left( \frac{\delta}{\theta}+\Psi\left(\frac{\theta}{\delta}\right) \right)h-\frac{\theta}{\delta}\left( 2+\Psi\left(\frac{\theta}{\delta}\right) \right)c \right]d\left(\frac{\theta}{\delta}\right)+\left(\frac{h}{2}+2d\right)\frac{d\delta}{\delta} \end{equation*} and the resulting connection $\nabla=d-(A+A_{\h})$ is flat and $W$--equivariant. A simple computation shows that \begin{eqnarray*} \Res_{\theta=0}A+A_{\h}&=&\frac{1}{2}\kappa_{\theta}d(\theta)\\ \Res_{\theta=\delta}A+A_{\h}&=&\frac{1}{2}\kappa_{\delta-\theta}d(\delta-\theta) \end{eqnarray*} \subsubsection{} Let now $\g$ be an affine Kac--Moody algebra and set \begin{equation*} A_{\h}=\sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right) +B\frac{d\delta}{\delta} \end{equation*} where \begin{equation*} A_{\beta}\left(\frac{\beta}{\delta}\right)= \frac{1}{2}\left[ \left( \frac{\delta}{\beta}+\Psi\left(\frac{\beta}{\delta}\right) \right)h_{\beta}-\frac{\beta}{\delta}\left( 2+\Psi\left(\frac{\beta}{\delta}\right) \right)c \right]d\left(\frac{\beta}{\delta}\right) \end{equation*} and $B\in\h$. 
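The particular solutions obtained above can be checked numerically. The sketch below combines the two series defining $\Psi=\Psi^++\Psi^-$ termwise into $\Psi(x)=\sum_{n>0}2x^2/(n(n^2-x^2))$, which converges like $1/n^3$, and verifies \eqref{eq:S1}--\eqref{eq:T2} for $S(z)=\frac{1}{2}(1/z+\Psi(z))$ and $T(z)=-z(S(z)+1)+\frac{1}{2}$, i.e., with $e=E=0$:

```python
# Numerical check that S(z) = (1/z + Psi(z))/2 and
# T(z) = -z*(S(z) + 1) + 1/2 (the solutions with e = E = 0)
# satisfy eq:S1, eq:S2, eq:T1, eq:T2.

def Psi(x, N=20000):
    # Psi^+ + Psi^-, with the two defining series combined termwise
    return sum(2.0*x*x / (n*(n*n - x*x)) for n in range(1, N + 1))

def S(z):
    return 0.5*(1.0/z + Psi(z))

def T(z):
    return -z*(S(z) + 1.0) + 0.5

z = 0.3
assert abs(S(-z) - (S(z) - 1.0/z)) < 1e-6       # eq:S1
assert abs(S(1 - z) - S(z)) < 1e-6              # eq:S2
assert abs(T(-z) + T(z)) < 1e-6                 # eq:T1
assert abs(T(z) + T(1 - z) + S(1 - z)) < 1e-6   # eq:T2
```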
\subsection{} It follows from the case $\g=\wh{\sl{2}}$ that the form $A_{\beta}(\beta/\delta)$ satisfies \begin{eqnarray} \label{eq:formA1} A_{-\beta}\left(\frac{-\beta}{\delta}\right)&=&A_{\beta}\left(\frac{\beta}{\delta}\right) -\frac{h_{\beta}}{\beta}d\beta +\frac{h_{\beta}}{\delta}d\delta\\ \label{eq:formA2} A_{-\beta+\delta}\left(\frac{-\beta+\delta}{\delta}\right)&=&A_{\beta}\left(\frac{\beta}{\delta}\right) \end{eqnarray} \subsection{} For every $i=1,\dots,l$, the simple reflection $s_i$ permutes the elements in $\fpr\setminus\{\alpha_i\}$, and \begin{equation*} s_i^*\left(\sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right)\right)= \sum_{\beta\in\fpr\setminus\{\alpha_i\}} A_{\beta}\left(\frac{\beta}{\delta}\right) +A_{-\alpha_i}\left(\frac{-\alpha_i}{\delta}\right) \end{equation*} By \eqref{eq:formA1}, \begin{equation*} s_i^*\left(\sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right)\right)= \sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right)-\frac{h_i}{\alpha_i}d\alpha_i+\frac{h_i}{\delta}d\delta \end{equation*} Therefore the $\mathring{W}$--equivariance of the form $A+A_{\h}$ is equivalent to the condition \begin{equation}\label{eq:condB} s_i(B)=B-h_i\quad\Longleftrightarrow\quad \langle \alpha_i , B \rangle=1 \end{equation} \subsection{} Let $\beta\in\fpr$. Then we say that $\beta\in R_k$, $k=0,-1,-2$, if $\langle \beta, h_0\rangle=k$. In particular, $R_{-2}=\{\theta\}$, and \[ s_0(\beta)=\left\{ \begin{array}{lcl} \beta & \mbox{if} & k=0\\ -(\theta-\beta)+\delta & \mbox{if} & k=-1\\ -\theta+2\delta & \mbox{if} & k=-2 \end{array} \right. 
\] It follows from \eqref{eq:formA2} that \[ A_{-(\theta-\beta)+\delta}\left(\frac{-(\theta-\beta)+\delta}{\delta}\right)=A_{\theta-\beta}\left(\frac{\theta-\beta}{\delta}\right) \] Therefore \begin{equation*} s_0^*\left(\sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right)\right)= \sum_{\beta\in\fpr\setminus\{\theta\}} A_{\beta}\left(\frac{\beta}{\delta}\right)+A_{-\theta+2\delta}\left(\frac{-\theta+2\delta}{\delta}\right) \end{equation*} By \eqref{eq:formA1} and \eqref{eq:formA2}, \begin{align*} A_{-\theta+2\delta}\left(\frac{-\theta+2\delta}{\delta}\right)=&A_{\theta-\delta}\left(\frac{\theta-\delta}{\delta}\right)=\\ =&A_{\delta-\theta}\left(\frac{\delta-\theta}{\delta}\right)-\frac{h_0}{\alpha_0}d\alpha_0+\frac{h_0}{\delta}d\delta=\\ =&A_{\theta}\left(\frac{\theta}{\delta}\right)-\frac{h_0}{\alpha_0}d\alpha_0+\frac{h_0}{\delta}d\delta \end{align*} The condition $s_0^*(A+A_{\h})=A+A_{\h}$ is therefore equivalent to \[ s_0(B)=B-h_0\quad\Longleftrightarrow\quad\langle B, \alpha_0\rangle=1 \] \subsection{} It follows that for any choice of a solution $B=\rho^{\vee}\in\h$ of the equations \[ \langle B, \alpha_i\rangle=1\qquad i=0,1,\dots, l \] the form $A_{\kappa}=A+A_{\h}$, where \begin{equation} A_{\h}=\sum_{\beta\in\fpr} A_{\beta}\left(\frac{\beta}{\delta}\right) +B\frac{d\delta}{\delta} \end{equation} and \begin{equation} A_{\beta}\left(\frac{\beta}{\delta}\right)= \frac{1}{2}\left[ \left( \frac{\delta}{\beta}+\Psi\left(\frac{\beta}{\delta}\right) \right)h_{\beta}-\frac{\beta}{\delta}\left( 2+\Psi\left(\frac{\beta}{\delta}\right) \right)c \right]d\left(\frac{\beta}{\delta}\right) \end{equation} defines a flat, $W$--equivariant connection with residues \[ \Res_{\alpha_i=0}A_{\kappa}=\frac{1}{2}\kappa_id\alpha_i \] The theorem is proved. 
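As an informal numerical illustration of the residue computation (not part of the proof): since $\Psi(0)=0$, the $h_{\beta}$--coefficient of $A_{\beta}(\beta/\delta)$ in the $d\beta$--direction at fixed $\delta$ behaves like $1/(2\beta)$ as $\beta\to0$, which is precisely the $\frac{1}{2}h_i$ contribution to $\frac{1}{2}\kappa_i$:

```python
# Near beta = 0, the h_beta-coefficient of A_beta(beta/delta) in the
# d(beta)-direction is (1/2)*(delta/beta + Psi(beta/delta))/delta,
# whose residue at beta = 0 is 1/2, matching Res A_kappa = kappa_i/2.

def Psi(x, N=20000):
    return sum(2.0*x*x / (n*(n*n - x*x)) for n in range(1, N + 1))

delta = 1.7
coeff = lambda b: 0.5*(delta/b + Psi(b/delta))/delta

b = 1e-4
assert abs(b*coeff(b) - 0.5) < 1e-6
```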
\subsection{The form $A_{S^2\h}$} We now show that the equivariant connection $\nabla=d-(A+A_{\h})$ can be extended with a closed, $W$--equivariant one form $A_{S^2\h}$ with values in $S^2\h$, in order to correct the residues and obtain \begin{equation*} \Res_{\alpha_i=0}A+A_{\h}+A_{S^2\h}=\frac{1}{2}C_id\alpha_i \end{equation*} This provides an analogue of the (finite--dimensional) Casimir connection with coefficients $C_{\alpha}$.\\ Set \begin{equation} A_{S^2\h}=\sum_{\beta\in\fpr}\frac{\pi}{2}\cot\left(\pi\frac{\beta}{\delta}\right) (h_{\beta}-\frac{\beta}{\delta}c)^2 d\left(\frac{\beta}{\delta}\right) \end{equation} \begin{theorem} The form $A_{S^2\h}$ is closed and $W$--equivariant. The form $A_{C}=A+A_{\h}+A_{S^2\h}$ defines a flat and $W$--equivariant connection with residues \begin{equation*} \Res_{\alpha_i=0}A_{C}=\frac{1}{2}\kappa_i+\frac{1}{2}h_{\alpha_i}=\frac{1}{2}C_i \end{equation*} \end{theorem} \begin{proof} Let us first consider the case $\g=\wh{\sl{2}}$. We have \begin{equation*} A_{S^2\h}=\frac{\pi}{2}\cot\left(\pi\frac{\theta}{\delta}\right)(h-\frac{\theta}{\delta}c)^2 d\left(\frac{\theta}{\delta}\right) \end{equation*} $A_{S^2\h}$ is closed with residues \begin{eqnarray} \Res_{\theta=0} A_{S^2\h} &=&\frac{1}{2}h^2d\theta\\ \Res_{\theta=\delta} A_{S^2\h} &=& \frac{1}{2}(-h+c)^2d(\delta-\theta) \end{eqnarray} It remains to prove that the form is $W$--equivariant. It is enough to observe that \begin{equation*} s_1^*A_{S^2\h}=\frac{\pi}{2}\cot\left(\pi\frac{-\theta}{\delta}\right)(-h-\frac{-\theta}{\delta}c)^2 d\left(\frac{-\theta}{\delta}\right)=A_{S^2\h} \end{equation*} and \begin{equation*} s_0^*A_{S^2\h}=\frac{\pi}{2}\cot\left(\pi\frac{-\theta+2\delta}{\delta}\right)(-h+2c-\frac{-\theta+2\delta}{\delta}c)^2 d\left(\frac{-\theta+2\delta}{\delta}\right)=A_{S^2\h} \end{equation*} Let now $\g$ be an affine Kac--Moody algebra. 
Then we set \begin{equation} A_{S^2\h}=\sum_{\beta\in\fpr}\frac{\pi}{2}\cot\left(\pi\frac{\beta}{\delta}\right)(h_{\beta}-\frac{\beta}{\delta}c)^2 d\left(\frac{\beta}{\delta}\right) \end{equation} Clearly, $A_{S^2\h}$ is closed with the required residues. In order to prove the $W$--equivariance it is enough to observe that \begin{align*} \sum_{\beta\in\fpr}\frac{\pi}{2}\cot\left(\pi\frac{\beta}{\delta}\right)(h_{\beta}-\frac{\beta}{\delta}c)^2d\left(\frac{\beta}{\delta}\right)= \frac{\delta}{4}\sum_{\alpha\in\prr}\left(\frac{1}{\alpha}-\frac{1}{w(\alpha)}\right) (h_{\alpha}-\frac{\alpha}{\delta}c)^2d\left(\frac{\alpha}{\delta}\right) \end{align*} for any element of the Weyl group $w\in W$. The result follows. \footnote{ In the case of $\g=\wh{\sl{2}}$, one has \begin{align*} \pi&\cot\left(\pi\frac{\theta}{\delta}\right)d\left(\frac{\theta}{\delta}\right)= \delta\left[\frac{1}{\theta}+\sum_{n>0}\left(\frac{1}{\theta+n\delta}-\frac{1}{-\theta+n\delta}\right)\right] d\left(\frac{\theta}{\delta}\right)=\\ &=\frac{\delta}{2}\left(\frac{1}{\theta}-\frac{1}{s_1(\theta)}\right)d\left(\frac{\theta}{\delta}\right)+ \frac{\delta}{2}\sum_{n>0}\left(\frac{1}{\theta+n\delta}-\frac{1}{s_1(\theta+n\delta)}\right) d\left(\frac{\theta+n\delta}{\delta}\right)\\ &\hspace{1.545in}+\frac{\delta}{2}\sum_{n>0}\left(\frac{1}{s_1(-\theta+n\delta)}-\frac{1}{-\theta+n\delta}\right)d\left(-\frac{-\theta+n\delta}{\delta}\right)=\\ &=\frac{\delta}{2}\sum_{\alpha\in\prr}\left(\frac{1}{\alpha}-\frac{1}{s_1(\alpha)}\right) d\left(\frac{\alpha}{\delta}\right) \end{align*} Similarly for higher rank $\g$ and $w\in W$. } \end{proof} \remark\; The expression of the form $A_{S^2\h}$ for $\g=\wh{\sl{2}}$ has been computed in the following way. We first assume \begin{equation} A_{S^2\h}=A_{\theta}(\theta,\delta)d\theta+A_{\delta}(\theta,\delta)d\delta \end{equation} where \[ A_{\theta}(\theta,\delta)=S(\theta,\delta)h^2+T(\theta,\delta)hc+U(\theta,\delta)c^2 \] and similarly for $A_{\delta}$. 
The condition of $W$--equivariance (for a fixed value $\delta\in\IC^*$) gives a system of difference equations in $\theta$ for the functions $S,T,U$, which is easily solved with functions of the form $p(x)\cot(x)$, where $p$ is a polynomial. More specifically, $S,U$ are odd functions in $\theta$, $T$ is an even function in $\theta$ (invariance with respect to $s_1$) such that \begin{eqnarray} S(\theta+\delta)&=&S(\theta)\\ T(\theta+\delta)&=&T(\theta)-2S(\theta)\\ U(\theta+\delta)&=&U(\theta)+S(\theta)-T(\theta) \end{eqnarray} where the system follows from the invariance with respect to the translation $\theta\mapsto\theta-\delta$. The condition $dA=0$ then provides a formula for $A_{\delta}$. Specifically, one gets a general solution of the form \begin{equation} A_{S^2\h}=\frac{\pi}{2}\cot\left(\pi\frac{\theta}{\delta}\right)(h-\frac{\theta}{\delta}c)^2 d\left(\frac{\theta}{\delta}\right) + B(\delta)d\delta \end{equation} where $B(\delta)$ is a $W$--equivariant function (which has been chosen equal to zero). \newpage
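Both the partial-fraction expansion used in the footnote above and the difference system for $S,T,U$ can be verified numerically. In the sketch below, $S,T,U$ are read off as the coefficients of $h^2$, $hc$, $c^2$ in the closed-form expression for $A_{S^2\h}$ at fixed $\delta$:

```python
import math

cot = lambda x: 1.0/math.tan(x)

# (1) Partial-fraction expansion from the footnote (delta = 1, z = theta):
#     pi*cot(pi*z) = 1/z + sum_{n>0} ( 1/(z+n) - 1/(-z+n) );
#     each term equals 2z/(z^2 - n^2), so the truncation tail is O(1/N).
def cot_series(z, N=10**5):
    return 1.0/z + sum(2.0*z/(z*z - n*n) for n in range(1, N + 1))

z = 0.3
assert abs(cot_series(z) - math.pi*cot(math.pi*z)) < 1e-4

# (2) Difference system for the coefficients of h^2, hc, c^2 in
#     (pi/2)*cot(pi*t/delta)*(h - (t/delta)*c)^2 * dt/delta:
delta = 1.3
S = lambda t: (math.pi/(2*delta)) * cot(math.pi*t/delta)
T = lambda t: -(math.pi*t/delta**2) * cot(math.pi*t/delta)
U = lambda t: (math.pi*t*t/(2*delta**3)) * cot(math.pi*t/delta)

t = 0.47
assert abs(S(t + delta) - S(t)) < 1e-8
assert abs(T(t + delta) - (T(t) - 2*S(t))) < 1e-8
assert abs(U(t + delta) - (U(t) + S(t) - T(t))) < 1e-8
# parity: S, U odd and T even in t
assert abs(S(-t) + S(t)) < 1e-9 and abs(U(-t) + U(t)) < 1e-9
assert abs(T(-t) - T(t)) < 1e-9
```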
TITLE: Let $a,b\ge 0$ be integers such that $2^na+b$ is a perfect square $\forall n>0$ QUESTION [4 upvotes]: Let $a,b\ge 0$ be integers such that $2^na+b$ is a perfect square $\forall n>0$. Show that $a=0$. I have no idea how to start this problem since I don't see any arithmetic property that I can use in such a problem. Thanks ! REPLY [6 votes]: Let $2^n a+b=a_n^2$ for some sequence $(a_n)_{n=1}^\infty$. We shall prove that the sequence $(a_n)$ is bounded, which will establish that $a=0$. Suppose that $a\neq 0$. Then, $a_n$ is a strictly increasing sequence. Observe that, $$ a_n^2 = 2^n a+b\Rightarrow (2a_n)^2 = 2^{n+2}a+4b=a_{n+2}^2 +3b \Rightarrow 3 b=(2a_n-a_{n+2})(2a_n+a_{n+2}). $$ Now, this immediately implies that $2a_n +a_{n+2}$ divides $3b$ for all $n$, which in turn yields $2a_n+a_{n+2}\leqslant 3b$ for all $n$. However, $b$ is fixed; and thus, if $a\neq 0$, the sequence $(a_n)$ is unbounded, yielding a contradiction. Edit To address John's comment below: This all holds, of course, provided $2a_n-a_{n+2}\neq 0$ for all $n$. Suppose some $n$ makes this quantity $0$; this would yield $b=0$. Then $2^n a$ is a perfect square for all $n$, which is a contradiction: if $2^n a =x^2$ then $2^{n+1}a =2x^2$, which cannot be a perfect square (easy to see: if $y^2=2x^2$ then $2\mid y$; letting $y=2y_1$, we get $x^2=2y_1^2$, so $2\mid x$; writing $x=2x_1$ yields $y_1^2=2x_1^2$, and an infinite descent occurs).
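As an illustrative sanity check of the answer (the search ranges are arbitrary), a brute-force scan over small pairs $(a,b)$ confirms that every $a>0$ fails the perfect-square condition within the first few values of $n$:

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r*r == m

# For each pair (a, b) with a > 0, the proof guarantees some n >= 1
# with 2^n*a + b not a perfect square; in practice it shows up quickly,
# so no pair in this range survives the first 29 checks.
for a in range(1, 200):
    for b in range(0, 200):
        if all(is_square(2**n * a + b) for n in range(1, 30)):
            raise AssertionError(f"unexpected survivor: a={a}, b={b}")

# a = 0 trivially works whenever b itself is a square, e.g. b = 49:
assert all(is_square(2**n * 0 + 49) for n in range(1, 30))
```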
\section{Introduction}\label{sec:intro} Traditional constitutive modeling is based on constitutive or material laws to describe the explicit relationship among the measurable material states, e.g., stresses and strains, and internal state variables (ISVs) based on experimental observations, mechanistic hypothesis, and mathematical simplifications. However, limited data and functional form assumptions inevitably introduce errors to the model parameter calibration and model prediction. Moreover, with the pre-defined functions, constitutive laws often lack generality to capture full aspects of material behaviors \cite{HeJBM2020,he2021deep}. Path-dependent constitutive modeling typically applies models with evolving ISVs in addition to the state space of deformation \cite{coleman1967thermodynamics,horstemeyer2010historical}. The ISV constitutive modeling framework has been effectively applied to model various nonlinear solid material behaviors, e.g., elasto-plasticity \cite{kratochvil1969thermodynamics,simo1992associative}, visco-plasticity \cite{simo1992algorithms}, and material damage \cite{perzyna1986internal}. However, ISVs are often non-measurable, which makes it challenging to define a complete and appropriate set of ISVs for highly nonlinear and complicated materials, e.g., geomechanical materials. Further, the traditional ISV constitutive modeling approach often results in excessive complexities with high computational cost, which is undesirable in practical applications. 
\XH{In recent years, machine learning (ML) based data-driven approaches have demonstrated successful applications in various engineering problems, such as solving partial differential equations \cite{raissi2019physics,he2021physics,karniadakis2021physics,kadeethum2021framework}, system or parameter identification \cite{brunton2016discovering,raissi2019physics,cranmer2020discovering,tartakovsky2020physics,haghighat2021physics,kadeethum2021framework,he2022physics}, data-driven computational mechanics \cite{kirchdoerfer2017data,ayensa2018new,he2019physics,eggersmann2019model,he2020physics,he2021deep,kanno2021kernel,bahmani2021manifold}, reduced-order modeling \cite{xie2019non,bai2021non,kaneko2021hyper,kim2022fast,fries2022lasdi,he2022glasdi,kadeethum2022non}, material design \cite{bessa2017framework,butler2018machine}, etc. ML models, such as deep neural networks (DNNs), have emerged as a promising alternative for constitutive modeling due to their strong flexibility and capability in extracting complex features and patterns from data \cite{bock2019review}}. DNNs have been applied to model a variety of materials, including concrete materials \cite{ghaboussi1991knowledge}, hyper-elastic materials \cite{Shen2005}, the visco-plastic behavior of steel \cite{furukawa1998implicit}, and homogenized properties of composite structures \cite{lefik2009artificial}. DNN-based constitutive models have been integrated into finite element solvers to predict path- or rate-dependent material behaviors \cite{lefik2003artificial,hashash2004numerical,jung2006neural,Stoffel2019,zhang2020using}. Recently, physical constraints or principles have been integrated into DNNs for data-driven constitutive modeling, including symmetric positive definiteness \cite{xu2021learning}, material frame invariance \cite{ling2016machine}, and thermodynamics \cite{vlassis2021sobolev,masi2021thermodynamics}. 
However, to model path-dependent materials, DNN-based constitutive models require the material's internal states to be fully understood and prescribed, which is difficult for materials with highly nonlinear and complicated path-dependent behaviors and limits their applications in practice. Recurrent neural networks (RNNs) designed for sequence learning have been successfully applied in various domains, such as machine translation and speech recognition, due to their capability of learning history-dependent features that are essential for sequential prediction \cite{lipton2015critical,yu2019review}. The RNN and gated variants, e.g., the long short-term memory (LSTM) cells \cite{hochreiter1997long} and the gated recurrent units (GRUs) \cite{cho2014properties, chung2014empirical}, have been applied to path-dependent materials modeling \cite{heider2020so}, including plastic composites \cite{mozaffar2019deep}, visco-elasticity \cite{chen2021recurrent}, and homogeneous anisotropic hardening \cite{gorji2020potential}. RNN-based constitutive models have also been applied to accelerate multi-scale simulations with path-dependent characteristics \cite{wang2018multiscale,ghavamian2019accelerating,wu2020recurrent,logarzo2021smart,wu2022recurrent}. Recently, Bonatti and Mohr \cite{bonatti2022importance} proposed a self-consistent RNN for path-dependent materials such that the model predictions converge as the loading increment is decreased. However, these RNN-based data-driven constitutive models may not satisfy the underlying thermodynamics principles of path-dependent materials. In this study, we propose a thermodynamically consistent machine-learned ISV approach for data-driven modeling of path-dependent materials, which relies purely on measurable material states. The first principle of thermodynamics is integrated into the model architecture, whereas the second principle is enforced by a constraint on the network parameters. 
In the proposed model, an RNN is trained to infer intrinsic ISVs from its hidden (or memory) state that captures essential history-dependent features of data through a sequential input. The RNN describing the evolution of the data-driven machine-learned ISVs follows the second law of thermodynamics. In addition, a DNN is trained simultaneously to predict the material energy potential given strain, ISVs, and temperature (for non-isothermal processes). Further, model robustness and accuracy are enhanced by introducing \textit{stochasticity} to the training inputs to account for uncertainties of input conditions in testing. The remainder of this paper is organized as follows. The background of thermodynamics principles is first introduced in Section \ref{sec:thermodynamics}. In Section \ref{sec:rnn}, DNNs and RNNs are introduced and their applications to path-dependent materials modeling are discussed. Section \ref{sec:tcrnn} introduces the proposed thermodynamically consistent machine-learned ISV approach for data-driven modeling of path-dependent materials, where two thermodynamically consistent recurrent neural networks (TCRNNs) are discussed. Finally, in Section \ref{sec:result}, the effectiveness and generalization capability of the proposed TCRNN models are examined by modeling an elasto-plastic material and undrained soil under cyclic shear loading. A parametric study is conducted to investigate the effects of the number of RNN steps, the internal state dimension, the model complexity, and the strain increment on the model performance. Concluding remarks and discussions are summarized in Section \ref{sec:conclusion}.