to be able to automatically number pages with a preset. CorelDRAW can already do what I need using the PageNumbering macro, but I have to enter the individual settings for each template every time. I simply want the included PageNumbering macro, but with the option to save 5 presets. If possible, I would like the option of saving the presets to a file that can be shared with other workstations so I don't have to recreate them manually.

Note that this technique requires that the following paragraphs in the outline begin with numbers, not letters, chosen with Insert, Outline…, such as the “Paragraph” or “Legal” outline. If a letter is used, the tally will be a letter, too.

In a perfect world, cut and stack is largely controlled by a commercial printer’s RIP imposition software such as EFI Fiery or AGFA Apogee. However, I am aware that this software is out of reach of many people, and I do appreciate the lower-tech ways of accomplishing the same task.

After SQL Server restarts and a sequence number is needed, the starting number is read from the system tables (23). The cache amount of 15 numbers (23-38) is allocated to memory and the next non-cache number (39) is written to the system tables.

Note: By default, Word follows the number with a tab that is set at 0.25″ after the number. If you set your Indent Position to be larger than the Number Position, this will control the position of the tab after the number as well as the text that follows it. But if you want text to wrap back to the margin, the default 0.25″ tab will appear.

Value used to increment (or decrement if negative) the value of the sequence object for each call to the NEXT VALUE FOR function. If the increment is a negative value, the sequence object is descending; otherwise, it is ascending. The increment cannot be 0. The default increment for a new sequence object is 1.

Modern VINs are based on two related standards, originally issued by the International Organization for Standardization (ISO) in 1979 and 1980: ISO 3779[4] and ISO 3780,[5] respectively. Compatible but different implementations of these ISO standards have been adopted by the European Union and the United States, respectively.[6]

I have an opportunity to make a good impression at a new job, but I've never had the initiative to learn how to write a LISP routine. At my work we make walk-in coolers and freezers. These coolers are modular, with a standard wall piece measuring a certain length, and these wall sections have to be numbered sequentially. Is there any way to automate this? Sometimes we get change orders from the client, and that means going back and manually changing multiple texts (sometimes in the hundreds). I thought that if there were a way these wall sections could be scheduled somehow, then when a new wall panel is inserted at the beginning of the sequence it would update everything after it automatically.

At this point, you're ready to load your card stock into the printer and print the tickets. I recommend that you preview the print job before sending it to the printer to make sure everything's in order. To print the tickets, do the following:

These numbers are updated when you sort them with your data. The sequence may be interrupted if you add, move, or delete rows. You can manually update the numbering by selecting two numbers that are in the right sequence and then dragging the fill handle to the end of the numbered range.

Create the text frame for the numbers in InDesign; if there is more than one ticket per page, link the frames.
Place the text file into the first frame with a Shift-click to autoflow it and create new pages.

This video tutorial demonstrates how easy it is to automatically number raffle tickets in Microsoft Word. Simply download a raffle ticket template, download our number series file, and use Mail Merge to create the ticket numbers.

Layers provide an effective way to organize the objects you create with CorelDRAW. By using layers, you can reduce your work time, make it easier to handle the job at hand, and increase the accuracy of your designs.

Note: Quick Parts is one way to store and use the SEQ fields — you can use this Quick Parts method and/or the AutoCorrect method described earlier in this article. As far as I can tell, you do not have to have the field codes displayed to add these fields as Quick Parts, but it may be easier to see which is which if you do.

## “Non-consecutive page numbers in Word: print sequentially numbered labels in Word”

In diagonal reading, the reader reads only selected passages of a text, such as titles, the first sentence of each paragraph, typographically emphasized words (bold, italics), important paragraphs (summary, conclusion), and the context around important terms such as formulas («2x+3=5»), lists («first», «second», …), conclusions («therefore»), and technical terms («fixed costs»). It is called diagonal reading because the gaze moves quickly from the top-left corner to the bottom-right corner. In this way it is possible to read a text very quickly, at the expense of detail and comprehension of the text, in what is known as saccadic movement. Reading speed is relatively constant from one individual to another, but while a slow reader takes in five to ten letters at a time, a practiced reader can take in roughly twenty; reading speed is also affected by the work of identifying the words in question, which varies depending on whether or not the reader already knows them.

Exercise: create a document with portrait orientation and with left and right margins of 2.5 cm and top and bottom margins of 3 cm. Insert a Next Page section break, and format the new section with landscape orientation and with left and right margins of 4 cm and top and bottom margins of 3.5 cm.
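For the Mail Merge and data-merge approaches above, the "number series file" is nothing more than a one-column list of sequential values, one per ticket. The Python sketch below writes such a file; the file name, starting number, count, and zero-padding are illustrative assumptions rather than anything taken from the tutorials quoted above.

```python
# Write a one-column number series file that can be used as the data
# source for a Word Mail Merge or an InDesign data merge.
# File name, range, and padding width are illustrative choices.

START = 1      # first ticket number
COUNT = 500    # how many tickets to number
WIDTH = 4      # zero-pad, e.g. 0001, 0002, ...

with open("ticket_numbers.csv", "w", encoding="utf-8") as f:
    f.write("TicketNumber\n")            # header row = merge field name
    for n in range(START, START + COUNT):
        f.write(f"{n:0{WIDTH}d}\n")
```

Each row then becomes one ticket number when the file is attached as the merge data source.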
For example, to create a “Figure A” effect, type the word “Figure” and a space before the numbering metacharacters (for example, Figure ^#.^t). This adds the word “Figure” followed by a sequential number (^#), a period, and a tab character (^t).

For sequential interdependence, use a task force or integrators for greater horizontal coordination. At a higher level of interdependence (reciprocal interdependence), the most appropriate choice might be a structure …

Greetings. In this article you comment that “'Variable Data Printing' gives us the ability to print varied information automatically, such as a sequence of numbers or a list of names, and without limitation it could be names, numbers, photographs, barcodes, or any other type of information simultaneously.” My question is how to add images with this type of data input in Corel. Who can help me? I made my invoice very
a function of the energy. We can see that the secondary peak of irreversible transitions at higher energies corresponds to stresses $\sigma$ close to and above the yield stress (the stress at the yielding/irreversibility transition), which is $\sigma_y \sim 2.5$ in simulation units, and that reversible events are much scarcer in this region. In Fig~\ref{Fig2a}(b) we compare the energy drops due to reversible and irreversible plastic events. We can see that both exhibit power-law behavior. The irreversible events, while showing a strong cutoff, give rise to much larger energy drops in general and correspond to large collective particle rearrangements (avalanches). In Fig~\ref{Fig2b}(c,d) we show a density plot of the stress drops $\Delta\sigma$ and stresses $\sigma$ associated with reversible and irreversible plastic events, respectively. The figure reveals that the events accompanied by large stress avalanches are concentrated close to and above the yield stress and exhibit a secondary peak in the density plot of the irreversible events. While it is obvious that close to yielding the system experiences a large number of large irreversible events, the figure also clearly shows the presence of a large number of irreversible events with small stress drops at stresses much below yield. In the following we shall argue that these events play a role in the transient dynamics observed in simulations under oscillatory shear at sub-yield strain amplitudes \cite{fiocco2013oscillatory,regev2013onset,kawasaki2016macroscopic,regev2018critical}. \begin{figure*}[t!] \begin{center} \includegraphics[width=2\columnwidth]{Fig4.pdf} \caption{ (a) The SCC size distribution taken from the full $8$ catalogs (in blue) exhibits a heavy tail. The solid line is a power law with exponent $2.67$ and serves as a guide to the eye. Colors other than blue correspond to distributions derived from the same catalogs but only up to a maximal generation number of $24, 28, 32, 36$, demonstrating that the distribution of SCC sizes becomes stable for networks significantly smaller than the ones used to calculate the exponent. (b) Plastic deformation history leading from the initial state $O$ to a mesostate $A$ of the catalog after $g = 40$ plastic events. Each vertical blue line is an intermediate mesostate $P$ with its stability range $(\gamma^-[P],\gamma^+[P])$, while the horizontal line segments in black ($\mathbf{U}$) and red ($\mathbf{D}$) that connect adjacent mesostates indicate the strains at which the corresponding plastic events occurred. For each mesostate $A$ and deformation history, we can identify the largest and smallest strains under which a $\mathbf{U}$-, respectively $\mathbf{D}$-transition occurred, $\gamma^\pm_{\rm max}$, as illustrated by the extended horizontal lines. (c) Deformation path history dependence of $k_{\rm REV}$: each dot represents a mesostate of catalog $\#1$. The coordinates of each dot represent the largest positive and negative strains $\gamma_{\rm max}^\pm$, {\em cf.} panel (b), that were required to reach a specific mesostate, while their color represents how many reversible transitions $k_{\rm REV}=0, 1$, or $2$, go out of it, as indicated in the legend. The locations of the yield strain in both the positive and negative directions have been marked by dotted vertical and horizontal lines. The region highlighted by the light blue triangle contains the set of all mesostates that can be reached without ever applying a shear strain whose magnitude exceeds $\vert \gamma_{\rm max}^\pm \vert = 0.085$. 
The prevalence of mesostates with $k_{\rm REV} = 2$ (blue dots) inside this region implies that mesostates reached by applying strains whose magnitudes remain below $0.085$ undergo predominantly reversible transitions, {\em i.e.} lead to mesostates that are part of the same SCC. (d) Scatter plot of the mesostates with $\vert \gamma_{\rm max}^\pm \vert \le 0.085$ across the $5$ catalogs with $40$ or more generations. As was the case for the single data set shown in panel (c) of this figure, the region $\vert \gamma_{\rm max}^\pm \vert \le 0.085$ shows a high degree of reversibility across all $5$ catalogs: the region contains $9298$ mesostates, out of which $7728$ have $k_{\rm REV} = 2$ and $1194$ have $k_{\rm REV} = 1$ outgoing reversible transitions. Inset: mean SCC size that a mesostate belongs to, given that it is stable at some strain $\gamma$, calculated from all $8$ catalogs. Error bars represent the standard deviation of fluctuations around the mean. The figure shows that mesostates stable at large strains tend to belong to small SCCs. } \label{Fig3} \end{center} \end{figure*} \subsection{AQS transition graph topology} Fig~\ref{Fig3}(a) shows the size distribution of SCCs extracted from all eight catalogs. The solid line is a power law with exponent $2.67$ and serves as a guide to the eye. We estimated the power-law exponent and its uncertainty using the maximum-likelihood method described in \cite{clauset2009power}, and by considering only the $24488$ SCCs with sizes $s_{\rm SCC} \ge s_{\rm min} = 4$. This choice was motivated by the empirical observation that small SCCs containing mesostates near the generation limit of the catalog are more likely to increase in size if the catalog is augmented by going to a higher number of generations. The exponent depends on the choice of cutoff $s_{\rm min}$: for $s_{\rm min} = 1, 2, 3$, and $4$, we obtain (number of data points indicated in parentheses) the exponents $2.033 \pm 0.003$ $(169049)$, $2.529 \pm 0.005$ $(81528)$, $2.60 \pm 0.01$ $(40021)$, and $2.67 \pm 0.01$ $(24488)$, respectively. The exponents for $s_{\rm min} = 2, 3$, and $4$ all fall into an interval between $2.5$ and $2.7$, while the exponent of $2.033$ obtained with the cut-off $s_{\rm min} = 1$ seems to be significantly different. In fact, as we will show shortly, close to yielding there is a proliferation of SCCs with size one, and this affects the estimate of the exponent. Thus the distribution of SCC sizes is broad, following a power-law $s_{\rm SCC}^{-\alpha}$, with an exponent of about $\alpha = 2.67$ and with the main source of uncertainty in $\alpha$ coming from the choice of the lower cut-off $s_{\rm min}$. Fig~\ref{Fig3}(a) also compares this distribution against the distributions obtained by limiting the catalogs to a maximal generation number. It is clear that the distribution does not change significantly. Next, we ask about the ``location'' of SCCs in the transition graph by looking for correlations between the plastic deformation history of a mesostate $A$ and the number of reversible transitions that go out of it, $k_{\rm REV}[A]$. Recall that each mesostate in our catalog is reached from the reference configuration $O$ by a sequence of $\mathbf{U}$- and $\mathbf{D}$-transitions. We call this the plastic deformation history path of $A$, as illustrated in Fig.~\ref{Fig3}(b). Additional details on deformation history are provided in Section \ref{supp:scc_identification} of the Appendix. 
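As an illustration of the two ingredients of this analysis (extracting SCC sizes from the transition graph and fitting a power-law exponent), the following Python sketch shows how such an estimate could be reproduced. It is not the code used for this work: the edge list is only a stand-in for an actual catalog, and the estimator is the continuous-data maximum-likelihood form, whereas \cite{clauset2009power} also give a discrete estimator that differs slightly.
\begin{verbatim}
# Illustrative sketch (not the paper's code): SCC sizes of a directed
# transition graph and a maximum-likelihood power-law exponent estimate.
import math
import networkx as nx

# Stand-in edge list; in practice these are the U/D transitions of a catalog.
edges = [("O", "A"), ("A", "B"), ("B", "A"), ("B", "C"),
         ("C", "D"), ("D", "C"), ("D", "E")]
G = nx.DiGraph(edges)
scc_sizes = [len(c) for c in nx.strongly_connected_components(G)]

def powerlaw_mle(sizes, s_min):
    """Continuous-approximation MLE for P(s) ~ s^(-alpha), s >= s_min;
    the standard error is (alpha - 1)/sqrt(n)."""
    tail = [s for s in sizes if s >= s_min]
    n = len(tail)
    alpha = 1.0 + n / sum(math.log(s / s_min) for s in tail)
    return alpha, (alpha - 1.0) / math.sqrt(n)

# On the real catalogs one would call powerlaw_mle(scc_sizes, s_min=4).
print(sorted(scc_sizes, reverse=True))
\end{verbatim}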
For each mesostate and deformation history path, we can identify the largest and smallest strains under which a $\mathbf{U}$-, respectively $\mathbf{D}$-transition occurred, $\gamma^\pm_{\rm max}$. These values are indicated in Fig~\ref{Fig3}(b) by the horizontal dashed lines. Fig~\ref{Fig3}(c) shows a scatter plot obtained from catalog $\#1$ of our data set. Here each dot corresponds to a mesostate $A$ that is placed at $(\gamma^-_{\rm max}[A],\gamma^+_{\rm max}[A])$. Since $\gamma^-_{\rm max}[A] < \gamma^+_{\rm max}[A]$, the dots are scattered above the central diagonal of the figure. The location of the yield strain $\gamma_y = 0.135$ of the sample is indicated by the dashed vertical and horizontal lines. We have color-coded the mesostates according to the number $k_{\rm REV}[A]$ of outgoing reversible transitions, with blue, light red and gray corresponding to $2$, $1$, and $0$ possible reversible transitions, respectively. Note that multiple mesostates can have the same values of the extremal strains $\gamma^\pm_{\rm max}$ and hence will be placed at the same location in the scatter plot. In order to reveal correlations between the straining history and $k_{\rm REV}$, we have first plotted the data points for which $k_{\rm REV} = 2$, next those for which $k_{\rm REV} = 1$, and finally, $k_{\rm REV} = 0$. In spite of this over-plotting sequence, there appears a prominent central ``blue'' region that is bounded by $\gamma^-_{\rm max} \ge -0.085$ and $\gamma^+_{\rm max}
12:21 AM Wazzap, y'all? Yo, @FenderLesPaul, what subfield are you interested in? I can hook you up with some sweet labs, you might say. Hi pal :-) @guest Hi. How are you? @guest Incredibly tired, but pretty good. Very exciting results in the lab. Cool @DanielSank 12:29 AM What are you up to, @guest? @BernardMeurer? I'm reading some stuff on bytes and strings on Python 3 :p And preparing to go out for some 'aided recovery' of my application denial @BernardMeurer Ah yes indeed. I think FederLesPaul's specialization is quantum gravity. @guest Eek. Well, he ought to meet up with D. Marolf. @guest I'm pretty sure @DanielSank's specialization is pure and applied magics 12:31 AM @BernardMeurer Heh, why do you say that? Because he's kidding :P @DanielSank Because you work with some really amazing stuff that is just magical (in the sense of enchanting and surprising rather than of it being voodoo) @guest I'm pretty sure it's the chinese that manufacture the Kindle Dammit, your edit killed the joke :D It's all good in da hood. By the way, @DanielSank, did that number include area code? I never know @BernardMeurer Yes. 12:41 AM Alrighty, awesome. Maybe I get into UCSB and then me, you, and @FenderLesPaul can be bffs and go for strolls on the plentiful daisy fields as we sing and jump @BernardMeurer Perhaps. I gotta quit reading books to my cousin. They're making my imagination weird @BernardMeurer The problem with going for strolls in and around UCSB, is that many of the best walking routes take you past the lagoon or the slough. Either way you might be in for a rather strong odor (depending on the recent weather). I like to walk so I just got used to it. Used to know all the path around the lagoon and out near campus point. Also where you could scale the cliffs if you (a) had misjudged the tides of (b) the campus police might misconstrue your presence on the beach near a party as "minor in possession". Not that the latter applies to you, but it applied to me for a while. 12:59 AM @guest five or so Hahahaha thanks for the tips @dmckee; what about daisy fields tho? Isn't reading to him or her considered home schooling? @BernardMeurer The Chancellor's Lawn sometimes has dandelions. Does that count? 1 hour later… 2:08 AM @DanielSank quantum gravity I'll be working with a person in KITP (ideally Don Marolf) @FenderLesPaul I believe you wanted to ping DS there. (Use the little arrow! :P) Oh oops 2:42 AM @FenderLesPaul So you're definitely coming here? @DanielSank The only school I'd consider over UCSB is Princeton if I get in which basically means yeah I am coming there @FenderLesPaul I guess Princeton is a good option because of your subfield? yeah @FenderLesPaul Screw that. Surfing. Haha totally 2:54 AM Friend of mine used to bring a big wheely black board to the beach and do his QFT homework. Try that in Princeton. To be straight for a moment - KITP is pretty neat. There are a lot of conferences and people going through there all the time. If you don't mind I'd like to give you two bits of advice: 1. wherever you go, make sure there is more than one person you'd be interested to work with. 2. Make sure you go somewhere that the students are happy. By the way, assuming your user name reflects your interests, if you do come here you have to teach me to play rock guitar, and I'll teach you classical. @DanielSank Thanks for the advice! I really appreciate it :) as for people to work with, yeah that's one reason UCSB was my top choice Yes, item #1 is important. 
since there are around 4 or 5 people I'd love to work for That's good. mainly Don, Gary Horowitz, Joe Polchinski, and David Bernstein 2:59 AM Good for you. Those are all excellent researchers and nice people. I'm told the students are happy there; a friend of mine who graduated from Cornell 2 years ago is there now working for Joe and loves it And yeah I would love to teach you rock guitar haha that sounds dope Yeah, UCSB is pretty good on the grad student happiness scale. This sounds dumb, but sunshine really does help. Yeah I'd imagine 3:00 AM Princeton is obviously an awesome department too. Don't get me wrong. Hehe 3 hours later… 5:49 AM WAIT @DanielSank Are you into classical? @BernardMeurer Yes. That's my favorite piece of music. ^ I haven't played in years though, so I'm pretty terrible at the moment. It's on my todo list to play more and regain my former glory. 6:19 AM @FenderLesPaul Well then, looks like you, me, and @DanielSank will have to meet up in person, since we're all gonna be there ;) 2 hours later… 7:59 AM So, would anyone else prefer if we taught radians in school instead of degrees? 8:27 AM @SirCumference: We were taught radians prior to trigonometry ; our book had a dedicated chapter on circular measure problems. My elementary school math teacher didn't know what a radian was... It blows my mind It was included in our std.x syllabus; it still is. @SirCumference: WTF!! You're the minority @SirCumference: Nay!! Yay, I'm afraid I've never met one person who was taught radians before degrees 8:30 AM @SirCumference: Add to the bonus, we were taught centesimal system also ;) In fact, why on Earth are we taught degrees? They're useless and arbitrary @SirCumference: Won't say such but yes much less useful than the radian ... You don't think they're arbitrary? Let me ask you this: why are there 360 degrees in a circle? There could have been 480, or 173, or 91004 They're baseless At least radians have an exact, indisputable definition @SirCumference: why on Earth, are we taught Newtonian Mechanics? They're useless in the long run! We could have learned Hamiltonian Formalism, then Special Relativity ;P Oh boy, you missed a long conversation I argued against teaching Newtonian mechanics ### Newtonian vs Relativistic physics Jan 29 at 0:01, 1 hour 1 minute total – 213 messages, 8 users, 4 stars Bookmarked Jan 29 at 1:03 by Sir Cumference 8:34 AM @SirCumference: Jaws dropped!!! So yeah, an hour long conversation... But I can understand Newtonian mechanics. GR and QFT are too complicated to teach most high school students And Newtonian mechanics gives good approximations at non-relativistic speeds Yet things like degrees and script handwriting are useless Why are we teaching this? @SirCumference: even if we are taught, there must be at some point, we need to be taught circular measure and it's good to get this topic included in the syllabi of our country. Degrees? They're infinitely better exaggeration But you know what I mean 8:41 AM @SirCumference:We are consuming degrees from std.iv days. But at last in std.ix, we got to learn the circular measure; some intricate problems and yeh we excel at it and from that day, we hardly face degree in our curriculum. come to Ind;P I know, but we shouldn't even be teaching degrees They should just die out @SirCumference: They are indeed arbitrary but can't be useless! If it gets abolished, then so do Imperial system of measures. Imperial is different but isn't necessarily worse Crap @SirCumference: Yes. Not worse. 
But degrees are just nonsensical 8:43 AM Worst!!! They have no base Agreed. 9:37 AM @BernardMeurer: Heard Coldplay's Hymn for the Weekend? 1 hour later… 10:43 AM @SirCumference: Scientific fact: If you took all the veins from your body and laid them end to end, you would die. Splendid!! The most interesting fact, I've ever known. 2 hours later… 12:44 PM @SirCumference Imperial is necessarily worse 1:10 PM @DavidZ are you around? Listening to your song now @user36790 @BernardMeurer: Imperial sucks!! @BernardMeurer I suppose so @DavidZ If someone is a partner in a company, could one say they are self employed? I have no idea @BernardMeurer: where is our 10yr old 0celo?? 1:23 PM @user36790 Banned for the weekend @DavidZ darn it @BernardMeurer: WTF!! Why?? When??? Okay! no reason @user36790 Why: None of our business, he was doing something wrong When: A day ago, or so @David Z: Can I ask you a question? ........... 1:27 PM @user36790 No. No questions allowed. In fact for asking that one you go directly to jail, do not pass go, do not collect $200 :-P Seriously though, (1) yes (2) I do agree with Bernard's advice :O Okay, let you take an isolated gas system in a container of volume$V$;$N$and$E$are number of particles and energy of the system-both are fixed. During the equilibrium, what would
z\sim\int_{-\infty}^0 \frac{e^{\alpha |z|}}{(1+|z|^3)^\beta}\md z=\infty. \] Thus $f(T,\mu)$ is not defined. \end{remark} For this specific problem, define \begin{equation} \cP_G(t)=\left \{\mu \mathrm{\ is\ Gaussian,\ with\ Var}(\mu)<\frac{\beta\Lambda(t)}{2(1-\beta)}\right\}. \end{equation} \begin{lemma} For any $0\leq t_1<T$, $\cP_G(t_1)$ is $\theta'$-invariant locally at any $t\in [0,T)$, for any wealth independent strategy $\theta'$. \end{lemma} \begin{proof} If the strategy is independent from wealth, $\xi\sim \mu\in \cP_G(t_1)$, we have \[ X^{t,\xi,\theta'}_r=\xi+\Theta^{\theta'}(r)+\sqrt{\Sigma^{\theta'}(r)}\eta \] with continuous $\Theta^{\theta'}(\cdot)$ and $\Sigma^{\theta'}(\cdot)$ satisfying $\Theta^{\theta'}(t)=\Sigma^{\theta'}(t)=0$ and $\eta\sim N(0,1)$, independent of $\xi$. As an example, if $\theta'=\theta$ we have $\Theta^{\theta'}(\cdot)=\Gamma(t)-\Gamma(\cdot)$, $\Sigma^{\theta'}(\cdot)=\Lambda(t)-\Lambda(\cdot)$. Therefore we conclude Var$(X^{t,\xi,\theta'}_r)=\mathrm{Var}(\mu)+\Sigma^{\theta'}(r)$. Because $\mathrm{Var}(\mu)<\frac{\beta\Lambda(t)}{1-\beta} $, it is clear that this will also hold for distribution of $X^{t,\xi,\theta'}$ once we choose $r$ sufficiently close to $t$. \end{proof} The following result leads us to the equilibrium master equations. \begin{proposition}\label{RDUTLderivative} With $f$ defined in (\ref{fdefRDUT}), we have for any $0\leq t_0<t_1<T$, $\epsilon_0>0$ sufficiently small, $f\in W^{1,1}([t_0,t_1-\epsilon_0]\times \cP_G(t_1);H^1)$ and for any $\mu \in \cP_G(t_1)$, \begin{align} &\partial_{\mu}f(t,\mu)(y)=E_{\eta\sim N(0,1)}w'\left(p_{\mu}(t,\sqrt{\Lambda(t)}\eta+\Gamma(t)+y)\right )e^{-\alpha(\sqrt{\Lambda(t)}\eta+\Gamma(t)+y)},\\ &\partial_v\partial_{\mu}f(t,\mu)(y)=E_{\eta\sim N(0,1)}w'\left( p_{\mu}(t,\sqrt{\Lambda(t)}\eta+\Gamma(t)+y)\right )\eta e^{-\alpha(\sqrt{\Lambda(t)}\eta+\Gamma(t)+y)} \end{align} \end{proposition} \begin{proof} See Appendix \ref{proofRDUT}. \end{proof} As a direct consequence, when $\mu=\delta_x$, we have \begin{align*} &p_{\delta_x}(t,z)=1-\Phi\left(\frac{z-x-\Gamma(t)}{\Lambda(t)}\right),\\ &\partial_{\mu}(t,\delta_x)(x)=E_{\eta\sim N(0,1)}w'\left(1-\Phi(\eta)\right)e^{-\alpha\left(\sqrt{\Lambda(t)}\eta+\Gamma(t)+x\right )},\\ &\partial_v\partial_{\mu}(t,\delta_x)(x)=\frac{1}{\sqrt{\Lambda(t)}}E_{\eta\sim N(0,1)}w'\left(1-\Phi(\eta)\right)\eta e^{-\alpha\left(\sqrt{\Lambda(t)}\eta+\Gamma(t)+x\right )}. \end{align*} Here and afterwards, $\Phi$ is the cumulative distribution function of standard normal distribution. We see from (\ref{EME1}) and (\ref{EME2}) that \[ \theta(t)=\mathrm{argmax}_{\theta\in \mR}\left\{\theta \mu(t)\partial_{\mu}(t,\delta_x)(x)+\frac{1}{2}\theta^2\sigma(t)^2\partial_v \partial_{\mu}(t,\delta_x)(x)\right\}, \] and one necessary condition is \begin{equation}\label{thetadef} \theta(t)=\frac{-\mu(t)\partial_{\mu}f(t,\delta_x)(x)}{\sigma(t)^2\partial_v\partial_{\mu}f(t,x,\delta_x)(x)}=\frac{ \mu(t)}{\alpha\sigma(t)^2}\cdot \lambda(t), \end{equation} where \begin{align*} &\lambda(t)=\alpha^2\sqrt{\Lambda(t)}\frac{h(\sqrt{\Lambda(t)})}{h'(\sqrt{\Lambda(t)})},\\ &h(x)=E_{\eta\sim N(0,1)}w'(\Phi(\eta))e^{\alpha \eta x}, \end{align*} and $\Lambda$ satisfies the following equation: \begin{equation}\label{lambdaODE} \left\{ \begin{aligned} &\Lambda'(t)=-\alpha^2\left[\frac{\mu(t)}{\sigma(t)}\cdot \frac{h(\sqrt{\Lambda(t)})}{h'(\sqrt{\Lambda(t)})}\right]^2\Lambda(t),\\ &\Lambda(T)=0. \end{aligned} \right. 
\end{equation} To make $\theta(\cdot)$ defined in (\ref{thetadef}) an equilibrium, one also needs $\partial_v\partial_{\mu}(t,\delta_x)(x)< 0$, which is equivalent to $h'(\sqrt{\Lambda(t)})\geq 0$. This is assured under Assumption 4.2 in \citet{Hu2021}, which is satisfied by many well-known distortion functions. We omit the details here. \vskip 10pt \subsection{Dynamic mean-ES portfolio choice}\label{MES} In this subsection we present another example that is closely related to probability distortion, which we call dynamic mean-ES (MES, for short) portfolio choice. Here, ES stands for {\it expected shortfall}, which is now a popular risk measure. To be specific, for $\mu\in \cP_2(\mR)$ and $\alpha_0\in (0,1)$, we define \[ \ES_{\alpha_0}(\mu)=-\frac{1}{\alpha_0}\int_0^{\alpha_0}F_{\mu}^{-1}(\alpha)\md \alpha, \] where $F_{\mu}^{-1}$ is the right-continuous quantile function of $\mu$ \endnote{See e.g. footnote 5 of \citet{Xu2016} for the specific definition. Here we choose to define expected shortfall following \citet{He2015}. There are also other definitions of expected shortfall, e.g., in \citet{Zhou2017}. These differences are not essential.}. The objective function is given by \[ g(\mu)= E_{\xi\sim \mu}\xi-\gamma \ES_{\alpha_0}(\mu), \] where $\gamma>0$ is the risk aversion coefficient, similar to the classical MV setting. That is to say, the agent faces a trade-off between the expected return and the risk, which is represented by the expected shortfall. Portfolio choice under such a criterion is extremely important in both academic research and industrial applications. The dynamic version of this problem, where time-inconsistency occurs and an intra-person equilibrium is desired, is new to the literature. Our approach turns out to be powerful for the study of this problem. To appropriately state the portfolio choice problem, we fix an $\underline{x}\leq 0$, which is the lowest wealth level allowed. We consider the {\it proportion of excess earnings} invested in the risky asset as the strategy. In other words, we consider the following dynamics instead of (\ref{wealth}): \begin{equation}\label{wealthp}\tag{\ref{wealth}'} \md X_t = \mu(t)\theta(t,X_t)(X_t-\underline{x})\md t +\sigma(t)\theta(t,X_t)(X_t-\underline{x})\md W_t. \end{equation} Denoting by $X^{t,x,\theta}$ the solution of (\ref{wealthp}), we choose the admissible set $\Ui$ as \[ \Ui=\left\{\theta\in \Ui_0:X^{t,x,\theta}_s>\underline{x} \mathrm{\ a.\ s.\ },\forall s\in [t,T],x>\underline{x} \right \}. \] If we further define $G=(\underline{x},\infty)$, it is clear that $G$ is a $(\underline{x},\Ui)$-allowed set. On the other hand, the underlying set of probability measures is chosen as \begin{equation}\label{MESPdef} \cP_{\MES}=\bigcup_{\delta>0}\cP_{\MES}^{\delta}, \end{equation} where \[ \cP_{\MES}^{\delta}=\left\{\mu\in \cP_2(\mR):E_{\xi\sim \mu}|\xi-\underline{x}|^{-2}<\delta^{-2}\right\}. \] For the candidate equilibrium strategy $\hat{\theta}(t,x)=\theta(t)$ (independent of wealth), we list the following notation, which depends on $\hat{\theta}$, although we choose not to write this dependence explicitly: \begin{align*} &X^{t,\xi}_T=\underline{x}+(\xi-\underline{x})e^{\Gamma_1(t)+\sqrt{\Lambda_1(t)}\eta},\ \eta\sim N(0,1)\ \ \mbox{\ being independent\ of \ } \xi,\\ &\Gamma_1(t)=\int_t^T\big [\mu(s)\theta(s)-\frac{1}{2}\theta(s)^2\sigma(s)^2\big ]\md s,\\ &\Lambda_1(t)=\int_t^T\theta(s)^2\sigma(s)^2 \md s. \end{align*} It is clear that $\cP_{\MES}^{\delta}$ is locally invariant under any wealth-independent strategy $\theta$. 
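As a purely numerical illustration of the $\ES_{\alpha_0}$ functional defined above (a side remark, not part of the derivation that follows), the integral of the quantile function can be approximated by averaging the lowest $\alpha_0$-fraction of a sample and flipping the sign. The following Python sketch is ours, and the distribution, sample size, and $\alpha_0$ are arbitrary choices.
\begin{verbatim}
# Monte Carlo approximation of ES_{alpha_0}(mu) = -(1/alpha_0) *
# \int_0^{alpha_0} F_mu^{-1}(a) da for a standard normal mu.
import numpy as np

rng = np.random.default_rng(0)
alpha0 = 0.05
sample = rng.standard_normal(100000)    # draws from mu

k = int(np.ceil(alpha0 * sample.size))  # observations in the lower alpha0 tail
tail = np.sort(sample)[:k]              # empirical lower-quantile region
es = -tail.mean()                       # approx. 2.06 for N(0,1), alpha0 = 0.05
print(es)
\end{verbatim}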
Therefore, by Theorem \ref{EMEthm}, (\ref{EME2}) can be used to determine $\theta(t)$. We consider the auxiliary function : \begin{equation}\label{MESfdef} f(t,\xi)=EX^{t,\xi}_T+\frac{\gamma}{\alpha_0}\int_0^{\alpha_0}F^{-1}_{X^{t,\xi}_T}(\alpha) \md \alpha. \end{equation} It is seen that $f$ is law-invariant, hence can be seen as a functional with distribution argument. Therefore we can equivalently write $f=f(t,\mu)$. By direct calculation, we have \begin{equation}\label{FXTdef} F_{X^{t,\xi}_T}(z)=E\left[1-H(t,\xi,z)\right], \end{equation} with \begin{equation}\label{Hdef} H(t,y,z)=\left\{ \begin{aligned} &\left[1-\Phi\left(\frac{\log(z-\underline{x})-\log(y-\underline{x})-\Gamma_1(t)}{\sqrt{\Lambda_1(t)}}\right) \right]I_{\{y>\underline{x}\}}, &z>\underline{x},\\ &I_{\{y\leq \underline{x}\}}, &z=\underline{x},\\ &I_{\{y\geq \underline{x}\}}+\Phi\left(\frac{\log(\underline{x}-z)-\log(\underline{x}-y)-\Gamma_1(t)}{\sqrt{\Lambda_1(t)}}\right) I_{\{y<\underline{x}\}},&z<\underline{x}. \end{aligned} \right. \end{equation} The following proposition leads us to the highly nonlinear ODE determining $\theta(t)$: \begin{proposition}\label{MESprop} With $f$ defined in (\ref{MESfdef}), for any $\delta>0$ and sufficiently small $\epsilon>0$, we have $f\in W^{1,1}([t,t+\epsilon]\times\cP^{\delta}_{\MES};H^1)$, and for any $\xi\sim \mu\in \cP_{\MES}$, we have \begin{align} &\partial_{\mu}f(t,\mu)(y)=Ee^{\Gamma_1(t)+\sqrt{\Lambda(t)}\eta}+\frac{\gamma}{\alpha_0}\int_{-\infty}^{F^{-1}_{X^{t,\xi}_T}(\alpha_0)}\partial_y H(t,y,z)\md z,\label{MESpartialmuf}\\ &\partial_v\partial_{\mu}f(t,\mu)(y)=\frac{\gamma}{\alpha_0}\int_{-\infty}^{F^{-1}_{X^{t,\xi}_T}(\alpha_0)}\partial_{yy} H(t,y,z)\md z. \label{MESpartialvpartialmuf} \end{align} In particular, if we take $\mu=\delta_x$ for $x>\underline{x}$, then \begin{align} &\partial_{\mu}f(t,\delta_x)(x)=Ee^{\Gamma_1(t)+\sqrt{\Lambda(t)}\eta}+\frac{\gamma}{\alpha_0}\int_{-\infty}^{\Phi^{-1}(\alpha_0)}e^{\sqrt{\Lambda_1(t)}z+\Gamma_1(t)}\Phi'(z)\md z,\label{MESpartialmufx}\\ &\partial_v\partial_{\mu}f(t,\delta_x)(x)=\frac{\gamma}{\alpha_0(x-\underline{x})}\int_{-\infty}^{\Phi^{-1}(\alpha_0)}\left[\frac{z}{\sqrt{\Lambda_1(t)}}-1\right]e^{\sqrt{\Lambda_1(t)}z+\Gamma_1(t)}\Phi'(z)\md z.\label{MESpartialvpartialmufx} \end{align} \end{proposition} \begin{proof} See Appendix \ref{MESproof}. \end{proof} To find equilibrium strategy, for $x>0$, we define \begin{align*} &F_1(x)=E_{\eta\sim N(0,1)} e^{\eta x}=e^{x^2/2},\\ &F_2(x)=\int_{-\infty}^{\Phi^{-1}(\alpha_0)}e^{xz}\Phi'(z)\md z=e^{x^2/2}\Phi(\Phi^{-1}(\alpha_0)-x),\\ &F_3(x)=\int_{-\infty}^{\Phi^{-1}(\alpha_0)}\left[\frac{z}{x}-1\right]e^{xz}\Phi'(z)\md z=-\frac{1}{x}e^{x^2/2}\Phi'\left(\Phi^{-1}(\alpha_0)-x \right). \end{align*} Then, by (\ref{EME2}), $\theta$ should satisfy: \begin{equation}\label{MESsolution} \theta(t)=-\frac{\mu(t)(x-\underline{x})}{\sigma(t)^2(x-\underline{x})^2}\frac{\partial_{\mu}f(t,\delta_x)(x)}{\partial_v\partial_{\mu}f(t,\delta_x)(x)}=-\frac{\mu(t)}{\sigma(t)^2}\frac{F_1(\sqrt{\Lambda_1(t)})+\frac{\gamma}{\alpha_0}F_2(\sqrt{\Lambda_1(t)})}{\frac{\gamma}{\alpha_0}F_3(\sqrt{\Lambda_1(t)})}. 
\end{equation} Rephrasing this as an equation for $\Lambda_1(t)$, and using the definition of $\Lambda_1(t)$, we have \begin{equation}\label{MESODE} \Lambda_1'(t)=-\Lambda_1(t)\left[\frac{\mu(t)}{\sigma(t)}\cdot \frac{1+\frac{\gamma}{\alpha_0}\Phi(\Phi^{-1}(\alpha_0)-\sqrt{\Lambda_1(t)})}{\frac{\gamma}{\alpha_0}\Phi'(\Phi^{-1}(\alpha_0)-\sqrt{\Lambda_1(t)})} \right]^2 \end{equation} with terminal condition $\Lambda_1(T)=0$. By Theorem \ref{EMEthm}, if (\ref{MESODE}) admits a positive solution $\Lambda_1(t)$, then the dynamic MES problem has an equilibrium solution given by (\ref{MESsolution}). Otherwise, the dynamic MES problem does not have an equilibrium strategy of the form (\ref{MESsolution}). In fact, it is highly likely that (\ref{MESODE}) does not have a positive solution (the zero solution being unique) because, unlike in (\ref{lambdaODE}), the right-hand side is non-singular here, i.e., the denominator is bounded away from 0. We choose to leave the details for future study. \begin{comment} To be specific, we can write \begin{align*} &\hat{\theta}(t,x)=\theta(t)(x-\underline{x}),\\ &\theta(t)=\frac{\mu(t)}{\sigma(t)^2}\lambda(t),\\ &\lambda(t)=\frac{F_1(\Lambda_1(t))+\frac{\gamma}{\alpha_0}F_2(\Lambda_1(t))}{-F_3(\Lambda_1(t))}. \end{align*} Remarkably, even in the constant investment opportunity case, the equilibrium strategy is intrinsically variant from time to time. In Figure, a simple numerical experiment shows how $\lambda(\cdot)$ is influenced by risk aversion $\gamma$ and the ES level $\alpha_0$. \end{comment} \begin{remark} Following the same procedure, but appropriately revising the formulation of the problem, it can be verified that, intrinsically, the dynamic MES problem does not admit an equilibrium strategy that invests a fixed {\it amount} of money in the risky asset. \end{remark} \subsection{Nonlinear functions of expectations}\label{nonlinearexpectation} In this subsection we consider a specific form of reward function with general controlled dynamics. Let \[ g(\mu)=E_{\xi\sim \mu}g(\xi)+ F(E_{\xi\sim \mu}\xi), \] where $g\in C^0(\mR)$, $F\in C^1(\mR)$, and the dynamics of $X$ are as in (\ref{dynamics}). With the convention in Section \ref{ProblemFormulation}, we have the auxiliary function $f$ taking the form \begin{equation}\label{nonlinearfdef} f(t,\mu)=E_{\xi\sim \mu}g(X^{t,\xi}_T)+F(E_{\xi\sim \mu}X^{t,\xi}_T). \end{equation} In \citet{Bjork2017} and \citet{He2021}, as well as many other seminal papers on time-inconsistent control problems with nonlinear functions of expectations, the so-called extended HJB equations are derived, which incorporate the nonlinear function $F$ in the equations. One of the key points is to introduce the functions \begin{equation}\label{h12def} h_1(t,x)=\mE^{t,x,\hat{u}}g(X_T)=\mE g(X^{t,x,\hat{u}}_T),\ h_2(t,x)=\mE^{t,x,\hat{u}}X_T=\mE X^{t,x,\hat{u}}_T. \end{equation} In this section, we will show that our equilibrium master equation reduces to the extended HJB equation under weaker conditions. First of all, we shall introduce the set of measures we are interested in. Let \[ \cP_0=\left \{\mu\in \cP_2(\mR^d):\int_{\mR^d}|x|^q\mu(\md x)<\infty, \forall q\geq 2 \right \}=\bigcap_{q\geq 2}\cP_q(\mR^d). \] From standard SDE theory (with Lipschitz coefficients, e.g., in \citet{Yong1999}), we know $\cP_0$ is $([0,T'];u)$-invariant for any $u\in \Ui$ and $T'>0$. Now we state the assumptions we impose on $h_1$ and $h_2$, which are weaker than the classical assumptions because they require only integrability with respect to measures in $\cP_0$. 
Moreover, the approach is more flexible, since the desired invariant set $\cP_0$ can be further reduced in some specific problems. \begin{definition} For a function $f:[0,T]\times \mR^d\to \mR, (t,x)\mapsto f(t,x)$, we say that $f\in H^{1,2}_{\cP_0,\infty}$ if: \begin{itemize} \item[(1)] $f\in C^{0,2}([0,T]\times \mR^d)$, whose derivative with respect to $t$ is defined everywhere on $[0,T]$. \item[(2)] For any $t\in [0,T]$, $\mu\in \cP_0$, $f(t,\cdot),\partial_t f(t,\cdot),\partial_x f(t,\cdot),\partial_{xx}f(t,\cdot)\in L^2_{\mu}$; \item[(3)] For any bounded $\cK\subset \cP_0$ (in the sense of Remark \ref{boundedremark}), $\sup\limits_{\mu\in \cK,t\in [0,T]}\|f(t,\cdot)\|_{H^{1,2}_{\mu}}<\infty$, where \[ \|f(t,\cdot)\|^2_{H^{1,2}_{\mu}}\triangleq \|f(t,\cdot)\|^2_{L^2_{\mu}}+ \|\partial_tf(t,\cdot)\|^2_{L^2_{\mu}}+ \|\partial_xf(t,\cdot)\|^2_{L^2_{\mu}}+ \|\partial_{xx}f(t,\cdot)\|^2_{L^2_{\mu}}. \] \end{itemize} \end{definition} With $f$ defined in (\ref{nonlinearfdef}) and $h_1$, $h_2$ given in (\ref{h12def}), we have the following result, which leads us directly to the corresponding equilibrium master equation. \begin{proposition}\label{propnonlinear} Suppose $h_i\in H^{1,2}_{\cP_0,\infty}$ for $i=1,2$. Then $f\in W^{1,1}([0,T]\times \cP_0;H^1)$, and \begin{align} &\partial_{\mu}f(t,\mu)(y)=\partial_x h_1(t,y)+F'\left (E_{\xi\sim \mu}h_2(t,\xi)\right )\partial_x h_2(t,y),\label{Lderivativenonlinear1}\\ &\partial_v\partial_{\mu}f(t,\mu)(y)=\partial_{xx} h_1(t,y)+F'\left (E_{\xi\sim \mu}h_2(t,\xi)\right )\partial_{xx} h_2(t,y).\label{Lderivativenonlinear2} \end{align} \end{proposition} \begin{proof} See Appendix \ref{proofnonlinear}. \end{proof} In particular, if we take $\mu=\delta_x$ for $x\in \mR^d$, then (\ref{EME2}) becomes \[ \sup_{\bu\in \bU}\left\{ \cA^{\bu}_Xh_1(t,x)+F'(h_2(t,x))\cA^{\bu}_Xh_2(t,x) \right \}=0, \] which is seen to be exactly the extended HJB equation in \citet{He2021} (see (3.7) and (3.9) therein). Here $\cA^u_X$ is the generator associated with the diffusion $X$, i.e., \[ \cA^u_X = \partial_t + b^u(t,x)\partial_x+\frac{1}{2}\mathrm{Tr}\left(\Sigma^u(t,x)\partial_{xx}\right ). \] \begin{remark} It is not hard to see that if $f\in C^{1,2}$ and its derivatives up to order 2 in space, and up to order 1 in time, are all of polynomial growth in $x$, then $f\in H^{1,2}_{\cP_0,\infty}$. Therefore, we slightly reduce the assumptions needed, with the help of the weak It\^o calculus established in the present paper. \end{remark} \vskip 15pt \section{Conclusion}\label{conclusion} Problems with distribution-dependent rewards appear naturally and widely and
student returns an exam with something like “I have calculated the mass of the Earth to be 5.97366729297353452283 x 10^24 kg”, the grader knows immediately that the student doesn’t grok significant figures (the correct answer is “the Earth’s mass is 6 x 10^24 kg, why all the worry?”). With that in mind, the grader is now a step closer to making up a grade. The student, for their part, could have saved some paper.

Answer Gravy: You can think of a number with an error as being a “random variable”. Like rolling dice (a decidedly random event that generates a definitively random variable), things like measuring, estimating, or rounding create random numbers within a certain range. The better the measurement (or whatever it is that generates the number), the smaller this range. There are any number of reasons for results to be inexact, but we can sweep all of them under the same carpet by labeling them all “error”, keeping track only of their total size using (usually) standard deviation or variance. When you see the expression “3±0.1”, this represents a random variable with an average of 3 and a standard deviation of 0.1 (unless someone screwed up or is just making up numbers, which happens a lot).

When adding two random variables, (A±a) + (B±b), the means are easy, A+B, but the errors are a little more complex: (A±a) + (B±b) = (A+B) ± ?. The standard deviation is the square root of the variance, so a^2 is the variance of the first random variable. It turns out that the variance of a sum is just the sum of the variances, which is handy. So, the variance of the sum is a^2 + b^2 and (A±a) + (B±b) = A+B ± √(a^2+b^2).

When adding numbers using significant digits, you’re declaring that a = 0.5 x 10^-D1 and b = 0.5 x 10^-D2, where D1 and D2 are the number of significant digits each number has. Notice that if these are different, then the bigger error takes over. For example, $\sqrt{\left(0.5\cdot10^{-1}\right)^2 + \left(0.5\cdot10^{-2}\right)^2} = 0.5\cdot 10^{-1}\sqrt{1 + 10^{-2}} \approx 0.5\cdot 10^{-1}$. When the digits are the same, the error is multiplied by √2 (same math as the last equation). But again, sig figs aren’t a filbert brush, they’re a rolling brush. √2? That’s just another way of writing “1”.

The cornerstone of “sig fig” philosophy; not all over the place, but not super concerned with details.

Multiplying numbers is one notch trickier, and it demonstrates why sig figs can be considered more clever than being lazy normally warrants. When a number is written in scientific notation, the information about the size of the error is exactly where it is most useful. The example above of “1234 x 0.32” gives some idea of how the 10’s and errors move around. What that example blurred over was how the errors (the standard deviations) should have been handled.

First, the standard deviation of a product is a little messed up: $(A\pm a)(B\pm b) = AB \pm\sqrt{A^2b^2 + B^2a^2 + a^2b^2}$. Even so! When using sig figs the larger error is by far the more important, and the product once again has the same number of sig figs. In the example, 1234 x 0.32 = (1.234 ± 0.0005) (3.2 ± 0.05) x 10^2. So, a = 0.0005 and b = 0.05. 
Therefore, the standard deviation of the product must be: $\begin{array}{ll} \sqrt{A^2b^2 + B^2a^2 + a^2b^2} \\[2mm] = Ab\sqrt{1 + \frac{B^2a^2}{A^2b^2} + \frac{a^2}{A^2}} \\[2mm] = (1.234) (0.05) \sqrt{1.00067} \\[2mm] \approx(1.234)(0.05)\\[2mm] \approx 0.05 \end{array}$

Notice that when you multiply numbers, their error increases substantially each time (by a factor of about 1.234 this time). According to Benford’s law, the average first digit of a number is 3.440*. As a result, if you’re pulling numbers “out of a hat”, then on average every two multiplies should knock off a significant digit, because 3.440^2 ≈ 1 x 10^1.

Personally, I like to prepare everything algebraically, keep track of sig figs and scientific notation from beginning to end, then drop the last 2 significant digits from the final result. Partly to be extra safe, but mostly to do it wrong.

*they’re a little annoying, right?

## Q: If time slows down when you travel at high speeds, then couldn’t you travel across the galaxy within your lifetime by just accelerating continuously?

Physicist: Yup! But sadly, this will never happen.

This is a good news / really bad news situation. On the one hand, it is true (for all intents and purposes) that if you travel fast enough, time will slow down and you’ll get to your destination in surprisingly little time. The far side of the galaxy is about 100,000 lightyears away, so it will always take at least 100,000 years to get there. However, the on-board clocks run slower (from the perspective of anyone “sitting still” in the galaxy), so the ship and everything on it may experience far less than 100,000 years.

First, when you read about traveling to far-off stars you’ll often hear about “constant acceleration drives”, which are rockets capable of accelerating at a comfortable 1g for years at a time (“1g” means that people on the rocket would feel an acceleration equivalent to the force of Earth’s gravity). However! Leaving a rocket on until it’s moving near the speed of light is totally infeasible. A rocket capable of 1g of acceleration for years is a rocket that can hover just above the ground for years. While this is definitely possible for a few seconds or minutes (“retro rockets”), you’ll never see people building bridges on rockets, or hanging out and having a picnic for an afternoon or three on a hovering rocket. Spacecraft in general coast ballistically except for the very beginning and very end of their trip (excluding small corrections). For example, the shuttle (before the program was shut down) could spend weeks coasting along in orbit, but the main rockets only fire for the first 8 minutes or so. And those 8 minutes are why the shuttle weighs more than 20 times as much on the launch pad as it does when it lands.

The big exception is ion drives, but a fart produces more thrust than an ion drive (seriously) so… meh.

Rockets: in a hurry for a little while and then not for a long while.

In order to move faster, a rocket needs to carry more fuel, so it’s heavier, so it needs more fuel, etc. The math isn’t difficult, but it is disheartening. Even with antimatter fuel (the best possible source by weight) and a photon drive (exhaust velocity doesn’t get better than light speed), your ship would need to be 13 parts fuel to one part everything else, in order to get to 99% of light speed. 
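For the curious, that 13-to-1 figure follows from the standard relativistic rocket equation for a photon drive (exhaust velocity equal to the speed of light). Taking β = 0.99, the required ratio of initial to final mass is

$\frac{M_{\mathrm{initial}}}{M_{\mathrm{final}}} = \sqrt{\frac{1+\beta}{1-\beta}} = \sqrt{\frac{1.99}{0.01}} \approx 14.1$

which works out to roughly 13 parts fuel for every 1 part ship.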
That said, if somehow you could accelerate at a comfortable 1g forever, you could cross our galaxy (accelerating halfway, then decelerating halfway) in a mere 20-25 years of on-board time.  According to every one else in the galaxy, you’d have been cruising at nearly light speed for the full 100,000 years.  By the way, this trip (across the Milky Way, accelerate halfway, decelerate halfway, anti-matter fuel, photon drives) would require a fuel-to-ship ratio of about 10,500,000,000 : 1.  Won’t happen. The speed of light is still a fundamental limit, so if you were on the ship you’ll still never see stars whipping by faster than the speed of light (which you might expect would be necessary to cross 100,000 light years in only 25 years).  But relativity is a slick science; length contraction and time dilation are two sides of the same coin.  While everyone else in the galaxy explains the remarkably short travel time in terms of the people on the ship moving
\section{Introduction} Simplified Boardgames is a class of fairy-chess-like games, first introduced in~\cite{Bjornsson12Learning}, and slightly extended in~\cite{Kowalski15Testing} (see~\cite{Gregory15TheGRL} for an alternative extension). The class was developed for the purpose of learning the game rules through the observation of plays. The Simplified Boardgames language describes turn-based, two player, zero-sum games on a rectangular board with piece movements being a subset of a regular language. Here we provide a formal specification for Simplified Boardgames. Despite the fact that the class has been used in several papers, its formal grammar was still not clearly defined, and some issues were left ambiguous. Such a definition is crucial for further research concerning AI contests, procedural content generation, translations, etc. For comparison, Metagame system, which can be seen as Simplified Boardgames predecessor, had its grammar explicitly declared in~\cite{Pell92METAGAME}. \section{Syntax and semantics} In this section we present the formal grammar for Simplified Boardgames, inspired by the look of training records provided by Bj\"ornsson in his initial work \cite{Bjornsson12Learning}. The grammar construction is also affected by our experiences concerning Simplified Boardgames, especially in the domain of procedural content generation. The version presented differs only slightly comparing to the versions used in \cite{Kowalski15Testing,Kowalski15Procedural,Kowalski16Evolving}. \subsection{Grammar} The formal grammar in EBNF is presented in Figure~\ref{fig:grammar}. C-like comments can appear anywhere in the game definition: ``//'' starts a line comment and every next character in the line is ignored. ``/*'' starts a multiline comment and every character is ignored until the first occurrence of ``*/''. \begin{figure}[!ht] \centering \begin{grammar} <sbg> ::= `<\!\,<' <name> `>\!\,>' `<BOARD>' <board> `<PIECES>' <pieces> `<GOALS>' <goals> <name> ::= alphanumspace \{alphanumspace\} <board> ::= <nat> <nat> \{ <row> \} <row> ::= `|' \{"[.a-zA-Z]"\} `|' <pieces> ::= \{ "[A-Z]" <regexp> `&' \} <regexp> ::= <rsc> | <regexp> <regexp> | <regexp> `+' <regexp> | `(' <regexp> `)' [<power>] <rsc> ::= `(' <int> `,' <int> `,' <on> `)' [<power>] <power> ::= `^' <nat> | `^' `*' <on> ::= "[epw]" <goals> ::= <nat> `&' \{ <goal> `&' \} <goal> ::= `#' <letter> <nat> | `@' <letter> <squares> <letter> ::= "[a-zA-Z]" <squares> ::= <nat> <nat> \{ `,' <nat> <nat> \} \end{grammar} \caption{Formal grammar for Simplified Boardgames game rules.} \label{fig:grammar} \end{figure} The start non-terminal symbol is ``sbg''. The ``nat'' non-terminal stands for a natural number (thus a non-empty sequence of digits), while ``int'' stands for a signed integer (thus it is ``nat'' optionally preceded by ``-''). The ``alphanumspace'' non-terminal generates all alphanumerical characters or a space. \subsection{Example} An exemplary game called Gardner\footnote{\url{ http://en.wikipedia.org/wiki/Minichess#5.C3.975_chess}}, formatted according to Simplified Boardgames grammar is presented partially in Figure~\ref{fig:gardner}. It is $5\times 5$ chess variant proposed by Martin Gardner in 1969 and weakly solved in 2013 \cite{Mhalla13Gardner} -- the game-theoretic value has been proved to be a draw. The starting position looks as in the regular chess with removed columns $f$, $g$, $h$, and rows $3$, $4$, $5$. The rules are those of classical chess without the two squares move for pawns, en-passant moves and castling. 
Additionally, as a countermeasure for not supporting promotions, our implementation provides additional winning condition by reaching the opponent's backrank with a pawn. \begin{figure}[!ht] \centering \begin{verbatim} <<Simplified Gardner>> <BOARD> 5 5 |rnbqk| |ppppp| |.....| |PPPPP| |RNBQK| <PIECES> // P - pawn, R - rook, N - knight, B - bishop, Q - queen, K - king P (0,1,e) + (-1,1,p) + (1,1,p) & R (0,1,e)(0,1,e)^* + (0,1,e)^*(0,1,p) + (0,-1,e)(0,-1,e)^* + (0,-1,e)^*(0,-1,p) + (1,0,e)(1,0,e)^* + (1,0,e)^*(1,0,p) + (-1,0,e)(-1,0,e)^* + (-1,0,e)^*(-1,0,p) & N (2,1,e) + (2,-1,e) + ... + (-1,-2,p) & B (1,1,e) + (1,1,p) + (1,1,e)^2 + (1,1,e)(1,1,p) + (1,1,e)^3 + (1,1,e)^2(1,1,p) + (1,1,e)^4 + (1,1,e)^3(1,1,p) + ... + (-1,-1,e)^4 + (-1,-1,e)^3(-1,-1,p) & Q (0,1,e)(0,1,e)^* + (0,1,e)^*(0,1,p) + (0,-1,e)(0,-1,e)^* + (0,-1,e)^*(0,-1,p) + (1,0,e)(1,0,e)^* + (1,0,e)^*(1,0,p) + (-1,0,e)(-1,0,e)^* + (-1,0,e)^*(-1,0,p) + (1,1,e)(1,1,e)^* + (1,1,e)^*(1,1,p) + (1,-1,e)(1,-1,e)^* + (1,-1,e)^*(1,-1,p) + (1,-1,e)(1,-1,e)^*+(1,-1,e)^*(1,-1,p)+(-1,-1,e)(-1,-1,e)^*+(-1,-1,e)^*(-1,-1,p) & K (0,1,e) + (0,1,p) + (0,-1,e) + (0,-1,p) + ... + (-1,-1,e) + (-1,-1,p) & <GOALS> 100 & @P 0 4, 1 4, 2 4, 3 4, 4 4 & @p 0 0, 1 0, 2 0, 3 0, 4 0 & #K 0 & #k 0 & \end{verbatim} \caption{The Simplified Boardgames version of Gardner.}\label{fig:gardner} \end{figure} \subsection{Semantics} The game is played between two players, \emph{black} and \emph{white}, on a rectangular board. White player is always the first to move. The board size is given by the two numbers in the \texttt{<BOARD>} section, generated from ``board'' non-terminal, which represents the \emph{width} and the \emph{height}, respectively. Subsequently the initial position is given: empty squares are represented by dots, white pieces as the uppercase letters, and black pieces as the lowercase letters. To be considered as valid, there must be exactly \emph{height} rows and \emph{width} columns. Although it may be asymmetric, the initial position is given from the perspective of the white player, i.e.\ forward means ``up'' for white, and ``down'' for black. During a single turn, a player has to make a move using one of his pieces. Making a move is done by choosing the piece, and change its position according to the specified movement rule for this piece. At any time, at most one piece can occupy a square, so finishing a move on a square containing a piece (regardless of the owner) results in removing it (capturing). Note that in the situation when the destination square is the starting one, the whole board remains unchanged. No piece addition is possible. After performing a move, the player gives control to the opponent. The movement rules of available game pieces are declared in the \texttt{<PIECES>} section and generated from the ``pieces'' non-terminal. One piece can have at most one movement rule, which consists of the letter of the piece and a regular expression. A piece without the movement rule is allowed but cannot be moved. For a given piece, the set of legal moves is the set of words described by a regular expression over an alphabet $\Sigma$ containing triplets $(\Delta x, \Delta y, \mathit{on})$, where $\Delta x$ and $\Delta y$ are relative column/row distances, and $\mathit{on} \in\{e, p, w\}$ describes the content of the destination square: $e$ indicates an empty square, $p$ a square occupied by an opponent piece, and $w$ a square occupied by an own piece. 
We assume that $\Delta x \in \{-\mathit{width}+1,\ldots,\mathit{width}-1\}$ and $\Delta y\in \{-\mathit{height}+1,\ldots,\mathit{height}-1\}$, and so $\Sigma$ is finite. While the piece's owner is defined by the case (upper or lower), its letter encodes the piece's type. Pieces of the same type have the same language of legal moves; thus, declarations are made for the white pieces only. Note, however, that a positive $\Delta y$ means forward, which is a subjective direction and differs in meaning depending on the player. Consider a piece and a word $w \in \Sigma^*$ that belongs to the language described by the regular expression in the movement rule for this piece. Let $w=a_1a_2\ldots a_k$, where each $a_i=(\Delta x_i, \Delta y_i, \mathit{on}_i)$, and suppose that the piece stands on a square $\langle x, y \rangle$. Then, $w$ describes a move of the piece, which is applicable in the current board position if and only if, for every $i$ such that $1 \le i \le k$, the content condition $\mathit{on}_i$ is fulfilled by the content of the square $\langle x+\sum_{j=1}^i \Delta x_j, y+\sum_{j=1}^i \Delta y_j \rangle$. The move $w$ changes the position of the piece from $\langle x, y\rangle$ to $\langle x+\sum_{i=1}^k \Delta x_i, y+\sum_{i=1}^k\Delta y_i \rangle$. In contrast to Bj\"{o}rnsson's definition, rules where the same square is visited more than once are allowed. Technically, we found this restriction superfluous. Note that during the computation of legal moves, the board position is not changed, so the square with relative coordinates $(0,0)$ always contains the player's moving piece. Hence, $(0, 0, \mathit{w})$ is always legal, while $(0, 0, \mathit{e})$ is always illegal. An example of how move rules work is shown in Figure~\ref{fig:chessboard}. \begin{figure}[!ht] \centering \includegraphics[scale=0.25]{examplechess.png} \caption{A chess example. Two legal moves for the queen on $d4$ are shown. The capture to $f5$ is codified by a word $(1,1,e)(1,1,p)$, while the move to $a3$ is encoded by $(-1, 0,e)(-1,0,e)(-1,0,e)$. The move to $f3$ is illegal, as in the language of the queen's moves, no move can end on a square containing one's own piece. The $d5-f6$ knight move is a direct jump codified by a
\section{Introduction}\label{Intro} Biological membranes mainly consist of a lipid bilayer into which proteins are embedded. The mass (or volume) ratio between proteins and lipids ranges between 0.25 (lung surfactant) and 4 (purple membrane of halobacteria). Typically, biomembranes display a protein-lipid ratio of approximately one. This includes the extra-membraneous parts of the proteins and surface-associated proteins. Therefore, the intra-membrane parts of the proteins represent a smaller fraction of the central part of the membrane. Thus, even in densely crowded biological membranes most of the in-plane membrane area consists of lipids. \\ Textbook models of the electrical transport properties of membranes consider the lipid bilayer as an electrical insulator. For instance, the Hodgkin-Huxley model of the nervous impulse considers the membrane as an inert capacitor and attributes currents to conductance changes occurring in ion-selective protein channels \cite{Hodgkin1952}. The activity of protein ion channels is associated with quantized current events of the order of 1-50 pA (picoamperes) at a voltage of order 100 mV, corresponding to channel conductances of about 10-500 pS (picosiemens). The typical open-dwell time of such channels is of the order of a few milliseconds.\\ Interestingly, however, synthetic lipid membranes close to phase transitions are not inert but are very permeable to small molecules \cite{Blicher2009}, ions \cite{Papahadjopoulos1973, Nagle1978b, Sabra1996, Blicher2009} and water \cite{Jansen1995}. Further, they display stepwise conductance events very similar to those of proteins, i.e., with similar conductances and lifetimes (some early publications: \cite{Yafuso1974, Antonov1980, Boheim1980, Kaufmann1983a, Kaufmann1983b, Antonov1985}). In their appearance, they are practically indistinguishable from recordings of protein-containing membranes. Although this is surprisingly little known, it constitutes a severe problem for the investigation of biological membranes and protein channels. These events will be called lipid channels throughout this manuscript. The finding of an enhanced lipid membrane permeability close to transitions is particularly important since biomembranes seem to exist in a state slightly (about 10-15$^{\circ}$) above melting transitions. The membrane compositions of fish and bacteria adapt upon changes in body temperature, pressure or solvent conditions such that this membrane state is maintained (reviewed in \cite{Heimburg2007a}). It therefore seems plausible to assume a functional purpose for preserving a particular physical state of the membrane. \\ The most common ways to measure the conductance of membranes are the black lipid membrane (BLM) technique by Montal \& Mueller \cite{Montal1972} and the patch-clamp technique pioneered by Neher \& Sakmann \cite{Neher1976}. In the Montal-Mueller technique an artificial membrane is formed over a small teflon hole (normally with a diameter of 50-250 $\mu$m). The patch-clamp technique relies on the measurement of currents through small membrane patches defined by a glass capillary with a diameter ranging from 1 to 10 $\mu$m. While the Montal-Mueller technique is suited for measuring artificial systems of defined composition, e.g., a single protein species reconstituted into a single lipid bilayer, the patch-clamp method is mainly used for recording currents through biological membranes with complex and often unknown composition.
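As a back-of-the-envelope orientation (our own arithmetic, not taken from the cited literature), the single-channel conductances quoted above follow directly from Ohm's law applied to the stated current and voltage scales:
\begin{equation*}
g=\frac{I}{U}\,, \qquad \frac{1\,\mathrm{pA}}{100\,\mathrm{mV}}=10\,\mathrm{pS}\,, \qquad \frac{50\,\mathrm{pA}}{100\,\mathrm{mV}}=500\,\mathrm{pS}\,.
\end{equation*}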
Patch pipettes have also been used for measurements on synthetic BLMs \cite{Coronado1983, Hanke1984}. In both techniques, the observed membrane area is much larger than that of a protein. The diameter of an ion channel protein, e.g. the potassium channel, is about 5 nm. This implies that the smallest patch in a patch-clamp experiment is already 40000 times larger than the cross-sectional area of the protein. The hole in a BLM experiment is about 400 million times larger than a typical protein channel area (assuming an aperture of 100 $\mu$m). However, the electrophysiological experiment in itself cannot tell along which path the ion currents flow. Thus, if one wants to attribute a particular current event to a protein, one has to assume that at least 99.998\% of the surrounding membrane (including the other membrane proteins) is inert in the electrophysiological experiment. Therefore, it can generally not be excluded on the basis of the experiment itself that during an electrophysiological experiment one also finds currents through the lipid membrane. In the Hodgkin-Huxley paper \cite{Hodgkin1952}, the possibility of leak currents is included. If these are structureless small-amplitude currents in the background, this does not represent a major problem. However, leak currents represent a significant conceptual problem for protein channel recordings if they display a signature similar to that of the protein events. \\ The purpose of this review is to demonstrate that the ion currents through lipid membranes in fact resemble those through proteins to such a degree that they become indistinguishable. Their occurrence depends on temperature, lateral tension, membrane-associated drugs such as anesthetics and neurotransmitters, pH, calcium concentration, voltage and other thermodynamic variables. Here, we review the literature on membrane permeability and lipid ion channels, and give an introduction to the thermodynamics of their creation. In this context we discuss a possible role for proteins as membrane perturbations that alter the state of the lipid membrane. \section{Macroscopic changes of permeability in transitions}\label{MacroscopicExp} \subsection{Experiments} During the 1960s and 70s, many researchers investigated the permeability of biomembranes for ions. Proteins were assumed to be the major players. However, accurate measurements of the permeability of the pure lipid membrane for ions were needed as a control. Hauser et al. \cite{Hauser1972} studied the permeability of lipid extracts from egg yolk or ox brain and found the permeability for ions to be small compared to the permeability of biological membranes. The authors concluded that the role of the lipid bilayer for permeability is negligible in the case of a biological membrane, suggesting that the predominant part of the permeability has to be attributed to proteins.\\ \begin{figure}[htb!] \begin{center} \includegraphics[width=8.5cm]{Figure1a} \parbox[c]{8cm} { \caption{\textit{Left: Permeability of DPPG vesicles for radioactive sodium, $^{22}Na^+$, close to the melting transition, adapted from \cite{Papahadjopoulos1973}. Right: Permeability of DMPC membranes for Co$^{2+}$. Courtesy O.G. Mouritsen, cf. \cite{Sabra1996}. The dotted profiles are a guide to the eye. The dotted vertical line marks the melting transition temperature, T$_m$.} \label{Figure1a}}} \end{center} \end{figure} However, it was soon shown that the permeability can be orders of magnitude larger close to the chain melting regime of lipid membranes. Papahadjopoulos et al.
\cite{Papahadjopoulos1973} were the first to demonstrate that the permeability for sodium ions (they used radiolabeled $^{22}$Na$^{+}$ ions) increased by at least a factor of 100 in the phase transitions of dipalmitoyl phosphatidylglycerol (DPPG) and dipalmitoyl phosphatidylcholine (DPPC), in agreement with the phase transitions of these lipids as measured by fluorescence changes of embedded markers. The permeability curve for DPPG membranes is shown in Fig. \ref{Figure1a} (left). The permeation time scales in this paper seem extremely long (of the order of hours), which might be related to the fact that none of the data points was recorded directly at the phase transition. The permeation profile for DPPC was found to be similar (not shown). It was demonstrated that cholesterol abolishes both the permeability maximum and the chain melting discontinuity. Along the same lines, Sabra et al. \cite{Sabra1996} found that the permeability of dimyristoyl phosphatidylcholine (DMPC) membranes for Co$^{2+}$ was drastically enhanced in the phase transition regime (Fig. \ref{Figure1a}, right). These authors also demonstrated that the insecticide lindane changes the permeability. This phenomenon will be discussed in more detail in section \ref{Variables.3}.\\ Jansen and collaborators \cite{Jansen1995} showed that membranes in their transition are much more permeable to water (Fig. \ref{Figure1b}, left). Vesicles filled with D$_2$O display a contrast with respect to an H$_2$O background, leading to enhanced light scattering in an optical experiment. Permeation of D$_2$O was monitored in a rapid-mixing stopped-flow experiment; it leads to a mixing of H$_2$O and D$_2$O and a loss of scattering contrast. The authors found that the permeability of a DMPC membrane changes strongly in the phase transition regime such that the exchange of water from a vesicle is complete after 2 ms, the fastest time that could be recorded in this experiment. The same finding was reported for other lipids such as DPPC and distearoyl phosphatidylcholine (DSPC), a series of phosphatidyl ethanolamines, phosphatidyl serines and phosphatidylglycerols, rendering this study one of
spectrum in each time slot. Stochasticity means that no one can observe future network information, as the network changes randomly over time. Asymmetry means that the currently realized information of an SU is private information and cannot be observed by other entities, such as the SR and other SUs. Nevertheless, in the hybrid spectrum market, the allocation of spectrum over the whole time period must be jointly optimized due to the time-coupling constraint on the contract user demand (e.g., each contract user requires a specific amount of spectrum over the whole period). This implies that solving the optimal spectrum allocation problem directly would require the complete network information in all time slots. As mentioned above, however, in practice the SR only has partial knowledge (e.g., the stochastic distribution) about the future network information because of the stochasticity of information. Moreover, it cannot observe all of the realized current network information, especially the private information of SUs, due to the asymmetry of information. Therefore, the key research problem becomes the following: Problem 1. How should the SR optimally allocate the idle spectrum in the given period among the contract users and spot market users to maximize the spectrum efficiency, taking into consideration the spatial spectrum reuse, information stochasticity, and information asymmetry? To solve this problem, we first derive an off-line optimal policy that maximizes the ex-ante expected spectrum efficiency based on the stochastic distribution of network information. We then design an on-line Vickrey-Clarke-Groves (VCG) auction that elicits SUs' private information realized in every time slot. Based on the elicited network information and the derived off-line policy, the VCG auction determines the real-time allocation and pricing of every spectrum. Such a solution technique (i.e., off-line policy and on-line auction) allows us to optimally allocate every spectrum in an on-line manner under stochastic and asymmetric information. We further show that, with spatial spectrum reuse, the proposed VCG auction relies on solving the maximum weight independent set (MWIS) problem, which is well known to be NP-hard. Thus, it is not suitable for on-line implementation, especially in a large-scale market. This motivates us to further study low-complexity sub-optimal solutions. To this end, we propose a heuristic approach based on an on-line VCG-like mechanism with polynomial-time complexity, and further characterize the corresponding performance loss bound analytically. Our numerical results indicate that the heuristic approach exhibits good and robust performance (e.g., it reaches at least 70% of the optimal efficiency in our simulations). In summary, we list the key results and the corresponding section numbers in Table 1. It is important to note that the main contribution of this work is not the development of new auction theory, but rather the formulation of the hybrid spectrum market and the solution techniques (including the use of auction theory) to optimize the spectrum utilization in a given hybrid market. Specifically, the main contributions of this paper are as follows: • New modeling and solution technique: We propose and study a hybrid spectrum market, which has both the reliability of a futures market and the flexibility of a spot market. Hence, it is highly desirable for QoS differentiation in secondary spectrum utilization.
To the best of our knowledge, this is the first paper to study such a hybrid spectrum market with spatial spectrum reuse. • Optimal solution under stochastic and asymmetric information: We systematically analyze the optimal spectrum allocation in an exogenous hybrid market (in a particular time period) under stochastic and asymmetric information. Our proposed solution consists of two parts: (i) an off-line allocation policy that maximizes the ex-ante expected spectrum efficiency based on the stochastic network information; and (ii) an on-line VCG auction that determines the real-time allocation of every (idle) spectrum based on the realized network information and the pre-derived policy. Such a solution technique allows us to optimally allocate every spectrum in an on-line manner. • Heuristic solution with polynomial-time complexity: We propose a heuristic approach based on an on-line VCG-like mechanism with polynomial-time complexity, and further characterize the corresponding performance loss bound analytically. This polynomial-time solution is particularly useful for achieving efficient spectrum utilization in a large-scale network. • Performance evaluation: We provide extensive numerical results to evaluate the performance of the proposed solutions. Our numerical results show that: (i) the proposed optimal allocation significantly outperforms the traditional greedy allocations, e.g., with an average increase of 20% in terms of the expected spectrum efficiency; and (ii) the proposed heuristic approach exhibits good and robust performance, e.g., reaching at least 70% of the optimal efficiency in our simulations. The rest of this paper is organized as follows. After reviewing the literature in Section id1, we describe the system model in Section id1 and present the problem formulation in Section id1. Then we derive the off-line optimal policy in Section id1 and design the on-line VCG mechanisms in Section id1. In Section id1, we analyze the performance loss in the low-complexity heuristic solution. In Section id1, we provide the detailed simulation results. We finally conclude in Section id1. A major motivation of this work is to establish economic incentives and improve spectrum utilization efficiency in dynamic spectrum access (DSA) and cognitive radio networks (CRNs). There are several comprehensive surveys on the technical aspects of DSA and CRNs (see Haykin (2005), Akyildiz et al. (2006), Buddhikot (2007), Zhao & Sadler (2007)). Kasbekar et al. (2010) and Muthusamy et al. (2011) considered secondary spectrum trading in a hybrid market. In their settings, primary sellers offer two types of contracts: the guaranteed-bandwidth contract and the opportunistic-access contract. The main difference between these two prior papers and our paper lies in the formulation of the guaranteed contract. Specifically, in Kasbekar et al. (2010) and Muthusamy et al. (2011), the guaranteed-bandwidth contract provides guaranteed access to a certain amount of bandwidth at every time slot. In our model, the guaranteed-delivery contract provides guaranteed access to a total amount of bandwidth in one time period; nevertheless, the bandwidth delivered in each time slot can be different, depending on the PUs' own demand. The main advantage of our approach is its flexibility in shifting secondary demand across time slots (to comply with the PUs' random demand).
That is, it enables opportunistic delivery of a small (or large) bandwidth to SUs in those time slots in which the PUs' own demand is high (or low). Additionally, our model is more practically relevant to a wide range of applications that do not require fixed data delivery per time slot, but demand a guaranteed average data rate over each time period. Furthermore, the underlying market models are also different. Kasbekar et al. (2010) and Muthusamy et al. (2011) assumed that the demand (supply) markets have infinite liquidity. That is, any bandwidth amount supplied by the seller can be sold out (and any bandwidth amount demanded by the buyer can be bought from the market) at an "outside fixed price". In this sense, their market models are closely related to the ideal competitive market. We assume that the market price is endogenously determined by the associated seller and buyer (through, for example, a VCG mechanism). Thus, we essentially consider a monopoly market. Abhishek et al. (2012) also considered a hybrid market, in which a cloud service provider sells its service to users via two different pricing schemes: pay-as-you-go (PAYG) and spot pricing. Under PAYG, users are charged a fixed price per unit time. Under spot pricing, users compete for services via an auction. They focused on the optimal market formation, that is, the service provider selects different PAYG prices such that different users will choose different pricing schemes or market types; consequently, different hybrid markets will be formed. In our work, we focus on the optimal spectrum allocation in a given hybrid market. In other words,
self.China_sz_a_60 = tmp_data_60 self.China_sz_a_240 = tmp_data_240 elif DataContext.iscountryUS(): if indicator == "NASDAQ": self.symbols_l_nasdaq = tmp_symbol_exchange_l self.symbols_exchange_l_nasdaq = tmp_symbol_l self.US_nasdaq_15 = tmp_data self.US_nasdaq_30 = tmp_data_30 elif indicator == "NYSE": self.symbols_l_nyse = tmp_symbol_exchange_l self.symbols_exchange_l_nyse = tmp_symbol_l self.US_nyse_15 = tmp_data self.US_nyse_30 = tmp_data_30 elif indicator == "AMEX": self.symbols_l_amex = tmp_symbol_exchange_l self.symbols_exchange_l_amex = tmp_symbol_l self.US_amex_15 = tmp_data self.US_amex_30 = tmp_data_30 logger.debug("--- It is done with preparation of data --- " + indicator) @time_measure def csqsnapshot_t(codes, indicators, options=""): return c.csqsnapshot(codes, indicators, options) connections = threading.local() stock_group = {"科创板": 'tech_startup', "中小企业板": 'small', "创业板": 'startup', "主板A股": 'sh_a', "主板": 'sz_a', "NASDAQ": 'nasdaq', "NYSE": 'nyse', "AMEX": 'amex'} columns = ['gid', 'open', 'close', 'high', 'low', 'volume', 'time', 'isGreater'] root_path = r'/Users/shicaidonghua/Documents/stocks/quant_akshare/' symbol_paths = {'small': root_path + 'small_symbols.csv', 'startup': root_path + 'startup_symbols.csv', 'tech_startup': root_path + 'tech_startup_symbols.csv', 'sh_a': root_path + 'sh_a_symbols.csv', 'sz_a': root_path + 'sz_a_symbols.csv', 'nasdaq': root_path + 'nasdaq_symbols.csv', 'nyse': root_path + 'nyse_symbols.csv', 'amex': root_path + 'amex_symbols.csv'} time_windows_15 = [0 for i in range(100)] # set 100 so as to test after market time_windows_30 = [0 for i in range(100)] # set 100 so as to test after market time_windows_60 = [0 for i in range(100)] # set 100 so as to test after market sectors_CN = {'000001': "优选股关注", '007180': "券商概念", '007224': "大飞机", '007315': "半导体", '007205': "国产芯片", '007039': "生物疫苗", '007001': "军工", '007139': "医疗器械", '007146': "病毒防治", '007147': "独家药品", '007162': "基因测序", '007167': "免疫治疗", '007188': "健康中国", '007195': "人工智能", '007200': "区块链", '007206': "新能源车", '007212': "生物识别", '007218': "精准医疗", '007220': "军民融合", '007243': "互联医疗", '007246': "体外诊断", '007284': "数字货币", '007332': "长寿药", '007336': "疫苗冷链", '007339': "肝素概念", '014010018003': "生物医药", '004012003001': "太阳能", '015011003003': "光伏", '007371': "低碳冶金", '018001001002001': "新能源设备与服务", '007068': "太阳能", '007005': "节能环保", '007152': "燃料电池", '007307': "HIT电池", '007370': "光伏建筑一体化", '007369': "碳化硅", '007003': "煤化工", '007004': "新能源", '007007': "AB股", '007008': "AH股", '007009': "HS300_", '007010': "次新股", '007013': "中字头", '007014': "创投", '007017': "网络游戏", '007019': "ST股", '007020': "化工原料", '007022': "参股券商", '007024': "稀缺资源", '007025': "社保重仓", '007028': "新材料", '007029': "参股期货", '007030': "参股银行", '007032': "转债标的", '007033': "成渝特区", '007034': "QFII重仓", '007035': "基金重仓", '007038': "黄金概念", '007040': "深圳特区", '007043': "机构重仓", '007045': "物联网", '007046': "移动支付", '007048': "油价相关", '007049': "滨海新区", '007050':"股权激励", '007051': "深成500", '007053': "预亏预减", '007054': "预盈预增", '007057': "锂电池", '007058': "核能核电", '007059': "稀土永磁", '007060': "云计算", '007061': "LED", '007062': "智能电网", '007072': "铁路基建", '007074': "长江三角", '007075': "风能", '007076': "融资融券", '007077': "水利建设", '007079': "新三板", '007080': "海工装备", '007082': "页岩气", '007083': "参股保险", '007085': "油气设服", '007089': "央视50_", '007090': "上证50_", '007091': "上证180_", '007093': "食品安全", '007094': "中药", '007096': "石墨烯", '007098': "3D打印", '007099': "地热能", '007100': "海洋经济", '007102': "通用航空", '007104': "智慧城市", '007105': "北斗导航", '007108': "土地流转", '007109': "送转预期", '007110': "大数据", 
'007111': "中超概念", '007112': "B股", '007113': "互联金融", '007114': "创业成份", '007116': "智能机器", '007117': "智能穿戴", '007118': "手游概念", '007119': "上海自贸", '007120': "特斯拉", '007122': "养老概念", '007124': "网络安全", '007125': "智能电视", '007131': "在线教育", '007133': "二胎概念", '007137': "电商概念", '007136': "苹果概念", '007138': "国家安防", '007140': "生态农业", '007142': "彩票概念", '007143': "沪企改革", '007145': "蓝宝石", '007148': "粤港自贸", '007149': "超导概念", '007150': "智能家居", '007153': "国企改革", '007154': "京津冀", '007155': "举牌", '007159': "阿里概念", '007160': "氟化工", '007161': "在线旅游", '007164': "小金属", '007165': "国产软件", '007166': "IPO受益", '007168': "全息技术", '007169': "充电桩", '007170': "中证500", '007172': "超级电容", '007173': "无人机", '007174': "上证380", '007175': "人脑工程", '007176': "沪股通", '007177': "体育产业", '007178': "赛马概念", '007179': "量子通信", '007181': "一带一路", '007182': "2025规划", '007183': "5G概念", '007184': "航母概念", '007186': "北京冬奥", '007187': "证金持股", '007190': "PPP模式", '007191': "虚拟现实", '007192': "高送转", '007193': "海绵城市", '007196': "增强现实", '007197': "无人驾驶", '007198': "工业4.0", '007199': "壳资源", '007201': "OLED", '007202': "单抗概念", '007203': "3D玻璃", '007204': "猪肉概念", '007207': "车联网", '007209': "网红直播", '007210': "草甘膦", '007211': "无线充电", '007213': "债转股", '007214': "快递概念", '007215': "股权转让", '007216': "深股通", '007217': "钛白粉", '007219': "共享经济", '007221': "超级品牌", '007222': "贬值受益", '007223': "雄安新区", '007225': "昨日涨停", '007226': "昨日连板", '007227': "昨日触板", '007228': "可燃冰", '007230': "MSCI中国", '007231': "创业板综", '007232': "深证100R", '007233': "租售同权", '007234': "养老金", '007236': "新零售", '007237': "万达概念", '007238': "工业互联", '007239': "小米概念", '007240': "乡村振兴", '007241': "独角兽", '007244': "东北振兴", '007245': "知识产权", '007247': "富士康", '007248': "天然气", '007249': "百度概念", '007251': "影视概念", '007253': "京东金融", '007254': "进口博览", '007255': "纾困概念", '007256': "冷链物流", '007257': "电子竞技", '007258': "华为概念", '007259': "纳米银", '007260': "工业大麻", '007263': "超清视频", '007264': "边缘计算", '007265': "数字孪生", '007266': "超级真菌", '007268': "氢能源", '007269': "电子烟", '007270': "人造肉", '007271': "富时罗素", '007272': "GDR", '007275': "青蒿素", '007276': "垃圾分类", '007278': "ETC", '007280': "PCB", '007281': "分拆预期", '007282': "标准普尔", '007283': "UWB概念", '007285': "光刻胶", '007286': "VPN", '007287': "智慧政务", '007288': "鸡肉概念", '007289': "农业种植", '007290': "医疗美容", '007291': "MLCC", '007292': "乳业", '007293': "无线耳机", '007294': "阿兹海默", '007295': "维生素", '007296': "白酒", '007297': "IPv6", '007298': "胎压监测", '007299': "CRO", '007300': "3D摄像头", '007301': "MiniLED", '007302': "云游戏", '007303': "广电", '007304': "传感器", '007305': "流感", '007306': "转基因", '007308': "降解塑料", '007309': "口罩", '007310': "远程办公", '007311': "消毒剂", '007312': "医废处理", '007313': "WiFi", '007314': "氮化镓", '007316': "特高压", '007317': "RCS概念", '007318': "天基互联", '007319': "数据中心", '007320': "字节概念", '007321': "地摊经济", '007322': "三板精选", '007323': "湖北自贸", '007324': "免税概念", '007325': "抖音小店", '007326': "地塞米松", '007328': "尾气治理", '007329': "退税商店", '007330': "蝗虫防治", '007331': "中芯概念", '007333': "蚂蚁概念", '007334': "代糖概念", '007335': "辅助生殖", '007337': "商汤概念", '007338': "汽车拆解", '007340': "装配建筑", '007341': "EDA概念", '007342': "屏下摄像", '007343': "MicroLED", '007344': "氦气概念", '007345': "刀片电池", '007346': "第三代半导体", '007347': "鸿蒙概念", '007348': "盲盒经济", '007349': "C2M概念", '007350': "eSIM", '007351': "拼多多概念", '007352': "虚拟电厂", '007353': "数字阅读", '007354': "有机硅", '007355': "RCEP概念", '007356': "航天概念", '007357': "6G概念", '007358': "社区团购", '007359': "碳交易", '007360': "水产养殖", '007361': "固态电池", '007362': "汽车芯片", '007363': "注册制次新股", '007364': "快手概念", '007365': "注射器概念", '007366': "化妆品概念", '007367': "磁悬浮概念", '007368': "被动元件", '007372': 
"工业气体", # There is unavailable value of gain/loss and money flow for the below sectors '007373': "电子车牌", '007374': "核污染防治", '007375': "华为汽车", '007376': "换电概念", '007377': "CAR - T细胞疗法", '073259': "碳交易"} sectors_US = {'000001': "优选股关注", '201001': "中概股"} # param: echo=True that is used to show each sql statement used in query engine = create_engine("postgresql+psycopg2://Raymond:123123@localhost:5432/Raymond", encoding='utf-8') class DataSource(enum.Enum): EAST_MONEY = 0 AK_SHARE = 1 YAHOO = 2 SNAPSHOT = 3 EFINANCE = 4 def getdbconn(): if 'connection' not in connections.__dict__: connections.connection = psycopg2.connect( user="Raymond", password="123123", host="127.0.0.1", port="5432", database="Raymond") logger.info('Connect to Raymond\'s database - {}\n current connection is {}\n thread ident is {} and native thread id is {}\n'. format(connections.connection.get_dsn_parameters(), connections.connection, threading.get_ident(), threading.get_native_id())) return connections.connection def createtable(symbols: list, exchange: str, period: int): conn = getdbconn() csr = getdbconn().cursor() stock_name_array = map(str, symbols) symbols_t = ','.join(stock_name_array) stock_symbols = '{' + symbols_t + '}' logger.debug('%s - %s' % (exchange, stock_symbols)) statement_sql = "" create_table = "" if DataContext.iscountryChina(): if period == 15: create_table = "create_table_c" elif period == 30: create_table = "create_table_c_30" elif period == 60: create_table = "create_table_c_60" elif period == 240: create_table = "create_table_c_240" elif DataContext.iscountryUS(): if period == 15: create_table = "create_table_u" elif period == 30: create_table = "create_table_u_30" if create_table != "": statement_sql = "call " + create_table + "(%s,%s);" csr.execute(statement_sql, (stock_symbols, exchange)) conn.commit() def droptable(symbols: list, exchange: str): conn = getdbconn() csr = getdbconn().cursor() stock_name_df_array = map(str, symbols) symbols_t = ','.join(stock_name_df_array) stock_symbols = '{' + symbols_t + '}' logger.debug('%s - %s' % (exchange, stock_symbols)) csr.execute("call drop_table_c(%s,%s);", (stock_symbols, exchange)) conn.commit() update_stat = " do update set open=excluded.open,close=excluded.close,high=excluded.high,low=excluded.low,volume=excluded.volume;" do_nothing = " do nothing;" def inserttab(exchange: str, symbol: str, stock_df: pd.DataFrame, datasource: DataSource, period=15, transientdf: pd.DataFrame=None, type_func=1): conn = getdbconn() csr = getdbconn().cursor() if DataContext.iscountryChina(): if datasource == DataSource.AK_SHARE: stock_day = stock_df['day'].tolist() header_o = 'open' header_c = 'close' header_h = 'high' header_l = 'low' header_v = 'volume' elif datasource == DataSource.SNAPSHOT: stock_day = stock_df.index.tolist() header_o = 'open' header_c = 'close' header_h = 'high' header_l = 'low' header_v = 'volume' elif datasource == DataSource.EAST_MONEY: header_o = 'OPEN' header_c = 'CLOSE' header_h = 'HIGH' header_l = 'LOW' header_v = 'VOLUME' if type_func == 1: stock_day = stock_df.index.tolist() elif type_func == 2: header_d = 'DATES' elif datasource == DataSource.EFINANCE: stock_day = stock_df['日期'].tolist() header_o = '开盘' header_c = '收盘' header_h = '最高' header_l = '最低' header_v = '成交量' statement_start = "insert into china_" elif DataContext.iscountryUS(): if datasource == DataSource.YAHOO: stock_day = stock_df.index.tolist() header_o = 'Open' header_c = 'Close' header_h = 'High' header_l = 'Low' header_v = 'Volume' statement_start = "insert into us_" 
    stock_open = stock_df[header_o]
    stock_close = stock_df[header_c]
    stock_high = stock_df[header_h]
    stock_low = stock_df[header_l]
    stock_volume = list(map(int, stock_df[header_v].tolist()))
    if period == 15:
        # Upsert one row per bar; US prices are formatted to four decimals.
        count: int = 0
        for each_time in stock_day:
            if DataContext.iscountryUS():
                csr.execute(statement_start + exchange + "_tbl (gid,crt_time,open,close,high,low,volume) " +
                            "values (%s,%s,%s,%s,%s,%s,%s) on conflict on constraint time_key_" + exchange + update_stat,
                            (str(symbol), str(each_time),
                             "{:.4f}".format(stock_open[count]), "{:.4f}".format(stock_close[count]),
                             "{:.4f}".format(stock_high[count]), "{:.4f}".format(stock_low[count]),
                             str(stock_volume[count])))
            elif DataContext.iscountryChina():
                csr.execute(statement_start + exchange + "_tbl (gid,crt_time,open,close,high,low,volume) " +
                            "values (%s,%s,%s,%s,%s,%s,%s) on conflict on constraint time_key_" + exchange + update_stat,
                            (str(symbol), str(each_time),
                             str(stock_open[count]), str(stock_close[count]),
                             str(stock_high[count]), str(stock_low[count]),
                             str(stock_volume[count])))
            count += 1
        conn.commit()
        logger.debug("%s - rows are %d for period 15 mins" %
from the considerations above on the coupling factor $w(\lambda)$ of the gauge fields in the DBI action at intermediate ($\lambda = \morder{1}$) and large ($\lambda \to\infty$) values of the coupling is given in~\eqref{wconstraint}. The choice of this potential, however, does have a strong effect on the qualitative behavior of the spectra for low excitation numbers. It is expected to strongly affect the masses of the vector and axial mesons, which are identified with the fluctuation modes of the bulk gauge fields. In particular, the lightest meson is usually either a vector or a scalar, depending on the choice of $w(\lambda)$. Such properties depend on the behavior of the potentials at $\lambda = \mathcal{O}(1)$ and $r= \mathcal{O}(1)$, where the solutions are not analytically tractable, and therefore need to be analyzed numerically. The natural expectation, which can be confirmed by numerics, is that when $w(\lambda)$ and $\kappa(\lambda)$ have a qualitatively similar $\lambda$-dependence, the spectra of the vector and the scalar mesons look qualitatively similar. In practice this means that we will choose the string-motivated value for the power laws of both couplings $\kappa(\lambda)$ and $w(\lambda)$, i.e., $\kappa_p=4/3=w_p$. To conclude, the only choice of potentials that results in exactly linear trajectories is given by~\eqref{finalchoice}. Further, the meson trajectories have the same slopes if $w_p<\kappa_p=4/3$, or if we have the critical power law $w_p=\kappa_p=4/3$ and, in addition, $w_\ell<\kappa_\ell=-1/2$. \subsection{Examples of potentials} \label{sec:potentials} In~\cite{jk} and in~\cite{alho} we discussed two classes of potentials $V_g$, $V_{f0}$, $\kappa$, and $a$, which we called potentials I and potentials II. They can be defined as follows. \begin{itemize} \item \textbf{Both Potentials I \& II.} \begin{align} \label{potIandIIcommon} V_{g}(\lambda) & = V_0\left[1+V_1 \lambda + V_2 \lambda^2 \frac{\sqrt{1+\log(1+\frac{\lambda}{\lambda_0})}}{\left(1+\frac{\lambda}{\lambda_0}\right)^{2/3}}\right]\,, \nonumber\\ V_{f0}(\lambda) & = W_0\left[1+W_1 \lambda + W_2 \lambda^2\right]\,. \end{align} \item \textbf{Potentials I.} \begin{equation} \label{potIdefs} a(\lambda) = a_0\,,\qquad \kappa(\lambda) = \frac{1}{\left(1+\frac{3a_1}{4}\lambda\right)^{4/3}}\,. \end{equation} \item \textbf{Potentials II.} \begin{equation} \label{potIIdefs} a(\lambda) = a_0\,\frac{1+a_1 \lambda + \frac{\lambda^2}{\lambda_0^2}}{\left(1+\frac{\lambda}{\lambda_0}\right)^{4/3}}\,, \qquad \kappa(\lambda) = \frac{1}{\left(1+\frac{\lambda}{\lambda_0}\right)^{4/3}}\,. \end{equation} \end{itemize} Here the coefficients are fixed by matching to perturbative QCD as discussed above, except for $W_0$, which remains a free parameter. We also set $\ell(x=0)=1$, and choose the parameter $\lambda_0$, which only affects the higher-order coefficients of the UV expansions, such that the higher-order coefficients have approximately the same relative size as with standard scheme choices in perturbative QCD. Explicitly, the coefficients satisfy \begin{align} V_0 &= 12\, , \qquad V_1 = \frac{11}{27 \pi^2}\,,\qquad V_2= \frac{4619}{46656 \pi ^4}\, ; \nonumber \\ W_1 &= \frac{24+(11-2 x) W_0}{27 \pi ^2 W_0}\,,\qquad W_2 = \frac{24 (857-46 x)+\left(4619-1714 x+92 x^2\right) W_0}{46656 \pi ^4 W_0}\,; \nonumber\\ a_0 &= \frac{12-x W_0}{8} \, , \qquad a_1 = \frac{115-16 x}{216 \pi ^2} \, , \qquad \lambda_0 = 8 \pi^2 \,.
\end{align} As we discussed above, we consider two qualitatively different choices for $W_0$: either constant $W_0$, which satisfies \begin{equation} \label{W0range} 0<W_0<24/11\,, \end{equation} or $W_0$ fixed such that the pressure agrees with the Stefan-Boltzmann (SB) result at high temperatures \cite{alho} (without the need to include $x$ dependence in the normalization of the action). The latter option gives (when $\ell(x=0)=1$) \begin{equation} W_0 = \frac{12}{x}\left[1-\frac{1}{(1+\frac{7}{4}x)^{2/3}}\right]\qquad \textrm{(Stefan-Boltzmann)}\,, \end{equation} so that the AdS radius is \begin{equation} \ell(x) = \sqrt[3]{1+\frac{7}{4}x}\,. \end{equation} The finite temperature phase diagram is of the exceptional type of Fig.~\ref{fig:finiteTpd} (right) for potentials I if $W_0$ is large or SB normalized,~\cite{alho}, so that there is a chirally symmetric phase at small $x$, as discussed above. An acceptable value of $W_0$ for potentials~I is therefore, e.g., $W_0=3/11$, which is relatively close to the lower limit of the range~\eqref{W0range}. For potentials II all choices produce the standard phase diagram of Fig.~\ref{fig:finiteTpd} (left). Based on the earlier discussion in this section we notice that \begin{itemize} \item \textbf{Potentials I} were chosen such that the power behavior of $\kappa(\lambda)$ and $a(\lambda)$ is the critical one, $\kappa_p=4/3$ and $a_p=0$. These potentials admit a regular IR solution with exponential tachyon, $\tau \sim \tau_0 e^{C r}$, where $C$ can be computed in terms of the potentials and $\tau_0$ is an integration constant (see Appendix~\ref{app:tachyonIR} for details). The asymptotic trajectories of masses in all towers are linear but have logarithmic corrections. \item \textbf{Potentials II} have instead $\kappa_p=4/3$ and $a_p=2/3$. These potentials admit a regular IR solution with $\tau \sim \sqrt{C r +\tau_0}$, and the asymptotic trajectories of masses in all towers are quadratic. \end{itemize} We also see that potentials I can be quite easily modified so that the asymptotic trajectories are exactly linear and that logarithmic corrections are absent: we need to add a critical logarithmic correction to $\kappa(\lambda)$ such that $\kappa_\ell = -1/2$ and we are sitting at the red circle of Fig.~\ref{fig:IRmaps} (right). An explicit choice is \begin{equation} \label{logmodk} \kappa(\lambda) = \frac{1}{\left(1+\frac{3a_1}{4}\lambda\right)^{4/3}}\sqrt{1+\frac{1}{D}\log\left[1+\left(\frac{\lambda} {\lambda_0}\right)^2\right]} \,. \end{equation} There is however the following observation. For these potentials we find that the regular solution has the tachyon IR asymptotics \begin{equation} \tau(r) \sim \tau_0 r^C\,, \end{equation} where the coefficient \begin{equation} C = \frac{27\times 3^{1/3} \sqrt{D}\, (115-16 x)^{4/3} \left(12-x\, W_0\right)}{295616\times 2^{1/6}} \end{equation} must be larger than one. For this to happen for all reasonable $W_0$ and for all values of $x$ up to $x_c$ we need a relatively large $D$, e.g., $D=200$. This means that the logarithmic modification in~\eqref{logmodk} sets in only at very high values of $\lambda$, i.e., only close to the IR singularity. Therefore the logarithmic correction term is expected to only cause minor changes to observables such as the finite temperature phase diagram and low lying masses. Finally we also need to choose the function $w(\lambda)$. As argued above, it should not vanish too fast in the IR, and should have qualitatively similar $\lambda$ dependence as $\kappa(\lambda)$. 
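As a quick numerical illustration of the constraint $C>1$ discussed above (a sketch of ours; the values of $D$ and $W_0$ and the scan range of $x$ are illustrative choices, not fixed by the text), one can tabulate the exponent directly:
\begin{verbatim}
# Sketch only: tabulate the IR tachyon exponent C for potentials I with the
# logarithmically modified kappa(lambda); D, W0 and the x grid are illustrative.
import numpy as np

def tachyon_exponent_C(x, W0, D):
    """C from the formula quoted above; the regular IR solution requires C > 1."""
    return (27 * 3**(1 / 3) * np.sqrt(D) * (115 - 16 * x)**(4 / 3) * (12 - x * W0)
            / (295616 * 2**(1 / 6)))

D, W0 = 200.0, 3 / 11     # D = 200 as suggested in the text; W0 = 3/11 as for potentials I
for x in np.linspace(0.0, 4.0, 9):
    print(f"x = {x:4.2f}:  C = {tachyon_exponent_C(x, W0, D):6.3f}")
\end{verbatim}
Since $C\propto(12-x\,W_0)\sqrt{D}$, larger values of $W_0$ (e.g., the SB-normalized choice) lower $C$ while larger $D$ raises it, consistent with the need for a relatively large $D$ noted above.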
An intriguing choice would be $w(\lambda) = w_0 \kappa(\lambda)$, where $w_0 = \ell^2 \sqrt{2}/\sqrt{3}$ due to~\eqref{kappawrelation} as we have chosen $\kappa_0=1$. However, as detailed in Appendix~\ref{app:Regge}, such a choice of $w(\lambda)$ is the critical one, which would often make the slopes of, e.g., the asymptotic vector and axial vector trajectories different. If we want the slopes to be the same, $w(\lambda)$ should vanish slightly slower in the IR than $\kappa(\lambda)$. Therefore a reasonable choice, which would work with potentials I and also together with $\kappa(\lambda)$ of~\eqref{logmodk}, would be \begin{equation} \label{logmodw} w(\lambda) =\frac{w_0}{\left(1+\frac{3a_1}{4}\lambda\right)^{4/3}}\left\{1+\frac{1}{D}\log\left[1+\left(\frac{\lambda}{\lambda_0}\right)^2\right]\right\}\,. \end{equation} We have carried out the numerical analysis of the next two sections using the following choices: \begin{itemize} \item \textbf{Potentials I with $W_0=3/11$ and $w(\lambda)=\kappa(\lambda)$.} The motivation for this choice is to mimic (at a qualitative level, without fitting any of the numerical results to QCD data) the physics of real QCD in the Veneziano limit. We have checked that the finite temperature phase diagram has the standard structure of Fig.~\ref{fig:finiteTpd} (left) when $W_0=3/11$. Notice that we did not implement the logarithmic corrections of~\eqref{logmodk} and~\eqref{logmodw}, but as we have argued, these factors only slightly affect the numerical results. \item \textbf{Potentials II with SB normalized $W_0$ and $w(\lambda)=1$.} This choice might not model QCD as well as the first one, but the motivation is merely to pick a background with a different IR structure in order to see how model-dependent our results are. \end{itemize} \section{Spectra: numerical results} \label{sec:results} Here we present the results of the numerical solution of the fluctuation equations for the two different classes of potentials (I and II) specified above in Sec.~\ref{sec:potentials}. We compute the spectrum of all excitation modes as a function of $x$ and for zero quark mass. To find the mass spectrum one has to require normalizability of the wave functions of the fluctuation modes both in the IR and in the UV. Then, the numerical integration of the fluctuation equations leads to discrete towers of masses corresponding to physical states. In practice, the computation proceeds as follows. We choose a set of potentials (I or II) and a value of $x$ below the critical one, $x_c$. The dominant background, which has a nontrivial tachyon profile, is then constructed~\cite{jk} by shooting from the IR and matching to the IR expansions of the various fields given in Appendix~\ref{app:bgUVIR}. The coefficients of the fluctuation equations, which are discussed in Sec.~\ref{quadfluct} and in Appendix~\ref{app:quadfluctdet}, are then evaluated on the background. After this, the fluctuation equations are solved by shooting from the IR and
that "while it is disappointing that we could not confirm the cometary activity with telescopic observations it is consistent with the fact that ʻOumuamua's acceleration is very small and must therefore be due to the ejection of just a small amount of gas and dust." The ISSI team considered a number of mechanisms by which ʻOumuamua could have escaped from its home system. For example, the object could have been ejected by a gas giant planet orbiting another star. According to this theory, Jupiter created our own solar system's Oort cloud, a population of small objects only loosely gravitationally bound to our Sun in a gigantic shell extending to about a third of the distance to the nearest star. Some of the objects in our Oort cloud eventually make it back into our solar system as long period comets while others may have slipped past the influence of the Sun's gravity to become interstellar travelers themselves. The research team expects that ʻOumuamua will be the first of many interstellar visitors discovered passing through our solar system and they are collectively looking forward to data from the Large Synoptic Survey Telescope (LSST) which is scheduled to be operational in 2022. The LSST, located in Chile, may detect one interstellar object every year and allow astronomers to study the properties of objects from many other solar systems. While the ISSI team hopes that LSST will detect more interstellar objects they think it is unlikely that astronomers will ever detect an alien spacecraft passing through our solar system and they are convinced that ʻOumuamua was a unique and extremely interesting but completely natural object. The research paper, "The Natural History of ʻOumuamua," the ʻOumuamua ISSI Team (Michele Bannister, Asmita Bhandare, Piotr Dybczyński, Alan Fitzsimmons, Aurélie Guilbert-Lepoutre, Robert Jedicke, Matthew Knight, Karen Meech, Andrew McNeill, Susanne Pfalzner, Sean Raymond, Colin Snodgrass, David Trilling and Quanzhi Ye), was published in the journal Nature Astronomy on July 1, 2019. This work was supported by the UK Science and Technology Facilities Council (Award Nos. ST/P0003094/1 and ST/L004569/1), the National Science Foundation (Award Nos. AST1617015 and 1545949), NASA (Award Nos. GO/DD-15405, GO/DD-15447, NAS 5-26555, NNX17AK15G and 80NSSC18K0829), the National Science Centre in Poland (Award No. 2015/17/B/ST9/01790) and the European Research Council (Award No. 802699). The content of this article does not necessarily reflect the views of these organizations. More information about The PanSTARRS project can be found at the PanSTARRS project website http://panstarrs.ifa.hawaii.edu The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) is a wide-field survey observatory operated by the University of Hawaiʻi Institute for Astronomy. The Minor Planet Center is hosted by the Harvard-Smithsonian Center for Astrophysics and is a sub-node of the Planetary Data System Small Bodies Node at the University of Maryland (http://www.minorplanetcenter.net ). JPL hosts the Center for Near-Earth Object Studies (CNEOS). All are projects of NASA's Near-Earth Object Observations Program, and elements of the agency's Planetary Defense Coordination Office within NASA's Science Mission Directorate. Founded in 1967, the Institute for Astronomy at the University of Hawaii at Manoa conducts research into galaxies, cosmology, stars, planets, and the sun. 
Its faculty and staff are also involved in astronomy education, deep space missions, and the development and management of the observatories on Haleakalā and Maunakea. The Institute operates facilities on the islands of Oahu, Maui, and Hawaii. Oort was born in Franeker, a small town in the Dutch province of Friesland, on April 28, 1900. He was the second son of Abraham Hermanus Oort, [9] a physician, who died on May 12, 1941, and Ruth Hannah Faber, who was the daughter of Jan Faber and Henrietta Sophia Susanna Schaaii, and who died on November 20, 1957. Both of his parents came from families of clergymen; his paternal grandfather was a Protestant clergyman with liberal ideas who "was one of the founders of the more liberal Church in Holland" [10] and who "was one of the three people who made a new translation of the Bible into Dutch." [10] The reference is to Henricus Oort (1836–1927), who was the grandson of a famous Rotterdam preacher and, through his mother, Dina Maria Blom, the grandson of theologian Abraham Hermanus Blom, a "pioneer of modern biblical research". [10] Several of Oort's uncles were pastors, as was his maternal grandfather. "My mother kept up her interests in that, at least in the early years of her marriage", he recalled. "But my father was less interested in Church matters." [10] In 1903 Oort's parents moved to Oegstgeest, near Leiden, where his father took charge of the Endegeest Psychiatric Clinic. [3] Oort's father "was a medical director in a sanitorium for nervous illnesses. We lived in the director's house of the sanitorium, in a small forest which was very nice for the children, of course, to grow up in." Oort's younger brother, John, became a professor of plant diseases at the University of Wageningen. In addition to John, Oort had two younger sisters and an elder brother who died of diabetes when he was a student. [3] Oort attended primary school in Oegstgeest and secondary school in Leiden, and in 1917 went to Groningen University to study physics. He later said that he had become interested in science and astronomy during his high-school years, and conjectured that his interest was stimulated by reading Jules Verne. [3] His one hesitation about studying pure science was the concern that it "might alienate one a bit from people in general", as a result of which "one might not develop the human factor sufficiently." But he overcame this concern and ended up discovering that his later academic positions, which involved considerable administrative responsibilities, afforded a good deal of opportunity for social contact. Oort chose Groningen partly because a well-known astronomer, Jacobus Cornelius Kapteyn, was teaching there, although Oort was unsure whether he wanted to specialize in physics or astronomy. After studying with Kapteyn, Oort decided on astronomy. "It was the personality of Professor Kapteyn which decided me entirely", he later recalled. "He was quite an inspiring teacher and especially his elementary astronomy lectures were fascinating." [10] Oort began working on research with Kapteyn early in his third year. According to Oort, one professor at Groningen who had considerable influence on his education was physicist Frits Zernike. After taking his final exam in 1921, Oort was appointed assistant at Groningen, but in September 1922, he went to the United States to do graduate work at Yale and to serve as an assistant to Frank Schlesinger of the Yale Observatory.
[4] At Yale, Oort was responsible for making observations with the Observatory's zenith telescope. "I worked on the problem of latitude variation", he later recalled, "which is quite far away from the subjects I had so far been studying." He later considered his experience at Yale useful, as he became interested in "problems of fundamental astronomy that [he] felt was capitalized on later, and which certainly influenced [his] future lectures in Leiden." Personally, he "felt somewhat lonesome in Yale", but also said that "some of my very best friends were made in these years in New Haven." [10] ### Early discoveries In 1924, Oort returned to the Netherlands to work at Leiden University, where he served as a research assistant, becoming Conservator in 1926, Lecturer in 1930, and Professor Extraordinary in 1935. [4] In 1926, he received his doctorate from Groningen with a thesis on the properties of high-velocity stars. The next year, Swedish astronomer Bertil Lindblad proposed that the rate of rotation of stars in the outer part of the galaxy decreased with distance from the galactic core, and Oort, who later said that he believed it was his colleague Willem de Sitter who had first drawn his attention to Lindblad's work, realized that Lindblad was correct and
\section{Introduction} An inverse atmospheric dispersion problem is stated as follows: given the topography, the meteorological conditions and a set of detector readings, determine where and when the hazardous substance was released and in which quantities. The problem is known as a source reconstruction problem; it is easy to state but harder to solve due to being, like most inverse problems, ill-posed \cite{Tikhonov1963},\cite{Enting2002}. Spurred by various applications, including the locating of industrial plants \cite{Marchuk1986}, determining the amount of radioactive nuclides released from Chernobyl \cite{GHL} and Fukushima \cite{StohlEtAl}, pin-pointing nuclear tests \cite{RingbomEtAl}, and estimating the material released from volcanoes \cite{TheysEtAl},\cite{StohlEtAl2010},\cite{GrahnEtAl}, a number of different methods for addressing the inverse problem have been suggested. Even though the main difference perhaps lies in the interpretation of the results, these methods are usually divided into two main categories: the probabilistic approach with its Bayesian methods and the deterministic approach with its optimisation methods. In the Bayesian setting a likelihood function is calculated and weighted with any a priori information that one has at hand to yield a posterior probability density function, which is then sampled to yield an estimate of the sought source term (see e.g. \cite{Stuart2010} for an introduction to general Bayesian inverse problems). In the deterministic setting with optimisation methods, a norm is devised under which the sensor response of candidate sources is compared with the given sensor readings. The candidate source that best fits the given sensor readings (minimizes the distance under the chosen norm) is then deemed the solution to the inverse problem. For linear inverse atmospheric dispersion problems these methods come in many different flavours and have often been devised with a given application in mind: usually there are a priori imposed restrictions on the source characteristics, e.g. the method may assume that the source is well localised (located at a single point in space) and that the release was instantaneous. For example, Yee and coauthors have written a series of papers adapting the Bayesian method to inverse dispersion problems of increasing complexity \cite{KYL2007},\cite{Yee2007},\cite{YF2010},\cite{Yee2012} and \cite{Yee2012B}. To use an optimisation method the inverse dispersion problem has to be cast in a manner where the distance function (under a chosen norm) between model sensor data and the given sensor readings can be minimized. Usually a least squares solution is sought, and in \cite{BP2015} conditions under which the least squares problem is well defined are presented. Much of the literature focuses on the problem where it is a priori assumed that there is only a single source, see e.g. \cite{RL1998},\cite{THG2007},\cite{AYH2007} and \cite{ISS2012}. There are, however, exceptions; for example, in \cite{SSI2012} the renormalisation method (the least squares method under the renormalisation norm) presented in \cite{ISS2012} is generalised to cover an unknown number of point sources, and in \cite{Bocquet2005} space-time is discretised and an optimal source term is constructed by forming a union of (space-time) grid-sized point sources. In this paper we make a contribution to the literature on optimisation methods by applying a bilevel optimisation method, see \cite{Bard1998}, to a linear inverse dispersion problem.
A bilevel optimisation method splits the optimisation problem in two: a leader (upper level) problem and a follower (lower level) problem, and they are solved concurrently rather than simultaneously. We consider dispersion problems where the source is a single point source emitting at a constant rate. This problem is well suited to a bilevel optimisation method where the follower problem concerns solving for the emission rate and the leader problem concerns pinpointing the location of the source. For the bilevel optimisation method to work properly the follower problem is required to have minima, ideally a strict minimum. We therefore study local strict convexity of the follower problem; see Theorem \ref{thm:local_convexity} for sufficient conditions. We then explore the concept of local strict convexity and its connection to the ill-posedness of inverse problems through a few toy-model examples. Following this, the paper is rounded off with the bilevel optimisation method being applied to dispersion data from a series of wind tunnel experiments of urban environments of varying complexity. The wind tunnel data was collected as part of the European Defence Agency category B project MODITIC. In all cases the boundary layer is neutrally stable, and we only consider cases where the released gas is neutrally buoyant, making the dispersion problem linear. \section{Bilevel optimization problems} A bilevel optimization problem is a constrained optimization problem whose constraints also include an optimization problem. The problem is divided into an upper level or leader problem, with decision variables $\boldsymbol{x}\in X\subseteq \mathbb{R}^{n}$, and a lower level or follower problem, with decision variables $\boldsymbol{y}\in Y\subseteq \mathbb{R}^{m}$. Here $X$ and $Y$ may be restricted to integers or nonnegative values. We follow the notation of \cite{Bard1998}, p. 6. The leader problem has the form \begin{eqnarray*} V &=&\min_{\boldsymbol{x}\in X}F\left( \boldsymbol{x},\boldsymbol{y}\left( \boldsymbol{x}\right) \right) \\ \boldsymbol{G}\left( \boldsymbol{x},\boldsymbol{y}\left( \boldsymbol{x}\right) \right) &\leq &\boldsymbol{0} \end{eqnarray*} where $F$ and $\boldsymbol{G}$, respectively, are the leader objective function and constraint function, and $\boldsymbol{y}\left( \boldsymbol{x}\right) $ is an optimal solution to the follower problem \begin{eqnarray*} v\left( \boldsymbol{x}\right) &=&\min_{\boldsymbol{y}\in Y}f\left( \boldsymbol{x},\boldsymbol{y}\right) \\ \boldsymbol{g}\left( \boldsymbol{x},\boldsymbol{y}\right) &\leq &\boldsymbol{0}\text{.} \end{eqnarray*} An ambiguity occurs if the follower problem has several optimal solutions, i.e., $\boldsymbol{y}\left( \boldsymbol{x}\right) $ is set-valued. Then the follower is indifferent towards these points, but the leader objective may differ between points in $\boldsymbol{y}\left( \boldsymbol{x}\right) $, and there is no way for the leader to direct the follower to the upper level optimal point. Therefore, there may be no optimal solution to the bilevel program although all functions are continuous and $X,Y$ are compact, cf. \cite{Bard1998}, p. 11. In our problem the sets $X,Y$ will be positive orthants, and the follower objective function $f\left( \boldsymbol{x},\boldsymbol{y}\right) $ will be a Mahalanobis distance function measuring the discrepancy between model data and measurements.
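To fix ideas, the following minimal sketch (ours, not the implementation used for the MODITIC data) nests the two levels for a single point source with constant emission rate: for each candidate location the follower solves a one-dimensional nonnegative minimization of the Mahalanobis misfit, and the leader scans candidate locations. The forward model \texttt{A}, the variance model \texttt{sigma2}, the rate bound and the optional \texttt{penalty} hook are placeholders.
\begin{verbatim}
# Schematic sketch of the leader/follower nesting; A(x), sigma2(mu) (assumed
# strictly positive), the rate bound and the candidate grid are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

def follower(x, z, A, sigma2):
    """Best nonnegative emission rate y for a fixed candidate location x,
    minimizing the Mahalanobis misfit between model data A(x)*y and data z."""
    a = A(x)                                   # sensor response per unit emission rate
    def misfit(y):
        mu = a * y
        return float(np.sum((z - mu) ** 2 / sigma2(mu)))
    res = minimize_scalar(misfit, bounds=(0.0, 1e6), method="bounded")
    return res.x, res.fun

def leader(candidates, z, A, sigma2, penalty=lambda y: 0.0):
    """Crude grid search over candidate source locations; each candidate is
    scored by the follower's optimal misfit plus an optional regularizer."""
    best = None
    for x in candidates:
        y_opt, f_opt = follower(x, z, A, sigma2)
        score = f_opt + penalty(y_opt)
        if best is None or score < best[0]:
            best = (score, x, y_opt)
    return best   # (score, location, emission rate)
\end{verbatim}
The \texttt{penalty} hook stands in for the regularization term in the leader objective introduced next.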
The leader objective function $F$ will have the form \begin{equation*} F\left( \boldsymbol{x},\boldsymbol{y}\right) =\exp \left( -\lambda \sum y_{i}\right) +f\left( \boldsymbol{x},\boldsymbol{y}\right) \end{equation*} hence minimizing the least squares function while penalizing large values of $\boldsymbol{y}$, which will act as a regularization of the problem ($\lambda >0$ is a regularization parameter). \section{The follower problem} In the setting we are considering we have a priori assumed that the source is a point source releasing a neutrally buoyant substance at a constant rate. Under these assumptions the source location $\boldsymbol{x}\in \mathbb{R}^{d}$ is a nonlinear model parameter while the emission rate $\boldsymbol{y}\in \mathbb{R}^{1}$ is a linear model parameter. In general, our model formulation allows a linear combination of basic sources, with a linear positive weight vector $\boldsymbol{y}\in \mathbb{R}^{n}$. To solve the inverse problem we need a source-sensor relationship. Since the problem is linear the source-sensor relationship is given by a matrix relationship, which for computational efficiency \cite{Marchuk1986} is expressed through the adjoint formulation of the problem; thus $A:\mathbb{R}^{d}\rightarrow \mathbb{R}_{+}^{m\times n}$ is a matrix function with nonnegative elements (no sinks are considered) and the adjoint model data are $\boldsymbol{\mu }=\boldsymbol{\mu }\left( \boldsymbol{x},\boldsymbol{y}\right) =A\left( \boldsymbol{x}\right) \boldsymbol{y}\in \mathbb{R}^{m}$. The measured data $\boldsymbol{z}\in \mathbb{R}^{m}$ are the sensor response.\newline We regard $\boldsymbol{z}$ as a random vector, and we assume that the adjoint model data $\boldsymbol{\mu }$ represent the mean of $\boldsymbol{z}$. We also assume that the components $z_{i}$ of $\boldsymbol{z}$ are statistically independent and that the variance of $z_{i}$ is \begin{equation*} var\left( z_{i}\right) =\sigma ^{2}\left( \mu _{i}\right) \end{equation*} where $\sigma :\mathbb{R}\rightarrow \mathbb{R}_{+}$ is a given function. We let the follower objective function be the \emph{Mahalanobis distance} between $\boldsymbol{z}$ and $\boldsymbol{\mu }$, viz., \begin{equation} f\left( \boldsymbol{x},\boldsymbol{y}\right) =\sum_{i=1}^{m}\frac{\left( z_{i}-\mu _{i}\right) ^{2}}{\sigma ^{2}\left( \mu _{i}\right) }\text{.} \label{eqn:f} \end{equation} In particular, we want to be able to choose a scale-invariant distance function, giving equal emphasis to all $\mu _{i}$, regardless of their size. \subsection{Local convexity of the follower problem} If the follower problem is strictly convex, it has a unique optimal solution $\boldsymbol{y}\left( \boldsymbol{x}\right) $ for each $\boldsymbol{x}\in X$, and under mild assumptions on $f\left( \boldsymbol{x},\boldsymbol{y}\right) $ an envelope theorem holds (e.g., \cite{MilgromSegal2002}, Theorem 2, p. 586), which implies that the optimal value function \begin{equation*} V\left( \boldsymbol{x}\right) =f\left( \boldsymbol{x},\boldsymbol{y}\left( \boldsymbol{x}\right) \right) =\inf_{\boldsymbol{y\in Y}}\;f\left( \boldsymbol{x},\boldsymbol{y}\right) \end{equation*} is continuous.
Therefore, the leader problem \begin{equation*} \inf_{\boldsymbol{x}\in X}F\left( \boldsymbol{x},\boldsymbol{y}\left( \boldsymbol{x}\right) \right) \end{equation*} has a solution since \begin{equation*} F\left( \boldsymbol{x},\boldsymbol{y}\left( \boldsymbol{x}\right) \right) =\exp \left( -\lambda \sum_{i}y_{i}\left( \boldsymbol{x}\right) \right) V\left( \boldsymbol{x}\right) \text{. } \end{equation*} However, in our setting the only situation in which the follower problem is guaranteed to be strictly convex is when $\sigma \left( \mu \right) $ is constant, i.e., the classical least squares method. Nevertheless, we may derive conditions for \emph{local convexity}, as the following theorem shows. \begin{theorem} \label{thm:local_convexity}Assume that \begin{equation} f\left( y_{1},...,y_{n}\right) =\sum_{i=1}^{m}\frac{r_{i}^{2}}{\sigma ^{2}\left( \mu _{i}\right) } \label{def:f} \end{equation} where \begin{eqnarray*} \mu _{i} &=&\sum_{j}a_{ij}y_{j} \\ r_{i} &=&z_{i}-\mu _{i} \end{eqnarray*} and $A=\left( a_{ij}\right) \in \mathbb{R}^{m\times n}$. Consider a
\section{Comparison of entropy production bounds} Figure \ref{fig:EP_Bounds} shows the entropy production rate of the specific model considered in the main text, along with the different lower bounds discussed: the Jensen bound~\eqref{main_entropy_inequality}, TUR~\eqref{TURIneq}, and second law ($\dot{\Sigma}\geq0$). \begin{figure}[h] \includegraphics[width=0.8\columnwidth]{EP_Fig.pdf} \caption{\label{fig:EP_Bounds} Comparison of model entropy production with various lower bounds. Entropy production during time $\tau \equiv \ell^2/D_\mathrm{m}$ of the specific model considered in the main text (blue solid curve), our derived Jensen bound~\eqref{main_entropy_inequality} (black dotted), the TUR~\eqref{TURIneq} (gray dashed), and the second law (red dot-dashed), each as a function of $f_\mathrm{max}/f_\mathrm{chem}$. Uncertainties are smaller than the widths of the curves. Parameters are $N=2$ motors, $\beta f_\mathrm{chem}\ell = 15$, $\beta\kappa\ell^2 = 7$, and $D_\mathrm{c}/D_\mathrm{m} = 1/3$.} \end{figure} \section{Trade-off between velocity and power consumption} Figure~\ref{fig:PV_Pareto} shows the trade-off between power consumption and velocity due to parametric variation of the motor number $N$ and barrier heights. Since this Pareto frontier~\eqref{PV_frontier} depends on $D_\mathrm{c}/D_\mathrm{m}$, we hold that ratio constant. Computational constraints limit us to small $N$. When the motors face no barriers ($f_\mathrm{max}/f_\mathrm{chem}=0$), the system exactly saturates the Pareto frontier~\eqref{PV_frontier}. As $f_\mathrm{max}/f_\mathrm{chem}$ increases, the performance trade-off degrades, falling increasingly far from the Pareto frontier. \begin{figure}[h] \includegraphics[width=0.8\columnwidth]{PV_Pareto_Fig.pdf} \caption{\label{fig:PV_Pareto} Trade-off between power consumption $P_\mathrm{chem}$ and scaled velocity $\langle v \rangle/v_{\rm max}$ in the example system, plotted parametrically for $N = \{1,2,4,8,16,32\}$ and different $f_\mathrm{max}$ (colors). Black dotted curve: Pareto frontier~\eqref{PV_frontier}. Other parameters same as Fig.~\ref{fig:Dynamics}: $\beta f_\mathrm{chem}\ell = 15$, $\beta\kappa\ell^2 = 7$, and $D_\mathrm{c}/D_\mathrm{m} = 1/30$. Uncertainties are smaller than the widths of the points.} \end{figure} \section{Linear systems saturate the Jensen bound} Here we prove that a collective-transport system with only linear forces saturates the Jensen bound on entropy production~\eqref{main_entropy_inequality}. Consider a linear system composed of $N+1$ subsystems with positions denoted $\{x_1,...,x_{N+1}\}$, where the first $N$ subsystems are the motors and the last is the cargo, so $x_{N+1} \equiv x_\mathrm{c}$ and $D_{N+1}\equiv D_\mathrm{c}$. The system has constant force vector $\bm{f}$ and potential \begin{equation} V(\bm{x}) = V_0 + \sum_{i=1}^N\sum_{j=i+1}^{N+1} k_{ij}(x_i-x_j)^2. \end{equation} We neglect linear terms in the potential since they can be incorporated into the constant forces, and do not allow terms of the form $k_i x_i^2$ that depend on the absolute position of one subsystem, since they preclude the existence of a nonequilibrium steady state. The cargo may in general be subject to a non-zero external force, $f_{N+1} \equiv f_\mathrm{ext}$. The dynamics of this system are most simply written in Langevin form as \begin{equation} \dot{\bm{x}} = \beta \bm{D} \left[\bm{f} - \bm{A}\bm{x} \right] + \bm{\eta}(t).
\end{equation} Here $\bm{D}$ is the diffusivity matrix which, under the assumption of multipartite dynamics, is diagonal with entries $D_{ij} = D_i \delta_{ij}$, for Kronecker delta $\delta_{ij}$. The matrix $\bm{A}$ satisfies $A_{ij} = \partial_{x_i}\partial_{x_j}V(\bm{x})$, and the vector-valued random noise $\bm{\eta}(t)$ has zero mean and covariance matrix \begin{equation} \left\langle \bm{\eta}(t)\,\bm{\eta}^\top(t')\right\rangle = 2\bm{D} \delta(t-t'). \end{equation} The solution is a multivariate Gaussian distribution with mean vector $\bm{\mu}$ and covariance matrix $\bm{C} = \langle \bm{x}\bm{x}^\top - \bm{\mu}\bm{\mu}^\top\rangle$ satisfying the differential equations~\cite[Section 3.2]{risken1996fokker} \begin{subequations} \begin{align} \dot{\bm{\mu}} & = \beta \bm{D}\left[\bm{f} - \bm{A}\bm{\mu}\right],\\ \dot{\bm{C}} & = -\beta\bm{D}\left[ \bm{A}\bm{C} + \bm{A}^\top \bm{C}\right] + 2\bm{D}. \label{eq:covode} \end{align} \end{subequations} By definition $\bm{A}$ and $\bm{C}$ are symmetric, so $\bm{A}=\bm{A}^\top$, $\bm{C}=\bm{C}^\top$, and $\bm{C}^{-1} = \left(\bm{C}^{-1}\right)^\top$. The entropy production rate for the $i$th subsystem is~\cite{horowitz2015multipartite}: \begin{subequations} \begin{align} \dot{\Sigma}_i & = \frac{1}{D_i}\left\langle \left(\frac{J_i(\bm{x},t)}{P(\bm{x},t)}\right)^2\right\rangle\\ & = \frac{1}{D_i} \left\langle \frac{1}{P(\bm{x},t)^2}\left(\beta D_i f_i P(\bm{x},t) - \beta D_i(\bm{A}\bm{x})_iP(\bm{x},t) - D_i\frac{\partial}{\partial x_i}P(\bm{x},t)\right)^2\right\rangle\\ & = \frac{1}{D_i} \left\langle\left(\beta D_i f_i - \beta D_i(\bm{A}\bm{x})_i - D_i\frac{\partial}{\partial x_i}\ln P(\bm{x},t)\right)^2\right\rangle\\ & = \frac{1}{D_i}\left\langle \underbrace{(\beta D_if_i)^2 - 2(\beta D_i)^2 f_i(\bm{A}\bm{x})_i + (\beta D_i(\bm{A}\bm{x})_i)^2}_{1} \underbrace{- 2\beta (D_i)^2 \left[f_i - (\bm{A}\bm{x})_i\right] \frac{\partial}{\partial x_i}\ln P(\bm{x},t)}_{2} \underbrace{+ D_i^2 \left[\frac{\partial}{\partial x_i}\ln P(\bm{x},t)\right]^2}_{3}\right\rangle. \label{eq:threeTerms} \end{align} \end{subequations} For clarity, we separately evaluate the three terms in this lengthy expression. The first term is \begin{subequations} \begin{align} \frac{1}{D_i}\left\langle (\beta D_if_i)^2 - 2(\beta D_i)^2 f_i(\bm{A}\bm{x})_i + (\beta D_i(\bm{A}\bm{x})_i)^2\right\rangle & = \frac{(\beta D_i)^2}{D_i}\left[f_i^2 - 2f_i(\bm{A}\bm{\mu})_i + \left\langle(\bm{A}\bm{x})_i^2\right\rangle\right]\\ & = \frac{(\beta D_i)^2}{D_i}\left[f_i^2 - 2f_i(\bm{A}\bm{\mu})_i + \left\langle\bm{A}\bm{x}\bm{x}^\top\bm{A}\right\rangle_{ii}\right]\\ & = \frac{(\beta D_i)^2}{D_i}\left[f_i^2 - 2f_i(\bm{A}\bm{\mu})_i + \left(\bm{A}\bm{\mu}\bm{\mu}^\top\bm{A}\right)_{ii} + \left(\bm{A}\bm{C}\bm{A}\right)_{ii}\right]\\ & = \frac{1}{D_i}\left[(\beta D_i)^2\left(f_i - (\bm{A}\bm{\mu})_i\right)^2 \right] + \beta^2D_i(\bm{A}\bm{C}\bm{A})_{ii}\\ & = \frac{(\dot{\bm{\mu}}_i)^2}{D_i} + \beta^2D_i(\bm{A}\bm{C}\bm{A})_{ii}\label{eq:eqs1}\\ & = \frac{\langle v\rangle^2}{D_i} +\beta D_i\left(\bm{A}\right)_{ii}.\label{firstterm} \end{align} \end{subequations} In the last line we took the steady-state limit so that $\dot{\bm{\mu}}_i = \langle v\rangle$.
We further assumed that in the steady-state limit each term in the covariance matrix is linear in $t$: \begin{equation} \bm{C} = \bm{u} t + \bm{v}, \end{equation} where both $\bm{u}$ and $\bm{v}$ must be symmetric. This linearity in $t$ is necessary to obtain a constant effective diffusivity in the steady-state limit. The differential equation~\eqref{eq:covode} for the covariance then simplifies to \begin{subequations} \begin{align} \bm{u} & = -2\beta\bm{D}\bm{A}\bm{u} t -2\beta\bm{D}\bm{A}\bm{v} + 2\bm{D}. \end{align} \end{subequations} Since the left-hand side is independent of $t$, the right-hand side must be as well. For this to be true for general $\bm{D}$, we must have $\bm{A}\bm{u}=0$. We then evaluate the rightmost term in \eqref{eq:eqs1}: \begin{subequations} \begin{align} \bm{A}\bm{C}\bm{A} & = \beta^{-1}\bm{A} - \frac{1}{2}\beta^{-1}\bm{D}^{-1}\dot{\bm{C}}\bm{A}\\ & = \beta^{-1}\bm{A} - \frac{1}{2}\beta^{-1}\bm{D}^{-1}\bm{u}\bm{A}\\ & = \beta^{-1}\bm{A} - \frac{1}{2}\beta^{-1}\bm{D}^{-1}\left(\bm{A}\bm{u}\right)^\top\\ & = \beta^{-1}\bm{A}. \end{align} \end{subequations} The second term in \eqref{eq:threeTerms} is \begin{subequations} \begin{align} \frac{1}{D_i}\left\langle- 2\beta (D_i)^2 \left[f_i - (\bm{A}\bm{x})_i\right] \frac{\partial}{\partial x_i}\ln P(\bm{x},t) \right\rangle & = -2\frac{\beta D_i^2}{D_i}\left\langle \left[f_i - (\bm{A}\bm{x})_i\right] \frac{\partial}{\partial x_i}\left(-\frac{1}{2}(\bm{x}-\bm{\mu})^\top\bm{C}^{-1}(\bm{x}-\bm{\mu})\right) \right\rangle\\ & = -2\frac{\beta D_i^2}{D_i}\left\langle \left[f_i - (\bm{A}\bm{x})_i\right]\left(-\frac{1}{2}\left[(\bm{x}-\bm{\mu})^\top\bm{C}^{-1}\right]_i - \frac{1}{2}\left[\bm{C}^{-1}(\bm{x}-\bm{\mu})\right]_i\right) \right\rangle\\ & = 2\beta D_i\left\langle \left[f_i - (\bm{A}\bm{x})_i\right] \left( \bm{C}^{-1}(\bm{x}-\bm{\mu})\right)_{i}\right\rangle\\ & = 2\beta D_i f_i \left\langle \left( \bm{C}^{-1}(\bm{x}-\bm{\mu})\right)_{i}\right\rangle - 2\beta D_i \left\langle \left(\bm{A}\bm{x}\right)_i\left( \bm{C}^{-1}(\bm{x}-\bm{\mu})\right)_{i}\right\rangle\\ & = -2\beta D_i\left\langle \left(\bm{A}\bm{x}(\bm{x}-\bm{\mu})^\top\bm{C}^{-1}\right)_{ii}\right\rangle\\ & = -2\beta D_i\left(\bm{A}\bm{C}\bm{C}^{-1}\right)_{ii}\\ & = -2\beta D_i\left(\bm{A}\right)_{ii}.\label{secondterm} \end{align} \end{subequations} Finally, the third term in \eqref{eq:threeTerms} is \begin{subequations} \begin{align} \frac{1}{D_i}\left\langle D_i^2 \left[\frac{\partial}{\partial x_i}\ln P(\bm{x},t)\right]^2\right\rangle & = D_i \left\langle \left[-\frac{1}{2}\frac{\partial}{\partial x_i}\left((\bm{x}-\bm{\mu})^\top\bm{C}^{-1}(\bm{x}-\bm{\mu})\right)\right]^2\right\rangle\\ & = D_i \left\langle \left( \bm{C}^{-1}(\bm{x}-\bm{\mu})\right)_{i}^2\right\rangle\\ & = D_i\left\langle \left(\bm{C}^{-1}(\bm{x}-\bm{\mu})(\bm{x}-\bm{\mu})^\top\bm{C}^{-1}\right)_{ii}\right\rangle\\ & = D_i \left(\bm{C}^{-1}\left\langle(\bm{x}-\bm{\mu})(\bm{x}-\bm{\mu})^\top\right\rangle\bm{C}^{-1}\right)_{ii}\\ & = D_i \left(\bm{C}^{-1}\bm{C}\bm{C}^{-1}\right)_{ii}\\ & = D_i \left(\bm{C}^{-1}\right)_{ii}\\ & = \beta D_i (\bm{A})_{ii}.\label{thirdterm} \end{align} \end{subequations} To derive the last line we used \begin{subequations}
\begin{align} \bm{C}^{-1} & = \bm{A}\bm{A}^{-1}\bm{C}^{-1}\bm{A}^{-1}\bm{A}\\ & = \bm{A}\left(\bm{A}\bm{C}\bm{A}\right)^{-1}\bm{A}\\ & = \bm{A} \left(\beta^{-1}\bm{A}\right)^{-1}\bm{A}\\ & = \beta\bm{A}\bm{A}^{-1}\bm{A}\\ & = \beta \bm{A}. \end{align} \end{subequations} Summing the three terms~\eqref{firstterm}, \eqref{secondterm}, and \eqref{thirdterm}, the entropy production rate of the $i$th subsystem is \begin{subequations} \begin{align} \dot{\Sigma}_i & = \frac{1}{D_i}\langle v\rangle^2 + \beta D_i\left(\bm{A}\right)_{ii} - 2\beta D_i\left(\bm{A}\right)_{ii} + \beta D_i\left(\bm{A}\right)_{ii}\\ & = \frac{1}{D_i}\langle v\rangle^2. \end{align} \end{subequations} Thus the total rate of entropy production is \begin{equation} \begin{aligned} \dot{\Sigma} & = \sum_{i=1}^{N+1}\dot{\Sigma}_i\\ & = \left(\sum_{i=1}^{N+1}\frac{1}{D_i}\right)\langle v\rangle^2\\ & = \left(\frac{1}{D_\mathrm{c}} + \sum_{i=1}^N\frac{1}{D_i}\right)\langle v\rangle^2\\ & = \frac{\langle v\rangle^2}{D_\mathrm{bare}}, \end{aligned} \end{equation} exactly saturating the Jensen bound~\eqref{main_entropy_inequality}. \section{Comparison with experiments} Figure~\ref{fig:Experiment} shows experimental measurements of velocity and efficiency for myosin motors in several different animal tissues from Ref.~\cite{purcell2011nucleotide}. For maximum velocity $v_\mathrm{max} = 12\,\mu\mathrm{m/s}$ (to our knowledge, the highest observed in animal muscle tissue~\cite{piazzesi2002size}), our predicted Pareto frontier~\eqref{thermeffineq} indeed bounds the experimentally observed performance. The assumption of a global $v_\mathrm{max}$ across many different species is reasonable so long as the difference between species-specific myosin motors comes predominantly from different potentials $V(\bm{x})$ as opposed to differences in the chemical driving force and bare diffusivity. \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{ExperimentalComp.pdf} \caption{\label{fig:Experiment} Myosin motors across various animals obey the theoretical Pareto frontier. Red points: experimental measurements of efficiency $\eta_\mathrm{T}$ and velocity $\langle v\rangle$ for myosin motors from different animal species~\cite{purcell2011nucleotide}. Black dotted line: predicted Pareto frontier~\eqref{thermeffineq} for $v_{\rm max} = 12$ $\mu$m/s.} \end{figure} \section{Barrier heights in real systems} Here we use experimental data to estimate the heights of energy barriers separating metastable states for kinesin motors. Recall from \eqref{eq:total_potential} that the $i$th motor has a periodic potential $V_i(x_{i}) = \frac{1}{2}E^\ddagger \cos \left(2\pi x_{i}/\ell\right)$ with barrier height $E^\ddagger$, period $\ell$, and maximum conservative force $f_\mathrm{max} = E^\ddagger/(2\ell)$. The Kramers rate~\cite{kramers1940brownian} for an uncoupled motor hopping forward from one energy minimum to the next is \begin{subequations} \begin{align} k_+& = \frac{\beta D_\mathrm{m}}{2\pi}\sqrt{\left|\frac{\partial^2V_i}{\partial x_i^2}\right|_{x_i=a}\cdot \left|\frac{\partial^2V_i}{\partial x_i^2}\right|_{x_i=b}} \,\, e^{-\beta E_b^+}\\ & = \frac{\pi\beta D_\mathrm{m} E^\ddagger}{\ell^2} \, e^{-\beta E_b^+},\label{eq:forwardrate} \end{align} \end{subequations} where $a$ is the position of the bottom of the current potential minimum and $b$ is the position of the peak of the energy barrier to the right. The effective barrier height is $E_b^+ =E^\ddagger - f_\mathrm{chem}\ell/2$.
Note that the cosine potential has second derivative of magnitude $2\pi^2E^\ddagger/\ell^2$ at both minima and peaks (points $a$ and $b$). Likewise, the rate for the motor hopping backward to the previous minimum is \begin{equation}\label{eq:backwardrate} k_- = \frac{\pi\beta D_\mathrm{m} E^\ddagger}{\ell^2}\,e^{-\beta E_b^-}, \end{equation} where this time the effective barrier height is $E_b^- =E^\ddagger + f_\mathrm{chem}\ell/2$. Analysis of experimental data~\cite{carter2005mechanics} yields step rates for kinesin of $k_+ = 133.0/$s and $k_- = 0.2/$s~\cite{vu2016discrete}, and a step size of $\ell=8.2$ nm. Combining these with previous estimates of the motor diffusivity $D_\mathrm{m}\approx\mathcal{O}(10^{-3})\,\mu$m$^2$/s~\cite{brown2019pulling,leighton2022performance} and solving the two equations \eqref{eq:forwardrate} and \eqref{eq:backwardrate} for the two remaining parameters yields estimates $f_\mathrm{chem}\ell\approx 7\,k_\mathrm{B}T$ and $E^\ddagger = 2f_\mathrm{max}\ell\approx 6\,k_\mathrm{B}T$. Accordingly, $f_\mathrm{max}/f_\mathrm{chem}\approx0.4$ sets the scale for our numerical
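The inversion described above can be reproduced with a short numerical sketch (mine, not the authors' code). The ratio $k_+/k_-$ pins down $\beta f_\mathrm{chem}\ell$ directly, while the recovered barrier height depends on the assumed order-of-magnitude value of $D_\mathrm{m}$.

```python
# Sketch (not the authors' code): invert the two Kramers rates for the kinesin
# parameters. The ratio k+/k- fixes beta*f_chem*ell exactly; the absolute rate
# k+ then fixes beta*E^ddagger once a value of D_m is assumed.
import numpy as np
from scipy.optimize import brentq

k_plus, k_minus = 133.0, 0.2   # steps per second
ell = 8.2                      # nm
D_m = 1e-3 * 1e6               # assumed motor diffusivity, nm^2/s (~1e-3 um^2/s)

# k+/k- = exp(beta*f_chem*ell), so the log of the ratio gives the driving force.
beta_f_chem_ell = np.log(k_plus / k_minus)
print(f"beta * f_chem * ell ~ {beta_f_chem_ell:.1f}")   # ~6.5, consistent with ~7 quoted above

def residual(E):
    # E = beta * E^ddagger (dimensionless); k+ = (pi*D_m*E/ell^2) exp(-(E - beta*f_chem*ell/2))
    return np.pi * D_m * E / ell**2 * np.exp(-(E - beta_f_chem_ell / 2)) - k_plus

beta_E = brentq(residual, 1.0, 50.0)   # larger-barrier branch of the root
print(f"beta * E^ddagger ~ {beta_E:.1f} (value is sensitive to the assumed D_m)")
```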
dither for the XARM/YARM configurations to use as the alignment reference. We did this in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.

16265   Wed Jul 28 20:20:09 2021   Yehonathan   Update   General   The temperature sensors and function generator have arrived in the lab

I put the temperature sensors box on Anchal's table (attachment 1) and the function generator on the table in front of the c1auxey Acromag chassis (attachment 2).

Attachment 1: 20210728_201313.jpg
Attachment 2: 20210728_201607.jpg

16264   Wed Jul 28 17:10:24 2021   Anchal   Update   LSC   Schnupp asymmetry

[Anchal, Paco]

I redid the measurement of the Schnupp asymmetry today and found it to be 3.8 cm $\pm$ 0.9 cm.

### Method

• One of the arms is misaligned, both at ITM and ETM.
• The other arm is locked and aligned using ASS.
• The SRCL oscillator's output is changed to the ETM of the chosen arm.
• The AS55_Q channel in demodulation of the SRCL oscillator is configured (phase corrected) so that all signal comes in C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT.
• The rotation angle of the AS55 RFPD is scanned and C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT is averaged over 10 s after waiting 5 s to let the transients pass.
• This data is used to find the zero crossing of the AS55_Q signal when light is coming from one particular arm only.
• The same is repeated for the other arm.
• The difference in the zero crossing phase angles is twice the phase accumulated by a 55 MHz signal in travelling the length difference between the arm cavities, i.e. the Schnupp asymmetry.

I measured a phase difference of 5 $\pm$ 1 degrees between the two paths; half of this, 2.5 degrees of a 55 MHz cycle (wavelength c/55 MHz ≈ 5.45 m), corresponds to (2.5/360) × 5.45 m ≈ 3.8 cm. The uncertainty in this measurement is much larger than in gautam's measurement in 15956. I'm not sure yet why, but will look into it.

Quote:
I used the Valera technique to measure the Schnupp asymmetry to be $\approx 3.5 \, \mathrm{cm}$, see Attachment #1. The data points are points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot which I've tried to live up to.

Attachment 1: Lsch.pdf

16263   Wed Jul 28 12:47:52 2021   Yehonathan   Update   CDS   Opto-isolator for c1auxey

To simulate a differential output I used two power supplies connected in series. The outer connectors were used as the outputs and the common connector was connected to the ground and used as a reference. I hooked these outputs to one of the differential analog channels and measured them over time using Striptool. The setup is shown in attachment 3. I tested two cases: with the reference disconnected (attachment 1), and connected (attachment 2). Clearly, the non-referred case is way too noisy.

Attachment 1: SUS-ETMY_SparePDMon0_NoRef.png
Attachment 2: SUS-ETMY_SparePDMon0_Ref_WithGND.png
Attachment 3: DifferentialOutputTest.png

16262   Wed Jul 28 12:00:35 2021   Yehonathan   Update   BHD   SOS assembly

After receiving two new tubes of EP-30 I resumed the gluing activities. I made a spreadsheet to track the assemblies that have been made, their position on the metal sheet in the cleanroom, their magnetic field, and the batch number.
I made another batch of 6 magnets yesterday (4th batch); the assembly from the 2nd batch is currently being tested for bonding strength.

One thing that we overlooked in calculating the amount of glue needed is that in addition to the minimum 8 gr of EP-30 needed for every gluing session, there is also 4 gr of EP-30 wasted on the mixing tube. So that means 12 gr of EP-30 are used in every gluing session. We need 5 more batches, so at least 60 gr of EP-30 is needed. Luckily, we bought two tubes of 50 gr each.

16261   Tue Jul 27 23:04:37 2021   Anchal   Update   LSC   40 meter party

[ian, anchal, paco]

After our second attempt at locking PRFPMI tonight, we tried to restore the XARM and YARM locks to IR by clicking on IFO_CONFIGURE>Restore XARM (POX) and IFO_CONFIGURE>Restore YARM (POY), but the arms did not lock. The green lasers were locked to the arms at maximum power, so the relative alignments of each cavity were ok. We were also able to lock PRMI using IFO_CONFIGURE>Restore PRMI carrier. This was very weird to us. We were pretty sure that the alignment was correct, so we decided to check the POX POY signal chain. There was essentially no signal coming at POX11 and there was a -100 offset on it. We could see some PDH signal on POY11 but not enough to catch the locks. We tried running IFO_CONFIGURE>LSC OFFSETS to cancel out any dark current DC offsets. The changes made by the script are shown in attachment 1. We went to check the tables and found no light visible on the beam finder cards on POX11 or POY11. We found that ITMX was stuck on one of the coils. We unstuck it using the shaking method. The OPLEVs on ITMX after this could not be switched on as the OPLEV servos were railing to their limits. But when we ran Restore XARM (POX) again, they started working fine. Something is done by this script that we are not aware of. We're stopping here. We still cannot lock any of the single arms.

Wed Jul 28 11:19:00 2021 Update:

[gautam, paco]

Gautam found that the restoring of POX/POY failed to restore the whitening filter gains in POX11 / POY11. These are meant to be restored to 30 dB and 18 dB for POX11 and POY11 respectively but were set to 0 dB, to the detriment of any POX/POY triggering/locking. The reason these are lowered is to avoid saturating the speakers during lock acquisition. Yesterday, burt-restore didn't work because we restored c1lscepics.snap but the said gains are actually in c1lscaux.snap. After manually restoring the POX11 and POY11 whitening filter gains, gautam ran the LSCOffsets script. The XARM and YARM were able to quickly lock after we restored these settings. The root of our issue may be that we didn't run the CARM & DARM watch script (which can be accessed from the ALS/Watch Scripts in medm). Gautam added a line to the Transition_IR_ALS.py script to run the watch script instead.

Attachment 1: Screenshot_2021-07-27_22-19-58.png

16260   Tue Jul 27 20:12:53 2021   Koji   Update   BHD   SOS assembly

1 or 2. The stained ones are just fine. If you find the vented 1/4-20 screws in the clean room, you can use them. For the 28 screws, yeah, find some spares in the clean room (faster), otherwise just order.

16259   Tue Jul 27 17:14:18 2021   Yehonathan   Update   BHD   SOS assembly

Jordan has made 1/4" tapped holes in the lower EQ stop holders (attachment). The 1/4" stops (schematics) fit nicely in them. Also, they are about the same length as the small EQ stops, so they can be used. However, counting all the 1/4"-3/4" vented screws we have shows that we are missing 2 screws to cover all 7 SOSs. We can either:

1. Order new vented screws.
2. Use 2 old (stained but clean) EQ stops.
3. Drill holes into existing 1/4"-3/4" screws and clean them.
4. Use small EQ stops for one SOS.
etc.

Also, I found a mistake in the schematics of the SOS tower. The 4-40 screws used to hold the lower EQ stop holders should be SS and not silver-plated as noted. I'll have to find some (28) spares in the cleanroom or order new ones.

Attachment 1: 20210727_154506.png

16257   Mon Jul 26 17:34:23 2021   Paco   Update   Loss Measurement   Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties. Before, we simply stitched all N=16 repetitions into a single time series and computed the loss: e.g. see Attachment 1 for such YARM loss data. The mean and stdev for this long time series give the quoted loss from last time. We knew that the
[–] As a twentysomething currently (re)learning algebra and trigonometry in order to enter an engineering program, I empathize with what many of those schoolchildren alluded to in the article feel and go through. I think much of that math anxiety is emotional or psychological, considering one is being measured in a high-pressure environment, where the results determine whether your life advances or not. Children might benefit more if they are encouraged and taught how to manage success and failure.

[–] Can you explain what you mean by (re)learning? I'm at a point in my college career where I'm forced to enter math classes in order to proceed. I cannot take any other classes until I get math knocked out. The problem is that I am two or three classes below college-level algebra. I've failed the first go-round in college-level algebra, and am scared that I won't be able to get a college diploma because I have 12 years of public school math ruining my chances of understanding higher math. My question is this: how does one deprogram themselves and actually learn math? The class that I failed was based around factoring polynomials and linear equations. I cannot understand that shit. AT ALL. I don't know where to turn. I studied, I went to meetings with my college math instructor at her classroom at the local high school where she teaches full time. I still could not pass. Is there some set of books or software or program that I can buy that might teach me math in a novel new way so that I might understand it?

[–] (Discrete Math) There is no royal road to mathematics, I'm afraid. But if you have specific questions you should ask in
Can you describe what you don't understand about these equations?

[–] I appreciate your response, but I can see now that you can't help me.

[–] You need a good one-on-one tutor who can address your difficulties and explain things in a way that you can understand. You can look for one via your school's tutoring service, via recommendation, craigslist, etc. A good private tutor will run you $30-80 an hour, depending where you live. Look for a tutor that can really speak TO you, rather than AT you - this is very important.

What he meant by relearning is the idea of having a new perspective on the material years later... taking trig in high school, many students learn the steps necessary to solve a particular type of problem and can do relatively well on a midterm or final designed to test those types of problems (regurgitation), but years later when they revisit the material again, they are able to contextualize that same type of problem in the larger scheme of what trigonometry is all about. Coming "full circle" like that results in true comprehension and allows one to solve new types of problems, ones that are within the same realm of math but have not been explicitly covered before. A couple more concepts relearnt like that, and now we've got a person that's actually got a pretty good understanding of trig, and the right mindset of how to approach mathematics in general. Now, that's where the true learning can begin.
In your case, you've been bogged down by an education system that caters to the general - perhaps you don't think or learn about math the same way others do. That's fine! You need to find someone that understands the way you learn and the way you think, and can then convey these mathematical concepts in that way. Those people are out there, you just have to look for them. Good luck!

[–] Thanks for this.

[–] I may have some perspective on this. I've spent the past four years learning math that I either haven't seen in nearly 40 years, or never had in the first place. I was your typical high verbal scores/lower math scores person, and spent over 25 years in industry as a technical writer in the software industry. I quit a few years ago and decided to go into teaching - figured I would be a language arts teacher. Well, I started working as a tutor in a school, and when I saw how they were teaching middle school math these days, I got all excited about math in a way I hadn't been since I was that age (MANY years ago). I had always said I was a words person, not a numbers person - well, I decided to dump that perception of myself, and made the decision to become a math teacher! I passed the Massachusetts teachers exam, and have spent the remaining years reteaching and teaching myself the math so that I can get it right. And sometimes it is hard - VERY hard. It's actually great for me that it is so difficult, because now I know exactly how students feel who are struggling with their math.

What worked for me? At the beginning, I bought just about all of the MathTutorDVD series by Jason Gibson. They are not cheap, but they are cheaper than textbooks and courses. Jason breaks concepts and techniques down into simple steps, and gives you a lot of confidence and methods to get going quickly. That helped me a LOT at the beginning (he even has a video on using the TI calculators, which can help quite a bit, especially if you end up in a statistics class). But the videos don't cover everything... not even Khan Academy. The Khan Academy web site has literally thousands of videos, and a good portion of them are Sal showing how to solve problems. I'm one of those few people who actually find his delivery somewhat annoying and distracting, but he DOES deliver a valuable resource.

And don't discount what Velium said. I've had to purchase a lot of used math books, because, as many others have said in other discussions, one book rarely describes everything well. Try to find a variety of books on the same topic (College Algebra, for example) and see if one section in one book will explain a concept better than the same section in another book. It's annoying to have to bounce around between texts, I know - I'd love to find a one-source-with-everything-I-need textbook, but no luck so far.

Try not to give up on yourself. I've had days and weeks when I would be trying to understand a problem, convinced I would never be able to even see what it was I was supposed to do, and then bam - I get it. Other times, it just meant slogging through every sub-step along the way until I arrived at the solution. The big thing to remember is this: there are millions of people who have approached this material and were able to understand it at some point. From the way you are able to write, it's clear you're not brain damaged, which means you likely have the physical equipment to do this.
Trust those two propositions, and keep searching for the books and/or people who can help you see
process, and distribute a volumetric quantity (a gallon) of crude oil. The report offers the equation

E_Tp = [(m_c C_c + m_o C_o)(T_R − T_O)] / m_c

as a way to calculate it. But this is the energy of the sensible part of the oil-water mixture above the reference temperature T_O. It does not include the chemical energy of the crude oil, and the formula cannot be reconciled with the definition of E_Tp. The following equation also appears in the report:

E_Tp = ∫_{t_1}^{t_2} T_0 σ̇_cv dt

Thus there are two equations to use for calculating E_Tp, and there is no mention of what the independent variables are or of what is calculated using these equations. If the value of E_Tp is calculated this way, then how is the previous equation used? The only unknowns are the reservoir temperature T_R and the oil-water ratio, if the total flow rate is determined from the depletion rate equation. The reservoir temperature can be measured, so the unknown seems to be the water-oil ratio. However, the report makes use of an empirical equation for the oil/water ratio as a function of the percent depletion of the reservoir. Finally, the last equation can only be used to calculate the change in exergy, which would necessitate introducing a new symbol for exergy; exergy is not the same as energy.

The report next presents a calculation of the oil extraction trajectory that is based on Hubbert's methodology. The calculations are in close agreement with what others have found, with a cumulative production of 2357 Gb that is somewhat larger than Campbell and Laherrere's value of 2123 Gb. It is now well known that in calculations based on the logistic equation there is a slow drift toward larger values of the ultimate production as more data is included in the calculations with the passing of the years.

In the same section there is also a discussion of the surface water cut as a function of the percent of oil extracted from a reservoir. The curve is then rotated in order to satisfy two criteria set by the authors. Now, a rotation of a curve is a mathematical transformation, and a curve cannot be arbitrarily rotated without destroying the underlying mathematical theory. Furthermore, the report states that E_Tp cannot exceed E_G, the crude oil's specific exergy. The terminology is again used loosely, applied to both energy and exergy.

Returning to the calculation in Section 4.1 of the report, E_Tp is calculated by the equation

E_Tp = [(m_c C_c + m_o C_o)(T_R − T_O)] / m_c

The statement on top of page 19 suggests that the water cut is an input parameter, in which case the value of E_Tp depends only on the reservoir temperature. The reservoir temperature in turn is a function of the depth of the well, owing to the geothermal gradient. This would allow this equation to be used to calculate the sensible energy of the oil-water mixture. But what purpose does this serve? The sensible heat of the crude oil is not used in any significant way. The crude oil cools as it enters the ground facilities and it cools further as it is transported in the pipelines. No power is generated from the sensible part of the crude oil's energy. Only the chemical energy is valuable upon combustion.

The rest of the report relates to how prices are linked to the energy delivered. There is no theory to predict how prices adjust to either temporary surplus or deficit. From what has been discussed above, the thermodynamic analysis is incorrect and therefore any calculations and graphs based on this analysis must also be unreliable.
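To put rough numbers on the sensible-versus-chemical point above, here is a back-of-the-envelope check (my own, using representative property values rather than figures from the report):

```python
# Back-of-the-envelope check (not from the report): sensible heat of the produced
# oil-water mixture per kg of crude, compared with the chemical energy of crude oil.
# All property values below are representative assumptions for illustration.
C_water = 4.19e3        # J/(kg K), specific heat of water
C_oil   = 2.0e3         # J/(kg K), rough specific heat of crude oil
dT      = 60.0          # K, assumed reservoir temperature above the reference
water_cut = 0.5         # assumed mass fraction of water in the produced stream

m_water = water_cut         # kg water per kg of produced fluid
m_crude = 1.0 - water_cut   # kg crude per kg of produced fluid

E_sensible = (m_crude * C_oil + m_water * C_water) * dT / m_crude  # J per kg of crude
E_chemical = 45e6                                                  # J per kg, typical heating value

print(f"sensible heat:   {E_sensible/1e6:.2f} MJ per kg of crude")
print(f"chemical energy: {E_chemical/1e6:.0f} MJ per kg of crude")
print(f"ratio:           {E_sensible/E_chemical:.1%}")
```

With these assumed values the sensible term comes out below one percent of the chemical energy released on combustion, which is the sense in which the sensible heat "is not used in any significant way."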
Readers have noted that the so-called analysis predicts a peak in oil production during the 2017-2018 time frame and troubles by 2023. That this coincides with the time when others have judged the difficulties will appear seems to give the report a superficial credibility. If the authors have a better handle on how much energy is expended in oil production, they can form the EROEI ratio, and it would constitute an independent check on the work of Hall and his coworkers on EROEI. Such an independent analysis would have some value.

Rune Likvern says: 02/10/2017 at 10:34 pm
Seppo, +1 000 000! I am (and many others) now awaiting Hill's rebuttal to this.

#### [Jul 19, 2016] Berman: Two years into the global oil-price collapse, it seems unlikely that prices will return to sustained levels above $70 per barrel any time soon or perhaps, ever

##### See the much more realistic oil price forecast from Fernando

###### peakoilbarrel.com

texas tea, 07/16/2016 at 4:58 pm
I like Art and how he thinks and writes, but I also think that he has some Dennis Gartman blood in him; he holds many ideas at the same time and he can argue any of them very well. These articles seem to contradict each other a bit, but they are at least thoughtful... and now for something completely different: https://www.donaldjtrump.com/press-releases/an-america-first-energy-plan

"Two years into the global oil-price collapse, it seems unlikely that prices will return to sustained levels above $70 per barrel any time soon or perhaps, ever. That is because the global economy is exhausted" ~ A. Berman, ca. July 2016

"But from 2008 to 2015, oil production actually fell in 27 of 54 countries despite record high price. Thus, while peak oil critics have been proven right in North America they have been proven wrong in half of the World's producing countries" ~ E. Mearns, ca. July 2016

It looks like my posts at this fine blog for the past 2 – 2.5 years are finally being read and understood ….. Maybe one day even Dennis will get the message……. ……one can only hope…..

"…while indeed initiated by geology, this time "PEAK" shall be by the way – and in the form of low prices…" ~ Petro's main theme for the past 2 years on POB

Be well, Petro

P.S.: a little hubris and arrogance is healthy now and then….

Fernando Leanme, 07/16/2016 at 8:06 am
Here's my forecast

Javier, 07/16/2016 at 9:22 am
Unlikely, Fernando. I see very high volatility in oil prices heading our way. Sustained high prices are only possible with a very good economy or with a very low production (you only sell to the elites). On the other hand the value of money could tank with a monetary crisis, and oil prices could rise to millions of dollars per barrel.

Stavros Hadjiyiannis, 07/16/2016 at 12:42 pm
I totally agree with you. I see the oil price rising well over 100 bucks per barrel before the end of the decade. As for the persistent fantasies that Russian oil output will decline: the exact opposite will happen in the long term. Russian oil reserves easily dwarf anybody else's. The concluding paragraph on the oil reserves of the Bazhenov formation in SW Siberia reaches an unequivocal conclusion: "Giant recoverable oil reserves contained in the fractures suggest that the Jurassic reservoir is a primary oil accumulation which has no analog all over the world. Therefore, we believe that Russia has the largest hydrocarbon reserves in the world."

shallow sand, 07/16/2016 at 6:07 pm
Any info on how the first wells in this play are performing? It seems it is difficult to find much information online about them.
Javier, 07/16/2016 at 4:53 am
Petro, It is fine and dandy that you show some arrogance when the data is starting to support your hypothesis; however, I must point out that a lot of people have been coming to the same conclusions at about the same time. There are a lot of clever people in the world. Ron Patterson has been onto oil decline for a very long time from studying oil production data.
two results together to get the final product. Note that the product has more digits than either of the two original numbers.

You can follow the same procedure to multiply two binary numbers:

        10101
      × 01101
    ---------
        10101
       00000
      10101
     10101
    00000
    ---------
    100010001

Notice that each of the partial products is either zero or a copy of 10101 shifted leftward some number of digits. The partial products that are zero can, of course, be skipped. Accordingly, in order to multiply in binary, the computer simply starts with 0 in the accumulator and works through the second number to be multiplied (01101 in the example), checking whether each digit of it is 1 or 0. Where it finds a 0, it does nothing; where it finds a 1, it adds to the accumulator a copy of the first number, shifted leftward the appropriate number of places.

binary number  a number expressed in binary (base-2) notation, a system that uses only two digits, 0 and 1. Binary numbers are well suited for use by computers, since many electrical devices have two distinct states: on and off. Writing numbers in binary requires more digits than writing numbers in decimal, so binary numbers are cumbersome for people to use. Each digit of a binary number represents a power of 2. The right-most digit is the 1's digit, the next digit leftward is the 2's digit, then the 4's digit, and so on:

    Decimal          Binary
    2^0 = 1          1
    2^1 = 2          10
    2^2 = 4          100
    2^3 = 8          1000
    2^4 = 16         10000

Table 4 shows examples of numbers written in binary and decimal form. See also DECIMAL NUMBER; HEXADECIMAL NUMBER; OCTAL.

    TABLE 4  DECIMAL-BINARY EQUIVALENTS
    Decimal   Binary      Decimal   Binary
    0         0           11        1011
    1         1           12        1100
    2         10          13        1101
    3         11          14        1110
    4         100         15        1111
    5         101         16        10000
    6         110         17        10001
    7         111         18        10010
    8         1000        19        10011
    9         1001        20        10100
    10        1010        21        10101

binary search  a method for locating a particular item from a list of items in alphabetical or numerical order. Suppose you need to find the location of a particular word in a list of alphabetized words. To execute a binary search, look first at the word that is at the exact middle of the list. If the word you're looking for comes before the midpoint word, you know that it must be in the first half of the list (if it is in the list at all). Otherwise, it must be in the second half. Once you have determined which half of the list to search, use the same method to determine which quarter, then which eighth, and so on. At most, a binary search will take about N steps if the list contains about 2^N items.
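The two procedures just described, shift-and-add multiplication and binary search, are easy to write out in code. The following Python sketch is an added illustration, not part of the dictionary; the function names are my own.

```python
# Illustrative sketch (not from the dictionary): shift-and-add binary
# multiplication and binary search, written out explicitly in Python.

def multiply_binary(a: int, b: int) -> int:
    """Multiply by scanning b's binary digits, adding shifted copies of a."""
    accumulator = 0
    shift = 0
    while b:
        if b & 1:                      # current digit of b is 1
            accumulator += a << shift  # add a copy of a, shifted left
        b >>= 1                        # move to the next digit of b
        shift += 1
    return accumulator

def binary_search(items, target) -> int:
    """Return the index of target in a sorted list, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2        # look at the exact middle
        if items[mid] == target:
            return mid
        elif target < items[mid]:      # target must be in the first half
            high = mid - 1
        else:                          # target must be in the second half
            low = mid + 1
    return -1

# The worked example above: 10101 x 01101 = 100010001 (21 x 13 = 273)
assert multiply_binary(0b10101, 0b01101) == 0b100010001
assert binary_search(["ant", "bee", "cat", "dog"], "cat") == 2
```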
binary subtraction  a basic operation in computer arithmetic. The easiest way to subtract two binary numbers is to make one of the numbers negative and then add them. Circuits for doing binary addition are readily constructed with logic gates (see BINARY ADDITION). The negative counterpart of a binary number is called its 2-complement.

Suppose that we have a number x, represented as a binary number with k digits. The 2-complement of x (written as x̄) is

    x̄ = 2^k − x

Then, to find the difference a − x we can compute

    a − x = a + x̄ − 2^k

This is easier than it looks, for two reasons. First, subtracting 2^k is trivial, because 2^k is a binary number of the form 1000, 100000, and so on, with k+1 digits. So all we have to do is discard the leftmost digit to get our k-digit answer. Second, finding the 2-complement of x is easy: just invert all the digits of x (changing 0's to 1's and 1's to 0's) and then add 1. See INVERTER.

Suppose we want to compute 5 − 2 using 4-digit binary representations. That is, we want to compute:

    0101 − 0010

First, change the second number to its complement, change the minus to a plus, and subtract 2^k:

    0101 + (complement of 0010) − 10000

To actually compute the complement, invert the digits of 0010 and add 1, so the whole computation becomes:

    0101 + (1101 + 1) − 10000

Evaluate this expression by performing the two additions

    0101 + 1101 + 1 = 10011

and then throwing away the leftmost digit, giving 0011 (= 3), which is the answer.

This method for handling subtraction suggests a way to represent negative numbers. Suppose we want to represent −3. Positive 3 is binary 011. Negative 3 can be represented by the 2-complement of 3, which is the binary representation of 5: 101. However, we need an extra bit to indicate that 101 indicates −3 instead of 5. The bit indicating the sign will be included as the first digit of the number, with 1 indicating negative and 0 indicating positive.

The range of numbers that can be represented is different than before. Without the sign bit, 4 binary digits can hold numbers from 0 to 15; with the sign bit, the numbers range from −8 to 7. The table shows how.

    Positive Numbers        Negative Numbers
    Decimal   Binary        Decimal   Binary
    0         0 0 0 0
    1         0 0 0 1       −1        1 1 1 1
    2         0 0 1 0       −2        1 1 1 0
    3         0 0 1 1       −3        1 1 0 1
    4         0 1 0 0       −4        1 1 0 0
    5         0 1 0 1       −5        1 0 1 1
    6         0 1 1 0       −6        1 0 1 0
    7         0 1 1 1       −7        1 0 0 1
                            −8        1 0 0 0

On real computers it is typical to use 16 bits (2 bytes) to store integer values. Since one of these bits is the sign bit, this means that the largest positive integer that can be represented is 2^15 − 1 = 32,767, and the most negative number that can be represented is −(2^15) = −32,768. Some programming languages also provide an "unsigned integer" data type that ranges from 0 to 65,535.

bind  to associate symbols with data, or to associate one piece of data with another, in several different ways, among them:
1. to give a variable a value; to INITIALIZE it.
2. to associate a network protocol with a particular Ethernet port or the like. See PROTOCOL.
3. to map an XML document onto a set of variables or objects in Java or another programming language.
4. to put together the pages of a book.

binding  see BIND (all definitions).

biometrics  measurable physical characteristics of the human body, used to identify an individual for security purposes. They include fingerprints, the distinctive appearance of faces and eyes, and the distinctive sound quality of one's voice. There are computer input devices to read these characteristics.

BIOS  (Basic Input Output System) a set of procedures stored on a ROM chip inside PC-compatible computers. These routines handle all input-output functions, including screen graphics, so that programs do not have to manipulate the hardware directly. This is important because if the hardware is changed (e.g., by installing a newer kind of video adapter), the BIOS can be changed to match it, and there is no need to change the application programs.

The BIOS is not re-entrant and is therefore not easily usable by multitasking programs. Windows programs do not call the BIOS; instead, they use procedures provided by the operating system.

BIOS enumerator  the BIOS routine that tells a PLUG AND PLAY system what hardware is installed.

bipolar transistor  a semiconductor device formed by sandwiching a thin layer of P- or N-type semiconductor between two layers of the opposite type of semiconductor. (See TRANSISTOR.) The other general type of transistor is the field-effect transistor (FET).

bis  Latin for "a second time," used to denote revised CCITT and ITU-T standards. See CCITT; ITU-T.
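As a companion to the binary subtraction entry above, the following Python sketch (again an added illustration, not part of the dictionary) reproduces the 5 − 2 worked example using a 2-complement with k = 4 digits.

```python
# Illustrative sketch (not from the dictionary): the 2-complement subtraction
# described under "binary subtraction" above, using k = 4 digits.

K = 4                      # number of binary digits
MASK = (1 << K) - 1        # 0b1111, keeps only the lowest K digits

def two_complement(x: int) -> int:
    """Invert all K digits of x and add 1."""
    return ((~x) + 1) & MASK

def subtract(a: int, x: int) -> int:
    """Compute a - x as a + complement(x), discarding the carry out of K digits."""
    return (a + two_complement(x)) & MASK   # & MASK discards the leftmost digit

# The worked example above: 0101 - 0010 = 0011  (5 - 2 = 3)
assert two_complement(0b0010) == 0b1110     # invert 0010 -> 1101, then add 1 -> 1110
assert subtract(0b0101, 0b0010) == 0b0011
```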
BIST  (built-in self test) a feature included in newer integrated circuits and other electronic equipment. An electronic device that has BIST can test itself thoroughly whenever it is turned on. See INTEGRATED CIRCUIT.

bit  a shorthand term for binary digit. There are only two possible binary digits: 0 and 1. (See BINARY NUMBER.) Bits are represented in computers by two-state devices, such as flip-flops. A computer memory is a collection of devices that can store bits.

A byte is the number of bits (usually 8) that stand for one character. Memory is usually measured in units of kilobytes or megabytes. See MEMORY.

One important measure of the capability of a microprocessor is the number
# Thread: On Einstein's explanation of the invariance of c

1.

BenTheMan: Surely this thread has run its course.

Motor Daddy and I aren't quite sick of it yet. You don't need to participate. Please don't close it.

I know exactly what a reference frame is, and I know what "at rest" means (not accelerating).

Wrong. "At rest" means "Not moving". "At rest" means zero velocity. Also, you can be at rest and still be accelerating. That's exactly what happens (momentarily) every time you put your foot on the accelerator to start your car moving from the traffic lights.

I am measuring the velocity of the ship in space. The ship has its own velocity in space, which is not relative to any other object.

You're merely repeating an unproven assertion.

You don't seem to get that an object can be in motion in space. Do you not understand that an object can traverse space? Why do you insist on saying an object can't be in motion in space? I can measure the motion of an object in space, using light. The ship has a velocity in space just as light has a velocity in space.

Yes, objects can traverse "space". Part of what you are saying there is that objects can move - something I obviously don't deny. The other part of what you're saying is that there is a mysterious stationary substance called "space" that is absolutely at rest, which is wrong. You can't measure the motion of an object in your "space". The most you ever actually manage in your thought experiments is to stand outside the thing you're measuring and measure light travel times in that external frame of reference. I know that's not what you think you're doing, but that's what you're doing. When your method works, that is.

No I'm not viewing it from outside. I am not measuring my velocity compared to the ship, I am measuring the ship's velocity in space from inside the ship using light. Do you not understand what I am saying?

I understand perfectly what you are saying. It is based on the false imagining that the light travel times measured inside the ship can be different in the two directions. That's error number one. Error number two is your belief that if you measure the times using clocks in another reference frame for the same light, you'll measure the same times.

Your answers are only correct if the ship is not moving in space, which means it has a zero velocity.

A spaceship always has zero velocity in its own rest frame, by definition. And no experiment can contradict that. If you see a spaceship moving, then you're watching it fly past. You're not sitting in it. By the way, if you're sitting in it and you look out the window at the stars and planets passing by, then you say the stars and planets are moving, not the spaceship. That's what reference frames are all about. People sitting on those planets say the spaceship is moving and the planets are stationary. Change of reference frame - see? Your idea that some things are "really" stationary and others are "really" moving with respect to "space" is false. There's no standard of absolute rest. It just doesn't exist.

Yes they were measured from within the ship. I set the problem; you're telling me how I constructed it? All my numbers add up in and outside the ship, compared to every frame in the universe.

No, (a) because you assume that the one-way light travel times measured in the ship will be different, and (b) because you assume that anyone in any frame will measure the same one-way travel times. Both of those assumptions are wrong.

Light measures distance in space.

No.
Light just travels the same distance in the same time in any reference frame. And if you think this statement and your statement are saying the same thing, then you don't know what a reference frame is.

You want to try and say an object has a zero velocity if there is nothing to measure it against.

No. I want to say that an object has zero velocity if you're sitting on it and measuring its speed in its own reference frame.

I am telling you that I can measure the velocity of the ship in space, which means I can tell you the distance the ship traversed "empty space" in a given amount of time.

But you can't tell me that. You always in fact are implicitly introducing another external reference frame. You call it "space" and you think it's special, but it's just one more frame and has no special status.

Space has distance, and the ship can travel a distance in space in x amount of time, no other object or frame required to measure the distance or time.

You can't measure distance or time in the absence of a reference frame. A reference frame is what enables these measurements - otherwise quoted times and distances are meaningless.

Do you understand that volume has distance, and objects can travel in a volume?

Yes. The speed of light is in fact the same in all frames. So, repeat after me: "I agree with Einstein that the speed of light is the same in all frames." Go on, let me see you write that. Can you bring yourself to do it?

The part you fail to realize is that objects also have speed, and the object's speed changes the amount of time the light takes to travel from point a to point b on the ship.

I understand that if an object has speed relative to a particular reference frame, then what you say here is perfectly true. In that case I agree with you completely. Where we disagree is in your imagining an absolute standard for speed which you call "space". Space is a nothing. It is not a substance.

Do you not understand that the ship's velocity changes the distance light has to travel from one end of the ship to the other? The ship could be 1 meter in length, and it could take light .5 seconds to travel from end to end of the ship, because the ship has a velocity in space too!

I agree, but it's not a velocity "in space". It's a velocity relative to some reference frame which is implied and not absolute. When I say it has a rest length of 1 metre, I need to measure that. For example, use light.

But first you must know if the meter stick is in motion in space.

NO! That's where you're absolutely and utterly wrong. The speed of light is the same IN ALL REFERENCE FRAMES. Therefore, I can make the measurement in ANY reference frame I like and find the correct answer.

As Neddy Bate previously mentioned, the formula for my theory of finding the absolute velocity and distance between clocks is:

v = (cT - ct) / (T + t)
d = T(c - v)

where:
v is the absolute speed along the line between the clocks
d is the distance between the clocks
T is the greater time
t is the lesser time

I have no argument with that at all. As long as both times are measured in the same reference frame, the relative speed v calculated for the object will be correct for that frame, as will the distance d. There's nothing absolute about this calculational method, except for the incorrect assumption that the c refers to the speed of light relative to an absolute "space", and the assumption that v is the speed of the object relative to that "space".
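For what it's worth, here is a quick numerical check of those two formulas (my own sketch, not from the thread), done entirely within one external reference frame in which the clocks move at a known speed:

```python
# Quick check (not from the thread): in a single frame where two clocks a
# distance d apart move at speed v along their separation line, the one-way
# light travel times are T = d/(c - v) and t = d/(c + v). Plugging those into
# the quoted formulas simply recovers v and d for that same frame.
c = 299_792_458.0          # m/s
v_true = 0.25 * c          # assumed speed of the clocks in this frame
d_true = 100.0             # assumed separation in this frame, metres

T = d_true / (c - v_true)  # light chasing the receding clock (greater time)
t = d_true / (c + v_true)  # light meeting the approaching clock (lesser time)

v = (c * T - c * t) / (T + t)
d = T * (c - v)

print(v / c)   # ~0.25  -> recovers v_true/c
print(d)       # ~100.0 -> recovers d_true
```

Both input times come from the same frame, and the formulas hand back that frame's own v and d, which is the point being made above: there is nothing absolute in the calculation itself.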
And, of course, there's your incorrect assumption that if we make the measurement in the frame of the ship then T and t will be different. There is no time dilation or length contraction involved. Correct. Time dilation and length contraction are never an issue if
harmonic radius. \section{Some background and notation} Throughout the paper, we will use the letter $C$ for a constant that may change from step to step in a computation. We denote by $\mathcal{M}(n,\kappa,\iota,V)$ the set of $n$-dimensional, closed Riemannian manifolds $(M,g)$ with the volume bounded above by $V$, Ricci curvature bounded below by $\kappa$, and injectivity radius bounded below by $\iota$. For simplicity, we assume that $M$ and $g$ are smooth, although this assumption can be weakened. We denote by $K(p,t;q)$ the heat kernel on $M$. By definition, the heat kernel satisfies for $p,q \in M$, $t > 0$, \[ \partial_t K(p,t;q) - \Delta K(p,t;q) = 0. \] Moreover, for every continuous function $f$ on $M$, \[ \lim_{t\downarrow 0} \int_M K(p,t;q) f(p) dp = f(q). \] The heat kernel on a manifold has the following representation \[ K(p,t;q) = \sum_{k=0}^\infty e^{- \lambda_k t } \phi_k(p) \phi_k(q), \] where $\phi_k$ are the eigenfunctions of the Laplace operator, $-\Delta \phi_k = \lambda_k \phi_k$, normalized by $|\phi_k|_2 = 1$. We define the truncated heat kernel $K_N$ by \begin{equation} \label{eq:HeatKernTrunc} K_N(p,t;q) := \sum_{k=0}^N e^{ - \lambda_k t } \phi_k(p) \phi_k(q). \end{equation} Finally, we denote by $\Gamma_E$ the standard heat kernel in $\R^n$. \[ \Gamma_E(x,t;y) := \frac{1}{(4 \pi t)^{n/2}} \exp\left[ - \frac{|x-y|^2}{4 t} \right]. \] The local dilatation of a map $f$ from a metric space $X$ to a metric space $Y$ at a point $p$ is defined as \begin{equation} \dil_p(f) := \lim_{r \to 0} \sup_{x,y \in B_r(p) } \frac{d(f(x),f(y))}{d(x,y)}. \end{equation} When $M$ is a smooth Riemannian manifold, that is embedded by a smooth map $f$ into a normed, finite-dimensional vector space $V$, the dilatation is given by \[ \dil_p(f) = |(d f)_p|, \] where the norm on the right hand side is interpreted as the operator norm of the map from $T_p(M)$ to $T_{f(p)}(f(M))$. \subsection{The harmonic radius} \label{se:HarRad} With a lower bound on the Ricci curvature and injectivity radius, there is a lower bound on the radius of balls on which there exist harmonic coordinates. This radius will determine the scale that will play an important role in the rest of the paper. Anderson and Cheeger proved the following theorem \cite{anderson_compactness_1992}. \begin{Theorem}[cf. {\cite{anderson_compactness_1992}}] For every $Q > 1$ and $0 < \alpha < 1$, there is a radius $r_h(n,\kappa,\iota,\alpha,Q)$ such that for every $(M,g) \in \mathcal{M}(n,\kappa,\iota,V)$, and any ball $B_r(p)$ on $M$ with $r \leq r_h$, there exist harmonic coordinates $u: B_r(p) \to \mathbb{R}^n$ such that the coefficients $g_{ij}$, given by, \begin{equation} g_{ij} = g\left( \frac{\partial}{\partial u^i}, \frac{\partial}{\partial u^j} \right), \end{equation} satisfy \begin{subequations} \label{eq:BoundsOng} \begin{align} Q^{-1} (\delta_{ij}) \leq (g_{ij}) &\leq Q (\delta_{ij}) \qquad \text{ as bilinear forms},\\ r_h^\alpha \| g_{ij} \|_{C^\alpha} &\leq Q - 1. \end{align} \end{subequations} \end{Theorem} In the appendix we will give a quantitative estimate on the harmonic radius $r_h$. \section{Some heat kernel estimates} \label{se:HeatKernelEstimates} We will now recall some properties of the decay of the heat kernel, following mostly the book by Grigor'yan \cite{grigoryan_heat_2012}. We will show how these estimates imply heat kernel decay in coordinates. 
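As a toy illustration of the spectral representation and of the truncated kernel $K_N$ defined in (\ref{eq:HeatKernTrunc}) (a numerical sketch added here, not part of the paper), one can compare $K_N$ with the exact heat kernel on the unit circle, where both the eigenfunctions and the periodization of the Euclidean kernel $\Gamma_E$ are explicit:

```python
# Toy illustration (not from the paper): the truncated heat kernel K_N on the
# unit circle, compared with the exact kernel written as a periodized Gaussian
# (method of images). On S^1 the eigenvalues are k^2 with cosine/sine modes.
import numpy as np

def K_truncated(theta, t, N):
    """Spectral sum over the constant mode and the first N nonzero frequencies."""
    k = np.arange(1, N + 1)
    return 1.0 / (2 * np.pi) + (1.0 / np.pi) * np.sum(
        np.exp(-k**2 * t) * np.cos(k * theta))

def K_exact(theta, t, images=20):
    """Periodized Euclidean kernel on the circle of circumference 2*pi."""
    m = np.arange(-images, images + 1)
    return np.sum(np.exp(-(theta + 2 * np.pi * m) ** 2 / (4 * t))) / np.sqrt(4 * np.pi * t)

theta, t = 0.7, 0.05
for N in (1, 5, 20, 50):
    err = abs(K_truncated(theta, t, N) - K_exact(theta, t))
    print(f"N = {N:3d}:  |K_N - K| = {err:.2e}")   # truncation error decays rapidly in N
```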
Moreover, we will see how the decay of the heat kernel implies growth of the eigenvalues of the Laplace operator on the manifold, with a lower bound expressed in the Ricci curvature, volume, injectivity radius, and the dimension. After combining this with elliptic estimates, we conclude that the heat kernel can be truncated. \subsection{Heat kernel decay} By the estimate on the harmonic radius in Section \ref{se:HarRad}, there is a radius $r_h = r_h(n,\kappa,\iota,\alpha=1/2,Q=\sqrt{2})$ such that for every open subset $U \subset M$ contained in a ball with radius less than $r_h$, the Faber-Krahn inequality holds for $\lambda_{\min}(U)$, the smallest Dirichlet eigenvalue of the Laplace operator on the domain $U$, \begin{equation} \lambda_{\min} (U) \geq a(n) |U|^{-2/n}, \end{equation} where for any measurable set $U$ on the manifold, $|U|$ denotes the standard volume measure of $U$. By \cite[Theorem 15.14]{grigoryan_heat_2012}, \begin{equation} \label{eq:DecHeatKernMan} K(p,t;q) \leq \frac{C(n) \left( 1+ \frac{d(p,q)^2}{t} \right)^{n/2}} {\left(a(n) \min(t,r_h^2)\right)^{n/2}}\exp\left[ - \frac{d(p,q)^2}{4t}\right], \end{equation} where $d(p,q)$ denotes the (geodesic) distance between $p$ and $q$. From interior parabolic Schauder estimates it also follows that for $t \leq 2 r_h^2$, \begin{equation} \label{eq:GradDecHeatKernMan} |\nabla K(p,t;q)| \leq \frac{D(n)} {t^{(n+1)/2}}\exp\left[ - \frac{d(p,q)^2}{8t}\right]. \end{equation} Indeed, for points $p$ and $q$, we can use parabolic interior Schauder estimates (cf. \cite[Ch. 4, Theorem 4]{friedman_partial_2008}) on a ball around $p$, to conclude (\ref{eq:GradDecHeatKernMan}) for $t \leq \min ( d(p,q)^2/2 , 2 r_h^2)$. For $t > d(p,q)^2/2$, we can use that the heat kernel is $C^1$-close to the Euclidean heat kernel, as explained in Section \ref{se:SchauderTheory}, to conclude that the bound (\ref{eq:GradDecHeatKernMan}) also holds on this scale. \subsection{Heat kernel decay in coordinates} \label{se:DecCoord} Let $p \in M$, let $Q \leq \sqrt{2}$ and $r < r_h(n,\kappa,\iota,\alpha = 1/2, Q)$, and let the coordinates $u:B_{r_h} \to \R^n$ be harmonic satisfying (\ref{eq:BoundsOng}) and $u(p) = 0$. Define the rescaled heat kernels \begin{equation} \tilde{K}(x,s;q) := r^n K\left( u^{-1}(x r), s r^2; q \right), \end{equation} and \begin{equation} \Gamma(x,s;y) := r^n K \left(u^{-1}(xr), s r^2; u^{-1}(y r) \right). \end{equation} It follows that in this case, there is a constant $C_d = C_d(n)$ such that for $s < 2 r_h^2/r^2$, \begin{equation} \label{eq:DecCoord} \Gamma(x,s;y) \leq \frac{C_d(n)}{ s^{n/2} } \exp\left[ - \frac{|x-y|^2}{8s}\right], \end{equation} and a constant $D_d= D_d(n)$ such that \begin{equation} \label{eq:GradDecCoord} \left| \nabla \Gamma(x,s;y) \right| \leq \frac{D_d(n)}{s^{(n+1)/2}} \exp\left[ - \frac{|x-y|^2}{8s} \right]. \end{equation} \subsection{Eigenvalue growth} We use (\ref{eq:DecHeatKernMan}) to bound the trace of the heat kernel as follows \begin{equation} \int_M K(p,t;p) dp \leq \mathrm{Vol}(M) \frac{C(n)}{(a(n) \min(t,r_h^2))^{n/2}}. \end{equation} It follows by \cite[Theorem 14.25]{grigoryan_heat_2012} that if \begin{equation} k \geq \frac{C(n) \mathrm{Vol}(M)}{a(n)^{n/2} r_h^n}e^{n/2}, \end{equation} then the following lower bound on $\lambda_k$ holds \begin{equation} \label{eq:EigValLowBound} \lambda_k(M) \geq \frac{n}{2e} \, a(n) \left( \frac{k}{C(n) \mathrm{Vol}(M)} \right)^{2/n}. 
\end{equation} \subsection{Bounds on the eigenfunctions and their derivatives} In the following lemma, we use elliptic estimates to get bounds on the supremum norm of the eigenfunctions and their gradients in terms of their $L^2$ norm. These bounds follow from local arguments, and while not optimal from a global perspective, they are good enough for our purposes (cf. \cite{donnelly_eigenfunctions_2006}). \begin{Lemma} \label{le:EigFuncBounds} There is a constant $C = C(n,\kappa,\iota)$ such that for all $(M,g) \in \mathcal{M}(n,\kappa,\iota,V)$ and eigenfunctions $\phi_k$ of the (negative of the) Laplace operator on $M$, with corresponding eigenvalues $\lambda_k$, it holds that for $k \geq k(n,\kappa,\iota,V)$, \begin{subequations} \label{eq:EigFuncBounds} \begin{align} \|\phi_k\|_\infty &\leq C \lambda_k^{n/4} \| \phi_k \|_2, \\ \|\nabla \phi_k\|_\infty &\leq C \lambda_k^{(n+2)/4} \| \phi_k \|_2. \end{align} \end{subequations} \end{Lemma} \begin{proof} Let $r_h=r_h(n,\kappa,\iota,\alpha=1/2,Q=2)$ be the harmonic radius. Let $p\in M$. Select harmonic coordinates $u:B_{r_h}(p)\to \R^n$ so that $u(p)=0$ and the metric coefficients $g^{ij}$ with respect to these coordinates satisfy (\ref{eq:BoundsOng}). The eigenfunctions $\phi_k$ satisfy $- \Delta \phi_k = \lambda_k \phi_k$ on the manifold. By the estimate (\ref{eq:EigValLowBound}) on the growth of the eigenvalues $\lambda_k$ we may now pick $k$ large enough, depending only on $n, \kappa, \iota$ and $V$, such that $\lambda_k \geq 1/r_h^2$. We introduce coordinates $x = u \sqrt{\lambda_k}$, and write down the equation for $\phi_k$ \[ g^{ij}(x / \sqrt{\lambda_k}) \partial_{x^i}\partial_{x^j} \phi_k = \phi_k, \qquad x \in u(B_{r_h}(p))/r. \] Note that $B_{\frac{1}{2}\sqrt{2}}(0) \subset u(B_r(p))$. Since the equation has bounded coefficients, if $|x| \leq 1/2$, \[ | \phi_k(x) | \leq C(n) \left( \int_{B_{\frac{1}{2}\sqrt{2}}(0)} |\phi_k(y)|^2 dy \right)^{1/2}. \] Consequently, by the elliptic Schauder estimates, for $|x| \leq 1/4$ also \[ | \nabla \phi_k (x)| \leq C(n) \left( \int_{B_{\frac{1}{2}\sqrt{2}}(0)} |\phi_k(y)|^2 dy \right)^{1/2}. \] This implies (\ref{eq:EigFuncBounds}). \end{proof} \subsection{Truncation of the heat kernel} Using the bounds on the eigenfunctions derived in the previous section, we can control the tail of the heat kernel. \begin{Lemma} \label{le:TruncHeatKernClose} Let $M \in \mathcal{M}(n,\kappa,\iota,V)$. Let $\epsilon > 0$ and $t_0>0$ be given. Then there exists $N_0 = N_0(n,\kappa,\iota,V,\epsilon,t_0)$, such that when $N \geq N_0$, for every $t_0 \leq t \leq 4$, \begin{align} \label{eq:TruncHeatKernClose} \| K_N(.,t;q) - K(.,t;q) \|_\infty &< \epsilon,\\ \label{eq:DerTruncHeatKernClose} \|\nabla K_N(.,t;q) - \nabla K(.,t;q) \|_\infty &< \epsilon. \end{align} \end{Lemma} \begin{proof} Consider the sum \[ K_{N_1}^{N_2}(p,t;q) := \sum_{k=N_1}^{N_2} e^{-\lambda_k t} \phi_k(p) \phi_k(q). \] By the bounds (\ref{eq:EigFuncBounds}) we find that for a constant $C = C(n,\kappa,\iota)$, and $N_1 \geq k(n,\kappa,\iota,V)$, \[ \left| \nabla K_{N_1}^{N_2}(p,t;q) \right| \leq \sum_{k=N_1}^{N_2} e^{-\lambda_k t} | \nabla \phi_k(p)||\phi_k(q)| \leq C \sum_{k=N_1}^{N_2} e^{-\lambda_k t} \lambda_k^{\frac{n+1}{2}}. \] Since the eigenvalues are bounded below as in (\ref{eq:EigValLowBound}), for $k\geq k_0(n,\kappa,\iota,V,t)$, \[ e^{-\lambda_k t} \lambda_k^{\frac{n+1}{2}} \leq e^{-\lambda_k t/2}. 
\] With (\ref{eq:EigValLowBound}), we know that with a constant $c=c(n,\kappa,\iota,V)$, \[ \sum_{k=N_1}^{N_2} e^{-\lambda_k t/2} \leq \sum_{k=N_1}^{N_2} e^{- c \, k^{2/n} t}, \] and consequently, there is an $N_0 = N_0(n,\kappa,\iota,V,\epsilon,t_0)$ such that if $N_1 \geq N_0$ then (\ref{eq:DerTruncHeatKernClose}) holds. A similar argument shows that (\ref{eq:TruncHeatKernClose}) holds as well. \end{proof} \section{Embedding with heat kernels} \label{se:HeatTriangulation} In this section we will prove that manifolds can be embedded with heat kernels. In subsections \ref{suse:maximum} and \ref{suse:euclidean} we will show how the local dilatation can be controlled in case of an embedding into $\R^N$ endowed with the maximum norm and Euclidean norm respectively. \subsection{Embedding with heat kernels in $\R^N$
in cold dollars and cents to management, if we could put some of these general principles of values, human relationships, really into practice." One speaks of "human relations" and one means the most inhuman relations, those between alienated automatons; one speaks of happiness and means the perfect routinization which has driven out the last doubt and all spontaneity. The alienated and profoundly unsatisfactory character of work results in two reactions: one, the ideal of complete laziness; the other a deep-seated, though often unconscious hostility toward work and everything and everybody connected with it. It is not difficult to recognize the widespread longing for the state of complete laziness and passivity. Our advertising appeals to it even more than to sex, There are, of course, many useful and labor saving gadgets. But this usefulness often serves only as a rationalization for the appeal to complete passivity and receptivity. A package of breakfast cereal is being advertised as "new--easier to eat." An electric toaster is advertised with these words: "... the most distinctly different toaster in the world! Everything is done for you with this new toaster. You need not even bother to lower the bread. Power-action, through a unique electric motor, gently takes the bread right out of your fingers!" How many courses in languages, or other subjects, are announced with the slogan" effortless learn- ins, no more of the old drudgery." Everybody knows the picture of the elderly couple in the advertisement of a life-insurance company, who have retired at the age of sixty, and spend their life in the complete bliss of having nothing to do except just travel. Radio and television exhibit another element of this yearning for laziness: the idea of "push-button power"; by pushing a button, or turning a knob on my machine, I have the power to produce music, speeches, ball games, and on the television set, to command events of the world to appear before my eyes. The pleasure of driving cars certainly rests partly upon this same satisfaction of the wish for push-button power. By the effortless pushing of a button, a powerful machine is set in motion; little skill and effort are needed to make the driver feel that he is the ruler of space. But there is far more serious and deep-seated reaction to the meaninglessness and boredom of work. It is a hostility toward work which is much less conscious than our craving for laziness and inactivity. Many a businessman feels himself the prisoner of his business and the commodities he sells; he has a feeling of fraudulency about his product and a secret contempt for it. He hates his customers, who force him to put up a show in order to sell. He hates his competitors because they are a threat; his employees as well as his superiors, because he is in a constant competitive fight with them. Most important of all, he hates himself, because he sees his life passing by, without making any sense beyond the momentary intoxication of success. Of course, this hate and contempt for others and for oneself, and for the very things one produces, is mainly unconscious, and only occasionally comes up to awareness in a fleeting thought, which is sufficiently disturbing to be set aside as quickly as possible. (from A Rhetorical Reader, Invention and Design, by Forrest D. Burt and E. 
Cleve Want) 参考译文——工人是创造者还是机器 工人是创造者还是机器 埃里克弗罗姆 人只要不剥削他人,就得靠劳动来求生存。不论其劳动方式是多么原始,多么简单,仅凭从事生产性劳动这一事实,就足以使人超出动物界。把人定义为"从事生产的动物"是很有道理的。但对于人来说,劳动不仅是必不可少的生存条件。劳动还使他从自然界中解放出来,成为一个不依附于自然界的社会的人。在劳动过程中,即在塑造和改造其自身以外的自然界的过程中,人也塑造和改造了他自己。人由征服自然、驾驭自然才最终达到超出自然的境界,并进而逐步增强了自己的协作能力、思维能力和审美能力。他将自己从自然界,从自己与自然结成的原始统一体中分离出来,同时又以主人翁和建设者的身份重新与自然相结合。人的劳动方式越进步,其个性特征也就发挥得越充分。在塑造和改造自然的过程中,人逐步学会了如何充分利用自己具有的各种能力,增进自己的技艺和创造性。无论是法国南部洞穴中的美丽绘画,原始人所用武器上的纹饰图案,希腊的雕像和神殿,还是中世纪的教堂建筑,能工巧匠制作的桌椅,乃至农民培育出来的花木五谷等等,无一不是人利用自己的思维能力与技艺创造性地改造大自然的具体例证。 在西方历史上,手工技艺,尤其是十三、十四世纪中发展起来的手工技艺构成了人类创造性劳动发展史上的一个顶峰。那时的劳动不仅是一项有现实价值的活动,而且是一项给人以巨大的满足的活动。有关手工技艺的主要特征,美国社会学家米尔斯曾作过清楚的说明。他说,"除了劳动者对于被制造的产品和制造产品的生产过程本身的兴趣之外,劳动并无其他的深层动机。日常工作的细枝末节之所以有意义,是因为在劳动者的心目中,它们与劳动的产品密不可分。劳动者不受任何约束地主宰自己的劳动行为。这样,工匠艺人便能通过劳动过程来学习劳动技艺,并且在劳动过程中应用和提高自己的劳动技艺。工作和娱乐、工作和文化活动融为一体。工匠艺人的谋生手段决定并影响着其生活方式。随着中世纪社会结构的瓦解和现代生产方式的出现,劳动的社会意义和作用发生了根本性的变化,这一变化在新教国家尤为显著。人们对于自己新近获得的自由感到害怕,而为了克服自己的疑惧,他就必须进行某种狂热的活动。这种活动的结果,或成或败,就决定着他的命运和灵魂的归宿,标志着他死后是将进天堂还是入地狱。 于是,劳动便成了一种义务,一种烦恼,而不再是一种能使人满足和愉快的活动。 靠劳动发财致富的可能性越大,劳动就越发变成了一种纯粹的升官发财的手段。 用马克斯·韦伯的话说,劳动已成为"内心世界禁欲主义"思想体系中的一个主要因素,解决人们内心的寂寞和孤独感的一种办法。 不过,这种意义的劳动也只是对于那些能够积累一些资本并雇用他人劳动的中、上层阶级而言才存在的,而对于那仅有劳动力可供出卖的绝大多数人来说,劳动只不过是一种强迫劳役。 十八、十九世纪的工人,若是不想饿死,便得一天劳动十六个小时。他这样做,并不是要以此侍奉上帝,也不是为以工作上的成功来证明他属于"上帝的选民"之列,而是因为他迫于无奈,不得不向那些拥有剥削手段的人出卖自己的劳动力。 现代史开初的几个世纪中,劳动的意义划分为两种:对于中产阶级来说是义务,而对于无产者来说则是强迫劳役。 视劳动为一项义务的宗教观念在十九世纪还十分流行,但最近几十年来,这种宗教观念正经历着重大的演变。 现代人不知道自己该做些什么,怎样才能有意义地度过自己的一生,只是为了逃避无所事事所造成的寂寞无聊,才被迫去参加劳动。 但劳动已不再被人们以十八、十九世纪的中产阶级的那种态度看作是一种道德和宗教上的义务。新的观念产生了。不断地提高生产,追求更大更、好地东西,这些本身已成了劳动的目的,成了新的理想。劳动与劳动者的关系开始异化了。 产业工人的情况又如何呢?他一天要花七八个小时把自己最旺盛的精力用于生产"某种东西"。他需要劳动以求生计,但他在劳动过程中只扮演一个被动的角色。他只在一个复杂的、组织程度很高的生产过程中起一点很小的、孤立的作用,从来没有机会接触到"他的"产品的全貌,至少不能以生产者的身分,而只能以消费者的身份接触到"他的"产品的全貌,即使这样也还需要有一个前提条件,那就是,他得有足够的钱从商店里购买"他所生产出的,,产品。无论对生产出来的完全的成品本身还是其更深远的经济意义和社会意义,他都不用关心。他被安置在一个固定的岗位上,去完成一定的工作任务,而对生产的组织与管理则概不参与。对于为什么要生产这一种产品而不生产另一种产品,该种产品与整个社会需求之间的关系如何,他是既不知晓,也无兴趣知晓。鞋子、汽车、电灯泡等等都是由"工厂"用机器制造出来的。工人只是机器的一个组成部分,不是作为主动操纵者而成为机器的主人。机器不是在为他服务,替他去干过去要完全依靠体力去完成的工作,而是反过来成了他的主人。不是机器替代人力,而是人成了机器的替代物。人的工作被解释为执行目前尚不能由机器完成的动作。 在工业心理学方面的大多数调查都是关于如何使工人的生产率得以提高,如何能使他少带一些抵触情绪去工作。心理学已用来服务于"人类工程",即试图把工人和雇员当作机器来对待,认为他们也像机器一样,只要加好油,就能运转得好一些。泰勒主要关心的是如何在工业生产上更好地组织使用工人的体力,而大多数工业心理学家关心的主要是如何左右工人的心灵。可以这样来表达其基本思想:如果他高兴就能工作得好一些的话,那么就让我们使他高兴、安心、满意或别的什么的,只要这样能提高他的产量,减少抵触情绪就行。在"人际关系"的名义下,他们用对一个完全冷漠的人的一切手段去对待一个工人;就是幸福和人们的价值观也是从与公众建立更好的关系这个角度提出来的。例如,据《时代》周刊报导,美国一位最著名的精神病学家对一批1500名超级市场经理人员说:"如果我们是高高兴兴的,我们的顾客就会感到更满意…如果我们真的能把某些有关价值观和人际关系的总的原则付诸实践,那么对资方来说,换来的将是实实在在的金钱。"他们讲的是"人际关系",指的却是最最非人的关系,冷漠的机器人之间的关系。他们讲的是幸福,指的却是完全机械的重复活动,这种活动使人完全失去了独立的思考和任何的主动性。 劳动的这种冷漠无情,丝毫不能令人满足的性质势必引起两种结果:其一,使人们产生十足的懒惰思想;其二,使人们对劳动及与之有关的一切人和事产生一种根深蒂固的(尽管往往是潜意识的)敌对心理。 不难看出,向往极端的懒散和消极怠工是人们的普遍心理状态。我们的广告对这一点的渲染甚至比对性的渲染更有过之而无不及。当然,确实有许多实用而省力的小玩意,但这种实用性往往只起着使追求十足的消极懒散和坐享其成成为合理化的作用。一包早餐食品在广告中被宣传为"新产品——食用更方便。"一种电烤箱所用的广告词竟是这样的:"……最新烤箱,设计独特,举世无匹!有了这种烤箱,一切工作都会自动完成,连放面包也无需您亲自动手,只要一通电,通过一种功能独特的电动机的电力作用就能将面包从您手上轻轻取下!"有多少语言或其他科目的教科书用着这样的宣传口号:"学习起来真轻松,完全不必下苦功。"有一家人寿保险公司还作了这么一个家喻户晓的广告画:画上是一对六十岁退休的年老夫妇除旅行度假外,长年无所事事,优哉游哉享清福的欢乐景象。 广播和电视反映着这种追求懒散思想的另一方面,即"键钮万能"的思想。 只需按一下按键,或拧一下旋钮,就可以播放出音乐、讲话、球赛实况,或是在电视机上将世界大事收之于眼前。 驾驶汽车使人感到愉快,其部分的原因就是由于键钮万能的理想的实现所带来的满足感。 只需轻轻一按按钮,便能发动一台大功率的发动机,驾车人无需掌握什么技艺,付出任何努力,便能体会到当空间的主宰的滋味。 然而,劳动变得毫无意义而且令人厌烦之后所带来的另一种结果却还要严重而根深蒂固得多。 这就是对劳动的敌对心理。这种心理远不如追求懒散无为那样容易被人们意识到。 许多商人觉得自己变成了自己所经营的企业及其所出售的商品的俘虏;他感觉到自己所售商品有骗人的味道,并从内心里蔑视它们。 他憎恨顾客,因为是他们迫使他弄虚作假来促销商品; 他憎恨竞争对手,因为他们对他构成威胁;他憎恨自己的雇员和上司,因为他与他们永远处于一种互相倾轧的明争暗斗状况。 但他最为痛恨的还是他自己,因为他眼见着自己的有生之年,除了赢利而带来一时陶醉之外,都在毫无意义地白白流逝。 当然,这种对他人、对自己以及对自己的产品所怀有的憎恨和轻蔑,多半是无意识的, 只是偶尔上升到意识中来,但也因憎怒过甚而一闪而过。 (摘自福里斯特D伯特和E克利夫万特《修辞读物发明与设计》) Key Words: 
molding n. 铸造;装饰用的嵌线;模塑 spontaneity    [.spɔntə'ni:iti] n. 自然性,自生,自发 longing   ['lɔŋiŋ] n. 渴望,憧憬 adj. 渴望的 conscious      ['kɔnʃəs] 参考资料: 高级英语第二册(MP3+中英字幕) 第8课:工人是创造者还是机器(1)_品牌英语听力 - 可可英语高级英语第二册(MP3+中英字幕) 第8课:工人是创造者还是机器(2)_品牌英语听力 - 可可英语高级英语第二册(MP3+中英字幕) 第8课:工人是创造者还是机器(3)_品牌英语听力 - 可可英语高级英语第二册(MP3+中英字幕) 第8课:工人是创造者还是机器(4)_品牌英语听力 - 可可英语http://www.kekenet.com/Article/201510/40367shtml高级英语第二册(MP3+中英字幕) 第8课:工人是创造者还是机器(7)_品牌英语听力 - 可可英语高级英语第二册(MP3+中英字幕) 第8课:工人是创造者还是机器(8)_品牌英语听力 - 可可英语 展开全文 hpdlzu80100 2021-12-06 11:55:12 • Ulterior motive confirmations(别有用心的确认——汗,翻译成这样好像不太好吧)    四、二次确认页的替代方案:  如果不喜欢二次确认页,那么有别的方法可以取代吗?  1. 防止出错——设置... 什么是二次确认页? 英文定义:A confirmation is a modal dialog box that asks if the user wants to proceed with an action. 翻译成汉语大概就是:一个确认页是一种询问用户是否想继续执行某个动作的对话框。 二次确认页面的特点: 直接出现在用户刚刚发起的某个操作之后。 询问并确认用户是否想要继续之前的操作。 一般会包含一个简单的问题和两到三个操作。 二、什么时候用到二次确认? 二次确认的好处是: 1、减少误点击 2、避免动作中断时的损失(保存确认) 3、使操作更加慎重 4、安全性(有的二次确认还需要用户输入密码) 缺点是: 1、干扰了正常的操作流程,不恰当的多余的二次确认面还会让客户心生厌烦。 2、在一些鼓励的流程中,二次确认页还会形成巨大的漏斗效应,直接造成客户流失。 因此,有以下三个原则:能不用就不用;必要时才用;用了就让人明白。 那么,什么时候用二次确认呢? 1. 保存确认(Save Confirmation) 例:填写表单中途离开,邮件写了一半关闭浏览器,文档未保存状态下点关闭。 确认的目的:避免误操作或损失。 2. 删除确认(delete confirmation) 例:开心网账户的删除(不能恢复),删除好友或文件等。 注意:并不是所有的删除都需要确认,例外情况有如:频繁的操作(如删除邮件),不重要的删除或者恢复成本较低。 3. 其他重要且后果不可逆的操作 例:淘宝的确认收货并同意放款,百度有啊的撤销退款协议。 确认的目的:告知后果使操作谨慎,避免误操作。 4. 重要且不推荐的操作 确认的目的:通过确认让用户更改选择。 最典型的例子莫过于淘宝的“评价确认”: 原文链接:http://www.nnwb.com.cn/homepage-design/2009513243.htm 二次确认页的特征既然是存在两个以上的操作选择,所以当只存在一种选择的时候,无论页面长得再怎么像确认页,也不是。 例如以下的页面: 大家都很关心的问题:到底该如何判断要不要确认页呢? 建议如下: 1. 若不存在两个以上的动作选择——不要使用确认页,可以是成功提示,或者错误提醒,设计成不需要用户操作的样子。 2. 若存在两个以上的选择,但是90%的用户都会选择默认的选项——二次确认也是可以考虑去除的。可以加注一些提示来避免那5%的用户出现损失,但是不要用一个确认页去干扰这90%的用户。 3. 考虑重要性和恢复成本: 重要但是恢复成本低的,和不重要但是恢复成本高的,不建议使用二次确认,提供撤销操作更人性化。 重要且恢复成本高的,最好二次确认,避免损失和误操作。 4.是否是频繁使用的操作:一般情况下,若频繁使用,重要性就不是特别高,而且频繁出现的确认页会让人抓狂的,这种情况下,最好不用二次确认页。而且确认页有很多替代的形式,能够达到同样的目的但是更加亲和。 三、二次确认页的形式: 从设计角度划分: 1. 系统弹出框 2. lightbox(浮出层) 解释:Lightbox的效果类似于WinXP操作系统的注销/关机对话框,除去屏幕中心位置的对话框,其他的区域都以淡出的效果逐渐变为银灰色以增加对比度,此时除了对话框内的表单控件,没有其他区域可以点击。 3. 邮箱验证及手机验证码等替代形式 比如,要删除开心网帐号时,开心网会发一封邮件,点击邮箱里的链接来确认一定要删除。这种形式适用于比较重要的不可恢复的场合。 手机验证码确认的形式一般和资金相关,也用于比较重要的操作。 从内容和功能角度划分 (这段资料来自于《windows vista UX guide》,为避免偶英文翻译有误,保留原文名称) 1.Routine confirmations(常规确认) Confirm that the user wants to proceed with a routine, low risk action. 确认用户想要继续一个常规的,低风险的操作。 如图: 2.Risky action confirmations(风险操作确认) Confirm that the user wants to proceed with an action that has some risk and can’t be easily undone. 确认用户想要继续一个有风险并且不容易撤销的操作。 图: 3.Unintended consequence confirmations(未预期的确认) Confirm that the user wants to proceed with an action that has unexpected or unintended consequences. 确认用户想要继续一个可能会导致意料外的结果的动作。 很多时候,确认页是建立在用户有明确的操作意向的时候,这种情况下,也许用户对后果是有预期判断的:删除就意味着后果就是删除。而若删除命令同时会导致别的意料之外的结果产生,那就是unintended consequence confirmation。 典型的例子:在多标签浏览器环境中,关闭浏览器,一般就会弹出一个确认框。 图: 4.Clarifications(澄清式确认、探询式确认) Clarify how the user wants to proceed with an action that has potentially ambiguous or unexpected consequences. 搞清楚用户想如何继续一个行为,而这个行为可能会导致预期外结果。 就好像在岔路口,导游说:好,我们继续走吧。你可能想反问一下:怎么走呢?向左还是向右?因为你担心右边可能会有打劫的埋伏。 UX guide建议除非确实认为这个行为可能会出现的多种结果中,不然就不需要这种澄清式的确认。 5. Security confirmations(安全确认) Confirm that the user wants to proceed with an action with security consequences. 确认用户想继续执行一个会出现安全问题的动作。 这个大家很常见了吧: 6. Ulterior motive confirmations(别有用心的确认——汗,翻译成这样好像不太好吧) 四、二次确认页的替代方案: 如果不喜欢二次确认页,那么有别的方法可以取代吗? 1. 防止出错——设置任务,用户在进行破坏性的操作前有前置任务需要完成。 比如,在我们最近的一个项目中,用户在点击某个button时,那个命令是需要被确认的,否则一旦误点击会造成不可恢复的后果。但是在点击下这个button后,用户是需要填写一个表单的。在提交表单时,我们就发现没有必要再用一个二次确认。因为用户在填写表单的过程中是可以思考和反悔的,他既然愿意花时间和精力去填写表单,证明他确实想明白了。 2. 
提供撤销操作(Undo)——gmail的undo 图: 3. 提供反馈,让不期望的结果显著化。 4. 消除选择——往往需要被确认的是因为有两个或多个response(后续动作),可以认真想一下,是否一定有多个选择,如果仅仅剩下唯一一个了,那么就不需要询问了。 如果需要被confirm的选项不是很重要,干脆拿掉它。我特讨厌有些网站给你一个长长的表单,下面有两个button,一个提交一个清空。往往会不小心点击了清空,结果刚才忙活了半天的东西都没有了。要避免这种情况,当然你可以在我点击清空时给我一个确认,不过我更加期望把这个button拿掉。 五、设计形式的选择: 自从有了浮出层,越来越多的web 2.0的网站抛弃了系统弹出框。开始使用lightbox(浮出层),当然,他们各有优劣,不能一概而论。 比如以下这种情况: 系统弹出层可以允许我挪开确认窗口以阅读“需要被确认的内容”。 而如果使用浮出层,会出现这样的效果: 挪都挪不开,怎么确认?当然你可以把需要被确认的内容放到浮出层上,前提是有足够的信息承受量。 做了一张浮出层与二次确认页 两者的优劣点表,供参考: 六、二次确认页注意事项: 1、时机——确认是必要的时机 2、形式——是不是采取了合适的形式(有哪些形式?),注意不要用二次确认页鱼目混珠,有很多网站把成功页面做成像二次确认页一样,居然还有个感叹号来警示用户“操作已经成功”…… 3、文案 4、icon 5、出错控制——:二次确认页应该给出建议性的下一步操作,默认的动作。 6、结构——这个页面不需要太多创新,最保险的方式就是照顾用户已有的习惯,用主流的结构去呈现。 文案太重要了: 1. button的文案——需要让用户思考。 很多时候我们发现一个页面很莫名其妙,很不容易理解,仔细看看,原来是文案没有传达清楚。 如果二次确认页面也出现含糊不清,模棱两可的文案,那是最糟糕的事情。 大家看得明白下面三个二次确认页的区别吗?——资料来自《windows vista UX guide》 三者的区别在于button引导文案,先使用官方资料: 第一个二次确认页面:windows认为是不合理的二次确认页,因为它起不到该起的作用,因为用户本身就是通过点击“uninstall”操作看到这个页面,当他看到button上的文案还是“uninstall”的时候,他几乎不会去阅读二次确认的问题和描述,直接就会点击“uninstall”。而windows认为二次确认页至少是需要用户思考一下再做操作的(不然还真的没必要)。——Do make me think。 第二个二次确认页面:windows认为是合适的,使用yes和no作为button的文案,用户在点击前,至少会思考一下yes和no分别对应的后果,因此他会去看描述。 第三个二次确认页面:windows认为也是靠谱的。一个简单的anyway作用很大……体会一下。 Yes/No和OK/Cancel的button文案搭配大家似乎在英文站点上司空见惯了。好像是可以相互替代的是吗? 现实生活中,某个人负责写二次确认页面文案,但是button上显示的文案有时却得走"规范",统一使用YES或者OK(比如),至于点击了button到什么页面是由设计师和工程师决定的。就会导致以上矛盾的情况:button和文案牛头不对马嘴,点击后却又是另外的情况…… 2. 页面的文案——足够的信息讲明白后果。 你会经常被这种页面搞得很苦恼,你确定吗?你真的确定吗?你考验我的智力还是判断力还是耐力? ICON可不能乱用 icon很美观,似乎很多设计师总是想用一个icon点缀一下二次确认页。即使不是二次确认页(向左侧的这个可怜的成功页面,却被用了警示的icon,实在匪夷所思) 展开全文 gufanyue 2014-11-21 16:27:49 • placed on symlinks limited their usefulness, there *was* a reasoned engineering analysis --- it wasn't one guy with an ulterior motive trying to avoid a bad review score. In fact, that practically ... 原文链接:http://blog.zorinaq.com/?e=74 "I Contribute to the Windows Kernel. We Are Slower Than Other Operating Systems. Here Is Why." I was explaining on Hacker News why Windows fell behind Linux in terms of operating system kernel performance and innovation. And out of nowhere an anonymous Microsoft developer who contributes to the Windows NT kernel wrote a fantastic and honest response acknowledging this problem and explaining its cause. His post has been deleted! Why the censorship? I am reposting it here. This is too insightful to be lost. [Edit: The anonymous poster himself deleted his post as he thought it was too cruel and did not help make his point, which is about the social dynamics of spontaneous contribution. However he let me know he does not mind the repost at the condition I redact the SHA1 hash info, which I did.][Edit: A second statement, apologetic, has been made by the anonymous person. See update at the bottom.] """ I'm a developer in Windows and contribute to the NT kernel. (Proof: the SHA1 hash of revision #102 of [Edit: filename redacted] is [Edit: hash redacted].) I'm posting through Tor for obvious reasons. Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening. The cause of the problem is social. There's almost none of the improvement for its own sake, for the sake of glory, that you see in the Linux world. Granted, occasionally one sees naive people try to make things better. These people almost always fail. We can and do improve performance for specific scenarios that people with the ability to allocate resources believe impact business goals, but this work is Sisyphean. There's no formal or informal program of systemic performance improvement. 
We started caring about security because pre-SP3 Windows XP was an existential threat to the business. Our low performance is not an existential threat to the business. See, component owners are generally openly hostile to outside patches: if you're a dev, accepting an outside patch makes your lead angry (due to the need to maintain this patch and to justify in in shiproom the unplanned design change), makes test angry (because test is on the hook for making sure the change doesn't break anything, and you just made work for them), and PM is angry (due to the schedule implications of code churn). There's just no incentive to accept changes from outside your own team. You can always find a reason to say "no", and you have very little incentive to say "yes". There's also little incentive to create changes in the first place. On linux-kernel, if you improve the performance of directory traversal by a consistent 5%, you're praised and thanked. Here, if you do that and you're not on the object manager team, then even if you do get your code past the Ob owners and into the tree, your own management doesn't care. Yes, making a massive improvement will get you noticed by senior people and could be a boon for your career, but the improvement has to be very large to attract that kind of attention. Incremental improvements just annoy people and are, at best, neutral for your career. If you're unlucky and you tell your
over $Y$. \end{Defi} Note that by \cite[3.3.3.2]{lurie-htt}, the $\infty$-category $\on{D}_{\mathrm{qcoh}}(Y)$ is the full subcategory of $\on{D}(Y)$ spanned by Cartesian sections. Informally, an object in $M \in \on{D}(Y)$ is the data of $A$-dg-modules $M_{A,\phi}$ for any map $\phi \colon \Spec A \to Y$, together with coherence maps $\zeta_{f} \colon M_{A,\phi} \otimes_A^L B \to M_{B,\phi \circ f}$ for any map $f \colon \Spec B \to \Spec A$ and higher coherence data. The module $M$ is then quasi-coherent if and only if all the maps $\zeta_f$ are quasi-isomorphisms. The category $\on{D}(Y)$ admits internal homs that we will denote by $R\Hom_{\Oc_Y}$. \paragraph{C. Geometric objects and tangent complexes. } For the definition of geometric derived stacks (or, what is the same, derived Artin stacks) we refer to \cite{toen-vezzosi}. This class includes, first all derived schemes, that is, derived stacks that are Zariski locally equivalent to derived affine schemes. Following \cite{lurie-dagv}, one can represent derived schemes in terms of "homotopically" ring spaces. Namely, a derived scheme $X$ is a topological space together with a sheaf (up to homotopy) of $\ZZ_{\leq 0}$-graded cdga's $\Oc_X$ such that $(X,H^0(\Oc_X))$ is a scheme. In fact, a derived Artin stack is a derived stack that can be obtained from derived affine schemes by a finite number of smooth quotients. \vskip .2cm The {\em cotangent complex} $\LL_Y$ of a derived stack $Y$ is an object of $\mathrm{D}_\mathrm{qcoh}(Y)$ defined (when it exists) by the universal property \[ \Map_{\mathrm{D}_\mathrm{qcoh}(Y)} (\LL_Y, M) \,\,\simeq \,\, \Map_{{}^{Y/}\St} (Y[M], Y), \quad M\in \mathrm{D}_\mathrm{qcoh}^{\leq 0}(Y). \] Here ${}^{Y/} \St$ is the comma category of derived stacks under $Y$. The object $\LL_Y$ is known to exist \cite{toen-vezzosi} when $Y$ is geometric (no smoothness assumption). The {\em tangent complex} $\TT_Y$ is defined as the dual \[ \TT_Y \,\,=\,\, R\Hom_{\Oc_Y}(\LL_Y, \Oc_Y) \in \on{D}(Y). \] If $Y$ is locally of finite presentation \cite{toen-vezzosi}, then $\LL_Y$ is a perfect complex and hence so is $\TT_Y$. In particular $\TT_Y$ is an object of $\mathrm{D}_\mathrm{qcoh}(Y)$. For a $\k$-point $i_y: y\hookrightarrow Y$ we will write \[ \TT_{Y,y} = Li_y^*( \TT_Y) = R\Hom_{\Oc_Y}(\LL_Y, \Oc_y) \] for the tangent complex of $Y$ at $y$. This is a complex of $\k$-vector spaces. \paragraph{D. Derived intersection:} Given a diagram $X\to Z\leftarrow Y$, we have the {\em derived (or homotopy) fiber product} $X\times_Z^h Y$. If $X,Y,Z$ are affine, so our diagram is represented by a diagram $A\leftarrow C\to B$ in $\cdga$, then \[ X\times_Z^h Y \,\,=\,\,\Spec\bigl( A\otimes_C^L B\bigr). \] We will be particularly interested in the following situation. Let $f: X \to Y$ be a morphism of derived stacks, and $y \in Y$ be a $\k$-point. Then we have the derived stack (a derived (affine) scheme, if both $X$ and $Y$ are derived (affine) schemes) \[ Rf^{-1}(y) \,\,=\,\, X\times^h_Y \{y\}. \] It will be called the {\em derived preimage of } $y$. It is the analog of the homotopy fiber of a map between spaces in topology. \subsection{The Kodaira--Spencer homomorphism} \paragraph{A. Group objects and actions.} By a {\em group stack} we will mean a stack $G$ together with simplicial stack $G_\bullet$ such that $G_0\simeq \Spec \k$, $G_1\simeq G$ and which satisfies the {\em Kan condition}: the morphisms corresponding to the inclusions of horns are equivalences. 
Intuitively, $G_\bullet$ is the nerve of the group structure on $G$, see \cite[ \S 4.2.2]{lurie-halg} for more details. \vskip .2cm Similarly, an {\em action} of a group stack $G$ (given by $G_\bullet$) on a stack $Y$ is a simplicial stack $Y_\bullet$ together with a morphism $q: Y_\bullet\to G_\bullet$ with an identification $Y_0\simeq Y$ such that, for any $m$, the morphism \[ (q_m, \del_{\{m\}\hookrightarrow \{0,1,\dots, m\}}): Y_m\lra G_m\times Y_0 \] is an equivalence. In this case $Y_\bullet$ satisfies the Kan condition. Intuitively, $Y_\bullet$ is the nerve of the ``action groupoid". The ``realization" of $Y_\bullet$, i.e., the derived stack associated to the prestack $A\mapsto |Y_\bullet(A)|$, is the quotient derived stack $[Y/G]$. In particular, we have the stack $BG= [*/G]$, the {\em classifying stack} of $G$. \begin{exas} (a) Let $Y$ be a derived stack and $y\in Y$ be a $\k$-point. The {\em pointed loop stack} \[ \Omega_y Y = \{y\} \times^\mathrm{h}_Y \{y\}: A \mapsto \Omega(Y(A), y) \] is a group stack. The corresponding simplicial stack $(\ul\Omega_yY)_\bullet$ is the (homotopy) nerve of the morphism $\{y\} \to Y$, i.e., \[ (\ul \Omega_yY)_m \,\,=\,\, \{y\} \times^\mathrm{h}_Y \{y\} \times^\mathrm{h}_Y\cdots \times^\mathrm{h}_Y \{y\} \,\,\simeq \,\, (\Omega_yY)^m \] ($(m+1)$-fold product). \vskip .2cm (b) Let $Y$ be any derived stack. Its {\em automorphism stack} is the group stack \[ \RAut(Y): A\mapsto \Map^\mathrm{eq}_{\St/\Spec A}(Y \times \Spec A, Y \times \Spec A) \] Here the superscript ``eq" means the union of connected components of the mapping space formed by vertices which are equivalences. Alternatively, we can describe it as the functor \[ A\mapsto \Omega(\St/\Spec A, Y\times \Spec A), \] the based loop space of the nerve of the category of derived stacks over $\Spec A$ with the base point being the object $Y\times \Spec A$. By construction, we have an action of $\RAut(Y)$ on $Y$; an action of a group stack $G$ on $Y$ gives a morphism of group stacks $G\to\RAut(Y)$. \end{exas} \begin{prop}\label{prop:actionofloops} Let $f \colon X \to Y$ be a map of derived stacks and $y \in Y$ be a $\k$-point. Then the group stack $\Omega_y Y$ has a natural action on the derived preimage $Rf^{-1}(y)$. \end{prop} \noindent{\sl Proof:} We define the simplicial stack $\underline Rf^{-1}(y)_\bullet$ as the nerve of the morphism $Rf^{-1}(y)\to Y$, i.e., \[ \begin{gathered} \underline Rf^{-1}(y)_m \,\,=\,\ Rf^{-1}(y) \times^\mathrm{h}_X Rf^{-1}(y) \times^\mathrm{h}_X\cdots \times^\mathrm{h}_X Rf^{-1}(y) \,\,\simeq \,\, \\ \simeq \,\, \{y\} \times^\mathrm{h}_Y \{y\} \times^\mathrm{h}_Y\cdots \times^\mathrm{h}_Y X \,\,\simeq\,\, (\Omega_yY)^m\times Rf^{-1}(y). \end{gathered} \] All the required data and properties come from contemplating the commutative diagram \[ \xymatrix{ Rf^{-1}(y) \ar[r] \ar[d] & X \ar[d] \\ \relax \{y\} \ar[r] & Y. } \] \qed \begin{ex}[(Eilenberg-MacLane stacks)] Let $\Pi$ be a commutative algebraic group (in our applications $\Pi=\GG_m$). For each $r\geq 0$ we then have group stack $\EM(\Pi,n)$, known as the {\em Eilenberg-MacLane stack}. It is defined in the standard way using the Eilenberg-MacLane spaces for abelian groups $\Pi(A)$ for commutative $\k$-algebras $A$. Thus $\EM(\Pi,0) = \Pi$ as a group stack, i.e., the corresponding simplicial stack $\EM(\Pi,0)_\bullet = \Pi_\bullet$ is the simplicial classifying space of $\Pi$. Similarly (the underlying stack of the group stack) $\EM(\Pi, 1)$ is identified with $B\Pi$. 
In general, if we denote $\EM(\Pi, n)_\bullet$ the simplicial stack describing the group structure on $\EM(\Pi, n)$, then $|\EM(\Pi,n)|= \EM(\Pi, n+1)$. \end{ex} \begin{Defi} Let $G$ be a group stack and $\Pi$ a commutative algebraic group. A {\em central extension} of $G$ by $\Pi$ is a morphism of group stacks $\phi: G\to B\Pi$ or, what is the same, a morphism of stacks $BG\to\EM(\Pi, 2)$. \end{Defi} A central extension $\phi$ gives, in a standard way, a fiber and cofiber sequence of group stacks \[ 1\to \Pi\lra\wt G\lra G\to 1, \] where $\wt G$ is the fiber of $\phi$. \paragraph{B. Formal moduli problems. } We recall Lurie's work \cite{lurie-dagx} on formal moduli problems which serve as infinitesimal analogs of derived stacks. \begin{Defi} A cdga $A\in \cdga$ is called Artinian, if: \begin{itemize} \item The cohomology of $A$ is finite dimensional (over $\k$); \item The ring $H^0 A$ is local and the unit induces an isomorphism between $\k$ and the residue field of $H^0 A$. \end{itemize} \end{Defi} In particular, any Artinian cdga admits a canonical augmentation (the unique point of $\Spec A$). Artinian cdga's form an $\infty$-category which we will denote by $\mathbf{dgArt}_\k$. \begin{Defi}\label{def:fmp} A formal moduli problem is a functor (of $\infty$-categories) \[ F \colon \mathbf{dgArt}_\k \to \sSet \] such that: \begin{itemize} \item[(1)] $F(\k) \simeq *$ is contractible. \item[(2)] (Schlessinger condition): For any diagram $A \to B \leftarrow C$ in $\mathbf{dgArt}_\k$ with both maps surjective on $H^0$, the canonical map $F(A \times^\mathrm{h}_B C) \to F(A) \times^\mathrm{h}_{F(B)} F(C)$ is an equivalence. \end{itemize} \end{Defi} \noindent We denote by $\Fun_*(\dgart, \sSet)$ the ($\infty$-)category of functors from $\dgart$ to simplicial sets satisfying the condition (1) of Definition \ref{def:fmp}, and by $\FMP$ the full subcategory of formal moduli problems. General criteria
factor of 10. This decrease can be either due to a general decrease in luminosity or due to a shift of the emission out of the ROSAT energy range. The non-detection of RXJ0943.0+4701 in the HRI observation cannot be caused by the fact that the HRI is slightly less sensitive in the soft band compared with the PSPC (ROSAT User's Handbook http://www.xray.mpe.\-mpg.de/rosat/doc/ruh), because the hardness ratio at the position of RXJ0943.0+4701 (hard-soft)/(hard+soft)\-$=-0.08$, i.e. there are about the same number of counts in the soft and in the hard band. \begin{figure*} \begin{tabular}{ll} \label{fig-comb} \psfig{figure=7485_f6a.ps,width=7cm,clip=} & \psfig{figure=7485_f6b.ps,width=7cm,clip=} \\ \psfig{figure=7485_f6c.ps,width=7cm,clip=} & \psfig{figure=7485_f6d.ps,width=7cm,clip=} \\ \psfig{figure=7485_f6e.ps,width=7cm,clip=} & \\ \end{tabular} \caption[]{ Morphology of CL 0939+4713 as seen in an HRI image (a) and in a PSPC image (b) taken 5 years earlier. The HRI image is smoothed with a Gaussian of $\sigma=14$arcsec so that the resolution is about equivalent to the PSPC observation. b) shows the PSPC image in broad band which is the corresponding band to the HRI image. The PSPC images are smoothed with a Gaussian of $\sigma=10$arcsec. In each of the panels the positions of the maxima of the PSPC broad band image are marked with crosses. The comparison shows that the northern maximum of the PSPC observation (RXJ0943.0+4701) has disappeared in the HRI observation. The relative brightness of the maxima M1 and M2 seems to have changed, but this difference is within the expected statistical fluctuations. c) PSPC soft (0.1-0.5 keV) and d) PSPC hard (0.5-2.0 keV) band images are shown to demonstrate that RXJ0943.0+4701 is softer than the rest of the cluster emission. The positions of the northern X-ray maximum are not exactly coincident in the soft and the hard band. e) Subtracted image: HRI (a) -- PSPC (b) image. The most obvious feature is the black minimum at the position of RXJ0943.0+4701. The minimum is caused by the missing RXJ0943.0+4701 emission in the HRI image. Also the small intensity variations of M1 and M2 are visible (M1 slightly fainter, M2 slightly brighter in the HRI image). The emission of the quasar at redshift two is also visible as a faint white region left of M2, but this excess emission in the HRI image is not significant. The images all have a size of 5 arcmin at a side. } \end{figure*} In the ROSAT All-Sky Survey (Voges et al. 1996) CL 0939+4713 was observed in November 1990 for two days. Unfortunately, the exposure time is too low to see any morphological details. The Survey countrate of the whole cluster region is within the errors in agreement with both the PSPC and the (converted) HRI count rate. A lightcurve of the Survey observation shows no indication of variability (Boller \& Voges, private communication). Unfortunately, RXJ0943.0+4701 cannot be resolved with ASCA, i.e. one cannot distinguish the cluster emission from emission coming from RXJ0943.0\-+4701. Therefore, we have no information about the flux of RXJ0943.0\-+4701 at the time of the ASCA observation. For examining the short-term variability of RXJ0943.0\-+4701 we try to derive a lightcurve from the PSPC observation. The source was observed in eight intervals within 72 hours. The intervals have exposure times between 1300s and 2400s. Figure 7 shows this attempt of a lightcurve. It is consistent with a constant emission over 3 days. 
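For reference, the consistency check behind such a statement amounts to fitting a constant (the error-weighted mean) to the binned count rates and evaluating the $\chi^2$ of that fit. The Python sketch below illustrates this; the count rates and errors are placeholders and are not the measured PSPC values for RXJ0943.0+4701.

\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical count rates (cts/s) and 1-sigma errors for eight observing
# intervals; placeholders only, not the actual data of Fig. 7.
rate = np.array([0.012, 0.015, 0.011, 0.014, 0.013, 0.012, 0.016, 0.013])
err  = np.array([0.003, 0.003, 0.004, 0.003, 0.003, 0.004, 0.003, 0.003])

w = 1.0 / err**2
mean = np.sum(w * rate) / np.sum(w)          # best-fitting constant

chi2 = np.sum(((rate - mean) / err)**2)      # goodness of the constant model
dof = rate.size - 1
p = stats.chi2.sf(chi2, dof)                 # null-hypothesis probability
print(f"mean = {mean:.4f} cts/s, chi2/dof = {chi2:.2f}/{dof}, P(>chi2) = {p:.2f}")
\end{verbatim}

A probability well above the usual thresholds means the lightcurve is statistically consistent with constant emission over the observing interval.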
A spectral fit to the RXJ0943.0+4701 region alone is very difficult because of the small number of photons. In an attempt to get a least a rough idea we fit the spectrum using two different kinds of background: 1) only detector background and 2) detector background plus cluster emission. A fit with a Raymond \& Smith (1977) model yields temperatures of $T=1.1$keV and $T=0.3$keV, respectively, for the two different background models. Although these results have very large errors they show that the RXJ0943.0+4701 emission is considerably softer than the cluster emission (compare also Fig. 6). A fit with a power law to the same spectra yields photon indices of 2.2 and 2.9, respectively (again with very large errors). \subsection{Identification of RXJ0943.0+4701} \label{subsec-opt} Since the source RXJ0943.0+4701 appears to be variable and hence very interesting, we try to find the optical counterpart of the X-ray emission at RXJ0943.0+4701. Unfortunately, the northern part of the cluster was never observed with HST. A ground based I-band image of the field around RXJ0943.0+4701 is shown in Fig. 8. The position of RXJ0943.0+4701 is not very well determined in the PSPC observation for several reasons. First, the maxima in different bands are not exactly coincident (see Fig. 6 and 8). Second, there is no other point source near the cluster with an obvious optical counterpart which could be used to correct for a possible pointing offset of the ROSAT telescope. Therefore, we use the HRI image for the correction. In this image the quasar at z=2.055 is used to determine the pointing offset of the HRI. The pointing offset is only 1 arcsec. After correcting for this offset we use the point source P1 (see Fig.1) present in both, HRI and PSPC image, to correct for the PSPC offset, which is relatively large, 11 arcsec. \begin{figure} \psfig{figure=7485_f7.ps,width=8.8cm,clip=} \label{fig-light} \caption[]{Lightcurve of RXJ0943.0+4701 from the ROSAT/PSPC observation. It is consistent with a constant brightness over the observing interval of 72 hours. The observation was carried out in eight intervals with exposure times between 1300 sec and 2400 sec. The lightcurve is background subtracted, the cluster emission at the position of RXJ0943.0+4701 is not excluded. As the cluster emission should be constant it should give only an offset to the curve. Because of the limited number of photons the error bars are quite large. } \end{figure} The corrected positions of RXJ0943.0+4701 are marked in the optical image (Fig. 8). In the region of these positions are 8 possible optical counterparts with R magnitudes brighter than 22$^m$. These objects are marked with numbers in Fig. 8. We measure the brightness of all these objects using fixed aperture photometry with an aperture of 3.5 arcsec on the deep cluster images taken with the 12 filters (Belloni et al. 1995). The R magnitudes of all these counterpart candidates are listed in Table 3 along with their classification from the SEDs plus morphological analysis. Figure 9 shows the observed fluxes and the SEDs that we assign by eye as the best fit. Although this is admittedly a quite qualitative classification, we can still gain valuable information from it. Using all this information we gather: of the possible optical counterparts two are stars, five are galaxies and one is a blue compact object (\#130). 
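To illustrate the measurement described above, the following sketch performs naive fixed-aperture photometry with a 3.5 arcsec aperture on a background-subtracted frame. The pixel scale, the photometric zero point, and the toy image are hypothetical placeholders; a real reduction would additionally handle background estimation, aperture corrections, and calibrated zero points.

\begin{verbatim}
import numpy as np

def aperture_magnitude(image, x0, y0, r_arcsec=3.5, pixscale=0.4, zeropoint=25.0):
    """Sum the counts inside a circular aperture and convert them to a magnitude."""
    r_pix = r_arcsec / pixscale
    yy, xx = np.indices(image.shape)
    mask = (xx - x0)**2 + (yy - y0)**2 <= r_pix**2
    flux = image[mask].sum()
    return zeropoint - 2.5 * np.log10(flux)

# Toy example: a Gaussian "source" centred on a 101x101 background-subtracted frame.
yy, xx = np.indices((101, 101))
img = 500.0 * np.exp(-((xx - 50)**2 + (yy - 50)**2) / (2 * 2.0**2))
print(f"m = {aperture_magnitude(img, 50, 50):.2f}")
\end{verbatim}

Using the same fixed aperture in every filter presumably keeps the flux ratios, and hence the SED shapes, consistent across bands.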
\begin{table*}[htbp] \begin{center} \begin{tabular}{|c|c|c|l|c|} \hline & & & &\\ object & $\alpha$(2000)&$\delta$(2000)& R magnitude & probable identification \\ & & & &\\ \hline & & & &\\ \#94 & 09 42 59.1 & 47 01 04 & 20.37 & spiral \\ \#99 & 09 42 58.2 & 47 01 02 & 19.75 & elliptical/Sa at z=0.30-0.35 \\ \#115 & 09 42 55.8 & 47 00 56 & 19.26 & M0-M2 or K6-K8 star \\ \#118 & 09 42 59.3 & 47 00 56 & 21.05 & Sb, possibly cluster member \\ \#130 & 09 42 57.2 & 47 00 50 & 20.60 & blue compact object, slightly larger than PSF \\ \#134 & 09 42 59.0 & 47 00 48 & 21.63 & possibly elliptical and cluster member \\ \#158 & 09 42 57.5 & 47 00 35 & 21.61 & Sb-Sc, possibly cluster member \\ \#166 & 09 42 58.1 & 47 00 29 & 18.48 & M0-M2 or K6-K8 star \\ & & & &\\ \hline \end{tabular} \end{center} \caption{List of possible optical counterparts of the X-ray source RXJ0943.0+4701 and probable identifications. Numbers are as in Fig. 8. } \end{table*} \begin{figure*} \psfig{figure=7485_f8.ps,width=12.0cm,clip=} \label{fig-opt} \caption[]{Optical image in the region around RXJ0943.0+4701. The image has a size of about 80 arcsec by 80 arcsec (east is left) and is taken in the I-band filter. The possible optical counterparts of RXJ0943.0+4701 are marked with numbers (see Table 3). The position of the X-ray source RXJ0943.0+4701 (M3) of the PSPC observation is marked with (x) for the soft band maximum, (+) for the hard band maximum and (*) for the broad band maximum. The corresponding marginal maximum in the HRI observation is marked with (o). } \end{figure*} \begin{figure*} \label{fig-sed} \psfig{figure=7485_f9.ps,width=17.0cm,clip=} \caption[]{Spectral energy distribution for various possible optical counterparts of RXJ0943.0+4701 (numbers as in Fig. 8). All spectral energy distributions -- except for \#130 -- are overlaid with template spectra of probable identifications. The candidate \#130 could not be uniquely identified. Source \#99 is overlaid with a spectrum of an
efficient than training models from scratch. Attracted by this, a variety of PTLMs have been designed (e.g., GPT \cite{radford2018improving}, BERT \cite{devlin2019bert}) and made available in public model zoos \cite{akbik2019flair,zhou2020s} to facilitate the NLP research community and industry. However, this emerging pre-training solution also introduces new security vulnerabilities to NLP applications \cite{chen2021badpre,zhang2020adversarial,krishna2019thieves}. There are several reasons that make PTLM systems particularly vulnerable. First, \textit{the new PTLM pipeline involves more stages and entities for model development and deployment, which inevitably enlarges the attack surface.} For instance, a Model Publisher is responsible for training and releasing the PTLMs. If he is malicious, he could tamper with the model parameters, which can affect all downstream models inherited from the PTLM \cite{chen2021badpre,shen2021backdoor}. It is difficult for a user to detect or repair a malicious PTLM. Besides, existing threats and attack techniques for standalone models are also applicable to PTLM systems. Second, \textit{a PTLM exhibits higher transferability, which can increase the attack feasibility.} On the one hand, different downstream models originating from the same PTLM share similar language representation features. Because of this similarity, attacks against one model have a high chance of being effective against other models. This gives the adversary new opportunities to attack the black-box victim model \cite{li2020bert,yuan2021transferability}. On the other hand, the language representations of a PTLM transfer well to the downstream tasks. This guarantees high performance of the downstream model, as well as the persistence of threats during the fine-tuning process. As a result, an adversary can inject backdoors into the PTLM that remain effective in arbitrary models inherited from it \cite{chen2021badpre,shen2021backdoor}. Although a variety of attacks have been designed against the PTLM scenario, systematic studies of these threats are still lacking. To bridge this gap, we present the \textit{first} comprehensive survey of PTLM security from three perspectives. (1) We categorize existing threats by the stages of the PTLM system pipeline (e.g., pre-training, fine-tuning, inferring) and by the adversarial entities involved (model publisher, downstream service provider, user). (2) We summarize two types of transferability in the PTLM scenario (landscape, portrait) that can advance different types of attacks. (3) Based on the adversarial goals, we further consider integrity threats and privacy threats. Each category also contains different types of attacks (backdoor and evasion attacks for integrity; privacy violations of data and of the model). Based on the above characterization, we discuss open problems and promising directions for PTLM security. We expect that our work will help NLP researchers and practitioners better understand the current status and future directions of PTLM security, and inspire the design of more advanced attacks and defenses in the future. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{images/procedure.pdf} \caption{The PTLM system pipeline with possible attack goals enabled by two types of transferability.}\label{fig:overview} \end{figure*} \section{System and Threat Overview}\label{sec:overview} \subsection{Pre-trained Language Models} PTLMs are introduced to learn universal language representations \cite{qiu2020pre}.
They can be conveniently transferred to various downstream tasks with high generalization and performance. State-of-the-art PTLMs commonly rely on the well-designed transformer architectures, e.g., GPT \cite{radford2018improving}, XLNet \cite{yang2019xlnet}, BERT \cite{devlin2019bert} and its variants such as RoBERTa \cite{liu2019roberta} and ALBERT \cite{liu2019roberta}. The transformers adopt the self-attention mechanism to capture connection weights between words and learn contextual representation \cite{lin2021survey}. Training such a PTLM usually requires massive training corpus. Existing pre-trained language tasks can be classified into the following two categories \cite{xu2021pre}: \begin{itemize} [leftmargin=*, itemsep=0pt, topsep=0pt] \item \textbf{Autoencoding Model (AE)}. Those models are pre-trained through corrupting input tokens and attempting to reconstruct the original sentences. The classical autoencoding tasks are Masked Language Model and Next Sentence Prediction. The notable AE model, BERT, is designed to pre-train deep bidirectional representations, where a portion of input tokens are replaced by a special symbol [MASK]. \item \textbf{Autoregressive Model (AR)}. These models, such as GPT, are trained to encode unidirectional context, which predict the token of current time-step in accordance with the tokens that have been read. One typical task is text generation. \end{itemize} \subsection{PTLM Pipeline} In this paper, we systematize the security threats to PTLM from three dimensions. The first perspective is the PTLM pipeline. Figure \ref{fig:overview} shows the system overview of developing and deploying a NLP task based on a PTLM, which consists of the following three phases. \begin{itemize} [leftmargin=*, itemsep=0pt, topsep=0pt] \item \textbf{Pre-training}. In this step, a Model Publisher ($\mathcal{MP}$) trains a foundation PTLM from enormous unsupervised corpus. This PTLM is able to output the language representation for input sentences from different distributions. \item \textbf{Fine-tuning}. In this phase, a Downstream Service Provider ($\mathcal{DSP}$) obtains the PTLM from $\mathcal{MP}$, and transfers it to a specific downstream model. To achieve this, $\mathcal{DSP}$ usually appends an auxiliary structure (e.g., a linear classifier) to the PTLM, and then fine-tunes the model with his downstream corpus in a supervised manner. Since the PTLM has already obtained powerful feature extraction ability in pre-training step, the fine-tuned model can inherit the knowledge of the PTLM to provide the representation of input sentences from the downstream dataset. \item \textbf{Inferring}. $\mathcal{DSP}$ then deploys the fine-tuned model as a NLP service, and provides APIs for users ($\mathcal{U}$) to remotely utilize this model. When receiving the text queries from $\mathcal{U}$, the inference system conducts forward propagation to obtain the model output, and returns it to $\mathcal{U}$. \end{itemize} Since this pipeline involves multiple phases with different parties, a larger attack surface is introduced to compromise the PTLM or downstream model. Specifically, (1) in the model pre-training phase, if the $\mathcal{MP}$ is malicious, he could tamper with the parameters of the PTLM. When an honest $\mathcal{DSP}$ downloads and fine-tunes this malicious PTLM with clean corpus, the corresponding downstream model can be still vulnerable. 
(2) In the fine-tuning phase, an honest-but-curious $\mathcal{DSP}$ could extract the sensitive information about the training or inference samples based on the embedding results. (3) In the inferring phase, a malicious $\mathcal{U}$ could leverage the released APIs to compromise the model predictions or extract private information. More seriously, since the downstream model is possibly transferred from a public PTLM, $\mathcal{U}$ can use such information to design more effective attacks. \subsection{Attack Goals} The second perspective of our survey is the adversarial goals. We consider two categories of security threats to the PTLM system. The first one is \textbf{integrity} attacks, where an active adversary tries to compromise the integrity of the model parameters or predictions. Particularly, a malicious $\mathcal{MP}$ could embed \textit{DNN backdoors} into the PTLM, which will be transferred to the downstream model as well. Such backdoors will be activated by malicious input containing specific triggers. Besides, a malicious $\mathcal{U}$ could perform \textit{evasion attacks} to mislead the downstream model to produce wrong results. The second category of threats is \textit{privacy} attacks. An honest-but-curious adversary tries to steal sensitive information from pre-trained or downstream models. For instance, via the interaction with the inference system, a malicious $\mathcal{U}$ could compromise the \textit{data privacy}, e.g., recovering the attributes, keywords or even the entire sentence of the training corpus. He could also compromise the \textit{model privacy} by extracting the proprietary pre-trained model. \newcommand{\specialcell}[2][c]{% \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} \begin{table*}[] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \Xhline{2pt} \textbf{Threat} & \textbf{Attack} & \textbf{Attack Category} & \textbf{Paper} & \textbf{Phase/Attacker} & \textbf{Technique} & \textbf{Target Model} & \textbf{Transferability} \\ \Xhline{2pt} \multirow{24}{*}{\textbf{Integrity}} & \multirow{7}{*}{Backdoor} & \multirow{2}{*}{Task-specific} & \cite{kurita2020weight} & \multirow{7}{*}{Pre-training($\mathcal{MP}$)} & \multirow{5}{*}{Data Poisoning} & AE & Portrait\\ \cline{4-4}\cline{7-8} & & & \cite{zhang2021trojaning} & & & AE/AR & Landscape, Portrait\\ \cline{3-4}\cline{7-8} & & \multirow{5}{*}{Task-agnostic} & \cite{chen2021badpre} & & & AE & Portrait\\ \cline{4-4}\cline{7-8} & & & \cite{zhang2021trojaning} & & & AE/AR & Landscape, Portrait\\ \cline{4-4}\cline{7-8} & & & \cite{li2021backdoor} & & & AE/AR & Landscape, Portrait \\ \cline{4-4}\cline{6-8} & & & \cite{zhang2021red} & & \multirow{2}{*}{Model Poisoning} & AE & Portrait\\ \cline{4-4}\cline{7-8} & & & \cite{shen2021backdoor} & & & AE & Portrait\\ \cline{2-8} & \multirow{17}{*}{Evasion} & \multirow{13}{*}{Word-level} & \cite{jin2020bert} & \multirow{17}{*}{Inferring($\mathcal{U}$)} & \multirow{7}{*}{Heuristic Generation} & AE & Landscape\\ \cline{4-4}\cline{7-8} & & & \cite{zang2019word} & & & AE & Landscape\\ \cline{4-4}\cline{7-8} & &
# Is a Physics PhD For me? last minute doubts 1. Dec 14, 2014 ### bluechic92 Hey Everyone, So I was set. I studied and re-took the pgre. I thought long and hard about what I wanted to do in graduate school. I thought about career goals. I talked to professors and graduate students. I even submitted two apps! Two schools that I would be extremely happy to attend. Recently, out of nowhere, I am having doubts. Tomorrow is one of the main deadlines for a couple of my schools and I couldn't even get myself to look at the apps. I kept procrastinating. I kept looking for a "way out" or something else that would make me happy. IDK why I'm doing this. I wasn't like this before :/ I was super excited about physics PhD and research. I loved doing research in undergrad and I am very enthusiastic about physics. Many who know me know this. I am confident that I can do well in graduate school. I am a hard worker, love to self study and teach myself things, and I love doing research. Yet, for some reason I am feeling scared. I can't think of a better word, but I just feel scared. I am in a gap year at the moment. I spent most of this year looking for other jobs and trying to learn other things, but I kept coming back to physics. Heck, I even went and audited a QFT class for fun. Obviously some stupid thing is "blocking me", but I want to move it out of the way. I want to feel like I used to. Last time I did research was a year ago. I used to love waking up and going to my office to work on my research. It was fun. Even when I was stuck, I was having fun. I enjoyed it. How can I get that back? I'm scared I won't ever feel that way again :/. I guess this is something internal and it's not something people can just help me with. DO you think these doubts mean that I should wait another year? I wish I could have found a prof to do research with. Unfortunately, the two schools closest to me are extremely competitive with extremely busy profs. ( I emailed a few at each with no luck) :/ 2. Dec 14, 2014 Staff Emeritus I'm starting to worry about you too. You say you are confident, but this is the sixth or seventh thread you have posted asking for validation. I'm afraid you are going to need more self-confidence if you are going to get through graduate school. I would not expect my future graduate school to be warm and fuzzy - I would expect it to be cold, prickly and impersonal. 3. Dec 14, 2014 ### bluechic92 I might have worded this entire thread wrong, mostly because of sleepiness. I remember my previous posts : i was initially not confident because of my pgre/gpa and next because my career goals changed. I am pretty happy with the list I have and I am confident that I can get admitted to some. I just thought more about what One of the poster's said to my previous thread. He asked me why I enjoyed x, y, and z. I thought more about that recently which is what led to my procrastination. I guess I'm not 100% if I want this anymore, but I used to be. I know people go through this in life at times in whatever field or career, but how to deal with it? I really did word my original post wrong. I do plan to still apply. I am just generally wondering how people deal with this type of situation? Idk what to call it honestly. It's a more of a doubt of wanting this path than a doubt of my ability to succeed in this path. I am sure it's a short term doubt. I have had this conversation with my prof who has went through this three times. Once during undergrad, next during grad school, and then during post doc. 
Now he's a tenured professor! Just wanted to hear more stories. 4. Dec 15, 2014 ### Delong I have the similar situation in that I know I ultimately want to to do graduate school but I have been procrastinating on the applications which are due in 11 days. I don't think I can finish them. Other dreams have started to pop up which I think I might pursue in the meantime because I know even if I go into graduate school now my spirit will not be in it. Because my spirit and mind are not there I think I better work on other things. Right now what's captivating me is the thought of being a mandarin Chinese and english translator or being a Chinese teacher. 5. Dec 15, 2014 ### bluechic92 That's pretty cool Delong, are you going to go for it? Being a Chinese teacher? Maybe you can still apply and defer for a year. I have a friend who is in China currently teaching english. I know that's not what you meant, but maybe there are programs that interest you too. I used to enjoy learning a bunch of things. I would sit and teach myself group theory for physicist, differential geometry etc. but none of it is appealing to me anymore. I just want to bring that part of myself back. I need that "me" for graduate school. I'll take my own advice and apply anyways then maybe defer. Best of luck! 6. Dec 15, 2014 ### Delong I can seriously relate, going to graduate school and practicing science used to consume my mind. But now I have trouble feeling the hunger again. Ideally I would like to do both: teach english to chinese speakers and teach chinese to english speakers cause if i can then why not? I just came back from a trip to China and I'm trying to reconnect with my original culture,it's kind of become a journey of my life. I'll try to apply to at least a few low tier schools before the deadline. 7. Dec 15, 2014 ### bluechic92 That sounds pretty cool! I sort of want to take that type of journey someday. A part of me wishes I had applied for Fulbright or the Watson Fellowship back when I was in undergrad. BTW, you may want to check out Fulbright for next year ( if you are U.S citizen). This is the program my friend is doing: http://www.tfchina.org/en/index.aspx [Broken] On the bright side, my small period of doubt vanished. I am sure it will appear sometime later in life because that's just the way it is. I'm glad I took the time to talk to "real" grown ups, haha. I have a lot of work to do now and for the next several days. Last edited by a moderator: May 7, 2017 8. Dec 15, 2014 ### vela Staff Emeritus A lot of people start grad school, realize it's not for them, and leave. There's no shame in changing your mind after you get there. At least if you apply to schools now, you'll have the option to go next year if you decide to try. 9. Dec 15, 2014 ### Almeisan Confused about why there isn't a MSc in physics in the US. Here in Europe it is normal to get a MSc in physics, then go into business. No one gets a university-level BSc and then leaves. In fact, in my country that was impossible to do. An academic education took 5 years minimum. I don't get the BSc -> PhD track. How do you get a good with just a BSc? Don't know if it is an option to get an MSc, NA or Europe, to either delay the declension or to prepare fully for high level industry
type), but it is not possible to apply two colors or two types to the same cloth (mutual exclusiveness). Hence, at the end of the coding process, a collection of quotations has been labelled by each of the coders with one or more codes from the semantic domains according to the mutual exclusiveness rule. However, it is perfectly possible that the codings provided by the different coders do not agree, i.e., different coders interpret the data in different ways, perhaps due to inconsistencies or fuzziness in the definition of the codes. To correct this issue, it is necessary to evolve the codebook by refining both codes and meanings until all coders interpret it in the same way and agree on its application. The detection of these flaws in agreement is precisely the aim of the ICA techniques. These are a collection of quantitative coefficients that allow us to measure the amount of disagreement in the different codings and to determine whether it is acceptable (so we can rely on the output of the coding process) or not (so we must refine the codebook and repeat the coding with new data). For this purpose, a unified framework for measuring and evaluating the ICA was established in \cite{gonzlezprieto:2020}, based on a new interpretation of Krippendorff's $\alpha$ coefficients. Krippendorff's $\alpha$ coefficients \cite{Hayes:2007,Krippendorff:2004b,Krippendorff:2011,Krippendorff:2016} are a standard tool for quantifying the agreement in content and thematic analysis due to their well-established mathematical properties and probabilistic interpretation. In our research, we shall use Krippendorff's $\alpha$ coefficients, as described in Appendix \ref{appendix-ica}. \subsection{Initial/Open coding} \label{sec:open-coding} Recall from Section \ref{sec:process} that this activity aims to discover the concepts underlying the data and to instantiate them in the form of codes. Thereby, at each iteration of the open coding, $n$ documents of the survey (a document is a set of answers to the survey by one of the participants) are analyzed by R1, R2, and R3, i.e., chopped into quotations that are assigned either to a previously discovered code or to a new one that emerges to capture a new concept. The process is conducted as follows. R1 analyzes the $n$ documents, i.e., identifies quotations, creates a codebook (codes and semantic domains), and carries out the coding. When R1 ends, R2 analyzes the same $n$ documents using the codebook created by R1, i.e., analyzes the previous quotations or identifies new ones and labels these quotations with codes previously proposed by R1 or with new codes. If R2 adds new quotations or codes, these changes are reported in a \textit{disagreements diary}. After R2 finishes the coding process, the new codebook is delivered to R3, who repeats the process. Thereby, according to our process, the coders use the codes previously proposed or generate new ones if they think that some key information is missing. Hence, the process is flexible enough to allow the coders to add their points of view in the form of new codes, but the existence of a common codebook also increases the chances of achieving a consensus. After an iteration ends (i.e., $n$ documents have been coded by R1, R2, and R3), the ICA is calculated. 
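For intuition, the following is a minimal Python sketch of plain nominal Krippendorff's $\alpha$, computed from a coders-by-quotations matrix. It is only an illustration of the underlying agreement measure: the $Cu\textrm{-}\alpha$ and $cu\textrm{-}\alpha$ coefficients used in this study additionally handle multi-valued codings grouped into semantic domains (see Appendix \ref{appendix-ica}), and the toy data below is made up.
\begin{verbatim}
from collections import Counter
from itertools import permutations

def nominal_alpha(data):
    """data: one row per coder, one column per unit; None marks a missing coding.
    Assumes at least two distinct labels appear overall."""
    coincidence = Counter()        # ordered label pairs, weighted by 1/(m_u - 1)
    for u in range(len(data[0])):
        values = [row[u] for row in data if row[u] is not None]
        if len(values) < 2:
            continue               # units coded by fewer than two coders are dropped
        for a, b in permutations(values, 2):
            coincidence[(a, b)] += 1.0 / (len(values) - 1)
    totals = Counter()             # totals[c] = n_c, marginal of the coincidence matrix
    for (a, _), w in coincidence.items():
        totals[a] += w
    n = sum(totals.values())
    d_obs = sum(w for (a, b), w in coincidence.items() if a != b) / n
    d_exp = sum(totals[a] * totals[b]
                for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1.0 - d_obs / d_exp

# Toy example: three coders (R1, R2, R3) and six quotations.
codings = [
    ["S1", "S2", "S3", "S3", "S1", None],
    ["S1", "S2", "S3", "S1", "S1", "S2"],
    ["S1", "S2", "S2", "S3", "S1", "S2"],
]
print(round(nominal_alpha(codings), 3))
\end{verbatim}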
In particular, we shall use Krippendorff's $Cu\textrm{-}\alpha$ coefficient as a quality control, with two scenarios being possible: \begin{itemize} \item $Cu\textrm{-}\alpha$ is below an acceptable threshold (in our case, we fix the standard $Cu\textrm{-}\alpha < 0.8$). This indicates that there exist significant disagreements in the interpretation of the codes among the coders. In that situation, R1, R2, and R3 meet to discuss their interpretation of the codes. This \textit{review meeting} delivers the \textit{disagreements diary} and a \textit{refined codebook} in which the definitions and range of application of the codes are better delimited. With this new codebook as a basis, a new iteration starts with the next $n'$ documents of the corpus. \item $Cu\textrm{-}\alpha$ is above or equal to the threshold ($Cu\textrm{-}\alpha \geq 0.8$). This means that there exists a consensus among the coders on the meaning of the codes. At this point, the open coding process stops and the generated codes (actually, the whole codebook) are used as input for the following activities, i.e., the selection of the core categories (Section \ref{sec:selection-core}) and the selective coding (Section \ref{sec:selective-coding}). \end{itemize} Additionally, the value of Krippendorff's $cu\textrm{-}\alpha$ coefficient is also computed per semantic domain. As explained in Section \ref{sec:inter-coder-agreement}, a low value of $cu\textrm{-}\alpha$ in a particular domain means that the coders are failing to interpret the codes of that domain in the same way. This provides a valuable clue about the conflicting codes: a small value of $cu\textrm{-}\alpha$ points to potentially problematic codes, so that, during the review meeting, the coders can focus on the codes of these domains. Hopefully, this will lead to a more effective refinement of the codebook, which improves the ICA value of the next iteration more markedly. The following sections describe the evolution of the agreement during the open coding activity of our GT study on EdgeOps. As we will see, after the first iteration of the coding, there was no consensus on the meaning of the codes ($Cu\textrm{-}\alpha < 0.8$). However, after refining the codebook and conducting a second iteration, the agreement improved to reach an acceptable threshold ($Cu\textrm{-}\alpha \geq 0.80$), so the initial coding was concluded. \subsubsection*{Iteration 1} In the first iteration of the open coding process, R1, R2, and R3 analyzed $6$ documents. R1 created a codebook with 29 codes that was subsequently refined by R2 and R3. As a by-product of this process, $40$ codes were discovered and divided into $7$ semantic domains (denoted by S1, S2, ..., S7). After completing the coding process, the $Cu\textrm{-}\alpha$ and $cu\textrm{-}\alpha$ ICA coefficients were computed and their values are shown in Table \ref{tab:table-open-iter1}. \begin{table}[h] \begin{center} \small \caption{Values of the different Krippendorff's $\alpha$ coefficients in iteration 1 of the open coding. 
In bold, the values above the acceptability threshold ($ \geq 0.80$).} \vspace{0.2cm} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{$cu\textrm{-}\alpha$ per semantic domain} & \multirow{ 2}{*}{$Cu\textrm{-}\alpha$}\\\cline{1-7} S1 & S2 & S3 & S4 & S5 & S6 & S7 & \\\hline \textbf{0.81} & \textbf{0.98} & 0.59 & \textbf{0.80} & \textbf{1.00} & \textbf{1.00} & \textbf{1.00} & 0.56 \\\hline \end{tabular} \label{tab:table-open-iter1} \end{center} \end{table} As we can observe from this table, the value of the global $Cu\textrm{-}\alpha$ coefficient did not reach the acceptable threshold of $0.8$. For this reason, it was necessary to conduct a review meeting to discuss the disagreements and the application criteria of the different codes. The outputs of this meeting are documented in the \textit{disagreements diary} file of the \textit{open coding} folder in the public repository. To highlight the problematic codes, we considered the $cu\textrm{-}\alpha$ coefficients computed per semantic domain. From Table \ref{tab:table-open-iter1}, we observe that domain S3 obtained a remarkably low value of the $cu\textrm{-}\alpha$ coefficient. A thorough look at the particular codes within S3 shows that this domain includes codes related to the functionality of the system. This is a particularly fuzzy domain in which several concepts can be confused. During the review meeting, clarifications about these codes were necessary to avoid misconceptions. After this, a new codebook was released. In this new version, memos and comments were added and a code was removed, so $39$ codes (and $7$ semantic domains) proceeded to the second iteration of the open coding. \subsubsection*{Iteration 2} R1, R2, and R3 analyzed another $6$ documents. Since the coders agreed on a common codebook in the previous iteration, we can expect greater agreement, which materializes as a higher ICA value. As a by-product of this second iteration, $8$ new codes arose, leading to a new version of the codebook with $47$ codes and $7$ semantic domains. The ICA values of this second iteration are shown in Table \ref{tab:table-open-iter2}. \begin{table}[h] \begin{center} \small
The MNIST database of handwritten digits is a large, freely available collection of grayscale images: a training set of 60,000 examples and a test set of 10,000 examples, for 70,000 samples in total. The digits have been size-normalized and centered in a fixed-size 28x28 pixel image, and the task is to classify each image as one of the digits 0-9. Because the dataset is clean, well understood, and effectively solved, it is the standard starting point for learning and practicing how to develop, evaluate, and use image classifiers, from classical methods to convolutional neural networks. Handwritten digit recognition also has practical uses, for example as part of optical character recognition pipelines and in processing examination papers at academic institutions, and extensions such as MNIST-MIX add digits from 10 different languages.

There are several common ways to work with the data: download and extract the image files directly, load the two CSV files (train and test), or use library helpers such as the python-mnist package or TensorFlow's built-in loader. Each 28x28 image can be kept as a 2-D array for convolutional models or flattened into a 784-element vector for classical classifiers. On the classical side, k-nearest neighbours (for instance sklearn's KNeighborsClassifier) works reasonably well, optionally after PCA dimensionality reduction; projecting onto the top two eigenvectors also gives a convenient 2-D visualization of the data. One reference system based on the Local Binary Pattern Variance descriptor with a 10-nearest-neighbour classifier reports an accuracy of 89.81% with its best parameter settings (radius, histogram bins, and a 9-region division of the image). Convolutional neural networks built with Keras on TensorFlow (or with other frameworks such as MXNet's Gluon API) do substantially better: most standard implementations reach roughly 98-99% test accuracy. The goal of this project is to train and compare different classifiers on the same data, and then to use the trained model to recognize digits segmented from new images, such as characters drawn with the mouse on an HTML canvas.
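As a concrete illustration of the models discussed above, here is a minimal Keras CNN sketch for MNIST. It assumes TensorFlow 2.x is installed; the layer sizes and the three training epochs are arbitrary choices for the sketch rather than tuned values, and accuracy in the 98-99% range should not be taken as guaranteed.

# Minimal MNIST CNN with tf.keras (TensorFlow 2.x assumed).
import tensorflow as tf

# 60,000 training and 10,000 test images, each a 28x28 grayscale array.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel axis and scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))    # [test loss, test accuracy]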
and navigating with constant policies $h_{k_0}$, $h_{k_1}$, $h_{k_2}$, and $h_{k_3}$, respectively. Figure \ref{fig:trayectoriaColor} corroborates that the policies improve as the online agent moves along its trajectory, allowing the episodic agent to navigate better. Indeed, at first the episodic agent only knows to go northwest on the straight blue line, but eventually it manages to follow the purple line moving towards the charger and avoiding the obstacle. This apparent improvement in Figure \ref{fig:trayectoriaColor} is not reflected in a significant step increase in Figure \ref{fig:rewards}. This is because the forgetting factor $\gamma=0.9$ weights only a few steps of the trajectory in the value function, and the first steps are where the trajectories are not significantly separated. Overall, this numerical example shows that the algorithm developed in this paper is capable of learning how to navigate on a loop between two goal locations, avoiding an obstacle, and following a cyclic trajectory that does not comply with the standard stationary assumptions in the literature. \begin{figure} \includegraphics[scale=0.2]{figures/trayectoriasColor.png} \caption{Trajectories of an episodic agent using four policies that are produced by the online agent when following the cyclic trajectory of Figure \ref{fig:trayectoria}. Each colored point represents a location $x_k$ in which the online agent updates its policy to obtain $h_k$. The line of the corresponding color represents the trajectory of the episodic agent that uses the fixed policy $h_k$ to navigate from $x$ towards the charger. } \label{fig:trayectoriaColor} \end{figure} \section{Online Policy Gradient}\label{sec_online} \subsection{Stochastic Gradient Ascent}\label{sec_offline_alg} In order to compute a stochastic approximation of $\nabla_h U_{s_0}(h)$ given in \eqref{eqn_nabla_U}, we need to sample from the distribution $\rho_{s_0}(s,a)$ defined in \eqref{eqn_discounted_distribution}. The intuition behind $\rho_{s_0}(s,a)$ is that it weights by $(1-\gamma)\gamma^t$ the probability of the system being at a specific state-action pair $(s,a)$ at time $t$. Notice that the weight $(1-\gamma)\gamma^t$ is equal to the probability of a geometric random variable with parameter $\gamma$ taking the value $t$. Thus, one can interpret the distribution $\rho_{s_0}(s,a)$ as the probability of reaching the state-action pair $(s,a)$ after running the system for $T$ steps, with $T$ randomly drawn from a geometric distribution of parameter $\gamma$, and starting at state $s_0$. The geometric sampling transforms the discounted infinite horizon problem into an undiscounted episodic problem with random horizon (see e.g. \cite[pp.39-40]{bertsekas1996NDP}). This supports steps 2-7 in Algorithm \ref{alg_stochastic_grad}, which describes how to obtain a sample $(s_T,a_T)\sim\rho_{s_0}(s,a)$. Then, to compute an unbiased estimate of $\nabla_h U_{s_0}(h)$ (cf., Proposition \ref{prop_unbiased_grad}) one can substitute the sample $(s_T,a_T)$ in the stochastic gradient expression \begin{equation}\label{eqn_stochastic_gradient} \hat{\nabla}_h U_{s_0}(h,\cdot) = \frac{1}{1-\gamma}\hat{Q}(s_T,a_T;h)\kappa(s_T,\cdot)\Sigma^{-1}(a_T-h(s_T)), \end{equation} with $\hat Q(s_T,a_T;h)$ being an unbiased estimate of $Q(s_T,a_T;h)$. Algorithm \ref{alg_stochastic_grad} summarizes the steps to compute the stochastic approximation in \eqref{eqn_stochastic_gradient}. We claim that it is unbiased in Proposition \ref{prop_unbiased_grad} as long as the rewards are bounded. 
We formalize this assumption next, along with some other technical conditions required throughout the paper. \begin{assumption}\label{assumption_reward_function} There exists $B_r>0$ such that $\forall (s,a) \in \ccalS\times\ccalA$, the reward function $r(s,a)$ satisfies $|r(s,a)|\leq B_r$. In addition, $r(s,a)$ has bounded first and second derivatives, with bounds $|\partial r(s,a)/ \partial s|\leq L_{rs}$ and $|\partial r(s,a)/\partial a |\leq L_{ra}$. \end{assumption} Notice that these assumptions are on the reward, which is user defined; as such, they do not impose a hard requirement on the problem. \begin{algorithm} \caption{StochasticGradient}\label{alg_stochastic_grad} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \Require $h$, $s_0$ \State Draw an integer $T$ from a geometric distribution with parameter $\gamma$, $P(T = t) = (1-\gamma)\gamma^t$ \State Select action $a_{0} \sim \pi_h(a|{s_0})$ \For {$t = 0,1,\ldots T-1 $} \State Advance system $s_{t+1} \sim P_{s_t\to s_{t+1}}^{a_t}$ \State Select action $a_{t+1} \sim \pi_h(a|{s_{t+1}})$ \EndFor \State Get estimate of $Q(s_T,a_T;h)$ \label{step_Q} \State Compute the stochastic gradient $\hat{\nabla}_hU(h,\cdot)$ as in \eqref{eqn_stochastic_gradient} \Return $\hat{\nabla}_hU(h,\cdot)$ \end{algorithmic}\label{alg_stochastic_gradient} \end{algorithm} \begin{proposition}[Proposition 3 of \cite{paternain2018stochastic}] \label{prop_unbiased_grad} The output $\hat{\nabla}_h U_{s_0}(h,\cdot)$ of Algorithm \ref{alg_stochastic_gradient} is an unbiased estimate of $\nabla_h U_{s_0}(h,\cdot)$ in \eqref{eqn_nabla_U}. \end{proposition} {An unbiased estimate of $Q(s_T,a_T)$ can be computed considering the cumulative reward from $t=T$ until a randomly distributed horizon $T_Q\sim geom(\gamma)$ (cf. Proposition 2 of \cite{paternain2018stochastic}). The variance of this estimate may be high, resulting in a slow convergence of the policy gradient algorithm (Algorithm \ref{alg_stochastic_grad}). For these reasons, the literature on RL includes several practical improvements. Variance can be reduced by including batch versions of the gradient method, in which several stochastic gradients are averaged before performing the update in \eqref{eqn_stochastic_gradient}. One particular case of a batch gradient iteration in \cite{paternain2018stochastic} averages two gradients sharing the same state $s_i$ with stochastic actions. Other approaches include the use of baselines \cite{williams1992simple} and actor-critic methods \cite{konda2000actor,bhatnagar2009natural,degris2012off}. Irrespective of the form selected to estimate the $Q$ function, with the estimate \eqref{eqn_stochastic_gradient} one could update the policy iteratively by running stochastic gradient ascent} \begin{equation}\label{eqn_stochastic_update} h_{k+1} = h_k + \eta_k \hat{\nabla}_h U_{s_0}(h_k,\cdot), \end{equation} where $\eta_k>0$ is the step size of the algorithm. Under proper conditions, stochastic gradient ascent methods can be shown to converge with probability one to a local maximum \cite{pemantle1990nonconvergence}. This approach has been widely used to solve parametric optimization problems where the decision variables are vectors in $R^n$, and in \cite{paternain2018stochastic} these results are extended to non-parametric problems in RKHSs. Observe, however, that in order to provide an estimate of $\nabla U_{s_0}(h_k,\cdot)$, Algorithm \ref{alg_stochastic_grad} requires $s_0$ as the initial state. 
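For illustration, the geometric-horizon sampling of Algorithm \ref{alg_stochastic_grad} can be sketched in a few lines of Python. The environment and policy interfaces below (\texttt{step}, \texttt{reward}, \texttt{sample\_action}) are assumptions made for the sketch and are not part of the paper; the returned triplet feeds the stochastic gradient expression \eqref{eqn_stochastic_gradient}, which additionally applies the factor $1/(1-\gamma)$ and the kernel term.
\begin{verbatim}
import numpy as np

def stochastic_rollout(env, sample_action, s0, gamma, rng):
    # T ~ geom(gamma): P(T = t) = (1 - gamma) * gamma**t for t = 0, 1, ...
    T = rng.geometric(1.0 - gamma) - 1
    s, a = s0, sample_action(s0)
    for _ in range(T):
        s = env.step(s, a)              # assumed transition sampler s' ~ P(. | s, a)
        a = sample_action(s)            # a ~ pi_h(. | s), e.g. Gaussian around h(s)
    # Unbiased Q(s_T, a_T; h) estimate: undiscounted reward sum over a second,
    # independent geometric horizon T_Q (cf. Proposition 2 of the cited work).
    T_q = rng.geometric(1.0 - gamma) - 1
    q_hat, s_q, a_q = 0.0, s, a
    for _ in range(T_q + 1):
        q_hat += env.reward(s_q, a_q)   # bounded reward, as in the assumption above
        s_q = env.step(s_q, a_q)
        a_q = sample_action(s_q)
    return s, a, q_hat                  # (s_T, a_T, Q-hat) for the gradient estimate
\end{verbatim}
Note that the sketch, like Algorithm \ref{alg_stochastic_grad}, must be started from the prescribed initial state $s_0$.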
Hence, it is not possible to get estimates of the gradient without resetting the system to the initial state $s_0$, preventing a fully online implementation. {As discussed in Section \ref{sec_continuing}, this is a common challenge in continuing task RL problems, and in general the alternative is to modify the objective function and to assume the existence of a steady-state distribution to which the MDP converges (see e.g., \cite[Chapter 13]{sutton1998reinforcement} or \cite{degris2012off}), to make the problem independent of the initial state. In this work we choose to keep the objective \eqref{eqn_problem_statement}, since the ergodicity assumption is not necessarily guaranteed in practice and the alternative formulation makes transient behaviors irrelevant, as was also discussed in Section \ref{sec_continuing}. } Notice that, without loss of generality, Algorithm \ref{alg_stochastic_grad} can be initialized at state $s_k$ and its output becomes an unbiased estimate of $\nabla U_{s_k}(h_k,\cdot)$. The main contribution of this work is to show that the gradient of $U_{s_k}(h)$ is also an ascent direction for $U_{s_0}(h)$ (cf., Theorem \ref{prop_all_gradients}) and thus these estimates can be used to maximize $U_{s_0}(h)$, allowing a fully online implementation. We describe the algorithm in the next section. \subsection{Online Implementation} As suggested in the previous section, it is possible to compute unbiased estimates of $\nabla_h U_{s_k}(h_k)$ by running Algorithm \ref{alg_stochastic_gradient} with inputs $h_k$ and $s_k$. The state $s_k$ is defined for all $k\geq 1$ as the state resulting from running Algorithm \ref{alg_stochastic_gradient} with inputs $h_{k-1}$ and $s_{k-1}$. That is, at each step of the online algorithm \textemdash which we summarize under Algorithm \ref{alg_online_policy_gradient}\textemdash the system starts from state $s_k$ and transits to a state $s_{T_k}$ following steps 3--6 of Algorithm \ref{alg_stochastic_gradient}. Then, it advances from $s_{T_k}$ to $s_{k+1}$ to perform the estimation of the $Q$-function, using an estimator that admits an online implementation, for instance by adding the rewards of the next $T_Q$ steps, with $T_Q$ being a geometric random variable. The state $s_{k+1}$ is the initial state for the next iteration of Algorithm \ref{alg_online_policy_gradient}. \begin{algorithm} \caption{Online Stochastic Policy Gradient Ascent} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \Require step size $\eta_0$ \State \textit{Initialize}: $h_0=0$, and draw initial state $s_0$ \For{$k=0 \ldots$} \State Compute the stochastic gradient and next state:\\ $\left(\hat{\nabla}_h U(h_k,\cdot), s_{k+1} \right) = \textrm{StochasticGradient}(h_k, s_k)$ \State Stochastic gradient ascent step $$ \tilde{h}_{k+1} = h_k +\eta_k \hat{\nabla}_h U(h_k,\cdot) $$ \State Reduce model order $h_{k+1} =$ KOMP($\tilde{h}_{k+1},\epsilon_K$) \EndFor \end{algorithmic}\label{alg_online_policy_gradient} \end{algorithm} Notice that the update \eqref{eqn_stochastic_update} \textemdash step 5 in Algorithm \ref{alg_online_policy_gradient} \textemdash requires the introduction of a new element $\kappa(s_{T_k},\cdot)$ in the kernel dictionary at each iteration, thus resulting in memory explosion. {To overcome this limitation we modify the stochastic gradient ascent by introducing a projection over an RKHS of lower dimension as long as the induced error remains below a
as follows. \begin{proposition}\label{prop:sirt_recur} Starting with the last coordinate $k = d$, we set $\bs{\mathcal{B}}_d = \bs{\mathcal{A}}_d$. Suppose for the first $k$ dimensions ($k>1$), we have a coefficient tensor $\bs{\mathcal{B}}_k \in \mathbb{R}^{r_{k-1}\times n_k \times r_k}$ that defines a marginal function $\hat\pi_{\leq k}(\bs x_{\leq k})$ as in \eqref{eq:marginal_sqk}. The following procedure can be used to obtain the coefficient tensor $\bs{\mathcal{B}}_{k-1}\in \mathbb{R}^{r_{k-2}\times n_{k-1} \times r_{k-1}}$ for defining the next marginal function $\hat{\pi }_{ < k}(\bs x_{< k})$: \begin{enumerate}[leftmargin=14pt] \item Use the Cholesky factorisation of the mass matrix, $\bs \cL_k \bs \cL_k^\top = \bs \cM_k \in \mathbb{R}^{n_k \times n_k}$, to construct a tensor $\bs{\mathcal{C}}_k \in \mathbb{R}^{r_{k-1}\times n_k \times r_k}$: \begin{align} \bs{\mathcal{C}}_k[\alpha_{k-1}, \tau, \ell_{k}] = \sum_{i = 1}^{n_k} \bs{\mathcal{B}}_k[\alpha_{k-1}, i, \ell_{k}] \, \bs \cL_k[i, \tau]. \end{align} \item Unfold $\bs{\mathcal{C}}_k$ along the first coordinate \cite{kolda2009tensor} to obtain a matrix $\bs \cC_k^{(\rm R)} \in \mathbb{R}^{r_{k-1} \times (n_k r_k)} $ and compute the thin QR factorisation \begin{align}\label{eq:SIRT-QR} \bs \cQ_k \bs \cR_k = \big( \bs \cC_k^{(\rm R)} \big)^\top, \end{align} where $\bs \cQ_k \in \mathbb{R}^{(n_k r_k) \times r_{k-1}} $ is semi--orthogonal and $\bs \cR_k \in \mathbb{R}^{r_{k-1} \times r_{k-1}}$ is upper--triangular. \item Compute the new coefficient tensor \begin{align}\label{eq:B_recur} \bs{\mathcal{B}}_{k-1}[\alpha_{k-2},i, \ell_{k-1}] = \sum_{\alpha_{k-1} = 1}^{r_{k-1}} \bs{\mathcal{A}}_{k-1}[\alpha_{k-2},i, \alpha_{k-1}]\, \bs \cR_k[\ell_{k-1},\alpha_{k-1}]. \end{align} \end{enumerate} Furthermore, at index $k = 1$, the unfolded $\bs{\mathcal{C}}_1$ along the first coordinate is a row vector $\bs \cC_1^{(\rm R)} \in \mathbb{R}^{1 \times (n_1 r_1)} $. Thus, the thin QR factorisation $\bs \cQ_1 \bs \cR_1 = \big( \bs \cC_1^{(\rm R)} \big)^\top$ produces a scalar $\bs \cR_1 \in \mathbb{R}$ and the normalising constant $\hat{z} = \int_{\mathcal{X}_1} \hat{\pi }_{ \leq 1}(\bs x_{1}) \lambda_1(\bs x_1) d\bs x_1$ can be obtain by $ \hat{z} = \bs \cR_1^2 = \|\bs \cC_1^{(\rm R)}\|^2$. \end{proposition} \begin{proof} See Appendix \ref{appen:prop:sirt_recur}. \end{proof} \begin{proposition}\label{prop:sirt_cond_cdf} The marginal PDF of $\hat{\bs X}_1$ can be expressed as \begin{equation} \hat{f}_{\hat{\bs X}_1}(\bs x_1) = \sum_{\ell_{1}=1}^{r_{1}} \Big(\sum_{i =1}^{n_1}\phi_{1}^{(i)}(\bs x_1) \, \bs \cD_1 [i, \ell_1] \Big)^2 \,\lambda_1(\bs x_1), \quad \textrm{where} \quad \bs \cD_1[i, \ell_1] = \frac{1}{\hat{z}} \bs{\mathcal{B}}_1 [\alpha_{0}, i, \ell_1], \end{equation} and $\alpha_0 = 1$. For $k>1$ and a given $\bs x_{<k}$, the conditional PDF of $\hat{\bs X}_k | \hat{\bs X}_{<k}$ can be expressed as \begin{equation} \hat{f}_{\hat{\bs X}_k | \hat{\bs X}_{< k}}(\bs x_k | \bs x_{<k}) = \sum_{\ell_{k}=1}^{r_{k}} \Big(\sum_{i =1}^{n_k}\phi_{k}^{(i)}(\bs x_k) \, \bs \cD_k [i, \ell_k] \Big)^2 \,\lambda_k(\bs x_k) \end{equation} where $\bs \cD_k \in \mathbb{R}^{n_k \times r_k}$ is given by \begin{align*} \bs \cD_k[i, \ell_k] & = \frac{1}{\hat{\pi }_{ < k}(\bs x_{< k}) } \sum_{\alpha_{k-1} = 1}^{r_{k-1}} \mathcal{G}^{(\alpha_{k-1})}_{<k}(\bs x_{<k})\bs{\mathcal{B}}_k [\alpha_{k-1}, i, \ell_k]. 
\end{align*} \end{proposition} \begin{proof} The above results directly follow from the definition of conditional PDF and the marginal functions in \eqref{eq:marginal_sq1} and \eqref{eq:marginal_sqk}. \end{proof} Note that the product $\mathcal{G}_{1}(\bs x_{1}) \cdots \mathcal{G}_{k-1}(\bs x_{k-1})$ requires $k-1$ univariate interpolations and $k-2$ products of matrices per sample, that is, the same operations as in the standard inverse Rosenblatt transport. The QR decomposition~\eqref{eq:SIRT-QR} and the construction of the coefficient tensors~\eqref{eq:B_recur} need $\mathcal{O}(dnr^3)$ operations, but these are pre-processing steps that are independent of the number of samples. However, in contrast to the vector-valued function $\mathcal{F}_k(\bs x_k)\bar{\mathcal{F}}_{k+1} \cdots \bar{\mathcal{F}}_{d} \in \mathbb{R}^{r_{k-1}}$, in evaluating the PDF $\hat{f}_{\hat{\bs X}}$, we need to multiply the matrix-valued function $\mathcal{L}_{k}(\bs x_{k}) \in \mathbb{R}^{ r_{k-1} \times r_{k} }$ for each sample. Thus, the leading term of the complexity becomes $\mathcal{O}(Ndnr^2)$, one order of $r$ or $n$ higher than the complexity of the standard inverse Rosenblatt transport. However, for small $r$ and $n$ this is well compensated by a smoother map, which will be crucial in Section~\ref{sec:dirt}. \subsection{Implementation of CDFs}\label{sec:sirt_cdf} To evaluate the SIRT, one has to first construct the marginal CDF of $\hat{\bs X}_1$ and the conditional CDFs of $\hat{\bs X}_k | \hat{\bs X}_{<k}$ for $k > 1$, and then invert the CDFs (see \eqref{eq:inverse_trans}). Here we discuss the computation and the inversion of CDFs, which are based on pseudo-spectral methods, for problems with bounded domains and extensions to problems with unbounded domains. We refer the readers to \cite{boyd2001chebyshev,shen2011spectral,trefethen2019approximation} and references therein for more details. \subsubsection{Bounded domain with polynomial basis} For a bounded parameter space $\mathcal{X}\subset\mathbb{R}^d$, we consider the weighting function $\lambda(\bs x) = 1$. Since $\mathcal{X}$ can be expressed as a Cartesian product, without loss of generality, here we discuss the CDF of a one-dimensional random variable $Z$ with PDF \begin{equation}\label{eq:one_pdf} \hat{f}_{Z}(\zeta) = \sum_{\ell=1}^{r} \Big(\sum_{i =1}^{n}\phi^{(i)}(\zeta) \, \bs \cD [i, \ell] \Big)^2, \end{equation} where $\{\phi^{(i)}(\zeta)\}_{i = 1}^{n}$ are the basis functions, $\bs \cD \in \mathbb{R}^{n \times r}$ is a coefficient matrix, and $\zeta \in [-1, 1]$. Here $\hat{f}_{Z}(\zeta)$ can be either the marginal PDF or the conditional PDFs defined in Proposition \ref{prop:sirt_cond_cdf} with a suitable linear change of coordinate. We first consider a polynomial basis, $\phi^{(i)}(z) \in \mathbb{P}_{n-1}$ for $i = 1, \ldots, n$, where $\mathbb{P}_{n-1}$ is a vector space of polynomials of degree at most $n-1$ defined on $ [-1, 1]$. Thus, the PDF $\hat{f}_{Z}(\zeta)$ can be represented exactly in $\mathbb{P}_{2n-2}$. To enable fast computation of the CDF, we choose the Chebyshev polynomials of the second kind \[ p_m(\zeta) = \frac{\sin\big( (m+1)\cos^{-1}(\zeta) \big)}{\sin\big(\cos^{-1}(\zeta)\big)}, \quad m = 0, 1, \ldots, 2n-2, \] as the basis of $\mathbb{P}_{2n-2}$. Using the roots of $p_{2n-1}(\zeta)$, we can define the set of collocation points \[ \big\{\zeta_m \big\}_{m=1}^{2n-1}, \quad \textrm{where} \quad \zeta_m = \cos\Big(\frac{m \pi} {2n}\Big). 
\] This way, by evaluating $\hat{f}_{Z}(\zeta)$ on the collocation points, which needs $\mathcal{O}(nr)$ operations, one can apply the collocation method (Chapter 4 of \cite{boyd2001chebyshev}) to represent $\hat{f}_{Z}(\zeta)$ using the Chebyshev basis: \begin{equation}\label{eq:cheby_pdf} \hat{f}_{Z}(\zeta) = \sum_{m=0}^{2n-2} a_m \, p_m(\zeta) , \end{equation} where the coefficients $\{a_m\}_{m=0}^{2n-2}$ can be computed by the fast Fourier transform with $\mathcal{O}(n\log(n))$ operations. Then, one can express the CDF of $Z$ as \begin{equation}\label{eq:cheby_cdf} F_Z(\zeta) = \int_{-1}^\zeta \hat{f}_{Z}(\zeta^\prime) d\zeta^\prime = \sum_{m=0}^{2n-2} \frac{a_m}{m+1} \big(t_{m+1}(\zeta) - t_{m+1}(-1) \big), \end{equation} where $t_m(\zeta) = \cos\big( m \cos^{-1}(\zeta) \big)$ is the Chebyshev polynomial of the first kind of degree $m$. A random variable $Z$ can be generated by drawing a uniform random variable $U$ and evaluating $Z = F_Z^{-1}(U)$ by solving the root finding problem $F_Z(Z) = U$. \begin{remark} The PDF in \eqref{eq:one_pdf} is non-negative for all $\zeta \in [-1, 1]$ by construction and can be represented exactly in $\mathbb{P}_{2n-2}$ with the polynomial basis. Thus, its Chebyshev representation in \eqref{eq:cheby_pdf} is also non-negative. This way, the resulting CDF in \eqref{eq:cheby_cdf} is monotonic, and thus the inverse CDF equation, $F_Z(Z) = U$, admits a unique solution. \end{remark} \begin{remark} One can also employ piecewise Lagrange polynomials as a basis to enable hp-adaptivity. With piecewise Lagrange polynomials, the above-mentioned technique can also be used to obtain the piecewise definition of the CDF. \end{remark} Since $F_Z(Z) = U$ has a unique solution and $F_Z$ is monotone and bounded in $[0,1]$, it usually takes only a few iterations of root finding methods, such as the regula falsi method and Newton's method, to solve $F_Z(Z) = U$ with an accuracy close to machine precision. Overall, the construction of the CDF needs $\mathcal{O}(nr + n\log(n))$ operations, and the inversion of the CDF function needs $\mathcal{O}(c n )$ operations, where $\mathcal{O}(n)$ is the cost of evaluating the CDF and $c$ is the number of iterations required by the root finding method. In comparison, building the matrix $\bs \cD$ requires $\mathcal{O}(nr^2)$ operations (cf. Proposition \ref{prop:sirt_cond_cdf}). \subsubsection{Bounded domain with Fourier basis} If the Fourier transform of the PDF of $Z$, which is the characteristic function, is band-limited in the frequency domain, then one may choose the sine and cosine Fourier series as the basis for representing the PDF in \eqref{eq:one_pdf}. In this case, the above strategy can also be applied. Recall the Fourier basis with an even cardinality $n$ \[ \big\{ 1, \ldots, \sin(m \pi \zeta), \cos (m \pi \zeta), \ldots, \cos(n \pi \zeta / 2) \big\}, \quad m = 1, \ldots, n/2-1, \] which consists of $n/2-1$ sine functions and $n/2+1$ cosine functions. The PDF $\hat{f}_{Z}(\zeta)$ defined in \eqref{eq:one_pdf} yields an exact representation using the Fourier basis with cardinality $2n$. 
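As a numerical illustration of this construct-then-invert procedure for the bounded-domain polynomial case, the following Python sketch builds a Chebyshev representation of a one-dimensional PDF, integrates it to obtain the CDF, and inverts the CDF by root finding. It assumes numpy and scipy are available and uses numpy's Chebyshev class (first kind, fitted at Chebyshev points) in place of the second-kind basis and FFT route described above, so it is only a stand-in for the actual pseudo-spectral implementation.
\begin{verbatim}
import numpy as np
from numpy.polynomial import Chebyshev
from scipy.optimize import brentq

def sample_from_pdf(pdf, u, degree=38):
    """pdf: non-negative function on [-1, 1]; u: uniform draw in (0, 1)."""
    # Chebyshev points as collocation nodes (degree + 1 of them).
    nodes = np.cos(np.pi * (2.0 * np.arange(degree + 1) + 1.0)
                   / (2.0 * (degree + 1)))
    p = Chebyshev.fit(nodes, pdf(nodes), degree, domain=[-1.0, 1.0])
    cdf = p.integ()                      # antiderivative, again a Chebyshev series
    z_hat = cdf(1.0) - cdf(-1.0)         # normalising constant
    F = lambda z: (cdf(z) - cdf(-1.0)) / z_hat
    # F is monotone with F(-1) = 0 and F(1) = 1, so the root is unique.
    return brentq(lambda z: F(z) - u, -1.0, 1.0)

# Example with a squared-expansion density mimicking the form of the PDF above.
pdf = lambda z: (0.3 + 0.7 * z) ** 2 + (0.5 - 0.2 * z) ** 2
print(sample_from_pdf(pdf, np.random.default_rng(0).uniform()))
\end{verbatim}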
With the Fourier basis above, one can represent $\hat{f}_{Z}(\zeta)$ as \[ \hat{f}_{Z}(\zeta) = a_0 + \sum_{m=1}^{n} a_m \cos(m \pi \zeta) + \sum_{m=1}^{n-1} b_m \sin(m \pi \zeta), \] where the coefficients $a_m$ and $b_m$ are obtained by evaluating $\hat{f}_{Z}(\zeta)$ on the collocation points \[ \big\{\zeta_m \big\}_{m=1}^{2n}, \quad \textrm{where} \quad \zeta_m = \frac{m}{n} - 1, \] and applying the rectangular rule. This leads to the CDF \[ F_Z(\zeta) = \int_{-1}^\zeta \hat{f}_{Z}(\zeta^\prime) d\zeta^\prime = a_0(\zeta + 1) + \sum_{m=1}^{n} \frac{a_m}{m\pi}
Assignment 1: minesweeper, MIPS minesweeper version: 1.6 last updated: 2021-10-18 20:00:00 Aims • to give you experience writing MIPS assembly code • to give you experience with data and control structures in MIPS Getting Started Create a new directory for this assignment called minesweeper, change to this directory, and fetch the provided code by running these commands: mkdir minesweeper cd minesweeper 1521 fetch minesweeper If you're not working at CSE, you can download the provided files as a zip file or a tar file. This will add the following files into the directory: • grid.h: some grid related constants in C, such as its size. • grid.s: the MIPS version of grid.h. • beginner.[hs]: more constants in C and MIPS. • intermediate.[hs]: more constants in C and MIPS. • expert.[hs]: more constants in C and MIPS. • minesweeper.c: a reference implementation of Minesweeper in C. • minesweeper.s: a stub assembly file to complete. Minesweeper How to play Minesweeper minesweeper.c is an implementation of the classic game Minesweeper. The objective of the game is to clear a board that contains hidden "mines" or bombs without detonating any of them. To do this, you will need to use the clues on the board, which indicate how many bombs are in the adjacent cells. At any point, you can perform 2 actions: • Marking a cell - marking or flagging a cell that you think might be a bomb, • Revealing a cell - revealing the cell. If it is an empty cell, then all of the surrounding empty cells will also be revealed. Revealing a cell containing a bomb is game over. Once you have revealed all cells that do not contain a bomb, you win the game. Before starting the assignment, it is recommended that you understand how to play Minesweeper. A good place to start is Google's built-in minesweeper game that can be found here. minesweeper.c The assignment version of Minesweeper is played on a grid represented by a 2D array of 8-bit ints (int8_t **grid). Each element in the 2D array represents a cell on the grid. For each element in the 2D array, 7 of the 8 bits are used to represent information regarding that cell, as shown in the diagram below. • is_marked - 1 if cell is marked, 0 if not. • is_revealed - 1 if cell is revealed, 0 if not. • is_bomb - 1 if cell is a bomb, 0 if not. • value - number of bombs in adjacent cells, including diagonally adjacent cells. This will hold the value of 0 - 8 as there are a total of 8 adjacent cells. The value 0 represents an empty cell. You can use bitwise operations to extract or set relevant bits. The relevant masks are #defined at the top of the minesweeper.c file. By default, the game is played on a 10x10 grid. However, you can change these settings to play on a different sized grid by changing the #include provided. As a brief guidline, the standard levels of Minesweeper are: • Beginner: 9 x 9 grid, with max 10 bombs. • Intermediate: 16 x 16 grid, with max 40 bombs. • Expert: 16 x 30 grid, with max 90 bombs. The assignment Minesweeper runs on an infinite loop, which means that you can play multiple rounds of Minesweeper. For each round, your score will be calculated based on how many cells are left to be revealed. Once you have won or lost a game, you will be prompted for input on whether you would like to play again, or if you would like to print out the user scores so far. The assignment implementation keeps track of the last 10 rounds of Minesweeper. 
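Before running the program, it may help to see how the per-cell bits described above can be read and written with bitwise operations. The short Python sketch below uses hypothetical mask values; the real ones are the #defines at the top of minesweeper.c and may differ, so treat this as an illustration of the masking pattern only.

VALUE_MASK       = 0x0F   # hypothetical: low 4 bits hold the 0-8 adjacency count
IS_BOMB_MASK     = 0x10   # hypothetical flag bits
IS_REVEALED_MASK = 0x20
IS_MARKED_MASK   = 0x40

def describe(cell):
    return {
        "value":       cell & VALUE_MASK,
        "is_bomb":     bool(cell & IS_BOMB_MASK),
        "is_revealed": bool(cell & IS_REVEALED_MASK),
        "is_marked":   bool(cell & IS_MARKED_MASK),
    }

def toggle_mark(cell):
    return cell ^ IS_MARKED_MASK      # marking and unmarking flip the same bit

print(describe(0x13))                 # bomb bit set, value bits = 3 (hypothetical layout)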
Running minesweeper.c First compile and run minesweeper.c by running these commands: dcc -o minesweeper minesweeper.c ./minesweeper Once you run the program, you will be prompted for some user input. • Number of bombs - number of bombs on the grid. Input should be an integer from 1 to 91 inclusive. • Seed - number used to generate bombs on the grid. Different seeds will generate different grids. • Debug mode - allows the entire grid to be immediately revealed. Input should be 0 or 1. • Username - any string you would like to use to identify yourself. This will be used to keep track of your score, and the running high score. As an example: How many bombs on the grid? 10 Seed: 2 Debug Mode: 0 Reveal Any Cell to Begin...: Total Bomb Count: 10 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - What's your first move? (action row col) If debug mode is 1, then the output will look like this: How many bombs on the grid? 20 Seed: 2 Debug Mode: 1 Reveal Any Cell to Begin...: Total Bomb Count: 20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 What's your first move? (action row col) IMPORTANT: Note that the grid is initially always completely empty (all zeroes). This is done as the bombs on the grid are only generated after you reveal the first cell. This is so that your first input will never cause you to immediately lose. Playing minesweeper.c To play the game you need to provide input in the form of action row col, where • action - either 0 for marking, 1 for revealing or -1 to exit the program. • row - row of cell you want to perform action on. This is zero indexed. • col - column of cell you want to perform action on. This is zero indexed. As an example, providing the input 1 4 4 will reveal the cell at (4, 4). This input will generate the board, which will look something like this (note that this example follows on from the above example, i.e. the seed used is 2): Total Bomb Count: 10 0 0 0 0 1 - - - - - 0 0 0 0 1 - - - - - 0 0 0 0 1 1 2 - - - 0 0 0 0 0 0 2 - - - 0 0 0 0 0 0 2 - - - 0 0 0 1 1 1 2 - - - 0 0 1 2 - - - - - - 0 0 1 - 2 1 1 - - - 0 0 1 1 1 0 1 - - - 0 0 0 0 0 0 1 - - - What's your next move? (action row col) To mark a cell, you would do something like this: 0 7 3 This marks the cell at (7, 3) by placing an X on the cell. Marking a bomb decreases the total bomb count by 1. The total bomb count can be negative if your number of marked cells is larger than the total number of bombs on the grid. Note that you can unmark a cell by providing the same input again. This will increase the total bomb count. What's your next move? (action row col) 0 7 3 Total Bomb Count: 9 0 0 0 0 1 - - - - - 0 0 0 0 1 - - - - - 0 0 0 0 1 1 2 - - - 0 0 0
in need of tagging are found. :) The risk of supervolcanos looks higher than previously thought, though none are imminent. Is there anything conceivable which can be done to ameliorate the risk? 2gwern9yThe only suggestion I've heard is self-sustaining space colonies, which obviously is not doable anytime soon. Depending on the specifics, buried bunkers might work, as long as they're not on the same continent to be buried in the ash or lava. 0faul_sname9yHow long will it actually take for a self-sustaining colony in LEO to be plausible? We have the ISS and Biosphere 2, and have for quite some time. Zero G poses some problems, but certainly not insurmountable ones. It looks like we have at least a few hundred years of advance notice, which would likely be enough time to set up an orbital colony even with only current technology. Besides, it looks like past a couple hundred miles, the eruption would be survivable without any special, though agriculture would be negatively impacted. 0NancyLebovitz9ySounds like something best left for the future, which I hope will have much better tech. Tectonic engineering? Forcefields? Everyone uploaded? 1gwern9ySome existential risks simply may be intractable and the bullet bitten. It's not like we can do anything about [http://lesswrong.com/lw/cih/value_of_information_8_examples/6o6y] a vacuum collapse either. I am doing a study on pick-up artistry. Currently I'm doing exploratory work to develop/find an abbreviated pick-up curriculum and operationalize pick-up success. I've been able to find some pretty good online resources*, but would appreciate any suggestions for further places to look. As this is undergraduate research I'm on a pretty nonexistent budget, so free stuff is greatly preferred. That said I can drop some out of pocket cash if necessary. If anyone with pick-up experience can talk to me, especially to give feedback on the completed materials that would be great. *Seduction Chronicles and Attractology have been particularly useful If you will need to convince a professor to someday give you a passing grade on this work I hope you are taking into account that most professors would consider what you are doing to be evil. Never, ever describe this kind of work on any type of graduate school application. Trust me, I know a lot about this kind of thing. 3Kaj_Sotala9yI'd be curious to hear more about the details of that episode. 7James_Miller9yI wrote up what happened for Forbes. [http://www.forbes.com/forbes/2004/0607/054.html]. I later found out that it was Smith's President not its Board of Trustees that finally decided to give me tenure. 3Kaj_Sotala9yHuh. I knew that academia had a liberal bias, but I didn't know it was quite that bad. 2beoShaffer9yThe professor who will be grading this has actively encouraged this research topic and multiple other professors at my school have expressed approval, with none expressing disapproval. 3James_Miller9yImpressive. What school? 0beoShaffer9yKnox College [http://www.knox.edu/] 1KPier9yYour article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think? I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue? 7James_Miller9yRacism, sexism and homophobia are the three primary evils for politically correct professors. 
From what I've read of pick-up (i.e. Roissy's blog) it is in part predicated on a negative view of women's intelligence, standards and ethics making it indeed sexist. See this [http://en.wikipedia.org/wiki/Lawrence_Summers#Differences_between_the_sexes] to get a feel for how feminist react to criticisms of women. Truth is not considered a defense for this kind of "sexism". (A professor suggested I should not be teaching at Smith College because during a panel discussion on free speech I said Summers was probably correct.) I've never discussed pick-up with another professor but systematically manipulative women into having sex by convincing them that you are something that you are not (alpha) would be considered by many feminist, I suspect, as form of non-consensual sex. 2[anonymous]8yHow comes they describe that in terms of 'convincing them that you are something you are not' rather than 'becoming something you didn't use to be'? Do they think people have an XML tag attached that reads 'beta' or something, independent of how they behave and interact? To me, the idea of convincingly faking being alpha makes as much sense as that of convincingly faking being fluent in English, and sounds like something a generalization of the anti-zombie principle would dismiss as utterly meaningless. 2Viliam_Bur8yThe given "something" is a package consisting of many parts. Some of them are easy to detect, some of them are difficult to detect. In real life there seems to be a significant correlation between the former and the latter, so people detect the former to predict the whole package. After understanding this algorithm, other people learn the former parts, with intention to give a false impression that they have the whole package. The whole topic is difficult to discuss, because most package-detectors have a taboo against speaking about the package (especially admitting that they want it), and most package-fakers do not want to admit they actually don't have the whole package. Thus we end with very vague discussions about whether it is immoral to do ...some unspecifed things... in order to create an impression of ...something unspecified... when ...something unspecified... is missing; usually rendered as "you should be yourself, because pretending otherwise is creepy". Which means: "I am scared of you hacking my decision heuristics, so I would like to punish you socially." 1[anonymous]8yWhat is it that is difficult to detect in a person and still people care about potential partners having it? Income? (But I don't get the impression that the typical PUA is poverty-stricken, and I can't think of reasons for people to care about that in very-short-term relationships, which AFAIK are those most PUAs are after.) Lack of STDs? (But, if anything, I'd expect that to anticorrelate with alpha behaviour.) Penis size? (But why would that correlate with behaviour at all?) -2Viliam_Bur8yI guess it is how the person will behave in the future, and in exceptional situations. We can predict it based on person's behavior here and now, unless that behavior is faked to confuse our algorithms. Humans are not automatically strategic and nature is not antropomorphic, but if I tried to translate the nature's concerns for "I want an alpha male", it would be: "I want to be sure my partner will be able to protect me and our children in case of conflict." 
This strategy is calibrated for an ancient environment, so it sometimes fails, but often it works; some traits are also useful now, and even the less useful traits still make impression on other people, so they give a social bonus. (For example higher people earn more on average, even if their height is not necessary for their work.) Of course there is a difference between what our genes "want" and what we want. I guess a typical human female does not rationally evaluate male's capacity of protecting her in combat, but it's more like unconscious evaluation + halo effect. A male unconsciously evaluated as an alpha male seems superior in almost everything; he will seem at the same time stronger, wiser, more witty, nicer, more skilled, spiritually superior, whatever. A conflicting information will be filtered away. ("He beat the shit out of those people, because they looked at him the wrong way, and he felt the need to protect me. He loves me so much! No, he is not agresssive; he may give that impression, but only because you don't really know him. In fact he is very gentle, he has a good heart and
budget deficits become significant, LMGDP gives you a completely new viewpoint from which you may evaluate the situation. In particular, it's useful to quantify the countries' debt as percentage of the annual LMGDP. For countries like Greece, you will get a much larger – and much more realistic – percentage. The Greek "actual" or "undistorted" GDP – and LMGDP is a way to quantify it (although I think that an even lower figure than LMGDP encodes the "corrected equivalent GDP" of Greece) – is actually much smaller than the conventionally used GDP. That's why the percentage we often use – debt at 160% of the GDP, and so on – understates the severity of the actual problem (a greater-than-visual-field problem that will only be seen in its entirety once Greece actually starts to cure the imbalances if it ever does). Whatever currency Greece will use, if it doesn't bankrupt, it will ultimately have to see that the actual "undistorted GDP" – e.g. LMGDP which isn't artificially doped by the budget deficit steroids – is much smaller, and the debt is therefore a significantly higher multiple of the undistorted GDP. If the U.S. turns out to be unable to bring its deficit to tolerable levels closer to zero – whether or not its economists fabricate a new propaganda "explaining" why such a deficit reduction is a bad thing – it will only mean one thing: that the country's problem is really a long-term one and serious investors should trust the country in the future much less than they have trusted it so far. If America isn't capable of swallowing the "fiscal cliff pill" sufficiently quickly, then it's a country that is addicted to debt and going towards the Greek-like "fiscal chasm" – and be sure that the sign of this chasm is opposite than the sign of the good old fiscal cliff. And that's the memo. #### snail feedback (18) : reader Luke Lea said... I take it you would be opposed to a monetary policy designed to temporarily increase inflation to the five or six percent range as a way to decrease the real debt of of both government and consumers, decrease real wages, and bring about an expansion of employment at a lower standard of living? reader Luboš Motl said... Yes, of course, I would be opposed - in my country or in any country I care about. I think that all such distortions of the market conditions - e.g. your "deliberate inflation bump" - are undesirable. Stable currency is the fairest dependence of its value on time because stability was what was assumed in all the contracts. Your sleight of hand is nothing else than a selective punishment of savers and "affirmative action" bringing an advantage to those in debt - which includes the (typical Western) government. too. Such a policy is particularly sick in the situation when the debt is already high because it encourages the chronic borrowers (including the government) to borrow even more in the future while it spanks the savers - and responsible public bodies etc. - and tell them that they should pile debt just like everyone else. It's not only doubly unfair but it's also accelerating all the negative trends. In my country, such pro-inflation ideas, while widespread among the top bankers, are even more sick because we're a country of savers rather than borrowers and inflation leads to the decrease of the real wealth of the real savers who are extremely important for the home consumption, and therefore reduces the real domestic consumption. 
Also, your hypothetical positive consequences of the maneuver are rubbish in the medium and long term because foreign lenders will inevitably demand higher interest rates to cover the inflation and expected possible later inflation, too. I think that your last sentence must have some mistake in it. Income tax is something else than tax on savings, isn't it? I consider the value-added-tax to be a very modern way for the government to get revenue and I think that the system in which the VAT/sales tax would be higher and income tax could be completely eliminated would be superior because of many reasons. But I am amazed by the adjective "graduated" that you stubbornly insert everywhere. Do you mean progressive? I surely oppose progressive tax, both progressive income tax and progressive sales tax. I can't even imagine how a sales tax could be progressive, when it comes to practical issues. We've been the most socialized country in the world but I assure you that both our income tax as well as the value-added tax is flat. Well, we have 2 rates for VAT tax, 14% and 20% or so, for different kinds of products and services. It's a detail. But it's surely not progressive as a function of the consumers' wealth. It would be a complete legal and bureaucratic nightmare to make the rate wealth-dependent in this way. reader Luke Lea said... I take it, then, that you are ok with the "natural" distribution of income (and consumption) in a market economy. Fair enough. Does it necessarily follow that you would favor free trade and free mobility of capital between the West and a vast, low-wage Goliath like China? Should the effects of such trade on the distribution of income between labor and capital (lowering wages but increasing total GDP) be of concern? reader anna v said... Inflation is part and parcel of a capitalistic economy if left without checks and balances, and even with them. I worked in the summers in the US back in 1960 for 132 $a month. Those jobs now get a multiple of that . Inflation happened though probably not controlled. In a sense you propose a controlled stability, not free capitalism, the opposite view proposes a controlled inflation ( which I think I was taught back in 1961 in economics 101 as the way to go). Another example of "inflation happens in uncontrolled capitalistic systems" is feudal Europe ( and feudalism is where such systems end up) where when they discovered and brought back the gold of the Incas the price of eggs became a gold coin in Spain. The effect will be the same if , for example, solar panel technology becomes very cheap and energy prices fall by 90%. The world economy at the moment is experimenting in free for all capitalism with no checks and balances either way. How can islands of stability, as you propose, survive in this environment? reader Luboš Motl said... Dear Luke, I don't know whether it follows but I obviously support tree trade, between rich and poor, straight eyes or slanted eyes, everyone. Of course, the Czech economy heavily depends on trade's being (almost) free. And I assure you that our products are "mostly" competitive with the Chinese ones even when it comes to the value and if they aren't, it's inevitable that the producer etc. ultimately goes out of business. Wages may only be as high as the market allows. One may be "concerned" that his wage is lower than he dreams but it's similar as being "concerned" that the Earth is spinning around its axis. 
One must just accept basic laws, and the pressure from market competition is surely such a law. Attempts to deny these basic laws are ludicrous, and hypothetical policies making free trade impossible would surely be more devastating than living according to the laws of the free market. I don't understand what is "extreme" about your theme. What is extreme is your suggestion that one should restrict free trade. Not even commies have been proposing anything of the sort in my country.

reader Luboš Motl said...

What you write is complete rubbish, Anna. Inflation has nothing whatsoever to do with the economy's being capitalist or
two expectations each of which is zero. Technical point: The behaviour as $\rho\to\pm1$ is awkward. In fact, this limit is irregular and the result as $\rho\to1$ is not the same as that for $\rho=1$; similarly for $-1$. However, these two values of $\rho$ cannot occur in our problem.}. One way of understanding this is to see that as the position is always of magnitude 1, it does not have the characteristic---associated with positive momentum---of increasing the position when the P\&L is positive. A less precise explanation is that during periods in which the market is not trending, the strategy loses money rather quickly because it is buying and selling the same size of position as it holds when a trend has been detected. By contrast, the other two (sigmoidal) functions that we have just examined only trade a small size until a trend is established, resulting in P\&L distribution with more, but smaller, losses, and fewer, but bigger, gains: that is where their positive skewness comes from. A recent piece on FX strategy \cite{Bloom12} makes this point somewhat tangentially. The authors use the simple step function and note that the performance is excellent when the market is trending but very poor otherwise. In the light of what we have just said, we are not surprised by this observation. As $\varepsilon$ is raised, notice that the skewness rises without limit, which seems rather good: however, if $\varepsilon$ is too high then the algorithm hardly ever trades, so practicalities dictate $\varepsilon\lesssim1.5$. \begin{figure}[h!] \centering \scalebox{\figscal}{\includegraphics*{momnl_fig4a.eps}} \caption{Sharpe ratio and skewness as a function of $\varepsilon$ for double-step type activation function, showing performance dropoff for $\varepsilon>0.6$.} \label{fig:n2} \end{figure} \begin{figure}[h!] \begin{tabular}{l|rrrrrrr} $\frac{w_R}{w_S}\setminus\lambda$& 0.25 & 0.50 &0.75 &1.00 &1.25& 1.50& 1.75\\ \hline $\infty$&\small 0.39\s 1.77 &\small 0.45\s 1.17 &\small 0.48\s 0.69 &\small 0.48\s 0.34 &\small 0.45\s 0.09 &\small 0.40\s $-$0.09 &\small 0.35\s $-$0.23 \\ 5.0 &\small 0.38\s 1.74 &\small 0.44\s 1.19 &\small 0.48\s 0.75 &\small 0.48\s 0.43 &\small 0.46\s 0.21 &\small 0.43\s 0.04 & \small 0.39\s $-$0.09 \\ 2.4 &\small 0.38\s 1.73 &\small 0.43\s 1.21 &\small \fbox{0.47\s 0.80} &\small 0.48\s 0.50 &\small 0.47\s 0.29 &\small 0.45\s 0.14 &\small 0.42\s 0.02 \\ 1.5 &\small 0.38\s 1.71 &\small 0.43\s 1.22 &\small 0.46\s 0.83 &\small 0.48\s 0.56 &\small 0.47\s 0.37 &\small 0.46\s 0.22 & \small 0.43\s 0.11 \\ 1 &\small 0.38\s 1.70 &\small 0.42\s 1.23 &\small 0.46\s 0.87 &\small 0.47\s 0.62 &\small 0.47\s 0.43 &\small 0.46\s 0.30 & \small 0.45\s 0.19\\ 0.67 &\small 0.38\s1.69 &\small 0.41\s 1.24 &\small 0.45\s 0.91 &\small 0.47\s 0.67 &\small 0.47\s 0.50 &\small 0.46\s 0.37 & \small 0.45\s 0.28\\ 0.4 &\small 0.37\s1.67 &\small 0.41\s 1.25 &\small 0.44\s 0.94 &\small 0.46\s 0.73 &\small 0.46\s 0.57 &\small 0.46\s 0.46 & \small 0.46\s 0.37\\ 0.2 &\small 0.37\s1.65 &\small 0.40\s 1.27 &\small 0.43\s 0.99 &\small 0.45\s 0.80 &\small 0.46\s 0.66 &\small 0.46\s 0.56 & \small 0.46\s 0.48 \\ 0 &\small 0.37\s1.63 &\small 0.39\s 1.29 &\small 0.41\s 1.05 &\small 0.43\s 0.89 &\small 0.44\s 0.77 &\small 0.45\s 0.69 & \small 0.45\s 0.62\\ \end{tabular} \caption{Sharpe ratio and skewness as a function of $\lambda$ and the ratio $w_R/w_S$ for compound sigmoidal activation function. In each pair of numbers, the first is the SR, and the second is the skewness.} \label{fig:n3} \end{figure} \begin{figure}[h!] 
\begin{center} \scalebox{\figscal}{\includegraphics*{momnl_fig4c.eps}} \end{center} \caption{Sketch of the activation function highlighted in Figure~\ref{fig:n3} (parameters: $w_R=0.71$, $w_S=0.30$, $\lambda=0.75$).} \label{fig:n4} \end{figure} \begin{figure}[h!] \begin{center} \scalebox{\figscal}{\includegraphics*{momall_oiltest.eps}} \end{center} \caption{Trend-following oil over a decade, using the sigmoid and reverting sigmoid for activation function ($\lambda=0.75$, $N_1=10$, $N_2=20$). The reverting sigmoid fails to capitalise on the full selloff in late 2014. } \label{fig:oil} \end{figure} \subsection{Skewness computation} We show in Figure~\ref{fig:n1} the term structure of skewness for the different examples given above: sigmoid (a,b), reverting sigmoid (c,d), double-step (e,f). The linear result is overlaid for comparison. The precise choice of momentum crossover does not affect the main conclusion, and we have used EMA2 with $N=20,40$ throughout. Using a faster or slower momentum measure simply stretches or compresses the graph in a horizontal direction, as it did in the linear models. It is apparent from the results that as the activation function becomes progressively less linear, the main effect is to compress the graph in a vertical direction, so that the maximum skewness is reduced. With the reverting sigmoid, the graph can be affected much more, to the extent of becoming negative when $\lambda$ is high enough: we predicted this earlier when remarking that $H_k$ could become negative as a result of the activation function being decreasing over much of its domain, so the model spends a lot of time incrementally trading against the trend rather than with it. (In fact for $\psi(z)=ze^{-\lambda^2z^2/2}$ the critical $\lambda$, above which the skewness is no longer everywhere positive, is around 1.3. An explanation is in the Appendix.) The general conclusion so far is that any capping effect in the activation function will cause the trading returns to be less positively skewed, and any reverting effect will exacerbate this reduction in skewness. From the perspective of skewness alone, these effects should be avoided as much as possible. However, they may well be justified by reason of risk management and/or expected return, so we consider these next. \subsection{Empirical analysis and expected return} Analysis of the expected return is a totally different proposition because there are no theoretical guidelines at all. One can only adopt an empirical approach, seeing what has worked in the past, and relying on it continuing to do so. \notthis{ For example, if we are preferring the reverting sigmoid to the simple sigmoid (as defined in the previous section), then we are saying that when momentum becomes very strong it is wise to take profits because on average the market tends to reverse. Note however that historical information on this is rare because the momentum factor is only rarely large, so to an extent the design of such strategies is by fiat. The results will necessarily be somewhat subjective because they depend on what data are used. } We need to decide what objective function is to be maximised, and the most natural thing to do is to maximise the Sharpe ratio (SR) of the trading strategy, i.e.\ use an objective function that directly relates to trading model performance. As the SR is the expected return divided by the volatility, we will be penalising any effect that increases volatility without generating enough extra return. 
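Because the comparison of activation functions and the Sharpe-ratio/skewness trade-off is described only verbally here, the following is a minimal, self-contained sketch (not the authors' code) of how such a comparison can be set up. The reverting sigmoid $ze^{-\lambda^2z^2/2}$ and the double-step with dead zone of half-width $\varepsilon$ are taken from the text; the "simple sigmoid" is approximated by tanh, the EMA periods $N_1=10$, $N_2=20$ follow the oil example, and the price path is a toy random walk, so the numbers are purely illustrative.

```python
# Minimal sketch: map a normalised momentum signal z through different
# activation functions and compare Sharpe ratio / skewness of the resulting
# trend-following P&L on a simulated price series.
import numpy as np

def ema(x, n):
    """Exponentially weighted moving average with span n."""
    alpha = 2.0 / (n + 1.0)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def positions(prices, activation, n1=10, n2=20):
    """EMA2 crossover momentum, crudely normalised, passed through the
    activation function to give the position size."""
    z = ema(prices, n1) - ema(prices, n2)
    z = z / (np.std(z) + 1e-12)
    return activation(z)

def stats(prices, activation):
    pos = positions(prices, activation)
    pnl = pos[:-1] * np.diff(prices)      # trade yesterday's position on today's move
    sr = np.mean(pnl) / (np.std(pnl) + 1e-12) * np.sqrt(252)
    skew = np.mean((pnl - pnl.mean())**3) / (np.std(pnl) + 1e-12)**3
    return sr, skew

# Candidate activation functions discussed in the text
sigmoid     = np.tanh                                         # placeholder form
reverting   = lambda z, lam=0.75: z * np.exp(-lam**2 * z**2 / 2)
double_step = lambda z, eps=0.6: np.sign(z) * (np.abs(z) > eps)
linear      = lambda z: z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = 100 + np.cumsum(rng.normal(0.02, 1.0, 5000))     # toy trending walk
    for name, f in [("linear", linear), ("sigmoid", sigmoid),
                    ("reverting", reverting), ("double-step", double_step)]:
        sr, sk = stats(prices, f)
        print(f"{name:12s}  SR={sr:5.2f}  skew={sk:5.2f}")
```

On real futures data one would, as described below, average the Sharpe ratios across contracts and speeds rather than rely on a single simulated path.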
Taking a range of futures contracts across different asset classes (stocks, bonds, FX, commodities) and a range of EMA2 periods (5 vs 10 days, 10 vs 20 days, etc.), we have run trading simulations over the available history, which is typically 20 years or more, and calculated the Sharpe ratio; this gives a list of Sharpe ratios, one for each contract and speed. For simplicity we are going to use the same activation function across all contracts and speeds. We then average the list of Sharpe ratios and use this as our performance indicator, to be maximised\footnote{The degree of temporal and cross-asset-class diversification that can be obtained is governed by what is commonly known as `breadth' and explained in detail by Grinold \cite{Grinold99}.}. We first examine the double-step activation function. Here there is only one parameter to adjust, namely $\varepsilon$, the half-width of the `dead zone'. Figure~\ref{fig:n2} shows the performance as a function of $\varepsilon$. It is not surprising that the SR drops off as $\varepsilon$ becomes large, because the strategy hardly ever has a position on and can never make any money. What is interesting is that the performance for $\varepsilon<0.6$ is so flat. Thus from the perspective of expected return, one may as well choose any $\varepsilon<0.6$. However, when we overlay the conclusion about skewness, we can sharpen this deduction. As the skewness has a term structure, we look at one return-period ($\pd$) throughout: we choose $\pd=100$ for convenience, this being the top of the curve for a linear activation function when the EMA periods are 20,40 (see Figure~\ref{fig:n1}(b,d,f)). The skewness is also shown in Figure~\ref{fig:n2}, and clearly it increases with increasing $\varepsilon$, so from that perspective alone we prefer $\varepsilon$ as high as possible. If we can push $\varepsilon$ up to about 0.6 without decreasing the SR, and in doing so can have positive skewness as well, then we should do just that. So this is our first conclusion about design of nonlinear momentum strategies: the blue line in Figure~\ref{fig:n1}(e), $\varepsilon=0.6$, is a good construction. Next we turn to the sigmoidal functions that we
$\ldots,\xi_n)$ is continuous and the random variables $\xi_1, \xi_2, \ldots, \xi_n$ are independent, then the densities of the variables are related by the formula
$$p_{\xi_1,\ldots,\xi_n}(x_1,\ldots,x_n) = p_{\xi_1}(x_1)\cdots p_{\xi_n}(x_n), \qquad x_i \in (-\infty,\infty),\ i=1,2,\ldots,n.$$
For discrete independent random variables $\xi_1, \xi_2, \ldots, \xi_n$ the equality
$$P\{\xi_1=z_1,\ldots,\xi_n=z_n\} = P\{\xi_1=z_1\}\cdots P\{\xi_n=z_n\}$$
is valid for all $z_i \in (-\infty,\infty)$, $i=1,\ldots,n$.

1.1.2 Moments of random variables

First we recall the definition of the Stieltjes integral for a distribution function $F(x)$ and a function $f(x)$ continuous on an interval $[a,b]$ of the real line. We partition the interval $[a,b]$ into $n$ subintervals $[x_i, x_{i+1}]$ such that $a = x_0 < x_1 < x_2 < \cdots < x_n = b$ and calculate a sum
$$S = \sum_{i=0}^{n-1} f(\tilde{x}_i)\,\bigl[F(x_{i+1}) - F(x_i)\bigr],$$
where $\tilde{x}_i$ is any number from the interval $[x_i, x_{i+1}]$. If the sum $S$ tends to a finite limit as
$$\max_{0\le i\le n-1} |x_{i+1}-x_i| \to 0, \qquad n \to \infty,$$
and this limit does not depend on the particular sequence of partitions and on the choice of the points $\tilde{x}_i$, then it is called the Stieltjes integral of $f(x)$ with respect to the distribution function $F(x)$ and is denoted by
$$\int_a^b f(x)\,dF(x).$$
In what follows we assume that the Stieltjes integral of a function $f(x)$ with respect to a function $F(x)$ exists if and only if the corresponding integral of the function $|f(x)|$ exists. By definition,
$$\int_{-\infty}^{\infty} f(x)\,dF(x) = \lim_{\substack{a\to-\infty\\ b\to\infty}} \int_a^b f(x)\,dF(x).$$

Let $\xi$ be a random variable defined on a probability space $(\Omega, \mathfrak{F}, P)$. The mathematical expectation (or mean value or, simply, mean) of the random variable $\xi$ is the number
$$E\xi = \int_{-\infty}^{\infty} x\,dF(x),$$
where $F(x)$ is the distribution function of $\xi$. The mathematical expectation of a discrete random variable $\xi$ taking values $\cdots < x_{-1} < x_0 < x_1 < \cdots < x_n < \cdots$ is calculated by the formula
$$E\xi = \sum_{k=-\infty}^{\infty} x_k\,P\{\xi = x_k\};$$
if $\xi$ is a continuous random variable with density $p_\xi(x)$, the integral in the definition of expectation reduces to the usual Riemann integral, namely,
$$E\xi = \int_{-\infty}^{\infty} x\,p_\xi(x)\,dx.$$
We list the basic properties of expectation:
(1) $E(C\xi) = C\,E\xi$ for any constant $C$;
(2) $E(\xi_1+\xi_2) = E\xi_1 + E\xi_2$ if the mathematical expectations $E\xi_1$ and $E\xi_2$ exist;
(3) if $\xi_1$ and $\xi_2$ are independent random variables then $E(\xi_1\xi_2) = E\xi_1\,E\xi_2$.

The variance of a random variable $\xi$ is defined by the formula
$$\operatorname{Var}\xi = E(\xi - E\xi)^2.$$
Variance has the following basic properties:
(1) $\operatorname{Var}(C\xi) = C^2\operatorname{Var}\xi$ for any constant $C$;
(2) if $\xi_1$ and $\xi_2$ are independent random variables then $\operatorname{Var}(\xi_1+\xi_2) = \operatorname{Var}\xi_1 + \operatorname{Var}\xi_2$;
(3) let $\xi$ be a nonnegative random variable, $\xi\ge 0$, and let $\varepsilon$ be an arbitrary positive number; then the following inequality is valid:
$$(1.4)\qquad P\{\xi \ge \varepsilon\} \le \frac{E\xi}{\varepsilon}.$$
This inequality implies, for any random variable $\xi$, the Chebyshev inequality:
$$(1.5)\qquad P\{|\xi - E\xi| \ge \varepsilon\} \le \frac{\operatorname{Var}\xi}{\varepsilon^2}.$$

The moment of the $k$th order of a random variable $\xi$ is defined to be the quantity $M_k := E\xi^k$ (if the mathematical expectation exists). The numbers $E|\xi|^k$ and $\mu_k := E(\xi - E\xi)^k$ are called, respectively, the absolute and central moments of order $k$. The following relations are valid:
$$\mu_k = \sum_{j=0}^{k} \binom{k}{j} (-E\xi)^{k-j} M_j, \qquad M_k = \sum_{j=0}^{k} \binom{k}{j} (E\xi)^{k-j} \mu_j, \qquad k = 0,1,\ldots,$$
where $\mu_0 = M_0 = 1$.

The factorial and binomial moments of order $k$ are defined, respectively, by the equalities
$$[M]_k := E\,\xi(\xi-1)\cdots(\xi-k+1), \qquad B_k := E\binom{\xi}{k} = \frac{[M]_k}{k!}.$$
Obviously, $[M]_k = k!\,B_k$. The following relations are valid:
$$[M]_k = \sum_{j=0}^{k} s(k,j)\,M_j, \qquad M_k = \sum_{j=0}^{k} \sigma(k,j)\,[M]_j, \qquad k = 0,1,\ldots,$$
where $s(k,j)$ and $\sigma(k,j)$ are the Stirling numbers of the first and second kind respectively. These numbers are defined by the equalities
$$x(x-1)\cdots(x-k+1) = \sum_{j=0}^{k} s(k,j)\,x^j, \qquad x^k = \sum_{j=0}^{k} \sigma(k,j)\,x(x-1)\cdots(x-j+1), \qquad k = 0,1,\ldots.$$

1.1.3 Integer-valued random variables

Discrete random variables taking only integer values will be of special importance in this book. Such random variables are called integer-valued. Below we consider the case of nonnegative integer-valued random variables. Let $\xi$ be a nonnegative integer-valued random variable with
$$p_k := P\{\xi = k\}, \qquad Q_k := \sum_{j=k+1}^{\infty} p_j, \qquad k = 0,1,\ldots.$$
Then we have
$$(1.6)\qquad E\xi = \sum_{k=0}^{\infty} Q_k.$$

The generating function of an integer-valued random variable $\xi$ is defined by the equality
$$P(x) := \sum_{k=0}^{\infty} p_k x^k, \qquad p_k := P\{\xi = k\}.$$
It is clear that $P(x)$ is an analytic function within the circle $|x| < 1$ and, in view of the equality
$$p_k = \frac{1}{k!} P^{(k)}(0), \qquad k = 0,1,\ldots,$$
where $P^{(k)}(0)$ is the value of the $k$th derivative of $P(x)$ at the point $x=0$, it determines the distribution of $\xi$ uniquely. One can use, as an inversion formula, Cauchy's integral formula:
$$p_k = \frac{1}{2\pi i} \oint_C \frac{P(x)}{x^{k+1}}\,dx,$$
where $C$ is a contour in the complex plane enclosing the origin and lying inside the circle where $P(x)$ is analytic.

Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables and let $P_1(x), P_2(x), \ldots, P_n(x)$ be the corresponding generating functions. The generating function $P(x)$ of the random variable $\zeta = \xi_1 + \xi_2 + \cdots + \xi_n$ is given by the formula
$$P(x) = P_1(x)\,P_2(x)\cdots P_n(x).$$

For a random variable $\xi$ the functions
$$M(x) := \sum_{k=0}^{\infty} M_k \frac{x^k}{k!}, \qquad [M](x) := \sum_{k=0}^{\infty} [M]_k \frac{x^k}{k!}, \qquad B(x) := \sum_{k=0}^{\infty} B_k x^k$$
are called, respectively, the moment generating function, the factorial moment generating function and the binomial moment generating function of $\xi$. These generating functions are expressed by the generating function of the random variable $\xi$ as follows:
$$(1.7)\qquad M(x) = P(e^x), \qquad\qquad (1.8)\qquad [M](x) = B(x) = P(x+1).$$
Binomial moments of $\xi$ can be computed from the values of the derivatives of $P(x)$ at the point $x=1$:
$$B_k = \frac{1}{k!} P^{(k)}(1), \qquad k = 0,1,\ldots.$$

Let us consider some examples of integer-valued variables.

1.1.3.1 Binomial distribution

The random variable
$$\zeta^{(n)} = \xi_1 + \xi_2 + \cdots + \xi_n,$$
where $\xi_1, \xi_2, \ldots, \xi_n$ are independent and $P\{\xi_i = 1\} = p$, $P\{\xi_i = 0\} = q$, $p+q=1$, is said to have binomial distribution with parameters $(n,p)$. Obviously,
$$P\{\zeta^{(n)} = k\} = \binom{n}{k} p^k q^{n-k}, \qquad k = 0,1,\ldots,n.$$
The generating function and the binomial moment generating function of $\zeta^{(n)}$ are of the form:
$$(1.9)\qquad P_n(x) = (px+q)^n, \qquad\qquad (1.10)\qquad B_n(x) = (px+1)^n.$$
Hence the formula
$$B_k = \binom{n}{k} p^k, \qquad k = 0,1,\ldots,n,$$
for the binomial moments of $\zeta^{(n)}$ follows.

1.1.3.2 Pascal distribution

Consider a scheme of independent Bernoulli trials. Each elementary trial has probability $p$ of success and $q$ of failure, $p+q=1$. Let $\xi$ be the number of failures till the $r$th success occurs. The law of distribution of the random variable $\xi$,
$$(1.11)\qquad P\{\xi = k\} = \binom{k+r-1}{k} p^r q^k, \qquad k = 0,1,\ldots,$$
is called a Pascal distribution. The generating function and the binomial moment generating function of $\xi$ are
$$(1.12)\qquad P(x) = \left(\frac{p}{1-qx}\right)^r, \qquad\qquad (1.13)\qquad B(x) = \left(\frac{p}{p-qx}\right)^r.$$
Binomial moments can be calculated by the formula
$$(1.14)\qquad B_k = \binom{r+k-1}{k} \left(\frac{q}{p}\right)^k, \qquad k = 0,1,\ldots.$$

1.1.3.3 Poisson distribution

A random variable $\xi$ is said to have Poisson distribution with parameter $\lambda > 0$ if
$$P\{\xi = k\} = \frac{\lambda^k}{k!}\, e^{-\lambda}, \qquad k = 0,1,\ldots.$$
Using the generating function of the distribution of $\xi$,
$$(1.15)\qquad P(x) = e^{\lambda(x-1)},$$
and the binomial moment generating function of $\xi$,
$$(1.16)\qquad B(x) = e^{\lambda x},$$
we find the binomial moments of $\xi$:
$$(1.17)\qquad B_k = \frac{\lambda^k}{k!}, \qquad k = 0,1,\ldots.$$

1.1.3.4 Hypergeometric distribution

Consider a set consisting of $n$ elements, $m$ of which are of one kind and $n-m$ of the other. Denote by $\xi$ the number of elements of the first kind contained in a sample of size $r$ chosen at random from the set. The distribution of the random variable $\xi$,
$$P\{\xi = k\} = \frac{\binom{m}{k}\binom{n-m}{r-k}}{\binom{n}{r}},$$
is called hypergeometric. The binomial moments of the distribution have the form
$$B_k = \frac{\binom{m}{k}\binom{r}{k}}{\binom{n}{k}}, \qquad k = 0,1,\ldots.$$
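As a quick numerical cross-check of the relations $B(x) = P(x+1)$ and $B_k = P^{(k)}(1)/k!$, here is a small sketch of mine (not part of the book) applied to the binomial and Poisson examples above, using symbolic computation.

```python
# Sketch: verify B(x) = P(x+1) and B_k = P^{(k)}(1)/k! for the binomial and
# Poisson generating functions quoted above.
import sympy as sp

x, p, lam = sp.symbols('x p lam', positive=True)
q = 1 - p
n = 5  # a concrete n keeps the binomial example fully explicit

# Probability generating functions P(x) from the text
P_binomial = (p*x + q)**n                    # (1.9)
P_poisson  = sp.exp(lam*(x - 1))             # (1.15)

for name, P, Bk_expected in [
    ("binomial", P_binomial, lambda k: sp.binomial(n, k) * p**k),
    ("poisson",  P_poisson,  lambda k: lam**k / sp.factorial(k)),
]:
    # Binomial moment generating function via B(x) = P(x+1)
    B = sp.expand(P.subs(x, x + 1))
    for k in range(4):
        # B_k as the k-th derivative of P at x = 1, divided by k!
        Bk = sp.simplify(sp.diff(P, x, k).subs(x, 1) / sp.factorial(k))
        assert sp.simplify(Bk - Bk_expected(k)) == 0
    print(name, "B(x) =", sp.simplify(B))
```

For the binomial case the script reproduces $B(x) = (px+1)^n$ and $B_k = \binom{n}{k}p^k$; for the Poisson case it reproduces $B(x) = e^{\lambda x}$ and $B_k = \lambda^k/k!$.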
dice, each numbered 1–6, there are two possible ways to roll a 3. Thus, for the outcome of 3 (a particular macrostate) there are 2 microstates. 5-1 Expected value of a roll of a die. Diceware gives you at least 2. Simple roll of two dice: each die has six faces, so in the roll of two dice there are 36 possible combinations for the outcome. If you roll two dice of different colors, the sum of the individual dice can be equal to the numbers 2 through 12. It is (1/2) raised to the 20th power. What is the MaxEnt prior? First, we generalize to an n-sided die (at the end, we set …). As others have pointed out, there is not a solution that works 100% of the time, and you have to use rejection sampling. posted by Craig Gidney on April 23, 2013. If one of these dice rolls a 6, roll an additional d6, rolling again if this die rolls a 6, and so on, to a maximum of 4d6 damage for the entire attack. Now, whenever a die is rolled we can get either 1, 2, 3, 4, 5 or 6 dots on the uppermost face. The determinism of the Blockchain has many benefits, but prohibits the entropy required to make a pseudo-random number as random as possible. Dice rolling seems to be an ideal source of randomness if only a few bits of entropy are required, and thus methods have been proposed to expand the amount of randomness produced. Corollary 1a) except for Jan (our master of entropy). When rolling a pair of dice:
- There is only one way to roll a 2 or a 12.
- There are six ways to roll a 7.
- The probability of rolling a 7 is six times greater than that of rolling a 2 or a 12.
- The state 7 is of higher probability than the state 2 or 12.
Well, this isn't actually an advice request, but is anybody else unreasonably fascinated with the dice-rolling function on the forums? I just want to find any opportunity to use it, lol. In the first entry two dice were rolled, but in that case there are only 36 arrangements and 11 outcomes (sums from 2 to 12). You have a fair six-sided die. "Dice Club" – Yahtzee or Poker Dice – is a well-known board game. The possible outcomes when rolling one six-sided die are 1, 2, 3, 4, 5, 6. D&D Helper – Palm OS software to help speed up your Dungeons and Dragons campaign (or other type of dice-based system) by rolling dice, looking up information, and generating interesting things. Mathematicians would probably represent the range of possible numbers with something like this: [0, 1); though, because the numbers in Perl are limited to some 15 digits after the decimal point, the actual numbers rand() will generate don't cover the whole range. The entropy of Nothingness has to be the minimal possible entropy. Some interpretations of "structure in data": given some data, one can predict other data points with some confidence; one can compress the data, i.e. … Information & Entropy: in the information equation, p is the probability of the event happening and b is the base (base 2 is mostly used in information theory); the unit of information is determined by the base: base 2 = bits, base 3 = trits, base 10 = Hartleys, base e = nats. The entropy of a sum of two or twelve is thus much lower than that of a sum of six.
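To make the macrostate/microstate counting above concrete, here is a small illustrative script of my own (not from any of the quoted sources) that counts the microstates for each sum of two fair dice and compares the Shannon entropy of a single die roll with that of the two-dice sum.

```python
# Sketch: microstate counts for the sum of two fair dice, and Shannon entropy
# (in bits) of a single die roll versus the two-dice sum.
from collections import Counter
from itertools import product
from math import log2

# Count microstates for each macrostate (the sum) of two six-sided dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
total = sum(counts.values())  # 36 equally likely microstates

print("sum  microstates  probability")
for s in range(2, 13):
    print(f"{s:3d}  {counts[s]:11d}  {counts[s]/total:.4f}")

def shannon_entropy(probs):
    """H = -sum p log2 p over outcomes with nonzero probability."""
    return -sum(p * log2(p) for p in probs if p > 0)

single_die = [1/6] * 6
two_dice_sum = [counts[s] / total for s in range(2, 13)]

print("entropy of one fair die :", round(shannon_entropy(single_die), 3), "bits")
print("entropy of two-dice sum :", round(shannon_entropy(two_dice_sum), 3), "bits")
# A sum of 7 has 6 microstates while a sum of 2 or 12 has only 1, so 7 is the
# most probable macrostate; the sum distribution as a whole is less uniform
# (lower entropy) than the log2(11) bits a flat distribution would give.
```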
The two die rolls are independent and you are not allowed to communicate with your friend after the dice have been thrown, though you can coordinate beforehand. In casting a pair of dice, 7 is the most probable outcome because there are 6 ways to get a 7 and 36 total possibilities. Similarly, I don't think you could use Entropy 2 to make it so that a pair of dice would ALWAYS roll a 4+4. And the probability of rolling a six on a die does not increase over time. So the obvious solution is a von Neumann extractor. In this article, some formulas will assume that n = number of identical dice and r = number of sides on each die, numbered 1 to r, and k is the combination value. What would be the expected value and variance of a die? If you roll doubles, you may flip the placard and move it. The probability of rolling a 4 for one die is 1 in 6. The macrostates of both B and R range from 3 to 18 (there being 6^3 microstates that give rise to them). This does not show that the most random state dominates (i.e. …). So the experiment here is "rolling a 6-faced die" and the list of possible outcomes is 1, 2, 3, 4, 5, 6. Let Ω = {1, 2, 3, 4, 5, 6} be the 6 possible outcomes of a die roll; A = {1, 5, 6} ⊆ Ω would be the event that the die roll comes up as a one, five, or six. The probability of an event is just the sum of the probabilities of all of the outcomes that it contains: P(A) = P(1) + P(5) + P(6). Specifying the outcome of a fair coin flip (two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a fair die (six equally likely outcomes). What is the probability that, if a die is rolled five times, only two different values appear? In 2008, a screw-up in the RNG of one of the Debian distributions resulted in only 15 bits of entropy in their keys. For large systems, the phrase "will tend to be in" becomes "will be extremely close to". The flattest distributions are those having maximum multiplicity W in the absence of constraints. Using rolling dice is exactly the idea behind Diceware – a technique for random selection of words from a wordlist. We can easily do this for any number of dice; we just iterate and roll each die. Entropy of a die: what is the average roll and entropy of an unbiased die (6 sides)? If the probabilities are p1 = ⅛, p2 = ¼, p3 = ⅛, p4 = ⅛, p5 = ¼, p6 = ⅛ (p1 is the probability of rolling a 1, p2 the probability of rolling a 2, etc.)… On the other hand, there is no obvious way of estimating how difficult a long natural-language passphrase like "Blue Light shines from the small Bunny onto the Lake" would really be for a password cracker. For 3 rolls, 100% - 57. I'm gonna start making a random character, just to roll. The most commonly used dice are cubes with six sides. Very customizable. Entropy is a measure of how many ways the system could be arranged. "The probability of rolling a 3 with a 6-sided die is 1/6. You want to roll
(2r)^{\Delta}\prod_{\Delta_*=-1/2}^{n_r/2-1} \frac{1}{\left(\Delta - \Delta_*\right)}, \end{equation} where $\Delta_*$ runs over half-integer values in the given range. Factoring out these same poles, and substituting $r \rightarrow r(\xi = 1)$, we obtain the polynomial approximation for each derivative of the bulk block, up to $M$ derivatives. \subsection{Parameters} A few more parameters are needed to fully describe the semi-definite programming calculations we did with \texttt{SDPB 2.0}. The bulk conformal blocks were approximated up to degree $n_r = 50$ in $r$. We also chose $M = 17$ derivatives to define the space of linear functionals to be optimized over. All calculations were done with a precision of $\texttt{prec} = 700$. The polynomial approximations were calculated in \texttt{Mathematica} and exported as XML files used as input to $\texttt{SDPB 2.0}$. \subsection{Numerical values of the bounds} For reference, we also provide the numerical values of the bounds in figure \ref{fig:mu_comparison} obtained from the positive bootstrap. Table \ref{tab:SDPBdata} lists the bounds for the boundary OPE coefficients $\mu_\sigma$ and $\mu_\textup{t}$ with the assumptions $\Sigma_2[\Delta_\textup{min} = 3]$. Let us pause here to dwell on the computation of the errors on the bounds from the positive bootstrap presented in this work. Both here and in the main text, the quoted errors are from the input parameters. We compute the bound on a $7\times7$ grid in $(\Delta_\phi, \Delta_\epsilon)$ that covers the region of allowed values as per the literature. We found that the regions of interest are small, and the CFT data are essentially featureless inside them. The coarse grid is enough to find the range of variation in the bound values. We only calculate the error on the bounds in cases that are relevant to the main message of this work. \begin{table}[h] \begin{center} \begin{tabular}{c c c } \toprule $N$ & $(\mu_{\sigma, \textup{min}}, \mu_{\sigma, \textup{max}})$ & $(\mu_{\textup{t}, \textup{min}}, \mu_{\textup{t}, \textup{max}})$\\ \midrule 2 & (4.841, 11.313) & $(4.897\times 10^{-3}, 0.534)$ \\ 3 & (6.8273(39), 12.475) & (0.138, 0.40298(15))\\ 4 & (8.759(80), 14.077) & (0.178, 0.3600(29))\\ 5 & (10.653, 15.432)& (0.204, 0.335)\\ 10 & (20.246, 24.41) & (0.235, 0.293)\\ 20 & (39.837, 44.078) &(0.243, 0.272)\\ \bottomrule \end{tabular} \end{center} \caption{Bounds on the boundary OPE coefficients from the positive bootstrap.} \label{tab:SDPBdata} \end{table} Finally, we performed a few standalone calculations to improve the lower bounds on $\alpha$, as discussed in Sec.~\ref{sec:results}. The bounds thus produced are reported separately in table \ref{tab:SDPBdata2}. \begin{table}[h] \begin{center} \begin{tabular}{c c c c } \toprule $N$ & $\Delta_\mathrm{min}$ & $\mu_{\sigma, \textup{min}}$ & $\mu_{\textup{t}, \textup{max}}$\\ \midrule 2 & 3.775 & 7.0426(45) & 0.42995(18) \\ 3 & 3.75 & 8.727(13) & 0.35579(31)\\ 4 & 3.80 & 10.77(26) & 0.3236(51)\\ \bottomrule \end{tabular} \end{center} \caption{Additional bounds from the positive bootstrap} \label{tab:SDPBdata2} \end{table} \subsection{2+$\epsilon$ expansion} In this subsection we use the $2+\epsilon$ expansion to compute $\langle \phi^a(x) \phi^a(y)\rangle$ at the normal fixed point to order $\epsilon^2$.\footnote{Summation over repeated indices is understood in this section.} As far as we know, this is a new (although conventional) computation, so we report it in some detail.
We begin with the non-linear $\sigma$-model for the field $\vec{\phi}$ in $d = 2+\epsilon$ with a codimension one boundary:\footnote{ This should be distinguished from eq.~(\ref{Stotal}) in section~\ref{sec:RG} where the non-linear sigma model is two dimensional and lives on the boundary of $3d$ space.} \begin{equation} L = \frac{1}{2g} (\partial_{\mu} \vec{\phi})^2, \quad \vec{\phi}^2 = 1. \end{equation} We write $\vec{\phi} = (\vec{\pi}, \sqrt{1- \vec{\pi}^2})$ so that \begin{equation} L = \frac{1}{2g} \left[(\partial_{\mu} \vec{\pi})^2 + \frac{1}{1-\vec{\pi}^2} (\vec{\pi} \cdot \partial_{\mu} \vec{\pi})^2\right]. \end{equation} We use dimensional regularization. We have $g = \mu^{-\epsilon} g_r Z_g(g_r)$, $\vec{\phi} = Z^{1/2}_\phi \vec{\phi}_r$ with~\cite{ZinnJustinBook} \begin{eqnarray} Z_g(g_r) &\approx& 1 + \frac{(N-2) \tilde{g}_r}{\epsilon}, \quad Z_\phi \approx 1 + \frac{(N-1) \tilde{g}_r}{\epsilon},\\ \beta(\tilde{g}_r) &\approx& \epsilon \tilde{g}_r - (N-2) \tilde{g}^2_r(1+\tilde{g}_r), \quad \quad \Delta_\phi \approx \frac{\epsilon}{2}\frac{N-1}{N-2} \left(1- \frac{\epsilon}{N-2}\right), \label{eta2eps}\end{eqnarray} and \begin{equation} \tilde{g}_r = g_r N_d, \quad\quad N_d = \frac{2}{(4\pi)^{d/2} \Gamma(d/2)}. \end{equation} We first fix the bulk normalization of the field $\vec{\phi}$. We let $\vec{\phi}_{nrm} = C \vec{\phi}_r$ and demand that $\langle \phi^a_{nrm}(x) \phi^a_{nrm}(y) \rangle = \dfrac{N}{(x-y)^{2 \Delta_\phi}}$. In the absence of a boundary, we have the propagator \begin{equation} \langle \pi^i(x) \pi^j(0) \rangle_0 = \delta^{ij} g D_0(x), \quad\quad D_0(x) = \frac{c_d}{x^{d-2}}, \quad\quad c_d = \frac{\Gamma(d/2-1)}{4 \pi^{d/2}}.\end{equation} Then, the bare field correlation function is \begin{equation} \langle \phi^a(x) \phi^a(0) \rangle \approx 1 - \langle \vec{\pi}^2\rangle + \langle \pi^i(x) \pi^i(0)\rangle = 1 + (N-1) g D_0(x) \approx 1 + \frac{(N-1) g_r}{2 \pi \epsilon} \left(1 - \frac{\epsilon}{2}\gamma_E - \frac{\epsilon}{2} \log \pi - \epsilon \log \mu x\right). \end{equation} So \begin{equation} \langle \phi^a_r(x) \phi^a_r(0) \rangle \approx 1 + (N-1) \tilde{g}_r (\log 2 - \gamma_E - \log \mu x). \end{equation} Thus, after inserting the fixed point value $\tilde{g}^*_r\approx \dfrac{\epsilon}{N-2}$, we obtain \begin{equation} C \approx \sqrt{N} \mu^{\Delta_\phi} (1- \Delta_\phi (\log 2 - \gamma_E)).\end{equation} We next proceed to the system in the presence of a boundary. To avoid clutter, expectation values are denoted with the same symbol $\braket{ \dots}$, but from now on the presence of the boundary is understood. We access the normal universality class by imposing Dirichlet boundary conditions $\vec{\pi}(x^d = 0) = 0$. Now the free $\pi$ propagator is \begin{equation} \langle \pi^i(x) \pi^j(y)\rangle_0 = \delta^{ij} g D_d(x,y), \quad\quad D_d(x,y) = D_0(x-y) - D_0(x-R y), \label{app:Dd}\end{equation} where $R (\vec{x}, x^d) = (\vec{x}, -x^d)$. We now compute the transverse and longitudinal two point functions to leading non-trivial order in $\epsilon$. We note that the connected longitudinal correlation function $\langle \phi^N(x) \phi^N(y)\rangle_{conn}$ only starts at $O(\epsilon^2)$ so we compute the disconnected longitudinal components first: \begin{equation} \langle \phi^N(x) \rangle = 1 - \frac{1}{2} \langle \vec{\pi}^2\rangle = 1 - \frac{g (N-1)}{2} D_d(x,x) = 1+\frac{g_r(N-1)}{4 \pi \epsilon} \left(1- \frac{\epsilon \gamma_E}{2} - \frac{\epsilon}{2} \log \pi - \epsilon \log(2 x_d)\right). 
\end{equation} After multiplying by $Z^{-1/2}_{\phi}$ and $C$, and setting $g_r$ to its fixed point value, we obtain: \begin{equation} \langle \phi^N_{nrm}(x) \rangle = \frac{a_\sigma}{(2 x^d)^{\Delta_\phi}}, \quad \mu_\sigma = a^2_\sigma = N + O(\epsilon^2).\end{equation} To ${O}(\epsilon)$, the longitudinal two point function is $\langle \phi^N_{nrm}(x) \rangle \langle \phi^N_{nrm}(y) \rangle$. We compute the connected correlation function and $\mu_\sigma$ to ${O}(\epsilon^2)$ later in this section. We also determine $\mu_\sigma$ to $O(\epsilon^2)$ below. From Eq.~(\ref{app:Dd}), the transverse correlation function to $O(\epsilon)$ is \begin{equation} \langle \pi^i_{nrm}(x) \pi^j_{nrm}(y) \rangle = \delta^{ij} \frac{N \epsilon }{2(N-2)} \mu^{2 \Delta_\phi} \log\left(\frac{1+\xi}{\xi}\right) \approx \delta^{ij} \mu_{\rm{t}} \frac{1}{(x-y)^{2 \Delta_{\phi}}} \xi^{\Delta_{\phi}} f_\textup{bry}(1, \xi), \end{equation} with $\mu_{\rm{t}} \approx \dfrac{N \epsilon}{2 (N-2)}$. (We will be able to determine $\mu_{\rm{t}}$ to $O(\epsilon^2)$ below.) Thus, the transverse correlation function is saturated to leading order by the boundary conformal block of the tilt operator with $\Delta_{\rm{t}} = d-1 \approx 1$. Combining the transverse and longitudinal contributions, \begin{equation} \langle \phi^a_{nrm}(x) \phi^a_{nrm}(y)\rangle = \frac{N}{(x-y)^{2 \Delta_\phi}} (1 + \Delta_\phi \log (1 + \xi)).\end{equation} Decomposing this into bulk conformal blocks, we find that the correlator is saturated by just one operator with dimension $\Delta \approx 2$ and \begin{equation} \lambda_{\Delta =2} \approx \Delta_\phi + O(\epsilon^2).\end{equation} This operator is the single relevant O$(N)$ singlet of the O$(N)$ model. Note that $\lambda_{\Delta = 2}$ is positive in accord with the conjecture in section~\ref{sec:sdpb}. We now proceed to next order in $\epsilon$. We denote the first correction to the transverse correlator by $\langle \pi^i(x) \pi^j(y)\rangle_1 = \delta^{ij} D^1_{\pi}(x,y)$. Then \begin{eqnarray} D^1_{\pi}(x,y) &=& -g^2 \int d^d w \bigg[ D_d(x,w) D_d(w,y) \lim_{w'\to w} \partial^w_{\mu} \partial^{w'}_{\mu} D_d(w, w') + \partial^w_{\mu} D_d(x,w) \partial^w_{\mu} D_d(w,y) D_d(w, w) \nonumber\\ &+& N \left(\partial^w_{\mu} D_d(x,w) D_d(w,y) + D_d(x,w) \partial^w_{\mu} D_d(w,y)\right) \lim_{w'\to w} \partial^w_{\mu} D_d(w, w')\bigg]. \end{eqnarray} Integrating by parts, we obtain: \begin{eqnarray} D^1_{\pi}(x,y) &=& -g^2 \Bigg(\int d^d w \bigg[ D_d(x,w) D_d(w,y) \left(\lim_{w'\to w} \partial^w_{\mu} \partial^{w'}_{\mu} D_d(w, w') - N \partial^w_{\mu} \lim_{w'\to w} \partial^w_{\mu} D_d(w, w')\right) \nonumber\\ &-& D_d(x,w) \partial^w_{\mu} D_d(w,y) \partial^w_{\mu}\left(\lim_{w'\to w} D(w,w')\right)\bigg] + D_d(x,y) D_d(y,y)\Bigg) \nonumber\\ &=& -g^2 c_d \Bigg(2 (d-2) \int d^d w \bigg[D_d(x,w) D_d(w,y) \frac{(N-1) (d-1)}{(2 w^d)^d} - D_d(x,w) \frac{\partial}{\partial w^d} D_d(w,y) \frac{1}{(2 w^d)^{d-1}}\bigg] \nonumber\\ && - \frac{1}{(2y^d)^{d-2}} D_d(x,y)\Bigg), \end{eqnarray} where $c_d = \dfrac{\Gamma(d/2-1)}{4 \pi^{d/2}}$. While the integral above can be taken explicitly (in particular, by going to momentum space in the direction along the boundary), here we use a different approach. 
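Since the computation hinges on the image form of the Dirichlet propagator in Eq.~(\ref{app:Dd}), here is a quick symbolic sanity check of mine (not part of the paper) that, for $d=3$, $D_d(x,y) = D_0(x-y) - D_0(x-Ry)$ is harmonic in $x$ away from the source and its image and vanishes on the boundary $x^d = 0$.

```python
# Sketch: check the method-of-images Dirichlet propagator used in the text,
# for d = 3, where D_0(x) = c_3/|x| with c_3 = Gamma(1/2)/(4*pi^(3/2)) = 1/(4*pi).
import sympy as sp

x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3', real=True)
c3 = sp.Rational(1, 4) / sp.pi

def D0(a1, a2, a3):
    """Free propagator c_d / |x|^(d-2) in d = 3."""
    return c3 / sp.sqrt(a1**2 + a2**2 + a3**2)

# Image construction: R reflects the coordinate normal to the boundary, x3 -> -x3
D_dirichlet = D0(x1 - y1, x2 - y2, x3 - y3) - D0(x1 - y1, x2 - y2, x3 + y3)

# 1) Harmonic in x away from x = y and x = R y
laplacian = sum(sp.diff(D_dirichlet, v, 2) for v in (x1, x2, x3))
assert sp.simplify(laplacian) == 0

# 2) Vanishes on the boundary x3 = 0 (Dirichlet condition of the normal class)
assert sp.simplify(D_dirichlet.subs(x3, 0)) == 0

print("image propagator: harmonic away from sources and zero at x3 = 0")
```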
Let's apply $-\partial^2_x$ to $D^1_{\pi}(x,y)$: \begin{eqnarray} -\partial^2_x D^1_{\pi}(x,y) &=& g^2 c_d \left(\frac{1}{(2x^d)^{d-2}} \delta^d(x-y) - 2 \frac{(N-1)(d-2) (d-1)}{(2x^d)^d} D_d(x,y) + \frac{2 (d-2)}{(2x^d)^{d-1}} \frac{\partial}{\partial x^d} D_d(x,y)\right) \nonumber\\ &\approx& \frac{g^2 \mu^{\epsilon}}{2 \pi} \left[ \left(\frac{1}{\epsilon} - \log (2 \mu x^d) - \frac{\gamma_E}{2}- \frac{\log \pi}{2} \right) \delta^d(x-y) - \frac{N-1}{2 (x^d)^2} D_d(x,y) + \frac{1}{x^d} \frac{\partial}{\partial x^d}
isometry in $A$. Since $vv^*=l(x)$ and $v^*v=r(x)$, we have $l(x)\sim r(x)$. \end{proof} \begin{lemma}\label{p1p2} Let $A$ be an R$^*$-algebra and $p\in \P(A)$. Then $p$ is nonabelian if and only if there exists a pair of nonzero subprojections $p_1, p_2$ of $p$ in $A$ with $p_1p_2=0$ and $p_1\sim p_2$. \end{lemma} \begin{proof} If the latter condition is satisfied then the algebra $pAp$ has a partial isometry $v$ with $v^*v=p_1\neq p_2=vv^*$, thus $p$ is nonabelian. Suppose that $p$ is nonabelian, which means that $pAp$ is noncommutative. Since $pAp$ is an R$^*$-algebra, it is spanned by projections in $pAp$. It follows that there exists a pair of projections $p_1, p_2\in \P(pAp)$ such that $p_1p_2\neq p_2p_1$. Thus $p_1p_2-p_2p_1p_2$ is not self-adjoint, and we obtain $x:=(p-p_2)p_1p_2 = p_1p_2-p_2p_1p_2\neq 0$. Then $l(x)\leq p-p_2$ and $r(x)\leq p_2$, thus $l(x)$ and $r(x)$ are mutually orthogonal and equivalent. \end{proof} The \emph{center} of an R$^*$-algebra $A$ is the collection $\{x\in A\mid xy=yx\text{ for all }y\in A\}$. An element of $A$ is said to be central if it belongs to the center of $A$. \begin{lemma}\label{central} Let $A$ be an R$^*$-algebra and $p\in\P(A)$. Then $p$ is noncentral if and only if there exists a pair of nonzero projections $p_1, p_2\in \P(A)$ such that $p_1\leq p$, $p_2p=0$ and $p_1\sim p_2$. \end{lemma} \begin{proof} If $p$ is central then for every partial isometry $v$ with $v^*v\leq p$ we have $vv^*=vpv^*=pvv^*p\leq p$, so there is no pair $p_1, p_2$ with the property. Assume that $p$ is not central. Since $A$ is spanned by projections there is a projection $q\in A$ such that $pq\neq qp$. It follows that $x:=pq(p\vee q-p)=pq-pqp$ is not self-adjoint and thus nonzero. Then $p_1:=l(x)\leq p$ and $p_2:=r(x)\leq p\vee q-p$ satisfy the desired property. \end{proof} Recall that a ring $R$ is \emph{Baer} if the right annihilator of every subset of $R$ is generated by an idempotent as a principal right ideal, that is, if for every $X\subset R$ there exists an idempotent $e\in R$ (i.e., $e=e^2$) such that $\{r\in R\mid Xr=0\} = eR$. A von Neumann algebra is a Baer $^*$-ring. The structure of Baer $^*$-rings is studied by Kaplansky \cite{Kap} (see also \cite{Be}) as a purely algebraic generalization of von Neumann algebras. Which R$^*$-algebras are Baer? Below we give the answer (Proposition \ref{baer}). Recall that a lattice $\P$ is said to be \emph{complete} if any family $(p_i)_{i\in I}$ of elements in $\P$ has the least upper bound $\bigvee_{i\in I} p_i$ (upward complete) and the greatest lower bound $\bigwedge_{i\in I} p_i$ (downward complete). First let us recall a fact, which is actually true for general $^*$-regular rings \cite[Proposition 4.1]{Be}, and sketch the proof. \begin{lemma}\label{completeness} For an R$^*$-algebra $A$ the following conditions are equivalent. \begin{enumerate} \item $A$ is Baer. \item $\P(A)$ is complete. \item $\P(A)$ is upward complete. \end{enumerate} \end{lemma} \begin{proof} Recall that every principal right ideal of a $^*$-regular ring is generated by a projection. \\ $(1)\Rightarrow(3)$ Assume that $A$ is Baer. It clearly follows that $A$ is unital. Let $(p_i)_{i\in I}$ be a family of elements in $\P(A)$. Let the right annihilator of $(p_i)_{i\in I}$ be equal to $qA$ with $q\in \P(A)$. 
Then we may check that $1-q$ is the least upper bound of $(p_i)_{i\in I}$, hence $\P(A)$ is upward complete.\\ $(3)\Rightarrow(2)$ If $\P(A)$ is upward complete then the least upper bound of $\P(A)$ needs to be a unit of $A$, hence $A$ is unital. Then $\P(A)$ is downward complete, too. Indeed, for each family $(p_i)_{i\in I}$ of projections the greatest lower bound is determined by $1- \bigvee_{i\in I} (1-p_i)$.\\ $(2)\Rightarrow(1)$ For a subset $X\subset R$, the right annihilator of $X$ is the principal right ideal generated by $\bigwedge_{x\in X}(1-r(x))$. \end{proof} It is also common to consider completeness with respect to countably infinite collections. A lattice $\P$ is said to be \emph{$\sigma$-complete} if any countable family $(p_i)_{i\in I}$ of $\P$ has the least upper bound $\bigvee_{i\in I} p_i$ (upward $\sigma$-complete) and the greatest lower bound $\bigwedge_{i\in I} p_i$ (downward $\sigma$-complete). Recall that these concepts are important in measure theory. A $\sigma$-field is a $\sigma$-complete Boolean algebra, and for a measure space $(X, \mathcal{F}, \mu)$ the collection $\{A\in \mathcal{F} \mid \mu(A)<\infty\}$ forms a downward $\sigma$-complete generalized Boolean algebra. Notice that a $\sigma$-complete generalized Boolean algebra need not be Boolean. For example, for an uncountable set $X$, consider the generalized field of sets formed of all (at most) countable subsets of $X$. It should be also mentioned that a $\sigma$-complete Boolean algebra need not be orthoisomorphic to a $\sigma$-field (see \cite[Chapter 40]{GH}). Let us characterize R$^*$-algebras whose lattice of projections are upward/downward $\sigma$-complete. \begin{proposition}\label{sigma} For an R$^*$-algebra $A$ the following conditions are equivalent. \begin{enumerate} \item $\P(A)$ is $\sigma$-complete. \item $\P(A)$ is upward $\sigma$-complete. \item There exist a $\sigma$-complete generalized Boolean algebra $\S$ and a finite-dimensional R$^*$-algebra $B$ such that $A$ is $^*$-isomorphic to $R(\S)\oplus B$. \end{enumerate} \end{proposition} \begin{proof} $(3)\Rightarrow(1)$ Easy. $(1)\Rightarrow(2)$ This is trivial.\\ $(2)\Rightarrow(3)$ Assume $\P(A)$ is upward $\sigma$-complete. First we claim that there exists no sequence $(p_n)_{n\geq 1}$ of pairwise orthogonal nonzero nonabelian projections in $A$. Indeed, if there exists such a sequence, then Lemma \ref{p1p2} implies that for each $n\geq 1$ there exist mutually equivalent nonzero subprojections $q_n, r_n\leq p_n$ with $q_nr_n=0$. Take a partial isometry $v_n\in A$ with $v_n^*v_n=q_n, v_nv_n^*=r_n$. Note that $\{q_n, v_n, v_n^*, r_n\}$ forms a system of $2\times 2$ matrix units. Set $q=\bigvee_{n\geq 1}q_n$ and \[ r=\bigvee_{n\geq 1} \left((1-n^{-1})q_n + \sqrt{(1-n^{-1})n^{-1}}(v_n+v_n^*) + n^{-1}r_n\right) \] Then we have $p_nqp_n = q_n$ and \[ p_nrp_n=(1-n^{-1})q_n + \sqrt{(1-n^{-1})n^{-1}}(v_n+v_n^*) + n^{-1}r_n \] for each $n\geq 1$. From this{,} it is not difficult to see that the sum $q+r$ has infinite spectrum, which contradicts Theorem \ref{equi}. Thus we get the desired claim. Consider the collection $S$ of all projections in $\P(A)$ that cannot be written as a sum of finitely many mutually orthogonal abelian projections. We prove that for each $p\in S$ there exists a subprojection $e$ of $p$ such that $e\in S$ and $p-e$ is nonabelian. Let $p\in S$. Since $p$ is clearly nonabelian, there is a pair of subprojections $p', p''\in \P(A)$ of $p$ such that $p'p''=0$ and $p'\sim p''$. 
If $p'$ is not abelian, then it is clear that both $p'$ and $p-p'$ are nonabelian and at least one of $p', p-p'$ is in $S$, which leads to the desired conclusion. If $p'$ is abelian, then it is clear that $p'+p''$ is nonabelian and does not belong to $S$, and hence $p-(p'+p'')$ is in $S$, which also leads to the desired conclusion. Therefore, if $S\neq \emptyset$, then we may inductively take a sequence of mutually orthogonal nonzero nonabelian projections in $A$, which contradicts the above claim. It follows that $S=\emptyset$. Let $p\in \P(A)$ be an abelian projection. We prove that there exists a subprojection $f(p)\leq p$ such that \begin{enumerate} \item[(a)] there exists a collection $e_1, e_2, \ldots, e_n$ of mutually orthogonal atoms in $A$, none of which is central in $A$, such that $f(p)=e_1+\cdots+e_n$, and \item[(b)] $p-f(p)$ is a central abelian projection in $A$. \end{enumerate} To prove it, take a maximal family $(p_i)_{i\in I}$ of pairwise orthogonal nonzero subprojections of $p$ with the following property: For each $i\in I$ there exists a projection $q_i\in \P(A)$ such that $p_iq_i=0$ and $q_i\sim p_i$ (the existence of a maximal family with this property is ensured by Zorn's lemma). Then $(q_i)_{i\in I}$ is pairwise orthogonal. Indeed, if $q_iq_j\neq 0$ for some $i\neq j\in I$ then $l(q_iq_j)\,\, (\leq q_i)$ is equivalent to $r(q_iq_j)\,\, (\leq q_j)$, which must imply that some nonzero subprojection of $p_i$ is equivalent to a subprojection of $p_j$, and this contradicts the commutativity of $pAp$. Similarly, we have $p_iq_j=0$ if $i\neq j\in I$. It follows that $(p_i+q_i)_{i\in I}$ is a family of pairwise orthogonal nonzero nonabelian projections in $A$, which forces $I$ to be a finite set. Set $f(p):= \sum_{i\in I}p_i$. The above reasoning also shows that there is $q\in \P(A)$ with $f(p)\sim q$ and $f(p)q=0$. Take a partial isometry $v$ such that $v^*v=f(p)$ and $vv^*=q$. Assume that $f(p)Af(p)$ is infinite-dimensional. Proposition \ref{descend} implies that there is a sequence of pairwise orthogonal nonzero subprojections $(p_n)_{n\geq 1}$ of $f(p)$. It follows that the sequence $(p_n+ vp_nv^*)_{n\geq 1}$ is a pairwise orthogonal sequence of nonzero nonabelian projections, which contradicts the above claim. Therefore, we
at a sampling rate of 22 500 Hz (UK) or 44 100 Hz (France). The tones in a trial were separated by an ISI of 250 ms, and differed in frequency by an amount (ΔF) expressed in musical cents (1 cent equals 1/100 of a semitone or 1/1200 of an octave). The direction of the frequency change was equiprobably upward or downward, and listeners were instructed to identify that direction. The magnitude of ΔF in a run of trials was set initially to 100 cents, and was manipulated within the run using a weighted one-up one-down adaptive procedure that estimated DLFs corresponding to 75% correct on the psychometric function (Kaernbach, 1991). Up to the fifth reversal in the direction of the staircase, ΔF was decreased by a factor of 2.25^(1/3) following a correct response and increased by a factor of 2.25 following an incorrect response. From the fifth reversal onward, the down and up step factors were 1.5^(1/3) and 1.5, respectively. The DLF was defined as the geometric mean of all values visited after the fifth reversal. For the UK listeners, a run ended after the 12th reversal, except if an error was made within the first three trials, in which case two additional first-phase reversals were added to the run; in such cases, the measurement phase started on the seventh rather than the fifth reversal, and ended after the 14th reversal. For the French listeners, the run always ended after the 14th reversal. Listeners completed 24 runs of trials in the same prescribed order (see Table I). There were six run phases in the experiment, each containing four runs. Within each of the first five phases (runs 1–20), the frequency of the first tone was fixed at a specific value (0, 1551, or 3102 cents above 400 Hz, i.e., 400, 979.8, or 2400.1 Hz) that switched at every phase. In the final phase (runs 21–24), the frequency of the first tone on each trial was drawn from a logarithmically flat continuous probability distribution ranging from 400 to 2400.1 Hz. In phase 1 and phases 4–6, responses were followed by visual feedback and a 600-ms pause before the start of the next trial. In phases 2 and 3, visual feedback was omitted and the next trial started 600 ms after the response. Overall, therefore, 24 DLFs per listener were measured. Testing was carried out individually in a sound-attenuating booth, and the listeners completed the experiment in a single session.

TABLE I. Details of the runs of trials in the experiment.

Phase   Runs    Standard frequency (Hz)   Feedback after trials?
1       1–4     979.8                     yes
2       5–8     400                       no
3       9–12    2400.1                    no
4       13–16   400                       yes
5       17–20   2400.1                    yes
6       21–24   roved                     yes

Figure 1(a) shows the geometric means of each listener’s DLFs during each of the six phases of the experiment. The figure indicates that, in L1–5, DLFs were slightly larger when the standard frequency was 400 Hz (phases 2 and 4) than when it was 979.8 or 2400.1 Hz (phases 1, 3, and 5). This finding tallies with the literature on frequency discrimination (e.g., Moore, 1973). L1–5 showed little effect of withholding feedback (phases 2 and 3), and there was only a small increase in their DLFs due to the inclusion of frequency roving (phase 6). This finding is also consistent with previous results (Amitay et al., 2005;
Jesteadt and Bilger, 1974; Mathias et al., 2010). L6–10 had generally larger DLFs than L1–5. For these direction-impaired listeners, standard-frequency switching and withholding feedback had little effect on performance, but DLFs increased dramatically after the inclusion of roving. The data were subjected to a repeated-measures analysis of variance (ANOVA) with run phase (1–6) as a within-subjects factor, group (unimpaired, direction-impaired) as a between-subjects factor, and listeners’ log-transformed DLFs per phase as the dependent variable. The ANOVA revealed significant main effects of phase and group, and a significant interaction (F ≥ 9.53, p < 0.001, η² ≥ 0.54). Three planned comparisons were used to investigate which levels of the phase factor were driving the phase × group interaction. The planned comparisons were (a) phase 1 versus the mean of phases 2 and 3, (b) the mean of phases 2 and 3 versus the mean of phases 4 and 5, and (c) phase 6 versus the mean of phases 1, 4, and 5. Comparisons (a) and (b) did not reveal significant phase × group interactions (F ≤ 0.75, p ≥ 0.41, η² ≤ 0.09); this indicates that the effects of standard-frequency switching and withholding feedback on listeners’ DLFs were not significantly different between the two groups. By contrast, the interaction was significant for comparison (c) [F(1, 8) = 19.05, p < 0.01, η² = 0.70]; this indicates that the effect of frequency roving on DLFs was significantly larger in L6–10 than it was in L1–5. As shown by Mathias et al. (2010) and confirmed in the present experiment, although direction-impaired listeners have great difficulty identifying the direction of small pitch changes in pairs of tones when the standard frequency varies widely from pair to pair, this difficulty is reduced considerably (but not eliminated) when the standard frequency is fixed and trial-by-trial feedback is provided. According to the “learning” hypothesis, this is observed because: (1) when the first element in each pair (i.e., the standard frequency) is always the same, direction-impaired listeners can learn to label upward and downward changes differently without genuinely perceiving them as rising and falling, and (2) when the first element differs widely from pair to pair (i.e., the standard frequency is roved), learning is difficult or impossible. This hypothesis implies that direction-impaired listeners should have to re-learn to label the direction of pitch changes whenever the standard frequency is shifted substantially. Moreover, new learning should not occur when these listeners are not provided with the feedback that supposedly allows them to label correctly the different-sounding cases. In the present experiment, switching the standard frequency and withholding feedback did not appear to disrupt the direction-impaired listeners’ performance.
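To make the adaptive procedure concrete, the following is a minimal simulation sketch (not the authors' code) of a weighted one-up one-down track as described above: ΔF starts at 100 cents, is divided by 2.25^(1/3) after a correct response and multiplied by 2.25 after an incorrect one until the fifth reversal, after which the factors become 1.5^(1/3) and 1.5; the DLF is the geometric mean of the values visited after the fifth reversal, and the run ends at the 12th reversal. The simulated listener is a placeholder with a simple logistic psychometric function, so the output is illustrative only.

```python
# Sketch: weighted one-up one-down staircase (75% target) with a simulated
# listener. Step factors and reversal rules follow the description in the text;
# the listener model (a logistic psychometric function with a hypothetical
# "true" DLF) is an illustrative assumption, not part of the study.
import math
import random

def simulate_run(true_dlf=20.0, start=100.0, n_reversals=12, seed=1):
    rng = random.Random(seed)
    dF = start                      # current frequency difference, in cents
    prev_dir = None                 # -1 = last change was down, +1 = up
    reversals = 0
    visited_after_5th = []

    while reversals < n_reversals:
        # Probability of a correct response for this ΔF (placeholder model,
        # chance level 0.5, ~75% correct when dF equals the "true" DLF)
        p_correct = 0.5 + 0.5 / (1.0 + math.exp(-(dF - true_dlf) / (0.3 * true_dlf)))
        correct = rng.random() < p_correct

        # Weighted up-down: the up step is three times the down step in log units
        big = 2.25 if reversals < 5 else 1.5
        factor = big ** (-1.0 / 3.0) if correct else big
        dF *= factor

        direction = -1 if correct else +1
        if prev_dir is not None and direction != prev_dir:
            reversals += 1
        prev_dir = direction
        if reversals >= 5:
            visited_after_5th.append(dF)

    # DLF = geometric mean of all values visited after the fifth reversal
    return math.exp(sum(math.log(v) for v in visited_after_5th) / len(visited_after_5th))

if __name__ == "__main__":
    dlfs = [simulate_run(seed=s) for s in range(24)]   # 24 runs, as in the experiment
    print("median simulated DLF (cents):", round(sorted(dlfs)[len(dlfs) // 2], 1))
```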
Note that this could not have come about because the shifts in standard frequency were too small to prevent the generalization of the learned labels to the new stimuli. The standard frequency was shifted by 1551 cents between phases 1 and 2, and between phases 1 and 3. This shift is larger than the mean shift (1034 cents) that took place from trial to trial within phase 6, during which the DLFs measured in L6–10 increased dramatically. Thus, it seems reasonable to rule out the learning hypothesis as an explanation for the effect of roving on the DLFs in direction-impaired listeners. According to what Mathias et al. (2010) called the “sequential interference” hypothesis, L6–10’s DLFs were largest in phase 6 because the stimulus ensemble included large irrelevant pitch changes. A trial-by-trial analysis of the data from the UK listeners was performed to test this explanation. For each listener and each run phase, trials completed during the course of the experiment were sorted into two bins depending on whether or not the frequency change between the last tone of the previous trial and the first tone of the current trial was
through a compression with cooling is a relatively easy process to carry out, using the relationship that the higher the pressure, the lower the water content of the gas at a constant temperature. Assuming that the gas is compressed and then cooled in interstage air coolers, this process will result in water dropping out, which will then be separated from the gas in interstage scrubbers as well as in a separator during the final compression and cooling stage [17]. Unfortunately, this method does not allow the removal of water vapour in such a quantity that the gas can be fed into the pipeline. Even if the amount of water in the compressed gas is lower than the water content of the inlet gas, the compressed gas will still be saturated at the temperature to which it was cooled and will not meet the parameters suitable for transportation through the pipeline. Therefore, after using the cooling compression method, the gas must still be dried with one of the previously described methods. This combination of the two methods enables the installation of a smaller and cheaper drying system behind the compressor [24].

### 2.4. Gas drying using low-temperature processes

Among the technologies using low-temperature processes, we can distinguish:

• – IFPEX-1® process,

• – supersonic separator Twister®,

• – DexProTM process.

#### 2.4.1. IFPEX-1® process

The IFPEX-1® process enables both the water and hydrocarbon dew point temperatures to be achieved simultaneously. The wet gas is mixed with methanol and then cooled to the required dew point using a suitable method such as a throttling valve, turboexpander or external cooling circuit. The liquefied mixture of methanol and water is separated in a gas separator and then directed to a drive column. Methanol is recovered in the column. Water with a methanol content below 100 ppm [17, 25] is collected at the bottom of this column. The diagram of the gas drying process using this technology is presented in Fig. 6.

##### Figure 6. Operating diagram of a gas drying installation using IFPEX-1® technology

The application of this process makes it possible to obtain a dew point from -100°C to -70°C, while at the same time reducing hydrocarbon vapours, and it does not require a heat supply. It is competitive with glycol processes when the required dew point is -30°C, and the capital expenditure is approx. 30% lower [22, 26].

#### 2.4.2. Supersonic separator Twister®

The Twister® process uses a supersonic nozzle, often referred to as the de Laval nozzle, in which pressure is reduced by isentropic expansion (i.e. at constant entropy), which contributes to a temperature drop and water condensation. At the inlet to the nozzle there are blades that cause the gas to swirl (Fig. 7). The centrifugal force pushes the liquid droplets onto the nozzle wall, where they are then discharged to the liquid outlet port (Fig. 7). The gas itself slows down in the diffuser to a speed equal to the speed of the gas in the pipeline after the Twister separator. In the nozzle, a pressure drop of up to 30% of the gas inlet pressure occurs, while in the diffuser a pressure recovery of up to approximately 70–80% of the inlet pressure occurs [17, 27].

##### Figure 7. Cross section of the TWISTER® supersonic separator (own preparation based on [17])

The liquid, which is removed through the outlet stub, is directed to the separator, where water and liquid hydrocarbons are separated. If hydrates are formed, a centrifugal separator is usually used to separate the hydrates and liquid from the gas.
The hydrates themselves are melted by the use of a heating coil. The use of this technology requires certain conditions during gas drying. First of all, the gas flow rate should be 200 m3/h and the pressure drop should be 25%. This process makes it possible to reduce the temperature by 60°C with a pressure drop of 30 bar [28]. The big advantage of this technology is that the gas residence time in the Twister separator is so short that it prevents hydrates from forming. In addition, the separator has no moving parts and is very compact. It also requires no maintenance and is therefore also used on offshore platforms [17, 23].

#### 2.4.3. DexProTM process

The DexProTM process technology is used for the additional drying of acidic gas, beyond mechanical drying, during its multi-stage compression before injection into the field. In order to remove water from the gas as effectively as possible, this process is combined with the compression process. Part of the gas flow from the final stage is passed through the temperature control valve to the DexProTM apparatus installed in front of the scrubber on the suction of the final compression stage. This equipment allows the streams to mix properly without creating conditions for hydrates to grow [29]. Mixing cold, dry acid gas with warm, wet acid gas cools the mixture in such a way that the expected amount of water is condensed. It is then separated in the suction scrubber during the last compression stage. DexProTM is a process with many advantages, which contribute to its increased use in drying systems. The unit itself is very compact, occupies a very small space and has minimal running costs. The DexProTM process is a cost-effective investment, as the cost of purchasing it is approximately 30% lower than that of a glycol installation. It does not require the use of any kind of chemicals and is also emission free [17].

## 3. CONCLUSIONS

The most frequently used gas drying technology is the method of water absorption in ethylene glycol solutions, with some modification in the form of stripping gas. It is characterized by a simple structure, which translates into low costs of the installation itself in comparison to the obtained effects of increasing glycol concentration. Additionally, it is an easy-to-use technology, which contributes to a low failure rate and thus much lower costs [8, 3]. When selecting an appropriate method of gas drying, the basic criterion to be taken into account is the required degree of dehumidification of the gas. Moreover, it should be considered whether the selected technology will be compatible with other dependent processes and whether the whole process will be an economical solution for the potential investor [30]. Of the range of methods discussed, the most effective and cost-effective is usually the method of water absorption in ethylene glycol solutions. However, it happens that limitations of a given system, such as its high energy consumption, may dictate the choice of another technology [12]. Installations that use liquid desiccants have the advantage of being small in size, flexible in operation, capable of continuous operation and usually cheaper than installations using solid desiccants [17]. Installations using solid desiccants have the advantage of being much more efficient than a glycol installation because they can dry the gas to a water content of <0.1 ppmv. In order to minimize the size of these installations, a frequent solution is to pre-dry the gas with glycol to about 60 ppmv of water.
These types of installations, however, involve much higher capital and operating costs [17]. Molecular sieve technology is also a proven method. Its advantage is the ability to remove water, CO2 and mercaptans at the same time, and molecular sieves offer the highest water vapour capacity and durability. Silica gel, on the other hand, compares favourably with molecular sieves when the gas only needs to be dried to pipeline specification, because it requires much less gas for regeneration than other adsorbents [21]. Among the low-temperature methods, the IFPEX-1® process is the most promising, as it can compete with glycol processes (which are currently the most commonly used) when the required dew point is about -30°C. In that case, the capital expenditure is reduced by approximately 30% compared to glycol processes [24].
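As a rough numerical illustration of the dew-point argument above (this is a sketch, not a calculation from the cited sources): assuming ideal-gas behaviour, the saturated water mole fraction of a gas is approximately y_H2O = p_sat(T)/P, so at a fixed cooling temperature the water carried per standard cubic metre falls as the pressure rises, yet the gas leaving the cooler is still saturated.

```python
# Sketch: ideal-gas estimate of the saturated water content of a gas vs.
# pressure at a fixed temperature. Real natural gas deviates from this, so
# treat the numbers as order-of-magnitude only.

def p_sat_water_bar(t_celsius):
    """Water vapour pressure from the Antoine equation (valid roughly 1-100 degC)."""
    A, B, C = 8.07131, 1730.63, 233.426          # constants for p in mmHg, T in degC
    return 10 ** (A - B / (C + t_celsius)) / 750.06   # mmHg -> bar

def water_content_g_per_sm3(t_celsius, p_bar):
    """Approximate saturated water content in grams per standard m3 of gas."""
    y = p_sat_water_bar(t_celsius) / p_bar       # water mole fraction at saturation
    mol_per_sm3 = 1e5 / (8.314 * 288.15)         # ideal gas at 1 bar and 15 degC
    return y * mol_per_sm3 * 18.015              # g of H2O per Sm3

for p in (10, 30, 60, 100):                      # bar
    print(f"{p:3d} bar, 30 degC: ~{water_content_g_per_sm3(30, p):5.2f} g/Sm3")
```

Even at 100 bar this estimate stays above typical pipeline limits of roughly 0.05–0.11 g/Sm³, which is why a downstream drying step is still needed after compression with cooling.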
show excess emission. The decline in the magnitude of excess emission, for those stars that show it, has a roughly t_0/time dependence, with t_0 ~ 150 Myr. If anything, stars in binary systems (including Algol-type stars) and λ Boo stars show less excess emission than the other members of the sample. Our results indicate that (1) there is substantial variety among debris disks, including that a significant number of stars emerge from the protoplanetary stage of evolution with little remaining disk in the 10-60 AU region and (2) in addition, it is likely that much of the dust we detect is generated episodically by collisions of large planetesimals during the planet accretion end game, and that individual events often dominate the radiometric properties of a debris system. This latter behavior agrees generally with what we know about the evolution of the solar system, and also with theoretical models of planetary system formation.

Radial Distribution of Dust Grains around HR 4796A
We present high dynamic range images of circumstellar dust around HR 4796A that were obtained with MIRLIN at the Keck II telescope at λ = 7.9, 10.3, 12.5, and 24.5 μm. We also present a new continuum measurement at 350 μm obtained at the Caltech Submillimeter Observatory. Emission is resolved in Keck images at 12.5 and 24.5 μm with point-spread function FWHM values of 0.37" and 0.55", respectively, and confirms the presence of an outer ring centered at 70 AU. Unresolved excess infrared emission is also detected at the stellar position and must originate well within 13 AU of the star. A model of dust emission fitted to flux densities at 12.5, 20.8, and 24.5 μm indicates that dust grains are located 4 (+3/-2) AU from the star with effective size 28+/-6 μm and an associated temperature of 260+/-40 K. We simulate all extant data with a simple model of exozodiacal dust and an outer exo-Kuiper ring. A two-component outer ring is necessary to fit both Keck thermal infrared and Hubble Space Telescope scattered-light images. Bayesian parameter estimates yield a total cross-sectional area of 0.055 AU^2 for grains roughly 4 AU from the star, and an outer-dust disk composed of a narrow large-grain ring embedded within a wider ring of smaller grains. The narrow ring is 14+/-1 AU wide with inner radius 66+/-1 AU and total cross-sectional area 245 AU^2. The outer ring is 80+/-15 AU wide with inner radius 45+/-5 AU and total cross-sectional area 90 AU^2. Dust grains in the narrow ring are about 10 times larger and have lower albedos than those in the wider ring. These properties are consistent with a picture in which radiation pressure dominates the dispersal of an exo-Kuiper belt.

Spectral Types for Four OGLE-III Transit Candidates: Could These Be Planets?
We present spectral types for OGLE (Optical Gravitational Lensing Experiment) transiting planet candidates OGLE-TR-134 through 137 based on low-resolution spectra taken at Kitt Peak. Our main objective is to aid those planning radial velocity monitoring of transit candidates. We obtain spectral types with an accuracy of 2 spectral subtypes, along with tentative luminosity classifications. Combining the spectral types with light-curve fits to the OGLE transit photometry, and with Two Micron All Sky Survey counterparts in two cases, we conclude that OGLE-TR-135 and 137 are not planetary transits, while OGLE-TR-134 and 136 are good candidates and should be observed with precision radial velocity monitoring to determine whether the companions are of planetary mass.
OGLE-TR-135 is ruled out chiefly because a discrepancy between the stellar parameters obtained from the transit fit and those inferred from the spectra indicates that the system is a blend. OGLE-TR-137 is ruled out because the depth of the transit combined with the spectral type of the star indicates that the transiting object is stellar. OGLE-TR-134 and 136, if unblended main-sequence stars, are each orbited by a transiting object with radius below 1.4 R_J. The caveats are that our luminosity classification suggests that OGLE-TR-134 could be a giant (and therefore a blend), while OGLE-TR-136 shows a (much smaller) discrepancy of the same form as OGLE-TR-135, which may indicate that the system is a blend. However, since our luminosity classifications are uncertain at best, and the OGLE-TR-136 discrepancy can be explained if the primary is a slightly anomalous main-sequence star, the stars remain good candidates.

Extrasolar planets and brown dwarfs around A-F type stars. I. Performances of radial velocity measurements, first analyses of variations
We present the performances of a radial velocity measurement method that we developed for A-F type stars. These performances are evaluated through an extensive set of simulations, together with actual radial velocity observations of such stars using the ELODIE and HARPS spectrographs. We report the case of stars constant in radial velocity, the example of a binary detection on HD 48097 (an A2V star, with v sin i equal to 90 km s^-1) and a confirmation of the existence of a 3.9 M_Jup planet orbiting around HD 120136 (Tau Boo). The instability strip problem is also discussed. We show that with this method, it is in principle possible to detect planets and brown dwarfs around A-F type stars, thus allowing further study of the impact of stellar masses on planetary system formation over a wider range of stellar masses than is currently done.

CHARM2: An updated Catalog of High Angular Resolution Measurements
We present an update of the Catalog of High Angular Resolution Measurements (CHARM, Richichi & Percheron \cite{CHARM}, A&A, 386, 492), which includes results available until July 2004. CHARM2 is a compilation of direct measurements by high angular resolution methods, as well as indirect estimates of stellar diameters. Its main goal is to provide a reference list of sources which can be used for calibration and verification observations with long-baseline optical and near-IR interferometers. Single and binary stars are included, as are complex objects from circumstellar shells to extragalactic sources. The present update provides an increase of almost a factor of two over the previous edition. Additionally, it includes several corrections and improvements, as well as a cross-check with the valuable public release observations of the ESO Very Large Telescope Interferometer (VLTI). A total of 8231 entries for 3238 unique sources are now present in CHARM2. This represents an increase of a factor of 3.4 and 2.0, respectively, over the contents of the previous version of CHARM. The catalog is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/431/773

On the Spectroscopic Nature of HD 221866
On the basis of a new classification-resolution spectrum, we find that HD 221866, a member of the newly discovered γ Doradus variable-star class, is an Am (metallic-line A-type) star.
Observations with the Cambridge radial-velocity spectrometer reveal that the star is a double-lined spectroscopic binary with a period of 135 days and a mass ratio of 1.11+/-0.03. We have determined the basic physical parameters of both components through spectral analysis of the classification-resolution spectrum and a medium-resolution spectrum from the KPNO coudé-feed spectrograph in conjunction with fluxes from visible spectrophotometry and the TD-1 satellite. We confirm the results of Fekel et al. that the primary is the Am star, whereas the secondary appears to be a normal early F-type dwarf. We reanalyze the time-series photometric data for HD 221866 and confirm the existence of two periods found by Henry & Fekel.

Improved Baade-Wesselink surface brightness relations
Recent, and older accurate, data on (limb-darkened) angular diameters is compiled for 221 stars, as well as BVRIJK[12][25] magnitudes for those objects, when available. Nine stars (all M-giants or supergiants) showing excess in the [12-25] colour are excluded from the analysis as this may indicate the presence of dust influencing the optical and near-infrared colours as well. Based on this large sample, Baade-Wesselink surface brightness (SB) relations are presented for dwarfs, giants, supergiants and dwarfs in the optical and near-infrared. M-giants are found to follow different SB relations from non-M-giants, in particular in V versus V-R. The preferred relation for non-M-giants is compared to the earlier relation by Fouqué and Gieren (based on 10 stars) and Nordgren et al. (based on 57 stars). Increasing the sample size does not lead to a lower rms value. It is shown that the residuals do not correlate with metallicity at a significant level. The finally adopted observed angular diameters are compared to those predicted by Cohen et al. for 45 stars in common, and there is reasonable overall, and good agreement when θ < 6 mas. Finally, I comment on the common practice
from __future__ import print_function import importlib import numpy as np import matplotlib.pyplot as plt import incompressible.incomp_interface_f as incomp_interface_f import mesh.reconstruction as reconstruction import mesh.patch as patch import mesh.array_indexer as ai from simulation_null import NullSimulation, grid_setup, bc_setup import multigrid.MG as MG import particles.particles as particles class Simulation(NullSimulation): def initialize(self): """ Initialize the grid and variables for incompressible flow and set the initial conditions for the chosen problem. """ my_grid = grid_setup(self.rp, ng=4) # create the variables bc, bc_xodd, bc_yodd = bc_setup(self.rp) my_data = patch.CellCenterData2d(my_grid) # velocities my_data.register_var("x-velocity", bc_xodd) my_data.register_var("y-velocity", bc_yodd) # phi -- used for the projections my_data.register_var("phi-MAC", bc) my_data.register_var("phi", bc) my_data.register_var("gradp_x", bc) my_data.register_var("gradp_y", bc) my_data.create() self.cc_data = my_data if self.rp.get_param("particles.do_particles") == 1: n_particles = self.rp.get_param("particles.n_particles") particle_generator = self.rp.get_param("particles.particle_generator") self.particles = particles.Particles(self.cc_data, bc, n_particles, particle_generator) # now set the initial conditions for the problem problem = importlib.import_module("incompressible.problems.{}".format(self.problem_name)) problem.init_data(self.cc_data, self.rp) def method_compute_timestep(self): """ The timestep() function computes the advective timestep (CFL) constraint. The CFL constraint says that information cannot propagate further than one zone per timestep. We use the driver.cfl parameter to control what fraction of the CFL step we actually take. """ cfl = self.rp.get_param("driver.cfl") u = self.cc_data.get_var("x-velocity") v = self.cc_data.get_var("y-velocity") # the timestep is min(dx/|u|, dy|v|) xtmp = self.cc_data.grid.dx/(abs(u)) ytmp = self.cc_data.grid.dy/(abs(v)) self.dt = cfl*float(min(xtmp.min(), ytmp.min())) def preevolve(self): """ preevolve is called before we being the timestepping loop. For the incompressible solver, this does an initial projection on the velocity field and then goes through the full evolution to get the value of phi. The fluid state (u, v) is then reset to values before this evolve. """ self.in_preevolve = True myg = self.cc_data.grid u = self.cc_data.get_var("x-velocity") v = self.cc_data.get_var("y-velocity") self.cc_data.fill_BC("x-velocity") self.cc_data.fill_BC("y-velocity") # 1. do the initial projection. This makes sure that our original # velocity field satisties div U = 0 # next create the multigrid object. 
We want Neumann BCs on phi # at solid walls and periodic on phi for periodic BCs mg = MG.CellCenterMG2d(myg.nx, myg.ny, xl_BC_type="periodic", xr_BC_type="periodic", yl_BC_type="periodic", yr_BC_type="periodic", xmin=myg.xmin, xmax=myg.xmax, ymin=myg.ymin, ymax=myg.ymax, verbose=0) # first compute divU divU = mg.soln_grid.scratch_array() divU.v()[:, :] = \ 0.5*(u.ip(1) - u.ip(-1))/myg.dx + 0.5*(v.jp(1) - v.jp(-1))/myg.dy # solve L phi = DU # initialize our guess to the solution, set the RHS to divU and # solve mg.init_zeros() mg.init_RHS(divU) mg.solve(rtol=1.e-10) # store the solution in our self.cc_data object -- include a single # ghostcell phi = self.cc_data.get_var("phi") phi[:, :] = mg.get_solution(grid=myg) # compute the cell-centered gradient of phi and update the # velocities gradp_x, gradp_y = mg.get_solution_gradient(grid=myg) u[:, :] -= gradp_x v[:, :] -= gradp_y # fill the ghostcells self.cc_data.fill_BC("x-velocity") self.cc_data.fill_BC("y-velocity") # 2. now get an approximation to gradp at n-1/2 by going through the # evolution. # store the current solution -- we'll restore it in a bit orig_data = patch.cell_center_data_clone(self.cc_data) # get the timestep self.method_compute_timestep() # evolve self.evolve() # update gradp_x and gradp_y in our main data object new_gp_x = self.cc_data.get_var("gradp_x") new_gp_y = self.cc_data.get_var("gradp_y") orig_gp_x = orig_data.get_var("gradp_x") orig_gp_y = orig_data.get_var("gradp_y") orig_gp_x[:, :] = new_gp_x[:, :] orig_gp_y[:, :] = new_gp_y[:, :] self.cc_data = orig_data if self.verbose > 0: print("done with the pre-evolution") self.in_preevolve = False def evolve(self): """ Evolve the incompressible equations through one timestep. """ u = self.cc_data.get_var("x-velocity") v = self.cc_data.get_var("y-velocity") gradp_x = self.cc_data.get_var("gradp_x") gradp_y = self.cc_data.get_var("gradp_y") phi = self.cc_data.get_var("phi") myg = self.cc_data.grid #--------------------------------------------------------------------- # create the limited slopes of u and v (in both directions) #--------------------------------------------------------------------- limiter = self.rp.get_param("incompressible.limiter") ldelta_ux = reconstruction.limit(u, myg, 1, limiter) ldelta_vx = reconstruction.limit(v, myg, 1, limiter) ldelta_uy = reconstruction.limit(u, myg, 2, limiter) ldelta_vy = reconstruction.limit(v, myg, 2, limiter) #--------------------------------------------------------------------- # get the advective velocities #--------------------------------------------------------------------- """ the advective velocities are the normal velocity through each cell interface, and are defined on the cell edges, in a MAC type staggered form n+1/2 v i,j+1/2 +------+------+ | | n+1/2 | | n+1/2 u + U + u i-1/2,j | i,j | i+1/2,j | | +------+------+ n+1/2 v i,j-1/2 """ # this returns u on x-interfaces and v on y-interfaces. 
These # constitute the MAC grid if self.verbose > 0: print(" making MAC velocities") _um, _vm = incomp_interface_f.mac_vels(myg.qx, myg.qy, myg.ng, myg.dx, myg.dy, self.dt, u, v, ldelta_ux, ldelta_vx, ldelta_uy, ldelta_vy, gradp_x, gradp_y) u_MAC = ai.ArrayIndexer(d=_um, grid=myg) v_MAC = ai.ArrayIndexer(d=_vm, grid=myg) #--------------------------------------------------------------------- # do a MAC projection ot make the advective velocities divergence # free #--------------------------------------------------------------------- # we will solve L phi = D U^MAC, where phi is cell centered, and # U^MAC is the MAC-type staggered grid of the advective # velocities. if self.verbose > 0: print(" MAC projection") # create the multigrid object mg = MG.CellCenterMG2d(myg.nx, myg.ny, xl_BC_type="periodic", xr_BC_type="periodic", yl_BC_type="periodic", yr_BC_type="periodic", xmin=myg.xmin, xmax=myg.xmax, ymin=myg.ymin, ymax=myg.ymax, verbose=0) # first compute divU divU = mg.soln_grid.scratch_array() # MAC velocities are edge-centered. divU is cell-centered. divU.v()[:, :] = \ (u_MAC.ip(1) - u_MAC.v())/myg.dx + (v_MAC.jp(1) - v_MAC.v())/myg.dy # solve the Poisson problem mg.init_zeros() mg.init_RHS(divU) mg.solve(rtol=1.e-12) # update the normal velocities with the pressure gradient -- these # constitute our advective velocities phi_MAC = self.cc_data.get_var("phi-MAC") solution = mg.get_solution() phi_MAC.v(buf=1)[:, :] = solution.v(buf=1) # we need the MAC velocities on all edges of the computational domain b = (0, 1, 0, 0) u_MAC.v(buf=b)[:, :] -= (phi_MAC.v(buf=b) - phi_MAC.ip(-1, buf=b))/myg.dx b = (0, 0, 0, 1) v_MAC.v(buf=b)[:, :] -= (phi_MAC.v(buf=b) - phi_MAC.jp(-1, buf=b))/myg.dy #--------------------------------------------------------------------- # recompute the interface states, using the advective velocity # from above #--------------------------------------------------------------------- if self.verbose > 0: print(" making u, v edge states") _ux, _vx, _uy, _vy = \ incomp_interface_f.states(myg.qx, myg.qy, myg.ng, myg.dx, myg.dy, self.dt, u, v, ldelta_ux, ldelta_vx, ldelta_uy, ldelta_vy, gradp_x, gradp_y, u_MAC, v_MAC) u_xint = ai.ArrayIndexer(d=_ux, grid=myg) v_xint = ai.ArrayIndexer(d=_vx, grid=myg) u_yint = ai.ArrayIndexer(d=_uy, grid=myg) v_yint = ai.ArrayIndexer(d=_vy, grid=myg) #--------------------------------------------------------------------- # update U to get the provisional velocity field #--------------------------------------------------------------------- if self.verbose > 0: print(" doing provisional update of u, v") # compute (U.grad)U # we want u_MAC U_x + v_MAC U_y advect_x = myg.scratch_array() advect_y = myg.scratch_array() # u u_x + v u_y advect_x.v()[:, :] = \ 0.5*(u_MAC.v() + u_MAC.ip(1))*(u_xint.ip(1) - u_xint.v())/myg.dx + \ 0.5*(v_MAC.v() + v_MAC.jp(1))*(u_yint.jp(1) - u_yint.v())/myg.dy # u v_x + v v_y advect_y.v()[:, :] = \ 0.5*(u_MAC.v() + u_MAC.ip(1))*(v_xint.ip(1) - v_xint.v())/myg.dx + \ 0.5*(v_MAC.v() + v_MAC.jp(1))*(v_yint.jp(1) - v_yint.v())/myg.dy proj_type = self.rp.get_param("incompressible.proj_type") if proj_type == 1: u[:, :] -= (self.dt*advect_x[:, :] + self.dt*gradp_x[:, :]) v[:, :] -= (self.dt*advect_y[:, :] + self.dt*gradp_y[:, :]) elif proj_type == 2: u[:, :] -= self.dt*advect_x[:, :] v[:, :] -= self.dt*advect_y[:, :] self.cc_data.fill_BC("x-velocity") self.cc_data.fill_BC("y-velocity") #--------------------------------------------------------------------- # project the final velocity 
#--------------------------------------------------------------------- # now we solve L phi = D (U* /dt) if self.verbose > 0: print(" final projection") # create the multigrid object mg = MG.CellCenterMG2d(myg.nx, myg.ny, xl_BC_type="periodic", xr_BC_type="periodic", yl_BC_type="periodic", yr_BC_type="periodic", xmin=myg.xmin, xmax=myg.xmax, ymin=myg.ymin, ymax=myg.ymax, verbose=0) # first compute divU # u/v are cell-centered, divU is cell-centered divU.v()[:, :] = \ 0.5*(u.ip(1) - u.ip(-1))/myg.dx + 0.5*(v.jp(1) - v.jp(-1))/myg.dy mg.init_RHS(divU/self.dt) # use the old phi as our initial guess phiGuess = mg.soln_grid.scratch_array() phiGuess.v(buf=1)[:, :] = phi.v(buf=1) mg.init_solution(phiGuess) # solve mg.solve(rtol=1.e-12) # store the solution phi[:, :] = mg.get_solution(grid=myg) # compute the cell-centered gradient of p and update the velocities # this differs depending on what we projected. gradphi_x, gradphi_y = mg.get_solution_gradient(grid=myg) # u = u - grad_x phi dt u[:, :] -= self.dt*gradphi_x v[:, :] -= self.dt*gradphi_y # store gradp for the next step if proj_type == 1: gradp_x[:, :] += gradphi_x[:, :] gradp_y[:, :] += gradphi_y[:, :] elif proj_type == 2: gradp_x[:, :] = gradphi_x[:, :] gradp_y[:, :] = gradphi_y[:, :] self.cc_data.fill_BC("x-velocity") self.cc_data.fill_BC("y-velocity") if self.particles is not None: self.particles.update_particles(self.dt) # increment the time if not self.in_preevolve: self.cc_data.t += self.dt self.n += 1 def dovis(self): """ Do runtime visualization """ plt.clf() plt.rc("font", size=10) u = self.cc_data.get_var("x-velocity") v = self.cc_data.get_var("y-velocity") myg = self.cc_data.grid vort = myg.scratch_array() divU = myg.scratch_array() vort.v()[:, :] = \ 0.5*(v.ip(1) - v.ip(-1))/myg.dx - \ 0.5*(u.jp(1) - u.jp(-1))/myg.dy divU.v()[:, :] = \ 0.5*(u.ip(1) - u.ip(-1))/myg.dx + \ 0.5*(v.jp(1) - v.jp(-1))/myg.dy fig, axes = plt.subplots(nrows=2, ncols=2, num=1) plt.subplots_adjust(hspace=0.25) fields = [u, v, vort, divU] field_names = ["u", "v", r"$\nabla \times U$",
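The evolve() method above enforces incompressibility by solving L phi = D(U*/dt) with pyro's multigrid solver and then subtracting the cell-centered gradient of phi. The following standalone sketch (not part of pyro; it assumes a doubly periodic grid and replaces the multigrid solve with an FFT Poisson solve) illustrates the same projection step using the same centered differences:

import numpy as np

def project(u, v, dx, dy):
    """Subtract grad(phi) from (u, v), where L phi = div(u, v).

    Doubly periodic, cell-centered fields indexed as a[i, j] with i the
    x-index, using the same centered differences as Simulation.evolve().
    This is an illustrative sketch, not pyro's MG-based projection.
    """
    nx, ny = u.shape

    def ddx(a):
        return (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2.0 * dx)

    def ddy(a):
        return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / (2.0 * dy)

    div = ddx(u) + ddy(v)

    # Fourier symbol of the centered-difference Laplacian D_x^2 + D_y^2
    sx = np.sin(2.0 * np.pi * np.fft.fftfreq(nx))[:, None] / dx
    sy = np.sin(2.0 * np.pi * np.fft.fftfreq(ny))[None, :] / dy
    lap = -(sx**2 + sy**2)

    div_hat = np.fft.fft2(div)
    lap_safe = np.where(lap == 0.0, 1.0, lap)        # zero modes carry no divergence
    phi = np.real(np.fft.ifft2(np.where(lap == 0.0, 0.0, div_hat / lap_safe)))

    return u - ddx(phi), v - ddy(phi)

# quick check: the projected field is discretely divergence free
rng = np.random.default_rng(0)
u0, v0 = rng.standard_normal((2, 64, 64))
u1, v1 = project(u0, v0, dx=1.0 / 64, dy=1.0 / 64)
div1 = (np.roll(u1, -1, 0) - np.roll(u1, 1, 0)) / (2.0 / 64) \
     + (np.roll(v1, -1, 1) - np.roll(v1, 1, 1)) / (2.0 / 64)
print("max |div| after projection:", np.abs(div1).max())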
period $T$, and let $$ \widetilde\gamma \colon t\in \r\longrightarrow \left(\widetilde\gamma_h(t),\widetilde\gamma_v(t)\right) \in\sol/\!\raisebox{-.65ex}{\ensuremath{\Lambda_0}} = \left(K/\!\raisebox{-.65ex}{\ensuremath{\Lambda_0}} \rtimes \r \right) $$ be a lift of $\gamma$ to the infinite cyclic covering of $L$. There exists $l \in \Lambda/\!\raisebox{-.65ex}{\ensuremath{\Lambda_0}}$ of infinite order such that for all $t\in\r$, $\widetilde \gamma (t + T) = l\cdot \widetilde \gamma(t)$. The action of $l$ on the torus $K/\!\raisebox{-.65ex}{\ensuremath{\Lambda_0}}$ is defined by a hyperbolic linear map $A$. We deduce that for all $t\in\r$, $\left(A - I\right)\left(\widetilde\gamma_h(t)\right) = 0$. Hence, $\widetilde\gamma_h$ is necessarily constant and equal to a fixed point of $A$. \end{proof} \begin{rem}\label{rem.estimate} The proof of Lemma~\ref{lem.closed} provides an estimate of the length of closed geodesics of type $A$ homotopic to closed geodesics of type $C$. This estimate will be crucial in the proof of Proposition~\ref{prop.metric.choice}. Likewise, if $L$ is the suspension of a diffeomorphism of the torus defined by a hyperbolic linear map $A \in \glz$, we deduce that closed geodesics of type $B$ of $L$ are in correspondence with the periodic points of $A \colon {\r^2}/{\z^2} \longrightarrow {\r^2}/{\z^2}$. \end{rem} \begin{prop}\label{prop.metric.choice} Let $L$ be a closed three-dimensional manifold given by Theorem~\ref{thm.classif} and let $\Pi$ be a finite subset of homotopy classes of $L$. There exists a $\sol$-metric on $L$ such that no element of $\Pi$ gets realized by a closed geodesic of type $C$ of $L$. Furthermore, this metric can be chosen such that closed geodesics of type $A$ of $L$ homotopic to elements of $\Pi$ are of Morse-Bott index~$1$. \end{prop} \begin{proof} From Theorem~\ref{thm.classif}, the $\sol$-manifold $L$ is diffeomorphic to the quotient of $\sol$ by a lattice $\Lambda \subset \isom(\sol)$ satisfying the exact sequence $0\to \Lambda_0 \longrightarrow \Lambda \stackrel{P_L}{\longrightarrow} \Lambda/\!\raisebox{-.65ex}{\ensuremath{\Lambda_0}} \to 0$ where $\Lambda_0 \subset K$ is a lattice. The fundamental group of $L$ is therefore isomorphic to $\Lambda$, and from Lemma~\ref{lem.closed}, only classes in $\Pi\cap \Lambda_0$ can be realized by closed geodesics of type $A$ or $C$. Up to multiplication of the lattice $\Lambda_0$ by a constant $0<\varepsilon \ll 1$, we can assume that all the elements of $\Pi\cap \Lambda_0$ have length bounded from above by $4 - \pi$. Such a $\sol$-metric fits. Indeed, from Lemma~\ref{lem.closed} and Remark~\ref{rem.estimate}, every closed geodesic of type $C$ of $L$ is homotopic to a closed geodesic of type $A$ of length a multiple of $8LK\sqrt{\vert ab\vert}$, adopting the notations of \cite{tro}. Now, taking again the notations of \cite{tro}, we get $$ 8LK\sqrt{\vert ab\vert} = \frac{8}{\sqrt 2 \sqrt{1 + k^2}}\left(E - \frac K2(1 - k^2)\right)\textrm{ where $0\leq k \leq 1$.} $$ Moreover, $E = \displaystyle\int^\frac{\pi}2_0\sqrt{1 - k^2\sin^2\theta}\,\mathrm d \theta \geq 1$ and $$ K\sqrt{1 - k^2} = \displaystyle\int^\frac{\pi}2_0\sqrt{\frac{1 - k^2}{1 - k^2\sin^2\theta}}\,\mathrm d \theta \leq \frac\pi2\;. $$ We get the estimate $8LK\sqrt{\vert ab\vert} \geq 4 - \pi$ which prevents the geodesic of type $C$ to be homotopic to an element of $\Pi$. Likewise, the length of closed geodesics of type $A$ homotopic to elements of $\Pi$ are less than $4 - \pi < \frac{2\pi}{\sqrt{2}}$. 
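For the reader's convenience, the two elementary bounds above combine explicitly as follows (this is only a restatement of the estimate already used): since $0\leq k \leq 1$, we have $\sqrt{1+k^2}\leq \sqrt 2$, $E\geq 1$ and $K(1-k^2)\leq K\sqrt{1-k^2}\leq \frac\pi2$, so that $$ 8LK\sqrt{\vert ab\vert} = \frac{8}{\sqrt 2 \sqrt{1 + k^2}}\left(E - \frac K2(1 - k^2)\right) \geq \frac 82 \left(1 - \frac\pi4\right) = 4 - \pi\;. $$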
From Proposition~\ref{prop.index.A}, the Conley-Zehnder index of these geodesics in the trivialisation $(h_1,\dots,h_4)$ of $\xi$ is $1$. Indeed, the Conley-Zehnder index of the rotation block is $1$ by definition, while the (Bott-)Conley-Zehnder index of the unipotent block $ U = \left[ \begin{matrix} 1 & t \cr 0 & 1 \cr \end{matrix} \right] $ vanishes, see the thesis of F.~Bourgeois. Indeed, this block is solution of the differential equation $\dot U = S\mathcal{J}U$ with $U(0) = I$, $S = \left[ \begin{matrix} 1 & 0 \cr 0 & 0 \cr \end{matrix} \right]$ and $\mathcal{J} = \left[ \begin{matrix} 0 & 1 \cr -1 & 0 \cr \end{matrix} \right]$. By definition, the (Bott-)Conley-Zehnder index of this block is the Conley-Zehnder index of the solution of the differential equation $\dot V = \left( S - \delta I\right)\mathcal{J} V$ with $V(0) = I$ and $0<\delta \ll 1$, which is hyperbolic. The result follows from \cite[Theorem~3.1]{V}, \cite[Proposition~1.7.3]{EGH} which identifies this Conley-Zehnder index to the Morse-Bott index. \end{proof} The $\sol$-metrics given by Proposition~\ref{prop.metric.choice} are metrics for which the area of the fibers of the map $p \colon L \longrightarrow B$ is not too large compared to the length of $B$. In fact, without changing the length of $B$, it is possible to expand or contract the fibers of $p$ as much as we want, keeping the $\sol$ feature of the metric. This observation was crucial in the proof of Proposition~\ref{prop.metric.choice} and will be very useful in Section~2. \section{$\sol$ Lagrangian submanifolds in uniruled symplectic manifolds} \subsection{Statement of the results} \begin{dfn}\label{dfn.uniruled} We say that a closed symplectic manifold $(X,\omega)$ is uniruled iff it has a non vanishing genus $0$ Gromov-Witten invariant of the form $ \langle [pt]_k;[pt],\omega^k \rangle^X_A\;, $ where $A \in H_2(X;\z)$, $k\geq 2$, and $[pt]_k$ represents the Poincar\'e dual of the point class in the moduli space $\overline{\mathcal{M}}_{0,k+1}$ of genus $0$ stable curves with $k+1$ marked points. \end{dfn} This Definition~\ref{dfn.uniruled} differs from \cite[Definition~4.5]{hu-li-ruan} where $\omega^k$ is replaced by any finite set of differential forms on $X$. Nevertheless, from \cite[Theorem~4.2.10]{kollar.sp.uni}, complex projective uniruled manifolds are all symplectically uniruled in the sense of Definition~\ref{dfn.uniruled}. The advantage for us to restrict ourselves to Definition~\ref{dfn.uniruled} is that for every Lagrangian submanifold $L$ of $X$, the form $\omega$ has a Poincar\'e dual representative disjoint from $L$. Our goal is to prove the following results. \begin{thm}\label{thm.main} Let $(X,\omega)$ be a closed uniruled symplectic manifold of dimension six. For any Lagrangian submanifold $L$ of $X$ homeomorphic to the suspension of a hyperbolic diffeomorphism of the two-dimensional torus, there exists a symplectic disc of Maslov index zero with boundary on $L$. Furthermore, such a disc can be chosen such that its boundary does not vanish in $H_1(L;\q)$. \end{thm} In particular, such a Lagrangian submanifold $L \hookrightarrow X$ given by Theorem~\ref{thm.main} cannot be monotone. It might be true that such Lagrangian submanifolds do not exist at all, see \S \ref{rems}. In fact, in the case of the projective space, the absence of orientable Sol Lagrangian submanifolds follows from Theorem 14.1 of \cite{Fuk}. Moreover, in this paper Kenji Fukaya remarks that his methods may extend to uniruled manifolds as well. 
Nevertheless, we deduce the following corollaries. \begin{cor}\label{cor.dp} Let $p \colon (X,c_X) \to (B,c_B)$ be a dominant real morphism with rational fibers, where $(X,c_X)$ (respectively $(B,c_B)$) is a real algebraic manifold of dimension $3$ (respectively $1$). Then, the real locus of $X$ has no $\sol$ component $L \subset X^{nonsing}$ such that the restriction of $p$ to $L= \raisebox{-.65ex}{\ensuremath{\Lambda}}\!\backslash \sol$ is the map $L \to \r/\!\raisebox{-.65ex}{\ensuremath{P_L(\Lambda)}}$ defined in \S~\ref{subsec.closedgeodes}. \end{cor} In particular, the restriction of $p$ to $L$ is a submersion if $L$ is the suspension of a hyperbolic diffeomorphism of the torus and has two multiple fibers if $L$ is a sapphire. Note that Koll\'ar proved in \cite{koIV} that in the situation of Corollary~\ref{cor.dp}, an orientable $\sol$ component $L$ of $X(\r)$ automatically satisfies the last conditions. That is $L $ is contained in the nonsingular part $X^{nonsing}$ of $X$ and the restriction of $p$ to $L= \raisebox{-.65ex}{\ensuremath{\Lambda}}\!\backslash \sol$ is the map $L \to \r/\!\raisebox{-.65ex}{\ensuremath{P_L(\Lambda)}}$. Corollary~\ref{cor.dp} means that in \cite[Theorems~1.1 and 1.3]{koIV}, the manifold $N$ cannot be endowed with a $\sol$ metric, confirming the expectation of Koll\'ar discussed in Remark~1.4 of this paper. The upshot is that if $X$ is a projective uniruled manifold defined over $\mathbb{R}$ with orientable real locus, then, up to connected sums with $\mathbb{R} P^3$ or $S^2 \times S^1$ and modulo finitely many closed three manifolds, every component of $\mathbb{R} X$ is a Seifert fiber space or a connected sum of Lens spaces. \begin{proof}[Proof of Corollary~\ref{cor.dp}] Choosing an appropriate branched covering $ (B',c_{B'})\to (B,c_B)$ and resolving the singularities of the fibered product $X \times_p B'$, we get a nonsingular uniruled real projective variety $Y$ containing in its real locus a connected component $L'$ homeomorphic to the suspension of a hyperbolic diffeomorphism of the torus. In this construction, $B'$ can be obtained of positive genus and such that the projection $p_* \colon H_1(L';\q)\to H_1(B';\q)$ is injective. It follows that $H_1(L';\q)$ injects into $H_1(Y;\q)$ and Theorem~\ref{thm.main} provides the contradiction. \end{proof} \begin{cor}\label{cor.fano} The real locus of a smooth three-dimensional Fano manifold does not contain any connected component homeomorphic to the suspension of a hyperbolic diffeomorphism of the two-dimensional torus. \qed \end{cor} Indeed, in the situation of Corollary~\ref{cor.fano}, the real locus
\section{\label{sec:intro}Introduction} Atomic wires on semiconducting surfaces are prime candidates to realize the physics of one-dimensional (1D) correlated electrons~\cite{springborg07,onc08,sni10} but the interpretations of experimental results are highly controversial. For instance, Au/Ge(001)~\cite{blu11,blum12}, Bi/InSb(100)~\cite{ohts15} and Pt/Ge(100)~\cite{yaji13,yaji16} have been described as quasi-1D conductors and thus as possible realizations of Luttinger liquids~\cite{Schoenhammer,giamarchi07,solyom}. Charge-density-wave (CDW) states~\cite{gruener,chen} have been reported in In/Si(111)~\cite{yeom99,cheo15} and Au/Si(553)~\cite{shin,aulbach}. However, the theory of correlated electrons in quasi-1D systems is well established only for isolated chains and narrow ladders, as well as for anisotropic bulk electronic systems~\cite{sol79,giamarchi07,gruener,solyom}. It has not been extended yet to account for the influence of a three-dimensional (3D) host such as a semiconducting substrate on the properties of a 1D Luttinger liquid or an electronic CDW. The physical properties of Luttinger liquids and CDW systems can be studied theoretically using lattice models. In principle, the properties of quasi-1D lattice models can be calculated using the density matrix renormalization group (DMRG) method~\cite{whi92,whi93,sch05,jec08a}. However, DMRG cannot treat 3D lattice models for wire-substrate systems directly. Therefore, in a previous publication~\cite{paper1}, we introduced a 3D lattice model for a correlated atomic wire deposited on a substrate and showed how to map it exactly onto a two-dimensional (2D) ladder-like lattice that can be approximated by quasi-1D narrow ladder models (NLM) with increasing number of legs. We demonstrated the approach using the 1D Hubbard model to represent a correlated atomic wire~\cite{paper2}. Due to the high computational cost of DMRG for electronic ladder systems, we were not able to study the convergence (and thus the stability of 1D features) with the number of legs systematically. In this paper, we apply the NLM approach to a correlated wire represented by the 1D spinless fermion (1DSF) model~\cite{giamarchi07,solyom}. This model is defined by the Hamiltonian \begin{eqnarray} \label{eq:1Dspinlessmodel} H &=& -t_{\text w} \sum^{L_x-1}_{x} \left ( c^{\dag}_{x} c^{\phantom{\dag}}_{x+1} + \text{H.c.} \right ) \\ \nonumber && + V \sum^{L_x-1}_x \left ( c^{\dag}_{x+1}c^{\phantom{\dag}}_{x+1} - \frac{1}{2} \right) \left ( c^{\dag}_{x}c^{\phantom{\dag}}_{x} - \frac{1}{2} \right) \end{eqnarray} where $c_{x}$ ($c^{\dag}_{x}$) annihilates (creates) a spinless fermion on site $x$ residing in a 1D lattice with length $L_x$. The parameter $t_{\text w}$ determines the hopping amplitude between nearest-neighbor sites while $V$ determines the strength of the Coulomb interaction between nearest-neighbor fermions. As the model exhibits a particle-hole symmetry that changes the sign of the hopping term only, we can assume without loss of generality that $t_{\text w}\geq 0$. This model is exactly solvable using the Bethe Ansatz method and its properties are well known~\cite{gaudin,giamarchi07}. Here we focus on the repulsive case $V\geq 0$ and thus the (grand-canonical) ground state occurs at half filling, i.e. for $N=L_x/2$ spinless fermions on the lattice. The model exhibits two different ground-state phases at half-filling as a function of $V\geq 0$. 
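As a purely illustrative numerical aside (a sketch, not part of the paper's DMRG setup; $L=10$, $t_{\text w}=1$ and open boundaries are arbitrary choices), the two regimes of the Hamiltonian~(\ref{eq:1Dspinlessmodel}) can already be glimpsed in exact diagonalization of a short chain, where the staggered density structure factor grows markedly with $V$:
\begin{verbatim}
# Sketch: exact diagonalization of the 1D spinless fermion chain (open
# boundaries, t_w = 1) and the staggered structure factor
#   S(pi) = (1/L) < M^2 >,   M = sum_x (-1)^x (n_x - 1/2),
# which grows with V as the chain crosses over towards the CDW regime.
import numpy as np

L, t_w = 10, 1.0
dim = 2 ** L

def occ(state, x):                    # occupation n_x of a Fock basis state
    return (state >> x) & 1

def build_H(V):
    H = np.zeros((dim, dim))
    for s in range(dim):
        H[s, s] = V * sum((occ(s, x + 1) - 0.5) * (occ(s, x) - 0.5)
                          for x in range(L - 1))
        for x in range(L - 1):        # hopping; no fermion sign arises for
            if occ(s, x) != occ(s, x + 1):   # nearest neighbours
                H[s ^ (1 << x) ^ (1 << (x + 1)), s] -= t_w
    return H

def staggered_sf(psi):
    S = 0.0
    for s, amp in enumerate(psi):
        if abs(amp) > 1e-12:
            M = sum((-1) ** x * (occ(s, x) - 0.5) for x in range(L))
            S += (amp * amp) * M * M
    return S / L

for V in (0.5, 2.0, 4.0):
    w, vecs = np.linalg.eigh(build_H(V))
    print("V =", V, " S(pi) =", round(staggered_sf(vecs[:, 0]), 3))
\end{verbatim}
The particle-hole symmetric form of the interaction places the ground state of the full Fock space at half filling, so no particle-number projection is needed in this toy calculation.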
In the range $V \leq 2t_{\text w}$ the ground-state density distribution is uniform, $n_x=\left \langle c^{\dag}_{x}c^{\phantom{\dag}}_{x} \right \rangle=\frac{1}{2}$. The excitation spectrum is gapless and its low-energy sector is described by a one-component Luttinger liquid. For $V>2t_{\text w} $, the 1DSF model exhibits a spontaneous broken-symmetry ground state with a CDW $n_x= \frac{1}{2} + (-1)^x \delta$ ($0< \vert \delta \vert < \frac{1}{2}$) while its excitation spectrum is gapped. $V_{\text{CDW}}=2t_{\text w}$ is the quantum critical point of the continuous quantum phase transition between the CDW and the Luttinger liquid phases. In this paper we investigate the fate of theses phases when the wire is coupled to a semiconducting substrate. DMRG allows us to compute broader ladder systems for spinless fermions than for electronic models. For this study we have used NLM with up to 15 legs. The slow increase of entanglement with the number of legs in the NLM~\cite{paper2} allows us to study large ladder widths with high accuracy and reasonable computational cost. Therefore, we can perform a more accurate study of the convergence with the number of legs and confirm that the NLM approach can describe the quasi-1D low-energy physics occurring in 3D wire-substrate systems. We demonstrate that Luttinger liquids and CDW states remain stable when coupled to a non-interacting gapped substrate and thus shed some light on these hallmarks of 1D correlated electron systems in atomic wires deposited on semiconducting substrates. \section{\label{sec:models}Models} \subsection{3D wire-substrate model \label{sec:full_model}} We start from a 3D wire-substrate model that is similar to the one introduced in our previous work~\cite{paper1}. However, we consider only spinless fermions and the 1DSF Hamiltonian~(\ref{eq:1Dspinlessmodel}) is substituted for the 1D Hubbard Hamiltonian. The full Hamiltonian takes the form \begin{eqnarray} \label{eq:hamiltonian} H&=& -t_{\text w} \sum_{x} \left ( c^{\dag}_{{\text w} x} c^{\phantom{\dag}}_{{\text w},x+1} + \text{H.c.} \right ) \nonumber \\ && + V \sum_x \left ( c^{\dag}_{{\text w}x+1}c^{\phantom{\dag}}_{{\text w}x+1} - \frac{1}{2} \right) \left( c^{\dag}_{{\text w}x}c^{\phantom{\dag}}_{{\text w}x} - \frac{1}{2} \right) \nonumber \\ && + \sum_{b, \bm{r} } \epsilon_{\text b} c^{\dag}_{{\text b}\bm{r}} c^{\phantom{\dag}}_{{\text b}\bm{r}} -t_{\text s} \sum_{\langle \bm{r} \bm{r'} \rangle} \sum_{\text{b} } \left ( c^{\dag}_{{\text b}\bm{r} } c^{\phantom{\dag}}_{{\text b}\bm{r'}} + \text{H.c.} \right ) \nonumber \\ && -t_{\text{ws}} \sum_{b, <x,r>} \left ( c^{\dag}_{{\text b} \bm{r} } c^{\phantom{\dag}}_{{\text w} x } + \text{H.c.} \right ) . \end{eqnarray} The substrate lattice is a cubic lattice of size $L_x \times L_y\times L_z$ with open boundary conditions in the $z$-direction and periodic boundary conditions in the $x$ and $y$ directions. The sum over $\bm{r}$ runs over all substrate lattice sites and the sum over $\langle \bm{r} \bm{r'} \rangle$ is over all pairs of nearest-neighbor sites in the substrate. The substrate is modeled by a tight-binding Hamiltonian with nearest-neighbor hopping $t_{\text s}$ and two orbitals per site, one for the valence band and one for the conduction band. The operator $c^{\dag}_{{\text b} \bm{r} }$ creates a spinless fermion on the site with coordinates $\bm{r} = (x,y,z)$ in the valence ($\text{b}=\text{v}$) or conduction ($\text{b}=\text{c}$) orbital. 
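As a quick numerical sanity check of this substrate model (again only a sketch with arbitrary parameter values, not taken from the paper), one can tabulate the two bands implied by the hopping term above and verify the indirect gap $\Delta_{\text s} = 2 \epsilon_{\text s} - 12 t_{\text s}$ quoted below:
\begin{verbatim}
# Sketch: tight-binding substrate bands and the resulting indirect gap
# (illustrative parameters; requires eps_s > 6 t_s for a gapped substrate).
import numpy as np

t_s, eps_s = 1.0, 7.0
k = np.linspace(-np.pi, np.pi, 61)                  # includes 0 and +/- pi
kx, ky, kz = np.meshgrid(k, k, np.linspace(0.0, np.pi, 31), indexing="ij")

band = -2.0 * t_s * (np.cos(kx) + np.cos(ky) + np.cos(kz))
e_valence = -eps_s + band                           # on-site energy -eps_s
e_conduction = +eps_s + band                        # on-site energy +eps_s

gap = e_conduction.min() - e_valence.max()
print("indirect gap:", gap, " expected:", 2 * eps_s - 12 * t_s)
\end{verbatim}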
In momentum space the single-particle dispersions take the form \begin{equation} \label{eq:disp} \epsilon_{\text{b}}(\bm{k}) = \epsilon_{\text b} - 2t_{\text s} [ \cos(k_x) + \cos(k_y) + \cos(k_z) ] , \end{equation} where $k_x, k_y \in [-\pi,\pi]$ and $k_z \in [0,\pi]$ while $\epsilon_{\text b}=\pm \epsilon_{\text s}$ denotes the on-site energies for the valence and conduction ($\text{b}=\text{v,c}$) bands. Thus there is an indirect gap $\Delta_{\text s} = 2 \epsilon_{\text s} - 12 t_{\text s} $ between the bottom of the conduction band and the top of the valence band. The spectrum is gapped only if $\epsilon_{\text s} > 6 t_{\text s}$ and this condition must be fulfilled to represent a semiconducting substrate. The wire is aligned with the substrate surface in the $x$-direction at the position $y=y_0 \in \{1,\dots,L_y\}$ and $z=0$. The operator $c^{\dag}_{{\text w} x}$ creates a spinless fermion on the wire site at the position $\bm{r} = (x,y_0,0)$. The last term in~(\ref{eq:hamiltonian}) represents the hybridization between the wire and the substrate which is a single-particle hopping $t_{\text{ws}}$ between each wire site and the adjacent substrate site at the position $\bm{r} = (x,y_0,1)$. The sums over $x$ run over all wire sites from $1$ to $L_x$. Note that the Hamiltonian~(\ref{eq:hamiltonian}) describes a single wire. Real systems of atomic wires on semiconducting substrates are made of several parallel wires. Thus we assume here that the (direct or substrate-mediated) interactions between wires can be neglected. This is justified in first approximation for Luttinger liquids and ground-state CDW phases. Several parallel wires would have to be taken into account to study quasi-1D long-range ordered phases at finite temperature, however. \subsection{Narrow ladder model \label{sec:nlm}} Applying the exact mapping introduced in our previous work~\cite{paper1} to the Hamiltonian~(\ref{eq:hamiltonian}), we get a ladder-like Hamiltonian on a 2D lattice of size $L_x \times M$ where $M=2L_yL_z+1$ is the number of legs \begin{eqnarray} \label{eq:ladder-hamiltonian} H&=& -t_{\text w} \sum_{x} \left ( g^{\dag}_{x,0} g^{\phantom{\dag}}_{x+1,0} + \text{H.c.} \right ) \nonumber \\ && + V \sum_x \left( g^{\dag}_{x+1,0}g^{\phantom{\dag}}_{x+1,0} - \frac{1}{2} \right) \left( g^{\dag}_{x,0}g^{\phantom{\dag}}_{x,0} - \frac{1}{2} \right) \nonumber \\ &&-t_{\text s} \sum^{M-1}_{n=1}\sum_{x} \left ( g^{\dag}_{xn}g^{\phantom{\dag}}_{x+1,n} +\text{H.c.}\right) \nonumber \\ &&-\sum^{M-2}_{n=0} t^{\text{rung}}_{n+1} \sum_{x}\left( g^{\dag}_{xn}g^{\phantom{\dag}}_{x,n+1}+\text{H.c.}\right). \end{eqnarray} Here, $g^{\dag}_{xn}$ creates a fermion at position $x$ in the $n$-th leg ($n=0,\dots,M-1$). The first leg ($n=0$) is identical with the wire, in particular $g^{\dag}_{x0} = c^{\dag}_{\text{w}x}$, while legs $n=1,\dots,M-1$ correspond to successive substrate shells around the wire. The Hamiltonian~(\ref{eq:ladder-hamiltonian}) consists of the original 1DSF model on the leg representing the correlated atomic wire, an intra-leg hopping $t_{\text s}$ in every leg representing the substrate, and a nearest-neighbor rung hopping $t^{\text{rung}}_n$ between substrate legs $n-1$ and $n$. The first two rung hoppings $t^{\text{rung}}_{1}=\sqrt{2}t_{\text{ws}}$ and $t^{\text{rung}}_{2}=\sqrt{3t^2_{\text s}+\epsilon_{\text s}^2}$ can be obtained algebraically. For larger $n$, $t^{\text{rung}}_{n+1}$ can be computed easily using
processes. Resolving this question is a topic for future work. We believe the work done by \citeauthor{borkar} (\citeyear{borkar}) could provide greater insight. See figure \ref{graphAdvantage}, where {Q}-learning appears to converge for a simple GridWorld domain. \section{Conclusion} In this paper, we analyzed the convergence of COnvergent Actor-Critic by Humans \cite{coach} under three types of feedback---one-step reward, policy, and advantage feedback. These are all examples of feedback a human trainer might give. We defined a COACH variant called E-COACH and demonstrated its convergence under these types of feedback. Original COACH, unfortunately, does not necessarily converge to an optimal policy under the feedback types defined in this paper. In addition, we compared the new E-COACH with two algorithms: {Q}-learning and TAMER. TAMER does poorly under one-step-reward feedback. And {Q}-learning appears to converge to optimal behavior under one-step-reward and policy feedback, but future work is required to determine its performance under advantage feedback. \section{Experiments} \label{experimentsSection} We used the simple\_rl framework (zzz ref) to test the performance of E-COACH in different grid worlds under different feedback schemes. To generate an optimal policy for action feedback, we ran value iteration (VI) on the MDP for a large number of iterations (zzz be more specific) before the start of learning. reward obtained by {Q}-learning (zzz why Q learning? Why not show pi start?) and a random agent to that of the three E-COACH agents. \begin{figure} \centering \begin{subfigure} \centering \includegraphics[width=0.4\textwidth]{content/graphics/average_reward3.pdf} \caption{E-COACH on $3 \times 3$ Gridworld with Lava} \label{3by3} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.4\textwidth]{content/graphics/average_reward10.pdf} \caption{E-COACH on $10 \times 10$ Gridworld with Lava} \label{10by10} \end{subfigure} \end{figure} As seen in the results in Figure~\ref{3by3} and \ref{10by10}, the two E-COACH agents perform nearly optimally. In the $3\times 3$ gridworld, the two E-COACH agents converge at a slower rate than {Q}-learning but eventually catch up with it. In the $10\times 10$ gridworld, the two E-COACH agents consistently outperform {Q}-learning. It is interesting to note that the graphs for the two E-COACH agents are very similar even though they receive very different feedback. (zzz why the difference relative to {Q}-learning? Why did 3x3 run longer than the more complex 10x10?) \subsection{E-COACH Under Policy Feedback} \label{indicatorSection} Let $M_1 = \langle S, A, R, T, \gamma \rangle$ be an MDP without any specific reward function. Under \emph{policy feedback}, a trainer has a target stationary deterministic policy $\pi^*_1$ in mind and delivers feedback based on whether the trainer's decision agrees with $\pi^*_1$. When an agent takes an action $a$ in state $s$, the trainer will give feedback \begin{align*} f(s, a) = I(s,a), \end{align*} with $I(s,a)$ defined as, \begin{align*} I(s, a) = \begin{cases} 1, & \textnormal{if } \pi^*_1(s) = a,\\ 0, & \textnormal{otherwise.} \end{cases} \end{align*} \thmNum{indicatorThmNum} \textbf{Theorem \ref{indicatorThmNum}:} E-COACH converges under feedback $f(s, a) = I(s,a),\forall \> s\times a \in S \times A$. \textbf{Proof:} Consider the case of replacing the reward function $R(s,a)$ with $I(s,a)$ in MDP $M_{1}$, constructing a new MDP $M_2 = \langle S, A, f, T, \gamma \rangle$. 
We would like to show that, in this setting, the E-COACH algorithm converges to the optimal solution. $M_1$ and $M_2$ satisfy the prerequisites for theorem \ref{genThmNum}. Consider the optimal policy for $M_2$. The best policy will select the best action in every state. We have that $V^*_2(s_0) = \sum_{i = 0}^\infty 1 \cdot \gamma^i$. The optimal policy for $M_2$ will achieve this value function because, if not, then we have a policy such that $V_2'(s_0) = \sum_{i = 0}^\infty t(i) \cdot 1 \cdot \gamma^i$, where $t(i) \in \set{0, 1} \forall \> i$ and $t(j) = 0$ for some $j$. Take the smallest value $k \in \N$ such that $t(k) = 0$, then $V^*_2 (s_0) - V_2'(s_0) \geq \gamma^k$ so then the policy achieving $V_2'$ is sub-optimal. We can conclude that $V^*_2(s_0)$ is the value function for the optimal policy. Therefore, the policy that always chooses the action that gives a value of one is optimal. Also, note that always choosing the action that results in a feedback of one corresponds exactly to the decision of $\pi_1^*$ by construction of $f(s, a)$. So, we obtain that $\pi_1^*(s, a) = \pi_2^*(s, a), \forall \> (s, a) \in S \times A$. In other words, an optimal policy in the new domain is equivalent to the target policy from the original one. We can leverage Theorem~\ref{genThmNum} to show that the algorithm converges under policy feedback. \newline \strut\hfill $\Box$ \section{E-COACH Under Policy Feedback} To argue that E-COACH converges under policy feedback, we first consider a more general form of feedback and then show policy feedback is a special case. \subsection{E-COACH with a More General Type of Feedback} \label{generalization} Let us start by considering two similar MDPs $M_1 = \langle S, A, R, T, \gamma\rangle$ and $M_2 = \langle S, A, f, T, \gamma\rangle$. Note the differing reward functions $R$ and $f$ in the two MDPs. We will denote the value functions for $M_1$ and $M_2$ as $V_1$ and $V_2$, respectively. We will say that the starting state for both of our MDPs is $s_0$. Define $V_1^{\min} = \min_{\pi \in \Pi} V_1^\pi (s_0)$, $V_1^{*} = \max_{\pi \in \Pi} V_1^\pi(s_0)$. The following theorem will have the following assumption: \begin{enumerate} \item E-COACH (see algorithm \ref{coachAlgorithm}) will give us a policy $\pi_2(s, a)$ such that $\mathbb{E}_{s \sim d^{\pi_2^*}} \big[\sum_a\big|\pi_2^*(s, a) - \pi_2(s, a)\big|\big] \leq \delta$ for some optimal policy $\pi_2^*$ on the domain $M_2$. The proof in section \ref{oneStepRewardSection} strengthens this assumption by showing that E-COACH optimizes the policy gradient objective. Note $\pi_2^*$ may not be the only optimal policy; instead, it is a single optimal policy. \label{kavoshAssumption} \item We also assume that $\gamma \neq 1$ for the case where the MDP has an infinite horizon, which will we will justify later on. \label{lambdaNotOneAssumption} \end{enumerate} Theorem \ref{genThmNum} requires the condition that all optimal policies for $M_2$ are also optimal for $M_1$. We will later show that this condition holds true for the case of policy feedback in theorem \ref{indicatorThmNum}, allowing us to leverage these results. \thmNum{genThmNum} \textbf{Theorem~\ref{genThmNum}:} If all optimal policies for $M_2$ are also optimal for $M_1$ (optimal policies of $M_2$ are a subset of those for $M_1$), then running E-COACH on $M_2$ will result in a policy that is close to an optimal policy on $M_2$, which will also be close to an optimal policy for $M_1$. Let's define $W = \max(|V_1^*|, |V_1^{\min}|)$. 
Then we find that, \begin{align*} 0 \leq V_1^* - V_1^{\pi_2} \leq W \delta \end{align*} \textbf{Proof:} We have to show that running E-COACH in $M_2$ will yield a policy that is not too far off from an optimal policy for $M_1$. We would like to run E-COACH on $M_2$, using the alternate form of feedback as the reward function, and for any good policy (as per assumption \ref{kavoshAssumption}) we get from E-COACH on $M_2$, we would like for that policy to also be good on $M_1$, the original MDP we are trying to solve. The lower-bound in the theorem statement is immediate. For the upper-bound, let's let $\pi^{(n)}$ denote a policy that follows/simulates $\pi_2^*$ for the first $n - 1$ time-steps and $\pi_2$ for the rest. Hence, on the $n^\textnormal{th}$ time-step, $\pi^{(n)}$ will follow/simulate $\pi_2$ and not $\pi_2^*$. Let $V^{(n)}$ denote the value of policy $\pi^{(n)}$. Therefore, we can say that $V_1^{\pi_2} = V^{(0)}$ and $V_1^* = V^{(\infty)}$. Remember that $\pi_{2}^{*}$ is optimal on $M_1$ \emph{and} $M_2$ by the condition above, and thus has value $V^{*}$. We'll start by considering $V^{(t)} - V^{(t - 1)}$. Both $\pi^{(t)}$ and $\pi^{(t - 1)}$ accumulate the same expected reward for the first $t - 2$ steps and so these rewards cancel out. Note that the $\mathbb{P}$ we use below is the same as that defined in section \ref{obj-func}. We find the following: \begin{align*} V^{(t)} - &V^{(t - 1)} = \gamma^{t - 1} \sum_s \mathbb{P}_{t - 1}^{\pi_2^*}(s) \sum_a \pi_2^*(s,a) Q^{\pi_2}(s, a) \\ - &\gamma^{t - 1} \sum_s \mathbb{P}_{t - 1}^{\pi_2^*}(s) \sum_a \pi_2(s, a) Q^{\pi_2}(s, a) \\ = &\gamma^{t - 1} \sum_s \mathbb{P}_{t - 1}^{\pi_2^*}(s) \sum_a (\pi_2^*(s, a) - \pi_2(s, a)) Q^{\pi_2}(s, a) \\ \leq &\gamma^{t
point of \eqref{eq:problem_setting}. Then $p_*$ is locally optimal for \eqref{eq:problem_setting} if and only if $v_* = 0 \in \tangentspace{p_*}{\cM}$ is a local minimizer of \eqref{eq:problem_setting_Retraction}. In this case, when \eqref{eq:ZKRCQ} holds at~$p_*$, then there exists $\mu \in \cotangentspace{g(p_*)}{\cN}$ such that \begin{align*} \bf'(0_{p_*}) + \mu \, \bg'(0_{p_*}) & = 0 \quad \text{in } \cotangentspace{p_*}{\cM} , \\ \mu \in \paren[big](){\tangentcone{0_{p_*}}{\bK}}^\circ & = \paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ . \end{align*} \end{theorem} \begin{proof} Clearly, $0_{p_*} \in \tangentspace{p_*}{\cM}$ is a local minimizer of \eqref{eq:problem_setting_Retraction} if and only if $p_*$ is a local minimizer of \eqref{eq:problem_setting}. Moreover, by the chain rule, using \cref{item:definition:retraction:diff} of $\retract{p_*}$ and \cref{item:definition:linearizing_map:Diff} of $S_{g(p_*)}$: \begin{equation*} \bf'(0_{p_*}) = f'(p_*) , \quad \bg'(0_{p_*}) = g'(p_*) , \quad \tangentcone{0_{p_*}}{\bK} = \innertangentcone{g(p_*)}{\cK} . \end{equation*} Thus, our conditions directly follow from \eqref{eq:KKT_conditions}. \end{proof} As an alternative approach, we can apply a classical theorem on KKT conditions to \eqref{eq:problem_setting_Retraction_classic} and obtain \begin{equation}\label{eq:KKT_retractions_explicit} \begin{aligned} \bf'(0_{p_*}) + \lambda_I^\transp A_I \, \bg'(0_{p_*}) + \lambda_E^\transp A_E \, \bg'(0_{p_*}) & = 0 , \\ \lambda_I & \ge 0 , \end{aligned} \end{equation} with $\lambda_I \in \R^\ell$ and $\lambda_E \in \R^{n-k}$, which depend on the choice of $A_I$ and $A_E$. By invariance, the first row equivalently yields: \begin{equation*} f'(p_*) + \lambda_I^\transp A_I \, g'(p_*) + \lambda_E^\transp A_E \, g'(p_*) = 0 \end{equation*} and thus by comparison, \begin{equation*} \mu = \lambda_I^\transp A_I + \lambda_E^\transp A_E \in \paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ . \end{equation*} We emphasize that the number of rows in $A_I$, which is equal to the index $\ell$ of the corner $g(p_*)$, depends on $g(p_*)$. Thus, there is no further distinction necessary between active and inactive constraints, because this is already built into the local representation of $\cK$. The formulation \eqref{eq:KKT_retractions_explicit} allows us to split the given constraints into individual components and to distinguish strongly active and weakly active constraints, according to the structure of $\lambda_I$. \begin{definition} We call the $i$-th constraint $(A_I)_i \, \bg \le 0$ weakly active at $(p_*,\lambda_I,\lambda_E)$ if $(\lambda_I)_i = 0$ holds, and strongly active in case $(\lambda_I)_i > 0$. \end{definition} Observe that this definition does not depend on the particular choice of $A_I$. If $A_I$ is premultiplied by a positive diagonal matrix, then the notion of weak and strong activity of $(A_I)_i$ is not changed. \section{Lagrangian Functions} \label{section:Lagrange_function} When $\cN = V$ is a normed linear space with dual space $V^*$ and $g \colon \cM \to V$, then a Lagrangian function for our problem \eqref{eq:problem_setting} with Lagrange multiplier $\mu \in V^*$ can be defined as usual: \begin{equation*} L \colon \cM \times V^* \ni (p,\mu) \mapsto L(p,\mu) \coloneqq f(p) + \mu(g(p)) \in \R . \end{equation*} However when $\cN$ is a nonlinear manifold, then $\mu$ cannot be defined as a linear functional on $\cN$. 
Rather, we need to replace it with a function $h \in C^1(\cN,\R)$ and define \begin{equation*} L \colon \cM \times C^1(\cN,\R) \ni (p,h) \mapsto L(p,h) \coloneqq f(p) + h(g(p)) \in \R \end{equation*} as a Lagrangian function. In the following we will consider $h$ fixed and regard the mapping $p \mapsto L(p,h)\colon \cM \to \R$ as a function in $p$. Its derivative $L'$ is given by \begin{equation*} L'(p,h) \coloneqq \frac{\d}{\d p} L(p,h) = f'(p) + h'(g(p)) \, g'(p) . \end{equation*} For these derivatives to be well-defined at a point $p$, it is enough that $h$ is defined in some neighborhood of $p$. We can observe two things. First, $\mu \coloneqq h'(g(p)) \in \cotangentspace{g(p)}{\cN}$ can be interpreted as a Lagrange multiplier; second, $L'(p,h)$ only depends on $\mu = h'(g(p))$ and not on the particular choice of $h$. The paragraph above explains how to obtain $\mu$ from $h$. Conversely, let $p_* \in \cM$ be fixed and $q_* = g(p_*)$. In view of the KKT-conditions \eqref{eq:KKT_conditions} we would like to extend a Lagrange multiplier $\mu\in \cotangentspace{q_*}{\cN}$ locally to a nonlinear function $h$ on a neighbourhood of $q_*$ such that $h'(q_*) = \mu$ holds. This can be achieved by using a linearizing map $S_{q_*}$ about $q_*$ and defining $h \coloneqq \mu \circ S_{q_*}$. Then we obtain a Lagrangian function of the form \begin{equation*} L_{S_{q_*}}(p,\mu) \coloneqq L(p,\mu \circ S_{q_*}) = f(p) + \mu \circ S_{q_*} \!\! \circ g(p) . \end{equation*} Since $h'(q_*) = \mu \circ DS_{q_*}(q_*) = \mu$, we obtain with this definition of $h$: \begin{equation} \label{eq:coincidence_of_Lagrangian_functions} L'_{S_{q_*}}(p_*,\mu) = f'(p_*) + \mu \, g'(p_*) = L'(p_*,h) . \end{equation} Alternatively we may define Lagrangian functions near $p_*$ with $q_* = g(p_*)$ via pull-backs: \begin{equation*} \begin{aligned} & \bL \colon \tangentspace{p_*}{\cM} \times \cotangentspace{q_*}{\cN} \to \R \\ & (v,\mu) \mapsto \bL(v,\mu) \coloneqq \bf(v) + \mu(\bg(v)) = (f \circ \retract{p_*})(v) + (\mu \circ S_{q_*} \!\! \circ g \circ \retract{p_*})(v) \end{aligned} \end{equation*} with derivative \begin{equation*} \bL'(v,\mu) = \bf'(v) + \mu \, \bg'(v) \quad \text{and thus} \quad \bL'(0_{p_*},\mu) = f'(p_*) + \mu \, g'(p_*). \end{equation*} It is therefore justified to define the derivative of the Lagrangian function in the following way: \begin{equation} \label{eq:coincidence_of_Lagrangian_functions_2} \begin{aligned} L'(p_*,\mu) \coloneqq f'(p_*) + \mu \, g'(p_*) = \bL'(0_{p_*},\mu) = L'_{S_{q_*}}(p_*,\mu) = L'(p_*,h) \\ \text{for } \mu = h'(q_*) , \end{aligned} \end{equation} independently of the choice of the retraction $\retract{p_*}$, linearizing map~$S_{q_*}$, and $h$, as long as $\mu = h'(q_*)$. Utilizing the identifications $\mu \coloneqq h'(g(p_*))$ and $h \coloneqq \mu \circ S_{g(p_*)}$, we find that the KKT conditions \eqref{eq:KKT_conditions} can equivalently be written in the familiar way: \begin{subequations}\label{eq:KKT_conditions_pull-back} \begin{align} \label{eq:KKT_conditions_pull-back1} & L'(p_*,\mu) = 0 \quad \text{on } \cotangentspace{p_*}{\cM} , \\ \label{eq:KKT_conditions_pull-back2} & \mu \in \paren[big](){\innertangentcone{g(p_*)}{\cK}}^\circ . \end{align} \end{subequations} \section{The Critical Cone} \label{section:Critical_cone} To derive second-order optimality conditions, we need a definition of the critical cone at a KKT point $p_*$ as a subset of the tangent cone~$\tangentcone{p_*}{\cF}$. Suppose that $(p_*,\mu)$ satisfies the KKT conditions \eqref{eq:KKT_conditions}. 
We define the critical cone at $p_*$ as \begin{align*} \criticalcone{\cM} & \coloneqq \setDef{v \in \tangentspace{p_*}{\cM}}{g'(p_*) \, v \in \innertangentcone{g(p_*)}{\cK} \text{ and } f'(p_*) \, v = 0} \\ & \mrep[r]{{}={}}{{}\coloneqq{}} \setDef{v \in \tangentspace{p_*}{\cM}}{g'(p_*) \, v \in \innertangentcone{g(p_*)}{\cK} \text{ and } \mu \, g'(p_*) \, v = 0} . \end{align*} We also introduce the definition \begin{align*} \criticalcone{\cN} & \coloneqq g'(p_*) \, \criticalcone{\cM} = \setDef{w \in \innertangentcone{g(p_*)}{\cK}}{\dual{\mu}{w} = 0} \\ & \mrep[r]{{}={}}{{}\coloneqq{}} \setDef{w \in \innertangentcone{g(p_*)}{\cK}}{(A_I)_j w = 0 \text{ for all } j = 1, \dots, \ell \text{ such that } (\lambda_I)_j = 0} , \end{align*} where $(A_I)_j$ are the components of the mapping $A_I \colon \tangentspace{g(p_*)}{\cN} \to \R^\ell$ used in \eqref{eq:problem_setting_Retraction_classic}. Then we can write $\mu \in \paren[big](){\Span \criticalcone{\cN}}^\circ$ for any Lagrange multiplier $\mu \in \Lambda(p_*)$. The following considerations will be useful for the discussion of second-order conditions: \begin{lemma}\label{lemma:PhiCone} Suppose that $X$ is a normed linear space and $U$, $V$ are open neighborhoods of $0 \in X$. Consider a diffeomorphism $\Phi \colon U \to V$ such that $\Phi(0) = 0$ and $\Phi'(0) = \id_X$ hold. Let $K$ be a polyhedral cone of the form \begin{equation*} K = \setDef{v \in X}{A_I \, v \le 0, \; A_E \, v = 0} \end{equation*} with linear maps $A_I \colon X \to \R^{n_I}$ and $A_E \colon X \to \R^{n_E}$. Suppose that \begin{equation*} \Phi \colon K \cap U \to K \cap V \end{equation*} is bijective. Select a row $a_j = (A_I)_j$ and define the facet \begin{equation*} K_j = \setDef{v \in X}{A_I \, v \le 0, \; A_E \, v = 0, \; a_j v = 0} . \end{equation*} Then there are neighborhoods $\tilde U$ and $\tilde V$ of $0$ such that \begin{equation*} \Phi \colon K_j \cap \tilde U \to K_j \cap \tilde V \end{equation*} is also bijective. \end{lemma} \begin{proof} We may assume \wolog that $\tilde U = U = B_r(0)$ is an open ball of radius~$r$ about~$0$. Since $\Phi$ is a homeomorphism and thus preserves boundaries of sets, we conclude in particular that \begin{equation*} \Phi \colon \partial K \cap U \to \partial K \cap V \end{equation*} is also a homeomorphism. Consider now the \enquote{open} facet \begin{equation*} \tilde K_j = \setDef{v \in K_j}{(A_I)_\ell \, v < 0 \text{ for all } \ell \neq j} , \end{equation*} which is a relatively open subset of $\partial K$. Then $U \cap \tilde K_j$ is a connected set, because $U$ and $\tilde K_j$ are both connected and convex. The continuity of $\Phi$ implies that $\Phi(U \cap \tilde K_j)$ is connected as well. However, the arbitrary union of two (or more) distinct open facets is not connected because each $\tilde K_j$ is a relatively open subset of this union. Hence, $\Phi(U \cap \tilde K_j)$ is a subset of an open facet $\tilde K_\ell$ and it remains to show $j = \ell$. Since $\Phi'(0) = \id_X$ holds, we find that \begin{equation*} \Phi'(0) \colon \tilde K_j \to \tilde K_j \end{equation*} is bijective. Using the differentiability of~$\Phi$ this implies that there exists $x_0 \in \tilde K_j$ such that $\Phi(x_0) \in \tilde K_j$ holds. We thus conclude that $\Phi(U \cap \tilde K_j) \subset \tilde K_j$. Picking some $B_\rho(0) \subset V$ we can show by the same argumentation \begin{equation*} \Phi^{-1}(B_\rho(0) \cap \tilde K_j) \subset \tilde K_j \cap U \end{equation*} and thus $B_\rho(0) \cap \tilde K_j \subset \Phi(\tilde K_j \cap U)$. 
Thus, $\Phi(U \cap \tilde K_j)$ can be written as $\tilde K_j \cap \tilde V$, where $\tilde V$ is a neighborhood of $0$. \end{proof} This lemma can also be applied recursively to subfacets of $K$. Hence, after finitely many steps of application, we conclude in particular that there are neighborhoods $U$ and $V$ of $0$ such that $\Phi$ maps $\criticalcone{\cN} \cap U$ bijectively onto $\criticalcone{\cN} \cap V$. \begin{lemma}\label{lemma:ThetaIntoCone} Consider two adapted linearizing maps $S_{q,1}$ and $S_{q,2}$ and the transition map~$\Theta \coloneqq S_{q,1} \circ S_{q,2}^{-1}$. Then \begin{equation*} \begin{aligned} v \in \innertangentcone{q}{\cK} \quad & \Rightarrow \quad \Theta''(0_q)[v, v] \in \tangentspace{q}{\cK} , \\ v \in \criticalcone{\cN} \quad & \Rightarrow \quad \Theta''(0_q)[v, v] \in \Span \criticalcone{\cN} . \end{aligned} \end{equation*} \end{lemma} \begin{proof} Consider any cone $K \subset \tangentspace{q}{\cN}$ such that $\Theta$ maps $K$ into $K$. Since $\Theta(0_q) = 0_q$ and $\Theta'(0_q) = \id_{\R^n}$ hold, we can compute \begin{equation*} \Theta''(0_q)[v, v] = \lim_{t \to 0} t^{-2} \paren[auto](){\Theta(t \, v) - \Theta(0_q) - \Theta'(0_q) \, t \, v} = \lim_{t \to 0} t^{-2} \paren[auto](){\Theta(t \, v) - t \, v} . \end{equation*} Since both $\Theta(t \, v)$ and $t \, v$ belong to $K$, the difference $\Theta(t \, v) - t \, v$ lies in $\Span K$, and since $\Span K$ is a closed subspace, so does the limit $\Theta''(0_q)[v, v]$. The two implications follow by choosing $K = \innertangentcone{q}{\cK}$ and $K = \criticalcone{\cN}$, respectively. \end{proof}
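As a toy illustration of the objects above (all data in this example, namely $\cM$, $\cN$, $f$, $g$, $\cK$ and $p_*$, are chosen ad hoc and only for illustration), let $\cM = \R^2$ and $\cN = \R$ with the canonical retraction $\retract{p_*}(v) = p_* + v$ and linearizing map $S_{q_*}(q) = q - q_*$, and consider \begin{equation*} f(p) = p_1 + p_2^2 , \qquad g(p) = p_1 , \qquad \cK = [0,\infty) . \end{equation*} At the minimizer $p_* = (0,0)$ we have $q_* = g(p_*) = 0$, $f'(p_*) = (1,0)$ and $g'(p_*) = (1,0)$, so that $L'(p_*,\mu) = (1+\mu)\,(1,0)$ vanishes precisely for $\mu = -1$; since $\innertangentcone{q_*}{\cK} = [0,\infty)$ has polar cone $(-\infty,0]$, the pair $(p_*,\mu) = ((0,0),-1)$ satisfies \eqref{eq:KKT_conditions_pull-back}. The critical cones become \begin{equation*} \criticalcone{\cM} = \setDef{v \in \R^2}{v_1 \ge 0 , \; v_1 = 0} = \{0\} \times \R , \qquad \criticalcone{\cN} = g'(p_*) \, \criticalcone{\cM} = \{0\} , \end{equation*} in agreement with the characterization $\dual{\mu}{w} = -w = 0$ for $w \in \criticalcone{\cN}$.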
project is using DX11.1? Also here is my shader code, though I doubt the issue is coming from there:

//
// Particle effect using geometry shader and stream out
// 2013 Christoph Romstoeck (lwm)
//

Texture2D<float4> Texture;

SamplerState linearSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};

#define FLAG_CONSTRAINED 1
// Hypothetical flag, used below only as a placeholder where the original condition was lost.
#define FLAG_SHARP_FADE 2

cbuffer EveryFrame : register(b0)
{
    float4x4 View;
    float4x4 Projection;
    float4x4 LookAt;
    float3 CamDir;
    float Time;
    float3 Gravity;
};

struct ParticleVertex
{
    float3 Position : POSITION;
    float3 Velocity : NORMAL;
    float4 Color : COLOR;
    // (age, lifetime) pair; this field is referenced below and streamed out as TEXCOORD0,
    // but was missing from the struct as posted -- restored here.
    float2 TimerLifetime : TEXCOORD0;
    uint Flags : TEXCOORD1;
    float2 SizeStartEnd : TEXCOORD2;
};

struct ParticleVertexGsUpdateOut
{
    float4 Position : SV_POSITION;
    float3 Velocity : NORMAL;
    float4 Color : COLOR;
    float2 TimerLifetime : TEXCOORD0; // restored to match the stream-out declaration below
    uint Flags : TEXCOORD1;
    float2 SizeStartEnd : TEXCOORD2;
};

struct ParticleVertexGsOut
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR;
    float2 TexCoord : TEXCOORD0;
    float4 PositionVS : TEXCOORD1;
};

// ===

// Vertex shader has no work to do.
// Simply pass vertex on to the next stage.
ParticleVertex VS_Passthrough(ParticleVertex v)
{
    return v;
}

// Geometry shader to update one particle.
[maxvertexcount(1)]
void GS_Update(point ParticleVertex vertex[1], inout PointStream<ParticleVertexGsUpdateOut> stream)
{
    ParticleVertex input = vertex[0];

    // Calculate new age of the particle.
    float newTimer = input.TimerLifetime.x + Time;

    // If the particle is older than its lifetime, don't do anything (drop it from the stream).
    // The if-condition was lost in the post; this guard is a reconstruction.
    if (newTimer > input.TimerLifetime.y)
        return;

    // Calculate new position by adding the particle's velocity.
    float3 newPosition = input.Position + input.Velocity * Time;

    // Calculate new velocity by adding the world's gravity.
    float3 newVelocity = input.Velocity + Gravity * Time;

    ParticleVertexGsUpdateOut output;
    output.Position = float4(newPosition, 1);
    output.Velocity = newVelocity;
    output.Color = input.Color;
    output.TimerLifetime = float2(newTimer, input.TimerLifetime.y); // carry the updated age forward
    output.Flags = input.Flags;
    output.SizeStartEnd = input.SizeStartEnd;

    // Append updated particle to output stream.
    stream.Append(output);
}

// pGSComp was not declared anywhere in the post; presumably it is the compiled update GS.
GeometryShader pGSComp = CompileShader(gs_4_0, GS_Update());
GeometryShader pGSwSO = ConstructGSWithSO(pGSComp, "SV_POSITION.xyz; NORMAL.xyz; COLOR.xyzw; TEXCOORD0.xy; TEXCOORD1.x; TEXCOORD2.xy");

technique11 UpdateTeq
{
    pass Pass1
    {
        // The pass body was empty in the post; a typical stream-out setup would look like this.
        SetVertexShader(CompileShader(vs_4_0, VS_Passthrough()));
        SetGeometryShader(pGSwSO);
        SetPixelShader(NULL);
    }
}

// ===============================================

[maxvertexcount(4)]
void GS_Render(point ParticleVertex inputArray[1], inout TriangleStream<ParticleVertexGsOut> stream)
{
    ParticleVertex input = inputArray[0];

    // Calculate the particle's age in [0..1].
    // This line was missing in the post; 'age' is used below, so it is reconstructed here.
    float age = input.TimerLifetime.x / input.TimerLifetime.y;

    ParticleVertexGsOut v;

    // Determine the particle's color based on its age.
    v.Color = input.Color;
    // The original if-condition selecting between the two fade curves was lost in the post;
    // the FLAG_SHARP_FADE test below is only a hypothetical placeholder.
    if ((input.Flags & FLAG_SHARP_FADE) != 0)
        v.Color.a *= (-(256 * 256) * pow(age - 0.5f, 16) + 1);
    else
        v.Color.a *= (-4 * (age - 0.5f) * (age - 0.5f) + 1);

    // Calculate the particle's current size.
    float2 size = lerp(input.SizeStartEnd.x, input.SizeStartEnd.y, age);

    // Check if one of the quad's axes should be constrained to the particle's velocity.
    bool constrained = (input.Flags & FLAG_CONSTRAINED) > 0;

    float3 right, up;
    if(constrained)
    {
        right = normalize(input.Velocity);
        up = cross(CamDir, right) * size.y;
        right *= size.x;
    }
    else
    {
        // These were declared float2 in the post, which cannot hold a float4 or be
        // multiplied by a float4x4; they must be float4.
        float4 xr = float4(size.x, 0, 0, 1);
        float4 yr = float4(0, size.y, 0, 1);
        right = mul(xr, LookAt).xyz;
        up = mul(yr, LookAt).xyz;
    }

    // Create and append four vertices to form a quad.
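    // Each corner below is offset from the particle position by +/-right and +/-up,
    // transformed to view space and then to clip space, and given texture coordinates
    // covering the full sprite.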
    float4 positionWS = float4(input.Position + right + up, 1.f);
    v.PositionVS = mul(positionWS, View);
    v.Position = mul(v.PositionVS, Projection);
    v.TexCoord = float2(1, 1);
    stream.Append(v);

    positionWS = float4(input.Position - right + up, 1.f);
    v.PositionVS = mul(positionWS, View);
    v.Position = mul(v.PositionVS, Projection);
    v.TexCoord = float2(0, 1);
    stream.Append(v);

    positionWS = float4(input.Position + right - up, 1.f);
    v.PositionVS = mul(positionWS, View);
    v.Position = mul(v.PositionVS, Projection);
    v.TexCoord = float2(1, 0);
    stream.Append(v);

    positionWS = float4(input.Position - right - up, 1.f);
    v.PositionVS = mul(positionWS, View);
    v.Position = mul(v.PositionVS, Projection);
    v.TexCoord = float2(0, 0);
    stream.Append(v);

    stream.RestartStrip();
}

// Simple pixel shader to render the particles.
float4 PS_Render(ParticleVertexGsOut input) : SV_Target
{
    float4 tex = Texture.Sample(linearSampler, input.TexCoord);
    return tex * input.Color;
}

technique11 RenderTeq
{
    pass Pass1
    {
        // The pass body was empty in the post; a typical setup would look like this.
        SetVertexShader(CompileShader(vs_4_0, VS_Passthrough()));
        SetGeometryShader(CompileShader(gs_4_0, GS_Render()));
        SetPixelShader(CompileShader(ps_4_0, PS_Render()));
    }
}

### #34 unbird - Posted 26 July 2013 - 10:58 AM

@Guo-Leo: Well, yes, more or less. The sample uses additive blending, that's about the simplest one with respect to depth. E.g. for alpha-blended particles one needs to draw them back to front (farthest first), so you have to sort them according to distance. There's also so-called soft particles (google it). I recently stumbled upon this blog post which is a quite interesting read.

@Telanor: I'm really sorry to tell you I'm out of clues or helpful suggestions. I'm still using VS 2010, vanilla D3D11 and PIX. Yesterday I heard from NightCreature about the inconveniences of the graphics debugger. Your suspicion about D3D11.1 might as well be true. Looking at your shader I have one final thought though: This looks like you're using the D3DX effect system (technique11, not the SharpDX effect system). Is it? When I played with alternative tools, or sometimes even with PIX, I had troubles. I also wonder how well it behaves with the newer runtimes. Try using bare shaders for a change. Good luck.

### #35 Telanor - Posted 27 July 2013 - 03:10 PM

I may give that a try but I'm not so sure it'll help. I've been using the effects system for all my shaders so far with no problem. When I get a chance I'll set up the particle system in a separate project and switch between DX11 and 11.1 and see if that makes any difference. Thanks for trying though, unbird, I appreciate it.

### #36 GuoLei007 - Posted 28 July 2013 - 05:40 PM

@Guo-Leo: Well, yes, more or less. The sample uses additive blending, that's about the simplest one with respect to depth. E.g. for alpha-blended particles one needs to draw them back to front (farthest first), so you have to sort them according to distance. There's also so-called soft particles (google it). I recently stumbled upon this blog post which is a quite interesting read.

@Telanor: I'm really sorry to tell you I'm out of clues or helpful suggestions. I'm still using VS 2010, vanilla D3D11 and PIX. Yesterday I heard from NightCreature about the inconveniences of the graphics debugger. Your suspicion about D3D11.1 might as well be true. Looking at your shader I have one final thought though: This looks like you're using the D3DX effect system (technique11, not the SharpDX effect system). Is it? When I played with alternative tools, or sometimes even with PIX, I had troubles.
I also wonder how well it behaves with the newer runtimes. Try using bare shaders for a change. Good luck.

Thank you for your guidance! You are great! Thank you very much! I will go on studying with your guidance.

### #37 Krohm - Posted 29 July 2013 - 02:22 AM

I just want to make a clarification:

Also, on the topic of gpu particles, I've seen some systems that use directcompute instead. Am I correct in assuming that only works on feature level 11 and up?

No. GPU particles have been around for a while, they are viable on anything that is Shader Model 3.0 and later - google "Lutz Latta" and "Building a million particle system". More advanced hardware makes everything much more streamlined and eventually makes it viable on a meaningful installed base.

### #38 unbird - Posted 02 August 2013 - 02:07 PM

Also, on the topic of gpu particles, I've seen some systems that use directcompute instead. Am I correct in assuming that only works on feature level 11 and up?

No, you can use compute shaders with feature level 10.x (cs_4_0) with a couple of restrictions. Also, so-called Append/Consume buffers are restricted to shader model 5, which would be nice for particles (see the Hieroglyph engine particle sample).

Edit: Oh, well, that was actually an old question and has sort of been answered already. I really should stop doing this echoing thing.

Edited by unbird, 02 August 2013 - 03:16 PM.

### #39 Telanor - Posted 08 August 2013 - 08:57 PM

So I went and rebuilt the sample in a stand-alone project using plain sharpdx without the toolkit. It runs fine and the graphics debugger works. I tested with both DX11 and DX11.1, no problems. So I guess there's some kind of strange issue occurring elsewhere in my project which is somehow breaking the particle code. I guess the only thing I can do is turn off systems and see if it starts working.

### #40 Telanor - Posted 08 August 2013 - 11:31 PM

While setting up the stand-alone project, I had to set the gravity/deltaTime to 0 because otherwise the particles randomly flew off the screen at insane speeds. Turns out I forgot the gravity parameter was a Vector3 and was passing just a float. For some reason, SharpDX has a Set(float dataRef) overload which was being called. So the particles now render correctly in my main project (yay!). I also went through and commented out all the other draw calls but the VS debugger *still* crashes instantly on start up, so I really have no idea what its problem is. Nvidia Nsight now supports DX11.1 and runs but says it can't find debug info for the shaders (it is there) so I can't debug with that either. I guess I'll just have to use the separate project if I ever need to debug the particles.
\section{Introduction} In the past few years, two interesting theories for investigating the violation of Lorentz Invariance (LI) have been proposed. One is the so-called Doubly Special Relativity (DSR) \cite{Amelino1,Amelino2,Amelino3,Smolin1,Smolin2}. This theory takes Planck-scale effects into account by introducing an invariant Planckian parameter in the theory of special relativity. The other is the so-called Very Special Relativity (VSR) developed by Cohen and Glashow \cite{Glashow}. This theory suggests that the exact symmetry group of nature may be isomorphic to a subgroup SIM(2) of the Poincare group. The semi-direct product of SIM(2) with the spacetime translation group gives an 8-dimensional subgroup of the Poincare group called ISIM(2) \cite{Kogut}. Under the symmetry of ISIM(2), the CPT symmetry is preserved and many empirical successes of special relativity are retained. Recently, physicists found that the two theories mentioned above are related to Finsler geometry. Girelli, Liberati and Sindoni \cite{Girelli} showed that the modified dispersion relation (MDR) in DSR can be incorporated into the framework of Finsler geometry. The symmetry of the MDR was described in the Hamiltonian formalism. Also, Gibbons, Gomis and Pope \cite{Gibbons} showed that the Finslerian line element $ds=(\eta_{\mu\nu} dx^\mu dx^\nu)^{(1-b)/2}(n_\rho dx^\rho)^b$ is invariant under the transformations of the group DISIM$_b(2)$ (a 1-parameter family of deformations of ISIM(2)). Finsler geometry, as a natural generalization of Riemann geometry, could provide new insight into modern physics. Models based on Finsler geometry could explain recent astronomical observations which Einstein's gravity could not. An incomplete list includes: the flat rotation curves of spiral galaxies can be deduced naturally without invoking dark matter \cite{Finsler DM}; the anomalous acceleration \cite{Anderson} in the solar system observed by the Pioneer 10 and 11 spacecraft can be accounted for by a special Finsler space, the Randers space \cite{Finsler PA}; the secular trend in the astronomical unit \cite{Krasinsky,Standish} and the anomalous secular eccentricity variation of the Moon's orbit \cite{Williams} can be accounted for by the effect of the length change of the unit circle in Finsler geometry \cite{Finsler AU}. Thus, the symmetry of Finslerian spacetime is worth investigating. The way of describing spacetime symmetry in a covariant language (the symmetry should not depend on any particular choice of coordinate system) involves the concept of isometric transformations. In fact, the symmetry of spacetime is described by the so-called isometric group. The generators of the isometric group are directly connected with the Killing vectors \cite{Killing}. In this paper, we use solutions of the Killing equation to establish the symmetry of a class of Finslerian spacetimes. In particular, we show that the isometric group of a special kind of $(\alpha,\beta)$ space is equivalent to the symmetry of VSR. \section{Killing vector in Riemann space} In this section, we give a brief review of the Killing vectors in Riemann space (further material can be found, for example, in \cite{Weinberg}). Under a given coordinate transformation $x\rightarrow\bar{x}$, the Riemannian metric $g_{\mu\nu}(x)$ transforms as \begin{equation} \bar{g}_{\mu\nu}(\bar{x})=\frac{\partial x^\rho}{\partial \bar{x}^\mu}\frac{\partial x^\sigma}{\partial \bar{x}^\nu}g_{\rho\sigma}(x).
\end{equation} Any transformation $x\rightarrow\bar{x}$ is called an isometry if and only if the transformation of the metric $g_{\mu\nu}(x)$ satisfies \begin{equation} \label{isometry} g_{\mu\nu}(\bar{x})=\frac{\partial x^\rho}{\partial \bar{x}^\mu}\frac{\partial x^\sigma}{\partial \bar{x}^\nu}g_{\rho\sigma}(x). \end{equation} One can check that the isometric transformations do form a group. It is convenient to investigate the isometric transformation under the infinitesimal coordinate transformation \begin{equation} \label{coordinate tran} \bar{x}^\mu=x^\mu+\epsilon V^\mu, \end{equation} where $|\epsilon|\ll1$. To first order in $|\epsilon|$, equation (\ref{isometry}) reads \begin{equation} V^\kappa\frac{\partial g_{\mu\nu}}{\partial x^\kappa}+g_{\kappa\mu}\frac{\partial V^\kappa}{\partial x^\nu}+g_{\kappa\nu}\frac{\partial V^\kappa}{\partial x^\mu}=0. \end{equation} By making use of the covariant derivative with respect to the Riemannian connection, we can write the above equation as \begin{equation} \label{killing} V_{\mu|\nu}+V_{\nu|\mu}=0, \end{equation} where $``|"$ denotes the covariant derivative. Any vector field $V_{\mu}$ satisfying equation (\ref{killing}) is called a Killing vector. Thus, the problem of finding all isometries of a given metric $g_{\mu\nu}(x)$ is reduced to finding the dimension of the linear space formed by the Killing vectors. In Riemann geometry, by making use of the covariant derivative, one obtains the Ricci identities or interchange formula \begin{equation} \label{Ricci} V_{\rho|\mu|\nu}-V_{\rho|\nu|\mu}=-V_\sigma R^{~\sigma}_{\rho~\nu\mu}, \end{equation} where $R^{~\sigma}_{\rho~\nu\mu}$ is the Riemannian curvature tensor. The first Bianchi identity for the Riemannian curvature tensor gives \begin{equation} \label{cyclic sum} R^{~\sigma}_{\rho~\nu\mu}+R^{~\sigma}_{\nu~\mu\rho}+R^{~\sigma}_{\mu~\rho\nu}=0. \end{equation} From equations (\ref{Ricci}) and (\ref{cyclic sum}), we obtain \begin{equation} V_{\rho|\mu|\nu}=V_\sigma R^{~\sigma}_{\nu~\mu\rho}. \end{equation} Thus, all the derivatives of $V_\mu$ are determined by linear combinations of $V_\mu$ and $V_{\mu|\nu}$. Once $V_\mu$ and $V_{\mu|\nu}$ are given at an arbitrary point of the Riemannian space, $V_\mu$ and $V_{\mu|\nu}$ at any other point are determined by integrating the system of ordinary differential equations. Therefore, the dimension of the linear space formed by the Killing vectors is at most $\frac{n(n+1)}{2}$ in an $n$-dimensional Riemannian space. If a metric admits the maximum number $\frac{n(n+1)}{2}$ of Killing vectors, the Riemannian space must be homogeneous and isotropic (i.e., the space is isotropic about every point). Such a space is called a maximally symmetric space. In Riemann geometry, Schur's lemma tells us that a Riemannian space of dimension at least 3 is a maximally symmetric space if and only if its sectional curvature is constant. Also, one can check that a 2-dimensional Riemannian space is a maximally symmetric space if and only if its sectional curvature is constant. Thus, the maximal symmetry of a given metric is an intrinsic property, and does not depend on the choice of coordinate system. One special maximally symmetric space is Minkowski space. The Killing equation (\ref{killing}) for the Minkowskian metric $\eta_{\mu\nu}$ reduces to \begin{equation} \frac{\partial V_\mu}{\partial x^\nu}+\frac{\partial V_\nu}{\partial x^\mu}=0.
\end{equation} The solution of the above equation is \begin{equation} \label{killing s} V^{\mu}=Q^\mu_{~\nu} x^\nu+C^\mu, \end{equation} where $Q_{\mu\nu}=\eta_{\rho\mu}Q^\rho_{~\nu}$ is an arbitrary constant skew-symmetric matrix and $C^\mu$ is an arbitrary constant vector. Thus, substituting the solution (\ref{killing s}) into the coordinate transformation (\ref{coordinate tran}) we obtain \begin{equation} \bar{x}^\mu=(\delta^\mu_\nu+\epsilon Q^\mu_{~\nu})x^\nu+\epsilon C^\mu. \end{equation} One finds that the term $\delta^\mu_\nu+\epsilon Q^\mu_{~\nu}$ in the above equation is just the Lorentz transformation matrix and the term $\epsilon C^\mu$ is related to the spacetime translation. Expanding the matrix $\delta^\mu_\nu+\epsilon Q^\mu_{~\nu}$ and the vector $\epsilon C^\mu$ near the identity, one obtains the famous Poincare algebra. The other two types of maximally symmetric spaces are the spherical and hyperbolic cases. Without loss of generality, we set the constant sectional curvature to be $\pm1$ for the spherical and hyperbolic case, respectively. The length element of the spherical and hyperbolic cases is given in a unified form \begin{equation} ds=\frac{\sqrt{(1+k(x\cdot x))(dx\cdot dx)-k(x\cdot dx)^2}}{1+k(x\cdot x)}, \end{equation} where the $\cdot$ denotes the inner product with respect to the Minkowskian metric and $k=\pm1$ for the spherical and hyperbolic case, respectively. The metric is given as \begin{equation} g_{\mu\nu}=\left(\frac{\eta_{\mu\nu}}{1+k(x\cdot x)}-k\frac{x_\mu x_\nu}{(1+k(x\cdot x))^2}\right), \end{equation} where $x_\mu\equiv\eta_{\mu\nu}x^\nu$. The Christoffel symbols of the above length element are given as \begin{equation} \gamma^\rho_{\mu\nu}=-k\frac{x_\mu\delta^\rho_\nu+x_\nu\delta^\rho_\mu}{1+k(x\cdot x)}. \end{equation} Thus, the Killing equation (\ref{killing}) now reads \begin{equation} \frac{\partial V_\mu}{\partial x^\nu}+\frac{\partial V_\nu}{\partial x^\mu}+\frac{2k}{1+k(x\cdot x)}(x_\mu V_\nu+x_\nu V_\mu)=0. \end{equation} The solution of the above equation is \begin{equation} \label{k in curv} V^\mu\equiv g^{\mu\nu}V_\nu=Q^\mu_{~\nu}x^\nu+C^\mu+k(x\cdot C)x^\mu, \end{equation} where the indices of $Q$ and $C$ are raised and lowered by the Minkowskian metric $\eta^{\mu\nu}$ and its inverse $\eta_{\mu\nu}$. \section{Killing vectors in Finsler space} Instead of defining an inner product structure over the tangent bundle as in Riemann geometry, Finsler geometry is based on the so-called Finsler structure $F$ with the property $F(x,\lambda y)=\lambda F(x,y)$ for all $\lambda>0$, where $x$ represents position and $y\equiv\frac{dx}{d\tau}$ represents velocity. The Finsler metric is given as \cite{Book by Bao} \begin{equation} g_{\mu\nu}\equiv\frac{\partial}{\partial y^\mu}\frac{\partial}{\partial y^\nu}\left(\frac{1}{2}F^2\right). \end{equation} Finsler geometry has its genesis in integrals of the form \begin{equation} \label{integral length} \int^r_sF(x^1,\cdots,x^n;\frac{dx^1}{d\tau},\cdots,\frac{dx^n}{d\tau})d\tau~. \end{equation} So the Finsler structure represents the length element of the Finsler space. As in the Riemannian case, to investigate the Killing vectors we should construct the isometric transformations of the Finsler structure. Let us consider the coordinate transformation (\ref{coordinate tran}) together with the corresponding transformation for $y$ \begin{equation} \label{coordinate tran1} \bar{y}^\mu=y^\mu+\epsilon\frac{\partial V^\mu}{\partial x^\nu}y^\nu.
\end{equation} Under the coordinate transformations (\ref{coordinate tran}) and (\ref{coordinate tran1}), to first order in $|\epsilon|$, we obtain the expansion of the Finsler structure, \begin{equation} \label{coordinate tran F} \bar{F}(\bar{x},\bar{y})=\bar{F}(x,y)+\epsilon V^\mu\frac{\partial F}{\partial x^\mu}+\epsilon y^\nu\frac{\partial V^\mu}{\partial x^\nu}\frac{\partial F}{\partial y^\mu}, \end{equation} where $\bar{F}(\bar{x},\bar{y})$ should equal $F(x,y)$. The transformation given by (\ref{coordinate tran}) and (\ref{coordinate tran1}) is called an isometry of the Finsler structure if and only if \begin{equation} F(x,y)=\bar{F}(x,y). \end{equation} Then, deducing from (\ref{coordinate tran F}), we obtain the Killing equation $K_V(F)$ in Finsler space \begin{equation} \label{killing F} K_V(F)\equiv V^\mu\frac{\partial F}{\partial x^\mu}+y^\nu\frac{\partial V^\mu}{\partial x^\nu}\frac{\partial F}{\partial y^\mu}=0. \end{equation} Finding the Killing vectors for a general Finsler structure is difficult. Here, we give the Killing vectors for a class of Finsler spaces, the $(\alpha,\beta)$ spaces \cite{Shen}, with metric defined as \begin{eqnarray} F=\alpha\phi(s),~~~s=\frac{\beta}{\alpha},\\ \alpha=\sqrt{g_{\mu\nu}y^\mu y^\nu}~~{\rm and}~~ \beta=b_\mu(x)y^\mu, \end{eqnarray} where $\phi(s)$ is a smooth function, $\alpha$ is a Riemannian metric and $\beta$ is a one-form. Then, the Killing equation (\ref{killing F}) in $(\alpha,\beta)$ space reads \begin{eqnarray} 0&=&K_V(\alpha)\phi(s)+\alpha K_V(\phi(s))\nonumber\\\label{killing ori} &=&\left(\phi(s)-s\frac{\partial \phi(s)}{\partial s}\right)K_V(\alpha)+\frac{\partial\phi(s)}{\partial s}K_V(\beta). \end{eqnarray} And by making use of the Killing
import numpy as np import pandas as pd import xgboost as xgb import re from pitci.xgboost import XGBoosterLeafNodeScaledConformalPredictor import pitci import pytest class TestInit: """Tests for the XGBoosterLeafNodeScaledConformalPredictor._init__ method.""" def test_inheritance(self): """Test that XGBoosterLeafNodeScaledConformalPredictor inherits from LeafNodeScaledConformalPredictor. """ assert ( XGBoosterLeafNodeScaledConformalPredictor.__mro__[1] is pitci.base.LeafNodeScaledConformalPredictor ), ( "XGBoosterLeafNodeScaledConformalPredictor does not inherit from " "LeafNodeScaledConformalPredictor" ) def test_model_type_exception(self): """Test an exception is raised if model is not a xgb.Booster object.""" with pytest.raises( TypeError, match=re.escape( f"model is not in expected types {[xgb.Booster]}, got {tuple}" ), ): XGBoosterLeafNodeScaledConformalPredictor((1, 2, 3)) def test_attributes_set(self, xgboost_1_split_1_tree): """Test that SUPPORTED_OBJECTIVES, version and model attributes are set.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) assert ( confo_model.__version__ == pitci.__version__ ), "__version__ attribute not set to package version value" assert ( confo_model.model is xgboost_1_split_1_tree ), "model attribute not set with the value passed in init" assert ( confo_model.SUPPORTED_OBJECTIVES == pitci.xgboost.SUPPORTED_OBJECTIVES_ABSOLUTE_ERROR ), "SUPPORTED_OBJECTIVES attribute incorrect" def test_check_objective_supported_called(self, mocker, xgboost_1_split_1_tree): """Test that check_objective_supported is called in init.""" mocked = mocker.patch.object(pitci.xgboost, "check_objective_supported") XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) assert ( mocked.call_count == 1 ), "check_objective_supported not called (once) in init" call_args = mocked.call_args_list[0] call_pos_args = call_args[0] call_kwargs = call_args[1] assert call_pos_args == ( xgboost_1_split_1_tree, pitci.xgboost.SUPPORTED_OBJECTIVES_ABSOLUTE_ERROR, ), "positional args in check_objective_supported call not correct" assert ( call_kwargs == {} ), "keyword args in check_objective_supported call not correct" class TestCalibrate: """Tests for the XGBoosterLeafNodeScaledConformalPredictor.calibrate method.""" def test_data_type_exception(self, xgboost_1_split_1_tree): """Test an exception is raised if data is not a xgb.DMatrix object.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) with pytest.raises( TypeError, match=re.escape( f"data is not in expected types {[xgb.DMatrix]}, got {str}" ), ): confo_model.calibrate("abcd") def test_train_data_type_exception( self, dmatrix_2x1_with_label, xgboost_1_split_1_tree ): """Test an exception is raised if train_data is not a xgb.DMatrix object.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) with pytest.raises( TypeError, match=re.escape( f"train_data is not in expected types {[xgb.DMatrix, type(None)]}, got {str}" ), ): confo_model.calibrate(data=dmatrix_2x1_with_label, train_data="abcd") def test_super_calibrate_call_no_response_passed( self, mocker, dmatrix_2x1_with_label, dmatrix_2x1_with_label_gamma, xgboost_1_split_1_tree, ): """Test LeafNodeScaledConformalPredictor.calibrate is called when response is not passed. 
""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) mocked = mocker.patch.object( pitci.base.LeafNodeScaledConformalPredictor, "calibrate" ) confo_model.calibrate( data=dmatrix_2x1_with_label, alpha=0.9, train_data=dmatrix_2x1_with_label_gamma, ) assert ( mocked.call_count == 1 ), "incorrect number of calls to LeafNodeScaledConformalPredictor.calibrate" call_args = mocked.call_args_list[0] call_pos_args = call_args[0] call_kwargs = call_args[1] assert ( call_pos_args == () ), "positional args incorrect in call to LeafNodeScaledConformalPredictor.calibrate" assert sorted(list(call_kwargs.keys())) == [ "alpha", "data", "response", "train_data", ] assert call_kwargs["data"] == dmatrix_2x1_with_label assert call_kwargs["alpha"] == 0.9 np.testing.assert_array_equal( call_kwargs["response"], dmatrix_2x1_with_label.get_label() ) assert call_kwargs["train_data"] == dmatrix_2x1_with_label_gamma def test_super_calibrate_call_response_passed( self, mocker, dmatrix_2x1_with_label, dmatrix_2x1_with_label_gamma, xgboost_1_split_1_tree, ): """Test LeafNodeScaledConformalPredictor.calibrate is called when response is passed.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) mocked = mocker.patch.object( pitci.base.LeafNodeScaledConformalPredictor, "calibrate" ) response_array = np.array([5, 7]) confo_model.calibrate( data=dmatrix_2x1_with_label, response=response_array, alpha=0.9, train_data=dmatrix_2x1_with_label_gamma, ) assert ( mocked.call_count == 1 ), "incorrect number of calls to LeafNodeScaledConformalPredictor.calibrate" call_args = mocked.call_args_list[0] call_pos_args = call_args[0] call_kwargs = call_args[1] assert ( call_pos_args == () ), "positional args incorrect in call to LeafNodeScaledConformalPredictor.calibrate" assert sorted(list(call_kwargs.keys())) == [ "alpha", "data", "response", "train_data", ] assert call_kwargs["data"] == dmatrix_2x1_with_label assert call_kwargs["alpha"] == 0.9 np.testing.assert_array_equal(call_kwargs["response"], response_array) assert call_kwargs["train_data"] == dmatrix_2x1_with_label_gamma class TestPredictWithInterval: """Tests for the XGBoosterLeafNodeScaledConformalPredictor.predict_with_interval method.""" def test_data_type_exception(self, dmatrix_2x1_with_label, xgboost_1_split_1_tree): """Test an exception is raised if data is not a xgb.DMatrix object.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) confo_model.calibrate(dmatrix_2x1_with_label) with pytest.raises( TypeError, match=re.escape( f"data is not in expected types {[xgb.DMatrix]}, got {pd.DataFrame}" ), ): confo_model.predict_with_interval(pd.DataFrame()) def test_super_predict_with_interval_call( self, mocker, dmatrix_2x1_with_label, xgboost_1_split_1_tree ): """Test that LeafNodeScaledConformalPredictor.predict_with_interval is called and the outputs of this are returned from the method. 
""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) confo_model.calibrate(dmatrix_2x1_with_label) predict_return_value = np.array([200, 101, 1234]) mocked = mocker.patch.object( pitci.base.LeafNodeScaledConformalPredictor, "predict_with_interval", return_value=predict_return_value, ) results = confo_model.predict_with_interval(dmatrix_2x1_with_label) # test output of predict_with_interval is the return value of # LeafNodeScaledConformalPredictor.predict_with_interval np.testing.assert_array_equal(results, predict_return_value) assert ( mocked.call_count == 1 ), "incorrect number of calls to super().predict_with_interval" call_args = mocked.call_args_list[0] call_pos_args = call_args[0] call_kwargs = call_args[1] assert ( call_pos_args == () ), "positional args incorrect in call to LeafNodeScaledConformalPredictor.predict_with_interval" assert call_kwargs == { "data": dmatrix_2x1_with_label }, "keyword args incorrect in call to LeafNodeScaledConformalPredictor.predict_with_interval" class TestGeneratePredictions: """Tests for the XGBoosterLeafNodeScaledConformalPredictor._generate_predictions method.""" def test_data_type_exception(self, dmatrix_2x1_with_label, xgboost_1_split_1_tree): """Test an exception is raised if data is not a xgb.DMatrix object.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) with pytest.raises( TypeError, match=re.escape( f"data is not in expected types {[xgb.DMatrix]}, got {float}" ), ): confo_model._generate_predictions(12345.0) def test_predict_call(self, mocker, dmatrix_2x1_with_label, xgboost_1_split_1_tree): """Test that the output from xgb.Booster.predict with ntree_limit = best_iteration + 1 is returned from the method. """ confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) confo_model.calibrate(dmatrix_2x1_with_label) predict_return_value = np.array([200, 101]) mocked = mocker.patch.object( xgb.Booster, "predict", return_value=predict_return_value ) results = confo_model._generate_predictions(dmatrix_2x1_with_label) assert ( mocked.call_count == 1 ), "incorrect number of calls to xgb.Booster.predict" np.testing.assert_array_equal(results, predict_return_value) call_args = mocked.call_args_list[0] call_pos_args = call_args[0] call_kwargs = call_args[1] assert call_pos_args == ( dmatrix_2x1_with_label, ), "positional args incorrect in call to xgb.Booster.predict" assert call_kwargs == { "ntree_limit": xgboost_1_split_1_tree.best_iteration + 1 }, "keyword args incorrect in call to xgb.Booster.predict" class TestGenerateLeafNodePredictions: """Tests for the XGBoosterLeafNodeScaledConformalPredictor._generate_leaf_node_predictions method. """ def test_data_type_exception(self, dmatrix_2x1_with_label, xgboost_1_split_1_tree): """Test an exception is raised if data is not a xgb.DMatrix object.""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) with pytest.raises( TypeError, match=re.escape( f"data is not in expected types {[xgb.DMatrix]}, got {list}" ), ): confo_model._generate_leaf_node_predictions([]) def test_predict_call(self, mocker, dmatrix_2x1_with_label, xgboost_1_split_1_tree): """Test that the output from xgb.Booster.predict with ntree_limit = best_iteration + 1 and pred_leaf = True is returned from the method. 
""" confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) confo_model.calibrate(dmatrix_2x1_with_label) predict_return_value = np.array([[200, 101], [5, 6]]) mocked = mocker.patch.object( xgb.Booster, "predict", return_value=predict_return_value ) results = confo_model._generate_leaf_node_predictions(dmatrix_2x1_with_label) assert ( mocked.call_count == 1 ), "incorrect number of calls to xgb.Booster.predict" np.testing.assert_array_equal(results, predict_return_value) call_args = mocked.call_args_list[0] call_pos_args = call_args[0] call_kwargs = call_args[1] assert ( call_pos_args == () ), "positional args incorrect in call to xgb.Booster.predict" assert call_kwargs == { "ntree_limit": xgboost_1_split_1_tree.best_iteration + 1, "data": dmatrix_2x1_with_label, "pred_leaf": True, }, "positional args incorrect in call to xgb.Booster.predict" def test_output_2d(self, mocker, dmatrix_2x1_with_label, xgboost_1_split_1_tree): """Test the array returned from _generate_leaf_node_predictions is a 2d array even if the output from predict is 1d. """ confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_1_split_1_tree) confo_model.calibrate(dmatrix_2x1_with_label) # set the return value from predict to be a 1d array predict_return_value = np.array([200, 101]) mocker.patch.object(xgb.Booster, "predict", return_value=predict_return_value) results = confo_model._generate_leaf_node_predictions(dmatrix_2x1_with_label) expected_results = predict_return_value.reshape( predict_return_value.shape[0], 1 ) np.testing.assert_array_equal(results, expected_results) class TestCalibrateLeafNodeCounts: """Tests that _calibrate_leaf_node_counts calculates the correct values for specific models.""" def test_leaf_node_counts_correct_1( self, xgboost_2_split_1_tree, dmatrix_4x2_with_label ): """Test the leaf_node_counts attribute has the correct values with hand workable example.""" # rules for xgboost_2_split_1_tree are as follows; # leaf 1 - if (f0 < 0.5) # leaf 3 - if (f0 > 0.5) & (f1 < 0.5) # leaf 4 - if (f0 > 0.5) & (f1 > 0.5) # there for the dmatrix_4x2_with_label data will be mapped to; # [1, 1] - leaf 4 # [1, 0] - leaf 3 # [0, 1] - leaf 1 # [0, 0] - leaf 1 # therefore the leaf_node_counts attribute for a single tree # should be; expected_leaf_node_counts = [{1: 2, 3: 1, 4: 1}] confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_2_split_1_tree) confo_model._calibrate_leaf_node_counts(dmatrix_4x2_with_label) assert ( confo_model.leaf_node_counts == expected_leaf_node_counts ), "leaf_node_counts not calculated correctly" def test_leaf_node_counts_correct_2(self, xgboost_2_split_1_tree): """Test the leaf_node_counts attribute has the correct values with 2nd hand workable example.""" # rules for xgboost_2_split_1_tree are as follows; # leaf 1 - if (f0 < 0.5) # leaf 3 - if (f0 > 0.5) & (f1 < 0.5) # leaf 4 - if (f0 > 0.5) & (f1 > 0.5) # for this dataset the leaf nodes for each row are inline below; xgb_data = xgb.DMatrix( data=np.array( [ [1, 1], # leaf 4 [1, 0], # leaf 3 [0, 1], # leaf 1 [0, 0], # leaf 1 [1, 0], # leaf 3 [0, 1], # leaf 1 [0, 0], # leaf 1 ] ) ) # therefore the leaf_node_counts attribute for a single tree # should be; expected_leaf_node_counts = [{1: 4, 3: 2, 4: 1}] confo_model = XGBoosterLeafNodeScaledConformalPredictor(xgboost_2_split_1_tree) confo_model._calibrate_leaf_node_counts(xgb_data) assert ( confo_model.leaf_node_counts == expected_leaf_node_counts ), "leaf_node_counts not calculated correctly" def test_leaf_node_counts_correct_3(self, 
xgboost_2_split_2_tree): """Test the leaf_node_counts attribute has the correct values with 3rd hand workable example.""" # rules for xgboost_2_split_2_tree are as follows; # tree 1, leaf 1 - if (f0 < 0.5) # tree 1, leaf 2 - if (f0 > 0.5) # tree 2, leaf 1 - if (f1 < 0.5) # tree 2, leaf 2
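# As a standalone illustrative sketch (not part of the pitci test suite; the helper name
# below is hypothetical), the per-tree tallying that the tests above expect from
# _calibrate_leaf_node_counts can be reproduced directly from a pred_leaf-style matrix
# of leaf indices:
import numpy as np
from collections import Counter


def count_leaf_nodes(leaf_assignments: np.ndarray) -> list:
    """Count how many rows fall into each leaf, separately for every tree.

    leaf_assignments: array of shape (n_rows, n_trees) of leaf indices, e.g. the
    (reshaped) output of xgb.Booster.predict(..., pred_leaf=True).
    """
    if leaf_assignments.ndim == 1:
        leaf_assignments = leaf_assignments.reshape(-1, 1)
    return [
        {int(leaf): int(n) for leaf, n in Counter(leaf_assignments[:, tree]).items()}
        for tree in range(leaf_assignments.shape[1])
    ]


# The 4x2 example above maps its rows to leaves [4, 3, 1, 1] in the single tree,
# which tallies to [{4: 1, 3: 1, 1: 2}] -- matching expected_leaf_node_counts.
assert count_leaf_nodes(np.array([[4], [3], [1], [1]])) == [{4: 1, 3: 1, 1: 2}]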
# coding: utf-8 from scipy.special import comb import math import numpy as np import matplotlib.pyplot as plt from sklearn.base import BaseEstimator from sklearn.base import ClassifierMixin from sklearn.preprocessing import LabelEncoder from sklearn.externals import six from sklearn.base import clone from sklearn.pipeline import _name_estimators import operator from sklearn import datasets from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.pipeline import Pipeline from sklearn.model_selection import cross_val_score from sklearn.metrics import roc_curve from sklearn.metrics import auc from itertools import product from sklearn.model_selection import GridSearchCV import pandas as pd from sklearn.ensemble import BaggingClassifier from sklearn.metrics import accuracy_score from sklearn.ensemble import AdaBoostClassifier # *Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017 # # Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-edition # # Code License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) # # Python Machine Learning - Code Examples # # Chapter 7 - Combining Different Models for Ensemble Learning # Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). # *The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* # ### Overview # - [Learning with ensembles](#Learning-with-ensembles) # - [Combining classifiers via majority vote](#Combining-classifiers-via-majority-vote) # - [Implementing a simple majority vote classifier](#Implementing-a-simple-majority-vote-classifier) # - [Using the majority voting principle to make predictions](#Using-the-majority-voting-principle-to-make-predictions) # - [Evaluating and tuning the ensemble classifier](#Evaluating-and-tuning-the-ensemble-classifier) # - [Bagging – building an ensemble of classifiers from bootstrap samples](#Bagging----Building-an-ensemble-of-classifiers-from-bootstrap-samples) # - [Bagging in a nutshell](#Bagging-in-a-nutshell) # - [Applying bagging to classify samples in the Wine dataset](#Applying-bagging-to-classify-samples-in-the-Wine-dataset) # - [Leveraging weak learners via adaptive boosting](#Leveraging-weak-learners-via-adaptive-boosting) # - [How boosting works](#How-boosting-works) # - [Applying AdaBoost using scikit-learn](#Applying-AdaBoost-using-scikit-learn) # - [Summary](#Summary) # # Learning with ensembles def ensemble_error(n_classifier, error): k_start = int(math.ceil(n_classifier / 2.)) probs = [comb(n_classifier, k) * error**k * (1-error)**(n_classifier - k) for k in range(k_start, n_classifier + 1)] return sum(probs) ensemble_error(n_classifier=11, error=0.25) error_range = np.arange(0.0, 1.01, 0.01) ens_errors = [ensemble_error(n_classifier=11, error=error) for error in error_range] plt.plot(error_range, ens_errors, label='Ensemble error', linewidth=2) plt.plot(error_range, error_range, linestyle='--', label='Base error', linewidth=2) plt.xlabel('Base error') plt.ylabel('Base/Ensemble error') plt.legend(loc='upper left') 
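# The plot assembled above illustrates the binomial reasoning behind ensemble_error:
# with 11 independent voters, the ensemble error stays below the base error whenever
# the base error is below 0.5 and exceeds it once the base error rises above 0.5.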
plt.grid(alpha=0.5) #plt.savefig('images/07_03.png', dpi=300) plt.show() # # Combining classifiers via majority vote # ## Implementing a simple majority vote classifier np.argmax(np.bincount([0, 0, 1], weights=[0.2, 0.2, 0.6])) ex = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6]]) p = np.average(ex, axis=0, weights=[0.2, 0.2, 0.6]) p np.argmax(p) class MajorityVoteClassifier(BaseEstimator, ClassifierMixin): """ A majority vote ensemble classifier Parameters ---------- classifiers : array-like, shape = [n_classifiers] Different classifiers for the ensemble vote : str, {'classlabel', 'probability'} (default='label') If 'classlabel' the prediction is based on the argmax of class labels. Else if 'probability', the argmax of the sum of probabilities is used to predict the class label (recommended for calibrated classifiers). weights : array-like, shape = [n_classifiers], optional (default=None) If a list of `int` or `float` values are provided, the classifiers are weighted by importance; Uses uniform weights if `weights=None`. """ def __init__(self, classifiers, vote='classlabel', weights=None): self.classifiers = classifiers self.named_classifiers = {key: value for key, value in _name_estimators(classifiers)} self.vote = vote self.weights = weights def fit(self, X, y): """ Fit classifiers. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Matrix of training samples. y : array-like, shape = [n_samples] Vector of target class labels. Returns ------- self : object """ if self.vote not in ('probability', 'classlabel'): raise ValueError("vote must be 'probability' or 'classlabel'" "; got (vote=%r)" % self.vote) if self.weights and len(self.weights) != len(self.classifiers): raise ValueError('Number of classifiers and weights must be equal' '; got %d weights, %d classifiers' % (len(self.weights), len(self.classifiers))) # Use LabelEncoder to ensure class labels start with 0, which # is important for np.argmax call in self.predict self.lablenc_ = LabelEncoder() self.lablenc_.fit(y) self.classes_ = self.lablenc_.classes_ self.classifiers_ = [] for clf in self.classifiers: fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y)) self.classifiers_.append(fitted_clf) return self def predict(self, X): """ Predict class labels for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Matrix of training samples. Returns ---------- maj_vote : array-like, shape = [n_samples] Predicted class labels. """ if self.vote == 'probability': maj_vote = np.argmax(self.predict_proba(X), axis=1) else: # 'classlabel' vote # Collect results from clf.predict calls predictions = np.asarray([clf.predict(X) for clf in self.classifiers_]).T maj_vote = np.apply_along_axis( lambda x: np.argmax(np.bincount(x, weights=self.weights)), axis=1, arr=predictions) maj_vote = self.lablenc_.inverse_transform(maj_vote) return maj_vote def predict_proba(self, X): """ Predict class probabilities for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ---------- avg_proba : array-like, shape = [n_samples, n_classes] Weighted average probability for each class per sample. 
""" probas = np.asarray([clf.predict_proba(X) for clf in self.classifiers_]) avg_proba = np.average(probas, axis=0, weights=self.weights) return avg_proba def get_params(self, deep=True): """ Get classifier parameter names for GridSearch""" if not deep: return super(MajorityVoteClassifier, self).get_params(deep=False) else: out = self.named_classifiers.copy() for name, step in six.iteritems(self.named_classifiers): for key, value in six.iteritems(step.get_params(deep=True)): out['%s__%s' % (name, key)] = value return out # ## Using the majority voting principle to make predictions iris = datasets.load_iris() X, y = iris.data[50:, [1, 2]], iris.target[50:] le = LabelEncoder() y = le.fit_transform(y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1, stratify=y) clf1 = LogisticRegression(penalty='l2', C=0.001, random_state=1) clf2 = DecisionTreeClassifier(max_depth=1, criterion='entropy', random_state=0) clf3 = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski') pipe1 = Pipeline([['sc', StandardScaler()], ['clf', clf1]]) pipe3 = Pipeline([['sc', StandardScaler()], ['clf', clf3]]) clf_labels = ['Logistic regression', 'Decision tree', 'KNN'] print('10-fold cross validation:\n') for clf, label in zip([pipe1, clf2, pipe3], clf_labels): scores = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='roc_auc') print("ROC AUC: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) # Majority Rule (hard) Voting mv_clf = MajorityVoteClassifier(classifiers=[pipe1, clf2, pipe3]) clf_labels += ['Majority voting'] all_clf = [pipe1, clf2, pipe3, mv_clf] for clf, label in zip(all_clf, clf_labels): scores = cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='roc_auc') print("ROC AUC: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) # # Evaluating and tuning the ensemble classifier colors = ['black', 'orange', 'blue', 'green'] linestyles = [':', '--', '-.', '-'] for clf, label, clr, ls in zip(all_clf, clf_labels, colors, linestyles): # assuming the label of the positive class is 1 y_pred = clf.fit(X_train, y_train).predict_proba(X_test)[:, 1] fpr, tpr, thresholds = roc_curve(y_true=y_test, y_score=y_pred) roc_auc = auc(x=fpr, y=tpr) plt.plot(fpr, tpr, color=clr, linestyle=ls, label='%s (auc = %0.2f)' % (label, roc_auc)) plt.legend(loc='lower right') plt.plot([0, 1], [0, 1], linestyle='--', color='gray', linewidth=2) plt.xlim([-0.1, 1.1]) plt.ylim([-0.1, 1.1]) plt.grid(alpha=0.5) plt.xlabel('False positive rate (FPR)') plt.ylabel('True positive rate (TPR)') #plt.savefig('images/07_04', dpi=300) plt.show() sc = StandardScaler() X_train_std = sc.fit_transform(X_train) all_clf = [pipe1, clf2, pipe3, mv_clf] x_min = X_train_std[:, 0].min() - 1 x_max = X_train_std[:, 0].max() + 1 y_min = X_train_std[:, 1].min() - 1 y_max = X_train_std[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1)) f, axarr = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(7, 5)) for idx, clf, tt in zip(product([0, 1], [0, 1]), all_clf, clf_labels): clf.fit(X_train_std, y_train) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.3) axarr[idx[0], idx[1]].scatter(X_train_std[y_train==0, 0], X_train_std[y_train==0, 1], c='blue', marker='^', s=50) axarr[idx[0], idx[1]].scatter(X_train_std[y_train==1, 0], X_train_std[y_train==1, 1], c='green', marker='o', s=50) axarr[idx[0], idx[1]].set_title(tt) plt.text(-3.5, -5., 
s='Sepal width [standardized]', ha='center', va='center', fontsize=12) plt.text(-12.5, 4.5, s='Petal length [standardized]', ha='center', va='center', fontsize=12, rotation=90) #plt.savefig('images/07_05', dpi=300) plt.show() mv_clf.get_params() params = {'decisiontreeclassifier__max_depth': [1, 2], 'pipeline-1__clf__C': [0.001, 0.1, 100.0]} grid = GridSearchCV(estimator=mv_clf, param_grid=params, cv=10, scoring='roc_auc') grid.fit(X_train, y_train) for r, _ in enumerate(grid.cv_results_['mean_test_score']): print("%0.3f +/- %0.2f %r" % (grid.cv_results_['mean_test_score'][r], grid.cv_results_['std_test_score'][r] / 2.0, grid.cv_results_['params'][r])) print('Best parameters: %s' % grid.best_params_) print('Accuracy: %.2f' % grid.best_score_) # **Note** # By default, the default setting for `refit` in `GridSearchCV` is `True` (i.e., `GridSeachCV(..., refit=True)`), which means that we can use the fitted `GridSearchCV` estimator to make predictions via the `predict` method, for example: # # grid = GridSearchCV(estimator=mv_clf, # param_grid=params, # cv=10, # scoring='roc_auc') # grid.fit(X_train, y_train) # y_pred = grid.predict(X_test) # # In addition, the "best" estimator can directly be accessed via the `best_estimator_` attribute. grid.best_estimator_.classifiers mv_clf = grid.best_estimator_ mv_clf.set_params(**grid.best_estimator_.get_params()) mv_clf # # Bagging -- Building an ensemble of classifiers from bootstrap samples # ## Bagging in a nutshell # ## Applying bagging to classify samples in the Wine dataset df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/' 'machine-learning-databases/wine/wine.data', header=None) df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline'] # if the Wine dataset is temporarily unavailable from the # UCI machine learning repository, un-comment the following line # of code to load the dataset from a local path: # df_wine = pd.read_csv('wine.data', header=None) # drop 1 class df_wine = df_wine[df_wine['Class label'] != 1] y
is $\partial^{m-n} \cdot b_n \cdot \partial^n = \sigma^{m-n}(b_n) \cdot \partial^m$. % Since $R$ is \textsf{CLM}\xspace, there are $a_0'$ and $b_0'$ s.t.~$a_0' \cdot a_m = b_0' \cdot \sigma^{m-n}(b_n)$. % Therefore, $R_1 := a_0' \cdot R_0 - b_0' \cdot \partial^{m-n} \cdot B$ has degree strictly less than $m_0 := m = \deg {R_0}$. % We repeat this operation, obtaining a sequence of remainders: % \begin{align*} a_0' \cdot R_0 &= b_0' \cdot \partial^{m_0-n} \cdot B + R_1, \\ a_1' \cdot R_1 &= b_1' \cdot \partial^{m_1-n} \cdot B + R_2, \\ &\ \ \vdots \\ a_{k-1}' \cdot R_{k-1} &= b_{k-1}' \cdot \partial^{m_{k-1}-n} \cdot B + R_k, \\ a_k' \cdot R_k &= b_k' \cdot \partial^{m_k-n} \cdot B + R_{k+1}, \end{align*} % where $m_i := \deg{R_i}$, $R_{i+1} := a_i' \cdot R_i - b_i' \cdot \partial^{m_i-n} \cdot B$, and the degrees satisfy $m_0 > m_1 > \cdots > m_k \geq n > m_{k+1}$. % By defining $a = a_k' a_{k-1}' \cdots a_0' \in R$, taking as quotient the skew polynomial % $$P = b_k' \cdot \partial^{m_k-n} + a_k' b_{k-1}' \cdot \partial^{m_{k-1}-n} + \cdots + a_k' a_{k-1}' \cdots a_1' b_0' \cdot \partial^{m_0-n} \in R[\partial; \sigma]$$ % and as a remainder $Q = R_{k+1}$ we have, as required, $\deg Q < n$ and % \begin{align*} a \cdot A = P \cdot B + Q. \end{align*} We now show that $R[\partial; \sigma]$ has the \textsf{CLM}\xspace property. % To this end, let $A_1, A_2 \in R[\partial; \sigma]$ with $\deg {A_1} \geq \deg {A_2}$ be given. % We apply the pseudo-division algorithm above to obtain the sequence % \begin{align*} a_1 \cdot A_1 &= Q_1 \cdot A_2 + A_3, \\ a_2 \cdot A_2 &= Q_2 \cdot A_3 + A_4, \\ &\ \ \vdots \\ a_{k-2} \cdot A_{k-2} &= Q_{k-2} \cdot A_{k-1} + A_k, \\ a_{k-1} \cdot A_{k-1} &= Q_{k-1} \cdot A_k + A_{k+1}, \end{align*} % with $a_1,\ldots,a_{k-1}\in R$, $A_{k+1} = 0$, and the degrees of the $A_i$'s are strictly decreasing: $\deg {A_2} > \deg {A_3} > \cdots > \deg {A_k}$. % Consider the following two sequences of skew polynomials % \begin{align*} &S_1 = 1, \quad S_2 = 0, \quad S_i = a_{i-2} \cdot S_{i-2} - Q_{i-2} \cdot S_{i-1}, \textrm{ and } \\ &T_1 = 0, \quad T_2 = 1, \quad T_i = a_{i-2} \cdot T_{i-2} - Q_{i-2} \cdot T_{i-1}. \end{align*} % It can easily be verified that $S_i \cdot A_1 + T_i \cdot A_2 = A_i$ for every $1 \leq i \leq k+1$: The base cases $i = 1$ and $i = 2$ are clear; inductively, we have % \begin{align*} S_i A_1 + T_i A_2 &= (a_{i-2} \cdot S_{i-2} - Q_{i-2} \cdot S_{i-1})A_1 + (a_{i-2} \cdot T_{i-2} - Q_{i-2} \cdot T_{i-1})A_2 = \\ &= a_{i-2} (S_{i-2}A_1 + T_{i-2}A_2) - Q_{i-2}(S_{i-1}A_1 + T_{i-1}A_2) = \\ &= a_{i-2} A_{i-2} - Q_{i-2} A_{i-1} = A_i. \end{align*} % In particular, at the end $S_{k+1} \cdot A_1 + T_{k+1} \cdot A_2 = 0$, as required. It remains to check that $S_{k+1}$ is nonzero. We show the stronger property that $\deg {S_i} = \deg{A_2} - \deg{A_{i-1}}$ for every $3 \leq i \leq k+1$. The base case $i = 3$ is clear. For the inductive step, notice that $\deg {Q_{i-2}} = \deg {A_{i-2}} - \deg {A_{i-1}} > 0$. Thus $\deg({Q_{i-2}}\cdot S_{i-1}) = \deg {A_{i-2}} - \deg {A_{i-1}} + \deg{A_2} - \deg{A_{i-2}} = \deg{A_2} - \deg {A_{i-1}}$. Moreover, $\deg (a_{i-2} \cdot S_{i-2}) = \deg S_{i-2} = \deg{A_2} - \deg{A_{i-3}} < \deg{A_2} - \deg{A_{i-2}}$. Thus, $\deg {S_i} = \deg({Q_{i-2}}\cdot S_{i-1}) = \deg{A_2} - \deg {A_{i-1}}$, as required. \end{proof} \lemZeroness* \begin{proof} We recall Lagrange's classical bound on the roots of univariate polynomials.
% \begin{theorem}[Lagrange, 1769] \label{thm:Lagrange} The roots of a complex polynomial $p(z) = \sum_{i=0}^d a_i \cdot z^i$ of degree $d$ are bounded by $1 + \sum_{0\leq i\leq d-1} \frac{|a_i|}{|a_d|}$. % In particular, the maximal root of a polynomial $p(k) \in \mathbb Q[k]$ with integral coefficients is at most $1 + d \cdot \max_i|a_i|$. \end{theorem} % By \cref{thm:Lagrange}, the largest root of the leading polynomial coefficient $p_{i^*, j^*}(k)$ is $\leq 1 + \deg_k p_{i^*, j^*} \cdot \size {p_{i^*, j^*}} < 2 + e \cdot h$ and similarly the roots of all the leading polynomial coefficients of the cancelling relations for the sections $f(0, n), \dots, f(i^*, n)$ are $< 2 + e \cdot h$. In the following, let % \begin{align*} K = 2 + j^* + e \cdot h. \end{align*} % \begin{claim} The one-dimensional section $f(n, L) \in \mathbb Q^\mathbb N$ for a fixed $L \geq 0$ is identically zero if, and only if, $f(0, L) = f(1, L) = \cdots = f(m \cdot (L+3), L) = 0$. \end{claim} % \begin{proof}[Proof of the claim] The ``only if'' direction is obvious. % By \cref{lem:linrec:section}, for any fixed $L \in \mathbb N$ the 1-dimensional $L$-section $f(n, L)$ is linrec of order $\leq m \cdot (L+3)$. % In fact, it is C-recursive of the same order since the coefficients do not depend on $n$ and are thus constants. % It follows that if $f(0, L) = f(1, L) = \cdots = f(m \cdot (L+3), L) = 0$, then in fact $f(n, L) = 0$ for every $n \in \mathbb N$ (c.f.~the proof of \cref{lem:zeroness:C-rec}). \end{proof} % \begin{claim} The one-dimensional section $f(M, k) \in \mathbb Q^\mathbb N$ for a fixed $0 \leq M \leq i^*$ is identically zero if, and only if, $f(M, 0) = f(M, 1) = \cdots = f(M, d + e \cdot h) = 0$. \end{claim} % \begin{proof}[Proof of the claim] The ``only if'' direction is obvious. % % By assumption, $f(M, k)$ admits a cancelling relation \eqref{eq:cancelling:relation:1} of $\shift2$-degree $\ell^* \leq d$ and leading polynomial coefficient $q_{\ell^*}(k)$ of degree $\leq e$ and height $\leq h$. % By \cref{thm:Lagrange}, the roots of $q_{\ell^*}(k)$ are bounded by $O(e \cdot h)$. % It follows that if $f(M, 0) = f(M, 1) = \cdots = f(M, d + e \cdot h) = 0$ then $f(M, n)$ is identically zero. \end{proof} % \begin{claim} \label{claim:zeroness} $f = 0$ if, and only if, all the one-dimensional sections $$f(n, 0), \dots, f(n, K), f(0, k), \dots, f(i^*, k) \in \mathbb Q^\mathbb N$$ are identically zero. \end{claim} \begin{proof}[Proof of the claim] The ``only if'' direction is obvious. % For the ``if'' direction, assume all the sections above are identically zero as one-dimensional sequences. % By way of contradiction, let $(n, k)$ be the pair of indices which is minimal for the lexicographic order s.t.~$f(n, k) \neq 0$. % By assumption, we necessarily have $n > i^*$ and $k > K$. % By \eqref{eq:cancelling:relation} we have % \begin{align*} p_{i^*, j^*}(k - j^*) \cdot f(n, k) = \sum_{(i, j) <_\text{lex} (i^* , j^*)} p_{i, j}(n - i^*, k - j^*)\cdot f(n - (i^* - i), k - (j^* - j)). \end{align*} % Since $k > K$, we have $k - j^* > K - j^* \geq 2 + e \cdot h$, and hence $p_{i^*, j^*}(k - j^*) \neq 0$, because the largest root of $p_{i^*, j^*}$ is $\leq 1 + e \cdot h$. % Consequently, there exists $(i, j) <_\text{lex} (i^* , j^*)$ s.t.~$f(n - (i^* - i), k - (j^* - j)) \neq 0$, which contradicts the minimality of $(n, k)$.
\end{proof} % By putting together the three claims above it follows that $f$ is identically zero if, and only if, $f$ is zero on the set of inputs % \begin{align*} \set{0, \dots, m \cdot (K+3)} \times \set {0, \dots, K} \cup \set{0, \dots, i^*} \times \set {0, \dots, d + e \cdot h}. \end{align*} % Let $N = 1 + \max \set{m\cdot (K+3), i^*}$ and $K' = 1 + \max \set{K, d + e \cdot h}$. % The condition above can be verified by computing $O(N \cdot K')$ values for $f(n,
& & & & & \\ & & +1 & +1 & & & & +1 \\ & & +1 & & +1 & & \\ \hline & & & +1 & \star & \star & \star \\ & & & & \star & \star & \star \\ & & & & \star & \star & \star & +1\\ \hline & & +1 & & & &+1 & \epsilon+1 \\ \end{array} \end{pmatrix} \\ \\ &\overset{4}{\sim} \begin{pmatrix} \begin{array}{cccc|ccc|c} -1 & & & & & & & \\ & -1 & & & & & & \\ & & +1 & & & & & \\ & & & -1& +1 & & & -1 \\ \hline & & & +1 & \star & \star & \star \\ & & & & \star & \star & \star \\ & & & & \star & \star & \star & +1 \\ \hline & & & -1 & & &+1 & \epsilon \\ \end{array} \end{pmatrix} = \begin{pmatrix} \begin{array}{ccc|cccc|c} -1 & & & & & & & \\ & -1 & & & & & & \\ & & +1 & & & & & \\ \hline & & & -1& +1 & & & -1 \\ & & & +1 & \star & \star & \star \\ & & & & \star & \star & \star \\ & & & & \star & \star & \star & +1 \\ \hline & & & -1 & & &+1 & \epsilon \\ \end{array} \end{pmatrix} \end{align*} \caption{Proving that $P_{i,\epsilon} \sim B \oplus N_{i-1,\epsilon}$.} \label{fig:BlockifyPk} \end{figure} We can now finish the proof of \Cref{lemma:BigLemma}: \textit{for all $q \geq 1$, $P_{q, \epsilon}$ is congruent to a diagonal matrix with $q$-many $B$ factors, and a single $1 \times 1$ factor with entry $\epsilon \pm 2$.} We begin by diagonalizing the first three rows and columns of $P_{q,\epsilon}$ to produce a block diagonal matrix with factors $B$ and $N_{q-1, \epsilon}$. Then, we factor $N_{q-1, \epsilon}$ into a block diagonal matrix with factors $B$ and $P_{q-2, \epsilon}$. Exhausting this procedure eventually yields a factorization with $q$-many $B$ blocks. The remaining block is a $1 \times 1$ matrix. When $q$ is even, this entry will be $\epsilon+2$, and when $q$ is odd, the entry will be $\epsilon-2$. \end{proof} This concludes the proof of \Cref{lemma:BigLemma}. We can now prove \Cref{DiagonalizeG(K)}, which claimed that $G(K)$ can be diagonalized. \begin{proof}[Proof of \Cref{DiagonalizeG(K)}.] Let $K$ be the specified knot, and construct a spanning surface and Goeritz matrix for $K$ as in \Cref{GoeritzSetup}. Notice that $G(K) = P_{k,2m-1}$. By \Cref{lemma:BigLemma}, $P_{k,2m-1}$ is congruent to a diagonal matrix $D$, where $D = (\bigoplus_{i=1}^k B) \oplus L$, where $L$ is a $1 \times 1$ matrix with entry $(2m-1) \pm 2$. This is what we wanted to show. \end{proof} \begin{thm} \label{thm:Signature} Let $K = T(3,3k+1;2m)$ where $k\geq 1$ and $m \geq 0$. Then \begin{align*} \sigma(K) = \begin{cases} -4k-2m-2 & \qquad k \equiv 1 \mod 2, \ \ m = 0, 1 \\ -4k-2m & \qquad \text{otherwise} \end{cases} \end{align*} \end{thm} \begin{proof} For the specified $K$, \Cref{DiagonalizeG(K)} tells us that the Goeritz matrix $G(K)$ is congruent to $D = (\bigoplus_{i=1}^k B) \oplus L$, where $L$ is a $1 \times 1$ matrix with entry $(2m-1) \pm 2$. The entry of $L$ is completely determined by $m$ and the parity of $k$. In particular, $(2m-1) \pm 2 < 0$ if and only if $m = 0, 1$ and $k$ is odd. Therefore, by applying Sylvester's theorem of inertia, we have: \begin{align} \label{eqn:SignG(K)} \text{sign}(G(K))= \text{sign}(P_{k,2m-1}) = \text{sign}(D) = \begin{cases} -k-1 & \qquad k \equiv 1 \mod 2, \ \ m = 0, 1 \\ -k+1 & \qquad \text{otherwise} \end{cases} \end{align} By \Cref{GordonLitherland_Mu}, $\mu(K) = (3k+1)+2m$. Combining this with $\sigma(K) = \text{sign}(G(K))-\mu(K)$, we deduce the result. \end{proof} \begin{rmk} Suppose $k \geq 1$ and $m \geq 0$. 
If $\sigma(T(3,3k+1;2m)) = -4k-2m-2$, then by \Cref{thm:Signature}, $K$ is a torus knot: either $K \approx T(3, 3k+1)$ or $K \approx T(3,3k+1; 2)$, which, by \Cref{lemma:AdjacentTTKs}, is isotopic to $T(3, 3k+2)$. \end{rmk} \subsection{Signatures and the Seifert genus} The following lemmas study how the signatures of twisted torus knots of a fixed genus are related. \begin{lemma} \label{lemma:DifferentSignaturesCase1} Let $K_1 = T(3,3k+1;2m)$ and $K_2=T(3,3(k-1)+1; 2(m+3))$ where $m \geq 2$ and $(3(k-1)+1) \geq 4$. Then $g(K_1)=g(K_2)$, but $\sigma(K_1) - \sigma(K_2) = 2$. \end{lemma} \begin{proof} As in the proof of \Cref{lemma:PotentialConcordance}, to prove that $g(K_1) = g(K_2)$, it suffices to check that $K_1$ and $K_2$ have the same writhe: \[ wr(K_1) = 2(3k+1)+2m = 2(3k+1 + 3 - 3) + 2m = 2(3(k-1)+1) + 6 + 2m = wr(K_2) \] $K_1$ and $K_2$ both satisfy the hypotheses of \Cref{thm:Signature}. Since we assumed that $m \geq 2$, the signatures of $K_1$ and $K_2$ have forms in the second case of \Cref{thm:Signature}. Thus, \begin{align*} \sigma(K_1) - \sigma(K_2) = (-4k-2m) - (-4(k-1)-2(m+3)) =2 \end{align*} This is what we wanted to show. \end{proof} \begin{lemma} \label{lemma:DifferentSignaturesCase2} Suppose $k \geq 1$. For $0 \leq i \leq k-1$, let $K_i = T(3, 3(k-i)+1; 6i)$. The knots $K_i$ have the same Seifert genus. Additionally, if $k \equiv 0 \mod 2$, then for all $i$, $\sigma(K_i) - \sigma(K_{i+1})=2$, but if $k \equiv 1 \mod 2$, then $\sigma(K_0) = \sigma(K_1)$, and for all $i \geq 1$, $\sigma(K_i) - \sigma(K_{i+1})=2$. \end{lemma} \begin{proof} It is elementary to check that all the knots have the same writhe. Since they are all positive 3-braid closures, it follows they have the same Seifert genus. Suppose $k$ is even. By applying the \Cref{thm:Signature}, we see that $\sigma(K_0) = -4k$, $\sigma(K_1) = -4k-2$, and $\sigma(K_2)=-4k-4$. In particular, $\sigma(K_0) - \sigma(K_1) = 2$ and $\sigma(K_1) - \sigma(K_2) = 2$. Applying \Cref{lemma:DifferentSignaturesCase1} for $2 \leq i \leq s-1$ yields the desired result. Now suppose $k$ is odd. \Cref{thm:Signature} tells us that $\sigma(K_0) = -4k-2$, $\sigma(K_1) = -4k-2$, and $\sigma(K_2)=-4k-4$. In particular, $\sigma(K_0) = \sigma(K_1)$ and $\sigma(K_1) - \sigma(K_2) = 2$. Applying \Cref{lemma:DifferentSignaturesCase1} for $2 \leq i \leq s-1$ concludes the proof. \end{proof} \begin{lemma} \label{lemma:DifferentSignaturesCase3} Suppose $k \geq 1$. For $0 \leq i \leq k-1$, let $K_i = T(3, 3(k-i)+1; 2+6i)$. The knots $K_i$ have the same Seifert genus. Additionally, if $k \equiv 0 \mod 2$, then for all $i$, $\sigma(K_i) - \sigma(K_{i+1})=2$, but if $k \equiv 1 \mod 2$, then $\sigma(K_0) = \sigma(K_1)$, and for all $i \geq 1$, $\sigma(K_i) - \sigma(K_{i+1})=2$. \end{lemma} \begin{proof} The proof is nearly identical to the one in \Cref{lemma:DifferentSignaturesCase2}. Note that if $k \equiv 1 \mod 2$, then $\sigma(K_0)=\sigma(K_1)=-4k-4$. \end{proof} To prove \Cref{thm:DistinctConcordanceClasses}, we need one last lemma which analyzes the case where a twisted torus knot and a torus knot have the same Seifert genus and signature. \begin{lemma} \label{lemma:DifferentSignaturesCase4} Suppose $K_1 = T(3,q)$ is a torus knot, and $K_2$ is a twisted torus knot on three strands such that $g(K_1) = g(K_2)$ and $\sigma(K_1) = \sigma(K_2)$. The knots $K_1$ and $K_2$ are not concordant. \end{lemma} \begin{proof} Suppose $K_1$ and $K_2$ are a pair of knots satisfying the hypothesis of the lemma. 
By \Cref{lemma:DifferentSignaturesCase2} and \Cref{lemma:DifferentSignaturesCase3}, we must be in one of the following two scenarios: \begin{enumerate} \item $k$ is odd, with $K_1 = T(3, 3k+1; 0)$ and $K_2=T(3, 3(k-1)+1; 6)$. \item $k$ is odd, with $K_1 = T(3, 3k+1; 2)$ and $K_2=T(3, 3(k-1)+1; 8)$. \end{enumerate} We first prove the lemma for \textbf{Case (1)}. If $K_1$ is concordant to $K_2$, then $K = K_1 \# m(K_2^r)$ is slice. The Fox-Milnor criterion says that the Alexander polynomial of $K$ would factor in a particular way: in particular, $\Delta_K(t)=f(t)f(t^{-1})$. To obstruct a concordance between $K_1$ and $K_2$, we will show that the putative factorization of Alexander polynomial does not exist. We recall that torus knots have simple Alexander polynomials \cite{Rolfson:KnotsAndLinks}: in general, \begin{align*} \Delta_{T(p,q)}(t) = \displaystyle \prod _{\substack{h|p, \ell | q \\ h, \ell \neq 1}} \phi_{\ell h}(t) \end{align*} where $\phi_n(t)$ denotes the $n$-th cyclotomic polynomial. Thus, $\Delta_{K_1}(t) = \Delta_{T(3,q)}(t) = \displaystyle \prod _{\substack{\ell | q, \ \ell \neq 1}} \phi_{3\ell}(t)$. In particular, $\Delta_{K_1}(t)$ factors into a product of cyclotomic polynomials, where there are no repeated factors. Therefore, to prove that $\Delta_K(t)$ cannot factor as described by the Fox-Milnor criterion, it suffices to prove that $\Delta_{K_2}(t) \neq \Delta_{K_1}(t)$. We will use the reduced Burau matrix, which we described in \Cref{AlexPolyResult}. In particular, when a knot is
\section{Introduction} The Hamiltonian of General Relativity (GR) is a sum of constraints. The constraints are usually grouped into two sets: a single scalar (Hamiltonian) constraint $S$ and a three-component vector constraint $V$, so that the gravitational Hamiltonian is: \begin{equation} H[N,\vec{N}] = S[N]+V[\vec{N}]=\int_{\Sigma}d^3x N C + \int_{\Sigma}d^3x \vec{N} \cdot \vec{C}, \end{equation} where $\Sigma$ is a spatial hypersurface. Here, $N$ and $\vec{N}$ are the lapse function and the shift vector respectively, which are integrated against the constraints $C$ and $\vec{C}$ to give the smeared constraints $S[N]$ and $V[\vec{N}]$. By imposing the constraints on the kinematical phase space $\Gamma_{\text{kin}}$, the physical phase space $\Gamma_{\text{phys}}$ is obtained. However, due to the complicated form of the constraints (specifically the scalar constraint $C$), extraction of the physical phase space is in general a difficult task. This difficulty propagates to the quantum case, where the quantum constraints $\hat{C}$ and $\hat{\vec{C}}$ are imposed on the initial kinematical Hilbert space $\mathcal{H}_{\text{kin}}$ in order to extract the physical states $|\Psi_{\text{phys}}\rangle$ belonging to the physical Hilbert space $\mathcal{H}_{\text{phys}} \subseteq \mathcal{H}_{\text{kin}}$ (the equality $\mathcal{H}_{\text{phys}} = \mathcal{H}_{\text{kin}}$ corresponds to the trivial case of vanishing constraints).

There are various strategies for approaching the problem. Before we proceed to reviewing the most common of them, let us restrict our considerations to the case of a single quantum constraint $\hat{C}$: the quantum Hamiltonian (scalar) constraint. This assumption simplifies our considerations and makes them more transparent. The extension to the case of multiple constraints is, in principle, straightforward: a given method of solving a single constraint has to be applied successively, although additional technical difficulties may appear due to differences in the functional form of the constraints. Furthermore, the case with a single constraint $\hat{C}$ corresponds to homogeneous minisuperspace models, which are relevant in (quantum) cosmology.

By extracting states which solve the quantum Hamiltonian constraint $\hat{C}$, a subspace $\mathcal{H}_C$ of the kinematical Hilbert space $\mathcal{H}_{\text{kin}}$ can be found. In general, the subspace $\mathcal{H}_C$ is further restricted by solving the quantum vector constraint $\hat{\vec{C}}$, leading to $\mathcal{H}_{\text{phys}}$, such that $\mathcal{H}_{\text{phys}} \subseteq \mathcal{H}_C$. However, in the special case of a vanishing vector constraint (which holds for certain minisuperspace models), we have $\mathcal{H}_C = \mathcal{H}_{\text{phys}}$. Therefore, the following sequence of weak inclusions holds in general: $\mathcal{H}_{\text{phys}} \subseteq \mathcal{H}_C \subseteq \mathcal{H}_{\text{kin}}$, independently of whether the vector constraint is solved before or after the Hamiltonian constraint.

Perhaps the most common approach to determining $\mathcal{H}_C$ is provided by the Dirac method of quantizing constrained systems. Here, taking $\hat{C}$ to be a self-adjoint operator, one looks for states which are annihilated by it, i.e. \begin{equation} \hat{C}| \Psi \rangle \approx 0, \label{WHD} \end{equation} where ``$\approx$'' denotes weak equality, i.e. equality satisfied for the states $| \Psi \rangle \in \mathcal{H}_C$. Eq.
(\ref{WHD}) is the famous Wheeler-DeWitt (WDW) equation. Solutions to the equation, which belong to the kernel of the operator $\hat{C}$, span the Hilbert space $\mathcal{H}_C = \ker{\hat{C}}$. The difficulty of the method lies in finding solutions to the WDW equation; solutions are known e.g. for certain quantum cosmological models \cite{HH,Vilenkin}.

Extraction of the physical states can alternatively be performed employing the \emph{group averaging} \cite{Ashtekar:1995zh} approach, which utilizes the projection operator $\hat{P}$. The operator $\hat{P}$ is non-unitary, but self-adjoint ($\hat{P}^{\dagger}=\hat{P}$) and idempotent ($\hat{P}^2=\hat{P}$), and for the case of a constraint $\hat{C}$ with a zero eigenvalue it takes the following form: \begin{equation} \hat{P}=\lim_{T\rightarrow \infty} \frac{1}{2T} \int_{-T}^{T}d\tau e^{i \tau \hat{C}}. \end{equation} The expression performs a Dirac delta-like action on the kinematical states, projecting them onto the physical subspace.

Another widely explored method of finding the physical states is provided by the \emph{reduced phase space} method \cite{Thiemann:2004wk}, in which one looks for solutions of the constraints already at the classical level. For gravity, this is presumably not possible in general. However, the utility of the approach has been shown for certain minisuperspace models (see e.g. \cite{Mielczarek:2011mx}). Once $\Gamma_{\text{phys}}$ is extracted, the algebra of observables is quantized, leading to the physical Hilbert space $\mathcal{H}_{\text{phys}}$.

The method we are going to study here is based on the observation made in Ref. \cite{Mielczarek:2021xik}: when a Hamiltonian constraint $C\approx 0$ is considered, the configurations satisfying the constraint can be found by identifying ground states of a new Hamiltonian $C^2$ (indeed, $C^2 \geq 0$, with equality precisely on the constraint surface $C=0$). The possibility of extracting $\Gamma_{\text{phys}}$ for a prototype classical constraint $C$ with the use of adiabatic quantum computing has been discussed. Here, we generalize the method to the quantum case and investigate its implementation on a universal quantum computer. The approach utilizes the Variational Quantum Eigensolver (VQE) \cite{Peruzzo}, which is a hybrid quantum algorithm. The algorithm has been widely discussed in the literature, in particular in the context of quantum chemistry \cite{Kandala,Cao}. While our VQE-based method is introduced in a general fashion, which does not depend on the particular form of $\hat{C}$, throughout the article we will mostly refer to a concrete case of $\hat{C}$ corresponding to a quantum cosmological model. The VQE will be implemented both on a simulator of a quantum computer (employing the PennyLane \cite{PennyLane} and Qiskit \cite{Qiskit} tools) and on an actual superconducting quantum computer provided by IBM \cite{IBM}.

Applying quantum computing methods unavoidably requires dealing with finite systems, i.e. systems with finite-dimensional Hilbert spaces. Because standard canonical quantization of gravitational systems does not lead to a finite-dimensional Hilbert space representation, a procedure cutting off the dimension of the Hilbert space has to be applied. For this purpose, we apply the recently introduced Non-linear Field Space Theory (NFST) \cite{Mielczarek:2016rax}, which provides a systematic procedure for compactifying the standard affine phase spaces. The compactification leads to a finite volume of the phase space and, in consequence, a finite dimension of the Hilbert space.
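Since the whole construction hinges on this finite dimensionality, it may be worth recording the elementary qubit count it makes possible (an illustrative remark added here for orientation, not part of the original derivation): a Hilbert space of finite dimension $d$ can be hosted on a register of
\begin{equation*}
n_{\text{qubits}} = \lceil \log_2 d \rceil
\end{equation*}
logical qubits, since a register of $n$ qubits spans a $2^n$-dimensional space. For instance, $d=3$ already fits on two qubits and $d=5$ on three; the remaining $2^n-d$ basis states of the register are simply not used. Thus the smaller the dimension produced by the compactification, the fewer logical qubits are required.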
In the case of the spherical compactification of an $\mathbb{R}^2$ phase space, the control parameter of the cut-off is the total spin $S$, associated with the volume of the spherical phase space. In the large spin limit ($S\rightarrow \infty$), the standard case with an infinite-dimensional Hilbert space is recovered. Depending on the available quantum computational resources, the value of $S$ can be fixed such that the corresponding Hilbert space can be represented with the available number of logical qubits. An additional advantage of the method introduced in this article is that finding the physical states with a variational method explicitly yields an operator (i.e. an ansatz with determined parameters) which can be used to generate the physical states on a quantum computer. The states can then be used for further simulations on a quantum processor; for example, transition amplitudes between the states can be evaluated. On the other hand, when the physical states are found using analytical methods or classical numerics, the difficulty of constructing an operator preparing a given state remains.

The organization of the article is the following. In Sec. \ref{Compact}, the method of regularizing the Hamiltonian constraint, employing compactification of the phase space, is introduced. The procedure is applied to the case of de Sitter cosmology. Then, in Sec. \ref{VQE}, general considerations concerning the VQE applied to solving the Hamiltonian constraint are made. The qubit representation of the Hamiltonian constraint introduced in Sec. \ref{Compact} is discussed in Sec. \ref{ExpectationValues}. In Sec. \ref{FixedSpin} the problem of determining the fixed-spin subspace of the physical Hilbert space is addressed. A quantum method of evaluating gradients in the VQE procedure is presented in Sec. \ref{Gradient}. Examples of applying the procedure are given in Sec. \ref{S1} for the case of spin $s=1$ and in Sec. \ref{S2} for $s=2$. Computational complexity considerations for the method are made in Sec. \ref{Complexity}. The results are summarized in Sec. \ref{Summary}.

\section{Compact phase space regularization of de Sitter model} \label{Compact}

The initial step towards quantum variational solving of the Wheeler-DeWitt equation is making the system's Hilbert space finite. Actually, there are theoretical arguments for the gravitational Hilbert space being locally finite \cite{Bao:2017rnv}. Some of the approaches to quantum gravity aim to implement this property while performing quantization of gravitational degrees
\section{Introduction} The observation of a new bosonic state with a mass around $\sim 125\mbox{GeV}$ \cite{:2012gk,:2012gu} may provide a window into new physics beyond the Standard Model (SM). At the moment the observed signal strengths are consistent with the SM Higgs boson but more data is needed to assess the nature of the recently discovered state. Physics beyond the SM may affect the Higgs decay rates to SM particles and give rise to new channels of Higgs decays (for recent reviews of nonstandard Higgs boson decays see \cite{Chang:2008cw}). In particular, Higgs boson can decay with a substantial branching fraction into states which can not be directly detected. Such invisible Higgs decay modes may occur in models with an enlarged symmetry breaking sector (Majoron models, SM with extra singlet scalar fields etc.) \cite{majoron, Martin:1999qf}, in ``hidden valley'' models \cite{hidden-valley}, in the SM with a fourth generation of fermions \cite{fourth-generation}, in the supersymmetric (SUSY) extensions of the SM \cite{Baer:1987eb}\footnote{Recently the nonstandard Higgs decays within the Next--to--Minimal Supersymmetric Standard model were discussed in \cite{King:2012tr}.}, in the models with compact and large extra dimensions \cite{Martin:1999qf, higgs-extraD}, in the littlest Higgs model with T-parity \cite{Asano:2006nr} etc. In the context of invisible Higgs decays it is especially interesting to consider the nature and extent of invisibility acquired by the SM--like Higgs state within well motivated SUSY extensions of the SM. Here we focus on the $E_6$ inspired SUSY models which are based on the low--energy SM gauge group together with an extra $U(1)_{N}$ gauge symmetry defined by: \be U(1)_N=\ds\frac{1}{4} U(1)_{\chi}+\ds\frac{\sqrt{15}}{4} U(1)_{\psi}\,. \label{1} \ee The\quad two\quad anomaly-free\quad $U(1)_{\psi}$\quad and $U(1)_{\chi}$ symmetries can originate from the breakings $E_6\to$ $SO(10)\times U(1)_{\psi}$, $SO(10)\to SU(5)\times U(1)_{\chi}$. To ensure anomaly cancellation the particle spectrum in these models is extended to fill out three complete 27-dimensional representations of the gauge group $E_6$. Each $27$-plet contains one generation of ordinary matter; singlet fields, $S_i$; up and down type Higgs doublets, $H^{u}_{i}$ and $H^{d}_{i}$; charged $\pm 1/3$ coloured exotics $D_i$, $\bar{D}_i$. The presence of exotic matter in $E_6$ inspired SUSY models generically lead to non--diagonal flavour transitions and rapid proton decay. To suppress flavour changing processes as well as baryon and lepton number violating operators one can impose a set of discrete symmetries \cite{King:2005jy}--\cite{King:2005my}. The $E_6$ inspired SUSY models with extra $U(1)_{N}$ gauge symmetry and suppressed flavor-changing transitions, as well as baryon number violating operators allow exotic matter to survive down to the TeV scale that may lead to spectacular new physics signals at the LHC which were analysed in \cite{King:2005jy}--\cite{Accomando:2006ga}. Only in this Exceptional Supersymmetric Standard Model (E$_6$SSM) \cite{King:2005jy}--\cite{King:2005my} right--handed neutrinos do not participate in the gauge interactions so that they may be superheavy, shedding light on the origin of the mass hierarchy in the lepton sector and providing a mechanism for the generation of the baryon asymmetry in the Universe via leptogenesis \cite{King:2008qb}. 
Recently the particle spectrum and collider signatures associated with it were studied within the constrained version of the E$_6$SSM \cite{8}. In this note we consider the nonstandard Higgs decays within the $E_6$ inspired SUSY models in which a single discrete $\tilde{Z}^{H}_2$ symmetry forbids tree-level flavor-changing transitions and the most dangerous baryon and lepton number violating operators \cite{nevzorov}. These models contain at least two states which are absolutely stable and can contribute to the relic density of dark matter. One of these states is a lightest SUSY particle (LSP) while another one tends to be the lightest ordinary neutralino. The masses of the LSP and next--to--lightest SUSY particle (NLSP) are determined by the vacuum expectation values (VEVs) of the Higgs doublets. As a consequence they give rise to nonstandard Higgs boson decays. In the phenomenologically viable scenarios LSP should have mass around $1\,\mbox{eV}$ or even smaller forming hot dark matter in the Universe while NLSP can be substantially heavier. NLSPs with GeV scale masses result in substantial branching ratios of the lightest Higgs decays into NLSPs. Since NLSP tend to be longlived particle in this case it decays outside the detectors leading to the invisible decays of the SM-like Higgs state. In the considered $E_6$ inspired SUSY model the lightest ordinary neutralino can account for all or some of the observed cold dark matter relic density. The paper is organised as follows. In the next section we briefly review the $E_6$ inspired SUSY models with exact custodial $\tilde{Z}^{H}_2$ symmetry. In section 3 we specify a set of benchmark scenarios that lead to the invisible decays of the SM--like Higgs state mentioned above. Section 4 concludes the paper. \section{$E_6$ inspired SUSY models with exact $\tilde{Z}^{H}_2$ symmetry} In this section, we give a brief review of the $E_6$ inspired SUSY models with exact custodial $\tilde{Z}^{H}_2$ symmetry \cite{nevzorov}. These models imply that near some high energy scale (scale $M_X$) $E_6$ or its subgroup is broken down to $SU(3)_C\times SU(2)_W\times U(1)_Y\times U(1)_{\psi}\times U(1)_{\chi}\times Z_{2}^{M}$, where $Z_{2}^{M}=(-1)^{3(B-L)}$ is a matter parity. Below scale $M_X$ the particle content of the considered models involves three copies of $27_i$--plets and a set of $M_{l}$ and $\overline{M}_l$ supermultiplets from the incomplete $27'_l$ and $\overline{27'}_l$ representations of $E_6$. All matter superfields, that fill in complete $27_i$--plets, are odd under $\tilde{Z}^{H}_2$ discrete symmetry while the supermultiplets $\overline{M}_l$ can be either odd or even. All supermultiplets $M_{l}$ are even under the $\tilde{Z}^{H}_2$ symmetry and therefore can be used for the breakdown of gauge symmetry. In the simplest case the set of $M_{l}$ includes $H_u$, $H_d$, $S$ and $L_4$, where $L_4$ and $\overline{L}_4$ are lepton $SU(2)_W$ doublet and anti--doublet supermultiplets that originate from a pair of additional $27'_{L}$ and $\overline{27'}_L$. At low energies (i.e. TeV scale) the superfields $H_u$, $H_d$ and $S$ play the role of Higgs fields. The VEVs of these superfields ($\langle H_d \rangle = v_1/\sqrt{2}$, $\langle H_u \rangle = v_2/\sqrt{2}$ and $\langle S \rangle = s/\sqrt{2}$) break the $SU(2)_W\times U(1)_Y\times U(1)_{N}$ gauge symmetry down to $U(1)_{em}$ associated with the electromagnetism. In the simplest scenario $\overline{H}_u$, $\overline{H}_d$ and $\overline{S}$ are odd under the $\tilde{Z}^{H}_2$ symmetry. 
As a consequence $\overline{H}_u$, $\overline{H}_d$ and $\overline{S}$ from the $\overline{27'}_l$ get combined with the superposition of the corresponding components from $27_i$ so that the resulting vectorlike states gain masses of order of $M_X$. On the other hand $L_4$ and $\overline{L}_4$ are even under the $\tilde{Z}^{H}_2$ symmetry. These supermultiplets form TeV scale vectorlike states to render the lightest exotic quark unstable. In this simplest scenario the exotic quarks are leptoquarks. The $\tilde{Z}^{H}_2$ symmetry allows the Yukawa interactions in the superpotential that originate from $27'_l \times 27'_m \times 27'_n$ and $27'_l \times 27_i \times 27_k$. One can easily check that the corresponding set of operators does not contain any that lead to the rapid proton decay. Since the set of multiplets $M_{l}$ contains only one pair of doublets $H_d$ and $H_u$ the $\tilde{Z}^{H}_2$ symmetry also forbids unwanted FCNC processes at the tree level. The gauge group and field content of the $E_6$ inspired SUSY models considered here can originate from the orbifold GUT models in which the splitting of GUT multiplets can be naturally achieved \cite{nevzorov}. In the simplest scenario discussed above extra matter beyond the minimal supersymmetric standard model (MSSM) fill in complete $SU(5)$ representations. As a result the gauge coupling unification remains almost exact in the one--loop approximation. It was also shown that in the two--loop approximation the unification of the gauge couplings in the considered scenario can be achieved for any phenomenologically acceptable value of $\alpha_3(M_Z)$, consistent with the central measured low energy value \cite{unif-e6ssm}. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $27_i$ & $27_i$ &$27'_{H_u}$&$27'_{S}$& $\overline{27'}_{H_u}$&$\overline{27'}_{S}$&$27'_{L}$\\ & & &$(27'_{H_d})$& &$(\overline{27'}_{H_d})$& &$(\overline{27'}_L)$\\ \hline &$Q_i,u^c_i,d^c_i,$&$\overline{D}_i,D_i,$ & $H_u$ & $S$ & $\overline{H}_u$&$\overline{S}$&$L_4$\\ &$L_i,e^c_i,N^c_i$ & $H^d_{i},H^u_{i},S_i$& $(H_d)$ & & $(\overline{H}_d)$& &$(\overline{L}_4)$\\ \hline $\tilde{Z}^{H}_2$ & $-$ & $-$ & $+$ & $+$ & $-$&$-$&$+$\\ \hline $Z_{2}^{M}$ & $-$ & $+$ & $+$ & $+$ & $+$&$+$&$-$\\ \hline $Z_{2}^{E}$ & $+$ & $-$ & $+$ & $+$ & $-$&$-$&$-$\\ \hline \end{tabular} \caption{Transformation properties of different components of $E_6$ multiplets under $\tilde{Z}^H_2$, $Z_{2}^{M}$ and $Z_{2}^{E}$ discrete symmetries.} \label{tab1} \end{table} As mentioned before, the gauge symmetry in the $E_6$ inspired SUSY models being considered here, is broken so that the low--energy effective Lagrangian of these models is invariant under both $Z_{2}^{M}$ and $\tilde{Z}^{H}_2$ symmetries. Since $\tilde{Z}^{H}_2 = Z_{2}^{M}\times Z_{2}^{E}$ the $Z_{2}^{E}$ symmetry associated with exotic states is also conserved. The transformation properties of different components of $27_i$, $27'_l$ and $\overline{27'}_l$ supermultiplets under the $\tilde{Z}^{H}_2$, $Z_{2}^{M}$ and $Z_{2}^{E}$ symmetries are summarized in Table~\ref{tab1}. The invariance of the Lagrangian under the $Z_{2}^{E}$ symmetry implies that the lightest exotic state, which is odd under this symmetry, must be stable. Using the method proposed in \cite{Hesselbach:2007te} it was argued that that there are theoretical upper bounds on the masses
Jean Van Schaftingen # Publications ## Submitted papers [1] , , , and , Spaces of Besov–Sobolev type and a problem on nonlinear approximation. [2] , , and , Families of functionals representing Sobolev norms. [3] and , Limiting Sobolev and Hardy inequalities on stratified homogeneous groups. ## Accepted papers [4] and , Quantitative characterization of traces of Sobolev maps, to appear in Commun. Contemp. Math. [5] , and , Renormalised energies and renormalisable singular harmonic maps into a compact manifold on planar domains, to appear in Math. Annal. [6] , Reverse superposition estimates in Sobolev spaces, to appear in Pure Appl. Funct. Anal. ## Published papers [7] , and , On limiting trace inequalities for vectorial differential operators, Indiana Univ. Math. J. 70 (2021), no. 5, 2133–2176. [8] , and , Ginzburg-Landau relaxation for harmonic maps on planar domains into a general compact vacuum manifold, Arch. Rat. Mech. Anal. 242 (2021), no. 2, 875–935. [9] and , Lifting in compact covering spaces for fractional Sobolev mappings, Anal. PDE 14 (2021), no. 6, 1851–1871. [10] and , Trace theory for Sobolev mappings into a manifold, Ann. Fac. Sci. Toulouse Math. (6) 30 (2021), no. 2, 281–299. [11] , and , Going to Lorentz when fractional Sobolev, Gagliardo and Nirenberg estimates fail, Calc. Var. Partial Differential Equations 60 (2021), no. 129 [12] , and , A surprising formula for Sobolev norms, Proc. Natl. Acad. Sci. USA 118 (2021), no. 8, e2025254118. [13] and , Metric characterization of the sum of fractional Sobolev spaces, Stud. Math. 258 (2021), 27–51. [14] and , Estimates of the amplitude of holonomies by the curvature of a connection on a bundle, Pure Appl. Funct. Anal. 5 (2020), no. 4, 891–897. [15] and , Characterization of the traces on the boundary of functions in magnetic Sobolev spaces, Adv. Math. 371 (2020), 107246. [16] , and , Groundstates for Choquard type equations with Hardy–Littlewood–Sobolev lower critical exponent, Proc. Roy. Soc. Edinb. A 150 (2020), no. 3, 1377–1400. [17] and , Range convergence monotonicity for vector measures and range monotonicity of the mass, Ric. Mat. 69 (2020), no. 1, 293-326. [18] and , An estimate of the Hopf degree of fractional Sobolev mappings, Proc. Amer. Math. Soc. 148 (2020), no. 7, 2877–2891. [19] and , Vortex motion for the lake equations, Comm. Math. Phys. 375 (2020), no. 2, 1459–1501. [20] , Estimates by gap potentials of free homotopy decompositions of critical Sobolev maps, Adv. Nonlinear Anal. 9 (2019), no. 1, 1214–1250. [21] and , Optimal embeddings into Lorentz spaces for some vector differential operators via Gagliardo’s lemma, Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 30 (2019), no. 3, 413–436. [22] , , and , Representing three-dimensional cross fields using fourth order tensors, in Roca  X. and Loseille  A. (eds.), 27th International Meshing Roundtable. IMR 2018, Springer, Cham, Lecture Notes in Computational Science and Engineering, No. 127, 2019, 89–108. [23] and , Higher order intrinsic weak differentiability and Sobolev spaces between manifolds, Adv. Calc. Var. 12 (2019), no. 3, 303–332. [24] , and , Properties of groundstates of nonlinear Schrödinger equations under a weak constant magnetic field, J. Math. Pures Appl. (9) 124 (2019), 123–168. [25] and , Uniform boundedness principles for Sobolev maps into manifolds, Ann. Inst. H. Poincaré Anal. Non Linéaire 36 (2019), no. 2, 417–449. [26] , , , and , Sharp Gagliardo-Nirenberg inequalities in fractional Coulomb–Sobolev spaces, Trans. Amer. Math. Soc. 
370 (2018), no. 11, 8285–8310. [27] and , Groundstates of the Choquard equations with a sign-changing self-interaction potential, Z. Angew. Math. Phys. 69 (2018), no. 3, 69:86. [28] and , Groundstates for a local nonlinear perturbation of the Choquard equations with lower critical exponent, J. Math. Anal. Appl. 464 (2018), no. 2, 1184–1202. [29] , and , Weak approximation by bounded Sobolev maps with values into complete manifolds, C. R. Math. Acad. Sci. Paris 356 (2018), no. 3, 264–271. [30] , Sobolev mappings: from liquid crystals to irrigation via degree theory, Lecture notes of the Godeaux Lecture delivered at the 9th Brussels Summer School of Mathematics (2018) [31] and , Odd symmetry of least energy nodal solutions for the Choquard equation, J. Differential Equations 264 (2018), no. 2, 1231–1262. [32] and , Gauge-measurable functions, Rend. Istit. Mat. Univ. Trieste 49 (2017), 113–135. [33] , , and , Semiclassical Sobolev constants for the electro-magnetic Robin Laplacian, J. Math. Soc. Japan 69 (2017), no. 4, 1667–1714. [34] , and , Density of bounded maps in Sobolev spaces into complete manifolds, Ann. Mat. Pura Appl. (4) 196 (2017), no. 6, 2261–2301. [35] and , Approximation of symmetrizations by Markov processes, Indiana Univ. Math. J. 66 (2017), no. 4, 1145–1172. [36] and , Standing waves with a critical frequency for nonlinear Choquard equations, Nonlinear Anal. 161 (2017), 87–107. [37] , and , The logarithmic Choquard equation: sharp asymptotics and nondegeneracy of the groundstate, J. Funct. Anal. 272 (2017), no. 12, 5255–5281. [38] , and , Bourgain–Brezis estimates on symmetric spaces of non-compact type, J. Funct. Anal. 273 (2017), no. 4, 1504-1547. [39] and , Existence of groundstates for a class of nonlinear Choquard equations in the plane, Adv. Nonlinear Stud. 17 (2017), no. 3, 581–594. [40] , and , The incompressible Navier Stokes flow in two dimensions with prescribed vorticity, in Sagun  Chanillo, Bruno  Franchi, Guozhen  Lu, Carlos  Perez and Eric T.  Sawyer (eds.), Harmonic Analysis, Partial Differential Equations and Applications, Birkhäuser, Applied and Numerical Harmonic Analysis, 2017, 19–25. [41] and , Controlled singular extension of critical trace Sobolev maps from spheres to compact manifolds, Int. Math. Res. Not. IMRN 2017 (2017), no. 12, 3467–3683. [42] and , A guide to the Choquard equation, J. Fixed Point Theory Appl. 19 (2017), no. 1, 773–813. [43] , and , An $$L^1$$–type estimate for Riesz potentials, Rev. Mat. Iberoam. 33 (2017), no. 1, 291–304. [44] , and , Variations on a proof of a borderline Bourgain–Brezis Sobolev embedding theorem, Chinese Ann. Math. Ser. B 38 (2017), no. 1, 235–252. [45] and , Choquard equations under confining external potentials, NoDEA Nonlinear Differential Equations Appl. 24 (2017), no. 1, 1–24. [46] , and , Least action nodal solutions for the quadratic Choquard equation, Proc. Amer. Math. Soc. 145 (2017), no. 2, 737–747. [47] , and , Groundstates and radial solutions to nonlinear Schrödinger–Poisson–Slater equations at the critical frequency, Calc. Var. Partial Differential Equations 55 (2016), no. 146, 58. [48] and , Nodal solutions for the Choquard equation, J. Funct. Anal. 271 (2016), no. 1, 107–135. [49] and , Intrinsic colocal weak derivatives and Sobolev spaces between manifolds, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 16 (2016), no. 1, 97–128. [50] , and , Applications of Bourgain–Brezis inequalities to fluid mechanics and magnetism, C. R. Math. Acad. Sci. Paris 354 (2016), no. 1, 51–55. 
[51] and , Geometric partial differentiability on manifolds: the tangential derivative and the chain rule, J. Math. Anal. Appl. 435 (2016), no. 2, 1672–1681. [52] and , Groundstates of nonlinear Choquard equations: Hardy–Littlewood–Sobolev critical exponent, Commun. Contemp. Math. 17 (2015), no. 5, 1550005 (12 pages). [53] and , Existence of groundstates for a class of nonlinear Choquard equations, Trans. Amer. Math. Soc. 367 (2015), no. 9, 6557–6579. [54] and , Semiclassical stationary states for nonlinear Schrödinger equations under a strong external magnetic field, J. Differential Equations 259 (2015), no. 2, 596–627. [55] , and , Strong density for higher order Sobolev spaces into compact manifolds, J. Eur. Math. Soc. (JEMS) 17 (2015), no. 4, 763–817. [56] and , Semi-classical states for the Choquard equations, Calc. Var. Partial Differential Equations 52 (2015), no. 1, 199–235. [57] and , Existence, stability and oscillation properties of slow decay positive solutions of supercritical elliptic equations with Hardy potential, Proc. Roy. Soc. Edinburgh Sect. A 58 (2015), no. 1, 255–271. [58] , Limiting Bourgain–Brezis estimates for systems of linear differential equations: Theme and variations, J. Fixed Point Theory Appl. 15 (2014), no. 2, 273–297. [59] , and , Strong approximation of fractional Sobolev maps, J. Fixed Point Theory Appl. 15 (2014), no. 1, 133–153. [60] and , Hardy–Sobolev inequalities for vector fields and canceling linear differential operators, Indiana Univ. Math. J. 63 (2014), no. 5, 1419–1445. [61] , Equivalence between Pólya–Szegő and relative capacity inequalities under rearrangement, Arch. Math. (Basel) 103 (2014), no. 4, 367–379. [62] , Interpolation inequalities between Sobolev and Morrey–Campanato spaces: A common gateway to concentration-compactness and Gagliardo–Nirenberg, Port. Math. 71 (2014), no. 3–4, 159–175. [63] , Approximation in Sobolev spaces by piecewise affine interpolation, J. Math. Anal. Appl. 420 (2014), no. 1, 40–47. [64] , and ,
$A$ is transitive if and only if that affine function on $\IZ/d\IZ$ permutes all $d$ elements in one cycle. By \cite[Section 3.2.1.2, Theorem A]{Knu98a}, this is equivalent to the conditions on $(d,e,s)$ given above. \end{proof} Henceforth, we always assume that $A$ is a transitive subgroup of $\GammaL_1(p^m)$ not containing $\widehat{\IF_{p^m}^{\ast}}$. Note that then $p^m-1\mid|A|=\frac{p^m-1}{d}\cdot\frac{m}{s}$, or equivalently, $ds\mid m$. Moreover, since $p^s-1$ must be divisible by every prime divisor of $d$ (and $d>1$), we also must have $s>1$. So, if $(d,e,s)$ are the standard parameters of $A$, then our assumptions on $A$ are equivalent to the following conditions on $(d,e,s)$: \begin{equation}\label{eqI} d\geq 2\text{ and }d\mid p^m-1, \end{equation} \begin{equation}\label{eqII} s\geq 2, ds\mid m,p^s-1 \text{ is divisible by each prime divisor of }d\text{, and }({4\mid d}\Rightarrow{4\mid p^s-1}), \end{equation} and \begin{equation}\label{eqIII} e\in\{1,\ldots,d-1\}, \gcd(d,e)=1 \text{ and }d\mid\frac{p^m-1}{p^s-1}. \end{equation} We note that since $1<d\mid\gcd(m,2^m-1)$, for $p=2$, the case $m\leq5$ is excluded, and for general $p$, at least the case $m=1$ is excluded. We thus note the following conditions: \begin{equation}\label{eq0} m,n\in\IN^+,m\geq2\text{ and }({p=2}\Rightarrow{m\geq6}). \end{equation} We will also need criteria for when another (not necessarily transitive) subgroup $K$ of $\GammaL_1(p^m)$ is contained in $A$ and when it is a normal subgroup of $A$. For the former, we use exactly the same characterization as the one in \cite[Lemma 4.8]{LLP09a}, whereas for the latter, we use a modified version of \cite[Lemma 4.9]{LLP09a}. \begin{lemma}\label{containmentLem}(Containment of subgroups of $\GammaL_1(p^m)$, see \cite[Lemma 4.8]{LLP09a}) Let $K$ be a subgroup of $\GammaL_1(p^m)$, with standard parameters $(d_1,e_1,s_1)$. Then $K\leq A$ if and only if \begin{enumerate} \item $d\mid d_1$, \item $s\mid s_1$, and \item $d\mid (e\frac{p^{s_1}-1}{p^s-1}-e_1)$. \end{enumerate}\qed \end{lemma} \begin{lemma}\label{normalityLem}(Normality in $A$ of subgroups of $\GammaL_1(p^m)$, cf.~\cite[Lemma 4.9]{LLP09a}) Let $K$ be a subgroup of $\GammaL_1(p^m)$, with standard parameters $(d_1,e_1,s_1)$. Assume that $K\leq A$, so that the conditions from Lemma \ref{containmentLem} hold. Then $K\unlhd A$ if and only if \begin{enumerate} \item $d_1\mid d\cdot(p^{s_1}-1)$ and \item $d_1\mid e_1(p^s-1)-e(p^{s_1}-1)$. \end{enumerate} \end{lemma} \begin{proof} As in \cite[proof of Lemma 4.9]{LLP09a}, we observe that $K\unlhd A$ is equivalent to the conjunction of $(\alpha^{s_1}\hat{\omega}^{e_1})^{\hat{\omega}^d}\in K$ and $(\alpha^{s_1}\hat{\omega}^{e_1})^{\alpha^s\hat{\omega}^e}\in K$, the former of which is equivalent to $d_1\mid d\cdot(p^{s_1}-1)$. We now show that the latter is equivalent to $d_1\mid e_1(p^s-1)-e(p^{s_1}-1)$. Indeed, observe that \[ (\alpha^{s_1}\hat{\omega}^{e_1})^{\alpha^s\hat{\omega}^e}=(\alpha^{s_1}\hat{\omega}^{p^se_1})^{\hat{\omega}^e}=\hat{\omega}^{-e}\alpha^{s_1}\hat{\omega}^{p^se_1+e}=\alpha^{s_1}\hat{\omega}^{-p^{s_1}e}\hat{\omega}^{p^se_1+e}=\alpha^{s_1}\hat{\omega}^{p^se_1+e(1-p^{s_1})}. \] Moreover, since $\alpha^{s_1}\hat{\omega}^{e_1}\in K$ and $K\cap\langle\hat{\omega}\rangle=\langle\hat{\omega}^{d_1}\rangle$, we have that for all $E\in\IZ$, $\alpha^{s_1}\hat{\omega}^E\in K$ if and only if $d_1\mid E-e_1$. 
It follows that $\alpha^{s_1}\hat{\omega}^{p^se_1+e(1-p^{s_1})}\in K$ if and only if $d_1\mid p^se_1+e(1-p^{s_1})-e_1=e_1(p^s-1)-e(p^{s_1}-1)$, as required. \end{proof} This allows us to prove: \begin{proposition}\label{uniqueMaxNormProp} Unless $(p,m)=(3,2)$, each transitive subgroup of $\GammaL_1(p^m)$ has a (unique) largest abelian normal subgroup, namely its intersection with $\widehat{\IF_{p^m}^{\ast}}$. \end{proposition} \begin{proof} This is clear for cyclic transitive subgroups of $\GammaL_1(p^m)$ by Lemma \ref{onlyCyclicLem}, so we may assume that the transitive subgroup to be considered is our arbitrary, but fixed non-cyclic transitive subgroup $A$. So, let $N$ be an abelian normal subgroup of $A$, say with standard parameters $(d_1,e_1,s_1)$. Then the conditions from Lemmas \ref{containmentLem} and \ref{normalityLem} hold. We want to show that $N$ is contained in $\langle\hat{\omega}^d\rangle$, which is equivalent to $s_1=m$. So assume that $s_1<m$. Since $N=\langle\alpha^{s_1}\hat{\omega}^{e_1},\hat{\omega}^{d_1}\rangle$ is abelian, we must have that $p^m-1\mid (p^{s_1}-1)d_1$, or equivalently, $\frac{p^m-1}{p^{s_1}-1}\mid d_1$. As long as $p^m-1$ has a p.p.d., this yields a contradiction as follows: Let $l$ be a p.p.d.~of $p^m-1$ (so in particular, $l\nmid p^{s_1}-1$). Then, using that $d_1\mid d(p^{s_1}-1)$ by the normality of $N$ in $A$ (see Lemma \ref{normalityLem}(1)), \[ l\mid\frac{p^m-1}{p^{s_1}-1}\mid d_1\mid d\cdot(p^{s_1}-1)\mid m\cdot(p^{s_1}-1), \] and hence $l\mid m$, which is impossible by Lemma \ref{notDivLem}. We are thus left with the two cases \enquote{$(p,m)=(2,6)$} and \enquote{$m=2$ and $p$ is a Mersenne prime}, for which we give a separate argument. As for the former case, note that $s_1\in\{1,2,3\}$ then. We make a subcase distinction: \begin{enumerate} \item Subcase: $s_1=1$. This is impossible since $s\mid s_1$, while also $s>1$. \item Subcase: $s_1=2$. Then $21=\frac{2^m-1}{2^{s_1}-1}\mid d_1$ and $s\mid s_1=2$, so $s=2$ and $2^s-1=3$. Since $d>1$ and each prime divisor of $d$ must divide $2^s-1=3$, $d$ is a nontrivial power of $3$. But we must also have $d_1\mid d\cdot(2^{s_1}-1)=3d$, whence $d_1$ is a power of $3$, which is impossible in view of $21\mid d_1$. \item Subcase: $s_1=3$. Then $9=\frac{2^m-1}{2^{s_1}-1}\mid d_1$ and $s\mid s_1=3$, so $s=3$ and $2^s-1=7$. Since $d>1$ and $d\mid d_1=9$, we conclude that $3\mid d$. So $3$ is a prime divisor of $d$ and must therefore also divide $7=2^s-1$, a contradiction. \end{enumerate} Now assume that $m=2$ (in which case $s_1=1$) and $p$ is a Mersenne prime, say $p=2^r-1$ with $r$ a prime. Then the above condition $\frac{p^m-1}{p^{s_1}-1}\mid m(p^{s_1}-1)$ turns into $2^r=p+1\mid2(2^r-2)=4(2^{r-1}-1)$, which implies $r=2$ and thus $p=3$. \end{proof} We note that the conclusion in Proposition \ref{uniqueMaxNormProp} is false for $(p,m)=(3,2)$: Let $A\leq\GammaL_1(9)$ have standard parameters $(2,1,1)$, and let $K\leq\GammaL_1(9)$ have standard parameters $(4,1,1)$. Then $A$ is nonabelian and (sharply) transitive on $\IF_9^{\ast}$, and both $\langle\hat{\omega}^2\rangle=A\cap\widehat{\IF_9^{\ast}}$ and $K$ are index $2$ (normal) abelian subgroups of $A$ (actually, they are both cyclic of order $4$). The intersection of $A$ with $\widehat{\IF_{p^m}^{\ast}}$ will play an important role in the following observations. 
Let us introduce some terminology for talking about it: \begin{definition}\label{largeSubgroupDef} A subgroup of $\widehat{\IF_{p^m}^{\ast}}$ (resp.~of $\IF_{p^m}^{\ast}$) is called \emph{large} if and only if its index divides $m$. Moreover, for each subgroup $H$ of $\GammaL_1(p^m)$, we set $H_0:=H\cap\widehat{\IF_{p^m}^{\ast}}$, called the \emph{field unit subgroup of $H$}. \end{definition} Recall that the order of $A_0$, the field unit subgroup of $A$, is just $\frac{p^m-1}{d}$ where $d\mid\gcd(m,p^m-1)$ (by Formula (\ref{eqII})). Hence $A_0$ is a large subgroup of $\widehat{\IF_{p^m}^{\ast}}$. The following simple observation will be useful later on: \begin{lemma}\label{galoisOnLargeLem} Let $L$ be a large subgroup of $\IF_{p^m}^{\ast}$, let $i\in\IN$, and assume that $\psi\in\Aut(\IF_{p^m})$ acts on $L$ via $x\mapsto x^{p^i}$. Then $\psi=\alpha^i$. \end{lemma} \begin{proof} This follows from the observation that $L$ generates $\IF_{p^m}$ as a field, which is clear as long as $p^m-1$ has a p.p.d.~or if $m\leq2$, and it can be checked separately for $(p,m)=(2,6)$. \end{proof} When, for $p=2$, our arbitrary, but fixed transitive subgroup $A$ of $\GammaL_1(2^m)$ occurs as the domain of the homomorphism $\varphi$ in a predatum $(\sigma:\IF_{2^m}\rightarrow\IF_{2^n},\varphi)$, then it has a quotient $B$ (the image of $\varphi$) which is a transitive subgroup of $\GammaL_1(2^n)$ for some $n\leq m$. We would like to better understand the possible mapping behavior of the homorphism $\varphi:A\twoheadrightarrow B$. Statement (3) in the following lemma is useful in that regard: \begin{lemma}\label{transitiveQuotLem} Let $N$ be a normal subgroup of $A$ with standard parameters $(d_1,e_1,s_1)$. Let $a:=\frac{1}{d}(e_1-e\cdot\frac{p^{s_1}-1}{p^s-1})\in\IZ$ (by Lemma \ref{containmentLem}(3)), and set $k:=\gcd(a,\frac{d_1}{d})$. Then the following hold: \begin{enumerate} \item The quotient $A/N$ has order $\frac{d_1}{d}\cdot\frac{s_1}{s}$ and a consistent polycyclic presentation of the form $\langle x,y\mid x^{s_1/s}=y^{-a},y^{d_1/d}=1,y^x=y^{p^s}\rangle$, associated with the pcgs $(\alpha^s\hat{\omega}^eN,\hat{\omega}^dN)$. \item With respect to the above presentation of $A/N$, we have $\langle x\rangle_{A/N} \cap \langle y\rangle_{A/N}=\langle y^k\rangle_{A/N}$. \item Assume that $p=2$. For every predatum $(\sigma:\IF_{2^m}\rightarrow\IF_{2^n},\varphi:A\twoheadrightarrow B)$, we have $\varphi[A_0]=B_0=\varphi[A]_0$. \end{enumerate} \end{lemma} \begin{proof} For (1): By the observation on orders of subgroups of $\GammaL_1(p^m)$ in standard form from the paragraph after Lemma \ref{normalFormLem}, we have that $|A|=\frac{p^m-1}{d}\cdot\frac{m}{s}$ and $|N|=\frac{p^m-1}{d_1}\cdot\frac{m}{s_1}$, so the assertion on the order of $A/N$ follows. That $A/N$ has a polycylic presentation of the asserted form with respect to the specified pcgs follows from simple calculations such as \[ (\alpha^s\hat{\omega}^e)^{s_1/s}=\alpha^{s_1}\hat{\omega}^{e\cdot\frac{p^{s_1}-1}{p^s-1}} \equiv \hat{\omega}^{e\cdot\frac{p^{s_1}-1}{p^s-1}-e_1}=\hat{\omega}^{-da}=(\hat{\omega}^d)^{-a}\Mod{N}, \] and that said presentation is consistent follows from the known order of $A/N$. For (2): This follows from the consistency of the given polycyclic presentation of $A/N$. For (3): Set $C:=\varphi[A]_0\leq\GammaL_1(2^n)$, and identify $\varphi[A]$ with $A/N$ where $N:=\ker{\varphi}$. 
Then the cyclic group $C$, being the largest abelian normal subgroup of $A/N$, contains $\langle y\rangle_{A/N}$; assume, aiming for a contradiction, that this inclusion is proper. Then $C=\langle x^{\epsilon_1}y^{\epsilon_2}\rangle_{A/N}$ where $\epsilon_1\in\{1,\ldots,\frac{s_1}{s}-1\}$ and $\epsilon_2\in\{0,\ldots,\frac{d_1}{d}-1\}$. But $y^{-\epsilon_2}$ is an element of $C$ and is not a generator of $C$, so $x^{\epsilon_1}y^{\epsilon_2}\cdot y^{-\epsilon_2}=x^{\epsilon_1}$ is also a generator of $C$. It follows that \[ \langle y\rangle_{A/N}=C\cap\langle y\rangle_{A/N}=\langle x^{\epsilon_1}\rangle_{A/N}\cap\langle y\rangle_{A/N}\leq\langle x\rangle_{A/N}\cap\langle y\rangle_{A/N}=\langle y^k\rangle_{A/N}. \] Hence, since $k\mid\frac{d_1}{d}=\ord(y)$, we must have $k=1$. Equivalently, $a$ is coprime to $\frac{d_1}{d}$, and so, since $\frac{d_1}{d}$ divides $2^s-1$ by Lemma \ref{normalityLem}(2), $\frac{d_1}{d}\mid 2^s-1$. Moreover, in view of its presentation, $A/N$ is cyclic. So $\varphi[A]$ is a cyclic transitive subgroup of $\GammaL_1(2^n)$, whence by Lemma \ref{onlyCyclicLem}, $\varphi[A]=\widehat{\IF_{2^n}^{\ast}}$. In particular, the nonabelian $3$-orbit $2$-group $G$ associated with the predatum at hand admits a cyclic group of automorphisms acting transitively on the set of involutions in $G$, so $G$ is either $\Q_8$ or a (proper) Suzuki $2$-group. In particular, $n\in\{\frac{m}{2},m\}$ by \cite{Hig63a}. But moreover, since $\varphi[A]$ acts transitively
restricted to that support, also known as {\em refitting} \cite{figueiredo,lederer,falcon}. A slightly more advanced approach consists in adding a sign-constraint derived from the solution of the variational regularization method in addition to the support condition. This means effectively that the solution of the debiasing step shares an $\ell^1$-subgradient with the solution of the variational regularization method. A different and more general approach is to iteratively reduce the bias via Bregman iterations \cite{osh-bur-gol-xu-yin} or similar approaches \cite{gilboa,tadmorvese}. Recent results for the inverse scale space method in the case of $\ell^1$-regularization (respectively certain polyhedral regularization functionals \cite{adaptive1,adaptive2,bregmanbuchkapitel}) show that the inverse scale space performs some kind of debiasing. Even more, under certain conditions, the variational regularization method and the inverse scale space method provide the same subgradient at corresponding settings of the regularization parameters \cite{spectraltv2}. Together with a characterization of the solution of the inverse scale space method as a minimizer of the residual on the set of elements with the same subgradient, this implies a surprising equivalence to the approach of performing a debiasing step with sign-constraints. Recently, bias and debiasing in image processing problems were discussed in a more systematic way by Deledalle et al. \cite{deledalle,deledalle2016clear}. They distinguish two different types of bias, namely method bias and model bias. In particular they suggest a debiasing scheme to reduce the former, which can be applied to some polyhedral one-homogeneous regularizations. The key idea of their approach is the definition of suitable spaces, called model subspaces, on which the method bias is minimized. The remaining model bias is considered as the unavoidable part of the bias, linked to the choice of regularization and hence the solution space of the variational method. The most popular example is the staircasing effect that occurs for total variation regularization due to the assumption of a piecewise constant solution. In the setting of $\ell^1$-regularization a natural model subspace is the set of signals with a given support, which yields consistency with the ad-hoc debiasing approach mentioned above. Based on this observation, the main motivation of this paper is to further develop the approach in the setting of variational regularization and unify it with the above-mentioned ideas of debiasing for $\ell^1$-regularization, Bregman iterations, and inverse scale space methods. Let us fix the basic notations and give a more detailed discussion of the main idea. Given a bounded linear operator $A \colon \mathcal{X} \to \mathcal{Y} $ between Banach spaces, a convex regularization functional $J \colon \mathcal{X} \rightarrow \mathbb{R} \cup \{\infty\}$ and a differentiable data fidelity $H: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}$, we consider the solution of the variational method \begin{equation} \label{variationalmethod0} u_\alpha \in \arg \min_{u \in \mathcal{X}} \ H(Au,f) + \alpha J(u) \end{equation} as a first step. Here $\alpha > 0$ is a suitably chosen regularization parameter. This problem has a systematic bias, as we further elaborate on below. 
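As a quick illustration of this bias (a standard toy computation added here for orientation; the general analysis follows below), consider the finite-dimensional denoising case $\mathcal{X}=\mathcal{Y}=\mathbb{R}^n$, $A=\mathrm{Id}$, $H(u,f)=\frac{1}{2}\Vert u-f\Vert_2^2$ and $J(u)=\Vert u\Vert_1$. Then \eqref{variationalmethod0} decouples into scalar problems and is solved by soft thresholding,
\begin{equation*}
(u_\alpha)_i = \operatorname{sign}(f_i)\,\max\{|f_i|-\alpha,\,0\},
\end{equation*}
so every entry that survives the thresholding is shrunk towards zero by exactly $\alpha$. This systematic shift is the kind of bias the debiasing step is designed to remove: in this toy case, and for generic data $f$, refitting with the subgradient of $u_\alpha$ kept fixed restores $\hat{u}_i = f_i$ on the detected support and leaves the complementary entries at zero.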
The optimality condition is given by \begin{equation} \label{optimality0} A^* \partial_{u} H (Au_\alpha,f) + \alpha p_\alpha = 0,\ p_\alpha \in \partial J(u_\alpha), \end{equation} where $\partial_{u} H$ is the derivative of $H$ with respect to the first argument. Now we proceed to a second step, where we only keep the subgradient $p_\alpha$ and minimize \begin{equation} \label{debiasedproblem0} \hat{u}_{\alpha} \in \arg \min_{u \in \mathcal{X}} \ H(Au,f) \text{ s.t. } p_\alpha \in \partial J(u). \end{equation} Obviously, this problem is only of interest if there is no one-to-one relation between subgradients and primal values $u$, otherwise we always obtain $\hat{u}_{\alpha}=u_\alpha$. The most interesting case with respect to applications is the one of $J$ being absolutely one-homogeneous, i.e. $J(\lambda u) = |\lambda| J(u)$ for all $\lambda \in \mathbb{R}$, where the subdifferential can be multivalued at least at $u=0$. The debiasing step can be reformulated in an equivalent way as \begin{equation} \label{debiasedproblem1} \min_{u \in \mathcal{X}} \ H(Au,f) \text{ s.t. } {D}_J^{p_\alpha}(u,u_\alpha) = 0, \end{equation} with the (generalized) Bregman distance given by \begin{equation*} {D}_J^{p}(u,v) = J(u)-J(v)-\langle p, u-v \rangle, \quad p \in \partial J(v). \end{equation*} We remark that for absolutely one-homogeneous $J$ this simplifies to \begin{equation*} {D}_J^{p}(u,v) = J(u)-\langle p, u\rangle, \quad p \in \partial J(v). \end{equation*} The reformulation in terms of a Bregman distance indicates a first connection to Bregman iterations, which we make more precise in the sequel of the paper. Summing up, we examine the following two-step method: \begin{enumerate} \item[1)] Compute the (biased) solution $u_\alpha$ of \eqref{variationalmethod0} with optimality condition \eqref{optimality0}, \item[2)] Compute the (debiased) solution $\hat{u}_{\alpha}$ as the minimizer of \eqref{debiasedproblem0} or equivalently \eqref{debiasedproblem1}. \end{enumerate} In order to relate further to the previous approaches of debiasing $\ell^1$-minimizers given only the support and not the sign, as well as the approach with linear model subspaces, we consider another debiasing approach being blind against the sign. The natural generalization in the case of an absolutely one-homogeneous functional $J$ is to replace the second step by \begin{equation*} \min_{u \in \mathcal{X}} \ H(Au,f) \text{ s.t. } \mathrm{ICB}_J^{p_{\alpha}}(u,u_\alpha)= 0, \end{equation*} where \begin{align*} \mathrm{ICB}_J^{p_{\alpha}}(u,u_\alpha) := \big[D_J^{p_\alpha}(\cdot,u_\alpha) \Box D_J^{\text{-}p_\alpha}(\cdot,-u_\alpha)\Big](u) \end{align*} denotes the infimal convolution between the Bregman distances $D_J^{p_\alpha}(\cdot,u_\alpha)$ and $D_J^{-p_\alpha}(\cdot,-u_\alpha)$, evaluated at $u \in \mathcal{X}$. The infimal convolution of two functionals $F$ and $G$ on a Banach space $\mathcal{X}$ is defined as \begin{align*} (F \Box G)(u) &= \inf_{\substack{\phi, \psi \in \mathcal{X},\\ \phi + \psi = u}} F(\phi) + G(\psi) \\ &= \inf_{z\in \mathcal{X}} F(u-z) + G(z). \end{align*} For the sake of simplicity we carry out all analysis and numerical experiments in this paper for a least-squares data fidelity (related to i.i.d. additive Gaussian noise) \begin{equation} \label{quadraticfidelity} H(Au,f) = \frac{1}2 \Vert A u - f \Vert_\mathcal{Y}^2 \end{equation} for some Hilbert space $\mathcal{Y}$, but the basic idea does not seem to change for other data fidelities and noise models. 
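Before proceeding, it may help to spell out what these two constraints mean in the simplest sparsity setting (an illustrative computation under the above one-homogeneity assumptions; it is not needed for the general theory below). For $J(u)=\Vert u\Vert_1$ on $\mathbb{R}^n$ and $p_\alpha \in \partial J(u_\alpha)$, so that $|(p_\alpha)_i| \leq 1$ for all $i$, one has
\begin{equation*}
D_J^{p_\alpha}(u,u_\alpha) = \sum_i \bigl( |u_i| - (p_\alpha)_i u_i \bigr),
\end{equation*}
a sum of nonnegative terms. Hence $D_J^{p_\alpha}(u,u_\alpha)=0$ forces $u_i=0$ wherever $|(p_\alpha)_i|<1$ and $u_i (p_\alpha)_i \geq 0$ wherever $|(p_\alpha)_i|=1$: the constraint fixes a maximal support and the signs on it. The constraint $\mathrm{ICB}_J^{p_{\alpha}}(u,u_\alpha)=0$ only fixes the support, since $u$ may then be split as $u=\phi+\psi$ with $D_J^{p_\alpha}(\phi,u_\alpha)=0$ and $D_J^{-p_\alpha}(\psi,-u_\alpha)=0$, and the second distance admits the opposite signs; this is precisely the sign-blind variant referred to above.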
We show that the sets characterized by the constraints \begin{align*} D_J^{p_\alpha}(u,u_\alpha) = 0 \quad \text{ and } \quad \mathrm{ICB}_J^{p_{\alpha}}(u,u_\alpha)= 0 \end{align*} constitute a suitable extension of the model subspaces introduced in \cite{deledalle} to general variational regularization. In particular, we use those manifolds to provide a theoretical basis to define the bias of variational methods and investigate the above approach as a method to reduce it. Moreover, we discuss its relation to the statistical intuition of bias. At this point it is important to notice that choosing a smaller regularization parameter will also decrease bias, but on the other hand strongly increase variance. The best we can thus achieve is to reduce the bias at fixed $\alpha$ by the two-step scheme while introducing only a small amount of variance. The remainder of the paper is organized as follows: In Section \ref{sec:motivation} we motivate our approach by considering bias related to the well-known ROF-model \cite{rof} and we review a recent approach on debiasing \cite{deledalle}. In the next section we introduce our debiasing technique supplemented by some first results. Starting with a discussion of the classical definition of bias in statistics, we consider a deterministic characterization of bias in Section \ref{sec:BiasModelManifolds}. We reintroduce the notion of model and method bias as well as model subspaces as proposed in \cite{deledalle} and extend it to the infinite-dimensional variational setting. We furthermore draw an experimental comparison between the bias we consider in this paper and the statistical notion of bias. Finally, we comment on the relation of the proposed debiasing to Bregman iterations \cite{osh-bur-gol-xu-yin} and inverse scale space methods \cite{scherzer,gilboa}. We complete the paper with a description of the numerical implementation via a first-order primal-dual method and show numerical results for signal deconvolution and image denoising. \section{Motivation} \label{sec:motivation} Let us start with an intuitive approach to bias and debiasing in order to further motivate our method. To do so, we recall a standard example for denoising, namely the well-known ROF-model \cite{rof}, and we rewrite a recent debiasing approach \cite{deledalle} in the setting of our method. \subsection{Bias of total variation regularization} As already mentioned in the introduction, variational regularization methods suffer from a certain bias. This systematic error becomes apparent when the regularization parameter is increased. Indeed this causes a shift of the overall energy towards the regularizer, and hence a deviation of the reconstruction from the data in terms of quantitative values. Intuitively, this can be
# Recursion

Today, we’ll be learning about recursion, which is one of the most difficult yet useful techniques in introductory computer programming.

## An Example

Let’s imagine that we want to compute the factorial of a number $n$, which is defined to be $n!=n\times(n-1)\times(n-2)\times\cdots\times2\times 1$. One way of doing this using some of the standard techniques that we’ve learned is through a for loop:

```java
public int computeFactorial(int n) {
    int result = 1;
    for (int i = 1; i <= n; i++) {
        result = result * i;
    }
    return result;
}
```

This is a completely valid way of writing this program. Note that in computing the factorial of $n$, we’re essentially doing the same operation (multiplication) over and over again. Indeed, another way of thinking about computing $n!$ is that all we need to do is multiply $n$ by $(n-1)!$. Now, our problem is reduced to determining what $(n-1)!$ is. But this is essentially the exact same problem: $(n-1)!$ can be thought of as multiplying $(n-1)$ by $(n-2)!$, and our problem is now determining what $(n-2)!$ is. This circular reasoning may get you in some trouble in other classes, such as writing proofs in a math course, but in Java and in computer programming in general, this is a very powerful tool.

This idea of circular reasoning is the basis of recursion. Recursion describes a function that calls itself in order to complete a task. In our case, given our updated description of how to recursively compute $n!$, recursion might look something like this:

```java
public int recursiveComputeFactorial(int n) {
    return n * recursiveComputeFactorial(n - 1);
}
```

Notice how the function recursiveComputeFactorial(int n) recursively calls itself within its own body in order to determine the value of $n!$. However, when we run this function, Java doesn’t compute $n!$, but rather gives us a StackOverflowError (feel free to check this yourself here). The StackOverflowError is essentially a fancy way of telling us that the computer failed to compute the desired result because it ran out of stack memory: the recursive calls never stop, so the program can never finish.

The problem is that we never handled the case where $n=1$. When we get to $n=1$, the function recursiveComputeFactorial() shouldn’t return 1 * this.recursiveComputeFactorial(0); it should simply return 1. This is because the factorial computation should end once we multiply by the last number, $1$, rather than continuing on to multiply by $0, -1, -2, \ldots$ and so on. Therefore, we should update our code in the following way:

```java
public int recursiveComputeFactorial(int n) {
    if (n == 1) {
        return 1;
    }
    return n * this.recursiveComputeFactorial(n - 1);
}
```

Of course, there’s also the edge case where we’re trying to compute $0!=1$, so an input of $n=0$ should also return $1$. We’ll account for this by modifying our if statement to cover this case as well:

```java
public int recursiveComputeFactorial(int n) {
    if (n <= 1) {
        return 1;
    }
    return n * this.recursiveComputeFactorial(n - 1);
}
```

And we’re done! This code is an example of a recursive function that computes the factorial of an integer $n$. You’re welcome to try it out yourself to confirm that it works here.

## Thinking Through the Logic

The logic of how a recursive function works can often be one of the most confusing things about recursion. When approaching any problem using recursion, my recommendation is to always tackle the problem backwards. That is, start with the simplest case. For example, let’s go back to the factorial calculation.
Let's first start off with the simplest case of $n=1$. $1!=1$, and so in this case we should simply return $1$. This means that this.recursiveComputeFactorial(1) = 1. For the next simplest case of $n=2$, $2!=2\times 1!$, and so in this case we should return 2 * this.recursiveComputeFactorial(2 - 1). This means that this.recursiveComputeFactorial(2) = 2 * this.recursiveComputeFactorial(2 - 1). For the next case with $n=3$, $3!=3\times 2!$, and so in this case we should return 3 * this.recursiveComputeFactorial(3 - 1), since we determined above that this.recursiveComputeFactorial(3 - 1) = 2!.

Once we've considered a couple of these simple cases, we can generalize and see that as long as $n>1$, our function should return n * this.recursiveComputeFactorial(n - 1). In this way, recursion is most easily tackled by considering the simplest cases and then building our way upwards.

## A General Algorithm

Given that we now better understand the logic of how recursion works, we can generalize to give a template of what almost all recursive functions will look like:

```java
public class RecursiveFunction {
    public int recurse(int arg) {
        /* Handle the base case. */
        if (baseCase) {
            // Do anything you need here, and then return the value
            // for the base case. Recursive function calls shouldn't
            // be made here.
        }

        /* Handle all other cases. */
        // Do anything you need here, and then return the value
        // for the general case. Recursive function calls should be
        // made here.
    }
}
```

The base case is essentially the simplest case that tells us when to stop recursing. When we were evaluating the factorial of integers, the base case that would stop the recursive calls is when we reach $n=1$.

## Applications of Recursion

Now, you might be questioning what the actual uses of recursion are. After all, we gave an example for calculating $n!$ above that didn't seem very complicated and worked perfectly fine. In fact, recursion is typically applied to problems that could, in principle, also be solved iteratively, and iterative solutions can often (but not always) be written with for and while loops. So, when would we actually use recursion in practical programming? One situation that may come up is that the for and while loops may get incredibly complex, and can be simplified dramatically using recursion. Another situation is if there are multiple "branches" along which the code can possibly run. We'll look at some of the most common situations and algorithms where recursion is an extremely powerful tool in the future: typically, recursion doesn't prove its helpfulness until we get into some really complex programming problems. Let's try a couple of practice problems.

## Example 1: String Parsing

This problem is adapted from CodingBat. Given a String, compute recursively (no loops) the number of lowercase 'x' characters in the String.

Like we talked about above, we should first identify the base condition where we know that we should stop recursing. In this problem, when we run out of characters to test for being equal to 'x', the recursion should stop, since the length of the String is now zero. Since there obviously can't be a character 'x' in a String with length 0, we should return 0 in the base case. Therefore, our base case is

```java
public int countX(String str) {
    if (str.length() == 0) {
        return 0;
    }
}
```

As a matter of personal preference, we will recurse from the end of the String.
The idea is to determine whether the last character in the current String we are on is equal to 'x', and then recurse on the same String minus that last character we just tested, which can be obtained in Java using the substring() method. If the last character in the current String is 'x', then we add 1 to our count. Otherwise, we don't add anything. Therefore, our complete code would look something like

```java
public int countX(String str) {
    if (str.length() == 0) {
        return 0;
    } else if (str.charAt(str.length() - 1) == 'x') {
        return 1 + countX(str.substring(0, str.length() - 1));
    } else {
        return countX(str.substring(0, str.length() - 1));
    }
}
```

That's it! You can test out the code yourself to confirm that it works.

## Example 2: Nested Parentheses

This problem is adapted from CodingBat. Given a string, return true if it is a nesting of zero or more pairs of parentheses, like (()) or ((())).

One way to approach this problem is to find the very first instance of a ( character and the very last instance of a ) in the initial String. We can then take the substring in between these two
The W/Z ratios give insights into the strange sea quark density, whereas the W$^+$/W$^-$ results reduce the valence $u/d$ PDF uncertainties. The HERAPDF2.0 fit tends to predict higher W/Z and W$^+$/W$^-$ ratios than measured in the data. Double differential $p_{\rm T}$, $\eta$, and helicity W cross sections measured by CMS lead to 10--30\% post-fit constraints on some of the 60 Hessian NNPDF3.0 parton density sets~\cite{Salvatico/CMS,Sirunyan:2020oum}.
\item W\,$+$\,jets, W\,$+$\,charm: The W-plus-jets $p_{\rm T}$ spectra incorporated into a dedicated ATLAS NNLO PDF study lead to a depleted $s$-quark density at large $x$ compared to previous fits (Fig.~\ref{fig:PDFs} left), a softer (harder) sea $\bar{d}$ density at low (high) $x$, and harder (softer) valence $d$-quarks at small (high) $x$~\cite{Sutton/ATLAS}. The comparatively depleted strange-quark PDF at large $x$ is also confirmed by the latest W\,$+$\,charm data measured by CMS~\cite{Stepennov/CMS,Sirunyan:2018hde}.
\item Photons, $\gamma+$\,jets: Multiple ATLAS and CMS differential distributions have been compared to NLO predictions~\cite{Siegert/ATLAS,Aad:2019eqv,Malik/CMS,Sirunyan:2019uya}. Since the LHC photon data have 10--15\% (both experimental and theoretical) energy-scale uncertainties, those results have so far led only to a mild impact on PDFs~\cite{dEnterria:2012kvo,Carminati:2012mm}, but stronger constraints on global PDF fits should be possible in the near future exploiting the recently available NNLO calculations~\cite{Campbell:2018wfu,Chen:2019zmr}.
\end{itemize}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.53\columnwidth,height=5.3cm]{ATLAS_squark_over_qval_PDF.png} \hspace{0.1cm}
\includegraphics[width=0.45\columnwidth,height=5.3cm]{HERA_dPDFs_NNLO.png}
\caption{Left: Ratio of strange over $(\bar{u}+\bar{d})$ PDFs at a scale $Q^2 = 1.9$\,GeV$^2$ determined by fitting ATLAS W and Z data with and without constraints from the W\,$+$\,jets $p_{\rm T}$ spectra~\cite{Sutton/ATLAS}. Right: NNLO singlet (left) and gluon (right) diffractive PDFs as a function of $z$ for three different values of the energy scale~\cite{Sarkar/HERA}.}
\label{fig:PDFs}
\end{figure}
\begin{itemize}
\item Forward HQ jets: LHCb has measured charm and bottom jet cross sections with an improved 2D BDT fit to separate the flavour composition~\cite{Sestini/LHCb}. Such first-ever forward HQ jet measurements can provide novel useful constraints on the low- and high-$x$ gluon PDF, via their incorporation into global fits with recent NNLO calculations for their cross sections~\cite{Catani:2020kkl}.
\item Diffractive PDFs: A new H1 NNLO fit of inclusive diffractive DIS data shows a 25\%~depletion of the gluon content of the pomeron compared to NLO analyses (Fig.~\ref{fig:PDFs} right)~\cite{Sarkar/HERA}. These~new dPDFs lead to a better theoretical reproduction of the HERA diffractive~dijet~cross~sections.
\end{itemize}
Beyond the results above, prospects for parton density studies at various future DIS facilities, including generalized PDFs at COMPASS via deeply virtual Compton scattering~\cite{COMPASS}, nuclear and polarized PDFs at the electron-ion collider (EIC)~\cite{Joosten/EIC,Accardi:2012qut}, and the ultimate PDF reach over many orders of magnitude in the $(x,Q^2)$ plane accessible at the LHeC~\cite{Gwenlan/LHeC,AbelleiraFernandez:2012cc}, were also reported.
\section{Data versus resummed (N$^{\rm n}$LL) pQCD} \hspace{-0.5cm}The theoretical description of many pQCD differential cross sections requires the resummation of large soft and collinear logs appearing in processes sensitive to different energy scales. Typical examples are the spectra in the $p_{\rm T}\to 0$ limit of massive particles at the LHC, such as the electroweak and Higgs bosons or heavy quarks, for which resummation of leading-log (LL) $\alpha_{\rm s}\log(p_{\rm T}/m)$ terms is needed. Different measurements were presented confronted to N$^{\rm n}$LL resummed calculations: \begin{itemize} \item Low-$p_{\rm T}$ W boson: The D0 experiment showed dedicated measurements of the hadronic recoil in W events (normalized to the accurately calibrated soft Z boson spectrum) compared to various models~\cite{Wang/D0,Abazov:2020moo}. The NLO+NNLL \textsc{resbos}\ calculations are within a few \% of the data (Fig. \ref{fig:NLL}, left), whereas various \pythia8 tunes are excluded or disfavored. Such results provide important benchmarks for high-precision W mass extractions. \end{itemize} \begin{figure}[htbp!] \centering \raisebox{12pt}{\includegraphics[width=0.47\columnwidth,height=6.4cm]{DZeroWPt_Reco_All_ResBos.pdf}} \hspace{0.1cm} \includegraphics[width=0.51\columnwidth,height=6.8cm]{CMS_Z_lowpT.png} \caption{Left: Hadronic recoil $u_{\rm T}$ in W-boson events measured by D0 compared to NLO+NNLL predictions~\cite{Wang/D0,Abazov:2020moo}. Right: Ratio of theoretical to experimental Z-boson $p_{\rm T}$ spectra in p-p at 13\,TeV~\cite{Gorbunov/CMS,Sirunyan:2019bzr}.} \label{fig:NLL} \end{figure} \begin{itemize} \item Low-$p_{\rm T}$ Z boson: The full Z spectrum measured by CMS was confronted to various predictions confirming the sensitivity of different $p_{\rm T}$ regions to fixed-order, resummed, and parton shower calculations (Fig. \ref{fig:NLL}, right)~\cite{Gorbunov/CMS,Sirunyan:2019bzr}. Resummed calculations based on TMD factorization~\cite{Taheri} are shown to reproduce well the softest part of the Z\,$+$\,jets spectra at the LHC as well as of the Drell--Yan data in p-p collisions at lower $\sqrt{\rm s}$. \item Low-$p_{\rm T}$ HQ: Spectra of charm and bottom mesons measured by ALICE~\cite{Vermunt/ALICE} and CMS~\cite{Mariani/CMS} are found consistent with, but systematically above, FONLL predictions at low $p_{\rm T}$, suggesting the need to add missing NNLL contributions. \end{itemize} \section{Parton shower and jet substructure} Advanced studies of the energy-angle (sub)emissions within a jet have been performed in the last years exploiting modern jet substructure techniques~\cite{Larkoski:2017jix}. Intrajet parton emissions are characterized by their relative momentum $z$ and radius (angle) $R$ with respect to the jet, and visualized in the $(\ln(1/z),\ln(R/\Delta R))$ Lund plane that can be constructed through repeated Cambridge/Aachen declustering of individual jets to follow the underlying parton splittings~\cite{Dreyer:2018nbf}. The soft-drop grooming is a popular algorithm that steps through the branching history removing branches that fail given $p_{\rm T},R$ requirements. The following experimental jet substructure results were shown at ICHEP'20: \begin{itemize} \item Inclusive jets: ATLAS presented different projections of the Lund plane variables ($\ln(1/z)$, $\ln(R)$, jet mass), and compared their various phase space regions to analytic (NLO+NLL, with and without non-pQCD corrections) and PS models~\cite{Roloff/ATLAS}. 
CMS jet mass studies in dijet events show that grooming reduces uncertainties by removing soft particles and pileup, and that PY8 alone seems to reproduce the data better than \textsc{powheg}+PY8 and \textsc{herwig}++~\cite{Sunar/CMS}.
\end{itemize}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.49\columnwidth]{CMS_jets_Nsoftdrop.png} \hspace{0.1cm}
\includegraphics[width=0.49\columnwidth]{ALICE_deadcone.png}
\caption{Left: Distribution of the soft-drop multiplicity for inclusive jets measured in the CMS $\ttbar$ data compared to MC predictions~\cite{Sunar/CMS,Sirunyan:2018asm}. Right: Ratio of angular distributions of splittings for $D^0$-tagged over inclusive jets measured by ALICE, showing depleted emissions for increasingly small angles~\cite{CunqueiroMulligan/ALICE}.}
\label{fig:subst}
\end{figure}
\begin{itemize}
\item Flavoured jets: Multiple jet substructure observables (generalized angularities, groomed momentum fraction, N-subjettiness ratios,...) measured by CMS for jets of different flavours produced in top pair events indicate 10--50\% data-model differences (Fig.~\ref{fig:subst}, left)~\cite{Sunar/CMS,Sirunyan:2018asm}. These results call for improved theoretical developments of parton showering.
\item Charm jets: ALICE applied grooming techniques to inclusive and charm-tagged jets, confirming the harder charm fragmentation compared to light quarks and gluons, and the presence of suppressed radiation at small angles for $c$ quarks (Fig.\,\ref{fig:subst}, right)~\cite{CunqueiroMulligan/ALICE}. This~latter result is the first direct observation of the predicted pQCD heavy-quark ``dead cone'' effect~\cite{Dokshitzer:1991fd}.
\end{itemize}
\section{Semihard and soft scatterings}
Numerous LHC results were presented in connection with studies of the strong interaction at semihard (few GeV) or soft (around $\Lambda_{_{\rm QCD}}\approx 0.2$\,GeV) scales. These included new measurements of double parton scatterings (DPS), multiparton interactions (MPI) of particular relevance for describing the underlying event (UE) in MC generators, and diffractive scatterings with hard or soft momentum exchanges:
\begin{itemize}
\item Double parton scatterings: CMS presented the first evidence ($3.9\sigma$) for the very rare same-sign WW process, historically considered a DPS ``smoking gun'' (Fig.~\ref{fig:semihard}, left), with an associated effective DPS cross section of $\sigma_{\rm eff}\approx 12.7^{+5.0}_{-2.9}$~mb~\cite{Salvatico/CMS,Sirunyan:2019zox}. Double-$\Upsilon$ production was studied by CMS, showing that DPS contributions amount to 35\% of the inclusive yields~\cite{Amaral/CMS,Sirunyan:2020txn}, and the ${\rm J}/\psi$+W results from ATLAS indicate $\sigma_{\rm eff}\approx 6$~mb~\cite{Abbot/ATLAS,Aaboud:2019wfr}. All these results provide novel information on the transverse parton profile and the parton correlations inside the proton.
\item Multiparton interactions: CMS showed that updated NLO+PY8 tunes based on minimum-bias data can also consistently reproduce the UE activity of multiple hard-scattering final states, and also provided the first set of \textsc{herwig}\ MC tunes (Fig.~\ref{fig:semihard}, right)~\cite{VanOnsem/CMS,Sirunyan:2020pqv}. Similarly, the new ATLAS Z\,$+$\,jet UE data confirmed the need to retune the MPI parameters of previous NLO+PY8, \textsc{sherpa}, and \textsc{herwig}\ simulations~\cite{Staszewski/ATLAS}.
\end{itemize}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.43\columnwidth]{CMS_DPS_sWW.png} \hspace{0.1cm}
\includegraphics[width=0.55\columnwidth]{CMS_UE_Herwig.png}
\caption{Left: Cross section for DPS same-sign WW production compared to model expectations~\cite{Salvatico/CMS,Sirunyan:2019zox}. Right: UE activity measurements as a function of Z-boson $p_{\rm T}$ compared to various \textsc{herwig}\ MC tunes~\cite{VanOnsem/CMS,Sirunyan:2020pqv}.}
\label{fig:semihard}
\end{figure}
\begin{itemize}
\item Diffractive and elastic scatterings: The first measurement of Mueller-Tang jet-gap-jet events presented by CMS shows an $f_{\rm gap}=0.6$--1.0\% rapidity-gap survival probability, with differential distributions not well reproduced by models~\cite{Baldenegro/CMS}. Cross sections of single-diffraction dijets with a forward proton tag have $f_{\rm gap}\approx7\%$ and are consistent with the \textsc{pythia} 8 DG and \textsc{Pomwig} models, but are overestimated by the \textsc{pythia} 8 4C and CUETP8M1 tunes~\cite{Suranyi/CMS}. ATLAS presented a precise measurement of the pomeron intercept and elastic slope in single-diffractive events tagged with forward protons~\cite{Staszewski/ATLAS}, while a comparative analysis of TOTEM and D0 elastic scattering data shows differences suggestive of the presence of odderon (colorless 3-gluon) exchanges at the
## Saturday, March 30, 2019

### Delays in liquidated and resolved firms: Visualisation of an output measure of the Indian bankruptcy reform

by Geetika Palta, Anjali Sharma, Susan Thomas.

The ultimate objective of the Indian bankruptcy reform is to achieve plausible recovery rates and change the behaviour of borrowers. The key tool for achieving these objectives is reducing the delay. In the existing literature, we know that there are large delays, particularly for large firms (Bhatia et al. 2019, Felman et al. 2019, Shah 2018). The most important proximate objective of the Indian bankruptcy reform is to reduce delays in the bankruptcy process (Shah and Thomas, 2018).

A great deal of the focus so far has been upon the average value of the delay. It is, however, important to look at the full distribution of the delay, and not just the sample mean. Box-and-whisker plots are a nice visualisation tool through which we can see more than just the sample mean. In this article, we (a) Construct a visualisation of a key output measure for the Indian bankruptcy reform: a box-and-whisker plot for the delay associated with Resolved and Liquidated firms; and (b) Argue that this output measure is likely to get worse in coming days.

### The overall distribution of the delay

Sometimes, it is argued that the right way to measure the delay is to exclude certain elements of the delay, which is not correct when seen from first principles. The fundamental fact about distressed firms is that every day of delay reduces recovery rates and hampers economic dynamism. For an analogy, a sick animal is unproductive and suffers, regardless of whether the vet takes the weekend off or not.

We work with two years of data about cases that have concluded and exited from the IRP. These are obtained from the IBBI website. In this data, 1383 cases embarked into the Insolvency Resolution Process (IRP) from January 2017 to December 2018. Of these, 79 concluded with an accepted resolution plan and 304 concluded with the firm being put into liquidation. This yields the following box-and-whisker plots:

Figure 1: Box-whisker plots for the delay of IBC cases, under three buckets (Ongoing, Liquidated or Resolved)

Let's start at the right column (for resolved cases). The bottom pane shows that 79 firms were resolved. The upper pane depicts the range of values for these firms. The black horizontal line is the median, and the box is drawn from the 25th to the 75th percentile values for the delay. The dots show the most extreme values. A key finding here is that the median resolved case took more than 270 days (the horizontal red line).

When we look at the liquidated cases, things are slightly better. The black line -- the median delay -- is close to 270 days. It still says that about half of the liquidated cases took more time than the legal limit of 270 days. A little under 25 percent of the liquidation cases reached their conclusion in 180 days, while very few of the resolved cases concluded within 180 days. But more than 50 percent of the liquidation cases concluded within the 270-day limit, while a little more than 25 percent of the resolved cases were done by this time.

These two pictures -- the box-and-whisker plots for resolved and liquidated cases -- are a nice visualisation of a key output measure of the Indian bankruptcy reform. The trouble is, so far, we have seen only 79 + 304 cases reach the conclusion.
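For readers who would like to construct a similar figure from case-level data, a minimal Python sketch is given below. It is purely illustrative: the file name and the column names ("status", "days_in_irp") are hypothetical placeholders, since the exact layout of the underlying IBBI data is not described in this post.

```python
# Illustrative sketch only; "ibc_cases.csv", "status" and "days_in_irp" are assumed names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ibc_cases.csv")                    # one row per case

buckets = ["Ongoing", "Liquidated", "Resolved"]
delays = [df.loc[df["status"] == b, "days_in_irp"] for b in buckets]

fig, (top, bottom) = plt.subplots(
    2, 1, sharex=True, gridspec_kw={"height_ratios": [3, 1]})

top.boxplot(delays, labels=buckets)                  # box = 25th-75th percentile, bar = median
top.axhline(270, color="red")                        # the 270-day limit in the law
top.set_ylabel("Days in IRP")

bottom.bar(range(1, len(buckets) + 1), [len(d) for d in delays])
bottom.set_ylabel("Number of firms")

plt.show()
```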
These statistical estimates are censored: the cases that have finished are likely to be the ones where the IBC fared relatively well. The bulk of the action is in the Ongoing cases, and there are over 900 of them. For these, the median delay is already in the region of 270 days. The box-and-whisker plots for Liquidated and Resolved cases, which is the output measure of the Indian bankruptcy reform, will be modified in the future based on cases emerging out of the Ongoing bucket. Very crudely, we may conjecture that if all the ongoing cases finish tomorrow, the 25th and 50th and 75th percentile values of the overall distribution will be much like those seen as of today with the Liquidated and Resolved cases. But this is an over-optimistic scenario. In fact, cases will only trickle out in the future with higher delays, cases where the median delay has already reached about 270 days. Therefore, as cases emerge out of the Ongoing bucket in the future, the box-and-whisker plots for Liquidated and Resolved cases are going to get worse. ### How might the output measure evolve in the future? The most interesting question before us is: In the future, when Ongoing cases trickle out into completion, how will the output measures (the box-and-whisker plots of Resolved and Liquidated cases) shape up? To help visualise what comes next, we create the box-and-whisker plots for the Ongoing cases by quarter, from Q1 (Jan to Mar) 2017 to Q4 (Oct to Dec) 2018 in Figure 2. The $x$ axis shows the quarter in which cases were admitted into the IRP. The top pane of the graph shows the box-and-whisker plot for the days in IRP for the Ongoing cases only (those which have not concluded as of Dec 2018) and the bottom pane shows the number of firms. The graph for the number of firms shows that a large fraction of the Ongoing cases have started their IRP in the last three quarters of 2018 -- between April to December 2018. Among these three quarters, there is a near split of about 33%-33%-33%, between cases that have spent more than 270 days, between 180 and 270 days, and below 180 days. Figure 2: Delays associated with ongoing cases, organised by quarter To some extent, these results are mechanically driven by the facts of time. But the results are remarkable nonetheless. As an example, the (few) pending cases from Q1 2017 have already spent over 700 days of delay! When these cases complete, they will push the outcome measures in an adverse direction. On the other hand, a good number of cases are in 2018 where, so far, the delay that has been clocked is relatively low. If, hypothetically, the Indian bankruptcy reform suddenly works better, then a slew of cases can complete, and then the outcome measure may even improve. ### Conclusions 1. The box-and-whisker plot of the delay (measured in calendar days) for Resolved and Liquidated firms is a nice visualisation of a key output of the Indian bankruptcy reform. 2. It shows a gloomy picture, where over half of the delays are worse than the outer limit in the law of 270 days. 3. Looking into the future, based on the delays already incurred with Ongoing cases, the output measure is likely to get worse. ### References Time to resolve insolvencies in India, Surbhi Bhatia, Manish Singh, Bhargavi Zaveri, The Leap Blog, 11 March 2019. The RBI-12 cases under the IBC by Josh Felman, Varun Marwah, Anjali Sharma, 2019 (forthcoming). Sequencing issues in building jurisprudence: the problems of large bankruptcy cases, Ajay Shah, The Leap Blog, 7 July 2018. 
The Indian bankruptcy reform: The state of the art, Ajay Shah, Susan Thomas, The Leap Blog, 22 December 2018.

The authors are researchers at the Indira Gandhi Institute for Development Research.

1. But why is the research about "number of days" (box plot or otherwise) of any relevance for the efficiency of the IBC process? The ultimate measurement of efficiency would be the recovery rate.

2. There are two reasons: first, the law emphasises rapid recovery and has specific numbers for the permissible delay. To go to longer delays is to violate the law. Second, the time taken during the IRP affects the recovery rate. When the delay is higher, the recovery
population-level parameter are counted in addition to the individual-level parameters. :type exclude_pop_model: bool, optional :param exclude_bottom_level: A boolean flag which determines whether the bottom-level parameter are counted in addition to the top-level parameters. :type exclude_bottom_level: bool, optional """ if exclude_pop_model: return self._n_indiv_parameters if exclude_bottom_level: return int(np.sum(self._top_level_mask)) return self._n_parameters def n_observations(self): """ Returns the number of observed data points per individual. """ return self._n_obs class HierarchicalLogLikelihoodPopOnly(HierarchicalLogLikelihood): r""" A HierarchicalLogLikelihood in which only top-level (population-level) parameters are required. Individual parameters are generated randomly every iteration. The resulting Likelihood receives only contributions from Kolmogorov-Smirnov population models. :param log_likelihoods: A list of log-likelihoods defined on the same parameter space. :type log_likelihoods: list[LogLikelihood] :param population_models: A list of population models with one population model for each parameter of the log-likelihoods. :type population_models: list[PopulationModel] """ def __init__(self, log_likelihoods, population_models, time_key="time", biom_key="biomarker", meas_key="bm_value", id_key="patient"): super(HierarchicalLogLikelihood, self).__init__() #pylint: disable=bad-super-call for log_likelihood in log_likelihoods: if not isinstance(log_likelihood, LogLikelihood): raise ValueError( 'The log-likelihoods have to be instances of a ' 'chi.LogLikelihood.') n_parameters = log_likelihoods[0].n_parameters() for log_likelihood in log_likelihoods: if log_likelihood.n_parameters() != n_parameters: raise ValueError( 'The number of parameters of the log-likelihoods differ. ' 'All log-likelihoods have to be defined on the same ' 'parameter space.') names = log_likelihoods[0].get_parameter_names() for log_likelihood in log_likelihoods: if not np.array_equal(log_likelihood.get_parameter_names(), names): raise ValueError( 'The parameter names of the log-likelihoods differ.' 'All log-likelihoods have to be defined on the same ' 'parameter space.') if len(population_models) != n_parameters: raise ValueError( 'Wrong number of population models. 
One population model has ' 'to be provided for each model parameters.') for pop_model in population_models: if not isinstance(pop_model, chi.PopulationModel): raise ValueError( 'The population models have to be instances of ' 'chi.PopulationModel') # Remember models and number of individuals self._log_likelihoods = log_likelihoods self._population_models = population_models self._n_ids = len(log_likelihoods) self._n_obs = [np.sum(ll.n_observations()) for ll in log_likelihoods] # Set which population models are Kolmogorov-Smirnov tests self._set_pops_models_are_KS() if self._num_KS_pop_models == 0: raise ValueError( 'At least one population model must be a ' 'KolmogorovSmirnov population model.') # Set IDs self._set_ids() # Set parameter names and number of parameters self._set_number_and_parameter_names() # Construct mask for top-level parameters self._create_top_level_mask() # Define column headers self.time_key = time_key self.biom_key = biom_key self.meas_key = meas_key self.id_key = id_key def __call__(self, hyper_parameters, returnOutputs=False): # Get number of likelihoods and population models n_ids = self._n_ids n_full_parameters = self._n_full_parameters population_models = self._population_models indiv_param_ind_list = self._indiv_param_inds_in_full_list pop_param_ind_list = self._pop_param_inds_in_hyper_list # Create container for samples parameters = np.zeros(n_full_parameters) # Sample individuals from population model for each run for param_id, pop_model in enumerate(population_models): # Get population and individual parameters indiv_params_inds = indiv_param_ind_list[:, param_id] pop_params_inds = pop_param_ind_list[param_id] #Sample individual params from pop params parameters[indiv_params_inds] = pop_model.sample(hyper_parameters[pop_params_inds], n_ids) # Evaluate individual likelihoods outputs = [None] * self._n_ids for index, log_likelihood in enumerate(self._log_likelihoods): indiv_params = parameters[indiv_param_ind_list[index]] L = log_likelihood(indiv_params, returnOutputs=True) #Get output, ID and biomarker names output, _ = L outputs[index] = self._parse_likelihood_output(log_likelihood, output) outputs = pd.concat(outputs, axis=0) #Calculate likelihood from CDFs score = 0 for param_id, pop_model in enumerate(population_models): if not self._pop_model_is_KS[param_id]: continue # Get population and individual parameters pop_params_inds = pop_param_ind_list[param_id] # Add score score += pop_model.compute_log_likelihood( parameters=hyper_parameters[pop_params_inds], model_outputs=outputs) if returnOutputs: return outputs, score else: return score def _create_top_level_mask(self): """ Creates a mask that can be used to mask for the top level parameters. 
""" # Create conatainer with all False # (False for not top-level) top_level_mask = np.zeros(shape=self._n_full_parameters, dtype=bool) # Flip entries to true if top-level parameter start = 0 for pop_model in self._population_models: # Get number of hierarchical parameters n_indiv, n_pop = pop_model.n_hierarchical_parameters(self._n_ids) # For heterogeneous or uniform models, the individual parameters are the # top-level parameters if chi.is_heterogeneous_or_uniform_model(pop_model): end = start + n_indiv + n_pop # Add the population parameters as top-level parameters else: start += n_indiv end = start + n_pop #Flip False -> True top_level_mask[start: end] = ~top_level_mask[start: end] # Shift start to end start = end # Store mask self._top_level_mask = top_level_mask def _set_ids(self): """ Sets the IDs of the hierarchical model. IDs for population model parameters are ``None``. """ # Construct IDs (prefixes) for hierarchical model (only -- no individuals) ids = [] for pop_model in self._population_models: n_indiv, n_pop = pop_model.n_hierarchical_parameters(self._n_ids) # If population model has population model parameters, add them as # prefixes. ids += [None] * n_pop # Remember IDs self._ids = ids def _set_number_and_parameter_names(self): """ Sets the number and names of the parameters. The model parameters are arranged by keeping the order of the parameters of the individual log-likelihoods and expanding them such that the parameters associated with individuals come first and the the population parameters. Example: Parameters of hierarchical log-likelihood: [ log-likelihood 1 parameter 1, ..., log-likelihood N parameter 1, population model 1 parameter 1, ..., population model 1 parameter K, log-likelihood 1 parameter 2, ..., log-likelihood N parameter 2, population model 2 parameter 1, ..., population model 2 parameter L, ... ] where N is the number of parameters of the individual log-likelihoods, and K and L are the varying numbers of parameters of the respective population models. 
""" # Get individual parameter names indiv_names = self._log_likelihoods[0].get_parameter_names() # Construct parameter names full_start, hyper_start = 0, 0 indiv_params = [] pop_params = [] full_parameter_names, pop_parameter_names = [], [] for param_id, pop_model in enumerate(self._population_models): # Get number of hierarchical parameters n_indiv, n_pop = pop_model.n_hierarchical_parameters(self._n_ids) # Add a copy of the parameter name for each individual parameter name = indiv_names[param_id] full_parameter_names += [name] * n_indiv # Add the population parameter name, composed of the population # name and the parameter name if n_pop > 0: # (Reset population parameter names first) orig_pop_names = pop_model.get_parameter_names() pop_model.set_parameter_names(None) pop_names = pop_model.get_parameter_names() namesPlusPopNames = [ pop_name + ' ' + name for pop_name in pop_names] full_parameter_names += namesPlusPopNames pop_parameter_names += namesPlusPopNames pop_model.set_parameter_names(orig_pop_names) # Remember positions of individual and population parameters indiv_params.append([full_start, full_start + n_indiv]) pop_params.append ([hyper_start, hyper_start + n_pop]) # Shift start index full_start += n_indiv + n_pop hyper_start += n_pop # Remember parameter names and number of parameters self._pop_parameter_names = pop_parameter_names self._n_pop_parameters = len(pop_parameter_names) self._full_parameter_names = full_parameter_names self._n_full_parameters = len(full_parameter_names) self._n_indiv_parameters = len(indiv_names) self._n_parameters = self._n_pop_parameters # Remember positions of individual parameters self._indiv_param_inds_in_full_list = self._get_individual_parameter_reference_matrix(indiv_params) self._pop_param_inds_in_hyper_list = [np.arange(start=start, stop=end) for start, end in pop_params] def compute_pointwise_ll(self, hyper_parameters, per_individual=True): r""" Returns the pointwise log-likelihood scores of the parameters for each observation. The pointwise log-likelihood for an hierarchical model can be straightforwardly defined when each observation is treated as one "point" .. math:: L(\Psi , \theta | x^{\text{obs}}_{i}) = \sum _n \log p(x^{\text{obs}}_{in} | \psi _i ) + \log p(\psi _i | \theta ) , where the sum runs over all :math:`N_i` observations of individual :math:`i`. Alternatively, the pointwise log-likelihoods may be computed per observation point, assuming that the population contribution can be uniformly attributed to each observation .. math:: L(\Psi , \theta | x^{\text{obs}}_{in}) = \log p(x^{\text{obs}}_{in} | \psi _i ) + \log p(\psi _i | \theta ) / N_i . Setting the flag ``per_individual`` to ``True`` or ``False`` switches between the two modi. :param parameters: A list of parameter values. :type parameters: list, numpy.ndarray :param per_individual: A boolean flag that determines whether the scores are computed per individual or per observation. 
:type per_individual: bool, optional """ # Get number of likelihoods and population models n_ids = self._n_ids n_full_parameters = self._n_full_parameters population_models = self._population_models indiv_param_ind_list = self._indiv_param_inds_in_full_list pop_param_ind_list = self._pop_param_inds_in_hyper_list numTimes = self._log_likelihoods[0]._mechanistic_model.numTimes # Create container for samples parameters = np.zeros(n_full_parameters) # Sample individuals from population model for each run for param_id, pop_model in enumerate(population_models): # Get population and individual parameters indiv_params_inds = indiv_param_ind_list[:, param_id] pop_params_inds = pop_param_ind_list[param_id] #Sample individual params from pop params parameters[indiv_params_inds] = pop_model.sample(hyper_parameters[pop_params_inds], n_ids) # Evaluate individual likelihoods outputs = [None] * self._n_ids for index, log_likelihood in enumerate(self._log_likelihoods): indiv_params = parameters[indiv_param_ind_list[index]] L = log_likelihood(indiv_params, returnOutputs=True) #Get output, ID and biomarker names output, _ = L outputs[index] = self._parse_likelihood_output(log_likelihood, output) outputs = pd.concat(outputs, axis=0) #Calculate likelihood from CDFs if per_individual: score =
to pursue next. Specific to an Iv4xr\ agent, it accumulates all observations it has received so far into what can be seen as 'belief': the latest observation is factual, but older observations in the belief may no longer be valid in the actual GUT state. A BDI agent is allowed to act on belief, e.g. if an object $o$ exists in its belief, it can optimistically decide to go to $o$, believing it still exists in the actual game world. The fact that a BDI agent runs in deliberation cycles also makes it highly {\em reactive}, as it allows the agent to continuously, or at least frequently, sample the state of the GUT and immediately act after each sampling, which makes it very suitable for controlling a game. The basic form of BDI agent programming is to specify which action to select at each deliberation cycle, which can be expressed {\em declaratively} with guarded actions a la Action System \cite{UNITYcm}. The snippet below shows an example of how this looks in iv4xr\ (some concrete syntax is omitted). $B$ represents the agent's belief; $B \rightarrow expr$ is a lambda expression that, here, represents an action.
\[ {\bf var}\ \begin{array}[t]{l}
tactic_1 \ = \ {\sf ANYof}( \\
{\sf action}().{\sf do1}(B \rightarrow B.env().moveUp()).{\sf on}(g_1)\; , \\
{\sf action}().{\sf do1}(B \rightarrow B.env().moveDown()).{\sf on}(g_2)\; , \\
... \\
{\sf action}().{\sf do1}(B \rightarrow B.env().useHealKit()).{\sf on}(g_k)\; )
\end{array} \]
where $moveUp()$, $moveDown()$, $useHealKit()$, etc.\ are methods we can imagine as provided by the Environment $env()$, and $g_1 .. g_k$ are guards specifying when the corresponding action is enabled for execution (e.g. $g_1$ could require that the way upwards is clear). Only enabled actions are executable; if there are more than one, the $\sf ANYof$ combinator will select one randomly. In iv4xr, a system of actions such as the one above is called a {\em tactic}. Given a tactic, an agent will keep executing it until its current goal is achieved (or it runs out of budget). E.g. this goal could be 'to obtain a key' (e.g. because we want to check its properties). If we remove all the guards, the tactic above would be how we can program a random test agent. Guards add some intelligence in choosing better actions (than just randomly), e.g. the action $useHealKit()$ can be guarded so that it becomes enabled when the character health drops under a certain critical level.
\subsection{Navigation} \label{sec.navigation}
A basic, but important, task that should be automated is navigation. The previous $tactic_1$ can do it, but not effectively. In fact, navigation in a game world is usually non-trivial due to its complex layout and the presence of dynamic obstacles. A standard solution is to represent walkable parts of the game world, which can be an infinite continuous space, as a finite {\em navigation graph}, after which a path finding algorithm such as A* \cite{millington2019AI} can be applied to guide the agent to get to a target location. Iv4xr\ provides several ways to do this reduction. For example, if the GUT can export a so-called {\em navigation mesh}, iv4xr\ can convert it to a navigation graph. Figure \ref{fig:mesh} shows an example of such a mesh in a game engine called UNITY. A mesh is a finite set of connected triangles that cover a walkable surface. As such, it induces a navigation graph. If the GUT does not produce a navigation mesh, iv4xr\ can also construct a navigation graph on the fly, based on the geometry of the objects an agent sees.
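To make the path-finding step concrete, the sketch below shows a minimal, generic A* search over such a navigation graph. It is only an illustration of the idea, not iv4xr's actual implementation or API; the dictionary-based graph representation and the function name are assumptions made for the example.
\begin{verbatim}
import heapq
import math

def astar(graph, coords, start, goal):
    """Minimal A* over a navigation graph.

    graph  : dict mapping a node id to an iterable of neighbouring node ids
    coords : dict mapping a node id to its (x, y, z) position; used for edge
             costs and for the straight-line (admissible) heuristic
    Returns the list of nodes from start to goal, or None if unreachable.
    """
    def dist(a, b):
        return math.dist(coords[a], coords[b])

    open_set = [(dist(start, goal), start)]   # priority queue of (f = g + h, node)
    g = {start: 0.0}                          # best known cost from start
    came_from = {}

    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:                   # reconstruct the path backwards
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for neighbour in graph[current]:
            tentative = g[current] + dist(current, neighbour)
            if tentative < g.get(neighbour, math.inf):
                g[neighbour] = tentative
                came_from[neighbour] = current
                heapq.heappush(open_set,
                               (tentative + dist(neighbour, goal), neighbour))
    return None

# Toy navigation graph: the four corners of a unit square.
coords = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0), 3: (0, 1, 0)}
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(astar(graph, coords, 0, 2))             # e.g. [0, 1, 2] or [0, 3, 2]
\end{verbatim}
In iv4xr, path finding of this kind over the graph induced by the mesh is what the $navigateTo$ and $explore$ tactics introduced below build on.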
\begin{figure}[h]
\includegraphics[scale=0.22]{pics/stairs.png}
\includegraphics[scale=0.22]{pics/stairs_mesh.png}
\caption{\it The picture to the right shows the mesh (blue surface), consisting of triangles (edges colored red), in a UNITY game.}\label{fig:mesh}
\end{figure}
From this, two tactics can be constructed \cite{prasetya2020navigation}. First, $navigateTo(o)$, which, when repeatedly executed, would guide the agent to reach the location of an object $o$, if the location is known. Second, $explore()$, to guide the agent to the closest unexplored area of the game world. So, rather than the previous $tactic_1$, we can now have the following, if the goal is to obtain some object 'key' $k$:
\[ {\bf var}\ \begin{array}[t]{l}
tactic_2 \ = \ {\sf FIRSTof}( \\
{\sf action}().{\sf do1}(B \rightarrow B.env().useHealKit()).{\sf on}(g)\; , \\
navigateTo(k)\; , \\
explore())
\end{array} \]
Above we use a priority-based selector $\sf FIRSTof$ rather than the previous random selector $\sf ANYof$. The sub-tactic $navigateTo(k)$ will take the agent to the key $k$, if its location is known. Otherwise the tactic is not enabled; $\sf FIRSTof$ will instead choose $explore()$ to explore the world until the agent sees $k$. Though, if its health drops too low, it will first use a healing kit to fix itself. Note that this last behaviour adds {\em reactivity} to handle an 'emerging situation', namely when the health drops too low. This can be extended to handle more emerging situations, such as approaching enemies, thus equipping the test agent with some logic that makes it more adept at surviving the game (useful, as a dead agent can't perform testing tasks).
\subsection{Formulating a Testing Task: Goal Structures}\label{sec.goalstructure}
An Iv4xr\ agent can be given multiple goals. Unlike other agent programming languages, iv4xr\ requires the goals to be structured. A {\em goal structure} is a tree with goals as leaves and control-combinators as nodes, specifying either an order or a priority with which its subgoals are to be solved. For example, a sequential testing task to find the key $k$, to pick it up, and to check that it can be used on door $d$ can be formulated as a goal structure such as the one below:
\[ {\sf SEQ}( \begin{array}[t]{ll}
\mbox{"$k$ is found"} & .{\sf withTactic}(T_1(k)), \\
\mbox{"$k$ is picked up"} & .{\sf withTactic}(T_2(k)), \\
\mbox{"$d$ is found"} & .{\sf withTactic}(T_1(d)), \\
\mbox{"$k$ is used on $d$"} & .{\sf withTactic}(T_3(k,d))) \\
\end{array} \]
$\sf SEQ$ requires its subgoals to be solved in the order they are given. For more combinators, including conditional and repetition, see \cite{prasetya2020aplib}, with which even a test {\em algorithm} can be expressed, e.g. when the exact sequence of sub-tasks is not known upfront \cite{shirzadehhajimahmood2021}.
\subsection{Integration with Other Testing Tools}
Another value of the iv4xr\ framework is to be used as a rich adapter to enable traditional automated testing tools to target computer games (see also the architecture in Fig. \ref{fig:arch1}). For example, we used this scheme to allow the GUI-testing tool TESTAR \cite{Vos+2021} and a model-based testing (MBT) tool to target games in two of our case studies. E.g. TESTAR exploits the iv4xr\ Environment and navigation graph to perform smart monkey testing. The MBT tool can efficiently generate test cases from an EFSM model of a game, which are subsequently translated into goal structures for a test agent to execute. This is a very simple integration scheme.
A translator must indeed be written, but it only needs to be written once for each game.
\section{Conclusion} \label{sec.concl}
We have discussed our experience of using iv4xr{} to do automated testing on three different games, ranging from a turn-based 2D game to a complex commercial 3D game. In all three cases the use of iv4xr{} has successfully introduced automation and contributed to finding bugs and issues. Unlike e.g. a machine learning based approach, iv4xr{} is a programming approach that allows automated testing to be programmed at a high level. The approach gives developers more control over the behavior of the test agent while keeping the agent versatile and robust. Moreover, we have demonstrated that iv4xr{} can be used as a rich interface to enable more traditional testing tools to target computer games. Building an interface between the game under test and iv4xr{}, along with a library of basic tactics and goals, does require some effort, but this is a one-off investment, after which developers will benefit from powerful test automation.
\begin{acks}
This work is supported by the \grantsponsor{H2020SponsorIDblabla}{EU ICT-2018-3 H2020 Programme}, grant nr. \grantnum{H2020SponsorIDblabla}{856716}.
\end{acks}
\section{Space Engineers: TESTAR-iv4xr} \label{sec.SE}
Space Engineers (SE) is a complex open-world game developed by Keen Software
1.2$ \kms for the central cloud and 3.8 $\pm 2.2$ \kms for the SW cloud with the exception of a small region (dark area around 00$^{\rm h}$~59$^{\rm m}$~30$^{\rm s}$ and --33\arcdeg\ 52\arcmin\ 00\arcsec) where $< \sigma > \simeq 15 \pm 2$ \kms. While the mean velocity dispersion $\sim 4 \pm 2$ km s$^{-1}$ of the gas in the outer parts appears to be smaller than the stellar velocity dispersion $\sim 6 \pm 1$ km s$^{-1}$ observed in the center (Armandroff and Da Costa 1986; Queloz et al 1995), they are still comparable within the quoted errors.
\section{DISCUSSION}
In trying to understand the properties of the ISM in Sculptor, we must try to reconcile the following principal characteristics of the galaxy and its relation to the Milky Way. First, Sculptor is presently located relatively close to the Milky Way, and its perigalactic distance is even smaller ($\sim 60$ kpc; Irwin and Hatzidimitriou 1995; Schweizer et al 1995). Consequently, it is possible (see sec. 4.2) that tidal effects have played some role in producing the observed distribution of gas in the galaxy. This may help explain why gas that may be of internal origin is located so close to the edge of the optical image of the galaxy. Because Sculptor is relatively close, one must also carefully consider the possibility that the detected gas may not be associated with Sculptor but is instead a high-velocity cloud from the Magellanic Stream or some other complex that happens to be present in this part of the sky.
\subsection{Internal Origin}
Is it possible to account for the neutral gas seen in Sculptor from mass loss in normal giants? The most likely internal sources of gas that can be realistically retained in the vicinity of Sculptor are winds from evolved stars on the red--giant and asymptotic giant branches, and gas expelled during the planetary nebula phase of intermediate-age and old stars. Since the central regions of dSph galaxies appear to be devoid of neutral gas (Knapp et al 1978; Mould et al 1990; Koribalski et al 1994; this paper), and since they reveal no obvious signs of dust or molecular gas (with the exceptions of NGC~185 and NGC~205, both of which are sometimes considered ultra-luminous dSph systems; Young and Lo 1997a; Mateo 1998), a supernova would eject most, if not all, of the existing gas from a galaxy as small as Sculptor (Mac-Low and Ferrara 1998) as well as its own ejecta. Any supernovae would simply complicate the gas-retention problem; if a galaxy such as Sculptor is to retain gas from internal sources, the more sedate (and slow) sources of gas must dominate the generation of the ISM. As summarized by Mould et al (1990), the total mass loss rate expected from normal evolution is about 0.015 \msol yr$^{-1}$ per $10^9$\lsol$_{,B}$. For Sculptor ($L_B \sim 10^7$\lsol, Mateo 1998), this implies a total return of $1.5 \times 10^5$\msol per Gyr. At this rate, it would take $\sim 200$ Myr to produce the observed amount of \hI seen in Sculptor even if all of the ejected gas is retained by the galaxy and is converted to neutral H, neither of which is probably correct. If we assume that the mass distribution in Sculptor is more extended than the light (Da~Costa 1994), the escape velocity may be as much as 3.0 times larger than the central dispersion of 6.6 km/s, or about 20 km/s; if instead the mass follows the light distribution, it may be only about twice the central dispersion, or about 13 km/s.
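For concreteness, these back-of-the-envelope numbers (all of them quoted above) can be summarized as
\begin{eqnarray*}
\dot{M}_{\rm return} &\simeq& 0.015\,M_\odot\,{\rm yr^{-1}} \times \frac{L_B}{10^9\,L_{\odot,B}}
\simeq 1.5\times10^{-4}\,M_\odot\,{\rm yr^{-1}} \simeq 1.5\times10^{5}\,M_\odot\,{\rm Gyr^{-1}}, \\
v_{\rm esc} &\simeq& 3.0\,\sigma_0 \simeq 20\ {\rm km\,s^{-1}} \quad \mbox{(mass more extended than the light),} \\
v_{\rm esc} &\simeq& 2.0\,\sigma_0 \simeq 13\ {\rm km\,s^{-1}} \quad \mbox{(mass follows the light),}
\end{eqnarray*}
where $\sigma_0 \simeq 6.6$ km s$^{-1}$ is the central stellar velocity dispersion and $L_B \sim 10^7\,L_\odot$.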
The velocity spectrum of red giant winds extends somewhat above even this upper limit, suggesting that up to 80\%\ of the gas from such winds can be lost from the galaxy. Thus, the rejuvenation time of the ISM from internal sources must be considerably longer than 200 Myr. For our purposes, the key point is that it should take from 200--1000 Myr to build up the amount of gas seen in Sculptor, and even longer if a significant fraction of the gas is in molecular form (observations of dE's suggest that there could be as much mass in H$_2$ as in \hI; Wiklind, Combes \& Henkel 1995). Since most of the SF seems to have taken place between 8 and 10 Gyr ago in Sculptor (Da Costa 1984), it would have produced a gas reservoir of $\sim 3.0 \times 10^5$\msol. So, only 10\% of this needs to be retained in its neutral form to account for the \hI detected by the present observations. Of course, for other dSph galaxies in which \hI is not detected, these same arguments should apply; for these systems the problem shifts to how the gas is lost or is otherwise made unobservable. Galaxies such as Fornax (Stetson 1997; Demers et al 1998) and Carina (Smecker-Hane et al 1994; Mighell 1997; Hurley-Keller et al 1998), which show clear evidence of SF episodes even within the last few Gyr, show as yet {\it no} evidence of neutral gas, at least in the central regions. In these cases, perhaps there simply has not been enough time to generate a reservoir of gas. But what about Ursa Minor, Draco, Sextans, and Leo~II? These galaxies contain no significant populations younger than the youngest stars found in Sculptor. If Sculptor could retain gas from red giants and planetary nebulae, why didn't these? Have we looked at the right place?
\subsection{Tidal Effects}
Another possibility is that the gas was removed from the outer parts of Sculptor by tidal forces from the Milky Way during its last perigalactic passage $\sim 10^8$ years ago (Irwin and Hatzidimitriou 1995). This idea is based on the fact that while the central 10 arcmin of Sculptor (i.e. the optical core) has zero ellipticity, outside this region the ellipticity smoothly increases to the asymptotic value of $\sim 0.3$ (Irwin and Hatzidimitriou 1995). This picture is similar to numerical simulations of tidally disrupted dSph galaxies, in which material is ejected ahead of and behind the satellite (eg, Allen and Richstone 1988; McGlynn \& Borne 1991; Moore and Davis 1994; Piatek and Pryor 1995; Oh et al 1995; Kroupa 1997). Is it a coincidence that the position angle of the proper motion measured by Schweizer et al (1995), $40\arcdeg \pm 24\arcdeg$, happens to be almost exactly the position angle defined by the two \hI clouds (see Figure 5)? Tidal effects could produce two clouds symmetrically distributed on both sides of the optical center. Moreover, when stars (and gas) are detached from the host galaxy, they continue to follow their host's galactic orbit for several galactic years before dispersing beyond the host's tidal radius (Oh et al 1994). With few exceptions, the dSph galaxies of the Local Group are clustered around the MW and M31, while the dIrr galaxies are more evenly distributed throughout the group (Mateo 1998). This certainly suggests that the proximity of a massive galaxy may have played an important role in determining the structural and kinematic properties of dSph systems.
The removal of their gas by tidal effects may be one of the important results of these encounters, though whether this can explain our observations of Sculptor remains unclear. \subsection{Gas Falling Back or Expanding ?} A number of authors have at various times suggested that dSph systems may be the remnants of extremely low-luminosity dIrr galaxies that have been depleted of gas by their initial or subsequent SF episodes, or perhaps by tidal effects (eg, Ferguson and Binggeli 1994; but also see Mateo 1998 and references therein). This scenario is supported by the fact that large expanding cavities surrounded by dense shells are found in the neutral interstellar medium of many dIrr galaxies that were observed with sufficient resolution (Puche and Westpfahl 1994). The energetics of the gas suggest that these structures are plausibly created by stellar winds and supernova explosions from the young dIrr stellar populations (Larson 1974, Dekel \& Silk 1986). The largest dwarfs, such as Magellanic irregular systems (e.g. IC 2574, Martimbeau, Carignan and Roy 1994; Holmberg II, Puche et al. 1992), contain several such shells. However, in the smallest dwarfs (e.g. Holmberg I and M81dwA, Westpfahl and Puche 1994; Leo A, Young and Lo 1996), only one large slowly (v$_{exp} \simeq 5$ \kms) expanding shell usually dominates the ISM. The expansion and contraction of the entire ring- or shell-like ISM of these small galaxies is interpreted as being the result of
last inequality holds by assumption $r-h <\frac{\epsilon}{4}$. According to the theorem by Brudno, see \cite{Brudno}, for $P$-almost all $x$ there exists an $N_{x,\epsilon} \in \mathbb {N}$ such that $K(x_1^n) \geq n(h - \frac{\epsilon}{2})$ for all $n \geq N_{x,\epsilon}$. It follows for $\Delta_n=\epsilon n$ that
\begin{eqnarray}\label{complexity-lower-bound2}
K(x_1^n) + \Delta_n\geq n (h + \frac{\epsilon}{2}),\qquad n \geq N_{x,\epsilon}.
\end{eqnarray}
Relations (\ref{Sigma-upper-bound2}) and (\ref{complexity-lower-bound2}) together imply (\ref{bound-on-total-info-of-unif-distr}) for $P$-almost all $x$ and $n \geq N_x:=\max \{ N_{x,\epsilon}, N \}$. It follows that $P$-almost surely the effective complexity ${\mathcal E}_{\delta,\Delta_n}(x_1^n)$ is upper bounded by the Kolmogorov complexity of ${\mathbb{E}}_{r,n}$ for $n \geq N_x$:
\begin{eqnarray*}
{\mathcal E}_{\delta,\Delta_n}(x_1^n) &\leq& K({\mathbb{E}}_{r,n}) \leq K(n)+K(r)+c\\
&\stackrel + <& \log n + \mathcal{O}(\log\log n).
\end{eqnarray*}
2. Now let $P$ be an arbitrary stationary process. Recall that there is a unique ergodic decomposition of $P$,
\begin{eqnarray*}
P=\int_{\mathcal{E}({\{0,1\}^{\infty}})} Q\, d\mu(Q).
\end{eqnarray*}
Moreover, to $P$-almost every $x\in {\{0,1\}^{\infty}}$ we may associate an ergodic component $Q_x$ of $P$ such that $x$ is a typical element of $Q_x$. Then there exists an $N_{x,\epsilon}$ such that
\begin{eqnarray*}
K(x_1^n)+\Delta_n \geq n(h_x + \frac{\epsilon}{2}),\qquad n \geq N_{x,\epsilon},
\end{eqnarray*}
where $h_x$ denotes the entropy rate of $Q_x$. Hence the proof for the stationary case reduces to the ergodic situation considered in the first part above. $\Box$
\section{Coarse Effective Complexity}\label{sec:coarse-ec}
Our main result becomes most transparent if presented in the context of \textit{coarse effective complexity}. This is a modification of plain effective complexity which incorporates the parameter $\Delta$ as a penalty into the original formula. It is inspired by a corresponding modification of sophistication, called coarse sophistication, which has been introduced by Antunes and Fortnow in \cite{soph}. \\ \\
Let $\delta \geq 0$. The \textit{coarse effective complexity} ${\mathcal E}_{\delta}(x)$ of a finite string $x \in \{0,1\}^*$ is defined by
\begin{eqnarray*}
{\mathcal E}_{\delta} (x):=\min \{K({\mathbb{E}})+ \Sigma({\mathbb{E}}) - K(x): {\mathbb{E}} \mbox{ is a computable ensemble, }{\mathbb{E}}\ \delta\textrm{-typical for } x\}.
\end{eqnarray*}
The term $\Sigma({\mathbb{E}}) - K(x)$ accounts for the exact value by which the total information of an ensemble ${\mathbb{E}}$ exceeds the Kolmogorov complexity of $x$. By definition of the total information $\Sigma({\mathbb{E}})$, an equivalent expression for ${\mathcal E}_{\delta}(x)$ reads
\begin{eqnarray*}
{\mathcal E}_{\delta}(x)= \min \{2K({\mathbb{E}})+ H({\mathbb{E}}) - K(x): {\mathbb{E}} \mbox{ is a computable ensemble, }{\mathbb{E}}\ \delta\textrm{-typical for } x\}.
\end{eqnarray*}
We derive the basic properties of coarse effective complexity similarly to what has been done in \cite{soph} in the context of coarse sophistication. That is, firstly, in the proposition below, we prove an upper bound on coarse effective complexity. Secondly, we show the existence of strings which come close to saturating this bound.
\begin{proposition}
Let $\delta \geq 0$. There is a constant $c$ such that for all $x \in \{0,1\}^*$ we have
\begin{eqnarray}
{\mathcal E}_{\delta}(x)\leq \frac{n}{2} + \log n +c,
\end{eqnarray}
where $n = \ell(x)$.
\end{proposition}
{\it Proof.} Suppose that $K(x) \leq \frac{n}{2}+ \log n$. Let ${\mathbb{E}}_x$ denote the ensemble with ${\mathbb{E}}(x)=1$ and ${\mathbb{E}}(y)=0$ for $y \not= x$. Note that ${\mathbb{E}}_x$ is trivially $\delta$-typical for $x$ for any $\delta \geq 0$ and obviously $H({\mathbb{E}}_x)=0$. Moreover, there is a constant $c_1$ such that $K({\mathbb{E}}_x) \leq K(x)+c_1$ holds. This implies the upper bound
\begin{eqnarray}
{\mathcal E}_{\delta}(x) &\leq& 2 K({\mathbb{E}}_x) +0 - K(x) \nonumber \\
&\leq& K(x) + 2 c_1\nonumber\\
&\leq& \frac{n}{2} + \log n + 2c_1, \nonumber
\end{eqnarray}
where the last line holds by assumption. Now, suppose that $K(x) > \frac{n}{2} + \log n$. Let ${\mathbb{E}}_n$ be the ensemble on $\{0,1\}^*$ given by ${\mathbb{E}}_n(y)= \frac{1}{2^n}$ for all $y$ with $\ell(y)=n$ and vanishing elsewhere. Then $H({\mathbb{E}}_n)= n$ and there exists a constant $c_2$, independent of $n$, such that $K({\mathbb{E}}_n) \leq \log n + c_2$. It follows that
\begin{eqnarray*}
{\mathcal E}_{\delta}(x)&\leq& 2 \log n + 2 c_2 + n -K(x)\\
&\leq& \frac{n}{2} + \log n +2 c_2,
\end{eqnarray*}
where, again, the second line holds by the assumption on $K(x)$. Setting $c:= \max\{2c_1, 2c_2\}$ completes the proof. $\Box$
\begin{theorem}\label{thm:high-coarse-ec}
Let $\delta \geq 0$. For every sufficiently large $n \in \mathbb {N}$ there exists a string $x\in \{0,1\}^n$ with
\begin{eqnarray}
{\mathcal E}_{\delta}(x)\geq (1-3\delta) \frac{n}{2} - (2+3\delta) \log n -2\log \log n + C,
\end{eqnarray}
where $C$ is a global constant.
\end{theorem}
{\it Proof.} For $x \in \{0,1\}^*$ and $\Delta \geq 0$ denote by ${\mathbb{E}}_{x}^{\Delta}$ the minimal ensemble associated to ${\mathcal E}_{\delta,\Delta}(x)$. Due to Lemma 22 in \cite{ECieee}, for every $\epsilon>0$ there exists a subset $S_x^{\Delta}$ of $\{0,1\}^*$ such that
\begin{eqnarray}
\log |S_x^{\Delta}| &\leq& H({\mathbb{E}}_x^{\Delta})(1+ \delta) + \epsilon \label{upper-bound-on-size}\\
K(S_x^{\Delta})&\leq& K({\mathbb{E}}_x^{\Delta}) + c_1 \label{upper-bound-on-compl},
\end{eqnarray}
where $c_1$ is a global constant. In \cite{ECieee} we have proven the relation
\begin{eqnarray}\label{eqn31-in-ECieee}
K(x| S_x^{\Delta}, K(S_x^{\Delta}))\geq \frac{\log |S_x^{\Delta}|}{1+ \delta} -\log n -2 \log \log n -\Lambda_{\Delta},
\end{eqnarray}
which holds for arbitrary $x \in \{0,1\}^n$, $n \in \mathbb {N}$. The term $\Lambda_{\Delta}$ is constant in $x\in \{0,1\}^*$ and monotonically increasing in $\Delta$, cf. $(32)$ in \cite{ECieee}. Now, let $K_n:=\max \{K(t)|\ t \in \{0,1\}^n\}$ and define
\begin{eqnarray*}
k:= n - \delta(K_n + \Delta_n + \epsilon) - \log n - 2 \log \log n - \Lambda_{\Delta_n}- c_2,
\end{eqnarray*}
where $\Delta_n:= \frac{n}{2}+ \log n + c$ is the upper bound on ${\mathcal E}_{\delta}(x)$ obtained in the previous proposition and $c_2$ is a global constant from Theorem IV.2 in \cite{GacsTrompVitanyi}, see also Lemma 12 in \cite{ECieee}. If $n$ is large enough then $0< k<n$ holds, and Theorem IV.2 in \cite{GacsTrompVitanyi} applies: There is a string $x_k \in \{0,1\}^n$ such that
\begin{eqnarray}
K(x_k| S, K(S)) < \log |S| -n +k + c_2,
\end{eqnarray}
for every set $S \ni x_k$ with $K(S) < k-c_3$, where $c_3$ is another global constant. Let ${\mathbb{E}}_{x}$ denote the minimizing ensemble associated to the coarse effective complexity ${\mathcal E}_{\delta}(x)$ and $\Delta_x:=K({\mathbb{E}}_x) +H({\mathbb{E}}_x) -K(x)$, such that ${\mathcal E}_{\delta}(x)= K({\mathbb{E}}_x) + \Delta_x$. Further, define $S_x:= S_x^{\Delta_x}$.
It holds the inequality \begin{eqnarray*} -\delta(K_n + \Delta_n + \epsilon) &\leq& -\delta (K(x_k) + \Delta_x +\epsilon)\\ &\leq& -\delta\left(H({\mathbb{E}}_{x_k})+\epsilon \right) \nonumber\\ &\leq& -\delta\left( H({\mathbb{E}}_{x_k})+ \frac{\epsilon}{1+\delta}\right)\nonumber \\ &=& \frac{-\delta}{1+\delta} \left(H({\mathbb{E}}_{x_k})(1+\delta) + \epsilon \right)\nonumber \\ &\leq& \left( \frac{1}{1+\delta} -1 \right) \log |S_{x_k}|, \end{eqnarray*} where the last upper bound holds by (\ref{upper-bound-on-size}). Now suppose that $K(S_{x_k}) < k -c_1$. Then \begin{eqnarray*} K(x_k| S_{x_k}, K(S_{x_k})) &<& \log |S_{x_k}|-n+k+c_2\\ &\leq& \log |S_{x_k}|-\log n - 2 \log \log n \\ && -\Lambda_{\Delta_n}- \delta(K_n + \Delta_n +\epsilon)\\ &\leq& \frac{\log |S_{x_k}|}{1+\delta} -\log n - 2\log \log n\\ && -\Lambda_{\Delta_n}\\ &\leq& \frac{\log |S_{x_k}|}{1+\delta} -\log n - 2\log \log n\\ && -\Lambda_{\Delta_{x_k}}. \end{eqnarray*} But the strict inequality is a contradiction to (\ref{eqn31-in-ECieee}). Hence our assumption must be false and we instead have $K(S_{x_k}) \geq k-c_3$. By ${\mathcal E}_{\delta}(x) = K({\mathbb{E}}_x)+\Delta_{x}$ and using both (\ref{upper-bound-on-compl}) and the bound $K_n \leq n + 2\log n + \gamma$, where $\gamma$ is a global constant, we finally obtain \begin{eqnarray*} {\mathcal E}_{\delta}(x_k)&=& K({\mathbb{E}}_{x_k})+ \Delta_{x_k}\\ &\geq& K(S_{x_k})-c_1+ \Delta_{x_k} \\ &\geq& k-c_3-c_1 + \Delta_{x_k}\\ &\geq& n -\delta(\frac{3}{2}n+3 \log n + \gamma +c+ \epsilon)-\log n\\ & & - 2\log \log n - \frac{n}{2} - \log n - c- c_2 -1 -c_3 -c_1\\ &=& (1-3\delta)\frac{n}{2}-(2 +3\delta)\log n -2\log \log n + C, \end{eqnarray*} where $C:= -\delta(\gamma + \epsilon)-1- (1+\delta)c-c_1-c_2-c_3$. $\Box$ \\ \\ Although, according to the above theorem, for arbitrary large $n$ the existence of strings of length $n$ with moderate coarse effective complexity is ensured, the coarse effective complexity of sufficiently long prefixes of a typical stationary process realization becomes small. This is a direct implication of Theorem \ref{main-theorem}. \begin{theorem}\label{thm:low-coarse-ec} Let $P$ be a stationary process, $\delta \geq 0$ and $\epsilon > 0$. Then for $P$-almost every $x$ \begin{eqnarray}\label{implication-of-Thm10} {\mathcal E}_{\delta}(x_1^n)\leq \epsilon n + \log n + \mathcal{O}(\log \log n). \end{eqnarray} \end{theorem} {\it Proof.} By definiton of coarse effective complexity it holds ${\mathcal E}_{\delta}(x) \leq \Delta + {\mathcal E}_{\delta, \Delta}(x)$, for all $x \in \{0,1\}^*$ and $\Delta > 0$. We set $\Delta_n=\epsilon n$. Then the conditions of Theorem \ref{main-theorem} are satisfied and applying (\ref{main-ineq}) we arrive at (\ref{implication-of-Thm10}). $\Box$ \section{Conclusions}\label{Sec:Conclusion} In this contribution we studied the notion of plain effective complexity, which is assigned to a given string, within the context of an underlying stochastic process as model of the string generating mechanism. In \cite{ECieee} we have shown that strings which are called ``non-stochastic'' in the context of Kolmogorov minimal sufficient statistics have large value of plain effective complexity. The existence of such strings has been proven by G\'acs, Tromp and Vit\'anyi in \cite{GacsTrompVitanyi}. Here, our aim was to understand how properties of the stochastic process such as ergodicity and stationarity influence the effective complexity of corresponding typical realizations. 
Is it possible that the prefixes of a typical process realization represent a sequence of finite strings of increasing length $n$ that eventually have a high or moderate value of effective complexity? Our main theorem refers to stationary and in general {\it non-computable} processes. It proves that modelling the regularities of strings by computable ensembles, with total information that is allowed to exceed the string's Kolmogorov complexity up to a linearly growing amount $\epsilon n$ with an {\it arbitrarily small} $\epsilon >0$, is sufficient for typically generating non-complex strings. The value $\epsilon n$ plays the role of a parameter in the concept of effective complexity. In order to have a notion that is independent of this parameter we introduced coarse effective complexity. It corresponds to coarse sophistication, introduced by Antunes and Fortnow in \cite{soph}, and modifies effective complexity by incorporating the parameter as a penalty into the original formula.
\eqref{E:Def_Proj}. When $m=1$, we apply \eqref{E:Y_Rec} with $(n,m)=(1,1)$ and $Y_0^1\equiv0$ to \eqref{Pf_KED:cd_Ym} to get \begin{align*} M_{\cos\theta}B\mathcal{P}_1u = d_2^1Y_2^1 = \frac{d_2^1}{a_2^1}M_{\cos\theta}Y_1^1, \quad M_{\cos\theta}(a_2^1B\mathcal{P}_1u-d_2^1Y_1^1) = 0. \end{align*} Hence $a_2^1B\mathcal{P}_1u-d_2^1Y_1^1=0$ on $S^2$. Moreover, since $u$ is of the form \eqref{E:Ko_u_X}, $\mathcal{P}_1u=\sum_{n\geq2}c_n^1Y_n^1$. By these facts, $d_2^1=(d_2^1Y_1^1,Y_1^1)_{L^2(S^2)}$, and \eqref{E:Y_dphi}, we find that \begin{align*} d_2^1 = (a_2^1B\mathcal{P}_1u,Y_1^1)_{L^2(S^2)} = a_2^1\sum_{n\geq2}\left(1-\frac{6}{\lambda_n}\right)c_n^1(Y_n^1,Y_1^1)_{L^2(S^2)} = 0. \end{align*} We also have $d_2^{-1}=0$ in the same way. Let $m=2$. Then since $Y_2^2 = C_2\sin^2\theta\,e^{2i\varphi}$ with a nonzero constant $C_2\in\mathbb{R}$ by \eqref{E:SpHa}, we can rewrite the equation \eqref{Pf_KED:cd_Ym} as \begin{align*} 2B\mathcal{P}_2u(\theta,\varphi) = d_2^2C_2\frac{\sin^2\theta}{\cos\theta}e^{2i\varphi}, \quad (\theta,\varphi)\in[0,\pi]\times[0,2\pi), \, \theta\neq\frac{\pi}{2}. \end{align*} Hence $d_2^2=0$, otherwise the left-hand side does not belong to $L^2(S^2)$. Similarly, we have $d_2^{-2}=0$ and thus $f=0$ by \eqref{Pf_KED:cd_f}, i.e. the condition (d) is valid. Therefore, we can apply Theorem \ref{T:Phi_Conv} to obtain \eqref{E:Ko_EnDi}. \end{proof} \begin{proof}[Proof of Theorem \ref{T:EnDi_Lna}] For $\nu>0$ and $a\in\mathbb{R}$ let $\alpha=a/\nu$. Then $e^{tL_\alpha}=e^{\frac{t}{\nu}\mathcal{L}^{\nu,a}}|_{\mathcal{X}}$ in $\mathcal{X}$ for $t\geq0$ since $L_\alpha=\nu^{-1}\mathcal{L}^{\nu,a}|_{\mathcal{X}}$. Hence \eqref{E:EnDi_Lna} follows from \eqref{E:Ko_EnDi}. \end{proof} \section{Abstract results} \label{S:Abst} This section gives abstract results for a perturbed operator. For a linear operator $T$ on a Banach space $\mathcal{B}$, we denote by $D_{\mathcal{B}}(T)$, $\rho_{\mathcal{B}}(T)$, and $\sigma_{\mathcal{B}}(T)$ the domain, the resolvent set, and the spectrum of $T$ in $\mathcal{B}$. Also, let $N_{\mathcal{B}}(T)$ and $R_{\mathcal{B}}(T)$ be the kernel and range of $T$ in $\mathcal{B}$. We say that $T$ is Fredholm of index zero if $R_{\mathcal{B}}(T)$ is closed in $\mathcal{B}$ and the dimensions of $N_{\mathcal{B}}(T)$ and the quotient space $\mathcal{B}/R_{\mathcal{B}}(T)$ are finite and the same, and define \begin{align*} \tilde{\sigma}_{\mathcal{B}}(T) = \{\zeta\in\mathbb{C} \mid \text{$\zeta-T$ is not Fredholm of index zero}\} \subset \sigma_{\mathcal{B}}(T). \end{align*} Note that $\sigma_{\mathcal{B}}(T)\setminus\tilde{\sigma}_{\mathcal{B}}(T)$ is the set of all eigenvalues of $T$ of finite multiplicity. Also, $\tilde{\sigma}_{\mathcal{B}}(T+K)=\tilde{\sigma}_{\mathcal{B}}(T)$ for every $T$-compact operator $K$, since $\zeta-(T+K)$ is Fredholm of index zero if and only if $\zeta-T$ is so for each $\zeta\in\mathbb{C}$ (see \cite[Theorem IV-5.26]{Kato76}). Let $(\mathcal{X},(\cdot,\cdot)_{\mathcal{X}})$ be a Hilbert space and $A$ and $\Lambda$ linear operators on $\mathcal{X}$. We make the following assumptions. \begin{assumption} \label{As:A} The operator $A$ is self-adjoint in $\mathcal{X}$ and satisfies \begin{align} \label{E:Ab_A_Po} (-Au,u)_{\mathcal{X}} \geq C_A\|u\|_{\mathcal{X}}^2, \quad u\in D_{\mathcal{X}}(A). \end{align} with some constant $C_A>0$. \end{assumption} \begin{assumption} \label{As:La_01} The following conditions hold: \begin{enumerate} \item The operator $\Lambda$ is densely defined, closed, and $A$-compact in $\mathcal{X}$. 
\item Let $\mathcal{Y}=N_{\mathcal{X}}(\Lambda)^\perp$ be the orthogonal complement of $N_{\mathcal{X}}(\Lambda)$ in $\mathcal{X}$ and $\mathbb{Q}$ the orthogonal projection from $\mathcal{X}$ onto $\mathcal{Y}$. Then $\mathbb{Q}A\subset A\mathbb{Q}$ in $\mathcal{X}$. \end{enumerate} \end{assumption} \begin{assumption} \label{As:La_02} There exist a Hilbert space $(\mathcal{H},(\cdot,\cdot)_{\mathcal{H}})$, a closed symmetric operator $B_1$ on $\mathcal{H}$, and a bounded self-adjoint operator $B_2$ on $\mathcal{X}$ such that the following conditions hold: \begin{enumerate} \item The inclusion $\mathcal{X}\subset\mathcal{H}$ holds and $(u,v)_{\mathcal{X}}=(u,v)_{\mathcal{H}}$ for all $u,v\in\mathcal{X}$. \item The relation $N_{\mathcal{X}}(\Lambda)=N_{\mathcal{X}}(B_2)$ holds in $\mathcal{X}$ and \begin{align*} B_2u \in D_{\mathcal{H}}(B_1), \quad B_1B_2u = \Lambda u \in\mathcal{X} \quad\text{for all}\quad u\in D_{\mathcal{X}}(\Lambda). \end{align*} \item There exists a constant $C>0$ such that \begin{alignat}{3} (u,B_2u)_{\mathcal{X}} &\geq C\|u\|_{\mathcal{X}}^2, &\quad &u\in \mathcal{Y}, \label{E:uB2u} \\ \mathrm{Re}(-Au,B_2u)_{\mathcal{X}} &\geq C\|(-A)^{1/2}u\|_{\mathcal{X}}^2, &\quad &u\in D_{\mathcal{X}}(A)\cap\mathcal{Y}. \label{E:AB2u} \end{alignat} \end{enumerate} \end{assumption} Note that $B_2$ is a linear operator on the original space $\mathcal{X}$, not on the auxiliary space $\mathcal{H}$. Also, the operator $B_1$ on $\mathcal{H}$ does not necessarily map $\mathcal{X}$ into itself. By $\mathbb{Q}A\subset A\mathbb{Q}$ in Assumption \ref{As:La_01} we can consider $\mathbb{Q}A$ as a linear operator \begin{align*} \mathbb{Q}A\colon D_{\mathcal{Y}}(\mathbb{Q}A) \subset \mathcal{Y}\to\mathcal{Y}, \quad D_{\mathcal{Y}}(\mathbb{Q}A) = D_{\mathcal{X}}(A)\cap\mathcal{Y}. \end{align*} In what follows, we use the notation $\mathcal{N}=N_{\mathcal{X}}(\Lambda)$ for simplicity. Let $\mathbb{P}=I-\mathbb{Q}$ be the orthogonal projection from $\mathcal{X}$ onto $\mathcal{N}$ (note that $\mathcal{N}$ is closed in $\mathcal{X}$ since $\Lambda$ is closed). Then $\mathbb{P}A\subset A\mathbb{P}$ and we can also consider $\mathbb{P}A$ as a linear operator \begin{align*} \mathbb{P}A\colon D_{\mathcal{N}}(\mathbb{P}A) \subset \mathcal{N} \to \mathcal{N}, \quad D_{\mathcal{N}}(\mathbb{P}A) = D_{\mathcal{X}}(A)\cap\mathcal{N}. \end{align*} Note that $\mathbb{Q}A$ and $\mathbb{P}A$ are closed in $\mathcal{Y}$ and in $\mathcal{N}$, respectively. Also, $\mathbb{Q}\Lambda$ is $\mathbb{Q}A$-compact in $\mathcal{Y}$. For $\alpha\in\mathbb{R}$ we define a linear operator $L_\alpha$ on $\mathcal{X}$ by \begin{align*} L_\alpha=A-i\alpha\Lambda, \quad D_{\mathcal{X}}(L_\alpha)=D_{\mathcal{X}}(A) \end{align*} and consider $\mathbb{Q}L_\alpha=\mathbb{Q}A-i\alpha\mathbb{Q}\Lambda$ on $\mathcal{Y}$ with domain $D_{\mathcal{Y}}(\mathbb{Q}L_\alpha)=D_{\mathcal{Y}}(\mathbb{Q}A)$. Our aim is to establish an estimate for the semigroup generated by $L_\alpha$ which yields the enhanced dissipation as $|\alpha|\to\infty$ in abstract settings. Let us give auxiliary lemmas. \begin{lemma} \label{L:PS_Res} Suppose that Assumptions \ref{As:A} and \ref{As:La_01} are satisfied. Then $L_\alpha$ and $\mathbb{Q}L_\alpha$ are closed in $\mathcal{X}$ and in $\mathcal{Y}$, respectively, and \begin{align} \label{E:PS_ReSet} \rho_{\mathcal{X}}(L_\alpha) = \rho_{\mathcal{Y}}(\mathbb{Q}L_\alpha)\cap\rho_{\mathcal{N}}(\mathbb{P}A) \end{align} for all $\alpha\in\mathbb{R}$. 
Moreover, for $\zeta\in\rho_{\mathcal{X}}(L_\alpha)$ and $f\in\mathcal{X}$ we have \begin{align} \label{E:PS_ReOp} \begin{aligned} \mathbb{Q}(\zeta-L_\alpha)^{-1}f &= (\zeta-\mathbb{Q}L_\alpha)^{-1}\mathbb{Q}f, \\ \mathbb{P}(\zeta-L_\alpha)^{-1}f &= (\zeta-\mathbb{P}A)^{-1}\mathbb{P}f-i\alpha(\zeta-\mathbb{P}A)^{-1}\mathbb{P}\Lambda(\zeta-\mathbb{Q}L_\alpha)^{-1}\mathbb{Q}f. \end{aligned} \end{align} \end{lemma} \begin{proof} We see that $L_\alpha=A-i\alpha\Lambda$ is closed in $\mathcal{X}$ since $A$ is closed and $\Lambda$ is $A$-compact in $\mathcal{X}$ (see \cite[Theorem IV-1.11]{Kato76}). Similarly, $\mathbb{Q}L_\alpha=\mathbb{Q}A-i\alpha\mathbb{Q}\Lambda$ is closed in $\mathcal{Y}$. Let us show \eqref{E:PS_ReSet} and \eqref{E:PS_ReOp}. Since $A$ is self-adjoint and $\mathbb{P}A\subset A\mathbb{P}$ in $\mathcal{X}$, we see that $\mathbb{P}A$ is self-adjoint in $\mathcal{N}$ and thus the residual spectrum of $\mathbb{P}A$ in $\mathcal{N}$ is empty. Hence for each $\zeta\in\sigma_{\mathcal{N}}(\mathbb{P}A)$ there exists a sequence $\{v_k\}_{k=1}^\infty$ in $D_{\mathcal{N}}(\mathbb{P}A)$ such that \begin{align} \label{Pf_PRe:PA_EV} \|v_k\|_{\mathcal{X}} = 1 \quad\text{for all}\quad k\in\mathbb{N}, \quad \lim_{k\to\infty}\|(\zeta-\mathbb{P}A)v_k\|_{\mathcal{X}} = 0, \end{align} which includes the case where $\zeta$ is an eigenvalue of $\mathbb{P}A$ with an eigenvector $v_\zeta$ and $v_k=v_\zeta$ for all $k\in\mathbb{N}$. Then since $L_\alpha v=Av=\mathbb{P}Av$ for $v\in D_{\mathcal{N}}(\mathbb{P}A)$, the sequence $\{v_k\}_{k=1}^\infty$ in $\mathcal{X}$ satisfies \eqref{Pf_PRe:PA_EV} with $\mathbb{P}A$ replaced by $L_\alpha$, which means that $\zeta\in\sigma_{\mathcal{X}}(L_\alpha)$. Hence $\sigma_{\mathcal{N}}(\mathbb{P}A) \subset \sigma_{\mathcal{X}}(L_\alpha)$, i.e. $\rho_{\mathcal{X}}(L_\alpha) \subset \rho_{\mathcal{N}}(\mathbb{P}A)$. Let $\zeta\in\rho_{\mathcal{X}}(L_\alpha)\subset\rho_{\mathcal{N}}(\mathbb{P}A)$. If $(\zeta-\mathbb{Q}L_\alpha)u=0$ for $u\in D_{\mathcal{Y}}(\mathbb{Q}L_\alpha)$, then we see by $\mathbb{P}A\subset A\mathbb{P}$ and $\mathbb{P}u=0$ that \begin{align*} (\zeta-L_\alpha)u = (\zeta-\mathbb{Q}L_\alpha)u-\mathbb{P}L_\alpha u = (\zeta-\mathbb{Q}L_\alpha)u-\mathbb{P}Au+i\alpha\mathbb{P}\Lambda u = i\alpha\mathbb{P}\Lambda u. \end{align*} Moreover, we can set $v=-i\alpha(\zeta-\mathbb{P}A)^{-1}\mathbb{P}\Lambda u\in D_{\mathcal{N}}(\mathbb{P}A)$ since $\zeta\in\rho_{\mathcal{N}}(\mathbb{P}A)$ and $\mathbb{P}\Lambda u\in\mathcal{N}$. Then we observe by $L_\alpha v=Av=\mathbb{P}Av$ that \begin{align*} (\zeta-L_\alpha)(u+v) = (\zeta-L_\alpha)u+(\zeta-\mathbb{P}A)v = i\alpha\mathbb{P}\Lambda u-i\alpha\mathbb{P}\Lambda u = 0, \end{align*} which yields $u+v=0$ by $\zeta\in\rho_{\mathcal{X}}(L_\alpha)$. Hence $u=\mathbb{Q}(u+v)=0$ and $\zeta-\mathbb{Q}L_\alpha$ is injective. Also, for $f\in\mathcal{Y}\subset\mathcal{X}$ let $w=(\zeta-L_\alpha)^{-1}f \in D_{\mathcal{X}}(L_\alpha)$. Then since \begin{align*} f = (\zeta-L_\alpha)u+(\zeta-A)v, \quad u = \mathbb{Q}w \in D_{\mathcal{Y}}(\mathbb{Q}L_\alpha), \quad v = \mathbb{P}w \in D_{\mathcal{N}}(\mathbb{P}A) \end{align*} by $f=(\zeta-L_\alpha)w$ and $L_\alpha v=Av$, we see by $\mathbb{Q}u=u$, $\mathbb{Q}v=0$, and $\mathbb{Q}A\subset A\mathbb{Q}$ that \begin{align*} f = \mathbb{Q}f = (\zeta-\mathbb{Q}L_\alpha)u+(\zeta-A)\mathbb{Q}v = (\zeta-\mathbb{Q}L_\alpha)u. \end{align*} Hence $\zeta-\mathbb{Q}L_\alpha$ is surjective and we get $\zeta\in\rho_{\mathcal{Y}}(\mathbb{Q}L_\alpha)$, i.e. 
$\rho_{\mathcal{X}}(L_\alpha)\subset\rho_{\mathcal{Y}}(\mathbb{Q}L_\alpha)\cap\rho_{\mathcal{N}}(\mathbb{P}A)$. Conversely, let $\zeta\in\rho_{\mathcal{Y}}(\mathbb{Q}L_\alpha)\cap \rho_{\mathcal{N}}(\mathbb{P}A)$. If $(\zeta-L_\alpha)w=0$ for $w\in D_{\mathcal{X}}(L_\alpha)$, then \begin{align} \label{Pf_PRe:L_inj} (\zeta-L_\alpha)u+(\zeta-A)v = 0, \quad u = \mathbb{Q}w \in D_{\mathcal{Y}}(\mathbb{Q}L_\alpha), \quad v = \mathbb{P}w \in D_{\mathcal{N}}(\mathbb{P}A) \end{align} by $L_\alpha v=Av$. We apply $\mathbb{Q}$ to \eqref{Pf_PRe:L_inj} and use $\mathbb{Q}u=u$, $\mathbb{Q}v=0$, and $\mathbb{Q}A\subset A\mathbb{Q}$ to find that $(\zeta-\mathbb{Q}L_\alpha)u=0$. Thus $u=0$ by $\zeta\in\rho_{\mathcal{Y}}(\mathbb{Q}L_\alpha)$. Then we also have $(\zeta-\mathbb{P}A)v=0$ by \eqref{Pf_PRe:L_inj}, $\mathbb{P}v=v$, and $\mathbb{P}A\subset A\mathbb{P}$, which yields $v=0$ since $\zeta\in\rho_{\mathcal{N}}(\mathbb{P}A)$. Hence $w=u+v=0$ and $\zeta-L_\alpha$ is injective. Also, for $f\in\mathcal{X}$ let $w=u+v_1+v_2$ with \begin{align} \label{Pf_PRe:Def_uv} u = (\zeta-\mathbb{Q}L_\alpha)^{-1}\mathbb{Q}f, \quad v_1 = (\zeta-\mathbb{P}A)^{-1}\mathbb{P}f, \quad v_2 = -i\alpha(\zeta-\mathbb{P}A)^{-1}\mathbb{P}\Lambda u. \end{align} Then since $u\in D_{\mathcal{Y}}(\mathbb{Q}L_\alpha)$ and $v_1,v_2\in D_{\mathcal{N}}(\mathbb{P}A)$, we have $w\in D_{\mathcal{X}}(L_\alpha)$ and \begin{align} \label{Pf_PRe:Cal_w} \begin{aligned} (\zeta-L_\alpha)w &= (\zeta-L_\alpha)u+(\zeta-L_\alpha)(v_1+v_2) \\ &= (\zeta-\mathbb{Q}L_\alpha)u+i\alpha\mathbb{P}\Lambda u+(\zeta-\mathbb{P}A)(v_1+v_2) = f \end{aligned} \end{align} by $\mathbb{Q}u=u$, $\mathbb{P}v_j=v_j$ for $j=1,2$, $\mathbb{Q}A\subset A\mathbb{Q}$, and $\mathbb{P}A\subset A\mathbb{P}$. Thus $\zeta-L_\alpha$ is surjective, i.e. $\zeta\in\rho_{\mathcal{X}}(L_\alpha)$, and we obtain \eqref{E:PS_ReSet}. Also, when $\zeta\in\rho_{\mathcal{X}}(L_\alpha)$, we see by \eqref{Pf_PRe:Cal_w} that \begin{align*} \mathbb{Q}(\zeta-L_\alpha)^{-1}f = \mathbb{Q}w = u, \quad \mathbb{P}(\zeta-L_\alpha)^{-1}f = \mathbb{P}w = v_1+v_2, \quad f\in\mathcal{X} \end{align*} for $w=u+v_1+v_2$ with $u$, $v_1$, and $v_2$ given by \eqref{Pf_PRe:Def_uv}. Hence \eqref{E:PS_ReOp} follows. \end{proof} \begin{lemma} \label{L:PS_QP} Under Assumptions \ref{As:A}--\ref{As:La_02}, we have \begin{align} \label{E:QB2} \mathbb{Q}B_2 = B_2 \quad\text{on}\quad \mathcal{X}, \quad \mathrm{Im}(\Lambda u,\mathbb{Q}B_2u)_{\mathcal{X}} = \mathrm{Im}(\Lambda u,B_2u)_{\mathcal{X}} = 0, \quad u\in D_{\mathcal{X}}(\Lambda). \end{align} \end{lemma} \begin{proof} For $u,v\in\mathcal{X}$ we have $(\mathbb{P}B_2u,v)_{\mathcal{X}}=(u,B_2\mathbb{P}v)_{\mathcal{X}}=0$ since $B_2$ is self-adjoint and $\mathcal{N}=N_{\mathcal{X}}(B_2)$ in $\mathcal{X}$. Hence $\mathbb{P}B_2=0$ and $\mathbb{Q}B_2=B_2$ on $\mathcal{X}$. Also, \begin{align*} \mathrm{Im}(\Lambda u,\mathbb{Q}B_2u)_{\mathcal{X}} = \mathrm{Im}(B_1B_2u,B_2u)_{\mathcal{X}} = \mathrm{Im}(B_1B_2u,B_2u)_{\mathcal{H}} = 0 \end{align*} for $u\in D_{\mathcal{X}}(\Lambda)$ by Assumption \ref{As:La_02} (i) and (ii), where the last equality holds since $B_1$ is symmetric in $\mathcal{H}$ and $B_2u\in D_{\mathcal{H}}(B_1)$. Thus the second relation of \eqref{E:QB2} is valid. \end{proof} \begin{lemma} \label{L:Res_Half} Under Assumptions \ref{As:A}--\ref{As:La_02}, for all $\alpha\in\mathbb{R}$ we have \begin{align} \label{E:Res_Half} \{\zeta\in\mathbb{C} \mid \mathrm{Re}\,\zeta \geq 0\} \subset \rho_{\mathcal{X}}(L_\alpha) \subset \rho_{\mathcal{Y}}(\mathbb{Q}L_\alpha). 
\end{align} \end{lemma} \begin{proof} It suffices to verify the first inclusion since the second one follows from \eqref{E:PS_ReSet}. Since $A$ is self-adjoint in $\mathcal{X}$ and satisfies \eqref{E:Ab_A_Po}, and since $\Lambda$ is $A$-compact, \begin{align} \label{Pf_RH:TSig_L} \tilde{\sigma}_{\mathcal{X}}(L_\alpha) = \tilde{\sigma}_{\mathcal{X}}(A-i\alpha\Lambda) = \tilde{\sigma}_{\mathcal{X}}(A) \subset \sigma_{\mathcal{X}}(A) \subset (-\infty,-C_A]. \end{align} Let $\zeta\in\sigma_{\mathcal{X}}(L_\alpha)\setminus\tilde{\sigma}_{\mathcal{X}}(L_\alpha)$. Then $\zeta$ is an eigenvalue of $L_\alpha$. Let
\sum_n\left ( D_\mu \phi_n^* D^\mu \phi_n + i D_\mu \bar{\psi}_n\bar{\sigma}^{\mu} \psi_n+ \vert F_n\vert^2 \right )\nonumber\\ \label{componentla} &-&\frac{1}{4} F^2_{\mu \nu} -i \lambda \sigma^\mu \partial_\mu \bar{\lambda} +\frac{1}{2}D^2 + \frac{g}{2} D\sum_n q_n\phi_n^*\phi_n\nonumber\\ &-&\left[ i \sum_n\frac{g}{\sqrt{2}} \bar{\psi}_n\bar{\lambda} \phi _n - \sum_{nm} {1\over 2}{\partial^2 W \over \partial \phi_n \partial \phi_m} \psi_n\psi_m \right. \nonumber\\ &+& \left. \sum_nF_n\left(\frac{\partial W}{\partial \phi_n}\right)\right] +{\rm c.c.} \,. \eea At the end of the second line, $q_n$ are the $U(1)$-charges of the fields $\phi_n$. The equations of motion for the auxiliary fields $F_n$ and $D$ are the constraints: \bea F_n&=&- \left(\frac{\partial W }{\partial \phi_n}\right)^* \label{fi1} \\ D &=& -\frac{g}{2} \sum_nq_n\vert\phi_n\vert^2 \,. \label{d1} \eea Eq. (\ref{componentla}) contains the gauge invariant kinetic terms for the various fields, which specify their gauge interactions. It also contains, after having made use of Eqs. (\ref{fi1}) and (\ref{d1}), the scalar field potential, \bea V &=& V_F + V_D, \label{vfvd} \\ V_F &\equiv& \sum_n|F_n|^2, \label{fi2}\\ V_D &\equiv& \frac{1}{2} D^2. \label{d2} \eea This separation of the potential into an $F$ term and a $D$ term is crucial for inflation model-building, especially when it is generalized to the case of supergravity. The potential specifies the masses of the scalar fields, and their interactions with each other. The first term in the third line specifies the interactions of gaugino and scalar fields, while the second specifies the masses of the chiral fermions and their interactions with the scalars. All of these non-gauge interactions are called Yukawa couplings. To have a renormalizable theory, $W$ is at most cubic in the fields, corresponding to a potential which is at most quartic. However, one commonly allows $W$ to be of higher order, producing the kind of potentials that were mentioned in Section \ref{sss:tree}. >From the above expressions, in particular \eq{fi2}, one sees that the overall phase of $W$ is not physically significant. An internal symmetry can either leave $W$ invariant, or alter its phase. The latter case corresponds to what is called an R-symmetry. \label{holow} Because $W$ is holomorphic, the internal symmetries restrict its form much more than is the case for the actual potential $V$. In particular, terms in $W$ of the form $\frac12m\phi_1^2$ or $m\phi_1\phi_2$, which would generate a mass term $m^2|\phi_1|^2$ in the potential, are usually forbidden.\footnote {An exception is the $\mu$ term of the MSSM, $\mu H_U H_D$, which gives mass to the Higgs fields.} As a result, scalar particles usually acquire masses only from the vevs of scalar fields (ie., from the spontaneous breaking of an internal symmetry) and from supersymmetry breaking. The same applies to the spin-half partners of scalar fields, with the former contribution the same in both cases. In the case of a $U(1)$ gauge symmetry, one can add to the above lagrangian what is called a Fayet-Iliopoulos term \cite{fayil}, \begin{equation} -2\xi \int d^4 \theta ~V. \label{dterm} \end{equation} This corresponds to adding a contribution $-\xi$ to the $D$ field, so that \eq{d1} becomes \be D= - \frac{g}{2}\sum_n q_n|\phi_n|^2-\xi. \label{fid1} \ee The $D$ term of the potential therefore becomes \be V_D = \frac{1}{2}\left(\frac{g}{2}\sum_n q_n |\phi_n|^2+\xi\right)^2. 
\label{fivd} \ee {}From now on, we shall use a more common notation, where $\xi$ and the charges are redefined so that \be V_D = \frac12 g^2 \( \sum_n q_n |\phi_n|^2 + \xi \)^2 \,. \ee This is equivalent to \be D = - g \( \sum_n q_n |\phi_n|^2 + \xi \) \,. \label{ddef} \ee A Fayet-Iliopoulos term may be present in the underlying theory {}from the very beginning,\footnote {It is allowed by a gauge symmetry, unless the $U(1)$ is embedded in some non-Abelian group. $\xi = 0$ can be enforced by charge conjugation symmetry which flips all $U(1)$ charges. Such symmetry is possible in nonchiral theories.} or appears in the effective theory after some heavy degrees of freedom have been integrated out. It looks particularly intriguing that an anomalous $U(1)$ symmetry is usually present in weakly coupled string theories \cite{u(1)A}. (Anomalous in this context means that $\sum q_n\neq 0$.) In this case \cite{fi,fi1,fi2} \be \xi = \frac{g\sub{str}^2}{192\pi^2}\:{\rm Tr} {\bf Q}\:\mpl^2 . \label{xi} \ee Here ${\rm Tr} {\bf Q}=\sum q_n$, which is typically \cite{font2,kobayashi} of order 100. One expects the string-scale gauge coupling $g\sub{str}$ (Section \ref{dilgs}) to be of order $1$ to $10^{-1}$, making $\xi \simeq 10^{-1}$ to $10^{-2}\mpl$. In the context of the strongly coupled $E_8 \otimes E_8$ heterotic string \cite{horwit}, anomalous $U(1)$ symmetries may appear and have a nonperturbative origin, related to the presence, after compactification, of five-branes in the five-dimensional bulk of the theory. There is, at the moment, no general agreement on the relative size of the induced Fayet-Iliopoulos terms on each boundary compared to the value of the universal one induced in the weakly coupled case \cite{jmrfa,bifa}. \subsection{Spontaneously broken global susy} \label{ss:ssb} \subsubsection{The $F$ and $D$ terms} \label{fandd} Global supersymmetry breaking may be either spontaneous or explicit. Let us begin with the first case. For spontaneous breaking, the lagrangian is supersymmetric as given in the last subsection. But the generators $Q_\alpha$ fail to annihilate the vacuum. Instead, they produce a spin-half field, which may be either a chiral field $\psi_\alpha$ or a gauge field $\lambda_\alpha$. The condition for spontaneous susy breaking is therefore to have a nonzero vacuum expectation value for $\left\{ Q_\alpha,\psi_\beta \right\}$ or $\left\{ Q_\alpha,\lambda_\beta \right\}$. The former quantity is defined by \eq{psitransform}, and the latter by \eq{lamtran}. The quantities $\partial_\mu \phi$ and $F_{\mu\nu}$ contain derivatives of fields, and are supposed to vanish in the vacuum. It follows that susy is spontaneously broken if, and only if, at least one of the auxiliary fields $F_n$ or $D$ has a non-vanishing vev. In the true vacuum, one defines the scale $M\sub S$ of global supersymmetry breaking by \be M\sub S^4= \sum_n |F_n|^2 + \frac12 D^2 \,, \label{msdef98} \ee or equivalently \be M\sub S^4 = V \,. \ee (In the simplest case $D$ vanishes and there is just one $F_n$.) When we go to supergravity, part of $V$ is still generated by the supersymmetry breaking terms, but there is also a contribution $-3|W|^2 /\mpl^2$. This allows $V$ to vanish in the true vacuum as is (practically) demanded by observation. During inflation, $V$ is positive so the negative term is smaller than the susy-breaking terms. In most models of inflation it is negligible. 
In any case, $V$ is at least as big as the susy breaking term, so the search for a model of inflation is also a search for a susy-breaking mechanism in the early Universe. Spontaneous symmetry breaking can be either tree-level (already present in the lagrangian) or dynamical (generated only by quantum effects like condensation). The spontaneous breaking in general breaks the equality between the scalar and spin-$\frac12$ masses, in each chiral supermultiplet. But at tree level the breaking satisfies a simple relation, which can easily be derived from the lagrangian (\ref{componentla}). Ignoring mass mixing for simplicity, one finds in the case of symmetry breaking by an $F$-term, \be \sum_n \( m_{n1}^2 + m_{n2}^2 - 2m_{n\rm f}^2 \) =0 \,. \label{strzero} \ee Here $n$ labels the chiral supermultiplets, $m_{n\rm f}$ is the fermion mass while $m_{n1}$ and $m_{n2}$ are the scalar masses.\footnote {More generally, if the mass-squared matrix is non-diagonal the left hand side of \eq{strzero} is the supertrace defined in \eq{strzero}.} In the case of symmetry breaking by a $D$ term, coming from a $U(1)$, the right hand side of \eq{strzero} becomes $D {\rm Tr} {\rm \bf Q}$. But in order to cancel gauge anomalies, it is often desirable that ${\rm Tr} {\rm \bf Q}=0$ which recovers \eq{strzero}. \subsubsection{Tree-level spontaneous susy breaking with an $F$ term} Models of tree-level spontaneous susy-breaking where only $F$ terms have vevs are called O'Raifearteagh models. We consider them now, postponing until Section \ref{sss:dterm} the case of $D$-term susy breaking. The simplest O'Raifearteagh model involves a single field $X$, \begin{equation} W=m^2 X + \cdots \,, \end{equation} where the dots represent terms independent of $X$. The potential is given by $V=m^4+\cdots$, and $F_X=m^2$; thus supersymmetry is broken for nonvanishing $m$. Some models of inflation invoke such a linear superpotential. We shall encounter more complicated O'Raifearteagh models for inflation later. At this point let us give the following example, which is probably of only pedagogical interest. It involves three singlet fields, $X,\phi$ and $Y$, with superpotential: \begin{equation} W= \lambda_1 X(\phi^2 - \mu^2) + \lambda_2 Y \phi^2. \label{oraif} \end{equation} With this superpotential, the equations \begin{equation} F_X={\partial W \over \partial X}=\lambda_1 (\phi^2 - \mu^2)=0, \quad \quad F_Y={\partial W \over \partial Y}=\lambda_2\phi^2=0 \label{susybreaking} \end{equation} are incompatible. Note that at this level not all of the fields are fully determined, since the equation \begin{equation} {\partial W \over \partial \phi}=0 \label{xequation} \end{equation} can be satisfied provided \begin{equation} \lambda_1 X + \lambda_2 Y =0. \label{pseudoflat} \end{equation} This vacuum degeneracy is accidental and is lifted by quantum corrections. Since either $\langle F_X\rangle$ or $\langle F_Y\rangle$ are nonvanishing, supersymmetry is broken at the tree-level. \subsubsection{Dynamically generated superpotentials} \label{sss:dsb} It has been known for a long time that global, renormalizable supersymmetry may be dynamically broken in four dimensions \cite{ads,nelsonrev}. There already exist excellent reviews
loss function is the cross-entropy function defined below: \begin{equation} \mli{Loss} = -\sum_{l}^{L}{g\log{p}}, \end{equation} where $L$ is the number of labels, $g$ is the probability of the ground-truth label and $p$ is the predicted probability of each label. The evaluation metric is mIoU (mean IoU) on points, following the formula in \cite{Charles_2017}: if the union of ground-truth and prediction points is empty, then we count the corresponding label IoU as 1. Since we have $50$ parts and $16$ shape categories, we compute the category IoU as the average instance IoU over that category. In our experiment, $(D, H, W) = (16, 16, 16)$, $k = 4$, $\sigma = \min(v_W, v_H, v_D)$ and $l = 8$, i.e. we capture the $4 \times 4 \times 4$ subvoxels with $8$ latent variables inferred from the variational auto-encoder.
We highlight the performance of various combinations of the different modules. The results corresponding to \textit{group-conv + RBF-VAE} highlight VV-Net's performance based on combining the RBF kernel with the VAE scheme and the group convolutional neural network module. This version of our algorithm outperforms the state-of-the-art RSN~\cite{huang2018recurrent} by $2.5\%$ (mIoU) and is better than RSN in $12$ out of $16$ categories. In order to demonstrate the benefits of the individual components, we perform an ablation study. For a fair comparison, the same $64\times64\times64$ resolution of subvoxels (for VAE-based) or voxels (for non-VAE-based) is used. The implementation of \textit{group-conv + RBF-VAE} (our method) outperforms using only \textit{RBF-VAE} by $1.3\%$ (mIoU) and is better in $11$ out of $16$ categories. Our method also outperforms using only \textit{group-conv} by $1.4\%$ (mIoU) and is better in $13$ out of $16$ categories. We also compare RBF-VAE with a VAE on the \{0, 1\} occupancy grid. Since the point data is sparse, training the \{0, 1\} VAE does not converge. This shows the necessity and benefits of RBF-VAE. \vspace{-10px}
\subsection{Semantic segmentation of scenes} We also evaluate the performance on the Stanford 3D semantic parsing dataset \cite{armeni20163d}, which consists of $6$ types of benchmarks. Each point in the data scan is annotated with one of the semantic labels from $13$ categories. In our experiment, $(D, H, W) = (16, 16, 32)$, $k = 4$, $\sigma = 5 \cdot \min(v_W, v_H, v_D)$ and $l = 8$. Table~\ref{tab:s3dis} highlights the results (category IoU, overall accuracy and mean IoU) of semantic segmentation on the \textbf{S3DIS} dataset. Furthermore, Table~\ref{tab:S3DIS_AP0.5} reports the results for the AP (average precision) metric with IoU threshold $0.5$. Our implementation of \textit{group-conv + RBF-VAE} outperforms the state-of-the-art SPG~\cite{DBLP:journals/corr/LargeScalePointCloudSemanticSegmentationWithSuperpointGraphs} by $16.12\%$ in the \textit{Mean IoU} metric. Our method (\textit{group-conv + RBF-VAE}) also achieves better performance than using only \textit{group-conv} or only \textit{RBF-VAE}, as reported in the bottom rows of Tables~\ref{tab:s3dis} and \ref{tab:S3DIS_AP0.5}. Table~\ref{tab:s3dis-iou} compares our method with other methods reporting mean IoU and likewise shows the superior performance of our method.
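For concreteness, the part-averaged IoU protocol described above can be summarized by the following short sketch; the helper names \texttt{shape\_iou} and \texttt{category\_miou} are ours and not part of the released code, and the snippet only illustrates the empty-union convention and the per-category averaging:
\begin{verbatim}
import numpy as np

def shape_iou(pred, gt, part_labels):
    # pred, gt: (N,) arrays of per-point part labels for one shape instance
    ious = []
    for part in part_labels:
        inter = np.sum((pred == part) & (gt == part))
        union = np.sum((pred == part) | (gt == part))
        ious.append(1.0 if union == 0 else inter / union)  # empty union counts as IoU 1
    return np.mean(ious)

def category_miou(instances):
    # category IoU = average instance IoU over the shapes of that category;
    # instances: list of (pred, gt, part_labels) tuples for one category
    return np.mean([shape_iou(p, g, labels) for p, g, labels in instances])
\end{verbatim}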
\begin{table} \centering \begin{tabular}{lll} \hline & Mean IoU\\ \hline PointCNN~\cite{DBLP:journals/corr/pointCNN} & 62.74 \\ PointSIFT~\cite{jiang2018pointsift} & 70.23 \\ Our & 78.22\\ \hline \end{tabular} \caption{\textbf{Results on Semantic Segmentation in S3DIS Dataset:} We compare the results with~\cite{DBLP:journals/corr/pointCNN} and~\cite{jiang2018pointsift} using the mean IoU metric. } \label{tab:s3dis-iou} \vspace{-10px} \end{table} \begin{figure} \centering \includegraphics[width = 0.8\linewidth]{success_small.png} \caption{Part Segmentation Results on ShapeNet: Note that \textit{Car} and \textit{Motor} have lower performance than most other categories in Table~\ref{tab:shapenet}. This is partly because there are more parts in these categories: $4$ labels for \textit{Car} and $5$ labels for \textit{Motor}. } \label{fig:success} \vspace{-20px} \end{figure} \begin{figure} \centering \includegraphics[width = \linewidth]{small.pdf} \caption{\textbf{{Failure} Cases of Our Algorithm on ShapeNet Part Segmentation:} The top row shows the ground truth and the bottom row is our segmentation results. Our network predicts the \textit{Cap} to be a \textit{Table}, where dark blue indicates the table top and the light blue indicates the table legs. In the second column, the dark blue indicates the top of the car. In the third column, our network segments the chair armrest while the ground truth does not. In the last column, our network predicts the \textit{Rocket} to be an \textit{Airplane}. Notice that in the last column, even a human being would find it difficult to distinguish the \textit{Rocket} from the \textit{Airplane}. } \label{fig:fail} \vspace{-10px} \end{figure} \subsection{Robustness test} We have also evaluated the performance and robustness of our method by removing some of the points in the original data. In particular, we sample the \textbf{ShapeNet} by farthest point sampling and use different missing data ratios. We evaluate the performance and accuracy of the resulting datasets. Table \ref{tab:robust} shows the result of our robustness test. This indicates that our approach is not sensitive to missing samples. \begin{table}[] \centering \begin{tabular}{lll} \hline Missing Data Ratio & Accuracy \\ \hline 0\% & 92.47\\ 75\% & 92.48\\ 87.5\% & 91.70\\ \hline \end{tabular} \caption{\textbf{Robustness Test on ShapeNet Part Segmentation Task:} In this evaluation, the point clouds are sampled by farthest point sampling. We test the robustness of our VV-Net network towards missing points. We report the mean accuracy for different missing data ratios. Our approach only has $0.77\%$ accuracy loss, even missing $87.5\%$ of the point cloud data. } \label{tab:robust} \vspace{-15px} \end{table} \subsection{Comparison of Different RBF Kernels}\label{sec:rbf} Our RBF function is used to map the distance to each point to its influence. We compare the Gaussian kernel in our method with the inverse quadratic function kernel. With this kernel, the subvoxel function value at position $p$ is defined as: \begin{equation} f(p) = \max_{v\in V}\bigg(\frac{1}{1 + \sigma^2 \cdot ||p - v||_2^2}\bigg). \end{equation} Here $V$ represents the set of points, $p$ is the center of the subvoxel, and $\sigma$ is a pre-defined parameter, usually a multiple of the subvoxel size. The results are shown in Table~\ref{tab:rbf_test} where using Gaussian kernel achieves better performance. 
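To make the kernel comparison concrete, the following minimal sketch evaluates a single subvoxel value under both kernels. The inverse quadratic form follows the equation above, while the Gaussian form shown here, $\exp(-\|p-v\|_2^2/\sigma^2)$, is only our assumption about the exact normalization used in the paper:
\begin{verbatim}
import numpy as np

def subvoxel_value(p, V, sigma, kernel="gaussian"):
    # p: (3,) subvoxel center, V: (N, 3) input point set, sigma: kernel width
    d2 = np.sum((V - p) ** 2, axis=1)         # squared distances ||p - v||^2
    if kernel == "gaussian":
        vals = np.exp(-d2 / sigma ** 2)       # assumed standard Gaussian form
    else:
        vals = 1.0 / (1.0 + sigma ** 2 * d2)  # inverse quadratic, as in the text
    return float(vals.max())                  # max over all points
\end{verbatim}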
\begin{table}[] \centering \begin{tabular}{lll} & Overall Acc & mean IoU\\ \hline Gaussian & 87.78 & 78.22 \\ inverse quadratic & 78.82 & 65.04 \\ \end{tabular} \caption{\textbf{RBF Kernel Function Comparison on S3DIS Semantic Segmentation Task:} We compare the Gaussian kernel with the inverse quadratic function.} \label{tab:rbf_test} \vspace{-10px} \end{table}
\subsection{Ablation study} First, \textit{G-CNN} is replaced with a traditional CNN and the mean IoU decreases by $7.79\%$, which indicates that the symmetry information and the larger receptive field are useful. Moreover, the importance of \textit{RBF-VAE} is demonstrated by two experiments: a finer grid representation and pure RBF. We show that \emph{the \{0,1\}-VAE often fails to produce reasonable results.} This is because point clouds are sparse in 3D space. E.g., in S3DIS each point cloud contains $4096$ points; over $64\times 64\times 128$ subvoxels, the average subvoxel density is only $0.008$. Our original grid size is $(64,64,128)$ and it occupies $16$MB without the RBF-VAE scheme. However, with RBF-VAE it is about $2$MB in our benchmarks. A finer grid partition would dramatically increase the memory consumption and computational cost. Moreover, it is not very useful due to the sparse distribution of the input point clouds. Please refer to Table~\ref{tab:ablation}.
\begin{table} \caption{\textbf{Ablation study on S3DIS dataset.} 1st row: original results; 2nd row: replacing G-CNN with a traditional CNN; 3rd row: replacing RBF-VAE with RBF grids; 4th row: replacing RBF-VAE voxels with \{0,1\} grids. Note that the VAE latent variable distribution is designed for incorporation with RBF. We also considered directly applying G-CNN on RBF subvoxels, but that was not useful due to the compact representation of the VAE encoding and lowered the performance.} \begin{adjustbox}{max width=0.5\textwidth} \centering \begin{tabular}{llll} \hline & Overall Acc & mean IoU & mean IoU threshold $0.5$\\ \hline Ori. (G-CNN + $16\times16\times32$ RBF-VAE) & 85.98 & 75.40 & 79.00 \\ Trad. CNN + $16\times16\times32$ RBF-VAE & 80.67 & 67.61 & 71.43\\ G-CNN + $32\times32\times64$ RBF & 78.15 & 64.13 & 68.11\\ G-CNN + $64\times64\times128$ finer grid & 82.36 & 70.00 & 74.14\\ \end{tabular} \label{tab:ablation} \end{adjustbox} \vspace{-15px} \end{table}
\section{Conclusions, Limitations and Future Work}\label{sec:conc} In this paper we introduced a novel Voxel VAE network (VV-Net) for robust point segmentation. Our approach uses a radial basis function based variational auto-encoder and combines it with group convolutions. We have compared its performance with state-of-the-art point segmentation algorithms and demonstrated improved accuracy and robustness on well-known datasets. While we observe improved performance in most categories, occasionally our approach may not perform well for
to do different things in Emacs. ## How’s your CI and deployment story going, is everything fine, is there anything particular to CL? I have a mono-repo. Almost all my Lisp code is in one single repo. Some of my newer open source libraries get copied over from the mono-repo to GitHub using Copybara. I run my own Jenkins server, but plan to switch to Buildbot. Jenkins is annoying beyond a certain point. But it’s a great starting point for anybody who doesn’t have CI. I have two desktops, and use my older desktop exclusively as a Jenkins worker. But Lispworks adds a complication because of licensing issues, so my primary desktop runs all the Lispworks jobs. Deployment is interesting. I heavily use bknr.datastore 5, which while awesome, adds a little pain point to deployment. Restarting a service can take a 10-20s because we have to load all the objects to memory. So I avoid restarting the service. Lispworks also adds a quirk. A deployed Lispworks image doesn’t have a compiler, so I can’t just git pull and (asdf:load-system ...). So instead, from my desktop I build a bundled fasl file, and have scripts to upload it to my server and have the server load the fasl file. Finally, bknr.datastore has some issues which reloading code, which causes existing indices to become empty. I haven’t debugged why this happens, but I’m close. I have workaround scripts for these that can correct a bad index, but because of the potential of bringing my datastore into a bad state, deployment is pretty manual at the moment. ## Some CL libraries you particularly like? Some you wish existed? Apart from the usual suspects (cl-str, alexandria, cl-json, hunchentoot etc): • bknr.datastore: This is a game changer. If you’re the type of person that likes prototyping things quickly, this is for you. If you need to scale things, you can always do it when you have to. (In my experience, not all ideas reach the stage where you need to scale over a 100 servers.) It does take a bit of getting used to (particularly dealing with how to recover old snapshots, or replaying bad transactions). I’m using my own patches for Lispworks that aren’t merged upstream yet (I have a PR for it). I think the index failing issue might be in my own patches, but I don’t know for sure yet. • dexador (as opposed to Drakma): For the longest time I avoided Dexador because I thought Drakma is the more well established library. But Drakma for all its history, still doesn’t handle SSL correctly at least on LispWorks. And dexador does. • Qlot also sounds awesome, and I want to start using it. Some I wish existed: • A carefully designed algorithms library, with all the common algorithms, and consistent APIs. • …in particular graph algorithms. There are libraries out there, and I have used them, but they are clunky and not very well documented. • Better image processing libraries. Opticl is fine, but confusing to work with. Maybe it’s just me. • A modern and complete Selenium library. I use the Java library with the Java FFI. • An extensible test matchers library, similar to Java’s Hamcrest. There’s a cl-hamcrest, but it’s not very extensible, and you can’t configure the test result output IIRC. I attempted a solution here (copied over as part of my mono-repo): https://github.com/screenshotbot/screenshotbot-oss/tree/main/src/fiveam-matchers, but it’s not ready for publishing yet. I also think there’s a verbosity problem to be solved with classes and methods, that still needs to be solved in the Lispy way. 
For instance, in Java or python, the method names don’t have to be explicitly imported, but in CL we have to import each method that we need to use, which makes it hard to define what the object’s “interface” is. I am not proposing a solution here, I’m just identifying this as a problem that slows me down. ## Is that a baby alligator you caught yourself on your GH profile picture? It’s one of those pictures they take of you at a tourist trap alligator tour. :) The alligator’s jaw is taped shut. ## Anything more to add? Nothing I can think of :) Thanks and the best for your projects! notes: #### Max-Gerd Retzlaff — uLisp on M5Stack (ESP32):<br /> new version published · 52 days ago I got notified that I haven't updated ulisp-esp-m5stack at GitHub for quite a while. Sorry, for that. Over the last months I worked on a commercial project using uLisp and forgot to update the public repository. At least I have bumped ulisp-esp-m5stack to my version of it from May 13th, 2021 now. It is a—then—unpublished version of uLisp named 3.6b which contains a bug fix for a GC bug in with-output-to-string and a bug fix for lispstring, both authored by David Johnson-Davies who sent them to my via email for testing. Thanks a lot again! It seems they are also included in the uLisp Version 3.6b that David published on 20th June 2021. I know there David published a couple of new releases of uLisp in the meantime with many more interesting improvements but this is the version I am using since May together with a lot of changes by me which I hope to find time to release as well in the near future. ## Error-handling in uLisp by Gohecca I am using Goheeca's Error-handling code since June and I couldn't work without it anymore. I just noticed that he allowed my to push his work to my repository in July already. So I just also published my branch error-handling to ulisp-esp-m5stack/error-handling. It's Goheeca's patches together with a few small commits by me on top of it, mainly to achieve this (as noted in the linked forum thread already): To circumvent the limitation of the missing multiple-values that you mentioned with regard to ignore-errors, I have added a GlobalErrorString to hold the last error message and a function get-error to retrieve it. I consider this to be a workaround but it is good enough to show error messages in the little REPL of the Lisp handheld. Read the whole article. #### Nicolas Hafner — Slicing the horizon - December Kandria Update · 52 days ago November has been filled with horizontal slice development! Nearly all our time was spent working on new content, which is fantastic! The world is already four times as big as the demo, and there's still plenty more to go. ## Horizontal Slice Development We've been busy working on the horizontal slice content, hammering out quests, art, music, levels, and new mechanics. We now have an overview of the complete game map, and it clocks in at about 1.2 x 2.2 km, divided up into 265 unique rooms. This is pretty big already, but not quite the full map size yet. Once we're done with the horizontal slice, we'll be branching things out with sidequests and new side areas that are going to make the map even more dense and broad. The map is split up into four distinct areas, which we call the Surface and Regions 1-3. Each of those areas have their own unique tileset, music tracks, NPCs, and platforming mechanics. 
The demo already shows off the Surface as well as the upper part of Region 1: We can also give you a peek at the visuals for Region 2: I'm really excited to see everything come together, but there's still a lot more levels for me to design before that. I'm glad that I finally managed to get up to speed doing that, but it's still surprisingly hard. Coming up with fresh ideas for each room
from abc import ABC
import math
from typing import TYPE_CHECKING

import numpy as np
import rclpy
from moveit_msgs.srv import GetPositionFK
from pyquaternion import Quaternion
from std_msgs.msg import Float32
from transforms3d.euler import quat2euler

from bitbots_moveit_bindings import get_position_fk
from deep_quintic.utils import Rot
from bitbots_utils.transforms import compute_imu_orientation_from_world

if TYPE_CHECKING:
    from deep_quintic import DeepQuinticEnv


class AbstractReward(ABC):
    def __init__(self, env: "DeepQuinticEnv"):
        self.name = self.__class__.__name__
        self.env = env
        self.episode_reward = 0
        self.current_reward = 0
        self.publisher = self.env.node.create_publisher(Float32, "Reward_" + self.name, 1)
        self.episode_publisher = self.env.node.create_publisher(Float32, "Reward_" + self.name + "_episode", 1)

    def reset_episode_reward(self):
        self.episode_reward = 0

    def get_episode_reward(self):
        return self.episode_reward

    def get_name(self):
        return self.name

    def publish_reward(self):
        if self.publisher.get_subscription_count() > 0:
            self.publisher.publish(Float32(data=self.current_reward))
        if self.episode_publisher.get_subscription_count() > 0:
            self.episode_publisher.publish(Float32(data=self.episode_reward))

    def compute_reward(self):
        print("not implemented, this is abstract")

    def compute_current_reward(self):
        self.current_reward = self.compute_reward()
        self.episode_reward += self.current_reward
        return self.current_reward


class CombinedReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = None

    def compute_current_reward(self):
        self.current_reward = 0
        for reward_type in self.reward_classes:
            self.current_reward += reward_type.compute_current_reward()
        self.episode_reward += self.current_reward
        return self.current_reward

    def get_episode_reward(self):
        reward_sum = 0
        for reward_type in self.reward_classes:
            reward_sum += reward_type.get_episode_reward()
        return reward_sum

    def reset_episode_reward(self):
        self.episode_reward = 0
        for reward in self.reward_classes:
            reward.reset_episode_reward()

    def get_info_dict(self):
        info = dict()
        for reward in self.reward_classes:
            info[reward.get_name()] = reward.get_episode_reward()
        return info

    def publish_reward(self):
        if self.publisher.get_subscription_count() > 0:
            self.publisher.publish(Float32(data=self.current_reward))
        if self.episode_publisher.get_subscription_count() > 0:
            self.episode_publisher.publish(Float32(data=self.episode_reward))
        for reward in self.reward_classes:
            reward.publish_reward()


class WeightedCombinedReward(CombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.weights = None

    def compute_current_reward(self):
        # weight the rewards
        self.current_reward = 0
        for i in range(0, len(self.reward_classes)):
            self.current_reward += self.weights[i] * self.reward_classes[i].compute_current_reward()
        self.episode_reward += self.current_reward
        return self.current_reward

    def get_episode_reward(self):
        reward_sum = 0
        for i in range(0, len(self.reward_classes)):
            reward_sum += self.weights[i] * self.reward_classes[i].get_episode_reward()
        return reward_sum


class EmptyTest(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = []
        self.weights = []


class DeepMimicReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [JointPositionReward(env), JointVelocityReward(env), EndEffectorReward(env),
                               CenterOfMassReward(env)]
        self.weights = [0.65, 0.1, 0.15, 0.1]


class DeepMimicActionReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [JointPositionActionReward(env), JointVelocityReward(env), EndEffectorReward(env),
                               CenterOfMassReward(env)]
        self.weights = [0.65, 0.1, 0.15, 0.1]


class DeepMimicActionCartesianReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), JointVelocityReward(env), EndEffectorReward(env),
                               CenterOfMassReward(env)]
        self.weights = [0.65, 0.1, 0.15, 0.1]


class CartesianReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FeetPosReward(env), FeetOriReward(env), TrajectoryPositionReward(env),
                               TrajectoryOrientationReward(env)]
        self.weights = [0.3, 0.3, 0.2, 0.2]


class CartesianRelativeReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [TrajectoryPositionReward(env), TrajectoryOrientationReward(env)]
        self.weights = [0.5, 0.5]


class CartesianActionReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), TrajectoryPositionReward(env),
                               TrajectoryOrientationReward(env)]
        self.weights = [0.6, 0.2, 0.2]


class CartesianStateVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FeetPosReward(env), FeetOriReward(env), CommandVelReward(env)]
        self.weights = [0.25, 0.25, 0.5]


class CartesianActionVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), CommandVelReward(env), IKErrorReward(env)]
        self.weights = [0.5, 0.5, 0]


class CartesianDoubleActionVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootPositionActionReward(env), FootOrientationActionReward(env),
                               CommandVelReward(env)]
        self.weights = [0.3, 0.3, 0.4]


class CartesianActionMovementReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), MovementReward(env)]
        self.weights = [0.6, 0.4]


class SmoothCartesianActionVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), SmoothCommandVelReward(env)]
        self.weights = [0.6, 0.4]


class CartesianStableActionVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [StableReward(env), FootActionReward(env), CommandVelReward(env)]
        self.weights = [0.3, 0.4, 0.3]


class CartesianActionOnlyReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env)]
        self.weights = [1.0]


class JointActionVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [JointPositionActionReward(env), CommandVelReward(env)]
        self.weights = [0.5, 0.5]


class JointStateVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [JointPositionReward(env), CommandVelReward(env)]
        self.weights = [0.5, 0.5]


class DeepQuinticReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), CommandVelReward(env), UprightReward(env, True),
                               StableReward(env)]
        self.weights = [0.5, 0.3, 0.1, 0.1]


class CassieReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        # todo this is similar but not exactly the same
        self.reward_classes = [JointPositionReward(env), TrajectoryPositionReward(env), CommandVelReward(env),
                               TrajectoryOrientationReward(env), StableReward(env), AppliedTorqueReward(env),
                               ContactForcesReward(env)]
        self.weights = [0.3, 0.24, 0.15, 0.13, 0.06, 0.06, 0.06]


class CassieActionReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [JointPositionActionReward(env), TrajectoryPositionReward(env), CommandVelReward(env),
                               TrajectoryOrientationReward(env), StableReward(env), AppliedTorqueReward(env),
                               ContactForcesReward(env)]
        self.weights = [0.3, 0.24, 0.15, 0.13, 0.06, 0.06, 0.06]


class CassieCartesianReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FeetPosReward(env), FeetOriReward(env), TrajectoryPositionReward(env),
                               CommandVelReward(env), TrajectoryOrientationReward(env), StableReward(env),
                               AppliedTorqueReward(env), ContactForcesReward(env)]
        self.weights = [0.15, 0.15, 0.24, 0.15, 0.13, 0.06, 0.06, 0.06]


class CassieCartesianActionReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), TrajectoryPositionReward(env), CommandVelReward(env),
                               TrajectoryOrientationReward(env), StableReward(env), AppliedTorqueReward(env),
                               ContactForcesReward(env)]
        self.weights = [0.3, 0.24, 0.15, 0.13, 0.06, 0.06, 0.06]


class CassieCartesianActionVelReward(WeightedCombinedReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)
        self.reward_classes = [FootActionReward(env), CommandVelReward(env), StableReward(env),
                               AppliedTorqueReward(env), ContactForcesReward(env)]
        self.weights = [0.4, 0.3, 0.1, 0.1, 0.1]


class ActionNotPossibleReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv"):
        super().__init__(env)

    def compute_reward(self):
        # give -1 if not possible. this will lead to negative rewards
        if self.env.action_possible:
            return 0
        else:
            return -1


class IKErrorReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=1000):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        # gives reward based on the size of the IK error for the current action.
        # this can be helpful to avoid wrong actions and as debug
        request = GetPositionFK.Request()
        for i in range(len(self.env.robot.leg_joints)):
            request.robot_state.joint_state.name.append(self.env.robot.leg_joints[i])
            request.robot_state.joint_state.position.append(self.env.robot.last_ik_result[i])
        request.fk_link_names = ['l_sole', 'r_sole']
        result = get_position_fk(request)  # type: GetPositionFK.Response
        l_sole = result.pose_stamped[result.fk_link_names.index('l_sole')].pose
        fk_left_foot_pos = np.array([l_sole.position.x, l_sole.position.y, l_sole.position.z])
        l_sole_quat = l_sole.orientation
        fk_left_foot_rpy = quat2euler([l_sole_quat.w, l_sole_quat.x, l_sole_quat.y, l_sole_quat.z])
        r_sole = result.pose_stamped[result.fk_link_names.index('r_sole')].pose
        fk_right_foot_pos = np.array([r_sole.position.x, r_sole.position.y, r_sole.position.z])
        r_sole_quat = r_sole.orientation
        fk_right_foot_rpy = quat2euler([r_sole_quat.w, r_sole_quat.x, r_sole_quat.y, r_sole_quat.z])
        # compare to the given goals for the IK
        fk_action = self.env.robot.scale_pose_to_action(fk_left_foot_pos, fk_left_foot_rpy,
                                                        fk_right_foot_pos, fk_right_foot_rpy,
                                                        self.env.rot_type)
        action_diff = np.linalg.norm(np.array(self.env.last_leg_action) - fk_action)
        return math.exp(-self.factor * (action_diff ** 2))


class CommandVelReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=5):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        # give reward for being close to the commanded velocities in 2D space
        # other velocities are only leading to falls anyway and don't need to be handled here
        command_vel = self.env.current_command_speed
        diff_sum = 0
        diff_sum += (command_vel[0] - self.env.robot.walk_vel[0]) ** 2
        diff_sum += (command_vel[1] - self.env.robot.walk_vel[1]) ** 2
        diff_sum += (command_vel[2] - self.env.robot.walk_vel[2]) ** 2
        reward = math.exp(-self.factor * diff_sum)
        return reward


class SmoothCommandVelReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=20):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        # give reward for being close to the commanded velocities in 2D space
        # other velocities are only leading to falls anyway and don't need to be handled here
        command_vel = self.env.current_command_speed
        diff_sum = 0
        # extra factors to keep influence of different direction identical in reward.
        # turning is faster than sidewards walk
        diff_sum += (command_vel[0] - self.env.robot.smooth_vel[0]) ** 2
        diff_sum += (command_vel[1] - self.env.robot.smooth_vel[1]) ** 2
        diff_sum += (command_vel[2] - self.env.robot.smooth_vel[2]) ** 2
        reward = math.exp(-self.factor * diff_sum)
        return reward


class MovementReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=100):
        super().__init__(env)
        # the size of the difference is proportional to the control loop rate, so take it into account
        self.factor = factor * env.step_freq

    def compute_reward(self):
        # compute movement from last pose of the robot to this pose and compare it with the reference trajectory
        robot_pos_diff = self.env.robot.pos_in_world - self.env.robot.previous_pos_in_world
        robot_yaw_diff = quat2euler(self.env.robot.quat_in_world, axes='szxy')[0] - quat2euler(
            self.env.robot.previous_quat_in_world, axes='szxy')[0]
        ref_pos_diff = self.env.refbot.pos_in_world - self.env.refbot.previous_pos_in_world
        ref_yaw_diff = quat2euler(self.env.refbot.quat_in_world, axes='szxy')[0] - quat2euler(
            self.env.refbot.previous_quat_in_world, axes='szxy')[0]
        # only take movement in x,y and yaw
        diff_sum = 0
        diff_sum += (robot_pos_diff[0] - ref_pos_diff[0]) ** 2
        diff_sum += (robot_pos_diff[1] - ref_pos_diff[1]) ** 2
        diff_sum += (robot_yaw_diff - ref_yaw_diff) ** 2
        reward = math.exp(-self.factor * diff_sum)
        return reward


class FeetPosReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=100):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        left_pos_diff = self.env.robot.left_foot_pos - self.env.refbot.previous_left_foot_pos
        right_pos_diff = self.env.robot.right_foot_pos - self.env.refbot.previous_right_foot_pos
        return math.exp(-self.factor * (np.linalg.norm(left_pos_diff) ** 2 + np.linalg.norm(right_pos_diff) ** 2))


class FeetOriReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=100):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        geo_diff_left = Quaternion.distance(Quaternion(*self.env.robot.left_foot_quat),
                                            Quaternion(*self.env.refbot.left_foot_quat))
        geo_diff_right = Quaternion.distance(Quaternion(*self.env.robot.right_foot_quat),
                                             Quaternion(*self.env.refbot.right_foot_quat))
        return math.exp(-self.factor * (np.linalg.norm(geo_diff_left) ** 2 + np.linalg.norm(geo_diff_right) ** 2))


class FootActionReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=1):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        # we interpret the reference as an action.
        # the reference was already stepped when computing the reward therefore we use the previous values
        if self.env.relative:
            ref_action = self.env.robot.scale_relative_pose_to_action(self.env.refbot.previous_left_foot_pos,
                                                                      self.env.refbot.previous_left_foot_quat,
                                                                      self.env.refbot.previous_right_foot_pos,
                                                                      self.env.refbot.previous_right_foot_quat,
                                                                      self.env.rot_type)
        else:
            ref_action = self.env.robot.scale_pose_to_action(self.env.refbot.previous_left_foot_pos,
                                                             quat2euler(self.env.refbot.previous_left_foot_quat),
                                                             self.env.refbot.previous_right_foot_pos,
                                                             quat2euler(self.env.refbot.previous_right_foot_quat),
                                                             self.env.rot_type)
        action_diff = np.linalg.norm(np.array(self.env.last_leg_action) - ref_action)
        return math.exp(-self.factor * (action_diff ** 2))


class FootPositionActionReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=10):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        if self.env.rot_type != Rot.RPY or self.env.relative or not self.env.cartesian_action:
            raise NotImplementedError()
        ref_action = self.env.robot.scale_pose_to_action(self.env.refbot.previous_left_foot_pos,
                                                         quat2euler(self.env.refbot.previous_left_foot_quat),
                                                         self.env.refbot.previous_right_foot_pos,
                                                         quat2euler(self.env.refbot.previous_right_foot_quat),
                                                         self.env.rot_type)
        # only take translation part
        ref_action = np.concatenate([ref_action[:3], ref_action[6:9]])
        action = np.concatenate([self.env.last_leg_action[:3], self.env.last_leg_action[6:9]])
        action_diff = np.linalg.norm(action - ref_action)
        return math.exp(-self.factor * (action_diff ** 2))


class FootOrientationActionReward(AbstractReward):
    def __init__(self, env: "DeepQuinticEnv", factor=40):
        super().__init__(env)
        self.factor = factor

    def compute_reward(self):
        if self.env.rot_type != Rot.RPY or self.env.relative or not self.env.cartesian_action:
            raise NotImplementedError()
        ref_action = self.env.robot.scale_pose_to_action(self.env.refbot.previous_left_foot_pos,
                                                         quat2euler(self.env.refbot.previous_left_foot_quat),
                                                         self.env.refbot.previous_right_foot_pos,
                                                         quat2euler(self.env.refbot.previous_right_foot_quat),
                                                         self.env.rot_type)
        #
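# For orientation, here is a minimal usage sketch (not part of the original module) of how one of the
# combined reward objects above might be driven once per control step. `env` is assumed to be a fully
# constructed DeepQuinticEnv instance; the step loop itself is an illustrative simplification of
# whatever the real training code does.

def run_episode_rewards(env, num_steps=100):
    reward_fn = CartesianActionVelReward(env)  # any WeightedCombinedReward subclass works here
    reward_fn.reset_episode_reward()
    for _ in range(num_steps):
        # ... the environment would be stepped by the training loop at this point ...
        step_reward = reward_fn.compute_current_reward()  # weighted sum of the sub-rewards
        reward_fn.publish_reward()  # publishes per-step and per-episode totals via ROS 2 topics
    return reward_fn.get_info_dict()  # per-component episode totals, keyed by reward class name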
\section{Introduction} The Bernoulli numbers, defined by the generating series $$\frac{t}{e^t-1}=\sum_{k=0}^\infty B_k\frac{t^k}{k!},$$ have a long and intriguing history in the study of number theory, with over 3000 related papers written so far according to the online Bernoulli Number archive maintained by Dilcher and Slavutskii \cite{DilcherSl}. In modern mathematics, the Bernoulli numbers have appeared in the Euler-Maclaurin summation formula, Herbrand's Theorem concerning the class group of cyclotomic number fields, and even the Kervaire--Milnor formula in topology. Well-documented history indicates that Jakob Bernoulli, after whom the Bernoulli numbers are named, was very proud of his discovery that sums of powers of positive integers can be quickly calculated by using these numbers. This result was independently discovered by Seki around the same time \cite{AIK}. By using Fermat's Little Theorem, the formula further leads to many congruences and even super congruences involving multiple harmonic sums, which were first studied independently by the second author in \cite{Zhao2008a,Zhao2011c} and Hoffman in \cite{Hoffman2015}. See \cite[Ch.~8]{Zhao2015a} for more details. Let $\N$ and $\N_0$ be the sets of positive integers and nonnegative integers, respectively. For any $n,d\in\N$ and $\bfs=(s_1,\dots,s_d)\in\N^d$ we define the \emph{multiple harmonic sums} (MHSs) and their $p$-restricted version for primes $p$ by \begin{align*} \calH_n(\bfs):= \sum_{0<k_1<\cdots<k_d<n} \frac{1}{k_1^{s_1}\dots k_d^{s_d}},\quad \calH_n^{(p)}(\bfs):= \sum_{\substack{0<k_1<\cdots<k_d<n\\ p\nmid k_1,\dots, p\nmid k_d}} \frac{1}{k_1^{s_1}\dots k_d^{s_d}}. \end{align*} Here, $d$ is called the depth and $|\bfs|:=s_1+\dots+s_d$ the weight of the MHS. For example, $\calH_{n+1}(1)$ is often called the $n$th harmonic number. In general, as $n\to \infty$ we see that $\calH_n(\bfs)\to \zeta(\bfs)$, the multiple zeta values (MZVs). More than a decade ago, the second author discovered the curious congruence (see \cite{Zhao2007b}) \begin{equation}\label{equ:BaseCongruence} \sum_{\substack{i+j+k=p\\ i,j,k>0}} \frac1{ijk} \equiv -2 B_{p-3} \pmod{p} \end{equation} for all primes $p\ge 3$. Since then several different types of generalizations have been found, see, e.g., \cite{MTWZ,ShenCai2012b,Wang2014b,WangCa2014,XiaCa2010,Zhao2014,ZhouCa2007}. In this paper, we will concentrate on congruences of the following type of sums. Let $\calP_p$ be the set of positive integers not divisible by $p$. For all positive integers $r$ and $m$ such that $p\nmid m$, define \begin{align*} R_n^{(m)}(p^r):=&\,\sum_{\substack{l_1+l_2+\dots+l_n=mp^r\\ l_1,\dots,l_n\in \calP_p }} \frac{1}{l_1 l_2\dots l_n} ,\\ S_n^{(m)} (p^r):=&\, \sum_{\substack{l_1+l_2+\dots+l_n=mp^r\\ p^r>l_1,\dots,l_n\in \calP_p}} \frac{1}{l_1 l_2\dots l_n}. \end{align*} To put these sums into a proper framework, we now recall briefly the definition of the finite MZVs. Let $\frakP$ be the set of rational primes. To study the congruences of MHSs, Kaneko and Zagier \cite{KanekoZa2013} consider the following ring structure\footnote{More precisely, they consider only the case when $\ell=1$.} first used by Kontsevich \cite{Kontsevich2009}: \begin{equation*} \calA_\ell:=\prod_{p\in\frakP}(\Z/p^\ell\Z)\bigg/\bigoplus_{p\in\frakP}(\Z/p^\ell\Z). \end{equation*} Two elements in $\calA_\ell$ are the same if they differ at only finitely many components.
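To make congruence \eqref{equ:BaseCongruence} concrete, the following small script (an illustrative check added for this edit, not part of the original argument; SymPy is assumed to be available for the Bernoulli number) verifies it for a single small prime by exact rational arithmetic:
\begin{verbatim}
# Check sum_{i+j+k=p, i,j,k>0} 1/(ijk) == -2*B_{p-3} (mod p) for p = 7.
from fractions import Fraction
from sympy import bernoulli

p = 7
s = sum(Fraction(1, i * j * (p - i - j))
        for i in range(1, p) for j in range(1, p - i))
b = bernoulli(p - 3)                          # B_4 = -1/30
rhs = Fraction(int(-2 * b.p), int(b.q))       # -2*B_{p-3} as an exact rational
diff = s - rhs
# all denominators are coprime to p, so the congruence holds
# iff p divides the numerator of the (reduced) difference
assert diff.numerator % p == 0
\end{verbatim}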
For simplicity, we often write $p^r$ for the element $\big(p^r\big)_{p\in\frakP}\in\calA_\ell$ for all positive integers $r<\ell$. For other properties and facts of $\calA_\ell$ we refer the interested reader to \cite[Ch.~8]{Zhao2015a}. One now defines the finite MZVs as the following elements in $\calA_\ell$: \begin{equation*} \zeta_{\calA_\ell}(\bfs):=\Big( \calH_p(\bfs) \pmod{p^\ell} \Big)_{p\in\frakP}. \end{equation*} It turns out that Bernoulli numbers often play important roles in the study of finite MZVs, as witnessed by the following result (see \cite[p. 1332]{ZhouCa2007}): \begin{alignat*}{3} \zeta_{\calA_3}(1_n) = &\, (-1)^{n-1}\frac{(n+1)}{2} \gb_{n+2}\cdot p^2 \quad \ && \text{if $2\nmid n$;} \\ \zeta_{\calA_2}(1_n) = &\, (-1)^n \gb_{n+1}\cdot p \quad && \text{if $2|n$}, \phantom{\frac12} \end{alignat*} where $1_n$ is the string $(1,\dots,1)$ with $1$ repeating $n$ times, and $\gb_k:=\big(-B_{p-k}/k \pmod{p}\big)_{p>k}\in\calA_1$ is the so-called $\calA$-Bernoulli number, which is the finite analog of $\zeta(k)$. Note that $\gb_k=0$ for all even positive integers $k$ while it is still a mystery whether $\gb_k\ne 0$ for all odd integers $k>2$. In \cite{MTWZ}, the second author and his collaborators made the following conjecture. \begin{conj}\label{conj:RS1} For any $m,n\in\N$, both $R_n^{(m,1)}$ and $S_n^{(m,1)}$ are elements in the sub-algebra of $\calA_1$ generated by the $\calA$-Bernoulli numbers. \end{conj} In this paper, we will prove this conjecture. More precisely, we have \medskip \noindent{\bf Main Theorem.} {\sl Let $m$, $r$ and $n$ be positive integers. We denote by $\bfk\vdash n$ any tuple of odd positive integers $\bfk=(k_1,\dots,k_t)$ such that $k_1+\dots+k_t=n$ and $k_j\ge 3$ for all $j$. Then for every sufficiently large prime $p$ \begin{align} R_n^{(m)}(p^r)\equiv &\sum_{\substack{l_1+l_2+\cdots+l_n=mp^r\\ p\nmid l_1 l_2 \cdots l_n }} \frac1{l_1l_2\cdots l_n} \equiv p^{r-1} \sum_{\bfk\vdash n} C_{m,\bfk} B_{p-\bfk} \pmod{p^r}, \label{equ:mainR}\\ S_n^{(m)}(p^r)\equiv &\sum_{\substack{l_1+l_2+\cdots+l_n=mp^r\\ p^r>l_1,\dots,l_n\in\calP_p }} \frac1{l_1l_2\cdots l_n} \equiv p^{r-1} \sum_{\bfk\vdash n} C'_{m,\bfk} B_{p-\bfk}\pmod{p^r}, \label{equ:mainS} \end{align} where $B_{p-\bfk}=B_{p-k_1}B_{p-k_2}\cdots B_{p-k_t}$ are products of Bernoulli numbers and the coefficients $C_{m,\bfk}$ and $C'_{m,\bfk}$ are polynomials in $m$ independent of $p$ and $r$. } \medskip The coefficients $C_{m,\bfk}$ and $C'_{m,\bfk}$ are intimately related; see Conjecture~\ref{conj:RSr=1}. As a side remark, in our numerical computation, it is crucial to use some generating functions of $R_{n}^{(m)}$ and $S_{n}^{(m)}$, which are certain products of a finite variation of the $p$-restricted classical polylogarithm function. Unfortunately, it seems difficult to use these generating functions to obtain our main result of this paper. \section{Preliminary lemmas}\label{sec:prel} In this section, we collect some useful results to be applied in the rest of the paper. \begin{lem}\label{lem:Uab} \emph{(cf.\ \cite[Lemma 3.4]{MTWZ})} Let $p$ be a prime, $\gk,s_1,\dots,s_d$ be positive integers, and $\ga$ a non-negative integer. We define the un-ordered sum \begin{equation*} U_{\ga;\gk}^{(p)}(s_1,\dots,s_d):=\sum_{\substack{\ga p<l_1,\dots,l_d<(\ga+\gk)p \\ l_1,\dots,l_d\in \calP_p,\ l_i\ne l_j \forall i\ne j }} \frac{1}{l_1^{s_1}\cdots l_d^{s_d}}. \end{equation*} If the weight $w=s_1+\dots+s_d\le p-3$ then we have \begin{equation*} U_{\ga;\gk}^{(p)}(s_1,\dots,s_d)\equiv (-1)^{d-1} (d-1)!
\frac{\gk w}{w+1} B_{p-w-1} \cdot p \pmod{p^2}. \end{equation*} \end{lem} \begin{lem} \label{lem:Srecurrence} Suppose $a,k,m,n,r\in\N$ and $p$ is a prime. Set \begin{equation*} \gam^{(m)}_n(a):=(-1)^{m+a}\binom{n-2}{m-1} \frac{(a-1)!(n-1-a)!}{(n-1)!}. \end{equation*} If $k<n<p-1$ then we have \begin{enumerate} \item[\upshape (i)] $S_n^{(k)}(p^{r})\equiv (-1)^n S_n^{(n-k)}(p^{r})$ \text{\rm{(mod $p^r$)}}; \item[\upshape (ii)] $\displaystyle S_n^{(m)}(p^{r+1})\equiv p\sum_{a=1}^{n-1} \gam^{(m)}_n(a) S_n^{(a)}(p^{r}) \pmod{p^{r+1}};$ \item[\upshape (iii)] $\displaystyle S_n^{(m)}(p^{r+1}) \equiv (-1)^{m-1}\binom{n-2}{m-1} S_n^{(1)}(p^2) p^{r-1} \pmod{p^{r+1}}.$ \end{enumerate} \end{lem} \begin{proof} (i) and (ii) follow from \cite[Lemma 2.3]{MTWZ} while (iii) from \cite[Lemma 2.2]{CHQWZ}. \end{proof} \begin{lem} \label{lem:RmS1} \emph{(\cite[Proposition\ 2.3]{CHQWZ})} Let $m,n,r\in\N$. For all $r\ge 2$, we have \begin{equation*} R_n^{(m,r)} = m \cdot S_n^{(1,2)} p^{r-2}\in\calA_r. \end{equation*} \end{lem} \begin{lem}\label{lem:Sm=Rms} Suppose $m,n,r\in\N$. Then we have \begin{equation}\label{equ:S2R} S_n^{(m,r)}=\sum_{k=0}^{m-1}(-1)^k \binom{n}{k} R_{n}^{(m-k,r)}\in\calA_r. \end{equation} \end{lem} \begin{proof} Equation \eqref{equ:S2R} can be proved using the Inclusion-Exclusion Principle similar to the proof of \cite[Lemma 1]{Wang2015}. Indeed, for all primes $p$ \begin{align*} S_{n}^{(m)}(p^r)&\,=\sum_{\substack{l_1+\cdots +l_n=mp^r\\ l_1,\cdots, l_n\in\calP_p}} \frac{1}{l_1\cdots l_n} +\sum_{k=1}^{m-1}(-1)^{k} \sum_{\substack{1\le a_1<\cdots <a_{k}\le n\\ l_1+\cdots +l_n=mp^r\\ l_1,\cdots, l_n\in\calP_p\\ l_{a_1}>p^r ,\cdots, l_{a_k}>p^r }} \frac{1}{l_1\cdots l_n}\\ &\,= \sum_{k=0}^{m-1}(-1)^k\binom{n}{k} \sum_{\substack{l_1+\cdots +l_n=(m-k)p^r\\ l_1,\cdots, l_n\in\calP_p}} \frac{1}{(l_1+p^r) \cdots (l_k+p^r)l_{k+1}\cdots l_n}\\ &\equiv \sum_{k=0}^{m-1} (-1)^k\binom{n}{k} \sum_{\substack{l_1+\cdots +l_n=(m-k)p^r\\ l_1,\cdots, l_n\in\calP_p}} \frac{1}{l_1 \cdots l_n} \pmod{p^r}\\ &\equiv \sum_{k=0}^{m-1}(-1)^k \binom{n}{k} R_{n}^{(m-k)}(p^r) \pmod{p^r}, \end{align*} as desired. \end{proof} We see immediately from Lemmas \ref{lem:RmS1} and \ref{lem:Sm=Rms} that the proof of the Main Theorem is reduced to its special case of $S_n^{(1,2)}$. The idea is to compute $R_{n}^{(m,1)}$ first, which leads to $S_n^{(m,1)}$ by the Lemma~\ref{lem:Sm=Rms}. Then $S_n^{(1,2)}$ can be determined using $S_n^{(m,1)}$ by Lemma~\ref{lem:Srecurrence} (ii). For the convenience of numerical computation, we list some of the relevant known results. \begin{lem}\label{lem:R1S1} \emph{(\cite[Main Theorem]{ZhouCa2007})} Let $n>1$ be positive integer. Then we have \begin{equation*} S_n^{(1,1)} =R_n^{(1,1)}= \left\{ \begin{array}{ll} \displaystyle n! \gb_n & \quad \hbox{if $2\nmid n$;} \\ \displaystyle 0 & \quad \hbox{if $2\mid n$.} \end{array} \right. \end{equation*} \end{lem} \begin{lem}\label{lem:R2S2} Let $n>1$ be positive integer. Then we have \begin{equation*} R_n^{(2,1)}= \left\{ \begin{array}{ll} \displaystyle \frac{(n+1)!}{2} \gb_n & \quad \hbox{if $2\nmid n$;} \\ \displaystyle \frac{n!}{2}\sum_{a+b\vdash n}\gb_a\gb_b & \quad \hbox{if $2\mid n$,} \end{array} \right. \end{equation*} and \begin{equation*} S_n^{(2,1)}= \left\{ \begin{array}{ll} \displaystyle -\frac{n-1}{2}n! \gb_n & \quad \hbox{if $2\nmid n$;} \\ \displaystyle \frac{n!}{2}\sum_{a+b\vdash n}\gb_a\gb_b & \quad \hbox{if $2\mid n$.} \end{array} \right. 
\end{equation*} \end{lem} \begin{proof} The odd cases follow from \cite[Lemma 3.5 and Cor 3.6]{MTWZ} respectively. The even cases are proved in \cite[Theorem\ 1 and Corollary\ 1]{Wang2015}. \end{proof} \begin{lem}\label{lem:R3S3} Let $n>1$ be positive integer. Then we have \begin{equation*} R_n^{(3,1)}= \left\{ \begin{array}{ll} \displaystyle {n+2 \choose 3}\cdot (n-1)! \gb_n +\frac{n!}{6}\sum_{\substack{a+b+c\vdash n}} \gb_a\gb_b \gb_c & \quad \hbox{if $2\nmid n$;} \\ \displaystyle \frac{n!(n+2)}{4}\sum_{a+b\vdash n} \gb_a\gb_b &\quad \hbox{if $2\mid n$,} \end{array} \right. \end{equation*} and \begin{equation*} S_n^{(3,1)}= \left\{ \begin{array}{ll} \displaystyle {n \choose 3}\cdot (n-1)!\gb_n +\frac{n!}{6}\sum_{\substack{a+b+c\vdash n}} \gb_a\gb_b\gb_c & \quad \hbox{if $2\nmid n$;} \\ \displaystyle -\frac{n!(n-2)}{4}\sum_{a+b\vdash n} \gb_a\gb_b &\quad \hbox{if $2\mid n$.} \end{array} \right. \end{equation*} \end{lem} \begin{proof} The odd cases of follow from \cite[Lemma 3.7 and Corollary 3.7]{MTWZ} respectively. The even cases are essentially proved in \cite[Theorem 2 and Corollary 2]{Wang2015}. We only need to observe that if $n$ is even then by exchanging the indices $a$ and $b$ in half of the sums, we get \begin{alignat*}{3} R_n^{(3)}(p) \equiv &\, \frac{n!}{6}\sum_{a+b\vdash n}(2n-a+3)\frac{B_{p-a}B_{p-b}}{ab} &&\pmod{p}\\ \equiv&\, \frac{n!}{12}\sum_{a+b\vdash n} (4n-a-b+6)\frac{B_{p-a}B_{p-b}}{ab} &&\pmod{p}\\ \equiv&\, \frac{n!(n+2)}{4}\sum_{a+b\vdash n} \frac{B_{p-a}B_{p-b}}{ab} &&\pmod{p} \end{alignat*} since $a+b=n$. Similarly \begin{alignat*}{3} S_n^{(3)}(p) \equiv &\, -\frac{n!}{6}\sum_{a+b\vdash n} (n+a-3)\frac{B_{p-a}B_{p-b}}{ab} &&\pmod{p}\\ \equiv&\,-\frac{n!}{12}\sum_{a+b\vdash n} (2n+a+b-6)\frac{B_{p-a}B_{p-b}}{ab} &&\pmod{p}\\ \equiv&\,-\frac{n!(n-2)}{4}\sum_{a+b\vdash n} \frac{B_{p-a}B_{p-b}}{ab} &&\pmod{p} \end{alignat*} as desired. \end{proof} \section{Sums related to multiple harmonic sums} We are now ready to consider the sums $R^{(m,r)}_n$. The key step is to compute $R^{(m,1)}_n$ for $m\le n/2$, which we now transform using MHSs. By the definition, for all primes $p$, we have \begin{align*} R_{n}^{(m)}(p) =&\, \frac1{mp}\sum_{\substack{l_1+l_2+\dots+l_n=mp\\ l_1,\dots,l_n\in \calP_p }} \frac{l_1+l_2+\dots+l_n}{l_1 l_2\dots l_n } \\ =&\, \frac{n}{mp}\sum_{\substack{u_{n-1}=l_1+l_2+\dots+l_{n-1}<mp\\ l_1,\dots,l_{n-1}, u_{n-1}\in \calP_p}} \frac{1}{l_1 l_2\dots l_{n-1}} \quad(\text{by symmetry of $l_1,\dots,l_n$})\\ =&\, \frac{n}{mp}\sum_{\substack{u_{n-1}=l_1+l_2+\dots+l_{n-1}<mp\\ l_1,\dots,l_{n-1},u_{n-1}\in \calP_p }} \frac{l_1+l_2+\dots+l_{n-1}}{l_1 l_2\dots l_{n-1} u_{n-1}}\\ =&\, \frac{n(n-1)}{mp}\sum_{\substack{u_{n-2}=l_1+l_2+\dots+l_{n-2}<u_{n-1}<mp \\ l_1,\dots,l_{n-2}\in \calP_p \\ u_{n-1}-u_{n-2},u_{n-1}\in \calP_p}} \frac{1}{l_1 l_2\dots l_{n-2} u_{n-1}}. \end{align*} Continuing this process by using the substitution $u_j=l_1+l_2+\dots+l_j$ for each $j=n-3,\dots,2,1$, we arrive at \begin{equation*} R_{n}^{(m)}(p) =\frac{n!}{mp} \sum_{\substack{0< u_1<\dots<u_{n-1}<mp \\ u_1,u_2-u_1,\dots,u_{n-1}-u_{n-2},u_{n-1} \in \calP_p}} \frac{1}{u_1 u_2\dots u_{n-1}} . \end{equation*} Observe that the indices $u_j$ ($j=2,\dots,n-2$) are allowed to be multiples of $p$. Thus we set \begin{equation*} T_{n,\ell}^{(m)}(p)\,:= \sum_{\substack{2\le a_1<\cdots<a_{\ell-1}\le n-2\\ 1\le k_1<\dots<k_{\ell-1}<m}} \ \sum_{\substack{0< u_1<\dots<u_{n-1}<mp \\ u_{a_1}=k_1p,
and mobile invertebrates associated with live coral habitats2, 30, 48. High latitude sites such as the southern GBR may initially serve as refugia from thermal stress. However, even high latitude sites can be affected by thermal stress or declining saturation state leading to reductions in growth rates43. The relative benefits of increasing temperature versus constraints imposed by declines in aragonite will play a dominant role in determining the fate of coral in the future46. Heron Island already exhibits reduced rates of coral calcification (50% less for staghorn A. muricata, 30% less for P. damicornis and 8% less for I. palifera (Fig. 3)) compared to those at Lizard Island, likely driven by considerably lower annual average sea temperature. Therefore, increases in temperature may have a positive effect on coral growth at the southern GBR, but there is limited understanding of the constraints imposed by ocean acidification on coral accretion at these high latitude locations. Overall, declines in aragonite saturation are expected to be less influential on calcification than rising sea temperature49, 50 throughout this century. In conclusion, growth rates of three dominant coral species varied spatially along the GBR, largely in conjunction with differences in average annual temperatures. Based on limited temporal sampling of carbonate chemistry, within-reef aragonite saturation was comparable among these locations, suggesting that reef ecosystems do have considerable capacity to buffer latitudinal gradients in oceanic aragonite saturation. There was also no consistent trend in variation in solar radiation among reefs relative to observed variation in growth rates, though this may impose increased constraints on coral growth at higher latitudes51. Linear extension rates at Heron Island, where annual SST was on average 2 °C lower, were 20%, 33%, and 34% less for I. palifera, P. damicornis and A. muricata, respectively, compared to Lizard Island. Ocean warming may therefore, lead to moderate increases in growth rates of corals at southern locations (see also Pratchett et al.12), whereas increasing temperatures may increasingly constrain coral growth (especially during summer) at lower latitudes. Continued monitoring of coral growth rates in combination with environmental conditions, such as temperature stress and within reef carbonate chemistry, will be essential to better appreciate likely impacts of climate change on corals and reef ecosystems. ## Methods ### Coral species Coral growth (specifically, linear extension, density, calcification) was quantified for three coral species (Acropora muricata (cf. A. formosa), Pocillopora damicornis, and Isopora palifera) that were common and abundant at widely separated locations along the length of the GBR. Acropora muricata is a staghorn coral that often dominates in shallow water lagoons, forming large mono-specific thickets. P. damicornis is an abundant, ubiquitous species and the recent amendment of the Pocillopora complex taxonomy52 was used to aid identification of P. damicornis. I. palifera is abundant in high wave energy areas forming submassive clumps that contain columns and ridges53. ### Study sites This study was conducted at three distinct locations, spread along 1,187 km of Australia’s Great Barrier Reef (Fig. 1). The northernmost location was Lizard Island (14.7°S), followed by Davies Reef (18.8°S) and Trunk Reef (18.4°S), and finally Heron Island (23.4°S) in the south. 
At each of the three locations, coral growth was documented at 2 sites for A. muricata and P. damicornis, but only 1 site for I. palifera. Study sites were specifically selected to provide comparable habitat and depth (5 m) at each reef, while also selecting areas with high abundance of each of the specific study species. In the central sector, P. damicornis was poorly represented at the first reef visited (Davies Reef) and therefore, additional sampling was undertaken at another nearby reef, Trunk Reef (Fig. 1I). ### Coral growth To quantify growth rates for each coral species, replicate colonies and/or branches were stained in situ using Alizarin Red54. As calcification takes place, the dye gets incorporated into the skeleton producing a permanent reference against which to measure all subsequent skeletal growth. The concentration of Alizarin used was 12 mg L−1 and corals were exposed to the dye for 4 hours during the daylight hours of 900–160054. Stained corals were marked with cattle tags attached to the colony base, away from the growing tips to minimise disruption to growth. In total, 240 colonies of A. muricata, 120 colonies of P. damicornis and 30 colonies of I. palifera were stained. Once collected, branches and colonies were placed in 10% sodium hypochlorite to remove the coral tissue and expose the skeleton (Fig. 2). #### Acropora muricata The sampling regime for A. muricata encompassed four 6-month periods at each reef: summer 2012–2013 (Oct/Nov 2012–Mar/Apr 2013), winter 2013 (Apr 2013–Oct 2013), summer 2013–2014 (Oct/Nov 2013-Mar/Apr 2014), winter 2014 (Apr/May 2014–Oct-Nov 2014). Individual branches were enclosed in a 1 L plastic bag in situ and secured with an elastic band. Alizarin red was injected with a needle under the elastic band into the bag. Ten colonies of A. muricata, with three branches per colony, were stained at each site, totalling 20 colonies and 60 branches per reef (60 colonies/180 branches total) in October 2012. In April 2013, the stained colonies were collected and another set stained. This cycle of staining continued for two years, totalling 720 branches. Growth measurements were determined for each branch of A. muricata. Linear extension (cm) was measured with Vernier calipers, recording the minimum distance from the stain line to the tip of the apical polyp. Branches that had died prior to final collection were excluded from analysis. In addition, 2.2% (16/720) of branches had switched to zooxanthellae-free (non-growing) tips over the course of the study, which can result from interior conditions of the colony being less suitable for growth or from branches growing too closely together55; these were also excluded from analysis. To determine bulk skeletal density of the branching species A. muricata, corals were cut along the stain line with a geological saw using a 2 mm saw blade. The dry weight (g) of the cut branch tips was recorded. Branches were dipped in paraffin wax and the total enclosed volume was determined using the water displacement technique56. Skeletal bulk density (g cm−3) was then determined by dividing the dry weight by the enclosed volume. Calcification (g cm−2) rates were calculated by multiplying the linear extension by the density15. However, these calcification rates are only of the apical newly grown branch and are not a measure of whole colony calcification. #### Pocillopora damicornis The sampling regime for P.
damicornis encompassed one year, separated into two 6-month periods at each reef: summer 2013–2014 (Oct/Nov 2013–Mar/Apr 2014) and winter 2014 (Apr/May 2014–Oct/Nov 2014). 20 colonies per location (10 at each site) were stained. Colonies (max 20 cm diameter) were removed from the substrate and put into a 70 L clear plastic bag in situ with Alizarin dye released into the bag. For P. damicornis, the number of coral branches measured with callipers ranged from 10–20 (n = 10 for corals 8–15 cm in diameter, n = 20 for corals 15–20 cm in diameter). Branch measurements for P. damicornis were randomly sampled along the longest axis of growth (often the most upward projecting). Measured branches were sectioned with a saw and the bulk density and calcification was determined as described for A. muricata. In addition, areas of partial mortality due to algal overgrowth or smothering due to sediment were excluded. Calculated rates of density and calcification were averaged among branches and determined at the colony level. #### Isopora palifera Growth rates of I. palifera were determined for an entire year (Oct/Nov 2013–Oct/Nov 2014) at one site per location (Lizard Island, Davies Reef, Heron Island). At each location, 10 colonies were stained (30 colonies total) with 2–3 columns per colony. Columns of I. palifera were enclosed in a 4 L clear plastic bag, sealed at the base with an elastic band and the dye injected under the elastic band with a needle. To determine the linear extension of I. palifera, stained coral columns were sliced at 4.4 mm thickness until the maximum vertical
impression is that systems with different levels of ambipolar diffusion have the following in common: they are all actively accreting with outflows of various types, and substantial substructures develop in all disks. There are, however, significant differences. In the most diffusive case (model ad-els0.05), both the disk and the wind are largely laminar and mirror-symmetric above and below the midplane (panel a), with the disk accretion confined to the midplane (panel b). Prominent rings and gaps form in both the distributions of the midplane vertical field (panel c) and surface density (panel d) within a radius of about 20~au. They remain nearly axisymmetric until the end of the simulation ($t/t_0=3000$). As the ambipolar diffusivity decreases by a factor of five to the reference value used in model ad-els0.25, the wind and disk remain largely laminar, although the accretion layer has migrated close to the bottom surface of the disk within a radius of about 20~au (panel f), which leads to a significant asymmetry about the disk midplane at $\theta=90^\circ$. Nevertheless, complete (as opposed to partial) rings and gaps that close on themselves are still formed in this region in the distributions of both midplane vertical field (panel g) and surface density (panel h). At larger radii, the rings and gaps appear somewhat less well-defined, especially in the distribution of midplane vertical field, with some rings containing multiple strands and others appearing as tightly wound spirals (panel g). This contrast in appearance between rings and gaps in the inner and outer parts of the disk becomes more pronounced as the ambipolar diffusivity decreases by another factor of five to the value used in model ad-els1.25. In this better magnetically coupled system, the motions of both the disk and outflow become more chaotic (panels i and j). In particular, the mass accretion is no longer confined to the midplane region (panel j). As in the well-coupled 2D (axisymmetric) simulations \citepalias{2018MNRAS.477.1239S}, highly variable `avalanche' accretion streams develop near the disk surface, which dominate the dynamics in the disk envelope (e.g., \citealt{2018ApJ...857...34Z}). In particular, they induce a strong asymmetry in the top and bottom half of the disk envelope, with rapid accretion in the top half (reminiscent of the funnel-wall accretion; \citealt{2018ApJ...857....4T}) but rapid expansion in the bottom half (panel i). Nevertheless, rather regular substructures are still formed in the disk surface density, especially in the inner part (within about 20~au), where well-behaved rings and gaps are still evident (panels k and l). At larger radii, the substructures appear more spiral-like, especially in the surface density distribution (panel l); the spirals can be seen even more clearly in the animation of the figure in the online journal. The difference in appearance between the inner and outer parts of the disk is broadly similar to that in the more magnetically diffusive model ad-els0.25, but the outer disk of the less diffusive model exhibits a more spiral-like appearance. \begin{figure*} \centering \includegraphics[width=2.0\columnwidth]{figures/param_panels.png} \caption{The simulations from the top row to the bottom row: ad-els0.05, ad-els0.25, ad-els1.25, ideal, beta1e4 at $t/t_0=2500$.
Plotted are azimuthally averaged density and poloidal field lines (first column), mass accretion rate per unit polar angle (second column), normalized midplane vertical field strength (third column) and surface density (fourth column).} \label{fig:5x4} \end{figure*} Even more pronounced spirals develop in the completely coupled (ideal MHD) model ad-els$\infty$. The gas motions are chaotic in both the disk and the envelope (panels m and n), driven presumably by the MRI in the disk and its variants (`avalanche accretion streams') in the envelope. As in the moderately well-coupled case of model ad-els1.25, there is a strong asymmetry in the top and bottom half of the simulation, which is unsurprising given the rather turbulent state of the system. The ideal MHD disk is full of substructures (panels m and n), but they are less coherent than those in model ad-els1.25. In particular, the substructures in the distribution of the midplane vertical magnetic field appear more clumpy, with most of the `blobs' of enhanced $B_{z,\mathrm{mid}}$ smeared into arcs rather than complete rings (panel o). The difference in the disk surface density substructure is even more striking: they are dominated by flocculent spirals rather than rings and gaps (panel p). The spirals can be seen in the distribution of not only surface density, but also the vertical magnetic field at the disk midplane. They are common in global ideal MHD simulations (e.g., figure~9 of \citealt{2000ApJ...528..462H}, figure~10 of \citealt{2011ApJ...735..122F}, figure~7 of \citealt{2014ApJ...784..121S}). A clear trend emerges from this set of simulations: as the ambipolar diffusivity increases, both the disk and wind become more laminar, and the disk substructures become more coherent spatially (i.e., more ring- and gap-like). This trend is perhaps to be expected. In the completely coupled (ideal MHD) limit, the disk is expected to be unstable to both axisymmetric (channel) and non-axisymmetric MRI modes. The latter should make it easier to form substructures of limited azimuthal extent (or clumps) than complete rings. It is also possible that the poloidal magnetic field lines are wrapped more strongly by the vertical differential rotation in the ideal MHD limit, creating a more severe pinching of the field lines in the azimuthal direction. This would make the toroidal magnetic field more prone to reconnection, which is necessarily non-axisymmetric and likely leads to clump formation. The shearing of the clumps by differential rotation, coupled with relatively fast accretion\footnote{The fast accretion in the ideal MHD case can be inferred from Fig.~\ref{fig:5x4}(p), which shows a lower surface density (and thus more mass depletion) than the more magnetically diffusive cases.}, would lead to widespread spirals, as seen in model ad-els$\infty$. As the ambipolar diffusivity increases, the MRI (and its variant `avalanche accretion streams') should be weakened and eventually suppressed. In the limit where the MRI is suppressed, active accretion is still possible provided that the ambipolar diffusivity is not too large (there would not be any accretion and structure formation if the magnetic field is completely decoupled from the matter). Accretion is confined to the disk midplane region, driven by angular momentum removal from the largely laminar disk wind (e.g., Fig.~\ref{fig:5x4}a,b) and aided by the steepening of the radial ($J_r$) current sheet through ambipolar diffusion via the Brandenburg-Zweibel mechanism \citep{1994ApJ...427L..91B}. 
The midplane accretion would drag the poloidal field lines into a sharply pinched configuration, leading to reconnection and the eventual formation of rings and gaps through the mechanism discussed earlier (and in \citetalias{2018MNRAS.477.1239S}). Once formed, such substructures can be long lived because they are not prone to disruption by the non-axisymmetric modes of the MRI, which are suppressed by ambipolar diffusion in this limit. Azimuthal variation is still possible because of, e.g., reconnection of the oppositely directing toroidal magnetic field across the accretion layer. Again, this reconnection is intrinsically non-axisymmetric but appears to only perturb rather than destroy the rings and gaps. At a more fundamental level, it is perhaps not too surprising that the magnetic diffusivity can make the disk substructures more ordered and coherent. Magnetic diffusivity is expected to reduce disorder in the magnetic field and a more ordered magnetic field configuration is ultimately responsible for the creation of more coherent disk substructures such as rings and gaps. This notion is consistent with the radially demarcated pattern observed in the moderately well coupled model ad-els1.25, where well-defined rings and gaps are produced in the more-diffusive inner disk (as measured by the dimensionless Elsasser number, which scales with radius as $\Lambda \propto r^{3/4}$ initially) and flocculent spirals in the better-coupled outer disk. It is also consistent with the trend that, as the ambipolar diffusivity increases from model ad-els1.25 to model ad-els0.25, rings and gaps become more important relative to flocculent spirals (compare panels h and l of Fig.~\ref{fig:5x4}). Similarly, one might also expect that the ring and gap spacing produced from
write proper comments with references!), but I've come back to it often enough by now to consider it a useful part of my tool chest. ### The end And that's it! It took me a while to get through the full list of my recommendations and write the extended summaries. That said, spacing the posts out like this is probably better to read than having a giant dump of 30+ papers recommended all at once. Either way, with this list done, regular blogging will resume in the near future. At least that's the plan. Continued from part 5. (Sorry for the long delay, I was busy with other things.) ### 24. O'Neill – "PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation" (2014; pseudo-random number generation) This paper (well, more like a technical report really) introduces a new family of pseudo-random number generators, but unlike most such papers, it's essentially self-contained and features a great introduction to much of the underlying theory. Highly recommended if you're interested in how random number generators work. Unlike most of the papers on the list, there's not much reason for me to write a lot about the background here, since essentially everything you need to know is already in there! That said, the section I think everyone using random-number generators should have a look at is section 3, "Quantifying Statistical Performance". Specifically, the key insight is that any pseudorandom number generator with a finite amount of state must, of necessity, have certain biases in the sequences of numbers it outputs. A RNG's period is bounded by the amount of state it has – a RNG with N bits can't possibly have a period longer than 2^N, per the pigeonhole principle – and if we also expect the distribution of values over that full period to be uniform (as is usually desired for RNGs), that means that as we draw more and more random numbers (and get closer to exhausting a full period of the RNG), the distribution of numbers drawn must – of necessity – become distinguishable from "random". (Note that how exactly to measure "randomness" is a thorny problem; you can't really prove that something is random, but you can prove that something isn't random, namely by showing that it has some kind of discernible structure or bias. Random number generators are generally evaluated by running them through a barrage of statistical tests; passing these tests is no guarantee that the sequence produced by a RNG is "sufficiently random", but failing even one test is strong evidence that it isn't.) What the paper then does is, instead of comparing random number generators with N bits of internal state to an idealized random source, compare them to the performance of an ideal random number generator with N bits of internal state – ideal here meaning that its internal state evolves according to a randomly chosen permutation $f : 2^N \rightarrow 2^N$ with a single cycle. That is to say, instead of picking a new "truly random" number every time the RNG is called, we now require that it has a limited amount of state; there is a (randomly chosen!) list of 2^N uniformly distributed random numbers that the generator cycles through, and because the generator only has N bits of state, after 2^N evaluations the list must repeat. (A side effect of this is that any RNG that outputs its entire state as its return value, e.g.
many classic LCG and LFSR generators, will be easy to distinguish from true randomness because it never repeats any value until looping around a whole period, which is statistically highly unlikely.) Moreover, after going through a full cycle of 2^N random numbers, we expect all possible outputs to have occurred with (as close as possible to) uniform probability. This shift of perspective of comparing not to an ideal random number generator, but to the best any random number generator can do with a given amount of state, is crucial. The same insight also underlies another fairly important discovery of the last few years, cryptographic sponge functions. Cryptographic sponges started as a theoretical framework for hash function construction, starting from the desire to compare not to an idealized "random oracle", but instead to a "random sponge" – which is like a random oracle except it has bounded internal state (as any practical cryptographic hash has, and must). Armed with this insight, this paper then goes on to compare the performance of various published random number generators against the theoretical performance of an ideal RNG with the same amount of state. No generator with a given amount of state can, in the long term and on average, do better than such an ideal RNG with the same amount of state. Therefore, checking how close any particular algorithm with a given state size comes to the statistical performance of a same-state-size ideal RNG is a decent measure of the algorithm quality – and conversely, reducing the state size must make any RNG, no matter how good otherwise, eventually fail statistical tests. How much the state size can be reduced before statistical deficiencies become apparent is thus an interesting metric to collect. The paper goes on to do just that, testing using the well-respected TestU01 test suite by L'Ecuyer and Simard. It turns out that: • Many popular random number generators fail certain randomness tests, regardless of their state size. For example, the Mersenne Twister, being essentially a giant Generalized Feedback Shift Register (a generalization of Linear-Feedback Shift Registers, one of the "classic" pseudorandom number generation algorithms), has easily detectable biases despite its giant state size of nearly 2.5 kilobytes. • Linear Congruential Generators (LCGs), the other big "classic" family of PRNGs, don't perform particularly great for small to medium state sizes (i.e. the range most typical LCGs fall into), but improve fairly rapidly and consistently as their state size increases. • "Hybrid" generators that combine a simple basic generator to evolve the internal state with a different function to produce the output bits from the state empirically have much better statistical performance than using the basic generators directly does (see the toy sketch below for the general shape of such a generator). From there on, the paper constructs a new family of pseudorandom number generators, PCGs ("permuted congruential generators"), by combining basic LCGs (with a sufficient amount of state bits) for the internal state evolution with systematically constructed output functions based on certain types of permutations, and shows that they combine state-of-the-art statistical performance with relatively small state sizes (i.e. low memory usage) and high speed. ### 25. Cook, Podelski, Rybalchenko – "Termination Proofs for Systems Code" (2006; theoretical CS/program analysis) This paper is about an algorithm that can automatically prove whether programs terminate.
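(Before getting to that paper, one quick aside on the "hybrid generator" idea from the PCG write-up above. The deliberately simple toy below is not the actual PCG construction, just the general state-update/output-function split: the state advances via an ordinary 64-bit LCG, and a separate bit-mixing step, here an arbitrary xorshift-multiply permutation chosen purely for illustration, produces the output.)

MASK64 = (1 << 64) - 1

class ToyHybridRNG:
    """State evolves via a plain 64-bit LCG; output comes from a separate mixing function."""

    def __init__(self, seed):
        self.state = seed & MASK64

    def next_u32(self):
        # state transition: ordinary truncated LCG (Knuth's MMIX constants)
        self.state = (self.state * 6364136223846793005 + 1442695040888963407) & MASK64
        # output function: xorshift-style mix of the state bits
        # (illustrative only, *not* the permutation family used by PCG)
        x = self.state
        x ^= x >> 33
        x = (x * 0xFF51AFD7ED558CCD) & MASK64
        x ^= x >> 29
        return (x >> 32) & 0xFFFFFFFF

rng = ToyHybridRNG(42)
samples = [rng.next_u32() for _ in range(4)]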
If you’ve learned about the halting problem, this might give you pause. I’ll not talk much about the actual algorithm here, and instead use this space to try and explain how to reconcile results such as the halting problem and Rice’s theorem with the existence of software tools that can prove termination, do static code analysis/linting, or even just advanced optimizing compilers! The halting problem, informally stated, is the problem of deciding whether a given program with a given input will eventually complete, or whether it will run forever. Alan Turing famously proved that this problem is undecidable, meaning that there can be no algorithm that will always give a correct yes-or-no answer to any instance of this problem after running for a finite amount of time. Note that I’m stating this quite carefully, because it turns out that all of the small restrictions in the above paragraph are important. It is very easy to describe an algorithm that always gives a yes-or-no answer to any halting problem instance if the answer isn’t required to be correct (duh); the problem is also tractable if
$M=2048$ in the ResNet case, often called 2048-D.}. An example of extracted boxes is shown in figure \ref{fig:FasterRCNNBOX}. % About 5 images per second can be obtained on a standard GPU. This part can be performed offline since we don't fine-tune the network. As mentioned in \cite{kornblith_better_2018}, residual network (ResNet) appears to be the best architecture for transfer learning by feature extraction among the different ImageNet models, and we therefore choose these networks for our Faster R-CNN versions. One of them (denoted RES-101-VOC07) is a 101-layer ResNet trained for the detection task on PASCAL VOC2007. The other one (denoted RES-152-COCO) is a 152-layer ResNet trained on MS COCO~\cite{lin_microsoft_2014}. We will also compare our approach to the plain application of these networks for the detection tasks when possible, that is, when they were trained on classes we want to detect. We refer to these approaches as FSD (fully supervised detection) in our experiments. For implementation, we build on the Tensorflow\footnote{\url{https://www.tensorflow.org/}} implementation of Faster R-CNN of Chen et al. \cite{chen_implementation_2017}\footnote{Code can be found on GitHub \url{https://github.com/endernewton/tf-faster-rcnn}.}. \begin{figure}[!tbp] \centering \hfill \includegraphics[height=5cm]{{im/threshold_07_reduceBoxes}.jpg} \hfill \includegraphics[height=5cm]{{im/TheMysticMarriageofStCatherinebyVeronese_threshold_00}.jpg}\hfill \caption{Some of the regions of interest generated by the region proposal part (RPN) of Faster R-CNN.} \label{fig:FasterRCNNBOX} \end{figure} \paragraph{\bf \MIL{}} When a new class is to be learned, the user provides a set of weakly annotated images. The \MIL{} framework described above is then run to find a linear separator specific to the class. Note that both the database and the library of classifiers can be enriched very easily. Indeed, adding an image to the database only requires running it through the Faster R-CNN network and adding a new class only requires a MIL training. For training the \MIL{}, we use a batch size of 1000 examples (for smaller sets, all features are loaded into the GPU), 300 iterations of gradient descent with a learning rate of 0.01 and $\epsilon=0.01$ \eqref{eqn:MILS}. The whole process takes 750s for 20 classes on PASCAL VOC07 trainval (5011 images) with 12 random start points per class, on a consumer GPU (GTX 1080Ti). In practice, the random restarts are performed in parallel to take advantage of the presence of the features in the GPU memory, since the transfer of data from central RAM to the GPU memory is a bottleneck for our method. The 20 classes can be learned in parallel. For the experiments of Section~\ref{sec:iconart}, we also perform a grid search on the hyper-parameter C \eqref{eqn:LossReg} by splitting the training set into training and validation sets. We learn several couples ($w$,$b$) for each possible value of C (different initialisation) and the one that minimises the loss \eqref{eqn:MILS} for each class is selected (a schematic sketch of this training procedure is given below). % \section{Experiments} \label{sec:experiments} In this section, we perform weakly supervised detection experiments on different databases, in order to illustrate different assets of our approach.
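For concreteness, the following schematic shows the overall shape of the per-class training with random restarts described above. It is an illustration only: the real system optimises the \MIL{} loss \eqref{eqn:MILS} with the quoted hyper-parameters, whereas this sketch uses a generic max-over-regions hinge loss as a stand-in, and the function and variable names are ours.
\begin{verbatim}
# Schematic MIL training for one class (stand-in loss, not eqn. (MILS)).
# feats: (n_images, n_regions, d) Faster R-CNN features;
# labels: (n_images,) image-level labels in {-1, +1}.
import torch

def train_one_class(feats, labels, n_restarts=12, iters=300, lr=0.01, C=1.0):
    labels = labels.float()
    best = None
    for _ in range(n_restarts):  # random restarts (run in parallel in the real system)
        w = torch.randn(feats.shape[2], requires_grad=True)
        b = torch.zeros(1, requires_grad=True)
        opt = torch.optim.SGD([w, b], lr=lr)
        for _ in range(iters):
            region_scores = feats @ w + b                    # (n_images, n_regions)
            image_scores = region_scores.max(dim=1).values   # best region per image
            loss = torch.clamp(1 - labels * image_scores, min=0).mean() + C * w.pow(2).sum()
            opt.zero_grad(); loss.backward(); opt.step()
        if best is None or loss.item() < best[0]:
            best = (loss.item(), w.detach().clone(), b.detach().clone())
    return best[1], best[2]   # (w, b) of the restart with the lowest final loss
\end{verbatim}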
In all cases, and besides other comparisons, we compare our approach (\MIL{}) to the following baseline, which is actually the approach chosen for the detection experiments in~\cite{crowley_art_2016} (except that we do not perform box expansion): the idea is to consider that the region with the best "objectness" score is the region corresponding to the label associated with the image (positive or negative). This baseline will be denoted as \MAX{}. Linear SVM classifiers are learnt using these features, for each class, in a one-vs-the-rest manner. The weight parameter that produces the highest AP (Average Precision) score is selected for each class by a cross validation method\footnote{We use a 3-fold cross validation while \cite{crowley_art_2016} uses a constant training and validation set.} and then a classifier is retrained with the best hyper-parameter on all the training data per class. This baseline requires training several SVMs and is therefore costly. At test time, the labels and the bounding boxes are used to evaluate the performance of the methods in terms of AP per class. The generated boxes are filtered by NMS with an Intersection over Union (IoU) \cite{everingham_pascal_2007} threshold of 0.3 and a confidence threshold of 0.05 for all methods. \subsection{Experiments on PASCAL VOC} Before proceeding with the transfer learning and testing our method on paintings, we start with a sanity check experiment on PASCAL VOC2007~\cite{everingham_pascal_2007}. We compare our weakly supervised approach, \MIL{}, to the plain application of the fully supervised Faster R-CNN~\cite{ren_faster_2015} and to the weakly supervised MAX procedure recalled above. We perform the comparison using two different architectures (for the three methods), RES-101-VOC07 and RES-152-COCO, as explained in the previous section. \begin{savenotes} \begin{table} \caption{ \textbf{VOC 2007 test} Average precision (\%).
Comparison of the Faster R-CNN detector (trained in a fully supervised manner: FSD) and our \MILS{} algorithm (trained in a weakly supervised manner) for two networks, RES-101-VOC07 and RES-152-COCO.}
\label{table:VOC2007}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|cccccccccccccccccccc|c|}
Net & Method & aero & bicy & bird & boa & bot & bus & car & cat & cha & cow & dtab & dog & hors & mbik & pers & plnt & she & sofa & trai & tv & mean \\
\hline \hline
RES- & FSD \cite{he_deep_2015} & 73.6 & 82.3 & 75.4 & 64.0 & 57.4 & 80.2 & 86.5 & 86.2 & 52.7 & 85.2 & 66.9 & 87.0 & 87.1 & 82.9 & 81.2 & 45.7 & 76.8 & 71.2 & 82.6 & 75.5 & 75.0 \\ %
101- & \MAX{} & 20.8 & 47.0 & 26.1 & 20.2 & 8.3 & 41.1 & 44.9 & 60.1 & 31.7 & 54.8 & 46.4 & 42.9 & 62.2 & 58.7 & 20.9 & 21.6 & 37.6 & 16.7 & 42.0 & 19.8 & 36.2 \\ %
%
%
VOC07 & \MILS{}\footnote{Average performance over 100 runs of our algorithm.} & 63.5 & 78.4 & 68.5 & 54.0 & 50.7 & 71.8 & 85.6 & 77.1 & 52.7 & 80.0 & 60.1 & 78.3 & 80.5 & 73.5 & 74.7 & 37.4 & 71.2 & 65.2 & 75.7 & 67.7 & 68.3 $\pm$ 0.2 \\
\hline
RES- & FSD \cite{he_deep_2015} & 91.0 & 90.4 & 88.3 & 61.2 & 77.7 & 92.2 & 82.2 & 93.2 & 67.0 & 89.4 & 65.8 & 88.0 & 92.0 & 89.5 & 88.5 & 56.9 & 85.1 & 81.0 & 89.8 & 85.2 & 82.7 \\
152- & \MAX{}~\cite{crowley_art_2016} & 58.8 & 64.7 & 52.4 & 8.6 & 20.8 & 55.2 & 66.8 & 76.1 & 19.4 & 66.3 & 6.7 & 59.7 & 56.4 & 43.3 & 15.5 & 18.3 & 80.3 & 7.6 & 71.8 & 32.6 & 44.1 \\ %
%
%
COCO & \MILS{}\textsuperscript{\thefootnote} & 88.0 & 90.2 & 84.3 & 66.0 & 78.7 & 93.8 & 92.7 & 90.7 & 63.7 & 78.8 & 61.5 & 88.4 & 90.9 & 88.8 & 87.9 & 56.8 & 75.5 & 81.3 & 88.4 & 86.1 & 81.6 $\pm$ 0.3 \\ %
\end{tabular}
}
\end{table}
\end{savenotes}
As shown in Table \ref{table:VOC2007}, our weakly supervised approach (only considering annotations at the image level\footnote{However, observe that since we are relying on Faster R-CNN, our system uses a subpart trained using class-agnostic bounding boxes.}) yields performances that are only slightly below those of the fully supervised approach (which uses instance-level annotations). On average, the loss is only 1.1\% of mAP when using RES-152-COCO (for both methods). The baseline \MAX{} procedure (used for transfer learning on paintings in~\cite{crowley_search_2014}) yields notably inferior performances.
\subsection{Detection evaluation on Watercolor2k and People-Art databases}
We compare our approach with two recent methods performing object detection in artworks, one in a fully supervised way~\cite{westlake_detecting_2016} for detecting people, the other using a (partly) weakly supervised method to detect several VOC classes on watercolor images~\cite{inoue_crossdomain_2018}. For the learning stage, the first approach uses instance-level annotations on paintings, while the second one uses instance-level annotations on photographs and image-level annotations on paintings. In both cases, it is shown that using image-level annotations
both $8\times8$ and $4\times4$ block-size-based compression schemes are considered to observe the performance of handwritten word recognition. The DCT for a block can be given as
\begin{equation}
DCT(i,j)=\frac{1}{\sqrt{2N}}C(i)C(j)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)
\end{equation}
where $N$ is the block size and $f(x,y)$ (which also depends on $i$ and $j$) is given as
\begin{equation}
f(x,y) = pixel(x,y) \cos{\frac{(2x+1)i\pi}{2N}}\cos{\frac{(2y+1)j\pi}{2N}}
\end{equation}
and $C(k)$ is given as
\begin{equation}
C(k) = \frac{1}{\sqrt{2}} \text{ if } k \text{ is } 0 \text{, else } 1.
\end{equation}
Fig. \ref{fig:dct} shows the DCT image for an example input image.
\subsection{Feature Extraction using CNN}
Convolutional Neural Networks (CNNs) can automatically learn the important visual features of images. CNNs are built from different layers such as convolution layers, max-pooling layers, activation functions, etc. The convolution layer extracts features based on kernels which are learnt automatically. In particular, the max-pooling layer is used to detect variations in the images (i.e., detecting the brighter pixels in the DCT images). The activation functions introduce non-linearity into the CNN model. CNNs are trained using stochastic-gradient-descent-based algorithms in order to minimize the error of the model on the training dataset. In the proposed model, the 5-layer CNN receives a compressed image as input and produces a sequence of $32$ feature vectors of dimension $256$, which serves as the input to the BiLSTM module.
\subsection{Sequence Labeling using BiLSTM}
Recurrent Neural Networks (RNNs) are a variant of neural networks in which the connections between nodes follow a temporal order, allowing them to process temporal and sequential data. The function of RNNs can be expressed mathematically as follows:
\begin{equation}
y_{t} = W_{hy}h_t
\end{equation}
where $y_{t}$ is the output at the $t^{th}$ time step, $W_{hy}$ is a parameter matrix and $h_t$ is the hidden representation at the $t^{th}$ time step, computed as
\begin{equation}
h_{t} = f(h_{t-1},x_t) = \text{tanh}(W_{hh}h_{t-1}+W_{xh}x_{t})
\end{equation}
where $x_{t}$ is the input at the $t^{th}$ time step, $h_{t-1}$ is the hidden representation at the $(t-1)^{th}$ time step, and $W_{hh}$ and $W_{xh}$ are parameter matrices. In the proposed HWRCNet model, we exploit the Bi-directional Long Short Term Memory (BiLSTM) based RNN model for handwritten word recognition in the compressed domain. The BiLSTM module receives the sequence of $32$ feature vectors of length $256$ from the CNN as input. The BiLSTM model performs sequence labeling for word recognition by producing the characters of the words.
\subsection{CTC Loss}
The Connectionist Temporal Classification (CTC) loss is used as the objective function for training the model. The CTC loss works by summing the probabilities of all possible alignments between the label and the input, and is defined as
\begin{equation}
p(\text{Y} | \text{X}) = \sum_{\mathrm{A} \in \Omega_{\text{X},\text{Y}}} \prod_{\text{t=1}}^{\text{T}} p_t(a_t | \text{X})
\end{equation}
where X and Y are the input and output sequences of lengths T and U, respectively, $\Omega_{\text{X},\text{Y}}$ is the set of all valid alignments of length T between them, and $a_{t}$ is the state at time step $t$. The working of feature extraction using CNN, sequence labeling using BiLSTM, and CTC loss computation and decoding is illustrated in Fig. \ref{fig7}.
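As a concrete illustration of the block transform above, the short NumPy sketch below (illustrative code, not the authors' implementation) computes the DCT of a single $N\times N$ block directly from the formula:
\begin{verbatim}
import numpy as np

def block_dct(block):
    """Direct 2D DCT of one N x N block, following the formula above."""
    N = block.shape[0]
    C = lambda k: 1.0 / np.sqrt(2.0) if k == 0 else 1.0
    out = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x, y]
                          * np.cos((2 * x + 1) * i * np.pi / (2 * N))
                          * np.cos((2 * y + 1) * j * np.pi / (2 * N)))
            out[i, j] = C(i) * C(j) * s / np.sqrt(2 * N)
    return out

# Example: DCT of a random 8 x 8 block.
coeffs = block_dct(np.random.rand(8, 8))
\end{verbatim}
In practice a vectorized or library routine would be applied block by block over the whole image; the quadruple loop is kept here only to mirror the equation.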
\begin{figure}[!t]
\centerline{\includegraphics[width=8cm]{Fig5.png}}
\caption{Feature extraction and sequence labeling.}
\label{fig7}
\end{figure}
The steps of the proposed HWRCNet model are summarized as follows (a rough code sketch of this pipeline is given below):
\begin{itemize}
\item Cropping: It is used to crop the image from the center.
\item Gaussian blur: It smooths the image by averaging pixel values with a Gaussian kernel.
\item Increasing contrast: It is performed to enhance the contrast of the images.
\item Morphological operations: They are used to remove unwanted information from the images.
\item JPEG compression: The images are compressed using the Discrete Cosine Transform (DCT).
\item CNN: The compressed images are fed to the $5$-layer CNN model, whose first $2$ layers use $5\times5$ kernels and last $3$ layers use $3\times3$ kernels, each followed by a ReLU activation and a max-pooling operation.
\item BiLSTM: It performs sequence labeling on the features extracted by the CNN module and produces a 2D array of dimension (32, 80) as output. The IAM dataset used for the experiments contains 79 characters, and one more label is needed (a blank label for time steps that contain no character), giving 80 elements in total. For each time step, the predicted character is the one with the highest probability among these 80 elements.
\item CTC: It is used to train the proposed hybrid network. The CTC loss is computed between the output of the BiLSTM and the ground-truth word. The lengths of the input and output texts are assumed to be less than $2^{5}$.
\end{itemize}
\section{Experimental Results and Analysis}
To measure the efficacy of the proposed model, we perform experiments on a standard benchmark. In this section, we first provide the details of the dataset used and the experimental settings. We then present the results and analysis for handwritten word recognition in the compressed domain.
\subsection{Dataset Description}
The benchmark IAM Handwriting Dataset\footnote{https://fki.tic.heia-fr.ch/databases/iam-handwriting-database} \cite{b7} is used in this paper for the experiments. The dataset comprises images of handwritten English words contributed by 657 writers who provided specimens of their handwriting. The scanned text amounts to 1539 pages, which were segmented into 5685 labelled sentences, 13353 labelled lines and, finally, 115,320 labelled words. The words were extracted using an automatic segmentation method and the results were verified manually \cite{b7}.
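To make the pipeline summarized above concrete, a rough Keras-style sketch is given below. The layer sizes follow the bullet points above; the exact training code of the paper (built on the SimpleHTR framework) is not reproduced, so all names here are illustrative.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_hwrcnet(num_labels=80):  # 79 characters + 1 CTC blank
    # Input treated as (width, height, channels) = (128, 32, 1).
    inp = tf.keras.Input(shape=(128, 32, 1))
    x = inp
    # 5-layer CNN: two 5x5 and three 3x3 convolutions, each with ReLU and max-pooling.
    for filters, k, pool in [(32, 5, (2, 2)), (64, 5, (2, 2)),
                             (128, 3, (1, 2)), (128, 3, (1, 2)), (256, 3, (1, 2))]:
        x = layers.Conv2D(filters, k, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D(pool_size=pool)(x)
    # Collapse the height axis: (32, 1, 256) -> sequence of 32 vectors of size 256.
    x = layers.Reshape((32, 256))(x)
    x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
    out = layers.Dense(num_labels, activation='softmax')(x)  # (32, 80) per image
    return tf.keras.Model(inp, out)

def ctc_loss(y_true, y_pred, input_length, label_length):
    # Standard Keras CTC cost between the BiLSTM output and the ground-truth word.
    return tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
\end{verbatim}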
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{Fig3.png}}
\caption{Pre-processing of the image.}
\label{input}
\end{figure}
\begin{table}[!t]
\begin{center}
\caption{The architecture details of the CRNN model \cite{b3}.}
\label{crnn}
\begin{tabular}{l|l}
\textbf{Type} & \textbf{Description}\\ [0.5ex]
\hline
Input & $W \times 32$ gray-scale image \\
Conv+Pool & \#map 64, kernel $3 \times 3$, pool $2 \times 2$ \\
Conv+Pool & \#map 128, kernel $3 \times 3$, pool $2 \times 2$ \\
Conv & \#map 256, kernel $3 \times 3$ \\
Conv+Pool & \#map 256, kernel $3 \times 3$, pool $1 \times 2$ \\
Conv+BN & \#map 512, kernel $3 \times 3$ \\
Conv+Pool+BN & \#map 512, kernel $3 \times 3$, pool $1 \times 2$ \\
Conv & \#map 512, kernel $2 \times 2$ \\
Collapse & remove dimension \\
MDLSTM & 256 hidden units\\
Project & project into 80 classes \\
Transcription & -- \\ [1ex]
\end{tabular}
\end{center}
\end{table}
\begin{table}[!t]
\begin{center}
\caption{The architecture details of the proposed HWRCNet Model.}
\label{proposed}
\begin{tabular}{l|l}
\textbf{Type} & \textbf{Description}\\ [0.5ex]
\hline
Input & gray-value line image ($128 \times 32$) \\
Conv+Pool & \#map 32, kernel $5 \times 5$, pool $2 \times 2$ \\
Conv+Pool & \#map 64, kernel $5 \times 5$, pool $2 \times 2$ \\
Conv+Pool+BN & \#map 128, kernel $3 \times 3$, pool $1 \times 2$ \\
Conv+Pool & \#map 128, kernel $3 \times 3$, pool $1 \times 2$ \\
Conv+Pool & \#map 256, kernel $3 \times 3$, pool $1 \times 2$\\
Collapse & remove dimension \\
Forward LSTM & 256 hidden units\\
Backward LSTM & 256 hidden units\\
Project & project into 80 classes \\
CTC & decode or loss \\ [1ex]
\end{tabular}
\end{center}
\end{table}
\subsection{Experimental Settings}
The input is a grayscale image, pre-processed and resized to the dimension of $128\times32$ as shown in Fig. \ref{input}. We use the DCT with $8\times8$ as well as $4\times4$ block sizes for the compression of the images. The 5-layer CNN extracts features in the form of a sequence of $32$ vectors of dimension $256$, which serves as the sequential input to the BiLSTM. Word prediction is done using the BiLSTM, and the model is trained using the CTC loss. The dataset is divided into training and test sets in a 95:5 ratio. Training is done on the compressed images in the TensorFlow framework for 50 epochs with a batch size of 50 and a learning rate of 0.001, using the ADAM optimizer with $\beta_1=0.9$, $\beta_2=0.999$ and $\epsilon=10^{-8}$. We use the publicly available HTR code\footnote{https://github.com/githubharald/SimpleHTR} as our base framework. We use the Character Error Rate (CER) to analyze the performance of the proposed model. The CER is defined as
\begin{equation}
\text{CER} = \frac{ \sum_{i \in \text{samples}} \text{EditDistance}(GT_i, PT_i) }{ \sum_{i \in \text{samples}} \#\text{Chars}(GT_i) }
\end{equation}
where $GT_i$ is the ground truth, $PT_i$ is the predicted text, $\#\text{Chars}$ is the number of characters and $\text{EditDistance}(GT_i, PT_i)$ is the edit distance between the ground-truth and predicted strings.
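For reference, the CER above can be computed with a few lines of Python (illustrative code; the edit distance here is the standard Levenshtein distance computed by dynamic programming):
\begin{verbatim}
def edit_distance(a, b):
    """Levenshtein distance between strings a and b (dynamic programming)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution / match
    return dp[-1]

def character_error_rate(ground_truths, predictions):
    errors = sum(edit_distance(g, p) for g, p in zip(ground_truths, predictions))
    chars = sum(len(g) for g in ground_truths)
    return errors / chars

# e.g. character_error_rate(["hello"], ["hallo"]) == 0.2
\end{verbatim}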
one channel}. We denote $e_k(p|\mathcal{P},\phi)$ as the network output, which indicates the computed edge probability {of} the $k$-th semantic category at point $p$. Note that one point {may} belong to multiple categories. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{joint.pdf} \caption{ {Illustration of the joint refinement module, which} consists of two branches. } \label{fig:joint} \end{figure*} The {{\SemEdgeD}} stream shares the same feature encoder with the {{\SemSeg}} stream to force the network to prefer representations with {a} better generalization ability for both tasks. Our backbone is an FCN-based structure. However, one {major} drawback of {an} FCN-based structure is the loss in spatial information of the output after propagating through several alternated convolutional and pooling layers. {This drawback would} harm the localization performance, which is essential for our {\SemEdgeD} task. Besides, according to the findings of CASENet \cite{yu2017casenet}, the {low}{-}level features of CNN-based encoder are not suitable for semantic classification (due to the limited receptive fields) but {are able to} help augment top classifications by suppressing non-edge activations and providing detailed edge localization information. Based on these observations, we propose to extract enhanced features with hierarchical supervisions to alleviate the problem of spatial information loss and offer richer semantic cues for final classification. { \smallskip \noindent \textbf{Enhanced feature extraction.} In detail, we extract the feature maps generated by the layers of the shared encoder (Fig. \ref{fig:network}(a)). We then reduce the numbers of their feature channels and deconvolve them to the size of the input point cloud. The features of different layers participate and augment the final {\SemEdgeD} through a skip-layer architecture, as shown in Fig. \ref{fig:network}(b). \smallskip \noindent \textbf{Hierarchical supervision.} As shown in Fig. \ref{fig:network}(c), from the extracted feature maps of the first three layers, we generate binary edge point maps indicating the probability of points belonging to the semantic edges of any classes. From the last two layers, we generate two {\SemSegPoint} masks. All binary edge point maps are supervised by the weighted binary cross-entropy loss ($L_{bce}$) using ground-truth (GT) binary edges obtained from {the} GT {\SemSegPoint} masks. The two {\SemSegPoint} masks are supervised by the standard multi-class cross-entropy loss ($L_{seg}$). The output {\SemEdgePoint} maps of the {{\SemEdgeD}} stream are supervised by a weighted multi-label loss ($L_{edge}$) following the idea from CASENet \cite{yu2017casenet}. We will give details about different loss functions in Section \ref{Joint multi-task learning}. } \subsection{3D Semantic Segmentation} Based on different data representations, 3D {\SemSeg} methods can be roughly divided into three categories: multiview image-based, voxel-based, and point-based. Our method falls into the point-based category. Although the \emph{multiview image-based} methods {easily} benefit from the success of 2D CNN \cite{boulch2017unstructured,lawin2017deep}, for {\SemSeg}, they suffer from occluded surfaces {and} density variations{,} and rely heavily on viewpoint selection. 
{Based on} powerful 3D CNN \cite{roynard2018classification,ben20183dmfv,le2018pointgrid,meng2018vvnet,riegler2017octnet,graham20183d,Choy_2019}, \emph{voxel-based} methods achieve the best performance on several 3D {\SemSeg} datasets, but {they need intensive computation power}. {{Compared} with the previously mentioned methods, \emph{point-based} methods suffer less from information loss and {thus} achieve high point-level accuracy with less computation power consumption \cite{qi2017pointnet,qi2017pointnet++}.} They can be generally classified into four categories: neighboring feature pooling \cite{li2018so,Huang_2018,zhao2019pointweb,zhang2019shellnet}, graph construction \cite{wang2019dynamic,wang2019graph,jiang2019hierarchical,liu2019dynamic}, attention-based aggregation \cite{xie2018attentional}{,} and kernel-based convolution \cite{Su_2018,hua2018pointwise,wu2019pointconv,Lei_2019,Komarichev_2019,Lan_2019,mao2019interpolated,thomas2019kpconv}. {Among all the point-based methods, the recently proposed kernel-based method KPConv \cite{thomas2019kpconv} achieves the best performance for efficient 3D convolution. Thus, we adopt KPConv to build our backbone and refinement network.} \subsection{2D Semantic Edge Detection} \label{2D SED} Learning-based {\SemEdgeD} dates back to the {early} work of Prasad et al. \cite{prasad2006learning}. Later, Hariharan et al. \cite{hariharan2011semantic} introduced the first Semantic Boundaries Dataset. After the dawn of deep learning, {the HFL method} \cite{Bertasius_2015} builds a two-stage prediction process by using two deep CNNs for edge localization and classification, respectively. More recently, CASENet \cite{yu2017casenet} extended the CNN-based class-agnostic edge detector HED \cite{xie2015holistically} to {a} class-aware semantic edge detector {by combining} low\hb{-} and high-level features with a multi-label loss function for supervision. Later, several follow-up works \cite{liu2018semantic,yu2018simultaneous,acuna2019devil,hu2019panoptic} improved CASENet by adding diverse deep supervision and reducing annotation noises. {In {2D images}, semantic edges of different classes are weakly related since they are {essentially} occlusion boundaries of projected objects. Based on this observation, 2D {\SemEdgeD} methods treat {\SemEdgeD} of different classes as independent binary classification problems and utilize structures that limit the interaction between different classes like group convolution {modules}. Unlike the ones in 2D, semantic edges in 3D are physical boundaries of objects and thus {are highly related to each other}. In this work, we study the problem of 3D {\SemEdgeD} for the first time. We {adopt from} 2D methods the idea of extracting enhanced features and construct a network that {does not limit} the interaction between different classes.} \subsection{Joint Learning of Segmentation and Edge Detection} For 2D images, several works have explored the idea of combining networks for complementary segmentation and edge detection tasks to improve the learning efficiency, prediction accuracy, and generalization ability \cite{cheng2017fusionnet,bertasius2016semantic,peng2017large,su2019selectivity,lin2017refinenet,wu2019stacked,takikawa2019gated}. To be more specific, for salient object detection, researchers have exploited the duality between the binary segmentation and \emph{class-agnostic} edge detection tasks \cite{su2019selectivity,wu2019stacked}. 
As for {\SemSeg}, such \emph{class-agnostic} edges are used to build semantic segmentation masks with finer boundaries \cite{takikawa2019gated,peng2017large,lin2017refinenet,bertasius2016semantic,cheng2017fusionnet}. In contrast, we tackle the problem of joint learning for {\SemSeg} and \emph{class-aware} {\SemEdgeD}. Furthermore, {unlike previous works, which limit the interactions between segmentation and edge detection to the sharing of features and network structures, our method} exploits the close relationship between {\SemSegPoint} masks and {\SemEdgePoint} maps.
\section{Dataset selection and preparation.}\label{dataset}
In the main paper, we conduct all the experiments on two indoor-scene datasets: S3DIS \cite{armeni_cvpr16} and ScanNet \cite{dai2017scannet}. The main reason we do not use an outdoor-scene dataset is that we find semantic edges are not well defined in existing outdoor-scene datasets. As shown in Fig. 1, compared to the indoor-scene datasets, existing outdoor-scene datasets suffer more from incompleteness. Objects in an outdoor scene are often not densely connected due to the missing parts in the point cloud. Therefore, it is hard to define meaningful semantic edges on these point clouds for our evaluation.
\begin{figure}[htp]
\centering
\includegraphics[width=.4\textwidth]{Semantic3D.pdf}\hfil
\includegraphics[width=.4\textwidth]{ScanNet.pdf}
\caption{(Left) An outdoor scene from Semantic3D \cite{hackel2017isprs}; (Right) an indoor scene from ScanNet \cite{dai2017scannet}.}
\end{figure}
We generate 3D semantic edges following the idea from 2D works \cite{yu2017casenet,prasad2006learning}, with slight differences. In 2D, thin semantic edges of one or two pixels in width are generated. In contrast, we generate thick semantic edges in 3D since points in a point cloud are much sparser than pixels in an image. Moreover, boundaries between an object and the background are considered as semantic edges in 2D images. However, these boundaries are meaningless in the 3D case. Thus, in 3D, we only consider semantic edges between different objects. In general, all semantic edge points have two or more class labels. Since there are unconsidered classes in the ScanNet dataset, semantic edges between a considered class and an unconsidered class might have only one class label.
\section{Complexity of the network, in comparison with other works.}\label{complexity}
In this section, we compare the complexity of our network against state-of-the-art methods. All the experiments have been conducted on a PC with 8 Intel(R) i7-7700 CPUs and a single GeForce GTX 1080Ti GPU.
\subsubsection{Training.} We train KPConv and JSENet on the ScanNet dataset. Using the setting presented in their paper, KPConv takes about 0.7s for one training iteration and converges in 160K iterations, taking about 31h in total. Using the setting presented in our paper, in the first step, JSENet takes about 0.9s for one training iteration and converges in 170K iterations. In the second step, JSENet takes about 0.6s for one training iteration and converges in 40K iterations. The whole training takes about 49h.
\begin{table}
\small
\begin{center}
\caption{Comparison of the runtime complexity of JSENet against state-of-the-art methods.
} \label{table:complexity}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{ | l | c | c |}
\hline
Method & Average time (s) & Parameters (M)\\
\hline
KPConv \cite{thomas2019kpconv} & 0.044 & 14.1 \\
\hline
PointConv \cite{wu2019pointconv} & 0.307 & 21.7\\
\hline
MinkowskiNet \cite{Choy_2019} & 0.185 & 37.9\\
\hline
JSENet & 0.097 & 16.2\\
\hline
\end{tabular}}
\end{center}
\end{table}
\vspace{-5mm}
\subsubsection{Inference.} We compare KPConv, PointConv, MinkowskiNet, and JSENet in terms of runtime complexity, given the same sets of points extracted from the ScanNet dataset (13000 points
\section{Introduction}
A dissipative soliton is a stable, strongly localized structure forming inside a nonlinear dissipative system under suitable conditions \cite{Akhmediev}. Its applications range from optics, condensed-matter physics, and cosmology to biology and medicine. Dissipative solitons arise in an open nonlinear system, far from equilibrium, and a continuous supply of energy is essential for them. More specifically, pulse-like dissipative solitons form inside a nonlinear active medium as a result of a double balance: between the medium's nonlinearity and its dispersion, and between the gain and loss mechanisms that change the pulse energy. Owing to this dual balance, the parameters of a dissipative soliton, such as its amplitude, width, chirp, and phase, do not depend on the initial conditions. Active optical waveguides provide a fertile ground for observing optical dissipative solitons (ODSs) by launching short optical pulses inside them. In practice, however, such ODSs are sensitive to perturbations such as higher-order dispersion and self-steepening that become non-negligible for femtosecond pulses. Another important nonlinear effect for such short pulses is the intrapulse Raman scattering (IRS) that leads to a continuous red-shift of the pulse spectrum. In this paper we study the effects of IRS and other perturbations on the ODS dynamics through a variational technique \cite{Bondeson}. The variational technique is a standard method used extensively for both dissipative \cite{Kaup} and non-dissipative \cite{Anderson} soliton systems. Its application is straightforward for conservative (non-dissipative) systems by choosing a suitable Lagrangian density \cite{Anderson}. The Lagrangian needs to be modified in the case of dissipative systems such that it consists of a conservative part and a dissipative part \cite{Cerda}. Construction of a Rayleigh dissipation function is an alternative method to handle the dissipative effects \cite{Royjlt}. In all cases, the Lagrangian density is reduced by integrating over time. This reduction process requires a suitable \textit{ansatz}. The variational technique assumes that the functional form of the ansatz remains intact in the presence of a small perturbation, but all parameters appearing in the ansatz (amplitude, width, position, phase, frequency, etc.) may evolve with propagation. The reduced variational problem, followed by the Ritz optimization, leads to a set of coupled ordinary differential equations (ODEs) that governs the evolution of the individual pulse parameters under the influence of the perturbation~\cite{GPAbook1}. A proper choice of the ansatz is critical for the success of any variational approach. For example, soliton perturbation theory uses the hyperbolic-secant profile of a Kerr soliton as its ansatz with considerable success~\cite{GPAbook1}. However, this form is not suitable for ODSs as they represent chirped optical pulses. In this work we adopt the Pereira--Stenflo solution~\cite{PS} of the Ginzburg--Landau equation (GLE) and show that the choice of this solution as an ansatz for the variational problem is much superior to the choice of a Kerr soliton. We examine the dynamics of various pulse parameters and accurately predict both the magnitude of the spectral red-shift of the ODS initiated by the IRS and the corresponding changes in its speed. We also show that the ODS undergoes a slight blue-shift when self-steepening acts as a perturbation.
The characteristic shift in the ODS location induced by the third-order dispersion (TOD) is also captured by the variational treatment presented here. As a special case, we consider the ODS formation inside an active silicon waveguide where free carriers are generated through multi-photon absorption and examine the perturbing effects of free carriers on an ODS\@. To verify the accuracy of our variational results, we compare them to the full numerical solution of the GLE and find a reasonable agreement between the two. We also propose some closed-form solutions which may prove more convenient to use in practice.
\section{THEORY} \label{Ginzburg--Landau Equation}
To be realistic and to take into account several practical perturbations, we choose a silicon-based, active, nano-photonic waveguide \cite{Agazzi} and study the formation and evolution of ODSs in such a system. In such a waveguide, the leading loss mechanism comes from two-photon absorption (TPA) when pumped at a wavelength below 2.2~$\mu$m. As a consequence of TPA, free carriers are generated inside the waveguide; they introduce an additional loss, the so-called free-carrier absorption (FCA), and also change the refractive index \cite{Dieter, Tomita} through a phenomenon called free-carrier dispersion (FCD). In our model we take these effects into account by coupling the carrier dynamics with the complex GLE that governs the pulse dynamics \cite{Lin,Roy-M-B}. This equation is a kind of nonlinear Schr\"{o}dinger equation with complex coefficients representing growth and damping \citep{PS, Anderson_Scr}. Its classical solution is known as the Pereira--Stenflo soliton \cite{PS} and it constitutes a specific example of a dissipative soliton. The extended GLE describing the evolution of optical pulses inside a silicon-based active waveguide can be written in the following normalized form \cite{Roy-M-B, GPAbook2}:
\begin{align} \label{gl}
i\frac{\partial u}{\partial \xi }-\frac{1}{2}sgn\left( {{\beta }_{2}} \right)\frac{{{\partial }^{2}}u}{\partial {{\tau }^{2}}}-i\left( {{g}_{0}}+{{g}_{2}}\frac{{{\partial }^{2}}}{\partial {{\tau }^{2}}} \right)u+i\alpha u \nonumber \\
+\left( 1+iK \right){{\left| u \right|}^{2}}u -i{{\delta }_{3}}\frac{{{\partial }^{3}}u}{\partial {{\tau }^{3}}} -\tau_R u \frac{\partial {\left| u \right|}^{2}}{\partial \tau } \nonumber \\
+i s\frac{\partial{({\left| u \right|}^{2} u)}}{\partial \tau} +\left( \frac{i}{2}-\mu \right){{\phi }_{c}}u=0
\end{align}
where the free-carrier effects are included through the normalized density parameter $\phi_c$ that satisfies the rate equation \cite{Lin},
\begin{equation} \label{carrier}
d\phi_c/d\tau=\theta|u|^4-\tau_c \phi_c .
\end{equation}
The time and distance variables are normalized as $\tau=t/t_0$ and $\xi=z/L_D$, where $t_0$ is the initial pulse width and $L_D=t_0^2/|\beta_2 (\omega_0)|$ is the dispersion length, $\beta_2(\omega_0)$ being the group-velocity dispersion coefficient at the carrier frequency $\omega_0$. The preceding equations contain multiple dimensionless parameters. The TOD, IRS and self-steepening parameters are normalized as $\delta_3=\beta_3/(3!|\beta_2| t_0)$, $\tau_R=T_R/t_0$ and $s=1/(\omega_0 t_0)$, where $T_R$ is the first moment of the Raman response function \cite{GPAbook1}. The field amplitude $A$ is rescaled as $A = u\sqrt{P_0}$, where the peak power $P_0=|\beta_2 (\omega_0 )|/(t_0^2 \gamma_R)$, $\gamma_R=k_0 n_2/A_{eff}$, and $n_2\approx(4\pm 1.5)\times 10^{-18} \ m^2 W^{-1}$ is the Kerr-nonlinear coefficient of silicon.
The dimensionless TPA coefficient is given as $K=\gamma_I/\gamma_R=\beta_{TPA}\lambda_0/(4\pi n_2)$, where $\beta_{TPA}\approx 8\times 10^{-12} \ m W^{-1}$ and $\gamma_I=\beta_{TPA}/(2A_{eff})$. The linear loss coefficient is normalized as $\alpha=\alpha_l L_D$. The free-carrier density $N_c$ is related to $\phi_c$ as $\phi_c=\sigma N_c L_D$, where $\sigma\approx 1.45\times 10^{-21}\ m^2$ is the FCA cross section of silicon at $\lambda_0=1.55~\mu m$ \cite{Rong}. The generation of free carriers is regulated by the parameter $\theta=\beta_{TPA} |\beta_2 |\sigma/(2\hbar\omega_0 A_{eff}^2 t_0 \gamma_R^2)$ \cite{Lin-Z-P}. The parameter $\mu=2\pi k_c/(\sigma \lambda_0)$ is the FCD coefficient with $k_c\approx 1.35\times 10^{-27}\ m^3$ \cite{Dinu}. The carrier recombination time $t_c$ is scaled as $\tau_c=t_0/t_c$. The gain $G$ and the gain dispersion coefficient $g_2$ are normalized as $g_0 = GL_D$ and $g_2= g_0(T_2/t_0)^2$, where $T_2$ is the dephasing time. The spectral wings of the pulse experience less gain because of the finite gain bandwidth related to $g_2$. In the absence of TOD ($\delta_3=0$), IRS ($\tau_R=0$), self-steepening ($s=0$) and free carriers (i.e., $\phi_c=0$), Eq.\ \eqref{gl} reduces to the standard GLE, which is known to have a stable ODS solution of the following form \cite{PS, GPAbook2, Desurvire}:
\begin{equation} \label{ansatz}
u\left( \xi, \tau \right)={{u}_{0}}{{\left[ \text{sech}\left( \eta \tau \right) \right]}^{\left( 1+ia \right)}}{{e}^{i\text{ }\!\!\Gamma\!\!\text{ }\xi }},
\end{equation}
where the four parameters $u_0,\ \eta,\ a$ and $\Gamma$ are given by~\cite{GPAbook2}:
\begin{subequations} \label{ansatz_part}
\begin{align}
|u_0|^{2}& = \frac{(g_{0}-\alpha)}{K}\left[ 1-\frac{sgn(\beta_{2})a/2 + g_{2}} {{g}_{2} ({{a}^{2}}-1 )- sgn(\beta_2)a} \right], \\
\eta^{2} &= \frac{({{g}_{0}}-\alpha )}{ \left[{{g}_{2}}\left( {{a}^{2}}-1 \right)-sgn\left( {{\beta }_{2}} \right)a \right]}, \\
\Gamma &= \frac{{{\eta }^{2}}}{2}\left[ sgn\left( {{\beta }_{2}} \right)\left( {{a}^{2}}-1 \right)+4a{{g}_{2}} \right], \\
a &= \frac{H-\sqrt{{{H}^{2}}+2{{\delta }^{2}}}}{\delta}.
\end{align}
\end{subequations}
Here, $H=-[(3/2)sgn(\beta_2 )+3g_2 K]$ and $\delta=-[2g_2-sgn(\beta_2)K]$. The preceding solution was first obtained in 1977 and is known as the Pereira--Stenflo soliton~\cite{PS}.
\section{VARIATIONAL ANALYSIS} \label{perturbative}
The ODS solution exists only when the four terms in Eq.\ \eqref{gl} related to TOD, IRS, self-steepening and free carriers are neglected ($\delta_3=0$, $\tau_R=0$, $s=0$, $\phi_c=0$). The important question is how these terms affect the ODS solution. One can study their impact by solving Eq.\ \eqref{gl} numerically. However, this approach provides little physical insight. In this section we treat the four terms as small perturbations and study their impact through a variational analysis. The variational method has been used with success in the past for many pulse-propagation problems \cite{Bondeson,Kaup,Anderson,Cerda,Royjlt}. It requires a suitable \textit{ansatz}.
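As a convenience, Eqs.~\eqref{ansatz_part} are easy to evaluate numerically; the short script below (parameter values are purely illustrative) returns the chirp $a$, width parameter $\eta$, amplitude $|u_0|$ and wavenumber $\Gamma$ of the Pereira--Stenflo soliton:
\begin{verbatim}
import numpy as np

def pereira_stenflo(g0, alpha, g2, K, sgn_beta2=-1):
    """Parameters of the chirped sech solution given above."""
    H = -(1.5 * sgn_beta2 + 3.0 * g2 * K)
    delta = -(2.0 * g2 - sgn_beta2 * K)
    a = (H - np.sqrt(H**2 + 2.0 * delta**2)) / delta
    denom = g2 * (a**2 - 1.0) - sgn_beta2 * a
    eta2 = (g0 - alpha) / denom
    u0_sq = (g0 - alpha) / K * (1.0 - (sgn_beta2 * a / 2.0 + g2) / denom)
    Gamma = 0.5 * eta2 * (sgn_beta2 * (a**2 - 1.0) + 4.0 * a * g2)
    return a, np.sqrt(eta2), np.sqrt(u0_sq), Gamma

# Illustrative values only (anomalous dispersion, weak gain dispersion and TPA).
print(pereira_stenflo(g0=0.5, alpha=0.1, g2=0.05, K=0.1))
\end{verbatim}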
stack top are popped. Therefore, the values 6 and 12 are popped. Therefore the stacks look like:
```
LOWERBOUND: 1
UPPERBOUND: 4
```
The reduction step is now applied to the second partition, that is, from the 6th to the 12th element. After the reduction step, 98 is fixed in the 11th position. So, the second partition has only one element. Therefore, we push the upper and lower boundary values of the first partition onto the stack. So, the stacks are as follows:
```
LOWERBOUND: 1, 6
UPPERBOUND: 4, 10
```
The processing proceeds in the following way and ends when the stacks do not contain the upper and lower bounds of any partition to be processed, and the list gets sorted.

The Stock Span Problem

In the stock span problem, we will solve a financial problem with the help of stacks. Suppose, for a stock, we have a series of n daily price quotes. The span of the stock's price on a given day is defined as the maximum number of consecutive days (ending on that day) for which the price of the stock was less than or equal to its price on that day.

An algorithm which has Quadratic Time Complexity

Input: An array P with n elements
Output: An array S of n elements such that S[i] is the largest integer k such that k <= i + 1 and P[j] <= P[i] for j = i - k + 1, ..., i

Algorithm:
```
1. Initialize an array P which contains the daily prices of the stocks
2. Initialize an array S which will store the span of the stock
3. for i = 0 to i = n - 1
   3.1 Initialize k to zero
   3.2 Done with a false condition
   3.3 repeat
       3.3.1 if ( P[i - k] <= P[i] ) then Increment k by 1
       3.3.2 else Done with true condition
   3.4 till (k > i) or done with processing
   3.5 Assign value of k to S[i] to get the span of the stock
4. Return array S
```
Now, analyzing this algorithm for running time, we observe:

• We have initialized the array S at the beginning and returned it at the end. Initializing and returning an array of n elements takes O(n) time.
• The repeat loop is nested within the for loop. The for loop, whose counter is i, is executed n times. The statements which are not in the repeat loop, but in the for loop, are executed n times. Therefore these statements and the incrementing and condition testing of i take O(n) time.
• In iteration i of the outer for loop, the body of the inner repeat loop is executed at most i + 1 times. In the worst case, element P[i] is greater than all the previous elements. So, testing the if condition, the statement after that, as well as testing the until condition, will be performed i + 1 times during iteration i of the outer for loop. Hence, the total time taken by the inner loop is O(n(n + 1)/2), which is O(n^2).

The running time of the algorithm is calculated by adding the time taken by these three steps. The first two terms are O(n) while the last term is O(n^2). Therefore the total running time of the algorithm is O(n^2).

An algorithm that has Linear Time Complexity

In order to calculate the span more efficiently, we observe that the span on a particular day can be easily calculated if we know the closest day before day i on which the price of the stock was higher than the price on day i. We denote this day by h(i); if there is no such day, we set h(i) to -1. The span of a particular day is then given by the formula s = i - h(i). To implement this logic, we use a stack as an abstract data type to store the days i, h(i), h(h(i)) and so on. A short Python sketch of this idea is given below; the formal algorithm and its analysis follow.
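In the sketch, days that can no longer be h(j) for any later day j are popped from the stack, and the span is computed as i - h(i) (the function name is illustrative):
```
def stock_span(prices):
    """Span of each day's price using a stack of candidate days h(i)."""
    spans = [0] * len(prices)
    stack = []                      # indices of days with prices greater than the current price
    for i, price in enumerate(prices):
        while stack and prices[stack[-1]] <= price:
            stack.pop()             # these days can never be h(j) for a later day j
        h = stack[-1] if stack else -1
        spans[i] = i - h
        stack.append(i)
    return spans

# Example: stock_span([100, 80, 60, 70, 60, 75, 85]) returns [1, 1, 1, 2, 1, 4, 6]
```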
When we go from day i-1 to day i, we pop from the stack the days on which the price of the stock was less than or equal to P[i] and then push day i onto the stack. Here, we assume that the stack is implemented with operations that take O(1), that is, constant time. The algorithm is as follows:

Input: An array P with n elements and an empty stack N
Output: An array S of n elements such that S[i] is the largest integer k such that k <= i + 1 and P[j] <= P[i] for j = i - k + 1, ..., i

Algorithm:
```
1. Initialize an array P which contains the daily prices of the stocks
2. Initialize an array S which will store the span of the stock
3. for i = 0 to i = n - 1
   3.1 Done with a false condition
   3.2 while not (Stack N is empty or done with processing)
       3.2.1 if ( P[i] >= P[N.top()] ) then Pop a value from stack N
       3.2.2 else Done with true condition
   3.3 if Stack N is empty
       3.3.1 Initialize h to -1
   3.4 else
       3.4.1 Initialize h to the value at the top of stack N
   3.5 Put the value of i - h in S[i]
   3.6 Push the value of i in N
4. Return array S
```
Now, analyzing this algorithm for running time, we observe:

• We have initialized the array S at the beginning and returned it at the end. Initializing and returning an array of n elements takes O(n) time.
• The while loop is nested within the for loop. The for loop, whose counter is i, is executed n times. The statements which are not in the while loop, but in the for loop, are executed n times. Therefore these statements and the incrementing and condition testing of i take O(n) time.
• Now, observe the inner while loop during iteration i of the for loop. The statement "Done with true condition" is executed at most once, since it causes an exit from the loop. Let us say that t(i) is the number of times the statement "Pop a value from stack N" is executed. Then the condition "while not (Stack N is empty or done with processing)" is tested at most t(i) + 1 times.
• Adding the running time of all the operations in the while loop over all iterations, we get Σ_{i=0}^{n-1} (t(i) + 1).
• An element, once popped from the stack N, is never pushed back into it. Therefore Σ_{i=0}^{n-1} t(i) <= n, so the running time of all the statements in the while loop is O(n).

The running time of the algorithm is calculated by adding the time taken by all these steps. Each step takes O(n) time. Hence the running time complexity of this algorithm is O(n).

Queues

A queue is a basic data structure that is used throughout programming. You can think of it as a line in a grocery store: the first one in the line is the first one to be served, just like in a queue. A queue is also called a FIFO (First In First Out) structure to reflect the way it accesses data.

`enqueue(new-item:item-type)`
 Adds an item onto the end of the queue.
`front():item-type`
 Returns the item at the front of the queue.
`dequeue()`
 Removes the item from the front of the queue.
`is-empty():Boolean`
 True if no more items can be dequeued and there is no front item.
`is-full():Boolean`
 True if no more items can be enqueued.
`get-size():Integer`
 Returns the number of elements in the queue.

All operations except `get-size()` can be performed in O(1) time. `get-size()` runs in at worst O(N).

The basic linked-list implementation uses a singly-linked list with a tail pointer to keep track of the back of the queue.
```
type Queue
  data tail:List Iterator

  constructor()
    tail := list.get-begin()   # null
  end constructor
```
When you want to enqueue something, you simply add it after the item pointed to by the tail pointer: the new item becomes the next item of the previous tail, and the tail pointer is updated to point to the new item. A small Python version of such a queue is sketched below.
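The following sketch (class and method names are illustrative) implements the same idea with a head pointer for the front and a tail pointer for the back; here the size is tracked explicitly, so `get_size()` is O(1).
```
class _Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class Queue:
    """FIFO queue backed by a singly-linked list with a tail pointer."""
    def __init__(self):
        self.head = None        # front of the queue
        self.tail = None        # back of the queue
        self.size = 0

    def is_empty(self):
        return self.head is None

    def enqueue(self, item):
        node = _Node(item)
        if self.tail is None:   # empty queue: the new node is both head and tail
            self.head = node
        else:                   # link the new node after the old tail
            self.tail.next = node
        self.tail = node
        self.size += 1

    def front(self):
        if self.head is None:
            raise IndexError("front of empty queue")
        return self.head.value

    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from empty queue")
        value = self.head.value
        self.head = self.head.next
        if self.head is None:   # the queue became empty
            self.tail = None
        self.size -= 1
        return value

    def get_size(self):
        return self.size
```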
position to define the Gibbs measures associated to $H$ (see \cite{B-Pre76} for instance).
\begin{definition}
A probability measure $P$ on $\O$ is a {\bf Gibbs measure} for the energy function $H$ if $P(\Omega_\infty)=1$ and if for every bounded Borel set $\L$, for any measurable and bounded function $g$ from $\O$ to $\ensuremath{{\mathbb{R}}}$,
\begin{equation}\label{DLR}
\int g(\ensuremath{\omega})P(d\ensuremath{\omega}) = \int \int g(\ensuremath{\omega}'_\L \cup \ensuremath{\omega}_{\L^c}) f_\L(\ensuremath{\omega}'_\L \cup \ensuremath{\omega}_{\L^c}) \pi_\L({\rm d}\ensuremath{\omega}'_\L) P(d\ensuremath{\omega}).
\end{equation}
Equivalently, for $P$-almost every $\ensuremath{\omega}$ the conditional law of $P$ given $\ensuremath{\omega}_{\L^c}$ is absolutely continuous with respect to $\pi_\L$ with the density $f_\L$ defined in \eqref{localdensity}.
\end{definition}
The equations (\ref{DLR}) are called the Dobrushin--Lanford--Ruelle (DLR) equations. The existence of such Gibbs measures, in the present setting of finite range stable interactions, is proved in \cite{A-DerDroGeo09}, Corollary 3.4 and Remark 3.1. Note that the uniqueness of such $P$ does not necessarily hold, which leads to the phase transition phenomenon. We denote by $\ensuremath{\mathcal{G}}_H$ the set of all Gibbs measures for the energy $H$.
\section{Variational Principle}
The variational principle in statistical mechanics claims that the Gibbs measures are the minimizers of the free excess energy, defined as the sum of the mean energy and the specific entropy. Moreover the minimum is equal to minus the pressure. Let us first define precisely all these macroscopic quantities. For the sake of simplicity we consider the macroscopic limit along the sequence of sets $\L_n=[-n,n]^d$, $n\ge 1$. Limits $\L\to\ensuremath{{\mathbb{R}^d}}$ in the Van Hove sense could have been considered as well. Let $P$ be a stationary probability measure in $\P$. The {\bf specific entropy} of $P$ is defined as the limit
\begin{equation}\label{entropy}
\ensuremath{{\mathcal{I}}}(P)=\lim_{n\to +\infty} \frac{1}{|\L_n|} \ensuremath{{\mathcal{I}}}(P_\ensuremath{{\Lambda_n}},\pi_\ensuremath{{\Lambda_n}}),
\end{equation}
where for any probability measures $\mu$ and $\nu$
$$ \ensuremath{{\mathcal{I}}}(\mu,\nu)= \left\{ \begin{array}{ll} \int \ln(f) d\mu & \text{ if } \mu\ll\nu \text{ with density } f\\ +\infty & \text{otherwise} \end{array}\right..$$
Note that the limit in \eqref{entropy} always exists; see \cite{georgii} for general results on specific entropy. Let us now introduce the {\bf pressure}. It is defined as the following limit:
\begin{equation}\label{pressure}
p_H:=\lim_{n\to +\infty} \frac{1}{|\L_n|} \ln (Z_{n}),
\end{equation}
where $Z_n=Z_{\L_n}(\emptyset)$ is the partition function with empty boundary condition. In the following lemma we show that $p_H$ always exists in the setting of the present paper.
\begin{lemme}\label{existencepressure}
Assume that the energy function $H$ is finite range, stable and non-degenerate. Then the pressure $p_H$ defined in \eqref{pressure} exists and belongs to $[-1,(e^A-1)]$.
\end{lemme}
\begin{proof}
For any set $\L$ we denote by $\L^\ominus$ the set
$$ \L^\ominus=\{ x\in\L, B(x,R_0)\subset \L\},$$
where $R_0$ is an integer larger than the range of the interaction $R$. So for $n>R_0$, $\L_n^\ominus=\L_{n-R_0}$. For any $R_0<m<n$, we consider the Euclidean division $n=km+l$ with $0\le l<m$, $k \ge 0$.
Let $(\L_m^i)_{1\le i \le k^d}$ be a family of $k^d$ disjoint cubes inside $\L_n$, where each cube is a translate of $\L_m$. From the definition of the partition function,
\begin{eqnarray*}
Z_n & \ge & \pi_{\L_n}(\ensuremath{\omega}_{\L_n\backslash \cup_i \L_m^{i, \ominus}}=\emptyset)\int e^{-H(\ensuremath{\omega})}\pi_{\cup_i \L_m^{i, \ominus}}(d\ensuremath{\omega}) \\
& = & e^{-(|\L_n|-k^d|\L^\ominus_m|)} \prod_i Z_{\L_m^{i,\ominus}}(\emptyset)\\
& \ge & e^{-( k^d2dR_0(2m)^{d-1}+2dm(2n)^{d-1}) } Z_{{m-R_0}}^{k^d}.
\end{eqnarray*}
So, since $|\L_n|/k^d$ goes to $|\L_m|$ when $n$ goes to infinity,
$$ \liminf_{n\to \infty} \frac{1}{|\L_n|} \ln(Z_n) \ge \frac{1}{|\L_m|} \big( \ln(Z_{m-R_0})- 2dR_0 (2m)^{d-1} \big).$$
This inequality holds for each $m\ge R_0$. So, letting $m$ tend to infinity,
$$ \liminf_{n\to \infty} \frac{1}{|\L_n|} \ln(Z_n) \ge \limsup_{m\to \infty} \frac{1}{|\L_m|} \ln(Z_{m-R_0}) = \limsup_{m\to \infty} \frac{1}{|\L_m|} \ln(Z_{m}),$$
which proves that the limit exists in $\ensuremath{{\mathbb{R}}}\cup\{\pm \infty\}$. Thanks to the stability and the non-degeneracy of $H$ we get that
$$ e^{-|\L_n|} \le Z_n \le e^{|\L_n|(e^A-1)},$$
which implies that $p_H\in[-1,(e^A-1)]$.
\end{proof}
The last macroscopic quantity involves the mean energy of a stationary probability measure $P$. It is also defined by a limit but, in contrast to the other macroscopic quantities, we have to assume that it exists. The proof of such existence is based on stationarity arguments and a nice representation of the energy contribution per unit volume. Examples are given in Section \ref{Sec:Examples}. So for any stationary probability measure $P$ we assume that the following limit exists:
\begin{equation}\label{meanenergy}
H(P):=\lim_{n\to \infty} \frac{1}{|\L_n|} \int H(\ensuremath{\omega}_{\L_n}) P(d\ensuremath{\omega}),
\end{equation}
and we call this limit the {\bf mean energy} of $P$. We need to introduce a last technical assumption on the boundary effects of $H$. We assume that for any $P$ in $\ensuremath{\mathcal{G}}_H$
\begin{equation}\label{boundary}
\lim_{n\to\infty} \frac{1}{|\L_n|} \int \partial H_{\L_n}(\ensuremath{\omega})P(d\ensuremath{\omega})=0,
\end{equation}
where $\partial H_{\L_n}(\ensuremath{\omega})= H_{\L_n}(\ensuremath{\omega})-H(\ensuremath{\omega}_{\L_n})$. This assumption is satisfied by all the examples we consider.
\begin{theoreme}\label{mainTh}
We assume that $H$ is stationary, hereditary, non-degenerate, stable and finite range. Moreover we assume that the mean energy exists for any stationary probability measure $P$ (i.e. the limit \eqref{meanenergy} exists) and that the boundary effects assumption \eqref{boundary} holds. Then for any stationary probability measure $P\in\P$
\begin{equation}\label{inequality}
I(P)+H(P) \ge -p_H,
\end{equation}
with equality if and only if $P$ is a Gibbs measure (i.e. $P\in\ensuremath{\mathcal{G}}_H$).
\end{theoreme}
\section{Examples}\label{Sec:Examples}
In this section we present two examples of energy functions included in the setting of Theorem \ref{mainTh}. The first example is the standard superstable pairwise potential energy. The second example involves the Quermass interaction, which is an energy function for morphological patterns built as unions of random convex sets. It can also be viewed as an infinite-body potential interaction. The main restriction in Theorem \ref{mainTh} is the finite range property, and so any standard example having this property could have been considered as well.
\subsection{Pairwise potential}
In this section the energy function has the following expression: for any finite configuration $\ensuremath{\omega}\in\Omega_f$,
\begin{equation}\label{pairwise}
H(\omega)=zN(\omega)+\sum_{\{x,y\}\subset \omega} \phi(x-y),
\end{equation}
where $\phi$ is a symmetric function from $\ensuremath{{\mathbb{R}^d}}$ to $\ensuremath{{\mathbb{R}}}\cup\{+\infty\}$ with compact support. The parameter $z>0$ is called the activity and allows one to change the intensity of the reference Poisson point process. The potential $\phi$ is said to be stable if the associated energy $H$ in \eqref{pairwise} is stable. In the following we need the potential $\phi$ to be {\bf superstable}, which means that $\phi$ is the sum of a stable potential and a non-negative potential which is positive in a neighbourhood of the origin. See \cite{Ruelle} for examples of stable and superstable pairwise potentials. In this setting the variational principle holds as a corollary of Theorem \ref{mainTh}.
\begin{corollaire}
Let $H$ be an energy function coming from a superstable pairwise potential $\phi$ given by \eqref{pairwise}. Then for any stationary probability measure $P\in\P$
\begin{equation}
I(P)+H(P) \ge -p_H,
\end{equation}
with equality if and only if $P$ is a Gibbs measure (i.e. $P\in\ensuremath{\mathcal{G}}_H$). The expression of $H(P)$ is given in \eqref{MeanEPW}.
\end{corollaire}
\begin{proof}
Let us check the assumptions of Theorem \ref{mainTh}. It is obvious that $H$ is stationary, hereditary, non-degenerate, stable and finite range. The existence of the mean energy is proved in \cite{Georgii94}, Theorem 1. It is given by
\begin{equation}\label{MeanEPW}
H(P)=\left\{ \begin{array}{ll} \frac 12\int \sum_{0\neq x\in\ensuremath{\omega}} \phi(x)P^0(d\ensuremath{\omega}) & \text{ if } E_P(N^2_{[0,1]^d})<\infty \\ +\infty & \text{ otherwise} \end{array}\right.
\end{equation}
where $P^0$ is the Palm measure of $P$. Recall that $P^0$ can be viewed as the natural version of the conditional probability $P(.|0\in\ensuremath{\omega})$ (see \cite{MKM78} for more details). So, it remains to prove the boundary assumption \eqref{boundary}. Let $P$ be a Gibbs measure in $\ensuremath{\mathcal{G}}_H$. A simple computation gives that for any $\ensuremath{\omega}\in\Omega$
$$ \partial H_{\ensuremath{{\Lambda_n}}}(\ensuremath{\omega})= \sum_{x\in\ensuremath{\omega}_{\ensuremath{{\Lambda_n^\oplus}}\backslash \ensuremath{{\Lambda_n}}}} \sum_{ y\in\ensuremath{\omega}_{\ensuremath{{\Lambda_n}}\backslash \L_n^\ominus}} \phi(x-y),$$
where $\ensuremath{{\Lambda_n^\oplus}}=\L_{n+R_0}$ and $\L_n^\ominus=\L_{n-R_0}$ with $R_0$ an integer larger than the range of the interaction $R$. Using the GNZ equation (see \cite{A-NguZes79b}) and the stationarity of $P$, we obtain
\begin{eqnarray*}
|E_P(\partial H_{\ensuremath{{\Lambda_n}}})| & \le & \int\sum_{x\in\ensuremath{\omega}_{\ensuremath{{\Lambda_n^\oplus}}\backslash \ensuremath{{\Lambda_n}}}} \sum_{y\in\ensuremath{\omega}\backslash x} |\phi(x-y)|P(d\ensuremath{\omega})\\
& = & \int\int_{\ensuremath{{\Lambda_n^\oplus}}\backslash \ensuremath{{\Lambda_n}}} e^{-z-\sum_{y\in\ensuremath{\omega}} \phi(x-y)} \sum_{y\in\ensuremath{\omega}} |\phi(x-y)|dx P(d\ensuremath{\omega})\\
& = &|\ensuremath{{\Lambda_n^\oplus}}\backslash \ensuremath{{\Lambda_n}}|e^{-z} \int e^{-\sum_{y\in\ensuremath{\omega}_{B(0,R_0)}} \phi(y)} \sum_{y\in\ensuremath{\omega}_{B(0,R_0)}} |\phi(y)| P(d\ensuremath{\omega}).\\
\end{eqnarray*}
Since $\phi$ is stable we deduce that $\phi\ge -A-2z$.
So, denoting by $C:=\sup_{c\in[-A-2z,+\infty)} |c|e^{-c}<\infty$, we find that \begin{eqnarray}\label{bord} |E_P(\partial H_{\ensuremath{{\Lambda_n}}})| & \le & |\ensuremath{{\Lambda_n^\oplus}}\backslash \ensuremath{{\Lambda_n}}|Ce^{-z} \int N_{B(0,R_0)}(\ensuremath{\omega})e^{(A+2z)N_{B(0,R_0)}(\ensuremath{\omega})} P(d\ensuremath{\omega}). \end{eqnarray} Using the estimates in \cite{Ruelle70}, Corollary 2.9, the integral on the right-hand side of \eqref{bord} is finite. The boundary assumption \eqref{boundary} follows. \end{proof}

\subsection{Quermass interaction}

The Quermass process is a morphological interaction model introduced in \cite{A-Kendall99}, which is a generalization of the well-known Widom-Rowlinson process or Area Process (see \cite{Widom70}, \cite{A-Baddeley95}). Since the existence of the Quermass process is only proved in the case $d\le 2$, we restrict the following to the non-trivial case $d=2$. For any finite configuration $\ensuremath{\omega}$, $L(\ensuremath{\omega})$ denotes the set $\cup_{x\in\omega} B(x,r)$ and the energy is defined as a linear combination of the Minkowski functionals: \begin{equation}\label{Quermassinteraction} H(\ensuremath{\omega})= \theta_1 \ensuremath{{\mathcal{A}}} \Big(L(\ensuremath{\omega})\Big) + \theta_2 \ensuremath{{\mathcal{L}}} \Big(L(\ensuremath{\omega})\Big) +\theta_3 \chi \Big(L(\ensuremath{\omega})\Big), \end{equation} where $r>0$, $\theta_i\in\ensuremath{{\mathbb{R}}}$, $i=1,\ldots,3$
* volfactor1 * volfactor2 * Jy2K**2 dspec_ulim = dspec_ulim**2 * volfactor1 * volfactor2 * Jy2K**2 # axs[j].fill_between(1e6*clean_lags, dspec_ulim, dspec_llim, alpha=0.75, edgecolor='none', facecolor='gray') axs[j].axvline(x=1e6*min_delay[truncated_ref_bl_id==blid,0], ls=':', lw=2, color='black') axs[j].axvline(x=1e6*max_delay[truncated_ref_bl_id==blid,0], ls=':', lw=2, color='black') # axs[j].locator_params(axis='y', nbins=5) axs[j].set_yscale('log') axs[j].set_yticks(NP.logspace(0,8,5,endpoint=True).tolist()) # axs[j].get_yaxis().get_major_formatter().labelOnlyBase = True axs[j].set_xlim(1e6*clean_lags.min(), 1e6*clean_lags.max()) axs[j].set_ylim(0.1*dspec_llim.min(), 1.5*dspec_ulim.max()) # axs[j].set_ylim(10**4.3, 1.1*(max([NP.abs(asm_cc_vis_lag[truncated_ref_bl_id==blid,:,1]).max(), NP.abs(asm_cc_skyvis_lag[truncated_ref_bl_id==blid,:,1]).max(), NP.abs(dsm_cc_skyvis_lag[truncated_ref_bl_id==blid,:,1]).max(), NP.abs(csm_cc_vis_lag[truncated_ref_bl_id==blid,:,1]).max()])+NP.sqrt(NP.abs(csm_jacobian_spindex*csm_cc_skyvis_lag[truncated_ref_bl_id==blid,:,1])**2 + NP.abs(dsm_jacobian_spindex*dsm_cc_skyvis_lag[truncated_ref_bl_id==blid,:,1])**2 + NP.abs(vis_rms_lag[truncated_ref_bl_id==blid,:,1])**2)).max()) hl = PLT.Line2D(range(1), range(0), color='black', linestyle=':', linewidth=2) dsml = PLT.Line2D(range(1), range(0), color='red', linestyle='-', linewidth=2) csml = PLT.Line2D(range(1), range(0), color='cyan', linestyle='-', linewidth=2) asml = PLT.Line2D(range(1), range(0), color='black', linestyle='-', linewidth=2) legend = axs[j].legend((dsml, csml, asml, hl), ('Diffuse', 'Compact', 'Both', 'Horizon\nLimit'), loc='upper right', frameon=False, fontsize=12) axs[j].text(0.05, 0.8, r'$|\mathbf{b}|$'+' = {0:.1f} m'.format(truncated_ref_bl_length[truncated_ref_bl_id==blid][0]), fontsize=12, weight='medium', transform=axs[j].transAxes) axs[j].text(0.05, 0.72, r'$\theta_b$'+' = {0:+.1f}$^\circ$'.format(truncated_ref_bl_orientation[truncated_ref_bl_id==blid][0]), fontsize=12, weight='medium', transform=axs[j].transAxes) # axs[j].text(0.05, 0.7, 'East: {0[0]:+.1f} m\nNorth: {0[1]:+.1f} m\nUp: {0[2]:+.1f} m'.format(truncated_ref_bl[truncated_ref_bl_id==blid].ravel()), fontsize=12, weight='medium', transform=axs[j].transAxes) if j == 0: axs_kprll = axs[j].twiny() axs_kprll.set_xticks(kprll(axs[j].get_xticks()*1e-6, redshift)) axs_kprll.set_xlim(kprll(NP.asarray(axs[j].get_xlim())*1e-6, redshift)) xformatter = FuncFormatter(lambda x, pos: '{0:.2f}'.format(x)) axs_kprll.xaxis.set_major_formatter(xformatter) fig.subplots_adjust(hspace=0) big_ax = fig.add_subplot(111) big_ax.set_axis_bgcolor('none') big_ax.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_ax.set_xticks([]) big_ax.set_yticks([]) big_ax.set_xlabel(r'$\tau$ [$\mu$s]', fontsize=16, weight='medium', labelpad=20) # big_ax.set_ylabel(r'$|V_{b\tau}(\mathbf{b},\tau)|$ [Jy Hz]', fontsize=16, weight='medium', labelpad=30) big_ax.set_ylabel(r"$P_d(k_\perp,k_\parallel)$ [K$^2$ (Mpc/$h)^3$]", fontsize=16, weight='medium', labelpad=30) big_axt = big_ax.twiny() big_axt.set_axis_bgcolor('none') big_axt.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axt.set_xticks([]) big_axt.set_yticks([]) big_axt.set_xlabel(r'$k_\parallel$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=30) 
PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'{0:0d}_baseline_comparison'.format(len(select_bl_id))+'_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_gaussian_FG_model'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'.png', bbox_inches=0) PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'{0:0d}_baseline_comparison'.format(len(select_bl_id))+'_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_gaussian_FG_model'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'.eps', bbox_inches=0) bl_orientation = NP.copy(simdata_bl_orientation[truncated_ref_bl_ind]) bloh, bloe, blon, blori = OPS.binned_statistic(bl_orientation, bins=n_bins_baseline_orientation, statistic='count', range=[(-90.0+0.5*180.0/n_bins_baseline_orientation, 90.0+0.5*180.0/n_bins_baseline_orientation)]) if plot_11: for j in xrange(n_snaps): fig, axs = PLT.subplots(n_bins_baseline_orientation, sharex=True, sharey=True, figsize=(6,9)) for i in xrange(n_bins_baseline_orientation): blind = blori[blori[i]:blori[i+1]] sortind = NP.argsort(truncated_ref_bl_length[blind], kind='heapsort') imdspec = axs[n_bins_baseline_orientation-1-i].pcolorfast(truncated_ref_bl_length[blind[sortind]], 1e6*clean_lags, NP.abs(asm_cc_skyvis_lag[truncated_ref_bl_ind,:,:][blind[sortind][:-1],:-1,j].T)**2 * volfactor1 * volfactor2 * Jy2K**2, norm=PLTC.LogNorm(vmin=(1e6)**2 * volfactor1 * volfactor2 * Jy2K**2, vmax=dspec_max)) horizonb = axs[n_bins_baseline_orientation-1-i].plot(truncated_ref_bl_length[blind], 1e6*min_delay[blind].ravel(), color='white', ls=':', lw=1.5) horizont = axs[n_bins_baseline_orientation-1-i].plot(truncated_ref_bl_length[blind], 1e6*max_delay[blind].ravel(), color='white', ls=':', lw=1.5) axs[n_bins_baseline_orientation-1-i].set_ylim(0.9*NP.amin(clean_lags*1e6), 0.9*NP.amax(clean_lags*1e6)) axs[n_bins_baseline_orientation-1-i].set_xlim(truncated_ref_bl_length.min(), truncated_ref_bl_length.max()) axs[n_bins_baseline_orientation-1-i].set_aspect('auto') axs[n_bins_baseline_orientation-1-i].text(0.5, 0.1, bl_orientation_str[i]+': '+r'${0:+.1f}^\circ \leq\, \theta_b < {1:+.1f}^\circ$'.format(bloe[i], bloe[i+1]), fontsize=12, color='white', transform=axs[n_bins_baseline_orientation-1-i].transAxes, weight='bold', ha='center') for i in xrange(n_bins_baseline_orientation): axs_kprll = axs[i].twinx() axs_kprll.set_yticks(kprll(axs[i].get_yticks()*1e-6, redshift)) axs_kprll.set_ylim(kprll(NP.asarray(axs[i].get_ylim())*1e-6, redshift)) yformatter = FuncFormatter(lambda y, pos: '{0:.2f}'.format(y)) axs_kprll.yaxis.set_major_formatter(yformatter) if i == 0: axs_kperp = axs[i].twiny() axs_kperp.set_xticks(kperp(axs[i].get_xticks()*freq/FCNST.c, redshift)) axs_kperp.set_xlim(kperp(NP.asarray(axs[i].get_xlim())*freq/FCNST.c, redshift)) xformatter = FuncFormatter(lambda x, pos: '{0:.3f}'.format(x)) axs_kperp.xaxis.set_major_formatter(xformatter) fig.subplots_adjust(hspace=0) big_ax = fig.add_subplot(111) big_ax.set_axis_bgcolor('none') big_ax.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_ax.set_xticks([]) big_ax.set_yticks([]) big_ax.set_ylabel(r'$\tau$ [$\mu$s]', fontsize=16, weight='medium', labelpad=30) big_ax.set_xlabel(r'$|\mathbf{b}|$ [m]', 
fontsize=16, weight='medium', labelpad=20) big_axr = big_ax.twinx() big_axr.set_axis_bgcolor('none') big_axr.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axr.set_xticks([]) big_axr.set_yticks([]) big_axr.set_ylabel(r'$k_\parallel$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=40) big_axt = big_ax.twiny() big_axt.set_axis_bgcolor('none') big_axt.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axt.set_xticks([]) big_axt.set_yticks([]) big_axt.set_xlabel(r'$k_\perp$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=30) # cbax = fig.add_axes([0.9, 0.1, 0.02, 0.78]) # cbar = fig.colorbar(imdspec, cax=cbax, orientation='vertical') # cbax.set_xlabel('Jy Hz', labelpad=10, fontsize=12) # cbax.xaxis.set_label_position('top') # fig.subplots_adjust(right=0.75) # fig.subplots_adjust(top=0.92) # fig.subplots_adjust(bottom=0.07) cbax = fig.add_axes([0.125, 0.94, 0.72, 0.02]) cbar = fig.colorbar(imdspec, cax=cbax, orientation='horizontal') cbax.xaxis.tick_top() cbax.set_ylabel(r'K$^2$(Mpc/h)$^3$', fontsize=12, rotation='horizontal') # cbax.yaxis.set_label_position('right') cbax.yaxis.set_label_coords(1.1, 1.0) fig.subplots_adjust(right=0.86) fig.subplots_adjust(top=0.85) fig.subplots_adjust(bottom=0.07) PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'baseline_binned_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_FG_model_asm'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'_snapshot_{0:0d}.png'.format(j), bbox_inches=0) PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'baseline_binned_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_FG_model_asm'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'_snapshot_{0:0d}.eps'.format(j), bbox_inches=0) if plot_12: required_bl_orientation = ['North', 'East'] for j in xrange(n_snaps): fig, axs = PLT.subplots(len(required_bl_orientation), sharex=True, sharey=True, figsize=(6,6)) for k in xrange(len(required_bl_orientation)): i = bl_orientation_str.index(required_bl_orientation[k]) blind = blori[blori[i]:blori[i+1]] sortind = NP.argsort(truncated_ref_bl_length[blind], kind='heapsort') imdspec = axs[k].pcolorfast(truncated_ref_bl_length[blind[sortind]], 1e6*clean_lags, NP.abs(asm_cc_skyvis_lag[truncated_ref_bl_ind,:,:][blind[sortind][:-1],:-1,j].T)**2 * volfactor1 * volfactor2 * Jy2K**2, norm=PLTC.LogNorm(vmin=(1e6)**2 * volfactor1 * volfactor2 * Jy2K**2, vmax=dspec_max)) horizonb = axs[k].plot(truncated_ref_bl_length[blind], 1e6*min_delay[blind].ravel(), color='white', ls=':', lw=1.0) horizont = axs[k].plot(truncated_ref_bl_length[blind], 1e6*max_delay[blind].ravel(), color='white', ls=':', lw=1.0) axs[k].set_ylim(0.9*NP.amin(clean_lags*1e6), 0.9*NP.amax(clean_lags*1e6)) axs[k].set_xlim(truncated_ref_bl_length.min(), truncated_ref_bl_length.max()) axs[k].set_aspect('auto') axs[k].text(0.5, 0.1, bl_orientation_str[i]+': '+r'${0:+.1f}^\circ \leq\, \theta_b < {1:+.1f}^\circ$'.format(bloe[i], bloe[i+1]), fontsize=16, color='white', transform=axs[k].transAxes, weight='semibold', ha='center') for i in xrange(len(required_bl_orientation)): axs_kprll = axs[i].twinx() axs_kprll.set_yticks(kprll(axs[i].get_yticks()*1e-6, 
redshift)) axs_kprll.set_ylim(kprll(NP.asarray(axs[i].get_ylim())*1e-6, redshift)) yformatter = FuncFormatter(lambda y, pos: '{0:.2f}'.format(y)) axs_kprll.yaxis.set_major_formatter(yformatter) if i == 0: axs_kperp = axs[i].twiny() axs_kperp.set_xticks(kperp(axs[i].get_xticks()*freq/FCNST.c, redshift)) axs_kperp.set_xlim(kperp(NP.asarray(axs[i].get_xlim())*freq/FCNST.c, redshift)) xformatter = FuncFormatter(lambda x, pos: '{0:.3f}'.format(x)) axs_kperp.xaxis.set_major_formatter(xformatter) fig.subplots_adjust(hspace=0) big_ax = fig.add_subplot(111) big_ax.set_axis_bgcolor('none') big_ax.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_ax.set_xticks([]) big_ax.set_yticks([]) big_ax.set_ylabel(r'$\tau$ [$\mu$s]', fontsize=16, weight='medium', labelpad=30) big_ax.set_xlabel(r'$|\mathbf{b}|$ [m]', fontsize=16, weight='medium', labelpad=20) big_axr = big_ax.twinx() big_axr.set_axis_bgcolor('none') big_axr.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axr.set_xticks([]) big_axr.set_yticks([]) big_axr.set_ylabel(r'$k_\parallel$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=40) big_axt = big_ax.twiny() big_axt.set_axis_bgcolor('none') big_axt.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axt.set_xticks([]) big_axt.set_yticks([]) big_axt.set_xlabel(r'$k_\perp$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=30) # cbax = fig.add_axes([0.9, 0.1, 0.02, 0.78]) # cbar = fig.colorbar(imdspec, cax=cbax, orientation='vertical') # cbax.set_xlabel('Jy Hz', labelpad=10, fontsize=12) # cbax.xaxis.set_label_position('top') # fig.subplots_adjust(right=0.75) # fig.subplots_adjust(top=0.92) # fig.subplots_adjust(bottom=0.07) cbax = fig.add_axes([0.125, 0.92, 0.72, 0.02]) cbar = fig.colorbar(imdspec, cax=cbax, orientation='horizontal') cbax.xaxis.tick_top() cbax.set_ylabel(r'K$^2$(Mpc/h)$^3$', fontsize=12, rotation='horizontal') # cbax.yaxis.set_label_position('right') cbax.yaxis.set_label_coords(1.1, 1.0) fig.subplots_adjust(right=0.86) fig.subplots_adjust(top=0.79) fig.subplots_adjust(bottom=0.09) PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'baseline_N_E_binned_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_FG_model_asm'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'_snapshot_{0:0d}.png'.format(j), bbox_inches=0) PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'baseline_N_E_binned_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_FG_model_asm'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'_snapshot_{0:0d}.eps'.format(j), bbox_inches=0) if plot_13: # 13) Plot EoR window foreground contamination when baselines are selectively removed blo_target = 0.0 n_blo_remove_range = 3 n_inner_bll_remove_range = 20 blo_remove_max = 0.5*180.0/n_bins_baseline_orientation*(1+NP.arange(n_blo_remove_range))/n_blo_remove_range inner_bll_remove_max = NP.logspace(NP.log10(truncated_ref_bl_length.min()), NP.log10(max_bl_length), n_inner_bll_remove_range) bl_screened_fg_contamination = NP.zeros((n_blo_remove_range, n_inner_bll_remove_range), dtype=NP.complex) fraction_bl_discarded = NP.zeros((n_blo_remove_range, 
n_inner_bll_remove_range)) ns_blind = blori[blori[3]:blori[3+1]] ns_fg_contamination = OPS.rms(NP.abs(asm_cc_skyvis_lag[ns_blind,:,0])**2, mask=NP.logical_not(small_delays_strict_EoR_window[:,ns_blind]).T) * volfactor1 * volfactor2 * Jy2K**2 ew_blind = blori[blori[1]:blori[1+1]] ew_fg_contamination = OPS.rms(NP.abs(asm_cc_skyvis_lag[ew_blind,:,0])**2, mask=NP.logical_not(small_delays_strict_EoR_window[:,ew_blind]).T) * volfactor1 * volfactor2 * Jy2K**2 for i in xrange(n_blo_remove_range): blo_retain_ind = NP.abs(bl_orientation - blo_target) > blo_remove_max[i] blo_discard_ind = NP.logical_not(blo_retain_ind) for j in xrange(n_inner_bll_remove_range): bll_retain_ind = truncated_ref_bl_length > inner_bll_remove_max[j] bll_discard_ind = NP.logical_not(bll_retain_ind) retain = NP.logical_not(NP.logical_and(blo_discard_ind, bll_discard_ind)) mask = NP.logical_not(NP.logical_and(small_delays_strict_EoR_window, retain.reshape(1,-1))) bl_screened_fg_contamination[i,j] = OPS.rms(NP.abs(asm_cc_skyvis_lag[:,:,0])**2, mask=mask.T) * volfactor1 * volfactor2 * Jy2K**2 fraction_bl_discarded[i,j] = 1.0 - NP.sum(retain).astype(float)/truncated_ref_bl_length.size symbols = ['o', 's', '*', 'd', '+', 'x'] fig = PLT.figure(figsize=(6,6)) ax1 = fig.add_subplot(111) ax2 = ax1.twinx() for i in xrange(n_blo_remove_range): ax1.plot(inner_bll_remove_max, bl_screened_fg_contamination[i,:], marker=symbols[i], markersize=6, lw=1, color='k', ls='-', label=r'$|\theta_b|\,\leq\,${0:.1f}$^\circ$'.format(blo_remove_max[i])) ax2.plot(inner_bll_remove_max, fraction_bl_discarded[i,:], marker=symbols[i], markersize=5, color='k', lw=1, ls=':', label=r'$|\theta_b|\,\leq\,${0:.1f}$^\circ$'.format(blo_remove_max[i])) # ax1.axhline(y=NP.abs(ew_fg_contamination), color='k', ls='-.', lw=2, label='Eastward limit') # ax1.axhline(y=NP.abs(ns_fg_contamination), color='k', ls='--', lw=2, label='Northward limit') ax1.set_ylim(0.3*bl_screened_fg_contamination.min(), 1.2*bl_screened_fg_contamination.max()) # ax1.set_ylim(0.9*NP.abs(ns_fg_contamination), 1.1*NP.abs(ew_fg_contamination)) ax1.set_xlim(0.9*inner_bll_remove_max.min(), 1.1*inner_bll_remove_max.max()) ax1.set_xscale('log') ax1.set_yscale('log') ax1.set_xlabel(r'Eastward $|\mathbf{b}|_\mathrm{max}$ [m]', fontsize=18, weight='medium') ax1.set_ylabel(r'Foreground Contamination [K$^2$(Mpc/h)$^3$]', fontsize=18, weight='medium') # ax1.set_ylabel(r'$\langle|V_{b\tau}^\mathrm{FG}(\mathbf{b},\tau)|^2\rangle^{1/2}_{\in\,\mathrm{EoR\,window}}$ [Jy Hz]', fontsize=18, weight='medium') # legend = ax1.legend(loc='lower right') # legend = ax1.legend(loc='lower right', fancybox=True, framealpha=1.0) ax2.set_yscale('log') ax2.set_xscale('log') ax2.set_ylim(1e-3, 1.0) ax2.set_ylabel('Baseline fraction discarded', fontsize=18, weight='medium', color='k') legend1_symbol = [] legend1_text = [] for i in xrange(n_blo_remove_range): legend1_symbol += [PLT.Line2D(range(1), range(0), marker=symbols[i], markersize=6, color='k', linestyle='None')] legend1_text += [r'$|\theta_b|\,\leq\,${0:.1f}$^\circ$'.format(blo_remove_max[i])] legend2_symbol = [] legend2_text = [] # legend2_symbol += [PLT.Line2D(range(1), range(0), linestyle='-.', lw=1.5, color='k')] # legend2_symbol += [PLT.Line2D(range(1), range(0), linestyle='--', lw=1.5, color='k')] legend2_symbol += [PLT.Line2D(range(1), range(0), linestyle='-', lw=1.5, color='k')] legend2_symbol += [PLT.Line2D(range(1), range(0), linestyle=':', lw=1.5, color='k')] # legend2_text += ['Foreground upper limit'] # legend2_text += ['Foreground lower limit'] legend2_text += 
['Foreground in EoR window'] legend2_text += ['Baseline fraction'] legend1 = ax1.legend(legend1_symbol, legend1_text, loc='lower right', numpoints=1, fontsize='medium') legend2 = ax2.legend(legend2_symbol, legend2_text, loc='upper right', fontsize='medium') PLT.tight_layout() PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'baseline_screening_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_FG_model_asm'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'.png', bbox_inches=0) PLT.savefig('/data3/t_nithyanandan/'+project_dir+'/figures/'+telescope_str+'baseline_screening_CLEAN_noiseless_PS_'+ground_plane_str+snapshot_type_str+obs_mode+'_FG_model_asm'+sky_sector_str+'nside_{0:0d}_'.format(nside)+'Tsys_{0:.1f}K_{1:.1f}_MHz_{2:.1f}_MHz_'.format(Tsys, freq/1e6,nchan*freq_resolution/1e6)+bpass_shape+'{0:.1f}'.format(oversampling_factor)+'.eps', bbox_inches=0) if plot_14: # 14) Plot delay spectra before and after baselines are selectively removed blo_target = 0.0 blo_remove_max = 0.5*180.0/n_bins_baseline_orientation inner_bll_remove_max = 30.0 blo_retain_ind = NP.abs(bl_orientation - blo_target) > blo_remove_max blo_discard_ind = NP.logical_not(blo_retain_ind) bll_retain_ind = truncated_ref_bl_length > inner_bll_remove_max bll_discard_ind = NP.logical_not(bll_retain_ind) discard = NP.logical_and(blo_discard_ind, bll_discard_ind) retain = NP.logical_not(discard) msk = NP.zeros((truncated_ref_bl_length.size, clean_lags.size)) msk[discard,:] = 1 colrmap = copy.copy(CM.jet) colrmap.set_bad(color='black', alpha=1.0) bl_screened_asm_cc_skyvis_lag = NP.ma.masked_array(asm_cc_skyvis_lag[:,:,0], mask=msk) # bl_screened_asm_cc_skyvis_lag = NP.ma.filled(bl_screened_asm_cc_skyvis_lag, fill_value=1e-5) # bl_screened_asm_cc_skyvis_lag = NP.ma.compress_rows(bl_screened_asm_cc_skyvis_lag) # bl_screened_asm_cc_skyvis_lag = NP.copy(asm_cc_skyvis_lag[:,:,0]) # bl_screened_asm_cc_skyvis_lag[discard,:] = 1e-3 descriptor_str = ['All baselines', 'Short eastward baselines removed'] fig, axs = PLT.subplots(2, sharex=True, sharey=True, figsize=(6,6)) all_imdspec = axs[0].pcolorfast(truncated_ref_bl_length, 1e6*clean_lags, NP.abs(asm_cc_skyvis_lag[:-1,:-1,0].T)**2 * volfactor1 * volfactor2 * Jy2K**2, norm=PLTC.LogNorm(vmin=(1e6)**2 * volfactor1 * volfactor2 * Jy2K**2, vmax=NP.abs(asm_cc_skyvis_lag).max()**2 * volfactor1 * volfactor2 * Jy2K**2)) screened_imdspec = axs[1].pcolorfast(truncated_ref_bl_length, 1e6*clean_lags, NP.abs(bl_screened_asm_cc_skyvis_lag[:-1,:-1].T)**2 * volfactor1 * volfactor2 * Jy2K**2, norm=PLTC.LogNorm(vmin=(1e6)**2 * volfactor1 * volfactor2 * Jy2K**2, vmax=NP.abs(asm_cc_skyvis_lag).max()**2 * volfactor1 * volfactor2 * Jy2K**2), cmap=colrmap) for j in xrange(len(axs)): bll_cut = axs[j].axvline(x=inner_bll_remove_max, ymin=0.0, ymax=0.75, ls='--', color='white', lw=1.5) horizonb = axs[j].plot(truncated_ref_bl_length, 1e6*min_delay.ravel(), color='white', ls=':', lw=1.5) horizont = axs[j].plot(truncated_ref_bl_length, 1e6*max_delay.ravel(), color='white', ls=':', lw=1.5) axs[j].set_ylim(0.9*NP.amin(clean_lags*1e6), 0.9*NP.amax(clean_lags*1e6)) axs[j].set_aspect('auto') axs[j].text(0.5, 0.8, descriptor_str[j], transform=axs[j].transAxes, fontsize=12, weight='semibold', ha='center', color='white') for j in xrange(len(axs)): axs_kprll = axs[j].twinx() axs_kprll.set_yticks(kprll(axs[j].get_yticks()*1e-6, redshift)) 
axs_kprll.set_ylim(kprll(NP.asarray(axs[j].get_ylim())*1e-6, redshift)) yformatter = FuncFormatter(lambda y, pos: '{0:.2f}'.format(y)) axs_kprll.yaxis.set_major_formatter(yformatter) if j == 0: axs_kperp = axs[j].twiny() axs_kperp.set_xticks(kperp(axs[j].get_xticks()*freq/FCNST.c, redshift)) axs_kperp.set_xlim(kperp(NP.asarray(axs[j].get_xlim())*freq/FCNST.c, redshift)) xformatter = FuncFormatter(lambda x, pos: '{0:.3f}'.format(x)) axs_kperp.xaxis.set_major_formatter(xformatter) fig.subplots_adjust(hspace=0) big_ax = fig.add_subplot(111) big_ax.set_axis_bgcolor('none') big_ax.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_ax.set_xticks([]) big_ax.set_yticks([]) big_ax.set_ylabel(r'$\tau$ [$\mu$s]', fontsize=16, weight='medium', labelpad=30) big_ax.set_xlabel(r'$|\mathbf{b}|$ [m]', fontsize=16, weight='medium', labelpad=20) big_axr = big_ax.twinx() big_axr.set_axis_bgcolor('none') big_axr.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axr.set_xticks([]) big_axr.set_yticks([]) big_axr.set_ylabel(r'$k_\parallel$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=40) big_axt = big_ax.twiny() big_axt.set_axis_bgcolor('none') big_axt.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off') big_axt.set_xticks([]) big_axt.set_yticks([]) big_axt.set_xlabel(r'$k_\perp$ [$h$ Mpc$^{-1}$]', fontsize=16, weight='medium', labelpad=30) cbax = fig.add_axes([0.9, 0.125, 0.02, 0.74]) cbar = fig.colorbar(all_imdspec, cax=cbax, orientation='vertical') cbax.set_xlabel(r'K$^2$(Mpc/h)$^3$', labelpad=10,
\section{Introduction}

The Navier-Stokes equations (NSEs) represent a formulation of Newton's laws of motion for a continuous distribution of matter in a fluid state, characterized by an inability to support shear stresses, see \cite{Doering-Gibbon}. The NSEs allow one to determine the velocity field and the pressure of fluids confined in regions of space, and they are used to describe many different physical phenomena such as weather, water flow in pipes and ocean currents, among others. Moreover, these equations are useful in several fields of knowledge such as the petroleum industry, plasma physics, meteorology and thermo-hydraulics, among others (see \cite{RT} for instance). Due to this fact, these equations have attracted the attention of several mathematicians, since they play an important role in applications. See \cite{boldrini, Alexandre, Doering-Gibbon, GRR, GRR1, GRR2, Jiu-Wang-Xin, rosa, RT, Teman} and the references therein.

On the other hand, the theory of impulsive dynamical systems has been shown to be a powerful tool to model real-world problems in physics, technology and biology, among others. Because of this fact, the interest in the study of impulsive dynamical systems has increased considerably. For recent trends on this subject we indicate the works \cite{BonottoDemuner, bonotto1, BBCC, Cortes, Davis, Feroe, Yang, Zhao} and the references therein. However, the study of Navier-Stokes equations with impulse effects is still scarce. Motivated by this fact, in this paper we investigate the existence and uniqueness of mild solutions for the impulsive NSEs \begin{equation}\label{Eq5} \displaystyle\left\{\begin{array}{ll} \displaystyle\frac{\partial u}{\partial t} + q(t)(u \cdot \nabla)u - \nu\Delta u +\nabla p = \phi(t,u), & (t,x) \in \left((0, +\infty)\setminus \displaystyle\bigcup_{k=1}^{+\infty}\{t_k\}\right) \times \Omega, \vspace{1mm}\\ {\rm div}\, u = 0, & (t,x) \in (0, +\infty) \times \Omega, \vspace{1mm}\\ u = 0, & (t,x) \in (0, +\infty) \times \partial\Omega, \vspace{1mm}\\ u(0, \cdot)= u_0, & x \in \Omega, \vspace{1mm}\\ u(t_k^+, \cdot) - u(t_k^-, \cdot) = I_k (u(t_k, \cdot)), & x\in\Omega, \; k=1, 2,\ldots , \end{array} \right. \end{equation} where $\Omega$ is a bounded smooth domain in $\mathbb{R}^2$. Here $u = (u_1,u_2)$ denotes the velocity field of a fluid filling $\Omega$, $p$ is its scalar pressure and $\nu > 0$ is its viscosity. We will assume that $q$ is a bounded function, $\phi$ is a nonlinearity which will be specified later, $\{t_k\}_{k \in \mathbb{N}} \subset (0, +\infty)$ is a sequence of impulse times such that $\displaystyle\lim_{k \rightarrow +\infty} t_k = +\infty$, $u(t_k, \cdot) = u(t_k^+, \cdot) = \displaystyle\lim_{\delta \rightarrow 0+}u(t_k + \delta, \cdot)$, $u(t_k^-, \cdot) = \displaystyle\lim_{\delta \rightarrow 0+}u(t_k - \delta, \cdot)$ and $I_k$, $k \in \mathbb{N}$, are the impulse operators.

Besides the impulsive actions in system \eqref{Eq5}, we also allow the external force $\phi$ to be discontinuous and to depend on the solution $u$. We point out that the Navier-Stokes equations with impulses make sense physically and allow a more precise description of the phenomena modeled by these equations, since $u$ represents the velocity field of a fluid and, moreover, the external force $\phi$ in this case does not need to be continuous. It is well known that phenomena which occur in the environment may have impulsive behavior, and the functions which model them may have several discontinuities.
Therefore, with this impulsive model, we intend to give a more precise description of the Navier-Stokes equations. The system \eqref{Eq5} without impulse conditions was studied in the classical monograph \cite{Cheban}, where $\phi$ is a function of time $t\in\mathbb{R}$. More precisely, the author studies existence and uniqueness of global mild solutions for the non-impulsive equation \[ \displaystyle\frac{\partial u}{\partial t} + q(t)(u \cdot \nabla)u - \nu\Delta u +\nabla p = \phi(t), \] subject to the conditions $\textrm{div}\, u = 0$ and $u|_{\partial \Omega} = 0$, where $\Omega$ is a bounded smooth domain in $\mathbb{R}^2$.

Our goal here is to write a weak formulation of the system \eqref{Eq5} and then to investigate the existence and uniqueness of mild solutions. In order to do this, we start by considering some notation which can be found in \cite{rosa} and \cite{RT}, for instance. Let $\mathbb{L}^2 (\Omega) = (L^2 (\Omega))^2$ and $\mathbb{H}_0^1 (\Omega) = (H_0^1 (\Omega))^2$ be endowed, respectively, with the inner products $$(u,v) = \displaystyle \sum_{j=1}^2 \int_{\Omega} u_j \cdot v_j \ dx, \ \ \ u = (u_1, u_2), \ v = (v_1, v_2) \in \mathbb L^2 (\Omega),$$ and $$((u,v)) = \displaystyle \sum_{j=1}^2 \int_{\Omega} \nabla u_j \cdot \nabla v_j \, dx, \ \ \ u = (u_1, u_2), \ v = (v_1, v_2) \in \mathbb{H}^1_0 (\Omega),$$ and norms $| \cdot | = ( \cdot, \cdot)^{1/2}$ and $\| \cdot \| = (( \cdot, \cdot))^{1/2}$. Now, we consider the following sets: $$\mathcal{E} = \{ v \in (C_0^{\infty}(\Omega))^2: \; \nabla \cdot v = 0 \ \textrm{in} \ \Omega\},$$ $$ V = \textrm{closure of} \ \mathcal{E} \ \textrm{in} \ \mathbb{H}_0^1 (\Omega)$$ and $$ H = \textrm{closure of} \ \mathcal{E} \ \textrm{in} \ \mathbb{L}^2 (\Omega).$$ The space $H$ is a Hilbert space with the scalar product $(\cdot, \cdot)$ induced by $\mathbb{L}^2 (\Omega)$ and the space $V$ is a Hilbert space with the scalar product $((\cdot, \cdot))$ induced by $\mathbb{H}_0^1 (\Omega)$. The space $V$ is contained in $H$, it is dense in $H$ and, by the Poincar\'e inequality, the inclusion $i:V \hookrightarrow H$ is continuous.

Denote by $V'$ and $H'$ the dual spaces of $V$ and $H$, respectively. The adjoint operator $i^*$ is linear and continuous from $H'$ to $V'$, $i^*(H')$ is dense in $V'$ and $i^*$ is one-to-one since $i(V) = V$ is dense in $H$. Moreover, by the Riesz representation theorem, we can identify $H$ and $H'$ and write $$V \subset H \equiv H' \subset V',$$ where each space is dense in the following one and the injections are continuous. As a consequence of the previous identifications, the scalar product in $H$, $(f,u)$, of $f \in H$ and $u \in V$ is the same as the duality product between $V'$ and $V$, $\langle f, u \rangle$, i.e., $$ \langle f, u \rangle = (f, u), \ \ \ \text{for all} \; f \in H \; \text{and} \; \text{for all} \; u \in V.$$ Also, for each $u \in V$, the form $$v \in V \mapsto \nu((u, v)) \in \mathbb R$$ is linear and continuous on $V$. Therefore, there exists an element of $V'$, which we denote by $Au$, such that $$ \langle Au, v \rangle = \nu((u, v)), \; \text{for all} \; v \in V.$$ Notice that the mapping $u \mapsto Au$ is linear and continuous, and it is an isomorphism from $V$ onto $V'$.
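To make the last assertion slightly more explicit, here is a short remark of ours (only a sketch of the standard argument, not part of the original text): since $((\cdot,\cdot))$ is the inner product of the Hilbert space $V$, the Riesz representation theorem applied to $(V,((\cdot,\cdot)))$ shows that for every $F \in V'$ there is a unique $u \in V$ with $\nu((u,v)) = \langle F, v\rangle$ for all $v \in V$, so $A$ is onto; moreover $$\|Au\|_{V'} = \sup_{\|v\|\le 1} \nu((u,v)) = \nu\|u\|,$$ so $A$ is, up to the factor $\nu$, an isometry of $V$ onto $V'$, in particular an isomorphism with continuous inverse.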
Based on this, we consider the following weak formulation of \eqref{Eq5}: \begin{equation}\label{weak-Eq} \left\{ \begin{array}{lll} \displaystyle\frac{d}{dt} (u, v) + \nu ((u, v)) + b(t)(u,u,v) = \langle \phi(t,u), v \rangle, \ \ v \in V, \; t > 0, \; t \neq t_k, \vspace{1mm}\\ u(t_k) - u(t_k^-) = I_k(u(t_k^-)), \ \ k \in \mathbb N, \vspace{1mm}\\ u(0) = u_0 \in H, \end{array} \right. \end{equation} where $\phi(t,u) \in V'$ and $b(t): V \times V \times V \to \mathbb R$ is given by $$b(t) (u, v, w) = q(t) \displaystyle\sum_{i, j = 1}^2 \displaystyle\int_{\Omega} u_i \displaystyle\frac{\partial v_j}{\partial x_i} w_j \, dx.$$ The weak formulation \eqref{weak-Eq} is equivalent to the impulsive system \begin{equation}\label{IS} \left\{ \begin{array}{lll} u'+ Au + B(t)(u, u) = \phi(t,u), \ \ \ \textrm{in} \ \ V', \ \ t > 0, \; t \neq t_k,\vspace{1mm}\\ u(t_k) - u(t_k^-) = I_k(u(t_k^-)), \ \ k \in \mathbb{N}, \vspace{1mm}\\ u(0) = u_0 \in H, \end{array} \right. \end{equation} where $u'= du/dt$, $A: V \to V'$ is the Stokes operator defined by $$\langle Au, v \rangle = \nu((u,v)), \; \text{for all} \; u, v \in V,$$ and $B(t): V \times V \to V'$ is the bilinear operator defined by $$\langle B(t)(u, v), w \rangle = b(t) (u, v, w), \; \text{for all} \; u, v, w \in V.$$

In Section 2, we consider the following general impulsive system \begin{equation}\label{IntroNS1} \left\{ \begin{array}{lll} u' + Au + B(\sigma(\cdot,\omega))(u,u) = f(\cdot, \sigma(\cdot,\omega), u), \quad t > 0, \; t \in I, \; t \neq t_k, \; k\in\mathbb{N}, \vspace{1mm}\\ u(t_k) - u(t_k^-) = I_k (u(t_k^-)), \ \ k \in \mathbb{N},\vspace{1mm}\\ u(0) = u_0 \in H, \end{array} \right. \end{equation} where $f: I\times\mathcal{M}\times H\rightarrow H$ is piecewise continuous with respect to $t$, non-stationary, and also depends on the solution $u$. All the conditions on system \eqref{IntroNS1} will be specified later.
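Since the rest of the paper deals with mild solutions of \eqref{IS} and \eqref{IntroNS1}, it may be useful to keep in mind the Duhamel-type formula on which such notions are usually based. The following display is our sketch, written under the usual assumption that $-A$ generates a semigroup $\{e^{-tA}\}_{t\ge 0}$; it is not the definition adopted in the paper: \begin{equation*} u(t) = e^{-tA}u_0 + \int_0^t e^{-(t-s)A}\Big[ f\big(s,\sigma(s,\omega),u(s)\big) - B\big(\sigma(s,\omega)\big)\big(u(s),u(s)\big)\Big]\, ds + \sum_{0 < t_k < t} e^{-(t-t_k)A}\, I_k\big(u(t_k^-)\big), \end{equation*} i.e. the classical variation-of-constants formula, corrected at each impulse time $t_k$ by the jump $I_k(u(t_k^-))$ propagated forward by the semigroup.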
#-*- coding: utf-8 -*- # MIT License # # Copyright (c) 2019 hey-yahei # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from mxnet import cpu, gpu, nd from mxnet.gluon import nn from mxnet.gluon.data import Sampler, DataLoader from mxnet.gluon.data.vision import CIFAR10 from mxnet.gluon.data.vision import transforms as T from gluoncv.model_zoo import get_model, get_model_list from gluoncv.data import ImageNet import argparse import numpy as np from tqdm import tqdm import sys sys.path.append("..") from quantize import convert from quantize.initialize import qparams_init from quantize.distribution_calibrate import kl_calibrate, collect_feature_maps __author__ = "YaHei" def parse_args(): parser = argparse.ArgumentParser(description='Simulate for quantization.') # parser.add_argument('--data-dir', type=str, default='~/.mxnet/datasets', # help='training and validation pictures to use. (default: ~/.mxnet/datasets)') parser.add_argument('--model', type=str, default=None, help='type of model to use. see vision_model for options. (required)') parser.add_argument('--print-model', action='store_true', help='print the architecture of model.') parser.add_argument('--list-models', action='store_true', help='list all models supported for --model.') parser.add_argument('--use-gpu', type=int, default=-1, help='run model on gpu. (default: cpu)') parser.add_argument('--dataset', type=str, default="imagenet", choices=['imagenet', 'cifar10'], help='dataset to evaluate (default: imagenet)') parser.add_argument('--use-gn', action='store_true', help='whether to use group norm.') parser.add_argument('--batch-norm', action='store_true', help='enable batch normalization or not in vgg. default is false.') parser.add_argument('--use-se', action='store_true', help='use SE layers or not in resnext. default is false.') parser.add_argument('--last-gamma', action='store_true', help='whether to init gamma of the last BN layer in each bottleneck to 0.') parser.add_argument('--merge-bn', action='store_true', help='merge batchnorm into convolution or not. (default: False)') parser.add_argument('--weight-bits-width', type=int, default=8, help='bits width of weight to quantize into.') parser.add_argument('--input-signed', type=str, default="false", help='quantize inputs into int(true) or uint(fasle). 
(default: false)') parser.add_argument('--input-bits-width', type=int, default=8, help='bits width of input to quantize into.') parser.add_argument('--quant-type', type=str, default="layer", choices=['layer', 'group', 'channel'], help='quantize weights on layer/group/channel. (default: layer)') parser.add_argument('-j', '--num-data-workers', dest='num_workers', default=4, type=int, help='number of preprocessing workers (default: 4)') parser.add_argument('--batch-size', type=int, default=128, help='evaluate batch size per device (CPU/GPU). (default: 128)') parser.add_argument('--num-sample', type=int, default=5, help='number of samples for every class in trainset. (default: 5)') parser.add_argument('--quantize-input-offline', action='store_true', help='calibrate via EMA on trainset and quantize input offline.') parser.add_argument('--calib-mode', type=str, default="naive", choices=['naive', 'kl'], help='how to calibrate inputs. (default: naive)') parser.add_argument('--calib-epoch', type=int, default=3, help='number of epoches to calibrate via EMA on trainset. (default: 3)') parser.add_argument('--disable-cudnn-autotune', action='store_true', help='disable mxnet cudnn autotune to find the best convolution algorithm.') parser.add_argument('--eval-per-calib', action='store_true', help='evaluate once after every calibration.') parser.add_argument('--exclude-first-conv', type=str, default="true", choices=['false', 'true'], help='exclude first convolution layer when quantize. (default: true)') parser.add_argument('--fixed-random-seed', type=int, default=7, help='set random_seed for numpy to provide reproducibility. (default: 7)') parser.add_argument('--wino_quantize', type=str, default="none", choices=['none', 'F23', 'F43', 'F63'], help='quantize weights for Conv2D in Winograd domain (default: none)') opt = parser.parse_args() if opt.list_models: for key in get_model_list(): print(key) exit(0) elif opt.model is None: print("error: --model is required") print() print('*'*25 + ' Settings ' + '*'*25) for k, v in opt.__dict__.items(): print("{0: <25}: {1}".format(k, v)) print('*'*(25*2+len(' Setting '))) print() return opt def evaluate(net, num_class, dataloader, ctx, update_ema=False, tqdm_desc="Eval"): correct_counter = nd.zeros(num_class) label_counter = nd.zeros(num_class) test_num_correct = 0 with tqdm(total=len(dataloader), desc=tqdm_desc) as pbar: for i, (X, y) in enumerate(dataloader): X = X.as_in_context(ctx) y = y.as_in_context(ctx) outputs = net(X) if update_ema: net.update_ema() # collect predictions pred = outputs.argmax(axis=1) test_num_correct += (pred == y.astype('float32')).sum().asscalar() pred = pred.as_in_context(cpu()) y = y.astype('float32').as_in_context(cpu()) for p, gt in zip(pred, y): label_counter[gt] += 1 if p == gt: correct_counter[gt] += 1 # update tqdm pbar.update(1) # calculate acc and avg_acc eval_acc = test_num_correct / label_counter.sum().asscalar() eval_acc_avg = (correct_counter / (label_counter + 1e-10)).mean().asscalar() return eval_acc, eval_acc_avg class UniformSampler(Sampler): def __init__(self, classes, num_per_class, labels): self._classes = classes self._num_per_class = num_per_class self._labels = labels def __iter__(self): sample_indices = [] label_counter = np.zeros(self._classes) shuffle_indices = np.arange(len(self._labels)) np.random.shuffle(shuffle_indices) for idx in shuffle_indices: label = self._labels[idx] if label_counter[label] < self._num_per_class: sample_indices.append(idx) label_counter[label] += 1 if label_counter.sum() == 
self._classes * self._num_per_class: break for idx, cnt in enumerate(label_counter): if cnt < self._num_per_class: raise ValueError("Number of samples for class {} is {} < {}".format(idx, cnt, self._num_per_class)) return iter(sample_indices) def __len__(self): return self._classes * self._num_per_class if __name__ == "__main__": opt = parse_args() # set random_seed for numpy np.random.seed(opt.fixed_random_seed) if opt.disable_cudnn_autotune: import os os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '0' # get model model_name = opt.model classes = 10 if opt.dataset == 'cifar10' else 1000 kwargs = { 'pretrained': True, 'classes': classes } if opt.use_gn: from gluoncv.nn import GroupNorm kwargs['norm_layer'] = GroupNorm if model_name.startswith('vgg'): kwargs['batch_norm'] = opt.batch_norm elif model_name.startswith('resnext'): kwargs['use_se'] = opt.use_se if opt.last_gamma: kwargs['last_gamma'] = True net = get_model(model_name, **kwargs) if opt.print_model: print('*'*25 + ' ' + opt.model + ' ' + '*'*25) print(net) print('*'*(25*2 + 2 + len(opt.model))) print() # convert model to quantization version convert_fn = { nn.Conv2D: convert.gen_conv2d_converter( quantize_input=True, wino_quantize=opt.wino_quantize, fake_bn=opt.merge_bn, input_signed=opt.input_signed == 'true', weight_width=opt.weight_bits_width, input_width=opt.input_bits_width, quant_type=opt.quant_type ), nn.Dense: convert.gen_dense_converter( quantize_input=True, input_signed=opt.input_signed == 'true', weight_width=opt.weight_bits_width, input_width=opt.input_bits_width, quant_type=opt.quant_type ), # nn.Activation: convert.gen_act_converter( # quantize_act=True, # width=opt.input_bits_width, # global_max=opt.calib_global_max # ), nn.Activation: None, nn.BatchNorm: convert.bypass_bn if opt.merge_bn else None } exclude_blocks = [] if opt.exclude_first_conv == 'true': exclude_blocks.extend([net.features[0], net.features[1]]) if model_name.startswith('mobilenetv2_'): exclude_blocks.append(net.output[0]) if model_name.startswith('cifar_resnet'): exclude_blocks.extend([net.features[2][0].body[0], net.features[2][0].body[1]]) print('*'*25 + ' Exclude blocks ' + '*'*25) for b in exclude_blocks: print(b.name) print('*'*(25*2 + len(' Exclude blocks '))) print() convert.convert_model(net, exclude=exclude_blocks, convert_fn=convert_fn, ) # initialize for quantization parameters and reset context qparams_init(net) ctx = gpu(opt.use_gpu) if opt.use_gpu != -1 else cpu() net.collect_params().reset_ctx(ctx) # construct transformer if opt.dataset == 'imagenet': eval_transformer = T.Compose([ T.Resize(256, keep_ratio=True), T.CenterCrop(224), T.ToTensor(), T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) else: eval_transformer = T.Compose([ T.ToTensor(), T.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]) ]) # fetch dataset and dataloader dataset = ImageNet if opt.dataset == 'imagenet' else CIFAR10 eval_dataset = dataset(train=False).transform_first(eval_transformer) eval_loader = DataLoader( dataset=eval_dataset, batch_size=opt.batch_size, num_workers=opt.num_workers, last_batch='keep' ) if opt.quantize_input_offline: train_dataset = dataset(train=True).transform_first(eval_transformer) if opt.dataset == 'imagenet': train_labels = [item[1] for item in train_dataset._data.items] elif opt.dataset == 'cifar10': train_labels = train_dataset._data._label train_loader = DataLoader( dataset=train_dataset, batch_size=opt.batch_size, sampler=UniformSampler(classes, opt.num_sample, train_labels), num_workers=opt.num_workers, 
last_batch='keep' ) # calibrate for input ranges and evaluate for simulation if opt.quantize_input_offline: if opt.calib_mode == "kl": print('*' * 25 + ' KL Calibration ' + '*' * 25) net.disable_quantize() # calibrate with fp32_input and fp32_weight inference # net.quantize_input(enable=False) # calibrate with fp32_input and int_weight inference input_levels = 2 ** ((opt.input_bits_width - 1) if opt.input_signed == "true" else opt.input_bits_width) min_bins, bins = input_levels, 2048 # collect feature maps hist_collector, fm_max_collector = collect_feature_maps(net, bins=bins, loader=train_loader, ctx=ctx) # do calibration thresholds = {} quantized_blocks = net.collect_quantized_blocks() n_quantized_blocks = len(quantized_blocks) for i, m in enumerate(quantized_blocks): best_bins = kl_calibrate(hist_collector[m], levels=input_levels, min_bins=min_bins, bins=bins) thresholds[m] = (best_bins + 0.5) * (fm_max_collector[m] / bins) print(f"({i+1}/{n_quantized_blocks})\tBest threshold for {m.name}: {thresholds[m]}") # update input_max for m, th in thresholds.items(): m.input_max.set_data(nd.array([th])) net.enable_quantize() print('*' * (25 * 2 + len(' KL Calibration '))) print() else: print('*' * 25 + ' Naive Calibration ' + '*' * 25) for i in range(opt.calib_epoch): # net.quantize_input(enable=False) # calibrate with fp32_input and int_weight inference net.quantize_input(enable=True, online=True) # calibrate with int_input and int_weight inference _ = evaluate(net, classes, train_loader, ctx=ctx, update_ema=True, tqdm_desc="Calib[{}/{}]".format(i+1, opt.calib_epoch)) if opt.eval_per_calib: net.quantize_input(enable=True, online=False) acc, avg_acc = evaluate(net, classes, eval_loader, ctx=ctx, update_ema=False, tqdm_desc="Eval[{}/{}]".format(i + 1, opt.calib_epoch)) print('{0: <8}: {1:2.2f}%'.format('acc', acc * 100)) print('{0: <8}: {1:2.2f}%'.format('avg_acc', avg_acc * 100)) print() for m in net.collect_quantized_blocks(): print(f"Best threshold for {m.name}: {m.input_max.data().asscalar()}") print('*' * (25 * 2 + len(' Naive Calibration '))) print() if not opt.eval_per_calib: net.fix_params() net.quantize_input(enable=True, online=False) acc, avg_acc = evaluate(net, classes, eval_loader, ctx=ctx, update_ema=False) print('*' * 25 + ' Result ' + '*' * 25) print('{0: <8}: {1:2.2f}%'.format('acc', acc * 100)) print('{0:
value.. lets see the three examples.. a) only one Mod eg.. |x+2| + 2x= 3.. critical value at -2 and equation can be written as |x+2| = 3-2x.. we take y=|x+2| and draw a graph and then take y=3-2x and again draw graph.. the point of intersection is our value.. b) two mods.. |x+2|=|x-3|+1.. here we will take y=|x+2| and y=|x-3|+1 again the point of intersection of the two sides will give us the value of x.. c) three mods.. very rare |x+3|-|4-x|=|8+x| .. Here we have three critical values, but the graphs will still be only two, one for LHS and one for RHS.. It will not be three for three mods as someone has drawn it in one of the discussions on this Q.. again we see the intersection of the two graphs.. there are no points of intersection, so no solution

THE FINER POINT
1) Opening modulus is time consuming, susceptible to error, and the answer found can still be wrong and has to be checked by putting the values in the mod again.. should be least priority and should be used by someone who has not been able to grasp the finer points of the other two methods..
2) "Critical method" should be the one used in most circumstances although it requires a good understanding of the signs given to the mod when opened within a region. It has to be the method when you are looking for values of x..
3) "Graphical method" is useful in finding the number of values of x, as getting accurate values of x may be difficult while estimating from free hand graphs.. but, if understood, it is much faster and easier for finding the solution of a Q like How many solutions does the equation |x+3|-|4-x|=|8+x| have?....
Hope it helps at least a few of you..

Director wrote: Bunuel, Can you please let me know the similar posts that describe different approaches to each type of question in the quant section, whether created by you or any other expert/GC member? Your help is highly appreciated. Are there any official absolute value questions (or any other material questions) in the forum which are explained by the graphical approach, either by you or by any other expert, that can strengthen the concept? Regards.

chetan2u wrote: I had a PM and a profile comment asking about the absolute modulus, its concept and in particular a Question discussed on various occasions " How many solutions does the equation |x+3|-|4-x|=|8+x| have?.... Just thought to write down a few concepts I have gathered. I have not gone through the various Topics on Absolute Modulus in this Forum, so maybe a few points are a repetition. Although difficult for a topic like this, I'll try to follow KISS- Keep It Short and Simple. So, let me touch the concepts now..

what is absolute modulus? Absolute modulus is the numeric value of any number without any sign or in other words 'the distance from the origin'. It will always be positive.

What kind of Qs can one see in GMAT? The Q will ask either the values of x or how many values can x take?.. most often what one can encounter is a linear Equation with...
a) only one Mod eg.. |x+2| + 2x= 3..
b) two mods.. |x+2|=|x-3|+1..
c) three mods.. very rare |x+3|-|4-x|=|8+x| ..

What are the methods .. three methods..
1) As the property suggests, Open each modulus in both +ive and -ive ....
2) Critical value
3) Graphical method..

Opening each modulus It is a time consuming process where you open each mod in both positive and negative and the number of Equations thus formed will increase as we increase the no of mods.. a) only one Mod eg.. |x+2| + 2x= 3..
i) (x+2) + 2x=3.. 3x+2=3 x=1/3.. valid value ii) -(x+2)+2x=3.. x-2=3..x=5... but if we substitute x=5 in |x+2| + 2x= 3..... (x+2) will turn out to be a positive value, while we took x=2 to be negative so discard so one value of x.. b) two mods.. |x+2|=|x-3|+1.. here you will have four equations.. i)(x+2)=(x-3)+1.. both positive ii)-(x+2)=-(x-3)+1.. both negative iii)-(x+2)=(x-3)+1..one positive and other negative iv)(x+2)=-(x-3)+1.. opposite of the one on top c) three mods.. very rare |x+3|-|4-x|=|8+x| .. it will further increase the number of equations.. Suggestion.. time consuming and susceptible to errors in opening of brackets and at times requires more to negate the values found as in first example. Critical method lets find what happens in this before trying the Qs as this was the main query.. Step 1 :- for each mod, there is a value of x that will make that mod to 0.. Step 2 :- the minimum value of a mod will be 0 and at this value of x, the mod has the lowest value... Once we know this critical value, we work on the mod for values lesser than(<) that or more than(>)that and including the critical value in either of them, we assign a sign, + or -, depending on what will happen to the value inside the mod in each scenario(in one scenario it will be positive and in other, negative).. Step 3 :- after assigning the sign, we solve for x and see if the value of x that we get is possible depending on which side of critical value we are working on.. So what are we doing here We are assuming a certain region for value of x and then solving for x.. If the value found matches the initial assumption, we take that as a solution or discard that value, which would mean that there is no value of x in that assumed region lets see the three examples a) only one Mod eg.. |x+2| + 2x= 3.. here x+2 will be 0 at x=-2.. so Critical value =-2.. so two regions are <-2 and >= -2 i)when x<-2, |x+2|will be given negative sign.. for this assign any value in that region say -3 in this case x+2 will become -3+2=-1 hence a negative sign.. -(x+2)+2x=3.. x-2=3.. x=5, which is not in the region <-2.. so not a valid value.. ii)when x>=-2, |x+2|will be given positive sign.. for this assign any value in that region say 3 in this case x+2 will become 3+2= 5 hence a positive sign.. (x+2)+2x=3.. 3x+2=3.. x=1/3, which is in the region >=-2.. so a valid value.. b) two mods.. |x+2|=|x-3|+1.. critical values -2 and 3... so regions are <-2, -2<=x<3, x>=3.. i) x<-2... x+2 will be -ive and x-3 will be negative .. eq becomes -(x+2)=-(x-3)+1.. both negative -x-2=-x+3+1..... no values.. ii) $$-2<=x<3$$.. x+2 will be positive and x-3 will be negative .. eq becomes (x+2)=-(x-3)+1.. x+2=-x+3+1.. x=1.. valid value iii)x>=3.. x+2 will be positive and x-3 will be positive .. eq becomes (x+2)=(x-3)+1.. x+2=x-3+1.. no valid value.. so the solution is x=1 c) three mods.. very rare |x+3|-|4-x|=|8+x| .. its time consuming and can be solved similarly.. Graphical method for graphical method we will have to again use the critical point.. at critical point, it is the lowest value of mod and on either side it increases with a negative slope on one side and positive slope on other side so it forms a 'V' shape in linear equation and a 'U ' curve for Quadratic Equation.. If the mod has a negative sign in front, -|x+3|, it will have an "inverted V" shape with max value at critical value.. lets see the three examples.. a) only one Mod eg.. |x+2| + 2x= 3.. critical value at -2 and equation can be written as |x+2| = 3-2x.. 
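A quick numerical cross-check of the graphical discussion above: the short script below is our own illustration (it is not part of the original post; the helper name count_crossings and the sampling range are arbitrary choices). It samples f(x) = LHS - RHS on a dense grid and counts sign changes, which for these piecewise-linear equations is the number of intersection points, confirming that |x+3| - |4-x| = |8+x| has no solution while |x+2| = 3 - 2x has exactly one, near x = 1/3.

import numpy as np

def count_crossings(f, lo=-50.0, hi=50.0, n=200001):
    # Sample f on a dense grid and count sign changes.
    # Fine for the piecewise-linear differences used here; it would miss
    # the degenerate case where both sides coincide on a whole interval.
    x = np.linspace(lo, hi, n)
    y = f(x)
    flips = np.nonzero(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]
    return len(flips), x, y

# c) three mods: |x+3| - |4-x| = |8+x|  ->  0 intersections (no solution)
n3, _, _ = count_crossings(lambda x: np.abs(x + 3) - np.abs(4 - x) - np.abs(8 + x))
print("|x+3|-|4-x|=|8+x| :", n3, "solution(s)")

# a) one mod: |x+2| = 3 - 2x  ->  1 intersection, close to x = 1/3
n1, x, y = count_crossings(lambda x: np.abs(x + 2) - (3 - 2 * x))
print("|x+2|=3-2x        :", n1, "solution(s), near x =", x[np.argmin(np.abs(y))])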
proposed in \cite{diep1} and developed in \cite{diep2} for a large class of type I C*-algebras. Hence, there are two general problems: \begin{enumerate} \item[(1)] Find out the C*-algebras which can be characterized by the well-known K-functors, say by the operator K-functors. \item[(2)] Generalize the theory of K-functors in such a way that they are applicable to a large class of C*-algebras. \end{enumerate} Concerning the first problem, we propose in \cite{diep8} a general construction and some reduction procedure for the K-theory invariant $\mathop{\operatorname{Ind}}\nolimits\, C^*(G)$ of group C*-algebras. Using the orbit method \cite{kirillov}, \cite{diep4} - \cite{diep7}, we reduce $Index~C^*(G)$ to a family of Connes' foliation C*-algebra indices $Index~C^*(V_{2n_i},{\cal F}_{2n_i})$, see \cite{connes1}-\cite{connes2}, by a family of KK-theory invariants. Using some generalization of the Kasparov type condition (treated by G.G. Kasparov in the nilpotent Lie group case \cite{kasparov2}), we reduce every $Index\, C^*(V_{2n_i},{\cal F}_{2n_i})$ to a family of KK-theory invariants of the same type valued in KK(X,Y) type groups. The last ones are in some sense computable by using the cup-cap product realizing the Fredholm operator indices. To demonstrate the idea, we consider the C*-algebra of the group of affine transformations of the real straight line, but first of all we need some new K-functor tools. They are described in the next two subsections.

\subsection{BDF K-Homology functor}

Let us recall in this subsection the well-known BDF K-functor ${\cal E}xt$. The main reference is \cite{bdf1}. Denote by $C(X)$ the C*-algebra of continuous complex-valued functions over a fixed metrizable compact $X$, by ${\cal H}$ a fixed separable Hilbert space over the complex numbers, and by ${\cal L}({\cal H})$ and ${\cal K}({\cal H})$ the C*-algebras of bounded and, respectively, compact linear operators in ${\cal H}$. An extension of C*-algebras means a short exact sequence of C*-algebras and *-homomorphisms of the special type $$0 \longrightarrow {\cal K}({\cal H}) \longrightarrow {\cal E} \longrightarrow C(X)\longrightarrow 0.$$ Two extensions are by definition equivalent iff there exists an isomorphism $ \psi : {\cal E}_1 \longrightarrow {\cal E}_2$, with restriction $\psi\vert_{{\cal K}({\cal H}_1)} : {\cal K}({\cal H}_1) \longrightarrow {\cal K}({\cal H}_2)$, such that the following diagram is commutative $$ \begin{array}{ccccccccc} 0 & \longrightarrow & {\cal K}({\cal H}_1 ) & \longrightarrow & {\cal E}_1 & \longrightarrow & C(X) & \longrightarrow & 0\\ & & \Big\downarrow\vcenter{\rlap{$\psi\vert_.$}} & & \Big\downarrow\vcenter{\rlap{$\psi$}} & & \Big\Vert & & \\ 0 & \longrightarrow & {\cal K}({\cal H}_2) & \longrightarrow & {\cal E}_2 & \longrightarrow & C(X) &\longrightarrow & 0 \end{array} $$ There is a canonical universal extension of C*-algebras $$0 \longrightarrow {\cal K}({\cal H}) \longrightarrow {\cal L}({\cal H}) \longrightarrow {\cal A}({\cal H}) \longrightarrow 0,$$ where the quotient algebra ${\cal A}({\cal H}) \cong {\cal L}({\cal H})/{\cal K} ({\cal H})$ is well known as the Calkin algebra. By the construction of the fiber product, there is a one-to-one correspondence between the extensions of type $$0\longrightarrow {\cal K}({\cal H}) \longrightarrow {\cal E} \longrightarrow C(X) \longrightarrow 0$$ and the unital monomorphisms of type $$\varphi : C(X) \hookrightarrow {\cal A}({\cal H}).$$ Thus we can identify the extensions with the inclusions of $C(X)$ into ${\cal A}({\cal H})$.
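As a reminder (this illustration is ours and is not part of the original text, although it is the standard motivating example for ${\cal E}xt$), the simplest nontrivial case is $X={\Bbb S}^1$: the Toeplitz algebra ${\cal T}$, generated by the unilateral shift on the Hardy space $H^2\subset L^2({\Bbb S}^1)$, fits into the extension $$0 \longrightarrow {\cal K}({\cal H}) \longrightarrow {\cal T} \longrightarrow C({\Bbb S}^1) \longrightarrow 0,$$ and the class of this extension generates ${\cal E}xt_1({\Bbb S}^1)\cong{\Bbb Z}$. Concretely, the corresponding index homomorphism sends (the homotopy class of) an invertible function $f\in C({\Bbb S}^1)$ to the Fredholm index of the Toeplitz operator $T_f$, which equals minus the winding number of $f$; for $f(z)=z$ one gets $\mathop{\operatorname{Ind}}\nolimits\, T_z=-1$.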
Because \cite{kirillov} all separable Hilbert spaces are isomorphic and the automorphisms of ${\cal K}({\cal H})$ are inner, with $$\mathop{\operatorname {Aut}}\nolimits{\cal K}({\cal H}) \cong {\cal P}{\cal U}({\cal H}),$$ the projective unitary group, where ${\cal U}({\cal H})$ denotes the unitary operator group, we can identify the equivalence classes of extensions with the unitary conjugacy classes of unital inclusions of $C(X)$ into the Calkin algebra: two extensions $\tau_1$ and $\tau_2$ are equivalent iff there exists a unitary operator $U : {\cal H}_1 \longrightarrow {\cal H}_2$ such that $\tau_2 = \alpha_U \circ \tau_1 $, where by definition $\alpha_U : {\cal A}({\cal H}_1) \longrightarrow {\cal A}({\cal H}_2)$ is the isomorphism obtained from the inner isomorphism $$U.(-).U^{-1} : {\cal L}({\cal H}_1) \longrightarrow {\cal L}({\cal H}_2 ).$$ An extension $\tau : C(X) \hookrightarrow {\cal A}({\cal H})$ is called trivial iff there exists a unital inclusion $\sigma : C(X) \hookrightarrow {\cal L}({\cal H})$ such that $\tau = \pi \circ \sigma, $ where $\pi : {\cal L}({\cal H}) \longrightarrow {\cal A}({\cal H}) = {\cal L}({\cal H})/{\cal K}({\cal H})$ is the canonical quotient map. This inclusion $\tau$ corresponds to a split short exact sequence. The sum of two extensions $\tau_i : C(X) \hookrightarrow {\cal A}({\cal H}_i)$, $i= 1,2$, is defined as the extension $$\tau_1 \oplus \tau_2 : C(X) \hookrightarrow {\cal A}({\cal H}_1) \oplus {\cal A}({\cal H}_2) \hookrightarrow {\cal A}({\cal H}_1 \oplus {\cal H}_2).$$ This definition is also compatible with the equivalence classes of extensions. In \cite{bdf1} the authors proved that: \begin{enumerate} \item[1)] The equivalence class of the trivial extension is the identity element with respect to this sum. \item[2)] For every metrizable compact $X$, the set ${\cal E}xt_1(X)$ of the equivalence classes of extensions is an Abelian group. One defines the higher groups by ${\cal E}xt_{1-n}(X) := {\cal E}xt_1({\Bbb S}^{n}\wedge X)$, $n=0,1,2,\dots$ \item[3)] ${\cal E}xt_*$ is a generalized K-homology. In particular, the group ${\cal E}xt_1(X)$ depends only on the homotopy type of $X$ and there is a homomorphism $$ Y_\infty : {\cal E}xt_1(X) \longrightarrow \mathop{\operatorname {Hom}}\nolimits_{\Bbb Z}(K^{-1}(X), {\Bbb Z})$$ which is an isomorphism if $X \subset {\Bbb R}^3$. \end{enumerate} This K-homology is well developed and fruitfully applicable. It has many applications in operator theory and in our problem of characterizing the group C*-algebras. Let us demonstrate this in the first example of the group of affine transformations of the real straight line. \subsection{Topological Invariant Index} Let us in this subsection denote by $G$ the group of all affine transformations of the real straight line.
\begin{thm} Every irreducible unitary representation of the group $G$ is unitarily equivalent to one of the following mutually nonequivalent representations: \begin{enumerate} \item[a)] the representation $S$, realized in the space $L^2({\Bbb R}^*, \frac{dx}{ \vert x \vert})$, where ${\Bbb R}^* := {\Bbb R} \setminus (0)$, and acting according to the formula $$(S_gf)(x) = e^{\sqrt{-1}bx} f(\alpha x),\text{ where } g = \begin{pmatrix} \alpha & b \\ 0 & 1\end{pmatrix};$$ \item[b)] the representation $U^\varepsilon_\lambda$, realized in ${\Bbb C}^1$ and given by the formula $$U_\lambda^\varepsilon (g) = \vert \alpha \vert^{\sqrt{-1}\lambda}.(\mathop{\operatorname{sgn}}\nolimits\alpha)^\varepsilon, \text{ where }\lambda \in {\Bbb R}; \varepsilon = 0,1.$$ \end{enumerate} \end{thm} \begin{pf} See \cite{gelfandnaimark}. \end{pf} This list of all the irreducible unitary representations gives the corresponding list of all the irreducible non-degenerate unitary *-representations of the group C*-algebra $C^*(G)$. In \cite{diep1} it was proved that \begin{thm} The group C*-algebra with formally adjoined unity $C^*(G)^\sim$ can be included in a short exact sequence of C*-algebras and *-homomorphisms $$ 0 \longrightarrow {\cal K} \longrightarrow C^*(G)^\sim \longrightarrow C({\Bbb S}^1 \vee {\Bbb S}^1) \longrightarrow 0,$$ i.e.\ the C*-algebra $C^*(G)^\sim$, following the BDF theory, is defined by an element, called the index and denoted by $Index\,C^*(G)^\sim$, of the group ${\cal E}xt({\Bbb S}^1 \vee {\Bbb S}^1) \cong {\Bbb Z} \oplus {\Bbb Z}$. \end{thm} \begin{pf} See \cite{diep1}. \end{pf} The infinite dimensional representation $S$ realizes the inclusion mentioned above. Since $${\cal E}xt({\Bbb S}^1 \vee {\Bbb S}^1) \cong \mathop{\operatorname {Hom}}\nolimits_{\Bbb Z}(\pi^1({\Bbb S}^1\vee {\Bbb S}^1), {\Bbb Z}),$$ it is realized by a homomorphism from $\pi^1({\Bbb S}^1 \vee {\Bbb S}^1)$ to ${\Bbb C}^*$. Since the isomorphism $$Y_\infty : {\cal E}xt({\Bbb S}^1 \vee {\Bbb S}^1)\cong \mathop{\operatorname {Hom}}\nolimits_{\Bbb Z}(\pi^1({\Bbb S}^1 \vee {\Bbb S}^1), {\Bbb Z})$$ is obtained by computing indices, and because the general form of an element of $\pi^1({\Bbb S}^1 \vee {\Bbb S}^1)$ is $g_{k,l} = [g_{0,1}]^k[g_{1,0}]^l$, $k, l \in {\Bbb Z}$, we have $$\mathop{\operatorname{Ind}}\nolimits\,(g_{k,l}) = k.\mathop{\operatorname{Ind}}\nolimits\, T(g_{1,0}) + l.\mathop{\operatorname{Ind}}\nolimits\,T(g_{0,1}),$$ where $T$ is the *-isomorphism corresponding to $S$. It is therefore enough to compute the pair of indices $\mathop{\operatorname{Ind}}\nolimits\, T(g_{1,0})$ and $\mathop{\operatorname{Ind}}\nolimits\, T(g_{0,1})$. The last ones are directly computed as the indices of the corresponding Fredholm operators. \begin{thm} $$Index\, C^*(G) = (1,1) \in {\cal E}xt({\Bbb S}^1 \vee {\Bbb S}^1) \cong {\Bbb Z} \oplus {\Bbb Z}.$$ \end{thm} \begin{pf} See \cite{diep1}. \end{pf} Let us now go to the general situation. To do this we must first introduce some preparation: the construction of irreducible unitary representations (the orbit method), then a method of decomposing the C*-algebra into a tower of extensions, and lastly the computation of the index with the help of general KK-theory. \section{Multidimensional Orbit Methods} Let us in this section consider the problem of realizing irreducible unitary representations of Lie groups.
There are two versions of the orbit method: one is the multidimensional quantization, the other is the infinitesimal orbit method, related to the so-called category ${\cal O}$. \subsection{Multidimensional Quantization} The orbit method
# Non-zero |U_e3| and Quark-Lepton in Discrete Symmetry — Y.H. Ahn, based on Phys. Rev. D83:076012 (2011), in collaboration with Hai-Yang Cheng and S.C. Oh. August 17, 2011.

Outline: Present knowledge and motivations; Tri-bimaximal mixing and non-zero |U_e3|; A4 symmetry + TBM and its deviations in seesaw; Charged fermion mixing angles; Low-energy phenomenology and leptogenesis; Conclusion.

Present knowledge and motivations. Neutrino oscillation (arXiv:1106.6028, G.L. Fogli, E. Lisi, A. Marrone, A. Palazzo, A.M. Rotunno): the analysis by Fogli et al., including the latest T2K and MINOS results, gives bi-large mixing angles. These results should be compared with theta_13, which is very small, and with the quark mixing angles in V_CKM. Some new flavor symmetries may give a clue to the nature of quark-lepton physics beyond the SM. The disparity that nature indicates between quark and lepton mixing angles has been viewed in terms of a "quark-lepton complementarity", which can be expressed in the relations of Raidal (2004) and of Smirnov and Minakata. Accidental or not?

Nothing is known about any of the three CP-violating phases. If delta_CP and theta_13 are nonzero, CP is violated in neutrino oscillations. delta_CP is not directly related to leptogenesis, but would likely be nonzero in most leptogenesis models. The Dirac phase governs CP violation in neutrino oscillations and leptogenesis; the Majorana phases enter neutrinoless double beta decay and leptogenesis. A relatively large theta_13 > 0 (T2K and MINOS) opens up CP-violation searches in neutrino oscillation experiments (T2K, NOvA, ...). Matter effects can experimentally determine the type of neutrino mass spectrum, normal or inverted mass ordering (a goal of the future long-baseline oscillation program). CP violation in the lepton sector is imperative if the baryon asymmetry of the Universe (BAU) originated from a leptogenesis scenario in the seesaw models, so any observation of leptonic CP violation, or demonstration that CP is not a good symmetry of the leptons, strengthens our belief in leptogenesis.

Present knowledge. Cosmological limits (including the WMAP 3-year result) place an upper bound on neutrino masses (astro-ph/0604335: Uros Seljak, Anze Slosar, Patrick McDonald) and are starting to disfavor the degenerate spectrum of neutrinos. BAU: Astrophys. J. Suppl. 192 (2011) 18. Why is there only matter in the Universe but no antimatter? There is no evidence of antimatter in our domain of the Universe (~20 Mpc, about 10^8 light-years). How can we generate eta_B ~ 10^-10 from an initial condition of the Big Bang?

All data can be explained in terms of oscillation between just the 3 known species, with two possible orderings of the neutrino masses (Earth matter effects in long-baseline experiments; quasi-degenerate case). Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix: its factors are constrained by atmospheric and SBL reactor data, solar and LBL accelerator data, and LBL reactor data, plus the Majorana phases probed by neutrinoless double beta decay.
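For reference (not part of the original slides), the standard PDG-style parameterization of the PMNS matrix in terms of the three mixing angles, the Dirac phase $\delta$, and the two Majorana phases $\alpha_{21}$, $\alpha_{31}$ is
$$
U_{\rm PMNS}=\begin{pmatrix}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13}
\end{pmatrix}
\operatorname{diag}\!\left(1,\,e^{i\alpha_{21}/2},\,e^{i\alpha_{31}/2}\right),
\qquad c_{ij}=\cos\theta_{ij},\ s_{ij}=\sin\theta_{ij}.
$$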
\section{Introduction} Duality maps are a profound and widely used concept in physics; examples include quark-hadron duality, which bridges the gap between theoretical predictions and experimentally observable quantities~\cite{Shifman:2000jv}, and the AdS/CFT correspondence, by which string theory can quantitatively explain features of strongly coupled Quantum Chromodynamics (QCD)~\cite{Maldacena:1998zhr}. In this paper we are interested in the renowned fermion-boson duality~\cite{Girardeau:1960} and the fermion-bit duality~\cite{Wetterich:2016yaw}. The former shows the equivalence of fermionic and bosonic particle systems; the latter establishes a map between fermions and Ising spins. Inspired by them, we propose a map between nucleons bound in nuclei and Ising spins in the Ising model, which we label the ``SRC-bit duality'', where SRC abbreviates short-range correlation. In the rest of this section, these dualities are introduced in order. The fermion-boson duality stems from the following question --- can bosons and fermions transform into each other? This is possible in supersymmetry, a popular candidate for solving the hierarchy problem, which still awaits examination by high-energy experiments. However, in low-dimensional non-relativistic systems the equivalence of bosonic and fermionic systems was reported long ago~\cite{Girardeau:1960,Girardeau:1965,Mattis1965,Coleman:1974bu,Tomonaga:1950zz,Luttinger:1963zz,schmidt:1998,crescimanno:2001,cheon:pla,Cheon:1998iy}. Massless boson and fermion theories in $1+1$-dimensional Minkowski and curved space-times were proved to be equivalent~\cite{freundlich:1972,davies:1978}. Because the spin-statistics relation is based on relativity, non-relativistic systems can escape it; thus bosons can carry spin and fermions can be spinless. The relation between bosons and spinless fermions may shed light on the general properties of the boson-fermion duality. The fermion-bit duality is motivated by the fact that models for fermions and for Ising spins share the same type of observables. In the case of fermions, the observables are occupation numbers $n(x)$ that take values zero or one: for $n(x)=1$ a fermion is present at $x$, while for $n(x)=0$ no fermion is located at $x$. In the case of Ising spins, $s(x)$ can take values $\pm 1$, which can be understood as the magnetic dipole moments of atomic ``spins'' in ferromagnetism. A relation between occupation numbers and Ising spins is readily established, $n(x)=(s(x)+1)/2$. Based on these simple observations, a map between fermions and Ising models has been proposed~\cite{Wetterich:2009tq,Wetterich:2010eh}. Since Ising spins can be associated with bits of information, this duality goes under the name of ``fermion-bit duality''. It is worth noting that if Ising spins are considered as ``discrete bosons'', the fermion-bit duality establishes a general equivalence of fermions and bosons, which indicates that the three dualities, i.e., the fermion-boson, fermion-bit and boson-bit dualities, are closely related. Inspired by these duality maps, we propose a new one between nucleons and Ising spins. One may think of a stable nucleus as a tight ball of neutrons and protons (collectively called nucleons), held together by the strong nuclear force. This basic picture has worked very well. Later, deep inelastic scattering (DIS) led to the discovery that the nucleon is made of quarks.
However, due to the small nuclear binding energy and the idea of quark-gluon confinement, it was thought that quarks had no explicit role in the nucleus, and hence that nuclei could still be described in terms of nucleons and mesons. In 1982, this understanding was changed by measurements performed by the European Muon Collaboration (EMC)~\cite{Aubert:1983xm}. The initial expectation was that physics at the GeV scale would be insensitive to nuclear binding effects, which are typically of the order of several MeV. However, the collaboration discovered that the per-nucleon deep inelastic structure function in iron is smaller than that of deuterium in the region $0.3<x_B<0.7$, where $x_B$ is the Bjorken variable. This phenomenon is known as the EMC effect and has been observed for a wide range of nuclei \cite{Arneodo:1988aa,Arneodo:1989sy,Allasia:1990nt,Gomez:1993ri,Seely:2009gt}. Although the understanding of how the quark-gluon structure of a nucleon is modified by the surrounding nucleons has been brought to a whole new level, there is still no consensus as to the underlying dynamics that drives this effect. Currently, one of the leading approaches to describing the EMC effect is the following: nucleons bound in nuclei are unmodified, i.e.\ the same as ``free'' nucleons, most of the time, but are modified substantially when they fluctuate into SRC pairs. The connection between the SRC and EMC effects has been extensively investigated in nuclear structure function measurements~\cite{Egiyan:2005hs,Hen:2012fm,Hen:2014nza,Duer:2018sby,Weinstein:2010rt,Chen:2016bde,Lynn:2019vwp,Xu:2019wso,Hatta:2019ocp,Huang:2021cac,Hen:2016kwk}. SRC pairs are conventionally defined in momentum space as a pair of nucleons with high relative momentum and low center-of-mass (c.m.) momentum, where high and low are relative to the Fermi momentum of medium and heavy nuclei. In this paper, we emphasize the similarities between the descriptions of SRCs and Ising spins. Based on these similarities, a new kind of map is proposed, labeled the ``SRC-bit duality''. The present manuscript is arranged as follows. In Sect.~\ref{duality_s_i}, we discuss the map between SRCs in nucleons and Ising spins. One longstanding theoretical model used to depict the nucleons is presented as a correspondence to an explicit Ising model. Sect.~\ref{Simulation} is devoted to simulations of nucleon states in terms of an Ising model; our preliminary results support the proposed map between nucleons and Ising spins. Finally, we summarize our work and comment on future developments in Sect.~\ref{conclusions}. \section{Map between SRCs and Ising spins} \label{duality_s_i} \subsection{Notations and definitions} \label{duality_s_i_1} The descriptions of SRCs among nucleons and of Ising spins in the Ising model share the same type of observables, in line with implementations of fermions within the Ising model. An Ising spin $s(x)$ can take values $\pm 1$, each value representing one of the two spin states (spin up or down). Similarly, we can use $s_{src}(x)$ to represent the state of a nucleon. In a nucleus, nucleons behave approximately as independent particles in a mean field, but occasionally ($20\%-25\%$ in medium or heavy nuclei) two nucleons get close enough to each other that temporarily their singular short-range interaction cannot be well described by a mean-field approximation. These are the two-nucleon short-range correlations (2N-SRC).
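As a minimal illustration of this momentum-space definition (our own sketch, not from the paper; the Fermi-momentum value and the function name are illustrative assumptions):

import numpy as np

K_FERMI = 0.250  # GeV/c, typical Fermi momentum of medium/heavy nuclei (illustrative value)

def is_2n_src(p1, p2, k_fermi=K_FERMI):
    """Flag a nucleon pair as 2N-SRC if its relative momentum is high and its
    center-of-mass momentum is low, both measured against the Fermi momentum."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    p_rel = 0.5 * (p1 - p2)   # relative momentum of the pair
    p_cm = p1 + p2            # center-of-mass momentum of the pair
    return np.linalg.norm(p_rel) > k_fermi and np.linalg.norm(p_cm) < k_fermi

# a back-to-back pair with large relative momentum qualifies, a soft pair does not
print(is_2n_src([0.45, 0.0, 0.0], [-0.40, 0.05, 0.0]))   # True
print(is_2n_src([0.10, 0.0, 0.0], [-0.05, 0.05, 0.0]))   # False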
For $s_{src}(x)=1$ the nucleon at $x$ belongs to an SRC pair, while for $s_{src}(x)=0$ the corresponding nucleon can be regarded as an independent particle. The relation between these two ``spins'' is \begin{eqnarray}\label{relation} s(x)=2s_{src}(x)-1 \,. \end{eqnarray} In addition to this simple relation, there are other common features between the two systems. In the Ising model, the spins are arranged on a lattice, allowing each spin to interact with its neighbors. Consider a set $\Lambda$ of lattice sites; for each lattice site $x\in \Lambda$ there is a discrete variable $s(x)$ such that $s(x)\in\{+1,-1\}$, representing the site's spin. For any two adjacent sites $x,y\in\Lambda$ there is an interaction $J$, and every site $y\in\Lambda$ is influenced by an external magnetic field $h$; the corresponding Hamiltonian is \begin{eqnarray}\label{Ising_H} H = -\frac{J}{2}\sum_{\langle x y\rangle} s_{x} s_{y}- \mu h \sum_{y} s_{y} \,, \end{eqnarray} where the first sum is over pairs of adjacent spins, the second term represents the universal interaction with the external magnetic field, and the magnetic moment is given by $\mu$. The physical quantities describing nucleons can also be divided into two parts, one for short-range and the other for long-range physics. Pedagogically sketched diagrams for the Ising model and the structure of the nucleus are presented in Fig.~\ref{picsIS}. For instance, the nucleon spectral function $P(\mathbf{p},E)$, which is the joint probability to find a nucleon in a nucleus with momentum $\mathbf{p}$ and removal energy $E$, can be modeled as~\cite{CiofidegliAtti:1995qe} \begin{eqnarray} P(\mathbf{p}, E)= P_{1}(\mathbf{p}, E) + P_{0}(\mathbf{p}, E) \,, \end{eqnarray} where the subscript $1$ refers to high-lying continuum states that are caused by the short-range correlations and the subscript $0$ refers to values of $E$ corresponding to low-lying intermediate excited states. \begin{figure}[htbp] \centering \includegraphics[width=0.49\columnwidth]{pics1.eps} \centering \includegraphics[width=0.49\columnwidth]{pics2.eps} \centering \caption{Schematic diagrams for the Ising model in ferromagnetism (left) and the nucleus structure (right); both can be divided into two parts. For the Ising model, these are the adjacent-spin interaction and the universal external magnetic field interaction. For the nucleus structure, they are the SRCs and the interaction with the mean field created by a nucleon's average interaction with the other nucleons.} \label{picsIS} \end{figure} Another example is the nuclear gluon distribution; we can parameterize the nuclear gluon distribution in
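A minimal numerical sketch of the correspondence, assuming a 2D nearest-neighbour lattice with periodic boundaries and illustrative values of $J$, $\mu$ and $h$ (the helper name is ours, not from the paper):

import numpy as np

def ising_energy(s, J=1.0, mu=1.0, h=0.0):
    """Energy H = -(J/2) * sum_<xy> s_x s_y - mu*h * sum_y s_y for a 2D
    configuration s with entries +/-1 and periodic boundaries."""
    neighbours = (np.roll(s, 1, axis=0) + np.roll(s, -1, axis=0) +
                  np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1))
    # np.sum(s * neighbours) counts every adjacent pair twice, so the
    # prefactor -J/4 reproduces -(J/2) times the sum over adjacent pairs
    return -0.25 * J * np.sum(s * neighbours) - mu * h * np.sum(s)

# map occupation-like variables s_src in {0, 1} to Ising spins in {-1, +1}
rng = np.random.default_rng(0)
s_src = rng.integers(0, 2, size=(8, 8))   # 1: nucleon in an SRC pair, 0: mean-field nucleon
s = 2 * s_src - 1                         # s(x) = 2 s_src(x) - 1
print(ising_energy(s, J=1.0, mu=1.0, h=0.1))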
from abc import ABCMeta, abstractmethod

import numpy as np
from scipy import linalg

from .base_dar import BaseDAR
from ..utils.arma import ki2ai


class BaseLattice(BaseDAR):
    __metaclass__ = ABCMeta

    def __init__(self, iter_newton=0, eps_newton=0.001, **kwargs):
        """Creates a base Lattice model with Taylor expansion

        iter_newton : maximum number of Newton-Raphson iterations
        eps_newton : threshold to stop Newton-Raphson iterations
        """
        super(BaseLattice, self).__init__(**kwargs)
        self.iter_newton = iter_newton
        self.eps_newton = eps_newton

    def synthesis(self, sigdriv=None, sigin_init=None):
        """Create a signal from fitted model

        sigdriv : driving signal
        sigin_init : shape (1, self.ordar_) previous point of the past
        """
        ordar_ = self.ordar_
        if sigdriv is None:
            sigdriv = self.sigdriv
            try:
                newbasis = self.basis_
            except AttributeError:
                newbasis = self._make_basis()
        else:
            newbasis = self._make_basis(sigdriv=sigdriv)

        n_epochs, n_points = sigdriv.shape

        if sigin_init is None:
            sigin_init = np.random.randn(1, ordar_)
        sigin_init = np.atleast_2d(sigin_init)
        if sigin_init.shape[1] < ordar_:
            raise ValueError("initialization of incorrect length: %d instead "
                             "of %d (self.ordar_)" % (sigin_init.shape[1],
                                                      ordar_))

        # initialization
        signal = np.zeros(n_points)
        signal[:ordar_] = sigin_init[0, -ordar_:]

        # compute basis corresponding to sigdriv
        newbasis = self._make_basis(sigdriv=sigdriv)

        # synthesis of the signal
        random_sig = np.random.randn(n_epochs, n_points - ordar_)
        burn_in = np.random.randn(n_epochs, ordar_)
        sigout = self.synthesize(random_sig, burn_in=burn_in,
                                 newbasis=newbasis)
        return sigout

    # ------------------------------------------------ #
    # Functions that are added to the derived class    #
    # ------------------------------------------------ #
    def synthesize(self, random_sig, burn_in=None, newbasis=None):
        """Apply the inverse lattice filter to synthesize an AR signal

        random_sig : array containing the excitation starting at t=0
        burn_in : auxiliary excitations before t=0
        newbasis : if None, we use the basis used for fitting (self.basis_)
            else, we use the given basis.

        returns
        -------
        sigout : array containing the synthesized signal
        """
        if newbasis is None:
            basis = self.basis_
        else:
            basis = newbasis

        # -------- select ordar (preferred: ordar_)
        ordar = self.ordar_

        # -------- prepare excitation signal and burn in phase
        if burn_in is None:
            tburn = 0
            excitation = random_sig
        else:
            tburn = burn_in.size
            excitation = np.hstack((burn_in, random_sig))

        n_basis, n_epochs, n_points = basis.shape
        n_epochs, n_points_minus_ordar_ = random_sig.shape

        # -------- allocate arrays for forward and backward residuals
        e_forward = np.zeros((n_epochs, ordar + 1))
        e_backward = np.zeros((n_epochs, ordar + 1))

        # -------- prepare parcor coefficients
        parcor_list = self._develop_parcor(self.AR_, basis)
        parcor_list = self.decode(parcor_list)
        gain = self._develop_gain(basis, squared=False, log=False)

        # -------- create the output signal
        sigout = np.zeros((n_epochs, n_points_minus_ordar_))
        for t in range(-tburn, n_points_minus_ordar_):
            e_forward[:, ordar] = excitation[:, t + tburn] * gain[0, max(t, 0)]
            for p in range(ordar, 0, -1):
                e_forward[:, p - 1] = (
                    e_forward[:, p] -
                    parcor_list[p - 1, :, max(t, 0)] * e_backward[:, p - 1])
            for p in range(ordar, 0, -1):
                e_backward[:, p] = (
                    e_backward[:, p - 1] +
                    parcor_list[p - 1, :, max(t, 0)] * e_forward[:, p - 1])
            e_backward[:, 0] = e_forward[:, 0]
            sigout[:, max(t, 0)] = e_forward[:, 0]

        return sigout

    def cell(self, parcor_list, forward_residual, backward_residual):
        """Apply a single cell of the direct lattice filter to whiten
        the original signal

        parcor_list : array, lattice coefficients
        forward_residual : array, forward residual from previous cell
        backward_residual : array, backward residual from previous cell

        the input arrays should have same shape as self.sigin

        returns:
        forward_residual : forward residual after this cell
        backward_residual : backward residual after this cell
        """
        f_residual = np.copy(forward_residual)
        b_residual = np.zeros(backward_residual.shape)
        delayed = np.copy(backward_residual[:, 0:-1])

        # -------- apply the cell
        b_residual[:, 1:] = delayed + (parcor_list[:, 1:] * f_residual[:, 1:])
        b_residual[:, 0] = parcor_list[:, 0] * f_residual[:, 0]
        f_residual[:, 1:] += parcor_list[:, 1:] * delayed
        return f_residual, b_residual

    def whiten(self):
        """Apply the direct lattice filter to whiten the original signal

        returns:
        residual : array containing the residual (whitened) signal
        backward : array containing the backward residual (whitened) signal
        """
        # compute the error on both the train and test part
        sigin = self.sigin
        basis = self.basis_

        # -------- select ordar (preferred: ordar_)
        ordar = self.ordar_

        n_epochs, n_points = sigin.shape
        residual = np.copy(sigin)
        backward = np.copy(sigin)
        for k in range(ordar):
            parcor_list = self._develop_parcor(self.AR_[k], basis)
            parcor_list = self.decode(parcor_list)
            residual, backward = self.cell(parcor_list, residual, backward)
        return residual, backward

    def _develop_parcor(self, LAR, basis):
        single_dim = LAR.ndim == 1
        LAR = np.atleast_2d(LAR)
        # n_basis, n_epochs, n_points = basis.shape
        # ordar, n_basis = LAR.shape
        # ordar, n_epochs, n_points = lar_list.shape
        lar_list = np.tensordot(LAR, basis, axes=([1], [0]))
        if single_dim:
            # n_epochs, n_points = lar_list.shape
            lar_list = lar_list[0, :, :]
        return lar_list

    # ------------------------------------------------ #
    # Functions that should be overloaded              #
    # ------------------------------------------------ #
    @abstractmethod
    def decode(self, lar):
        """Extracts parcor coefficients from encoded version (e.g. LAR)

        lar : array containing the encoded coefficients

        returns:
        ki : array containing the decoded coefficients (same size as lar)
        """
        pass

    @abstractmethod
    def encode(self, ki):
        """Encodes parcor coefficients to parameterized coefficients
        (for instance LAR)

        ki : array containing the original parcor coefficients

        returns:
        lar : array containing the encoded coefficients (same size as ki)
        """
        pass

    @abstractmethod
    def _common_gradient(self, p, ki):
        """Compute common factor in gradient. The gradient is computed as
        G[p] = sum from t=1 to T {g[p,t] * F(t)}
        where F(t) is the vector of driving signal and its powers
        g[p,t] = (e_forward[p, t] * e_backward[p-1, t-1]
                  + e_backward[p, t] * e_forward[p-1, t]) * phi'[k[p,t]]
        phi is the encoding function, and phi' is its derivative.

        p : order corresponding to the current lattice cell
        ki : array containing the original parcor coefficients

        returns:
        g : array containing the factors (size (n_epochs, n_points - 1))
        """
        pass

    @abstractmethod
    def _common_hessian(self, p, ki):
        """Compute common factor in Hessian. The Hessian is computed as
        H[p] = sum from t=1 to T {F(t) * h[p,t] * F(t).T}
        where F(t) is the vector of driving signal and its powers
        h[p,t] = (e_forward[p, t-1]**2 + e_backward[p-1, t-1]**2)
                 * phi'[k[p,t]]**2
                 + (e_forward[p, t] * e_backward[p-1, t-1]
                    + e_backward[p, t] * e_forward[p-1, t]) * phi''[k[p,t]]
        phi is the encoding function, phi' is its first derivative,
        and phi'' is its second derivative.

        p : order corresponding to the current lattice cell
        ki : array containing the original parcor coefficients

        returns:
        h : array containing the factors (size (n_epochs, n_points - 1))
        """
        pass

    # ------------------------------------------------ #
    # Functions that overload abstract methods         #
    # ------------------------------------------------ #
    def _next_model(self):
        """Compute the AR model at successive orders

        Acts as a generator that stores the result in self.AR_
        Creates the models with orders from 0 to self.ordar

        Typical usage:
        for AR_ in A._next_model():
            A.AR_ = AR_
            A.ordar_ = AR_.shape[0]
        """
        if self.basis_ is None:
            raise ValueError('%s: basis_ does not yet exist' %
                             self.__class__.__name__)

        # -------- get the training data
        sigin, basis, weights = self._get_train_data([self.sigin, self.basis_])

        if weights is not None:
            weights = np.sqrt(weights)
            w_basis = basis * weights
        else:
            w_basis = basis

        # -------- select signal, basis and regression signals
        n_basis, n_epochs, n_points = basis.shape
        ordar_ = self.ordar
        scale = 1.0 / n_points

        # -------- prepare residual signals
        forward_res = np.copy(sigin)
        backward_res = np.copy(sigin)
        self.forward_residual = np.empty((ordar_ + 1, n_epochs, n_points))
        self.backward_residual = np.empty((ordar_ + 1, n_epochs, n_points))
        self.forward_residual[0] = forward_res
        self.backward_residual[0] = backward_res

        # -------- model at order 0
        AR_ = np.empty((0, self.n_basis))
        yield AR_

        # -------- loop on successive orders
        for k in range(0, ordar_):
            # -------- prepare initial estimation (driven parcor)
            if weights is not None:
                # the weights and basis are not delayed as backward_res is /!\
                forward_res = weights[:, k + 1:] * forward_res[:, k + 1:]
                backward_res = weights[:, k + 1:] * backward_res[:, k:-1]
            else:
                forward_res = forward_res[:, k + 1:]
                backward_res = backward_res[:, k:-1]
            forward_regressor = basis[:, :, k + 1:] * forward_res
            backward_regressor = basis[:, :, k + 1:] * backward_res

            # this reshape method will throw an error if a copy is needed
            forward_regressor.shape = (n_basis, -1)
            backward_regressor.shape = (n_basis, -1)
            backward_res.shape = (1, -1)

            R = (np.dot(forward_regressor, forward_regressor.T) +
                 np.dot(backward_regressor, backward_regressor.T))
            r = np.dot(forward_regressor, backward_res.T)
            backward_res.shape = (n_epochs, -1)
            R *= scale
            r *= scale * 2.0

            parcor = -np.linalg.solve(R, r).T

            # n_basis, n_epochs, n_points = basis.shape
            # n_epochs, n_points = parcor_list.shape
            parcor_list = self._develop_parcor(parcor.ravel(), basis)
            parcor_list = np.maximum(parcor_list, -0.999999)
            parcor_list = np.minimum(parcor_list, 0.999999)
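To make the forward/backward recursion in synthesize explicit outside the class, here is a self-contained sketch of an all-pole lattice synthesis filter with constant reflection (parcor) coefficients; it omits the driven basis and gain modulation of the model above, and the coefficient values are illustrative only.

import numpy as np

def lattice_synthesis(excitation, parcor):
    """Run an inverse (synthesis) lattice filter: turn a white excitation into
    an AR signal using constant reflection coefficients parcor (|k| < 1)."""
    order = len(parcor)
    b_prev = np.zeros(order + 1)           # backward residuals b_p at time t-1
    out = np.empty_like(excitation)
    for t, e in enumerate(excitation):
        f = e                              # f_order[t] is the excitation sample
        b_new = np.zeros(order + 1)
        for p in range(order, 0, -1):
            f = f - parcor[p - 1] * b_prev[p - 1]          # f_{p-1}[t]
            b_new[p] = parcor[p - 1] * f + b_prev[p - 1]   # b_p[t]
        b_new[0] = f                       # b_0[t] = f_0[t] = output sample
        out[t] = f
        b_prev = b_new
    return out

rng = np.random.default_rng(42)
sig = lattice_synthesis(rng.standard_normal(1000), parcor=[0.8, -0.5])
print(sig[:5])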
in Analysis of Ancient and Medieval Texts and Manuscripts: Digital Approaches, 2014. [BibTeX] @InProceedings{Hoenen:2014, Author = {Hoenen, Armin}, Title = {Simulation of Scribal Letter Substitution}, BookTitle = {Analysis of Ancient and Medieval Texts and Manuscripts: Digital Approaches}, Editor = {T.L Andrews and C.Macé}, owner = {hoenen}, website = {http://www.brepols.net/Pages/ShowProduct.aspx?prod_id=IS-9782503552682-1}, year = 2014 } ### 2013 (20) • I. Sejane and S. Eger, “Semantic typologies by means of network analysis of bilingual dictionaries,” in Approaches to Measuring Linguistic Differences, L. Borin and A. Saxena, Eds., De Gruyter, 2013, pp. 447-474. [BibTeX] @InCollection{Sejane:Eger:2013, Author = {Sejane, Ineta and Eger, Steffen}, Title = {Semantic typologies by means of network analysis of bilingual dictionaries}, BookTitle = {Approaches to Measuring Linguistic Differences}, Publisher = {De Gruyter}, Editor = {Borin, Lars and Saxena, Anju}, Pages = {447-474}, bibtexkey = {eger-sejane_network-typologies2013}, doi = {10.1515/9783110305258.447}, inlg = {English [eng]}, src = {degruyter}, srctrickle = {degruyter#/books/9783110305258/9783110305258.447/9783110305258.447.xml}, url = {http://www.degruyter.com/view/books/9783110305258/9783110305258.447/9783110305258.447.xml}, year = 2013 } • S. Eger, “Sequence Segmentation by Enumeration: An Exploration.,” Prague Bull. Math. Linguistics, vol. 100, pp. 113-131, 2013. [Abstract] [BibTeX] We investigate exhaustive enumeration and subsequent language model evaluation (E&E approach) as an alternative to solving the sequence segmentation problem. We show that, under certain conditions (on string lengths and regarding a possibility to accurately estimate the number of segments), which are satisfied for important NLP applications, such as phonological segmentation, syllabification, and morphological segmentation, the E&E approach is feasible and promises superior results than the standard sequence labeling approach to sequence segmentation. @Article{Eger:2013:a, Author = {Eger, Steffen}, Title = {Sequence Segmentation by Enumeration: An Exploration.}, Journal = {Prague Bull. Math. Linguistics}, Volume = {100}, Pages = {113-131}, abstract = {We investigate exhaustive enumeration and subsequent language model evaluation (E\&E approach) as an alternative to solving the sequence segmentation problem. We show that, under certain conditions (on string lengths and regarding a possibility to accurately estimate the number of segments), which are satisfied for important NLP applications, such as phonological segmentation, syllabification, and morphological segmentation, the E\&E approach is feasible and promises superior results than the standard sequence labeling approach to sequence segmentation.}, pdf = {http://ufal.mff.cuni.cz/pbml/100/art-eger.pdf}, year = 2013 } • S. Eger, “A Contribution to the Theory of Word Length Distribution Based on a Stochastic Word Length Distribution Model.,” Journal of Quantitative Linguistics, vol. 20, iss. 3, pp. 252-265, 2013. [Abstract] [BibTeX] We derive a stochastic word length distribution model based on the concept of compound distributions and show its relationships with and implications for Wimmer et al. ’s (1994) synergetic word length distribution model. 
@Article{Eger:2013:b, Author = {Eger, Steffen}, Title = {A Contribution to the Theory of Word Length Distribution Based on a Stochastic Word Length Distribution Model.}, Journal = {Journal of Quantitative Linguistics}, Volume = {20}, Number = {3}, Pages = {252-265}, abstract = {We derive a stochastic word length distribution model based on the concept of compound distributions and show its relationships with and implications for Wimmer et al. ’s (1994) synergetic word length distribution model.}, year = 2013 } • S. Eger, “Sequence alignment with arbitrary steps and further generalizations, with applications to alignments in linguistics.,” Information Sciences, vol. 237, pp. 287-304, 2013. [Abstract] [BibTeX] We provide simple generalizations of the classical Needleman–Wunsch algorithm for aligning two sequences. First, we let both sequences be defined over arbitrary, potentially different alphabets. Secondly, we consider similarity functions between elements of both sequences with ranges in a semiring. Thirdly, instead of considering only ‘match’, ‘mismatch’ and ‘skip’ operations, we allow arbitrary non-negative alignment ‘steps’ S. Next, we present novel combinatorial formulas for the number of monotone alignments between two sequences for selected steps S. Finally, we illustrate sample applications in natural language processing that require larger steps than available in the original Needleman–Wunsch sequence alignment procedure such that our generalizations can be fruitfully adopted. @Article{Eger:2013:c, Author = {Eger, Steffen}, Title = {Sequence alignment with arbitrary steps and further generalizations, with applications to alignments in linguistics.}, Journal = {Information Sciences}, Volume = {237}, Pages = {287-304}, abstract = {We provide simple generalizations of the classical Needleman–Wunsch algorithm for aligning two sequences. First, we let both sequences be defined over arbitrary, potentially different alphabets. Secondly, we consider similarity functions between elements of both sequences with ranges in a semiring. Thirdly, instead of considering only ‘match’, ‘mismatch’ and ‘skip’ operations, we allow arbitrary non-negative alignment ‘steps’ S. Next, we present novel combinatorial formulas for the number of monotone alignments between two sequences for selected steps S. Finally, we illustrate sample applications in natural language processing that require larger steps than available in the original Needleman–Wunsch sequence alignment procedure such that our generalizations can be fruitfully adopted.}, website = {http://www.sciencedirect.com/science/article/pii/S0020025513001485}, year = 2013 } • S. Eger, “Restricted weighted integer compositions and extended binomial coefficients.,” Journal of Integer Sequences (electronic only), vol. 16, iss. 1, 2013. [Abstract] [BibTeX] We prove a simple relationship between extended binomial coefficients — natural extensions of the well-known binomial coefficients — and weighted restricted integer compositions. Moreover, we give a very useful interpretation of extended binomial coefficients as representing distributions of sums of independent discrete random variables. We apply our results, e.g., to determine the distribution of the sum of k logarithmically distributed random variables, and to determining the distribution, specifying all moments, of the random variable whose values are part-products of random restricted integer compositions.
Based on our findings and using the central limit theorem, we also give generalized Stirling formulae for central extended binomial coefficients. We enlarge the list of known properties of extended binomial coefficients. @Article{Eger:2013:d, Author = {Eger, Steffen}, Title = {Restricted weighted integer compositions and extended binomial coefficients.}, Journal = {Journal of Integer Sequences (electronic only)}, Volume = {16}, Number = {1}, abstract = {We prove a simple relationship between extended binomial coefficients — natural extensions of the well-known binomial coefficients — and weighted restricted integer compositions. Moreover, wegiveaveryuseful interpretation ofextendedbinomial coefficients as representing distributions of sums of independent discrete random variables. We apply our results, e.g., to determine the distribution of the sum of k logarithmically distributed random variables, and to determining the distribution, specifying all moments, of the random variable whose values are part-products of random restricted integer compositions. Based on our findings and using the central limit theorem, we also give generalized Stirling formulae for central extended binomial coefficients. We enlarge the list of known properties of extended binomial coefficients.}, issn = {1530-7638}, pdf = {https://cs.uwaterloo.ca/journals/JIS/VOL16/Eger/eger6.pdf}, publisher = {School of Computer Science, University of Waterloo, Waterloo, ON}, website = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.397.3745}, year = 2013 } • A. Mehler, R. Schneider, and A. Storrer, Webkorpora in Computerlinguistik und Sprachforschung, R. Schneider, A. Storrer, and A. Mehler, Eds., JLCL, 2013, vol. 28. [BibTeX] @Book{Schneider:Storrer:Mehler:2013, Author = {Mehler, Alexander and Schneider, Roman and Storrer, Angelika}, Editor = {Roman Schneider and Angelika Storrer and Alexander Mehler}, Title = {Webkorpora in Computerlinguistik und Sprachforschung}, Publisher = {JLCL}, Volume = {28}, Number = {2}, Series = {Journal for Language Technology and Computational Linguistics (JLCL)}, issn = {2190-6858}, pagetotal = {107}, pdf = {http://www.jlcl.org/2013_Heft2/H2013-2.pdf}, year = 2013 } • A. Mehler, A. Lücking, T. vor der Brück, and G. Abrami, WikiNect – A Kinetic Artwork Wiki for Exhibition Visitors, 2013. [Poster][BibTeX] @Misc{Mehler:Luecking:vor:der:Brueck:2013:a, Author = {Mehler, Alexander and Lücking, Andy and vor der Brück, Tim and Abrami, Giuseppe}, Title = {WikiNect - A Kinetic Artwork Wiki for Exhibition Visitors}, HowPublished = {Poster Presentation at the Scientific Computing and Cultural Heritage 2013 Conference, Heidelberg}, keywords = {wikinect}, month = {11}, url = {http://scch2013.wordpress.com/}, year = 2013 } • A. Lücking, Theoretische Bausteine für einen semiotischen Ansatz zum Einsatz von Gestik in der Aphasietherapie, 2013. [BibTeX] @Misc{Luecking:2013:c, Author = {Lücking, Andy}, Title = {Theoretische Bausteine für einen semiotischen Ansatz zum Einsatz von Gestik in der Aphasietherapie}, HowPublished = {Talk at the BKL workshop 2013, Bochum}, month = {05}, url = {http://www.bkl-ev.de/bkl_workshop/archiv/workshop13_programm.php}, year = 2013 } • A. Lücking, Eclectic Semantics for Non-Verbal Signs, 2013. 
[BibTeX] @Misc{Luecking:2013:d, Author = {Lücking, Andy}, Title = {Eclectic Semantics for Non-Verbal Signs}, HowPublished = {Talk at the Conference on Investigating semantics: Empirical and philosophical approaches, Bochum}, month = {10}, url = {http://www.ruhr-uni-bochum.de/phil-lang/investigating/index.html}, year = 2013 } • A. Lücking, “Multimodal Propositions? From Semiotic to Semantic Considerations in the Case of Gestural Deictics,” in Poster Abstracts of the Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue, Amsterdam, 2013, pp. 221-223. [Poster][BibTeX] @InProceedings{Luecking:2013:e, Author = {Lücking, Andy}, Title = {Multimodal Propositions? From Semiotic to Semantic Considerations in the Case of Gestural Deictics}, BookTitle = {Poster Abstracts of the Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue}, Editor = {Fernandez, Raquel and Isard, Amy}, Series = {SemDial 2013}, Pages = {221-223}, month = {12}, year = 2013 } • M. Z. Islam and A. Hoenen, “Source and Translation Classifiction using Most Frequent Words,” in Proceedings of the 6th International Joint Conference on Natural Language Processing (IJCNLP), 2013. [Abstract] [BibTeX] Recently, translation scholars have made some general claims about translation properties. Some of these are source language independent while others are not. Koppel
For example, ∂w/∂x means differentiate with respect to x holding both y and z constant, so, for this example, ∂w/∂x = sin(y + 3z). 2) Be able to describe the differences between finite-difference and finite-element methods for solving PDEs. MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum; no enrollment or registration is required. Calculus III - Partial Derivatives (Practice Problems): here are the formal definitions of the two partial derivatives we looked at above. It is important to distinguish the notation used for partial derivatives, ∂f/∂x, from that for ordinary derivatives, df/dx. Higher-order derivatives are obtained by successive differentiation, and hence the derivatives are partial derivatives with respect to the various variables. If f_xy and f_yx are continuous on some open disc, then f_xy = f_yx on that disc.

Theorem (Euler). Suppose f(x, y, y') has continuous partial derivatives of the second order on the interval [a, b]. If a functional F(y) = ∫_a^b f(x, y, y') dx attains a weak relative extremum at y_0, then y_0 is a solution of ∂f/∂y − (d/dx)(∂f/∂y') = 0, called the Euler equation. But one may ask, how does one obtain the solution? For a separable first-order equation one integrates and then applies the initial condition to find the particular solution; for instance, dx/x = du/u implies x = C_2 u.

A partial differential equation (PDE) describes a relation between an unknown function and its partial derivatives; the unknown may be, for example, a scalar such as the time-dependent density of a material ρ(t). PDEs appear frequently in all areas of physics and engineering, and the solution depends on the equation and on the several variables with respect to which the partial derivatives are taken. A PDE that involves first-order partial derivatives of degree higher than one, or products of the partial derivatives, is called a non-linear partial differential equation. A given equation will have different general solutions when paired with different sets of boundary conditions, and note that a function of three variables does not have a graph. Section 1.3.7 gives further remarks on the classification of partial differential equations: one should know the physical problems each class represents and the physical/mathematical characteristics of each, e.g. the method of characteristics applied to a simple hyperbolic equation and an introduction to difference schemes for initial value problems (Analytic Solutions of Partial Differential Equations, MATH3414, School of Mathematics, University of Leeds). Part of Chapter 7 is adapted from the textbook “Nonlinear dynamics and chaos”. The solutions manual for Partial Differential Equations with Fourier Series and Boundary Value Problems, Second Edition, contains solutions with notes and comments to problems from the textbook; most solutions are supplied with complete details and can be used to supplement examples from the text. You might wish to delay consulting a solution until you have outlined an attack in your own mind, or even disdain to read it until, with pencil and paper, you have solved the problem yourself (or failed gloriously). Related sections: 13.4E Tangent Planes, Linear Approximations, and the Total Differential (Exercises); 13.5 The Chain Rule for Functions of Multiple Variables.

Practice problems: 1. Find the partial derivatives of the following functions at the indicated points, e.g. w = cos(x^2 + 2y) − e^{4x − z^4 y} + y^3. Find all first and second partial derivatives of \(x^3y^2+y^5\), of \(4x^3+xy^2+10\), and of \(x\sin y\).
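The practice problems above can be checked symbolically; a short sketch with sympy, where w = cos(x^2 + 2y) − e^{4x − z^4 y} + y^3 is the example function as read above:

import sympy as sp

x, y, z = sp.symbols('x y z')

# w = cos(x^2 + 2y) - exp(4x - z^4*y) + y^3
w = sp.cos(x**2 + 2*y) - sp.exp(4*x - z**4 * y) + y**3
print(sp.diff(w, x))   # -2*x*sin(x**2 + 2*y) - 4*exp(4*x - y*z**4)
print(sp.diff(w, y))   # -2*sin(x**2 + 2*y) + z**4*exp(4*x - y*z**4) + 3*y**2
print(sp.diff(w, z))   # 4*y*z**3*exp(4*x - y*z**4)

# first and second partials of x^3*y^2 + y^5
f = x**3 * y**2 + y**5
print(sp.diff(f, x), sp.diff(f, y))        # 3*x**2*y**2, 2*x**3*y + 5*y**4
print(sp.diff(f, x, y), sp.diff(f, y, x))  # equal mixed partials (Clairaut)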
on the above observation. Notice that ${m-r \choose j-r}$ is the number of subsets of $[m]$ of cardinality $j$ which contain some particular subset of $[m]$ of cardinality $r$. \begin{lemma}\label{lemma:inclusion-exclusion} For any integers $0\leq r\leq n<m$, we have $\sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} {m-r \choose j-r} = 1$. In particular, for any function $\xi_{(\cdot)}:2^{[m]}\rightarrow\mathbb{R}$ with $\sum_{S\in\binom{[m]}{j}} \xi_{S}={m-r \choose j-r}$ for all $0\leq j\leq n$, % \[ \sum_{j=0}^n (-1)^{n-j} {m-1-j\choose n-j} \sum_{S\in\binom{[m]}{j}} \xi_{S}= 1. \] \end{lemma} \begin{proof} The proof follows by induction over $m\geq n+1$, using for $m=n+1$ the fact that $\sum_{j=0}^{n+1} (-1)^j{n+1-r\choose j-r}=0$ and hence $\sum_{j=0}^n (-1)^{n-j}{n+1-r\choose j-r}=1$ for any $0\leq r<n+1$. \end{proof} We obtain the following linear relation between the number of upper $s$-faces of a Minkowski sum of $m$ polytopes and the number of upper $s$-faces of subsums of at most $n$ of the polytopes. This is a version of Weibel's theorem \cite[Theorem~1]{Weibel12} for the case of upper faces. Whereas that result is for sums of full-dimensional polytopes, our statement is valid for any dimensions. % \begin{theorem}[Number of upper faces of Minkowski sums] \label{thm:f_vectors_upper_part_Minkowski_sum} Let $P_1,\ldots, P_{m}$ be % any positive dimensional polytopes in $\mathbb{R}^{n+1}$ in general orientations, $m\geq n+1$, and $P=P_1+\cdots+P_m$. Then for the number of $s$-faces of the upper part we have $$ f_s(P^+) = \sum_{j=0}^{n} (-1)^{n-j} {m-1-j \choose n-j} \sum_{S\in{[m]\choose j}} f_s(P_S^+) , \quad s=0,\ldots, n, $$ where $P_S = (\sum_{i\in S}P_i)$ for any nonempty $S\subseteq[m]$, and $P_\emptyset=\{0\}$. \end{theorem} \begin{proof} Consider the % complex $\mathcal G^+(P)$, and recall that $s$-dimensional upper faces of $P$ correspond to $(n-s)$-dimensional cells of $\mathcal G^+(P)$. Let $W_1,\dots,W_N$ be the westernmost corners of $(n-s)$-cells of $\mathcal G^+(P)$ and let $I_1,\dots,I_N\subseteq [m]$ denote their supports. Note that % $0\leq |I_i| \leq n$ for all $i=1,\dots,N$. Let $w_s(P_S^+)$ denote the number of west-most corners of $(n-s)$-cells of $P_S^+$, so that $w_s(P_S^+)=f_s(P_S^+)$. Writing $f_s(P^+) = N = \sum_{i=1}^N 1$, we then obtain \begin{align*} % % % &f_s(P^+)\overset{Lem.}{\underset{\ref{lemma:inclusion-exclusion}}{=}}\sum_{i=1}^N \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j}\!\!\!\! \sum_{S\in {[m] \choose j}}\!\!\! \mathbb{1}_{I_i\subseteq S} = \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j}\!\!\!\! \sum_{S\in {[m] \choose j}} \sum_{i=1}^N \mathbb{1}_{I_i\subseteq S} \\ &\overset{Lem.}{\underset{\ref{lemma:sequence}}{=}} \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j} \!\!\sum_{S\in {[m] \choose j}} w_s(P_S^+) = \sum_{j=0}^n (-1)^{n-j} \binom{m-1-j}{n-j} \!\!\sum_{S\in {[m] \choose j}} f_s(P_S^+). \qedhere % % \end{align*} % \end{proof} One naturally wonders if the proof of Theorem~\ref{thm:f_vectors_upper_part_Minkowski_sum} can be extended to count the faces of the entire polytope, generalizing Weibel's result to sums of polytopes of arbitrary dimensions. Lemma~\ref{lemma:existence} does not cover that case. We will present an alternative approach in Section~\ref{sec:Weibel-Zaslavsky}. Weibel \cite[Theorem~3]{Weibel12} also shows, for sums of full-dimensional polytopes, that the number of vertices is maximized when the partial sums attain the trivial upper bound. 
The same arguments transfer to the case of upper vertices, and one can show the following corollary. In the following we consider families of polytopes $P_i$ satisfying $f_0(P_i)= k_i$, $i=1,\ldots, m$. \begin{corollary}[Upper bound for upper vertices of sums of full-dimensional polytopes] Let $m\geq n+1$ and $k_1,\ldots, k_m\geq n+2$. Then $f_0(P^+) \leq \sum_{j=0}^n (-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}}\prod_{i\in S} k_i$. \end{corollary} By comparison, Weibel's upper bound for the total number of vertices is $f_0(P) \leq \binom{m-1}{n} +$ \linebreak $\sum_{j=0}^n (-1)^{n-j}{m-1-j\choose n-j}\sum_{S\in{[m]\choose j}}\prod_{i\in S} k_i$. Unfortunately his argument does not extend to the case where some of the $k_i$ are small, neither for all vertices nor for upper vertices. To address that case, we will instead use the upper bound theorem by Adiprasito-Sanyal~\cite{KarimRaman}. Their result implies Proposition~\ref{thm:Mneighborly_upper_bound}, which states that if a Minkowski sum maximizes the number of vertices, then the partial sums attain the trivial upper bound $f_0(P_S)=\prod_{i\in S}f_0(P_i)$. The problem remaining is whether maximizing the number of vertices $f_0(P)$ also entails maximizing the number of upper vertices $f_0(P^+)$ and whether the partial sums will also attain the trivial upper bound for upper vertices. In Section~\ref{sec:Weibel-Zaslavsky} we will show that this is indeed the case. We find it useful to rewrite the alternating sum as follows. \begin{lemma} \label{lemma:reformulation} Let % $m\ge n+1\geq 1$ % and $k_1,\ldots, k_m \geq 2$. % Then \[\sum_{j=0}^{n} (-1)^{n-j} {m-1-j \choose n-j} \sum_{S\in{[m]\choose j}}\prod_{i\in S} k_i = \sum_{j=0}^n\sum_{S\in{[m]\choose j}}\prod_{i\in S}(k_i-1).\] \end{lemma} \begin{proof} We prove the % equality by viewing both sides as polynomials in the variables $k_i$ and examining the coefficient % of each monomial. % Fix a subset $S\subseteq [m]$ of size $j$. The coefficient % for the monomial $k_S=\prod_{i\in S}k_i$ on the left-hand side of the equation is $(-1)^{n-j}\binom{m-1-j}{n-j}$. On the right-hand side, the term $k_S$ appears with sign equal to $(-1)^{|T|-|S|}$ for each $T\supseteq S$. The coefficient % on the right-hand side is therefore $ \sum_{T\supseteq S} (-1)^{|T|-j}=\sum_{i = 0}^{n-j} (-1)^i \cdot \binom{m-j}{i}$. The statement now follows % from the following observation, which is obtained by induction on $n$: If $m \geq n+1\geq 1$, then $\sum_{i=0}^{n} (-1)^i \binom{m}{i} = (-1)^{n}\binom{m-1}{n}$. \end{proof} \section{Face counting formulas \`a la Zaslavsky} \label{sec:Zaslavsky} Zaslavsky \cite{zaslavsky1975facing} proved a theorem expressing the number of regions defined by a hyperplane arrangement in terms of the characteristic polynomial, a function obtained from the intersection poset of the arrangement. In the following we derive a similar type of result for the case of maxout arrangements. Hyperplanes are special in that their intersections are affine spaces and can be discussed in terms of linear independence relations. In contrast, for maxout arrangements the intersections involve linear equations and also linear inequalities. In turn, the elements of the poset have a more complex topology that needs to be accounted for. In general the poset also has a more complex structure even if the arrangement is in general position. 
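As a quick numerical sanity check of the lemma rewriting the alternating sum above (our own script, not part of the paper; the helper names are ours):

from itertools import combinations
from math import comb, prod

def lhs(n, k):
    """Alternating form: sum_j (-1)^(n-j) C(m-1-j, n-j) sum_{|S|=j} prod_{i in S} k_i."""
    m = len(k)
    return sum((-1) ** (n - j) * comb(m - 1 - j, n - j) *
               sum(prod(k[i] for i in S) for S in combinations(range(m), j))
               for j in range(n + 1))

def rhs(n, k):
    """Rewritten form: sum over subsets S with |S| <= n of prod_{i in S} (k_i - 1)."""
    m = len(k)
    return sum(prod(k[i] - 1 for i in S)
               for j in range(n + 1) for S in combinations(range(m), j))

for n, k in [(2, (2, 3, 4, 5)), (3, (2, 2, 5, 7, 3)), (1, (4, 4))]:
    assert lhs(n, k) == rhs(n, k), (n, k)
print("lemma verified on the test cases")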
\begin{definition}[Maxout arrangement] For a collection of $m$ maxout units % $z_i(x) =$\linebreak $\max\{A_{i,1}(x),\ldots, A_{i,k_i}(x)\}$, $x\in\mathbb{R}^n$, $i=1,\ldots, m$, we define the maxout arrangement $\mathcal{A}=\{H_{ab}^i\colon \{a,b\}\in {[k_i]\choose 2}, i\in[m], \operatorname{co-dim}(H^i_{ab})=1\}$ in $\mathbb{R}^n$ as the collection of nonempty co-dimension $1$ indecision boundaries between pairs of preactivation features, called atoms, \begin{equation} H_{ab}^i = \left\{x\in\mathbb{R}^n\colon A_{i,a}(x) = A_{i,b}(x) =\max_{c\in[k_i]}\{A_{i,c}(x)\} \right\}. \label{eq:maxoutarrangementpieces} \end{equation} We call the arrangement central if the affine functions $A_{i,a}$ of each unit are linear. We let $L(\mathcal{A})$ denote the set of all possible nonempty sets obtained by intersecting subsets of elements in $\mathcal{A}$, including $\mathbb{R}^n$ as the empty set intersection. The set $L(\mathcal{A})$ is partially ordered by reverse inclusion, so that for any $s,t\in L(\mathcal{A})$ we have $s\geq t$ if and only if $s\subseteq t$. The smallest element, i.e.\ the $\hat 0$ in this poset, is $\mathbb{R}^n$. For a given arrangement $\mathcal{A}$, we denote by $r(\mathcal{A})$ the number of connected components of $\mathbb{R}^n\setminus \cup_{H\in\mathcal{A}}H$, called the regions of $\mathcal{A}$. \end{definition} Note that rank-$1$ units have no indecision boundaries and can be ignored. In the following we will therefore assume without loss of generality that $k_1,\ldots, k_m\geq 2$. \begin{figure} \centering \begin{tabularx}{115mm}{m{5cm}m{65mm}} \begin{tikzpicture}[every node/.style={black,above right, inner sep=1pt}] \path[fill=blue!10] (-1.25,-1.25) rectangle (1.25cm,1.25cm); \draw[name path=line11, double=black, white, thick] (0,0) -- (1.25,.75) node [right] {$H^1_{12}$}; \draw[name path=line12, double=black, white, thick] (0,0) -- (1.25,-1) node [right] {$H^1_{23}$}; \draw[name path=line13, double=black, white, thick] (0,0) -- (-1.25,0) node [left] {$H^1_{13}$}; \draw[name path=line21, double=blue!80, white, thick] (.25,-1) -- (-1.25,1.25) node [above] {\textcolor{blue!80}{$H^2_{12}$}}; \draw[name path=line22, double=blue!80, white, thick] (.25,-1) -- (.75,1.25) node [above] {\textcolor{blue!80}{$H^2_{23}$}}; \draw[name path=line23, double=blue!80, white, thick] (.25,-1) -- (.3,-1.25) node [below] {\textcolor{blue!80}{$H^2_{13}$}}; \fill[name intersections={of=line11 and line12,total=\t}, draw=white, thick] {(intersection-1) circle (1.5pt) node {}}; \fill[name intersections={of=line21 and line22,total=\t}, blue!80, draw=white, thick] {(intersection-1) circle (1.5pt) node {}}; \foreach \i in {1,2,3}{ \foreach \j in {1,2,3}{ \fill[name intersections={of={line2\i} and {line1\j}, total=\t}, red!80, draw=white, thick][] \ifnum\t=0 {}; \else \foreach \s in {1,...,\t}{(intersection-\s) circle (1.5pt) node {} } ; \fi } } \end{tikzpicture} & \begin{tikzpicture}[inner sep=1pt] \node (zero) at (0,-1) {$\mathbb{R}^2$}; \node (H112) at (-.5,0) {$H^1_{12}$}; \node (H113) at (-1.5,0) {$H^1_{13}$}; \node (H123) at (-2.5,0) {$H^1_{23}$}; \node (H212) at (2.5,0) {\textcolor{blue!80}{$H^2_{12}$}}; \node (H213) at (1.5,0) {\textcolor{blue!80}{$H^2_{13}$}}; \node (H223) at (.5,0) {\textcolor{blue!80}{$H^2_{23}$}}; \node (H10) at (-2,1) {\textcolor{black}{$\bullet$}}; % \node (H20) at (2,1) {\textcolor{blue!80}{$\bullet$}}; % \node (H113-H212) at (1,1) {\textcolor{red}{$\bullet$}};% \node (H123-H223) at (-1,1) {\textcolor{red}{$\bullet$}};% \node (H112-H223) at (0,1) 
{\textcolor{red}{$\bullet$}};% \draw (zero) -- (H112) -- (H10) -- (H113) -- (zero) -- (H123) -- (H10); \draw (zero) -- (H212) -- (H20) -- (H213) -- (zero) -- (H223) -- (H20); \draw (H113) -- (H113-H212) -- (H212); \draw (H123) -- (H123-H223) -- (H223); \draw (H112) -- (H112-H223) -- (H223); \end{tikzpicture} \end{tabularx}\vspace{-3mm} \caption{Shown is an arrangement of two maxout units of ranks
## Wednesday, 1 June 2011

### How to write a UNIX man page

**Introduction**

Man pages are common on UNIX and UNIX-like systems for providing online documentation for user commands, libraries, APIs, file formats and the like. So common, in fact, that one might think there is a magic tool that authors use to write them. Well, there is and there isn't. If you consider `vi` or `emacs` to be magic, or the text formatting tools `nroff` and `troff`, then indeed you would be right. That's about as magic as it gets.

When you use the `man` command to display a man page, the text file that you have written in your favourite editor is formatted by one of several text formatters, such as `nroff`, `tbl` and `col`, before being displayed on-screen. Each of these text formatters has its own man page describing its behaviour.

This article discusses writing man pages for Solaris or Linux, although the instructions will be practically identical for other UNIX systems.

The best way to learn how to write a man page is often to take an existing man page that someone else has written and change it for your own needs. However, this article will give you some useful pointers.

**Chapters**

Man pages are organised by chapters, much like the chapters of a book. Each chapter is identified by a title and a number. The main difference between writing man pages for Solaris and Linux is the chapter numbers, which will differ.

To find out what information should be contained within a particular chapter, type `man -s<N> intro` on Solaris or `man <N> intro` on Linux, where `<N>` is the chapter number of interest. This will pull up the introduction page for the chapter. For example, `man -s1 intro` will identify that this chapter is for User Commands. On Solaris, chapter 1M (`man -s1m intro`) is for System Administration Commands such as those you would usually only run as the `root` user, while on Linux this information goes in chapter 8 (`man 8 intro`).

If you're unsure of the title of a chapter or what chapter number you should be using, open the man page for another similar type of command that comes with the OS and use the same chapter number in your man page.

**Basic Layout**

A typical man page starts with some preamble identifying the title and chapter number, and is then laid out in a number of sections:

| Section | Description |
| --- | --- |
| `NAME` | Name of command and summary line |
| `SYNOPSIS` | Identifies the different ways the command can be invoked and its command-line arguments |
| `DESCRIPTION` | A description of what the command does and how to use it |
| `OPTIONS` | A description of each command-line option and what effect it has |
| `SEE ALSO` | A list of related man pages or documentation |

Man pages may include any sections that are relevant, but the above list is normal for a basic man page and this article will use the above list. Other common sections that appear in man pages include `ENVIRONMENT VARIABLES`, `EXAMPLES`, `EXIT STATUS`, `FILES`, `NOTES`, `AUTHOR`, `COPYRIGHT` and `BUGS`.

Man pages are text files called `<name>.<chapter>`, where `<name>` is the name of the man page (usually the same as the command it is describing), and `<chapter>` is the chapter number in lowercase. Man pages for chapter `<chapter>` are contained within a directory called `man/man<chapter>`, again in lowercase.

**Fonts**

Throughout a man page, different font faces have particular meanings. Default text is known as "Roman". Bold text is used for text that must be typed exactly as shown (or for general emphasis within paragraphs).
Italic text, which is actually usually displayed underlined instead, is for arguments that must be replaced by something else.

Note that on Solaris, bold in man pages does not show up without some tweaking. I'll discuss this in a separate posting. In Linux, apostrophes don't always display as apostrophes in PuTTY. To fix this, make sure PuTTY is configured to assume received data is in the UTF-8 character set.

**General Formatting Rules**

**Macro commands**

Macro commands for the text formatter generally appear on new lines prefixed by a single dot. Anything else you type will appear in the man page in formatted paragraphs, fully justified against the left and right margins. The text formatter will automatically split up and hyphenate long words when necessary.

**Line breaks and paragraph breaks (.br and .LP)**

Line breaks in man pages are generally swallowed up, so if you're typing a long paragraph, you can usually hit Enter whenever you like. If you actually want to begin a new paragraph, leave one blank line. Alternatively, use the `.LP` command on a line by itself to request a new paragraph. If you want to force a line break (but not a new paragraph), use the `.br` command on a line by itself to request a line break.

Be careful when putting in line breaks. Solaris swallows up extra space when displaying man pages, but Linux does not.

**Bold text (.B, .BR and \fB)**

If a line begins `.B`, the next argument will be bold. If the text contains spaces, enclose it in double quotes. E.g.

```
The word
.B bold
will be bolded.
```

To switch back to Roman text without incurring a space, use `.BR` instead. The first argument will be bold, the second argument Roman. As before, if an argument must contain spaces, enclose it in double quotes. E.g.

```
The word
.BR bold ,
will be bolded but the comma was Roman.
```

Alternatively, the macro `\fB` starts bold face, and `\fR` returns to Roman. E.g.

```
The word \fBbold\fR, will be bold.
```

**Italic text (.I, .IR and \fI)**

As previously mentioned, italic text actually usually appears underlined. If a line begins `.I`, the next argument will be in italics. If the text contains spaces, enclose it in double quotes. E.g.

```
The word
.I italic
will be underlined.
```

To switch back to Roman text without incurring a space, use `.IR` instead. The first argument will be italic, the second argument Roman. As before, if an argument must contain spaces, enclose it in double quotes. E.g.

```
The word
.IR italic ,
will be underlined but the comma was Roman.
```

Alternatively, the macro `\fI` starts italics, and `\fR` returns to Roman. E.g.

```
The word \fIitalic\fR, will be underlined.
```

**Indenting paragraphs (.RS, .RE, .HP and .TP)**

There are several ways to achieve paragraph indentation. The simplest form is `.RS <N>`, where `<N>` is the number of characters to indent. This sets up a relative indent, and `.RE` ends a relative indent. E.g.

```
.RS 3
This paragraph is indented by 3 characters.
.RE
```

The `.RS` command can be nested to create different levels of indentation. Each successive `.RE` returns the indentation back to the previous setting. E.g.

```
.RS 3
This line is indented by 3 characters.
.RS 3
This line is indented by 6 characters.
.RE
This line is indented by 3 characters.
.RE
Now we're back to normal.
```

Alternatively, the `.HP` command can be used to set up a hanging indent. Like `.RS` it is given an argument specifying the number of characters to indent by, but it will apply to the next paragraph. To remove the indentation, start a new paragraph with `.LP`. E.g.

```
.HP 3
This paragraph is normal.
This paragraph is indented by 3 characters.
.LP
This paragraph is normal.
```

The `.TP` command sets up a tagged indent and is typically used when discussing command-line options. This allows for paragraph indentation that follows an initial line that is not indented. The first line immediately following a `.TP` command contains the text to display that is not indented. All further lines and paragraphs will be indented. The `.TP` command can be given an argument specifying the number of characters to indent, or if omitted will use whatever indentation setting was specified with the last `.TP` command. E.g.

```
.TP 8
.B -a
This argument does something.
.TP
.B -b
This argument does something else.
.LP
Now we're back to normal.
```

In the above example, the -a and -b options appear in bold in the left column, while the description of what the argument does appears in the right column. The left column is 8 characters wide. It is common to indent the whole block using a relative
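To tie the pieces above together, here is a minimal skeleton man page (an illustrative sketch only; `mycmd` is a made-up command name, and the `.TH` preamble and `.SH` section-heading macros used here are standard man macros not described above). Save it as `mycmd.1` in a `man/man1` directory:

```
.\" Minimal example man page for a made-up command called mycmd.
.TH mycmd 1 "June 2011" "mycmd 1.0" "User Commands"
.SH NAME
mycmd \- do something useful with a file
.SH SYNOPSIS
.B mycmd
[\fB\-a\fR]
[\fB\-b\fR]
.I file
.SH DESCRIPTION
.B mycmd
reads the named
.I file
and does something useful with it.
.LP
This second paragraph describes the behaviour in more detail.
.SH OPTIONS
.TP 8
.B \-a
This argument does something.
.TP
.B \-b
This argument does something else.
.SH SEE ALSO
.BR cat (1),
.BR man (1)
```

You can usually preview the result without installing anything by running `nroff -man mycmd.1 | more`, or simply `man ./mycmd.1` on Linux.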
\section{Introduction} With the evolution of multimedia technology, next-generation display technologies aim at revolutionizing the way users interact with their surrounding environment, rather than being limited to flat panels that are simply placed in front of users (\textit{i.e.}, mobile phones, computers, \textit{etc.}) \cite{cakmakci2006head,zhan2020augmented}. These technologies, including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), \textit{etc.}, have been developing rapidly in recent years. Among them, AR pursues high-quality see-through performance and enriches the real world by superimposing digital contents on it, which makes it a promising candidate for the next-generation mobile platform. With its advanced experience, AR shows great potential in several attractive application scenarios, including but not limited to communication, entertainment, health care, education, engineering design, \textit{etc.} \FigMotivationI On account of the complex application scenes, it is important to consider the perceptual Quality of Experience (QoE) of AR, which includes measuring the perceptual quality of AR and better improving the AR experience. Lately, some works have been presented to study the quality effects of typical degradations that affect digital contents in AR \cite{guo2016subjective,su2019perceptual,alexiou2018point,zhang2014subjective,zerman2019subjective}. These studies have performed subjective/objective tests on screen displays showing videos of 3D meshes or point clouds with various distortions. Moreover, with the development of Head Mounted Displays (HMDs) for AR applications, some studies have considered evaluating the QoE of 3D objects using these devices. For instance, Alexiou \textit{et al.} \cite{alexiou2017towards} have studied geometry degradations of point clouds and have conducted a subjective quality assessment study in an MR HMD system. Zhang \textit{et al.} \cite{zhang2018towards} have conducted a study towards a QoE model of AR applications using Microsoft HoloLens, which mainly focused on the perceptual factors related to the usability of AR systems. Gutierrez \textit{et al.} \cite{gutierrez2020quality} have proposed several guidelines and recommendations for subjective testing of QoE in AR scenarios. However, all these studies only focus on the degradations of the geometry and texture of 3D meshes and point clouds inside AR, \textit{e.g.,} noise, compression, \textit{etc.}; their see-through scenes are either blank or simply textured, or there are even no see-through scenes at all (opaque images/objects). Studies discussing the relationship between the augmented view and the see-through view are lacking. To address the above issues, in this paper, we consider AR technology as the \textit{superimposition} of digital contents and see-through contents, and introduce \textit{\textbf{visual confusion}} \cite{woods2010extended,peli2017multiplexing} as its basic theory. Fig. \ref{fig:1_visual_conf} demonstrates the concept of visual confusion in AR. ``B'', ``A'' and ``S'' in Fig. \ref{fig:1_visual_conf} represent the background (BG) view, the augmented view and the superimposed view, respectively. If both ``B'' and ``A'' are views with blank/simple textures, there is nothing important in the superimposed view. If one of ``B'' or ``A'' has a complex texture but the other view has a simple texture, there is also no confusion in the superimposed view. If both ``B'' and ``A'' have complex textures, visual confusion is introduced.
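As a toy illustration of this superimposition view (a sketch only; the actual mixing levels used for the datasets are described later), the superimposed view ``S'' can be modeled as a simple alpha blend of the augmented view ``A'' over the background view ``B'' with a mixing coefficient $\lambda$:
\begin{verbatim}
# Sketch only: model the superimposed view S as an alpha blend of the
# augmented view A over the see-through background B. The optical
# combination in a real AR HMD is more complex; lambda is only an
# illustrative mixing level (placeholder values below).
import numpy as np

def superimpose(background, augmented, lam=0.5):
    """Blend two same-shape images with values in [0, 1]."""
    background = np.asarray(background, dtype=np.float64)
    augmented = np.asarray(augmented, dtype=np.float64)
    assert background.shape == augmented.shape
    return np.clip(lam * augmented + (1.0 - lam) * background, 0.0, 1.0)

rng = np.random.default_rng(0)
B = rng.random((4, 4))          # stand-in for a textured background view
A = rng.random((4, 4))          # stand-in for a textured augmented view
for lam in np.linspace(0.2, 0.8, 4):   # four illustrative mixing levels
    S = superimpose(B, A, lam)
    print(round(lam, 2), round(float(S.mean()), 3))
\end{verbatim}
When both inputs carry complex textures, any intermediate $\lambda$ leaves two superimposed structures visible in ``S'', which is precisely the visual confusion studied in this paper.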
We assume that, even without introducing the specific distortions that have been widely studied in current QoE studies, visual confusion itself is a type of distortion, and it significantly influences the AR QoE. Thus, we argue that it is important to study the assessment of visual confusion towards better improving the QoE of AR. Note that this does not mean that no confusion is better than some confusion, since the objective of AR is to promote the fusion between the virtual world and the real world. Instead, the balance between them is more important. To this end, in this work, we first formulate a more general problem, which is evaluating the perceptual quality of visual confusion. A ConFusing Image Quality Assessment (CFIQA) dataset is established to make up for the absence of relevant research. Specifically, we first collect 600 reference images and mix them in pairs, which generates 300 distorted images. We then design and conduct a comprehensive subjective quality assessment experiment among 17 subjects, which produces 600 mean opinion scores (MOSs), \textit{i.e.}, for each distorted image, two MOSs are obtained, one with respect to each of its two references. The distorted and reference images, together with the subjective quality ratings, constitute the ConFusing Image Quality Assessment (CFIQA) dataset. Based on the dataset, we analyze several visual characteristics of visual confusion and propose an attention-based deep feature fusion method towards better evaluating the quality of confusing images. A specialized learning strategy for this purpose is proposed. We then compare the proposed method with several state-of-the-art IQA metrics and conduct a benchmark study. Moreover, considering that the fields of view (FOVs) of the AR image and the background image are usually different in real application scenarios, we further establish an ARIQA dataset for better understanding the perception of visual confusion in the real world. The ARIQA dataset comprises 20 raw AR images, 20 background images, and 560 distorted versions produced from them, each of which is quality-rated by 23 subjects. Besides the visual confusion distortion mentioned above, we further introduce three types of distortions to the AR contents: JPEG compression, image scaling, and contrast adjustment. Four levels of the visual confusion distortion are applied to mix the AR images and the background images. Two levels of each of the other distortion types are applied to the AR contents. To better simulate the real AR scenarios and control the experiment environment, the ARIQA experiment is conducted in a VR environment. We also design three types of objective AR-IQA models, which differ in the inputs given to classical IQA models, to study whether and how visual confusion should be considered when designing corresponding IQA metrics. An ARIQA model is finally proposed to better evaluate the perceptual quality of AR images. Overall, the main contributions of this paper are summarized as follows. 1) We discuss the visual confusion theory of AR and argue that evaluating visual confusion is one of the most important problems in evaluating the QoE of AR. 2) We establish the first ConFusing IQA (CFIQA) dataset, which can facilitate further objective visual confusion assessment studies. 3) To better simulate the real application scenarios, we establish an ARIQA dataset and conduct a subjective quality assessment experiment in a VR environment. 4) Two objective model evaluation studies are conducted on the two datasets, respectively.
5) A CFIQA model and an ARIQA model are proposed for better evaluating the perceptual quality in these two application scenarios. Our data collection software, datasets, benchmark studies, and objective metrics will be released to facilitate future research. We hope this study will motivate other researchers to consider both the see-through view and the augmented view when conducting AR QoE studies. The rest of the paper is organized as follows. In Section \ref{sec:II}, we give an overview of the related background and works. Section \ref{sec:III} describes the construction process of the CFIQA dataset. The proposed CFIQA model is then introduced in Section \ref{sec:IV}. The experimental results of objective CFIQA evaluation are given in Section \ref{sec:V}. Then an extended subjective \& objective ARIQA study is presented in Section \ref{sec:VI}. Section \ref{sec:VII} concludes the paper and discusses several future issues. \vspace{-5pt} \section{Related Work \label{sec:II}} \vspace{-2pt} In this section, we briefly review two topics related to this work: Augmented Reality and its perceptual theory basis, and image quality assessment. \vspace{-6pt} \subsection{Augmented Reality and Visual Confusion Basis \label{sec:2.1_AR}} \vspace{-2pt} This work mainly concerns head-mounted AR applications rather than mobile-phone-based AR applications. In terms of rendering methods, there are two main lines of work in the field of AR visualization: 2D displaying and 3D rendering. In terms of the theoretical basis of AR devices, there are two vision theories underlying AR technologies: binocular visual confusion and monocular visual confusion. We discuss the relationship between these four aspects and this work as follows. \textit{\textbf{2D displaying.}} The most basic application of AR is displaying digital contents in a 2D virtual plane \cite{ahn2018real}. These digital contents include images, videos, texts, shapes, and even 3D objects in 2D format, \textit{etc.} To display 2D digital contents, a real-world plane is needed to which the virtual plane is attached. The real-world plane and the virtual plane usually lie on the same \textit{Vieth–Müller} circle (\textit{a.k.a.}, the isovergence circle), which may cause visual confusion. This situation is the main consideration of this paper. \textit{\textbf{3D rendering.}} In contrast to 2D displaying, 3D rendering aims at
# Imports for python2 implementation.
from __future__ import print_function, unicode_literals
from __future__ import absolute_import, division
import sys, os
import numpy as np
import pandas as pd
import MDSplus as mds
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import ticker, cm, colors
from scipy.interpolate import Rbf, interp1d
from gadata import gadata


class ThomsonClass:
    """
    Examples of how to use ThomsonClass object.

    Example #1:
      ts = ThomsonClass(176343, 'divertor')
      ts.load_ts()
      ts.map_to_efit(times=np.linspace(2500, 5000, 10), ref_time=2500)
      ts.heatmap(detach_front_te=5, offset=0.1)

    Example #2:
      ts = ThomsonClass(176343, 'core')
      ts.load_ts()
      ts.map_to_efit(np.linspace(2500, 5000, 10))
      ts.avg_omp  # Dictionary of average omp values.
    """

    def __init__(self, shot, system):
        self.shot = shot
        self.system = system
        self.conn = None
        self.ts_dict = {}

    def __repr__(self):
        return "ThomsonClass Object\n" \
               + " Shot: " + str(self.shot) + "\n" \
               + " System: " + str(self.system)

    def load_ts(self, verbal=True, times=None, tunnel=True, filter=False,
                fs='FS04', avg_thresh=2, method='simple', window_len=11):
        """
        Function to get all the data from the Thomson Scattering "BLESSED"
        tree on atlas. The data most people probably care about is the
        temperature (eV) and density (m-3) data in ts_dict. In the 'temp' and
        'density' entries, 'X' is the time, and 'Y' is a 2D array where each
        row is the data for a single chord at each of those times. Mapping to
        psin coordinates and such is done in a different function.

        tunnel: If using locally, set to True. This means you need an ssh
                tunnel connected to localhost. The command:

                  ssh -Y -p 2039 -L 8000:atlas.gat.com:8000 username@cybele.gat.com

                should work in a separate terminal. Set to False if on the
                DIII-D network.
        times:  Provide times that a polynomial fit will be applied to, and
                then averaged over. This is my attempt at smoothing the TS
                data when it's noisy (maybe to help with ELMs).
        filter: Run the filter function to do a simple filter of the data.
        """

        # Create thin connection to MDSplus on atlas. Tunnel if connecting locally.
        if tunnel:
            conn = mds.Connection("localhost")
        else:
            conn = mds.Connection("atlas.gat.com")

        # Store the connection object in the class for later use.
        self.conn = conn

        # Open the tree of the shot we want.
        tree = conn.openTree("d3d", self.shot)

        # Will need to probably update this as I go, since I'm not sure what
        # the earliest BLESSED shot was (or which is REVISION01 for that matter).
        if self.shot <= 94741:
            base = "\\D3D::TOP.ELECTRONS.TS.REVISIONS.REVISION00"
        else:
            base = "\\D3D::TOP.ELECTRONS.TS.BLESSED"

        # Specify which system we want.
        print("Thomson system: " + self.system)
        if self.system == "core":
            base = base + ".CORE"
        elif self.system == "divertor":
            base = base + ".DIVERTOR"
        elif self.system == "tangential":
            base = base + ".TANGENTIAL"

        # List containing all the names of the nodes under BLESSED.
        nodes = ["CALCMASK", "CDPOLYBOX", "CDPULSE", "CHANNEL", "CHI_MAX",
                 "CHI_MIN", "DCDATA", "DENSITY", "DENSITY_E", "DETOPT",
                 "DCPEDESTAL", "DETOPT", "FITDATA", "FITTHRESHOLD", "FRACCHI",
                 "INIT_NE", "INIT_TE", "ITMICRO", "LFORDER", "LPROF",
                 "MAXFITS", "PHI", "PLDATA", "PLERROR", "PLPEDESTAL",
                 "PLPEDVAR", "R", "REDCHISQ", "SUPOPT", "TEMP", "TEMP_E",
                 "THETA", "TIME", "Z"]

        if not verbal:
            print("Loading Thomson data...")

        # Get the data for each node, and put it in a dictionary of X and Y values.
        # For 1D data, like "Z", the X will be the channel number, and the Y will
        # be the Z coordinate.
        for node in nodes:
            try:
                # Make the path to the node.
                path = base + "." + node
                if verbal:
                    print("Getting data from node: " + path)

                # Get the data (Y) and dimension data (X) from the node.
                data = conn.get(path).data()
                data_dim = conn.get("DIM_OF(" + path + ")").data()

                # Put into a dictionary then put it into the master dictionary.
                data_dict = {"X": data_dim, "Y": data}
                self.ts_dict[node.lower()] = data_dict

                # If the node is TEMP or DENSITY, there are nodes beneath it. I haven't
                # seen data in these nodes ever (except R and Z, but they're redundant
                # and the same as the R and Z nodes as above), but check anyways.
                if node in ["TEMP", "DENSITY"]:
                    for subnode in ["PHI", "PSI01", "PSI02", "R", "RHO01", "RHO02", "Z"]:
                        path = base + "." + node + "." + subnode
                        if verbal:
                            print("Getting data from node: " + path)
                        try:
                            data = conn.get(path).data()
                            data_dim = conn.get("dim_of(" + path + ")").data()
                            data_dict = {"Time": data_dim, subnode.lower(): data}
                            self.ts_dict[node.lower() + "." + subnode.lower()] = data_dict
                        except (mds.MdsIpException, mds.TreeNODATA):
                            if verbal:
                                print(" Node has no data.")

            # This error is returned if the node is empty. Catch it.
            #except (mds.MdsIpException, mds.TreeNODATA, mds.MdsException):
            #except (mds.MdsException):
            #    if verbal:
            #        print(" Node has no data.")

            # For compatibility with some of the older shots.
            #except mds.TreeNNF:
            #    if verbal:
            #        print(" Node not found.")

            except:
                if verbal:
                    print(" Node not found/no data.")

        # Pull these into DataFrames, a logical and easy way to represent the
        # data. Initially the rows are each a TS chord, and the columns are at
        # each time.
        self.temp_df = pd.DataFrame(columns=self.ts_dict['temp']['X'],
                                    data=self.ts_dict['temp']['Y'])
        self.dens_df = pd.DataFrame(columns=self.ts_dict['density']['X'],
                                    data=self.ts_dict['density']['Y'])
        self.temp_df.index.name = 'Chord'
        self.temp_df.columns.name = 'Time (ms)'
        self.dens_df.index.name = 'Chord'
        self.dens_df.columns.name = 'Time (ms)'

        # Transpose the data so each row is at a specific time, and the columns
        # are the chords.
        self.temp_df = self.temp_df.transpose()
        self.dens_df = self.dens_df.transpose()

        # Filter the data from ELMs and just replace with the filtered dataframes.
        if filter:
            print("Filtering data...")
            self.temp_df_unfiltered = self.temp_df
            self.dens_df_unfiltered = self.dens_df
            self.filter_elms(fs=fs, avg_thresh=avg_thresh, method=method,
                             window_len=window_len)
            self.temp_df = self.temp_df_filt
            self.dens_df = self.dens_df_filt

        # Do a polynomial fit to the data. Fifth-order.
        if times is not None:
            self.temp_df_poly = pd.DataFrame()
            self.dens_df_poly = pd.DataFrame()
            xp = np.linspace(times.min(), times.max(), 1000)
            for chord in self.temp_df:

                # Limit time between desired times.
                x = self.temp_df.index.values
                idxs = np.where(np.logical_and(x >= times.min(), x <= times.max()))[0]
                x = x[idxs]
                y_te = self.temp_df[chord].values[idxs]
                y_ne = self.dens_df[chord].values[idxs]
                z_te = np.polyfit(x, y_te, 5)   # Returns the degree-5 polynomial coefficients.
                z_ne = np.polyfit(x, y_ne, 5)
                p_te = np.poly1d(z_te)          # Creates the polynomial from the coefficients.
                p_ne = np.poly1d(z_ne)
                yp_te = p_te(xp)                # Use the fit to create 1,000 points.
                yp_ne = p_ne(xp)

                # Put into the DataFrame.
                self.temp_df_poly[chord] = yp_te
                self.dens_df_poly[chord] = yp_ne

            # Give the index the times.
            self.temp_df_poly.index = xp
            self.dens_df_poly.index = xp

        # Can we just swap temp_df out with temp_df_poly?
        #self.temp_df = self.temp_df_poly
        #self.dens_df = self.dens_df_poly

    def load_gfile_mds(self, shot, time, tree="EFIT04", exact=False,
                       connection=None, tunnel=True, verbal=True):
        """
        This is scavenged from the load_gfile_d3d script on the EFIT repository,
        except updated to run on python3.

        shot:       Shot to get gfile for.
        time:       Time of the shot to load gfile for, in ms.
        tree:       One of the EFIT trees to get the data from.
        exact:      If True will raise error if time does not exactly match any
                    gfile times. False will grab the closest time.
        connection: An MDSplus connection to atlas.
        tunnel:     Set to True if accessing outside DIII-D network.

        returns:    The requested gfile as a dictionary.
        """

        # Connect to server, open tree and go to g-file.
        if connection is None:
            if tunnel is True:
                connection = mds.Connection("localhost")
            else:
                connection = mds.Connection('atlas.gat.com')
        connection.openTree(tree, shot)

        base = 'RESULTS:GEQDSK:'

        # get time slice
        if verbal:
            print("Loading gfile:")
            print(" Shot: " + str(shot))
            print(" Tree: " + tree)
            print(" Time: " + str(time))
        signal = 'GTIME'
        k = np.argmin(np.abs(connection.get(base + signal).data() - time))
        time0 = int(connection.get(base + signal).data()[k])

        if (time != time0):
            if exact:
                raise RuntimeError(tree + ' does not exactly contain time %.2f' % time + ' -> Abort')
            else:
                if verbal:
                    print('Warning: Closest time is ' + str(time0) + '.')
                    #print('Fetching time slice ' + str(time0))
                time = time0

        # store data in dictionary
        g = {'shot': shot, 'time': time}

        # get header line
        try:
            header = connection.get(base + 'ECASE').data()[k]
        except:
            print(" No header line.")

        # get all signals, use same names as in read_g_file
        translate = {'MW': 'NR', 'MH': 'NZ', 'XDIM': 'Xdim', 'ZDIM': 'Zdim',
                     'RZERO': 'R0', 'RMAXIS': 'RmAxis', 'ZMAXIS': 'ZmAxis',
                     'SSIMAG': 'psiAxis', 'SSIBRY': 'psiSep', 'BCENTR': 'Bt0',
                     'CPASMA': 'Ip', 'FPOL': 'Fpol', 'PRES': 'Pres',
                     'FFPRIM': 'FFprime', 'PPRIME': 'Pprime', 'PSIRZ': 'psiRZ',
                     'QPSI': 'qpsi', 'NBBBS': 'Nlcfs', 'LIMITR': 'Nwall'}
        for signal in translate:
            try:
                g[translate[signal]] = connection.get(base + signal).data()[k]
            except:
\section*{Introduction} High-dimensional problems arise in a wide range of fields such as quantum chemistry, molecular dynamics, uncertainty quantification, polymeric fluids, finance... In all these contexts, one wishes to approximate a function $u$ depending on $d$ variates $x_1$, ..., $x_d$ where $d\in{\mathbb N}^*$ is typically very large. Classically, the function $u$ is defined as the solution of a Partial Differential Equation (PDE) and cannot be obtained by standard approximation techniques such as Galerkin methods, for instance. Indeed, let us consider a discretization basis with $N$ degrees of freedom for each variate ($N\in {\mathbb N}^*$), so that the discretization space is given by $$ V_N := \mbox{\rm Span}\left\{ \psi_{i_1}^{(1)}(x_1) \cdots \psi_{i_d}^{(d)}(x_d), \; 1\leq i_1, \cdots, i_d \leq N \right\}, $$ where for all $1\leq j \leq d$, $\left( \psi_i^{(j)} \right)_{1\leq i \leq N}$ is a family of $N$ functions which only depend on the variate $x_j$. A Galerkin method consists in representing the solution $u$ of the initial PDE as $$ u(x_1, \cdots, x_d) \approx \sum_{1\leq i_1, \cdots, i_d \leq N} \lambda_{i_1, \cdots,i_d} \psi_{i_1}^{(1)}(x_1) \cdots \psi_{i_d}^{(d)}(x_d), $$ and computing the set of $N^d$ real numbers $\left( \lambda_{i_1,\cdots, i_d} \right)_{1\leq i_1, \cdots, i_d \leq N}$. Thus, the size of the finite-dimensional problem to solve grows exponentially with the number of variates involved in the problem. Such methods cannot be implemented when $d$ is too large: this is the so-called \itshape curse of dimensionality\normalfont~\cite{Bellman}. Several approaches have recently been proposed in order to circumvent this significant difficulty. Let us mention among others sparse grids \cite{SchwabPeter}, tensor formats \cite{Hackbusch}, reduced bases \cite{Maday} and adaptive polynomial approximations \cite{Cohen2}. \medskip In this paper, we will focus on a particular class of methods, originally introduced by Ladev\`eze {\em et~al.} to perform time-space variable separation \cite{Ladeveze}, by Chinesta {\em et~al.} to solve high-dimensional Fokker-Planck equations in the context of kinetic models for polymers \cite{Chinesta}, and by Nouy in the context of uncertainty quantification \cite{Nouy}, under the name of \itshape Progressive Generalized Decomposition \normalfont (PGD) methods. Let us assume that each variate $x_j$ belongs to a subset ${\cal X}_j$ of ${\mathbb R}^{m_j}$, where $m_j\in{\mathbb N}^*$ for all $1\leq j\leq d$. For each $d$-uplet $(r^{(1)}, \cdots, r^{(d)})$ of functions such that $r^{(j)}$ only depends on $x_j$ for all $1\leq j\leq d$, we call a \itshape tensor product function\normalfont, and denote by $r^{(1)}\otimes \cdots \otimes r^{(d)}$, the function which depends on all the variates $x_1, \cdots, x_d$ and is defined by $$ r^{(1)}\otimes \cdots \otimes r^{(d)}: \left\{ \begin{array}{ccc} {\cal X}_1 \times \cdots \times {\cal X}_d & \to & {\mathbb R}\\ (x_1, \cdots, x_d) & \mapsto & r^{(1)}(x_1) \cdots r^{(d)}(x_d).\\ \end{array} \right . $$ The approach of Ladev\`eze, Chinesta, Nouy and coauthors consists in approximating the function $u$ by a separated variable decomposition, i.e. \begin{equation}\label{eq:sumexp} u(x_1, \cdots, x_d) \approx \sum_{k=1}^{n} r_k^{(1)}(x_1) \cdots r_k^{(d)}(x_d) = \sum_{k=1}^{n} r_k^{(1)}\otimes \cdots \otimes r_k^{(d)}(x_1, \cdots, x_d), \end{equation} for some $n\in{\mathbb N}^*$. In the above sum, each term is a tensor product function.
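To fix ideas on the gain promised by the representation (\ref{eq:sumexp}), the following short numerical sketch (an illustration only, with arbitrarily chosen toy data) compares the storage of a full tensor of coefficients, of size $N^d$, with that of a rank-$n$ separated representation, of size $nNd$, and evaluates the latter at a grid point:
\begin{verbatim}
# Illustration of a separated (sum of tensor products) representation:
# storing n*N*d coefficients instead of N**d, and evaluating it pointwise.
import numpy as np

N, d, n = 50, 10, 5                  # grid size per variate, dimension, rank
rng = np.random.default_rng(0)

# r[k][j] holds the N grid values of the function r_k^{(j)} of the variate x_j.
r = [[rng.standard_normal(N) for j in range(d)] for k in range(n)]

full_storage = N ** d                # intractable: 50**10 ~ 9.8e16 coefficients
separated_storage = n * N * d        # only 2500 coefficients
print(full_storage, separated_storage)

def evaluate(indices):
    """Evaluate sum_k prod_j r_k^{(j)}(x_{i_j}) at the grid point `indices`."""
    return sum(np.prod([r[k][j][indices[j]] for j in range(d)]) for k in range(n))

print(evaluate([0] * d))
\end{verbatim}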
Each $d$-uplet of functions $\left(r_k^{(1)}, \cdots, r_k^{(d)}\right)$ is iteratively computed in a \itshape greedy \normalfont \cite{Temlyakov} way: once the first $k$ terms in the sum (\ref{eq:sumexp}) have been computed, they are fixed, and the $(k+1)^{th}$ term is obtained as the \itshape next best tensor product function \normalfont to approximate the solution. This will be made precise below. Thus, the algorithm consists in solving several low-dimensional problems whose dimensions scale linearly with the number of variates and may be implementable when classical methods are not. In this case, if we use a discretization basis with $N$ degrees of freedom per variate as above, the size of the discretized problems involved in the computation of a $d$-uplet $\left(r_k^{(1)}, \cdots , r_k^{(d)}\right)$ scales like $Nd$ and the total size of the discretization problems is $nNd$. This numerical strategy has been extensively studied for the resolution of (linear or nonlinear) elliptic problems \cite{Temlyakov,LBLM,Figueroa,CELgreedy,NouyFalco}. More precisely, let $u$ be defined as the unique solution of a minimization problem of the form \begin{equation}\label{eq:min} u = \mathop{\mbox{\rm argmin}}_{v\in V} \mathcal{E}(v), \end{equation} where $V$ is a reflexive Banach space of functions depending on the $d$ variates $x_1$, ..., $x_d$, and $\mathcal{E}: V \to {\mathbb R}$ is a coercive real-valued energy functional. Besides, for all $1\leq j \leq d$, let $V_{x_j}$ be a reflexive Banach space of functions which only depend on the variate $x_j$. The standard greedy algorithm reads: \begin{enumerate} \item set $u_0 = 0$ and $n=1$; \item find $\left(r_n^{(1)}, \cdots, r_n^{(d)}\right) \in V_{x_1}\times \cdots \times V_{x_d}$ such that $$ \left(r_n^{(1)}, \cdots, r_n^{(d)}\right) \in \mathop{\mbox{argmin}}_{\left( r^{(1)}, \cdots, r^{(d)}\right) \in V_{x_1} \times \cdots \times V_{x_d}} \mathcal{E} \left( u_{n-1} + r^{(1)} \otimes \cdots \otimes r^{(d)} \right), $$ \item set $u_n = u_{n-1} + r_n^{(1)}\otimes \cdots \otimes r_n^{(d)}$ and $n=n+1$. \end{enumerate} Under some natural assumptions on the spaces $V$, $V_{x_1}$, ..., $V_{x_d}$ and the energy functional $\mathcal{E}$, all the iterations of the greedy algorithm are well-defined and the sequence $(u_n)_{n\in{\mathbb N}^*}$ strongly converges in $V$ towards the solution $u$ of the original minimization problem (\ref{eq:min}). This result holds in particular when $u$ is defined as the unique solution of $$ \left\{ \begin{array}{l} \mbox{\rm find } u \in V \mbox{\rm such that}\\ \forall v\in V, \; a(u,v) = l(v),\\ \end{array} \right . $$ where $V$ is a Hilbert space, $a$ a \itshape symmetric \normalfont continuous coercive bilinear form on $V\times V$ and $l$ a continuous linear form on $V$. In this case, $u$ is equivalently solution of a minimization problem of the form (\ref{eq:min}) with $\mathcal{E}(v) = \frac{1}{2}a(v,v) - l(v)$ for all $v\in V$. \medskip However, when the function $u$ cannot be defined as the solution of a minimization problem of the form (\ref{eq:min}), designing efficient iterative algorithms is not an obvious task. This situation occurs typically when $u$ is defined as the solution of a \itshape non-symmetric \normalfont linear problem $$ \left\{ \begin{array}{l} \mbox{\rm find } u \in V \mbox{\rm such that}\\ \forall v\in V, \; a(u,v) = l(v),\\ \end{array} \right . $$ where $a$ is a non-symmetric continuous bilinear form on $V\times V$ and $l$ is a continuous linear form on $V$. 
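As an elementary illustration of the algorithm above (a toy sketch only, not taken from the references cited here), consider the case $d=2$ with $\mathcal{E}(v)=\frac{1}{2}\|v-F\|^2$ for a given discretized function $F$, which corresponds to $a(u,v)=\langle u,v\rangle$ and $l(v)=\langle F,v\rangle$. Each greedy step then computes a best rank-one correction $r^{(1)}\otimes r^{(2)}$ of the current residual, here by a few alternating minimization sweeps:
\begin{verbatim}
# Toy greedy construction of a separated approximation of a 2D array F:
# at each step, a best rank-one correction of the residual is computed by
# alternating minimization of E(v) = 0.5*||u_{n-1} + r1 (x) r2 - F||^2.
import numpy as np

def greedy_rank_one_steps(F, n_terms=5, sweeps=20):
    u = np.zeros_like(F)
    for _ in range(n_terms):
        R = F - u                          # current residual
        r1 = np.ones(F.shape[0])
        r2 = np.ones(F.shape[1])
        for _ in range(sweeps):            # alternating minimization (fixed point)
            r1 = R @ r2 / (r2 @ r2)        # minimize over r1 with r2 fixed
            r2 = R.T @ r1 / (r1 @ r1)      # minimize over r2 with r1 fixed
        u = u + np.outer(r1, r2)           # u_n = u_{n-1} + r1 (x) r2
        print("relative error:", np.linalg.norm(F - u) / np.linalg.norm(F))
    return u

# Example: a smooth function of (x, y) sampled on a grid.
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(0.0, 1.0, 80)
F = np.exp(-np.subtract.outer(x, y) ** 2) + np.outer(np.sin(np.pi * x), y ** 2)
greedy_rank_one_steps(F, n_terms=5)
\end{verbatim}
For this quadratic energy, the greedy step coincides with the best rank-one approximation of the residual, i.e.\ its dominant singular pair, to which the alternating sweeps generically converge; the relative error therefore decreases at each iteration, in line with the convergence result recalled above.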
The aim of this article is to give an overview of the state of the art of the numerical methods based on the greedy iterative approach used in this non-symmetric linear context and of the remaining open questions concerning this issue. In Section~\ref{sec:sym}, we present the standard greedy algorithm for the resolution of symmetric coercive high-dimensional problems and the theoretical convergence results proved in this setting. Section~\ref{sec:nonsym} explains why a naive transposition of this algorithm for non-symmetric problems is doomed to failure and motivates the need for more subtle approaches. Section~\ref{sec:resmin} describes the certified algorithms existing in the literature for non-symmetric problems. All of them consist in \itshape symmetrizing \normalfont the original non-symmetric problem by minimizing the residual of the equation in a well-chosen norm. However, depending on the choice of the norm, either the conditioning of the discretized problems may behave badly or several intermediate problems may have to be solved online, which leads to a significant increase of simulation times and memory needs compared to the original algorithm in a symmetric linear coercive case. So far, there are no methods avoiding these two problems and for which there are theoretical convergence results in the general case. In Section~\ref{sec:dual}, we present some existing algorithms designed by Nouy \cite{NouyMinMax} and Lozinski \cite{Lozinski} to circumvent these difficulties and the partial theoretical results which are known for these algorithms. Section~\ref{sec:us} is concerned with another algorithm we propose, for which some partial convergence results are proved. In Section~\ref{sec:num}, the behaviors of the different algorithms presented here are illustrated on simple toy numerical examples. Lastly, we present in the Appendix some possible tracks to design other methods, but for which further work is needed. \section{The symmetric coercive case}\label{sec:sym} \subsection{Notation}\label{sec:notation} Let us first introduce some notation. Let $d$ be a positive integer, $m_1$, ..., $m_d$ positive integers and ${\cal X}_1$, ...,${\cal X}_d$ open subsets of ${\mathbb R}^{m_1}$, ..., ${\mathbb R}^{m_d}$ respectively. Let $\mu_{x_1}$, ..., $\mu_{x_d}$ denote measures on ${\cal X}_1$, ..., ${\cal X}_d$ respectively.
looking at things makes it unsurprising that the introduction to the market of “app stores” – directly linking basement-entrepreneurs to customers – that soaks up readers who have become disillusioned with both the labor market and VC market. The “perceived promise” moves in the direction of DIY, direct-to-consumer business models. Hence the good performance of Objective C titles (as noted above) What does the future hold, then? Perhaps titles around health care by fall of this year or early next, depending on how the legislative process goes. Perhaps android, chrome, chromeos, and wave titles but not so much until next year, at least. Some see likely a “double dip” economy where late this year or early next the indexes take another, deeper dive and unemployment gets even worse. That means that android and chromeos are going to have trouble because consumer demand for devices will be lousy. Meanwhile, overall spending at app stores won’t grow as fast and the average and median returns for an app will fall as the supply side becomes overcrowded. It’s not clear to me that health reform, if it happens much at all, is genuinely going to create a huge rush of new software development spending. (I.e., I’m not I’d be rushing to come up with a lot of health care titles at this stage.) On top of all that gloom, it looks like backlash against social networking services and Google is on the rise as awareness of their surveillance-based business models seeps into greater public awareness. What’s left? One wise guy likes to quip “work on things that matter” and I think what “matters” in this scenario is helping consumers cope with deprivation and aid in escape from lock-in to business models in which they lose trust. On the server side, distributed, decentralized systems for one-to-one, many-to-one, one-to-many, and many-to-many communication look interesting. On the client side, doing “more” with “less [expensive] HW” is an interesting trick. A positive sign here is that OEM deals for less expensive hardware are easier to come by for less money for a broader range of gadgets. If I were in an O’Reilly editors’ meeting, I might suggest looking for books about how to get good deals in the leased server market, how to build slender gnu/linux distros, how to use developer kits for new gadgets, how to use various dist./decent. server technologies, and how to get innovations built with such parts to consumers professionally and effectively yet with as little investment as possible. I also assume, as I think O’Reilly does, that out of the current buzz of activity around open government some platforms will emerge that recapitulate *in miniature* the kind of labor demand growth we saw with the early LAMP stack. That is, open gov platforms applicable at the municipal, county, or state level (and perhaps also in some corporate settings) and resulting demand to customize these and deploy them in many different governments. I’m not sure, though, how large an industry that will be given the fiscal condition of all of those levels of government (and of the federal government). Potholes, fire departments, schools, and so forth come first and are increasingly in triage mode. It’s hard for a government to decide to undertake long-term IT investing at a time when its busy trying to keep the most basic services from falling apart. Still, some new platforms will arise, there will be some buzz about them, and that’ll be good for at least a small boost to book sales for a little while. 
This is probably already in the pipeline, somewhere. -t • Wondering Has anyone considered that Windows consumer would be down anyway? I haven’t noticed a lot of Windows 7 books out there, and even non-technical people know it is coming. • http://zooofthenew.com BBrian Any idea when Paul McCartney will be playing next? Everyone else is secondary, he’s a hero. • Ted Here’s the thing: some of your books are too pricey. This book: http://oreilly.com/catalog/9780596518738/ worth 10-15 bucks. This one: http://oreilly.com/catalog/9780596529307/ while its quality is unquestionable, just doesn’t worth 30 bucks (the thickness factor brought it down). I’d say it worth 15-17 ish. So what would I do? I would go to my local library, borrow the second book, take notes, put it on my blog for me to recall the information in the future. Some of your books worth as much as an “International Edition” version of textbooks; why would the customer buy something that is more expensive and not everlasting? I went to a closing-down bookstore once (Half Price Computer Books), people were grabbing cheap books (price tag: $5-$7) even though some looked outdated or have low hype level (Web Programming in Tcl/Tk). If you guys can cut the price to $5-$20 with $30 tops for thick book like C# 3.0 (I consider Erlang, Haskell or anything less not mainstream does not worth$30 even if the book is thick), you would see more of your brands in any bookshelves. I’m pretty sure that authors understand that they can’t make much money out of selling technical books. They use the opportunity to publish their names/brands/consulting companies. • Maya How does the downturn in 2009 compare with the period of 2001-2003? Book sales slumped significantly then, and it would be good to see how we’re stacking up now. • Simon Hibbs @Ted: Erlang and Haskel have very limited audiences, therefore limited sales. Let’s say that 60% of the book price is profit when sold direct. You would at least have to increase sales by 6 times to break even, whereas in practice it’s unlikely you’d even double sales. Try setting up your own publishing company and see how far your philosophy gets you. Seriously, if your model worked there would be people doing it. • Ted Hoise For me, I’ve gotten a little gun shy. The programming book market has grown rapidly, but the quality seems to have deteriorated badly. The books I have purchased lately don’t seem to be giving me the information (and value) I am looking for. I am using the web more and more. Sometimes I find what I need; sometimes not. But at least I am not out 30-40-50 bucks on a book that doesn’t really tell me anything. That really hacks me. • http://www.hamagudi.com Tathagata Chakraborty Thanks for the data! @Ted I agree with you. It does seem that the quality of technical books have detoriated lately. The authors don’t seem to be be doing enough research before writing the books, which is sad. Probably, everyone is in a race to complete a book before the book gets outdated. • http://robreed.net/weblog Rob Reed It seems obvious enough that the size of the audience is only increasing both in the US and internationally, i.e. there are more people now than there have ever been who need access to the sort of information in books you publish. This will continue to be true until there is some radical, and unforeseeable change. As a society we haven’t even begun to approach the idea of computer fluency in any real way. 
As someone with more than a decade’s worth of IT experience who recently went back to school for a Masters in CS I was taken aback by just how computer illiterate the community of people at a well-respected university in the U.S. is in the year 2009. Faculty and students both outside of the CS department and within the department simply don’t know how to use their personal computers, much less understand topics related to networking and infrastructure. The ability to think creatively about how to utilize technology is almost completely absent. People struggle with how to use single applications. Many of whom have been living with this technology more or less in its current form for 10 – 15 years now. Long story short, there are a lot of problems. The best answer for all
Philadelphia, in speaking of the same fruit said, "It has the greatest flavor that has ever tickled the palate of man. It is commonly called the Aristocrat of Fruits, and persons who attempt to describe it use up all their supply of superlatives, and "hope the day will come when they can serve you with one of the big beautiful Hayden or Mulgobas," for it is the improved type to which they refer. The seedlings are far inferior, but still are eaten and enjoyed by millions of people in all tropical countries. Ask ten people who have lived for years in the tropics and nine of them will tell you that the turpentine mango is delicious, and they don't mind the turpentine flavor at all, in fact that they like it, but of course it is not to be compared with the improved mango.

[Photo caption: A well fruited papaya tree, showing various sizes from blossoms to matured fruit.]

Those who have eaten the jungle variety of mango only, and failed to find reason for desiring a second taste, have a delightful surprise awaiting them when they have their first bite of the improved type of the same fruit. The skin of the Hayden is almost as smooth and glossy as an apple, and its texture is between that of an apple and a peach, while the flesh is of the rich golden yellow of the Alberta. The flavor is so delicious that most people are immediately fond of it, and refuse to eat them in any way except sliced, like peaches, or served in halves and eaten with a spoon. However, there are many ways of preparing them. They are so wholesome and possessed of such an absolutely good taste that much is claimed for them as a beneficial article of diet. Someone has said they didn't need medicinal value; they were so delicious that they need nothing more to recommend them. Vitamines gambol within their lovely walls, ready to assist in the little matter of the digestion of him who eats thereof. They are eaten in such prodigious quantities by children and grownups alike that no claim for them seems to sound over enthusiastic. One really hesitates to limit claims therefor. Like the apple of the North, "it never was educated. It started to school, but didn't get there. It was too good to save till school was reached." In the South one sees children eating them on the way to school, on the streets and everywhere. Improved varieties are being shipped in increasing quantities to markets which eagerly await them, northern markets demanding more than can at present be supplied.

Mangoes may be used even before they are ripe. Mrs. C. C. Aston, who was one of the old settlers at the mouth of the Miami River, says that there are two ways of making very simple mango pie.

Green Mango Pie

Take mangoes that are just turning, slice thin and stew till tender; add teaspoon butter; line deep pie pan with puff paste; put in mangoes; add sugar and juice of half a lemon, or lime; grate nutmeg over top crust, cut in narrow strips and bake in hot oven.

Another good way to make mango pie is to boil the mangoes; run through a ricer or sieve and make same as egg custard. Many persons prefer to use the mangoes while quite green, exactly as they do rhubarb, leaving out all spices, as the rich flavor of the mango needs no addition of seasoning to enhance its palatability.
Peel and slice the green mangoes. Line the pan with good paste and put the fruit into it. Sprinkle with sugar and flour and add a tablespoon water. This is said to be equal to rhubarb pie and is so similar that many persons mistake it for that popular member of the pie tribe.

It is a far cry from the little fibre-filled mangoes of the jungle type, with their strong turpentine flavor, to the big, beautiful, fibreless mangoes which are about the size of a cantaloupe. There is more difference between the two than between the puckery little choke cherry, with its big pit, little skin and no meat, and the largest and most highly improved Early Richmond or big, meaty, black cherries. Almost as great a gulf exists between a hard, sour, tough crab-apple and a big, beautiful Delicious Apple.

[Photo caption: The Sundersha Mango is a new variety introduced in Florida by the government's experiment station.]

Ripe Mango Pie

Peel and slice ripe fruit and proceed as above, using less sugar. By selecting mangoes that are not ripe enough to be stringy, even inferior varieties of seedling mangoes may be used. Peel and slice thin. Put tablespoon butter in frying pan; heat and add mangoes; add sugar, nutmeg and cinnamon; cover and cook very slowly in order not to scorch.

Mango Dumplings

Make a rich baking powder biscuit dough. Roll almost as thin as pie crust; cut into squares large enough to cover an apple; put in the middle of each square a piece of mango about the size of an apple; sprinkle a teaspoon of sugar, a small piece of butter; turn the ends of the dough over the mango and lap them tight. Lay the dumplings in a well buttered pan, the smooth side upward; when the pan is filled put a small piece of butter on top of each dumpling, sprinkle the whole with a cupful of sugar, pour in a cupful of boiling water, then place in moderate oven for an hour. Baste with the liquor once or twice. Serve with sugar and cream or a pudding sauce.

Canned Mangoes

Peel the fruit into neat slices, cutting from stem end. Put in boiling syrup, boil ten minutes, and place in well sterilized jars and seal at once. Make syrup by using one cup sugar and one cup water. Do not attempt to cook a large quantity at one time or slices will break.

Mrs. M. E. Jones, who came with her parents to Miami when a child, grew up, married and lived here for a number of years, gives her simple method of making marmalade of the fibre-filled jungle mangoes.

Mango Marmalade

Peel ripe mangoes and grate on coarse grater; strain through ricer or sieve to remove fibre; boil five minutes with a little less than equal part sugar till stiff. Others peel the ripe fruit and put into kettle with water to half cover. Pulp may or may not be cut from the seed. The latter makes a smoother marmalade. When the fruit is tender, rub through colander. Return to preserving kettle with one cup sugar to each quart of pulp. Boil thirty minutes and seal at once.

Mrs. Jones says that a delicious ice cream may be made from the mangoes by preparing them as for marmalade, about half a dozen to a two-quart freezer. Use any good ice cream recipe, add mangoes and freeze as any fruit ice cream.

Mango Jelly

For jelly the green fruit is used. Peel and cook the green mangoes. Strain, and to each cup of boiling juice, add one cup of sugar. Boil till jelly forms when dropped from a spoon. Mrs. W. A. Fickle who is
the fiscal year ending Jan. 31, 2021, was $4.75, according to The Wall Street Journal. Consequently, Walmart's P/E ratio is $139.55 / $4.75 = 29.38.

### Comparing Companies Using P/E

As an additional example, we can look at two financial companies to compare their P/E ratios and see which is relatively over- or undervalued.

Bank of America Corporation (BAC) closed out the year 2021 with the following stats:

• Stock Price = $30.31
• Diluted EPS = $1.87
• P/E = 16.21x ($30.31 / $1.87)

In other words, Bank of America traded at roughly 16x trailing earnings. However, the 16.21 P/E multiple by itself isn't helpful unless you have something to compare it with, such as the stock's industry group, a benchmark index, or Bank of America's historical P/E range.

Bank of America's P/E at 16x was slightly higher than the S&P 500, which over time trades at about 15x trailing earnings.

To compare Bank of America's P/E to a peer's, we calculate the P/E for JPMorgan Chase & Co. (JPM) as of the end of 2020:

• Stock Price = $127.07

## P/E vs. Earnings Yield

The inverse of the P/E ratio is the earnings yield (which can be thought of as the E/P ratio). The earnings yield is thus defined as EPS divided by the stock price, expressed as a percentage.

### Relative P/E

The relative P/E compares the current absolute P/E to a benchmark or a range of past P/Es over a relevant time period, such as the past 10 years. The relative P/E shows what portion or percentage of the past P/Es the current P/E has reached. The relative P/E usually compares the current P/E value to the highest value of the range, but investors might also compare the current P/E to the bottom of the range, measuring how close the current P/E is to the historic low.

The relative P/E will have a value below 100% if the current P/E is lower than the past value (whether the past high or low). If the relative P/E measure is 100% or more, this tells investors that the current P/E has reached or surpassed the past value.

## Limitations of Using the P/E Ratio

Like any other fundamental designed to inform investors as to whether or not a stock is worth buying, the price-to-earnings ratio comes with a few significant limitations that are important to take into account, because investors may often be led to believe that there is one single metric that will provide complete insight into an investment decision, which is almost never the case.

Companies that aren't profitable and, consequently, have no earnings (or negative earnings per share) pose a challenge when it comes to calculating their P/E. Opinions vary as to how to deal with this. Some say there is a negative P/E, others assign a P/E of 0, while most just say the P/E doesn't exist (N/A, or not available) or is not interpretable until a company becomes profitable, for purposes of comparison.

One primary limitation of using P/E ratios emerges when comparing the P/E ratios of different companies. Valuations and growth rates of companies may often vary wildly between sectors due to both the different ways companies earn money and the differing timelines during which companies earn that money.

As such, one should only use P/E as a comparative tool when considering companies in the same sector, because this kind of comparison is the only kind that will yield productive insight.
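To make the arithmetic above concrete, here is a small illustrative Python snippet (not part of the original article; the 10-year-high figure used for the relative P/E is a made-up placeholder) that reproduces the Walmart and Bank of America calculations and expresses a relative P/E against a past range:

```python
# Illustrative only: P/E and relative P/E arithmetic using the figures quoted above.
def pe_ratio(price, eps):
    """Trailing P/E: share price divided by diluted EPS."""
    return price / eps

def relative_pe(current_pe, benchmark_pe):
    """Current P/E expressed as a fraction of a past (e.g. 10-year-high) P/E."""
    return current_pe / benchmark_pe

walmart_pe = pe_ratio(139.55, 4.75)   # ~29.4, as computed above
bac_pe = pe_ratio(30.31, 1.87)        # ~16.2, as computed above
print(round(walmart_pe, 2), round(bac_pe, 2))

# Hypothetical: suppose BAC's highest P/E over the past 10 years had been 25x.
print(round(relative_pe(bac_pe, 25.0), 2))   # ~0.65, i.e. about 65% of the past high
```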
Comparing the P/E ratios of a telecommunications company and an energy company, for example, may lead one to believe that one is clearly the superior investment, but this is not a reliable assumption.

## Other P/E Considerations

An individual company's P/E ratio is much more meaningful when taken alongside the P/E ratios of other companies within the same sector. For example, an energy company may have a high P/E ratio, but this may reflect a trend within the sector rather than one merely within the individual company. An individual company's high P/E ratio, for example, would be less cause for concern when the entire sector has high P/E ratios.

Furthermore, because a company's debt can affect both the prices of shares and the company's earnings, leverage can skew P/E ratios as well. For example, suppose there are two similar companies that differ primarily in the amount of debt they assume. The one with more debt will likely have a lower P/E value than the one with less debt. However, if business is good, the one with more debt stands to see higher earnings because of the risks it has taken.

Another crucial limitation of price-to-earnings ratios is one that lies within the formula for calculating P/E itself. Accurate and unbiased presentations of P/E ratios rely on accurate inputs of the market value of shares and of accurate earnings-per-share estimates. The market determines the prices of shares through its continuous auction. The printed prices are available from a wide variety of reliable sources. However, the source for earnings information is ultimately the company itself. This single source of data is more easily manipulated, so analysts and investors place trust in the company's officers to provide accurate data. If that trust is perceived to be broken, the stock will be considered riskier and therefore less valuable.

To reduce the risk of inaccurate data, the P/E ratio is but one measurement that analysts scrutinize. If the company were to intentionally manipulate the numbers to look better, and thus deceive investors, it would have to work strenuously to be certain that all metrics were manipulated in a coherent manner, which is difficult to do. That's why the P/E ratio continues to be one of the most centrally referenced points of data when analyzing a company, but by no means is it the only one.

## What Is a Good Price-to-Earnings Ratio?

The question of what is a good or bad price-to-earnings ratio will necessarily depend on the industry in which the company is operating. Some industries will have higher average price-to-earnings ratios, while others will have lower ratios. For example, in January 2021, publicly traded broadcast companies had an average trailing P/E ratio of only about 12, compared to more than 60 for software companies. If you want to get a general idea of whether a particular P/E ratio is high or low, you can compare it to the median P/E of the competitors within its industry.

## Is It Better to Have a Higher or Lower P/E Ratio?

Many investors will say that it is better to buy shares in companies with a lower P/E, because this means you are paying less for every dollar of earnings that you receive. In that sense, a lower P/E is like a lower price tag, making it attractive to investors looking for a bargain. In practice, however, it is important to understand the reasons behind a company's P/E.
For instance, if a company has a low P/E because its business model is fundamentally in decline, then the apparent bargain might be an illusion.

## What Does a P/E Ratio of 15 Mean?

Simply put, a P/E ratio of 15 would mean that the current market value of the company is equal to 15 times its annual earnings. Put literally, if you were to hypothetically buy 100% of the company's shares, it would take 15 years for you to earn back your initial investment.
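To make the arithmetic in this section concrete, here is a minimal Python sketch that reproduces the P/E and earnings-yield calculations from the figures quoted above (Walmart at $139.55 with $4.75 diluted EPS, Bank of America at $30.31 with $1.87). The function names are my own, not from any finance library, and the relative-P/E helper is just a literal reading of the definition given earlier (current P/E versus the highest P/E of a past range).

```python
def pe_ratio(price: float, eps: float) -> float:
    """Price-to-earnings ratio: share price divided by (diluted) EPS."""
    if eps <= 0:
        # Negative or zero earnings: P/E is usually reported as N/A.
        raise ValueError("P/E is not meaningful for non-positive EPS")
    return price / eps

def earnings_yield(price: float, eps: float) -> float:
    """Inverse of the P/E: EPS divided by price, expressed as a percentage."""
    return eps / price * 100

def relative_pe(current_pe, past_pes):
    """Current P/E as a percentage of the highest P/E over a past range."""
    return current_pe / max(past_pes) * 100

# Figures quoted in the text above.
wmt_pe = pe_ratio(139.55, 4.75)   # ~29.4x
bac_pe = pe_ratio(30.31, 1.87)    # ~16.2x
print(f"Walmart P/E: {wmt_pe:.2f}, Bank of America P/E: {bac_pe:.2f}")
print(f"BAC earnings yield: {earnings_yield(30.31, 1.87):.1f}%")
```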
it was misclassified.

\subsection{Affine Pose Regression}
Affine Pose Regression (APR) was recently proposed in \cite{APR} as a method for improving the performance of face alignment methods. In contrast to Cascaded Shape Regression (CSR), APR estimates a rigid transform of the entire face shape:
\begin{equation} \label{eq:transform} S' = \begin{bmatrix} a & b\\ c & d \end{bmatrix} S + \begin{bmatrix} t_x & \dots & t_x\\ t_y & \dots & t_y \end{bmatrix}, \end{equation}
where $S$, a $2\times n$ matrix, is the current estimate of the face shape, $n$ is the number of landmarks in the face shape and $a, b, c, d, t_x, t_y$ are the parameters of the transform. The parameters are estimated by linear regression based on HOG features extracted at the facial landmarks. APR can be applied before CSR or in between CSR iterations to efficiently compensate for inaccurate initialization of the face shape in scale, translation and in-plane rotation. In this work we propose to improve the original APR framework by using KRFWS instead of linear regression. We estimate all the transform parameters by creating separate KRFWS models for $a,b,c,d$ and a joint model for $t_x, t_y$. Instead of extracting features at individual landmarks we extract a single feature that covers the entire face. We show the effectiveness of our approach in experiments on the 300-W dataset \cite{300-W} in section \ref{sec:experiments}.

\subsection{3D Affine Pose Regression}
As mentioned in the previous section, APR can be applied before CSR to compensate for inaccuracy in scale, translation and in-plane rotation of the face shape estimate. In this section we propose a method to extend APR by taking into account out-of-plane rotation of the head, namely yaw and pitch. Our method, which we call 3D-APR, fits an average 3D face shape to the face in the image and uses the 2D projection of that shape as an initialization for face alignment. 3D Affine Pose Regression (3D-APR) consists of two steps: first we fit an average 3D face shape $\bar{S}$ to the initial face shape estimate $S$. The fitting is accomplished using a scaled orthographic projection:
\begin{equation} \label{eq:scaledorto} \bar{s} = k \cdot P \cdot \bar{S} + \begin{bmatrix} t_x & \dots & t_x\\ t_y & \dots & t_y \end{bmatrix}, \end{equation}
where $\bar{s}$ is the projected shape, $k$ is a scaling factor, $P$ are the first two rows of a rotation matrix and $t_x$, $t_y$ are translation parameters. The values of the parameters $\Gamma=\{k,P,t_x,t_y\}$ for any shape are obtained by solving the following optimization problem:
\begin{equation} \label{eq:gauss} \Gamma = \operatornamewithlimits{arg\min}_{\Gamma} \lVert \bar{s} - S \rVert^2. \end{equation}
The optimization is performed using the Gauss-Newton method as in \cite{2Dvs3D}. In the second step we estimate an update $\Delta$ to $\Gamma$ that refines the projected 3D shape $\bar{s}$ so that it is closer to the true shape of the face in the image. As in APR, the estimation is performed using KRFWS based on a PHOG descriptor extracted at the face region. Separate KRFWS models are trained to estimate $k$ and $P$, while a joint model is used for $t_x$ and $t_y$. The projection matrix $P$ is parametrized using Euler rotations. In practice, the estimation of $P$ is equivalent to head pose estimation. Learning is performed similarly to learning in APR and CSR. The training set consists of a set of images with corresponding ground truth landmark locations.
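As a concrete illustration of the affine update defined above (the equation labelled eq:transform), here is a short NumPy sketch that applies the parameters $(a, b, c, d, t_x, t_y)$ to a $2\times n$ landmark matrix and recovers them from a pair of shapes by ordinary least squares. This is only a stand-in for the regression step described in the text, which estimates the update with KRFWS from PHOG features; the function names are invented for the example.

```python
import numpy as np

def apply_affine(S, a, b, c, d, tx, ty):
    """Apply the 2D affine update to a 2 x n landmark matrix S."""
    A = np.array([[a, b], [c, d]], dtype=float)
    t = np.array([[tx], [ty]], dtype=float)   # broadcast to every landmark
    return A @ S + t

def fit_affine(S, S_target):
    """Least-squares fit of (a, b, c, d, tx, ty) mapping S onto S_target.
    A simple stand-in for the learned (KRFWS) regression used in the paper."""
    n = S.shape[1]
    X = np.column_stack([S[0], S[1], np.ones(n)])             # n x 3 design matrix
    row_x, *_ = np.linalg.lstsq(X, S_target[0], rcond=None)   # a, b, tx
    row_y, *_ = np.linalg.lstsq(X, S_target[1], rcond=None)   # c, d, ty
    a, b, tx = row_x
    c, d, ty = row_y
    return a, b, c, d, tx, ty

# Toy usage: recover a known rotation + translation from noiseless landmarks.
rng = np.random.default_rng(0)
S = rng.uniform(0, 64, size=(2, 68))              # 68 landmarks in a 64 x 64 crop
theta = np.deg2rad(10)
S_true = apply_affine(S, np.cos(theta), -np.sin(theta),
                      np.sin(theta), np.cos(theta), 3.0, -2.0)
print(np.round(fit_affine(S, S_true), 3))         # ~ (0.985, -0.174, 0.174, 0.985, 3, -2)
```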
For each image a number of initial shapes are generated from the ground truth shape. For each initial shape, the initial parameters $\Gamma$ and the ground truth parameters $\Gamma'$ are obtained using equation \eqref{eq:gauss} and the ground truth annotations. KRFWS learning is then applied to map the PHOG descriptor to the update $\Delta = \Gamma' - \Gamma$.

\section{Experiments} \label{sec:experiments}
In this section we test the effectiveness of the proposed methods in affine pose regression, face alignment and head pose estimation. The parameters we use for our methods in APR and face alignment have been established through cross-validation, with the exception of the number of children $K$. $K$ was set following \cite{KRF}, where the authors have found $K=2$ to be optimal for a target space similar to ours. We plan to investigate different values of $K$ in future experiments.

\subsection{Affine pose regression}
We test the effectiveness of APR and 3D-APR on the 300-W dataset \cite{300-W}, which consists of face images with corresponding ground truth annotations of 68 characteristic points and bounding boxes generated by a face detector. The images in 300-W are gathered from several other datasets: AFW \cite{AFW}, HELEN \cite{HELEN}, IBUG \cite{300-W} and LFPW \cite{LFPW}. For learning we use the AFW dataset and the training subsets of the HELEN and LFPW datasets, which together consist of 3148 face images. Our test dataset consists of two subsets: the challenging IBUG dataset (135 images) and the less challenging test subsets of the LFPW and HELEN datasets (554 images). Together the two datasets form what we refer to as the full set. This division of the 300-W dataset is a standard in face alignment testing, employed in many recent articles \cite{LBF, CFSS, TransferredDCNN}. Each method is initialized with the face detector bounding box provided in the 300-W dataset. Similarly to \cite{LBF}, \cite{CFSS} we use the inter-pupil distance normalized landmark error; all errors are expressed as a \% of the inter-pupil distance. The pupil locations are assumed to be the centroids of the landmarks located around each of the eyes. Five different configurations of APR and 3D-APR are tested: (1) Linear APR with feature extraction at landmarks, (2) Linear APR with a single feature extracted at the face center, (3) KRF APR with a single feature extracted at the face center, (4) KRFWS APR with a single feature extracted at the face center, (5) KRFWS APR followed by 3D-APR with a single feature extracted at the face center (Combined APR, CAPR). In all of the configurations the images are rescaled so that the face size is approximately $64\times 64$ pixels. In all experiments APR is performed for two iterations, while 3D-APR is performed once. In the first configuration Pyramid HOG \cite{KRF} features covering $32\times 32$ pixels are extracted at each landmark. The input descriptor for APR is formed by concatenating the descriptors from each of the landmarks. In configurations (2), (3), (4) and (5) a single PHOG is extracted at the face center. As the feature size is not a concern in this scenario (only one feature is extracted instead of 68) we use the extended version of the HOG feature described in \cite{extHOG}. The descriptor covers an area of $64\times 64$ pixels. The results of the experiments are shown in Table \ref{tab:APR}. KRFWS APR outperforms Linear APR on the challenging subset by 6\%.
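The evaluation metric used above (mean landmark error normalised by the inter-pupil distance, with pupils taken as the centroids of the eye landmarks) can be written in a few lines. The sketch below is only a plausible reading of that definition, not code from the paper; the eye index ranges are the usual 68-point 300-W convention, which I am assuming here since the text does not list them.

```python
import numpy as np

# Eye landmark indices in the common 68-point 300-W annotation (0-based); assumed.
LEFT_EYE, RIGHT_EYE = slice(36, 42), slice(42, 48)

def interpupil_error(pred, gt):
    """Mean landmark error as a % of the inter-pupil distance.

    pred, gt : (68, 2) arrays of predicted and ground-truth landmark coordinates.
    """
    pupil_l = gt[LEFT_EYE].mean(axis=0)      # centroid of the left-eye landmarks
    pupil_r = gt[RIGHT_EYE].mean(axis=0)     # centroid of the right-eye landmarks
    interpupil = np.linalg.norm(pupil_l - pupil_r)
    per_landmark = np.linalg.norm(pred - gt, axis=1)
    return per_landmark.mean() / interpupil * 100.0
```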
CAPR shows the best accuracy of all tested methods, reducing the error of Linear APR by 35\% on the full set. \begin{table}[htbp] \caption{Error of APR methods on the 300-W dataset.} \label{tab:APR} \begin{tabularx}{\linewidth}{ >{\centering\arraybackslash}X c c c } \Xhline{4\arrayrulewidth} Methods & \makecell{Common \\ subset} & \makecell{Challenging \\ subset} & Full set\\ \hline Linear APR & 12.70 & 26.00 & 15.29 \\ Linear APR single feature & 12.77 & 25.85 & 15.32 \\ KRF APR & 11.48 & 24.80 & 14.08 \\ KRFWS APR & 11.37 & 24.28 & 13.88 \\ KRFWS APR + 3D-APR (CAPR) & 8.61 & 15.26 & 9.90 \\ \Xhline{4\arrayrulewidth} \end{tabularx} \end{table} \subsection{Face alignment} \begin{figure}[!t] \centering \includegraphics[width=0.93\linewidth]{./imgs/faces.png}\\ \caption{A diagram showing the proposed face alignment pipeline. The images above, taken from the IBUG dataset, show the results at the consecutive stages of the pipeline. } \label{fig:faces} \end{figure} In face alignment we use the same training and evaluation data as in the APR experiments. In order to facilitate comparison with other methods we report the results of our full pipeline for both inter-pupil normalisation \cite{LBF},\cite{CFSS} and inter-ocular normalisation \cite{MDM}. Our face alignment method uses the Local Binary Feature framework \cite{LBF}, where instead of standard regression forests we use KRFWS, and instead of pixel difference features we use PHOG. The forest generated for each landmark consists of 5 trees with a maximum depth of 7, the PHOG extracted at landmarks cover an area of $32\times 32$ pixels each, with
distinct populations, viz., $2658 \pm 42$, $2817 \pm 19$ and $3097 \pm 34 \rm{Ma}$, probably correspond to three distinct metamorphic phases. The peak metamorphism at around 2800 Ma corresponds with the garnet and cordierite growth. A temperature of $673^{\circ}\rm{C}$ and a pressure of 4.7 kb have been estimated for peak metamorphism.

• REE mineral chemistry and the nature of REE mineralization: A study from felsite dykes of Phulan area, Siwana Ring Complex, Rajasthan, India

The Neo-Proterozoic Siwana Ring Complex (SRC) comprises peralkaline rocks of the Malani Igneous Suite, viz., rhyolite, granite, and late phase microgranite and felsite dykes. The Phulan area, lying at the north-eastern margin of the SRC, exposes a small body of rhyolite (<$1.0 \rm{km}^{2}$) containing feldspar + quartz + aegirine + riebeckite and is cut by dykes of felsite. These felsite dykes have a general NNW–SSE trend and vary from 60–200 m long and 0.10–2.50 m wide. The dykes are composed of quartz, alkali feldspar, aegirine and opaques. The felsite dykes were sampled and analyzed using inductively coupled plasma methods. Of special significance is the enrichment of trace elements and rare earth elements (REE) in the felsite dykes. These include up to 1.17% Ce, 0.6% La, 0.8% Y, 0.12% Dy, 169.25 ppm U, 571 ppm Th, 1385 ppm Nb and 9944 ppm Zr. The dykes are peralkaline in nature and show a negative europium (Eu) anomaly. In this study, the authors attempted to characterize the REE-bearing phases of the felsite dykes of the Phulan area, SRC, with respect to their genesis. The REE-bearing phases identified in the felsite dykes are monazite, bastnaesite, parisite, eudialyte, allanite, perrierite and tritomite. Monazite, perrierite, allanite and tritomite are mostly magmatic in origin, whereas bastnaesite, parisite and eudialyte occur both as magmatic and as hydrothermal types. Magmatic REE minerals are mostly formed during crystallization of REE-rich magma. In the felsite dykes, the Zr/Hf ratio varies from 23 to 31 and the Nd/Ta ratio ranges from 7 to 44. These two ratios are positively correlated and are indicators of hydrothermal fluid influx.

• Groundwater exploration in limestone–shale–quartzite terrain through 2D electrical resistivity tomography in Tadipatri, Anantapur district, Andhra Pradesh

A Two-Dimensional (2D) Electrical Resistivity Tomography (ERT) survey was carried out at 11 sites within an area of $10 \rm{km}^{2}$ to delineate deeper potential groundwater zones in a complex geological terrain underlain by quartzite, shale and limestone formations with varied resistivity characteristics. The area lies in a medium rainfall zone in Tadipatri mandal of Anantapur district, Andhra Pradesh state, India. The investigation was carried out to meet the growing demands of water supply. Interpretation of the high-density 2D resistivity dataset revealed potential zones at only three sites, in Tummalapenta, Ayyavaripalle and Guruvanipalle villages, within the depth zone of 24–124 m. A major fault zone oriented in an EW direction is mapped at the Tummalapenta site. Based on high resolution geophysical data interpretation and significant anomalies, four boreholes were drilled in the complex formations, viz., limestone, shale and quartzite, up to a maximum depth of 192 m in the area, with yields ranging from $300$ to $\sim 5000$ liters per hour (lph). These four boreholes, drilled at the anomalous sites, corroborate the aquifer zones delineated through the ERT technique.
The aquifer parameters estimated from pumping tests show that the transmissivity varies between $\sim 0.3$ and $\rm{179.5\ m^{2}/day}$ while the storage coefficient ranges from 0.137 to 0.5, indicating large variation in aquifer characteristics of the system within a small area. Suitable water conservation measures were suggested for improving the groundwater condition and the yield of the pumping wells.

• Latent heat flux variation during the warming phase of intraseasonal oscillations over northern Bay of Bengal

The sensitivity of latent heat flux to the warming phase of intra-seasonal oscillation in the Bay of Bengal is studied with the help of in-situ data. This was analyzed from 2012 to 2015 with the help of data obtained from moored buoys deployed in the northern Bay of Bengal. Annual secondary peaks in sea surface temperature are observed in the northern Bay of Bengal associated with the warming phase of the intra-seasonal oscillation during the southwest monsoon season, with net heat flux dominantly governing the mixed layer temperature. An increase in the release of latent heat flux from the northern bay is observed with the warming phase of the intra-seasonal oscillation, which in turn leads to cooling of the sea surface temperature. The higher latent heat flux release associated with the intra-seasonal warming phase during the southwest monsoon season prompted us to study the sensitivity of latent heat flux to sea surface temperature. The sensitivity of the gradient in saturation specific humidity is comparatively higher than the sensitivity of wind speed to sea surface temperature variations during the southwest monsoon season. The gradient in sea–air saturation specific humidity is largely driven by the saturation specific humidity of air ($Q_{a}$) during both seasons. However, the correlation of the gradient in saturation specific humidity with surface saturation specific humidity is higher during the southwest monsoon season compared to the northeast monsoon season. Thus, the warming phase of sea surface temperature associated with intra-seasonal oscillation during the southwest monsoon season always leads to an increase in latent heat flux release, favoured by the high sensitivity of surface saturation specific humidity to variations in sea surface temperature.

• Shell weights of foraminifera trace atmospheric $\rm{CO}_{2}$ from the Miocene to Pleistocene in the central Equatorial Indian Ocean

The Maldives Sea is a region dominated by the South Asian monsoon (SAM) and, at present, a $\rm{CO}_{2}$ source to the atmosphere. Ti/Al elemental ratios from Sites U1467 and U1468 recovered from the Maldives Sea show a gradual increase from $\sim 12 \rm{Ma}$ and indicate terrigenous inputs to this region linked to increasing wind intensity associated with the initiation of the SAM. Shell weights of the planktonic foraminifera Globigerinoides trilobus have been used to understand variations in surface water carbonate ion concentration for the last 20 Ma. Shell weights show a good correspondence with global $\rm{CO}_{2}$ records and are heavier during colder periods compared to warmer intervals, which reveals that the Maldives Sea behaved similarly to other tropical oceanic regions in terms of its surface water carbonate chemistry.
A significant decrease in $\rm{CaCO}_{3}$ wt.%, a decrease in foraminifera shell weights and dissolution of spines, along with an increase in organic carbon (OC%) towards 10.5 Ma, are linked to the reduced carbonate deposition and increased productivity during the monsoon, which is a feature in all tropical sediment cores. Lower shell weights and dissolution features on foraminiferal shells were observed during periods of intense Oxygen Minimum Zone (OMZ), suggesting calcite dissolution due to an increase in bottom water $\rm{CO}_{2}$.

• Deep insight to the complex aquifer and its characteristics from high resolution electrical resistivity tomography and borehole studies for groundwater exploration and development

Discovering and locating the source and availability of groundwater in a plateau region of the Chhotanagpur gneissic complex, where there are varied hydrogeological characteristics, is a crucial task for earth scientists. One such region, located at Garh Khatanga near Ranchi, Jharkhand, India, was closely studied for groundwater assessment and exploration. High resolution 2D electrical resistivity tomography data were acquired to probe up to a maximum depth of 220 m using the state-of-the-art electrical resistivity tomography technique, and geoelectrical subsurface images were mapped at 16 sites in three different blocks along a 7.2 km line for prospecting and exploration of groundwater resources. The geophysical inversion of the 2D resistivity data revealed a prospective groundwater scenario at six sites, based on the hydrogeological interpretation and the significant resistivity contrast between the highly weathered/fractured and the massive rocks. The modelled resistivity sections revealed different degrees of weathered, fractured and saturated weathered/fractured strata and clearly indicated the presence of totally hard massive rock within the subsurface lying between $\sim$30 and 220 m depth. The geophysical anomalies were confirmed and validated by borehole drilling at four sites up to a maximum depth of 215 m.
# Maclaurin and Taylor series

## Introduction

Suppose that a function $f(x)$ can be expressed as the polynomial $$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \ldots + a_k x^k + \ldots$$ where $a_i$ are constant coefficients. Then \begin{align} f'(x)&= a_1 + 2a_2 x+3a_3 x^2 + \ldots \\ f''(x) &= 2a_2 + 6a_3 x + \ldots \\ f'''(x) &= 6a_3 + \ldots \end{align} And so on. Then \begin{align} f(0) &= a_0 \\ f'(0) &= a_1 \\ f''(0) &= 2a_2 \Rightarrow a_2 = \frac{f''(0)}{2} = \frac{f''(0)}{2!}\\ f'''(0) &= 6a_3 \Rightarrow a_3 = \frac{f'''(0)}{6} = \frac{f'''(0)}{3!} \end{align} And so on. Then we can express $f(x)$ as $$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots + \frac{f^{(k)}(0)}{k!} x^k + \ldots$$ This is the Maclaurin series for $f(x)$. Many differentiable functions can be expressed as their Maclaurin series. They are useful for approximating functions and finding approximate derivatives and integrals. To find an approximation to $f(x)$ you truncate the function's Maclaurin series after as many terms as you want. The more terms you use, the more accurate the approximation. Remember from the definition above that in theory, $f(x)$ is equal to the infinite Maclaurin series. In FP2 you need to know the Maclaurin series expansions for $e^x$, $\sin x$, $\cos x$, and $\ln(1+x)$.

## Maclaurin series of $e^x$

Let $f(x) = e^x$. Then \begin{align} f'(x) &= e^x \\ f''(x) &= e^x \\ f'''(x) &= e^x \\ & \ldots \\ f^{(k)}(x) &= e^x \end{align} And so on. Then \begin{align} a_0 &= f(0) = 1 \\ a_1 &= f'(0) = 1 \\ a_2 &= \frac{f''(0)}{2!} = \frac{1}{2!} \\ a_3 &= \frac{f'''(0)}{3!} = \frac{1}{3!} \\ & \ldots \\ a_k &= \frac{f^{(k)}(0)}{k!} = \frac{1}{k!} \end{align} And so on. The formula for a Maclaurin series is $$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots + \frac{f^{(k)}(0)}{k!} x^k + \ldots$$ Therefore the Maclaurin series of $e^x$ is $$e^x \equiv 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^k}{k!} + \ldots$$ Here's a graph showing truncated versions of the Maclaurin series compared to the actual $e^x$ graph.

• $y=e^x$ is in blue
• $y=1$ is in red
• $y=1 + x$ is in orange
• $y=1 + x + \frac{x^2}{2!}$ is in purple
• $y=1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}$ is in green

See how each progressively longer truncation gets closer to the $e^x$ graph.

## Maclaurin series of $\sin x$ and $\cos x$

Let $f(x) = \sin x$. Then $f(0) = 0$. Going through the derivatives \begin{align} f'(x) &= \cos x \Rightarrow f'(0) = 1 \\ f''(x) &= -\sin x \Rightarrow f''(0) = 0 \\ f'''(x) &= -\cos x \Rightarrow f'''(0) = -1 \end{align} And so on. The formula for a Maclaurin series is $$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots + \frac{f^{(k)}(0)}{k!} x^k + \ldots$$ Therefore the Maclaurin series of $\sin x$ is $$\sin x \equiv x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots + \frac{\left(-1\right)^k x^{2k+1}}{\left(2k+1\right)!} + \ldots$$ By similar reasoning the Maclaurin series of $\cos x$ is $$\cos x \equiv 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots + \frac{\left(-1\right)^k x^{2k}}{\left(2k\right)!} + \ldots$$ Here's a graph showing truncated versions of the Maclaurin series compared to the actual $\sin x$ graph.
• $y=\sin x$ is in blue
• $y=x$ is in red
• $y=x - \frac{x^3}{3!}$ is in orange
• $y=x - \frac{x^3}{3!} + \frac{x^5}{5!}$ is in purple
• $y=x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}$ is in green

See how each progressively longer truncation gets closer to the $\sin x$ graph. Maclaurin series for trigonometric functions are particularly useful because many of them are periodic over $x$, and longer truncations provide extremely close approximations for relatively small $x$.

## Maclaurin series of $\ln(1+x)$

Let $f(x) = \ln(1+x)$. Then $f(0) = 0$. Going through the derivatives \begin{align} f'(x) &= \frac{1}{1+x} \Rightarrow f'(0) = 1 \\ f''(x) &= -\frac{1}{\left(1+x\right)^{2}} \Rightarrow f''(0) = -1 \\ f'''(x) &= \frac{2}{\left(1+x\right)^{3}} \Rightarrow f'''(0) = 2 \\ f^{iv}(x) &= -\frac{6}{\left(1+x\right)^{4}} \Rightarrow f^{iv}(0) = -6 \end{align} And so on. The formula for a Maclaurin series is $$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots + \frac{f^{(k)}(0)}{k!} x^k + \ldots$$ Therefore the Maclaurin series of $\ln(1+x)$ is $$\ln(1+x) \equiv x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \ldots + \frac{\left(-1\right)^{k-1} x^{k}}{k} + \ldots$$ This is valid for $|x|\lt 1$. Here's a graph showing truncated versions of the Maclaurin series compared to the actual $\ln(1+x)$ graph.

• $y=\ln(1+x)$ is in blue
• $y=x$ is in red
• $y=x - \frac{x^2}{2}$ is in orange
• $y=x - \frac{x^2}{2} + \frac{x^3}{3}$ is in purple
• $y=x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4}$ is in green

See how each progressively longer truncation gets closer to the $\ln(1+x)$ graph.

## Taylor series

Maclaurin series are actually a special case of Taylor series. Note that the Maclaurin series $$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots + \frac{f^{(k)}(0)}{k!} x^k + \ldots$$ will not work for a function like $f(x)=\frac{1}{x}$ because $f(0)$ is undefined. The Taylor series expansion for a function $f(x)$ is $$f(x) = f(a) + f'(a) \left(x-a\right) + \frac{f''(a)}{2!} \left(x-a\right)^2 + \frac{f'''(a)}{3!} \left(x-a\right)^3 + \ldots + \frac{f^{(k)}(a)}{k!} \left(x-a\right)^k + \ldots$$ where $a$ is a non-zero constant. We say that this is "the Taylor series about $x=a$". Letting $a=0$ yields the Maclaurin series for $f(x)$. The derivation is very similar to that of Maclaurin series. Suppose that a function $f(x)$ can be expressed as the polynomial $$f(x) = b_0 + b_1 \left(x-a\right) + b_2 \left(x-a\right)^2 + b_3 \left(x-a\right)^3 + \ldots + b_k \left(x-a\right)^k + \ldots$$ where $b_i$ are constant coefficients and $a$ is a non-zero constant. Then \begin{align} f'(x)&= b_1 + 2b_2 \left(x-a\right)+3b_3 \left(x-a\right)^2 + \ldots \\ f''(x) &= 2b_2 + 6b_3 \left(x-a\right) + \ldots \\ f'''(x) &= 6b_3 + \ldots \end{align} And so on. Then \begin{align} f(a) &= b_0 \\ f'(a) &= b_1 \\ f''(a) &= 2b_2 \Rightarrow b_2 = \frac{f''(a)}{2} = \frac{f''(a)}{2!}\\ f'''(a) &= 6b_3 \Rightarrow b_3 = \frac{f'''(a)}{6} = \frac{f'''(a)}{3!} \end{align} And so on. Then we can express $f(x)$ as $$f(x) = f(a) + f'(a) \left(x-a\right) + \frac{f''(a)}{2!} \left(x-a\right)^2 + \frac{f'''(a)}{3!} \left(x-a\right)^3 + \ldots + \frac{f^{(k)}(a)}{k!} \left(x-a\right)^k + \ldots$$ This is a very powerful idea because it means that any sufficiently differentiable function can be approximated by a polynomial of arbitrarily large degree. This makes approximations much easier.
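To see the truncation behaviour numerically rather than graphically, here is a small Python sketch (my own illustration, not part of the FP2 material) that evaluates truncated Taylor polynomials from a list of derivative values at $x=a$. With $a=0$ and every derivative of $e^x$ equal to 1 it reproduces the Maclaurin truncations plotted above; with $a=1$ and every derivative equal to $e$ it gives the corresponding Taylor approximation about $x=1$.

```python
import math

def taylor_poly(derivs, a, x):
    """Evaluate the truncated Taylor polynomial about x = a.

    derivs -- [f(a), f'(a), f''(a), ...], as many terms as you want to keep.
    """
    return sum(d / math.factorial(k) * (x - a) ** k for k, d in enumerate(derivs))

x = 1.5
# Maclaurin series of e^x: every derivative at 0 equals 1.
for n in (2, 4, 8, 12):
    approx = taylor_poly([1.0] * n, a=0.0, x=x)
    print(f"{n:2d} terms about 0: {approx:.8f} (error {abs(approx - math.exp(x)):.2e})")

# Taylor series of e^x about a = 1: every derivative at 1 equals e.
approx = taylor_poly([math.e] * 8, a=1.0, x=x)
print(f" 8 terms about 1: {approx:.8f} (error {abs(approx - math.exp(x)):.2e})")
```

Each longer truncation lands closer to $e^{1.5} \approx 4.4817$, mirroring the graphs above.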
The idea is that any Taylor series approximation you make by truncating the series is closest to the original function around the neighbourhood of $x=a$. Remember how in the graphs above the Maclaurin series were closest to their original functions around $x=0$? Exactly why this is the case is beyond the scope of FP2, but I thought you might like to know. ### Example Q) Find the first three terms of the Taylor series expansion about $x=\frac{\pi}{3}$ of $\ln(\cos x )$. A) Let $f(x) = \ln(\cos x )$ and $a=\frac{\pi}{3}$. Then $f(a) = -\ln(2)$. Going through the derivatives \begin{align} f'(x) &= -\tan x \Rightarrow f'(a) = -\sqrt{3} \\ f''(x) &= -\sec^2 x \Rightarrow f''(a) = -4 \end{align} And so on. The formula for a Taylor series is $$f(x) = f(a) + f'(a) \left(x-a\right) + \frac{f''(a)}{2!}\left(x-a\right)^2 + \ldots + \frac{f^{(k)}(a)}{k!} \left(x-a\right)^k + \ldots$$ Therefore the first three terms of the Taylor series expansion about $x=\frac{\pi}{3}$ of $\ln(\cos x )$ are $$\ln(\cos x ) \approx -\ln(2) -\sqrt{3}\left(x-\frac{\pi}{3}\right) - 2\left(x-\frac{\pi}{3}\right)^2$$ ### Example Q) Find the first four terms of the Taylor series expansion about $x=4$ of $\sqrt{x}$. A) Let $f(x) = \sqrt{x}$ and $a=4$. Then $f(a) = 2$. Going through the derivatives \begin{align} f'(x) &= \frac{1}{2}x^{-1/2} \Rightarrow f'(a) = \frac{1}{4} \\ f''(x) &= -\frac{1}{4}x^{-3/2} \Rightarrow f''(a) = -\frac{1}{32} \\ f'''(x) &= \frac{3}{8}x^{-5/2} \Rightarrow f'''(a) = \frac{3}{256} \end{align} And so on. The formula for a Taylor series is $$f(x) = f(a) + f'(a) \left(x-a\right) + \frac{f''(a)}{2!}\left(x-a\right)^2 + \ldots + \frac{f^{(k)}(a)}{k!} \left(x-a\right)^k + \ldots$$ Therefore the first four terms of the Taylor series expansion about $x=4$ of $\sqrt{x}$ are $$\sqrt{x} \approx 2+\frac{1}{4}\left(x-4\right)-\frac{1}{64}\left(x-4\right)^2+\frac{1}{512}\left(x-4\right)^3$$ ## Using Taylor series to solve differential equations Taylor series can be used to solve differential equations. ### Example Q) Use Taylor series to find the particular solution to the differential equation $\frac{d^2 y}{dx^2}+y^2