Sydney FC’s draw for the Foxtel National Youth League 2017/18 season has been revealed today. The Sky Blues will again compete in Conference B, having won the conference in each of the last two seasons, and will face Central Coast Mariners, newly formed Canberra United, Newcastle Jets and Western Sydney Wanderers twice each over 10 rounds. Head Coach Rob Stanton said he was pleased with the fixture list and is looking forward to the season ahead. “I’m happy with the draw, the bye first round gives us extra time to prepare the team for the upcoming season,” he said. “I’m happy to see Canberra coming in, they’ll offer a strong opposition to all the other clubs. “I’m looking forward to once again competing and developing players for the first team.” The Sky Blues will again contest two Sydney Derbies in season 2017/18. Photo by Jaime Castaneda The Sky Blues have a BYE in round one before kicking off their campaign in Round 2 against Newcastle Jets at Lambert Park, Leichhardt on Sunday 26 November (kick off 5pm). An away Sydney Derby at Marconi Stadium follows in Round 3 before another BYE, as the Sky Blues then close in on Christmas with an away clash against the Mariners followed by a home encounter against Canberra on Friday 29 December. Sydney FC kick off the New Year away against the Newcastle Jets on Sunday 7 January before their home Sydney Derby at Lambert Park and another home clash against the Mariners on Sunday 21 January. Sydney FC finish the season away against Canberra United, with the winner of each Conference then set to face off in the National Youth League Grand Final on the weekend of February 3 or 4. 
Sydney FC’s Foxtel National Youth League 2017/18 Fixtures:

Rd 1 – BYE
Rd 2 – Sunday 26 November v Newcastle Jets, Lambert Park, kick off 5pm
Rd 3 – Friday 1 December v Western Sydney Wanderers, Marconi Stadium, kick off 7:30pm
Rd 4 – BYE
Rd 5 – Sunday 17 December v Central Coast Mariners, CCM Centre of Excellence, kick off 4:30pm
Rd 6 – Friday 29 December v Canberra United, Lambert Park, kick off 6:30pm
Rd 7 – Sunday 7 January v Newcastle Jets, No.2 Sportsground, kick off 4:30pm
Rd 8 – Saturday 13 January v Western Sydney Wanderers, Lambert Park, kick off 7pm
Rd 9 – Sunday 21 January v Central Coast Mariners, Lambert Park, kick off 5pm
Rd 10 – Saturday 27 January v Canberra United, Canberra TBC, kick off 4:30pm

2017/18 Sydney FC Memberships available NOW. Summarize the preceding context in 5 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction} The process of particle creation from the quantum vacuum caused by moving boundaries or time-dependent material properties, commonly referred to as the dynamical Casimir effect (DCE) \cite{1,2}, has been investigated since the pioneering work of Moore in 1970 \cite{moor}, who showed that photons would be created in a Fabry-Perot cavity if one of the cavity walls moved periodically \cite{rev,rev1}. The term dynamical Casimir effect is nowadays frequently used for phenomena connected with photon creation from vacuum due to fast changes of the geometry or material properties of the medium. Moving bodies experience quantum friction \cite{Ramin}, and hence energy damping \cite{en,en1} and decoherence \cite{dec}, due to the scattering of vacuum field fluctuations. The damping is accompanied by the emission of photons \cite{moor}, thus conserving the total energy of the combined system \cite{con}. An explicit connection between quantum fluctuations and the motion of boundaries was made in \cite{v}, where the name non-stationary Casimir effect was introduced, and in \cite{mir,mir1}, where the names Mirror Induced Radiation and Motion-Induced Radiation (with the same abbreviation MIR) were proposed. The frequency of photons created by a mechanically moving boundary is bounded by the mechanical frequency of the moving body, so to observe a detectable number of created photons the oscillation frequency must be of the order of GHz, which raises technical problems. Therefore, recent experimental schemes focus on simulating moving boundaries by considering material bodies with time-dependent electromagnetic properties \cite{sim, sim1}. In this scheme, for example for two semi-infinite dielectrics, the boundary does not move mechanically; its motion is simulated by periodically changing the electromagnetic properties of one of the dielectrics in a small slab. 
An important factor in detecting the created photons is keeping the sample at a low temperature of $\sim$ 100 mK to suppress the number of thermal black-body photons to less than unity. In particular, the problem has been considered with mirrors (a single mirror and cavities), where the input field is reflected completely from the surface. Recently the Robin boundary condition (RBC) has been used as a helpful approach to treat the dynamical boundary condition for this kind of problem. The well-known Dirichlet and Neumann boundary conditions can be obtained as limiting cases of the Robin boundary condition \cite{hector,mintz}. The aim of the present work is to use a perturbative approach to study the effect of transition through the interface on the spectral distribution of created photons. The interface between two semi-infinite dielectrics is modelled to simulate the oscillatory motion of the moving boundary. For this purpose, the electromagnetic field quantization in the presence of a dielectric medium \cite{matloob,matloob1} is reviewed briefly, then a general approach to investigate the dynamical Casimir effect for simulated motion of some part of a dielectric medium is introduced, and finally the spectral distribution of created photons is derived and the effect of small transitions through the interface is discussed. Summarize the preceding context in 7 sentences. Do not try to create questions or answers for your summarization.
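The 100 mK figure quoted above can be checked with a quick Bose-Einstein estimate. A minimal sketch follows; the 5 GHz photon frequency used in the usage note is an assumed illustrative value for microwave experiments, not a number taken from the text.

```python
import math

# CODATA constants (SI units)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def thermal_photon_number(freq_hz, temp_k):
    """Mean Bose-Einstein occupation n = 1 / (exp(hbar*omega / kT) - 1)
    of a mode at frequency freq_hz and temperature temp_k."""
    x = HBAR * 2.0 * math.pi * freq_hz / (KB * temp_k)
    return 1.0 / math.expm1(x)
```

For an assumed 5 GHz mode at 100 mK this gives roughly 0.1 thermal photons, below unity as the text requires, while at room temperature the same mode holds over a thousand.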
\section{Introduction} Cell-Free massive multiple-input multiple-output (MIMO) refers to a massive MIMO system \cite{MarzettaNonCooperative} where the base station antennas are geographically distributed \cite{NgoCF,MarzettaCF,Truong}. These antennas, called access points (APs) herein, simultaneously serve many users in the same frequency band. The distinction between cell-free massive MIMO and conventional distributed MIMO \cite{ZhouWCS} is the number of antennas involved in coherently serving a given user. In canonical cell-free massive MIMO, every antenna serves every user. Compared to co-located massive MIMO, cell-free massive MIMO has the potential to improve coverage and energy efficiency, due to increased macro-diversity gain. By operating in time-division duplex (TDD) mode, cell-free massive MIMO exploits the channel reciprocity property, according to which the channel responses are the same in both uplink and downlink. Reciprocity calibration, to the required accuracy, can be achieved in practice using off-the-shelf methods \cite{Lund}. Channel reciprocity allows the APs to acquire channel state information (CSI) from pilot sequences transmitted by the users in the uplink, and this CSI is then automatically valid also for the downlink. By virtue of the law of large numbers, the effective scalar channel gain seen by each user is close to a deterministic constant. This is called \textit{channel hardening}. Thanks to the channel hardening, the users can reliably decode the downlink data using only statistical CSI. This is the reason why most previous studies on massive MIMO assumed that the users do not acquire CSI and that there are no pilots in the downlink \cite{MarzettaNonCooperative,DebbahULDL,BjornsonHowMany}. In co-located massive MIMO, transmission of downlink pilots and the associated channel estimation by the users yields rather modest performance improvements, owing to the high degree of channel hardening \cite{NgoDlPilots,Khansefid,Zuo}. 
In contrast, in cell-free massive MIMO, the large number of APs is distributed over a wide area, and many APs are very far from a given user; hence, each user is effectively served by a smaller number of APs. As a result, the channel hardening is less pronounced than in co-located massive MIMO, and potentially the gain from using downlink pilots is larger. \textbf{Contributions:} We propose a downlink training scheme for cell-free massive MIMO, and provide an (approximate) achievable downlink rate for conjugate beamforming processing, valid for finite numbers of APs and users, which takes channel estimation errors and power control into account. This rate expression facilitates a performance comparison between cell-free massive MIMO with downlink pilots, and cell-free massive MIMO without downlink pilots, where only statistical CSI is exploited by the users. The study is restricted to the case of mutually orthogonal pilots, leaving the general case with pilot reuse for future work. Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
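The channel-hardening argument above can be illustrated with a small Monte Carlo sketch. It assumes i.i.d. Rayleigh fading across antennas (an assumption for illustration, not the paper's channel model): the relative spread of the effective gain shrinks roughly as $1/\sqrt{M}$ with the number of coherently serving antennas $M$.

```python
import numpy as np

def relative_gain_spread(num_antennas, trials=4000, seed=0):
    """Monte Carlo estimate of std/mean of the effective channel gain
    |h|^2 / M for h ~ CN(0, I_M).  A small spread means the channel
    'hardens' toward its deterministic mean."""
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((trials, num_antennas))
         + 1j * rng.standard_normal((trials, num_antennas))) / np.sqrt(2)
    gain = np.mean(np.abs(h) ** 2, axis=1)  # effective per-user gain
    return gain.std() / gain.mean()
```

With few effective antennas (as for a distant user served by a handful of APs) the spread is large, so statistical CSI alone is less reliable and downlink pilots can help; with many antennas the gain is nearly deterministic.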
\section{Introduction}\label{sintro} \section{Background: CUR and low-rank approximation}\label{sbcgr} {\em Low-rank approximation} of an $m\times n$ matrix $W$ having a small numerical rank $r$, that is, having a well-conditioned rank-$r$ matrix nearby, is one of the most fundamental problems of numerical linear algebra \cite{HMT11} with a variety of applications to highly important areas of modern computing, which range from the machine learning theory and neural networks \cite{DZBLCF14}, \cite{JVZ14} to numerous problems of data mining and analysis \cite{M11}. One of the most studied approaches to the solution of this problem is given by $CUR$ {\em approximation} where $C$ and $R$ are a pair of $m\times l$ and $k\times n$ submatrices formed by $l$ columns and $k$ rows of the matrix $W$, respectively, and $U$ is a $k\times l$ matrix such that $W\approx CUR$. Every low-rank approximation allows very fast approximate multiplication of the matrix $W$ by a vector, but CUR approximation is particularly transparent and memory efficient. The algorithms for computing it are characterized by the two main parameters: (i) their complexity and (ii) bounds on the error norms of the approximation. We assume that $r\ll \min\{m,n\}$, that is, the integer $r$ is much smaller than $\min\{m,n\}$, and we seek algorithms that use $o(mn)$ flops, that is, much fewer than the information lower bound $mn$. \section{State of the art and our progress}\label{ssartpr} The algorithms of \cite{GE96} and \cite{P00} compute CUR approximations by using order of $mn\min\{m,n\}$ flops.\footnote{Here and hereafter {\em ``flop"} stands for ``floating point arithmetic operation".} \cite{BW14} do this in $O(mn\log(mn))$ flops by using randomization. These are record upper bounds for computing a CUR approximation to {\em any input matrix} $W$, but the user may be quite happy with having close CUR approximations to the {\em many matrices} $W$ that make up the class of his/her interest. 
The information lower bound $mn/2$ (a flop involves at most two entries) does not apply to such restricted input classes, and we go well below it in our paper \cite{PSZa} (we must refer to that paper for technical details because of the limitation on the size of this submission). We first formalize the problem of CUR approximation of an average $m\times n$ matrix of numerical rank $r\ll \min\{m,n\}$, assuming the customary Gaussian (normal) probability distribution for its $(m+n)r$ i.i.d. input parameters. Next we consider a two-stage approach: (i) first fix a pair of integers $k\le m$ and $l\le n$ and compute a CUR approximation (by using the algorithms of \cite{GE96} or \cite{P00}) to a random $k\times l$ submatrix and then (ii) extend it to computing a CUR approximation of an input matrix $W$ itself. We must keep the complexity of Stage (i) low and must extend the CUR approximation from the submatrix to the matrix $W$. We prove that for a specific class of input matrices $W$ these two tasks are in conflict (see Example 11 of \cite{PSZa}), but such a class of hard inputs is narrow, because we prove that our algorithm produces a close approximation to the average $m\times n$ input matrix $W$ having numerical rank $r$. (We define such an average matrix by assuming the standard Gaussian (normal) probability distribution.) By extending our two-stage algorithms with the technique of \cite{GOSTZ10}, which we call {\em cross-approximation}, we slightly narrow the class of hard inputs of Example 11 of \cite{PSZa} to the smaller class of Example 14 of \cite{PSZa} and moreover deduce sharper bounds on the error of approximation by maximizing the {\em volume} of an auxiliary $k\times l$ submatrix that defines a CUR approximation. In our extensive tests with a variety of real world input data for regularization of matrices from the Singular Matrix Database, our fast algorithms consistently produce close CUR approximations. 
Since our fast algorithms produce reasonably accurate CUR approximations to the average input matrix, the class of hard input matrices for these algorithms must be narrow, and we studied a tentative direction towards further narrowing this input class. We prove that the algorithms are expected to output a close CUR approximation to any matrix $W$ if we pre-process it by applying Gaussian multipliers. This is a nontrivial result of independent interest (proven on more than three pages), but its formal support covers only the application of Gaussian multipliers, which is quite costly. We hope, however, that we can still substantially narrow the class of hard inputs even if we replace Gaussian multipliers with the products of reasonable numbers of random bidiagonal matrices and if we partly curb the permutation of these matrices. If we achieve this, then preprocessing would become non-costly. This direction seems to be quite promising, but still requires further work. Finally, our algorithms can be extended to the acceleration of various computational problems that are known to have links to low-rank approximation, but in our concluding Section \ref{scncl} we describe novel and rather unexpected extensions to the acceleration of the Fast Multipole Method and Conjugate Gradient Algorithms,\footnote{Hereafter we use the acronyms FMM and CG.} both being among the most celebrated achievements of the 20th century in Numerical Linear Algebra. \subsection{Some related results on matrix algorithms and our progress on other fundamental subjects of matrix computations}\label{srltwr} A huge bibliography on CUR and low-rank approximation, including the known best algorithms, which we already cited, can be accessed from the papers \cite{HMT11}, \cite{M11}, \cite{BW14} and \cite{W14}. Summarize the preceding context in 7 sentences. Do not try to create questions or answers for your summarization.
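The cross-approximation idea described above (a CUR built from selected rows and columns, with $U$ taken from their intersection) can be sketched in a few lines. This is a generic illustration under the "average input" assumption of the text, i.e. an exact rank-$r$ matrix with Gaussian factors; it is not the authors' fast algorithm, which selects the cross adaptively.

```python
import numpy as np

def cur_approximation(w, rows, cols):
    """Cross-approximation W ~ C @ U @ R, where C and R are column/row
    submatrices of W and U is the pseudoinverse of their intersection
    W[rows, cols]."""
    c = w[:, cols]
    r = w[rows, :]
    u = np.linalg.pinv(w[np.ix_(rows, cols)])
    return c @ u @ r

# An "average" rank-2 input: product of two Gaussian factors
rng = np.random.default_rng(42)
m, n, r = 60, 40, 2
w = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
w_cur = cur_approximation(w, rows=[0, 1], cols=[0, 1])
```

For an exactly rank-$r$ matrix, any $r$ rows and $r$ columns whose intersection submatrix is nonsingular reproduce $W$ exactly; maximizing the volume of that intersection, as in the text, controls the error when $W$ only has numerical rank $r$.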
\section{Introduction}\label{sec:intro} A useful representation of the occupied states in a periodic insulator is the Wannier function. Wannier functions (WFs) provide a localized real-space description of the extended Bloch states.~\cite{MLWFs-Review} In particular, WFs give a chemical picture of the bonding nature of a material, an alternative real-space formalism for many quantities, and can also be used for interpolating various physical properties on a fine mesh in the Brillouin zone~\cite{PhysRevB.74.195118,PhysRevB.76.165108}. For example, WFs can be used to compute electronic polarization, orbital magnetization, the component of isotropic magnetoelectric coupling, and various transport properties. However, an exponentially localized Wannier function representation does not exist for insulators with a non-zero Chern number $C$.\cite{PhysRevB.74.235111,PhysRevLett.98.046402} Insulators with a non-zero Chern number are called integer quantum Hall insulators (or Chern insulators) and are characterized by a non-zero Hall conductance $\sigma = C e^2/h$. (Three-dimensional insulators are characterized by a triplet of Chern numbers.) In the past several years there has been significant interest in a group of materials related to the Chern insulator. These are called $\mathbb{Z}_2$ topological insulators (TIs). Summarize the preceding context in 8 sentences. Do not try to create questions or answers for your summarization.
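The Chern number that obstructs a localized Wannier representation can be computed numerically on a discretized Brillouin zone. As a hedged illustration (the two-band lattice model below is an assumption for the sketch, not a model from the text), the Fukui-Hatsugai-Suzuki link-variable method sums gauge-invariant plaquette fluxes of the occupied band:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def chern_number(m, nk=24):
    """Chern number of the lower band of the two-band model
    H(k) = sin(kx) SX + sin(ky) SY + (m + cos kx + cos ky) SZ,
    via the Fukui-Hatsugai-Suzuki link-variable method."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)  # lower-band eigenvectors
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            h = (np.sin(kx) * SX + np.sin(ky) * SY
                 + (m + np.cos(kx) + np.cos(ky)) * SZ)
            _, vecs = np.linalg.eigh(h)
            u[i, j] = vecs[:, 0]
    flux = 0.0
    for i in range(nk):
        for j in range(nk):
            # Wilson loop of overlaps around one plaquette (gauge invariant)
            loop = (np.vdot(u[i, j], u[(i + 1) % nk, j])
                    * np.vdot(u[(i + 1) % nk, j], u[(i + 1) % nk, (j + 1) % nk])
                    * np.vdot(u[(i + 1) % nk, (j + 1) % nk], u[i, (j + 1) % nk])
                    * np.vdot(u[i, (j + 1) % nk], u[i, j]))
            flux += np.angle(loop)
    return flux / (2.0 * np.pi)
```

The method returns an integer (up to floating-point noise) whenever the band gap stays open on the grid, which is what makes it a convenient diagnostic for the topological obstruction discussed above.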
Water from the iced-over Connecticut River numbed my hands as I cradled a hard, scaleless fish at the U.S. Geological Survey’s anadromous fish laboratory at Turners Falls, Massachusetts. Its back was dark brown, its belly cream. Five rows of bony plates ran the length of its thin body to the shark-like tail. Four barbels covered with taste buds dangled from its flat snout in front of the sucker mouth. At 20 inches it was a baby. Adults can measure 14 feet and weigh 800 pounds. This fish was an Atlantic sturgeon — the largest, longest-lived creature that reproduces in North American rivers collected by the Atlantic. Its species is at least 70 million years senior to my own. Yet my species threatens it with extinction. Atlantic sturgeon have large snouts for rooting out bottom-dwelling prey. MATT BALAZIK/VCU RICE RIVERS CENTER While that threat is still very real, it was reduced on February 6, 2012, when the National Marine Fisheries Service (NMFS) protected five “distinct population segments” of Atlantic sturgeon under the Endangered Species Act. The Gulf of Maine segment was listed as threatened while the New York Bight, Chesapeake Bay, Carolina, and South Atlantic segments were listed as endangered. This action has released a torrent of funding that is allowing researchers from Maine to Florida to identify and mitigate human-caused mortality. Nowhere has the population crash been more catastrophic than in the Delaware River, the East’s longest undammed waterway, draining 13,539 square miles from New York State through Pennsylvania, New Jersey, and Delaware. In the 19th century 75 percent of sturgeon caught in the U.S. came from the Delaware, then known as the “caviar capital of North America.” According to the NMFS, there used to be something like 180,000 ripe females entering the river in any given year. Now the agency figures there are fewer than 100. Silt from watershed development dooms the eggs laid by female Atlantic sturgeon. 
Atlantic sturgeon live in the ocean and spawn and spend their first few years in freshwater. Summarize the preceding context in 5 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction.} Katz \cite[5.3.1]{Katz90ESDE} discovered that the hypergeometric $\sD$-modules on $\bA^1_{\bC}\setminus\{0\}$ can be described as the multiplicative convolution of hypergeometric $\sD$-modules of rank one. Precisely speaking, Katz proved statement (ii) in the following theorem (statement (i) is trivial, but we state it for comparison with another theorem later). \begin{cvthm} Let $\balpha=(\alpha_1,\dots,\alpha_m)$ and $\bbeta=(\beta_1,\dots,\beta_n)$ be two sequences of complex numbers and assume that $\alpha_i-\beta_j$ is not an integer for any $i,j$. Let $\sHyp(\balpha;\bbeta)$ be the $\sD$-module on $\bG_{\rmm,\bC}$ defined by the hypergeometric operator \[ \Hyp(\balpha;\bbeta)=\prod_{i=1}^m(x\partial-\alpha_i)-x\prod_{j=1}^n(x\partial-\beta_j), \] that is, \[ \sHyp(\balpha;\bbeta)\defeq\sD_{\bA^1_{\bC}\setminus\{0\}}/\sD_{\bA^1_{\bC}\setminus\{0\}}\Hyp(\balpha;\bbeta). \] Then, $\sHyp(\balpha;\bbeta)$ has the following properties. \textup{(i)} If $m\neq n$, then $\sHyp(\balpha;\bbeta)$ is a free $\sO_{\bG_{\rmm,\bC}}$-module of rank $\max\{m,n\}$. If $m=n$, then the restriction of $\sHyp(\balpha;\bbeta)$ to $\bG_{\rmm,\bC}\setminus\{1\}$ is a free $\sO_{\bG_{\rmm,\bC}\setminus\{1\}}$-module of rank $m$. \textup{(ii)} We have an isomorphism \[ \sHyp(\balpha;\bbeta)\cong \sHyp(\alpha_1;\emptyset)\ast\dots\ast\sHyp(\alpha_m;\emptyset) \ast\sHyp(\emptyset;\beta_1)\ast\dots\ast\sHyp(\emptyset;\beta_n), \] where $\ast$ denotes the multiplicative convolution of $\sD_{\bG_{\rmm,\bC}}$-modules. \end{cvthm} Besides the hypergeometric $\sD$-modules over the complex numbers, Katz also studied the $\ell$-adic theory of hypergeometric sheaves. Let $k$ be a finite field with $q$ elements, let $\psi$ be a non-trivial additive character on $k$ and let $\bchi=(\chi_1,\dots,\chi_m), \brho=(\rho_1,\dots,\rho_n)$ be sequences of characters on $k^{\times}$ satisfying $\chi_i\neq\rho_j$ for all $i,j$. 
Then, he \emph{defined} the $\ell$-adic hypergeometric sheaves $\sH_{\psi,!}^{\ell}(\bchi;\brho)$ on $\bG_{\rmm,k}$ by using the multiplicative convolution of $\sH_{\psi,!}^{\ell}(\chi_i;\emptyset)$'s and $\sH_{\psi,!}^{\ell}(\emptyset;\rho_j)$'s, where these convolvends are defined by using Artin--Schreier sheaves and Kummer sheaves. This $\ell$-adic sheaf $\sH_{\psi,!}^{\ell}(\bchi;\brho)$ has a property similar to (i) in the above theorem. Namely, it is a smooth sheaf on $\bG_{\rmm,k}$ of rank $\max\{m,n\}$ if $m\neq n$, and its restriction to $\bG_{\rmm,k}\setminus\{1\}$ is a smooth sheaf of rank $m$ if $m=n$ \cite[Theorem 8.4.2]{Katz90ESDE}. Moreover, by definition, $\sH_{\psi,!}^{\ell}(\bchi;\brho)$ has a Frobenius structure. The Frobenius trace functions of the $\ell$-adic hypergeometric sheaves are called the ``hypergeometric functions over finite field''. This function gives a generalization of the classical Kloosterman sums. Moreover, this function has an intimate connection with the Frobenius action on the \'etale cohomology of a certain class of algebraic varieties (for example, Calabi--Yau varieties) over finite fields. (The hypergeometric function over finite field is also called the ``Gaussian hypergeometric function'' by Greene \cite{Greene}, who independently of Katz found this function based on a different motivation.) The purpose of this article is to develop a $p$-adic counterpart of these complex and $\ell$-adic hypergeometric objects. Summarize the preceding context in 5 sentences. Do not try to create questions or answers for your summarization.
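Since the text notes that these trace functions generalize the classical Kloosterman sums, a small numerical illustration may help. The sketch below (an assumption for illustration, not the article's $p$-adic construction) evaluates Kloosterman sums over $\mathbb{F}_p$ and checks them against the Weil bound $|\mathrm{Kl}(a;p)| \le 2\sqrt{p}$:

```python
import cmath

def kloosterman_sum(a, p):
    """Kl(a; p) = sum over x in F_p^* of e((x + a*x^{-1}) / p),
    where e(t) = exp(2*pi*i*t) and p is an odd prime.
    Inverses mod p are computed via Fermat's little theorem."""
    total = 0j
    for x in range(1, p):
        x_inv = pow(x, p - 2, p)  # x^{-1} mod p
        total += cmath.exp(2j * cmath.pi * ((x + a * x_inv) % p) / p)
    return total
```

The substitution $x \mapsto -x$ conjugates every term while permuting $\mathbb{F}_p^{\times}$, so the sum is real; this is a handy sanity check on the implementation.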
\section{Handling batch insertions} In this section, we study the dynamic DFS tree problem in the batch insertion setting. The goal of this section is to prove Theorem \ref{batch-ins}. Our algorithm basically follows the same framework for fully dynamic DFS proposed in \cite{baswana2016dynamic}. Since we are only interested in the dynamic DFS tree problem in the batch insertion setting, the algorithms \textsf{BatchInsert} and \textsf{DFS} presented below are a moderate simplification of the original algorithm in \cite{baswana2016dynamic}, obtained by directly pruning those details unrelated to insertions. \begin{algorithm}[H] \caption{\textsf{BatchInsert}} \KwData{a DFS tree $T$ of $G$, set of insertions $U$} \KwResult{a DFS tree $T^*$ of $G + U$} Add each inserted vertex $v$ into $T$, set $\mathit{par}(v) = r$\; Initialize $L(v)$ to be $\emptyset$ for each $v$\; Add each inserted edge $(u, v)$ to $L(u)$ and $L(v)$\; Call $\mathrm{\textsf{DFS}}(r)$\; \end{algorithm} \begin{algorithm}[H] \caption{\textsf{DFS}} \KwData{a DFS tree $T$ of $G$, the entering vertex $v$} \KwResult{a partial DFS tree} Let $u = v$\; \While{$\mathit{par}(u)$ is not visited} { Let $u = \mathit{par}(u)$\; } Mark $\mathit{path}(u, v)$ to be visited\; Let $(w_1, \dots, w_t) = \mathit{path}(u, v)$\; \For{$i \in [t]$} { \If{$i \ne t$} { Let $\mathit{par}^*(w_i) = w_{i + 1}$\; } \For{child $x$ of $w_i$ in $T$ except $w_{i + 1}$} { Let $(y, z) = Q(T(x), u, v)$, where $y \in \mathit{path}(u, v)$\; Add $z$ into $L(y)$\; } } \For{$i \in [t]$} { \For{$x \in L(w_i)$} { \If{$x$ is not visited} { Let $\mathit{par}^*(x) = w_i$\; Call $\mathrm{\textsf{DFS}}(x)$\; } } } \end{algorithm} In Algorithm \textsf{BatchInsert}, we first attach each inserted vertex to the super root $r$, and pretend it has been there since the very beginning. Then only edge insertions are to be considered. All inserted edges are added into the reduced adjacency lists of corresponding vertices. 
We then use \textsf{DFS}{} to traverse the graph starting from $r$ based on $T$, $L$, and build the new DFS tree while traversing the entire graph and updating the reduced adjacency lists. In Algorithm \textsf{DFS}, the new DFS tree is built in a recursive fashion. Every time we enter an untouched subtree, say $T(u)$, from vertex $v \in T(u)$, we change the root of $T(u)$ to $v$ and go through $\mathit{path}(v, u)$; i.e., we wish to reverse the order of $\mathit{path}(u, v)$ in $T^*$. One crucial step behind this operation is that we need to find a new root for each subtree $T(w)$ originally hanging on $\mathit{path}(u, v)$. The following lemma tells us where the $T(w)$ should be rerooted on $\mathit{path}(u, v)$ in $T^*$. \begin{lemma}[\cite{baswana2016dynamic}] \label{feasible_edge} Let $T^*$ be a partially constructed DFS tree, $v$ the current vertex being visited, $w$ an (not necessarily proper) ancestor of $v$ in tree $T^*$, and $C$ a connected component of the subgraph induced by unvisited vertices. If there are two edges $e$ and $e'$ from $C$ incident on $v$ and $w$, then it is sufficient to consider only $e$ during the rest of the DFS traversal. \end{lemma} Let $Q(T(w), u, v)$ be the edge between the highest vertex on $\mathit{path}(u, v)$ incident to a vertex in subtree $T(w)$, and the corresponding vertex in $T(w)$. $Q(T(w), u, v)$ is defined to be $\mathsf{Null}$ if such an edge does not exist. By Lemma \ref{feasible_edge}, it suffices to ignore all other edges but just keep the edge returned by $Q(T(w), u, v)$; this is because we have reversed the order of $\mathit{path}(u, v)$ in $T^*$ and thus $Q(T(w), u, v)$ connects to the lowest possible position in $T^*$. Hence $T(w)$ should be rerooted at $Q(T(w), u, v)$. Denote $(x, y)$ to be the edge returned by $Q(T(w), u, v)$ where $x \in \mathit{path}(u, v)$, and then we add $y$ into $L(x)$. 
After finding an appropriate entering edge for each hanging subtree, we process each vertex $x \in \mathit{path}(u, v)$ in ascending order of depth (with respect to tree $T$). For every unvisited $w \in L(x)$, we set $\mathit{par}^*(w) = x$, and recursively call $\mathrm{\textsf{DFS}}(w)$. Summarize the preceding context in 5 sentences. Do not try to create questions or answers for your summarization.
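The correctness target of \textsf{BatchInsert} can be stated operationally: the output parent map must be a DFS tree of $G + U$, i.e. every non-tree edge must join an ancestor-descendant pair. A hedged brute-force oracle (my own sanity-check sketch, not the efficient algorithm of the section) recomputes a DFS tree from scratch and checks this property:

```python
import sys
from collections import defaultdict

def plain_dfs_tree(edges, root):
    """Brute-force oracle: recompute parent pointers of a DFS tree of the
    whole graph, ignoring the old tree T entirely."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    par = {}
    def visit(u, p):
        par[u] = p
        for w in adj[u]:
            if w not in par:
                visit(w, u)
    sys.setrecursionlimit(10000)
    visit(root, None)
    return par

def is_valid_dfs_tree(par, edges):
    """A parent map is a DFS tree iff every non-tree edge joins an
    ancestor-descendant pair (i.e. is a back edge, never a cross edge)."""
    def ancestors(v):
        seen = set()
        while v is not None:
            seen.add(v)
            v = par[v]
        return seen
    for u, v in edges:
        if par.get(u) == v or par.get(v) == u:
            continue  # tree edge
        if u not in ancestors(v) and v not in ancestors(u):
            return False
    return True
```

Running the incremental algorithm and this oracle side by side on random batches of insertions is a cheap way to test an implementation of \textsf{BatchInsert}.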
Delvon King

A Maryland judge ordered a sheriff’s deputy to shock a defendant who would not stop citing “sovereign citizen” doctrine during a court hearing. According to the court transcript, as reported last week by the Baltimore Post Examiner, Judge Robert C. Nalley asked 25-year-old Delvon King, who was representing himself, to stop talking. King, who was outfitted with an electronic shocking device on his leg, continued to challenge the validity of the case against him citing “common right and common reason,” and the judge ordered a Charles County sheriff’s deputy to administer the shock. “Do it,” the judge ordered, according to the transcript. “Use it.” The transcript does not indicate that King made any threatening movements toward the judge or anyone else in the courtroom or attempted to flee. King immediately crumpled to the ground when the shock was delivered. “He screamed and he kept screaming,” said his father, Alexander King. “When the officer hit the button, it was like an 18-wheeler hit Delvon. He hit the ground that quick. He kept screaming until the pain subsided.” The incident took place July 23, right before jury selection, although no prospective jurors were in the courtroom. A medical worker examined King at the courthouse, and then jury selection began. “I thought they’d give me time to recuperate,” said King, who was acting as his own attorney. Summarize the preceding context in 8 sentences. Do not try to create questions or answers for your summarization.
\subsection*{Acknowledgements} A large number of people have supplied advice and encouragement on this project over a period of many years. The undergraduates from my Introductory VIGRE Research Group held in Fall 2010 at the University of Georgia and my student Darcy Chanin performed many calculations for genus 4, 5, and 6 surfaces. I am grateful to the computational algebra group at the University of Sydney, especially John Cannon and Mark Watkins, for hosting me for a visit in June 2011 where I began programming the main algorithm in \texttt{Magma} \cite{Magma}. I have had many helpful conversations with my classmates and colleagues at Columbia University, the University of Georgia, and Fordham University. Valery Alexeev and James McKernan suggested the algorithm for matrix generators of representations outlined in Section \ref{matrix generators section}. Finally, I am grateful to Jennifer Paulhus, Tony Shaska, and John Voight, whose encouragement was essential in completing this project. This work was partially supported by the University of Georgia's NSF VIGRE grant DMS-03040000, a Simons Foundation Travel Grant, and a Fordham Faculty Research Grant. \subsection*{Online material} My webpage for this project is \cite{mywebpage}. This page contains links to the latest version of my \texttt{Magma} code, files detailing the calculations for specific examples, and many equations that are omitted in the tables in Section \ref{results section}. In future work, Jennifer Paulhus and I plan to include much of the data described in this paper and on the website \cite{mywebpage} in the L-Functions and Modular Forms Database at \texttt{lmfdb.org}. \section{The main algorithm} \label{algorithm section} We begin by stating the main algorithm. Then, in the following subsections, we discuss each step in more detail, including precise definitions and references for terms and facts that are not commonly known. 
\begin{algorithm} \label{main algorithm} \mbox{} \\ \textsc{Inputs:} \begin{enumerate} \item A finite group $G$; \item an integer $g \geq 2$; \item a set of surface kernel generators $(a_1,\ldots,a_{g_0}; b_1,\ldots,b_{g_0}; g_1,\ldots,g_r)$ determining a family of nonhyperelliptic Riemann surfaces $X$ of genus $g$ with $G \subset \operatorname{Aut}(X)$\\ \end{enumerate} \textsc{Output:} A locally closed set $B \subset \mathbb{A}^{n}$ and a family of smooth curves $\mathcal{X} \subset \mathbb{P}^{g-1} \times B$ such that for each closed point $b \in B$, the fiber $\mathcal{X}_b$ is a smooth genus $g$ canonically embedded curve with $G \subset \operatorname{Aut}(\mathcal{X}_b)$.\\ \begin{enumerate} \item[Step 1.] Compute the conjugacy classes and character table of $G$. \item[Step 2.] Use the Eichler trace formula to compute the character of the action on differentials and on cubics in the canonical ideal. \item[Step 3.] Obtain matrix generators for the action on holomorphic differentials. \item[Step 4.] Use the projection formula to obtain candidate cubics. \item[Step 5.] Compute a flattening stratification and select the locus yielding smooth algebraic curves with degree $2g-2$ and genus $g$. \end{enumerate} \end{algorithm} \subsection{Step 1: conjugacy classes and character table of $G$} This step is purely for bookkeeping. It is customary to list the conjugacy classes of $G$ in increasing order, and to list the rows in a character table by increasing degree. However, there is no canonical order to either the conjugacy classes or the irreducible characters. Given two different descriptions of a finite group $G$, modern software such as \texttt{Magma} may order the classes or the irreducible characters of $G$ differently. Hence, we compute and fix these at the beginning of the calculation. 
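Step 1 is routine in \texttt{Magma}; as an illustrative stand-in only (assumed Python code, not the author's \texttt{Magma} implementation), one can compute and fix the conjugacy classes of a small permutation group by brute force:

```python
from itertools import permutations

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]] of permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugacy_classes(group):
    """Orbits of the conjugation action x -> g x g^{-1} of the group on itself."""
    classes, seen = [], set()
    for x in group:
        if x in seen:
            continue
        cls = {compose(compose(g, x), inverse(g)) for g in group}
        seen |= cls
        classes.append(cls)
    return classes
```

Fixing the resulting class list once, as the text recommends, avoids the bookkeeping hazard that different group descriptions lead software to order classes and characters differently.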
\subsection{Step 2: Counting fixed points and the Eichler trace formula} Here we define surface kernel generators for the automorphism group of a Riemann surface. Summarize the preceding context in 5 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction} Let $G$ be a simple undirected graph with the \textit{vertex set} $V(G)$ and the \textit{edge set} $E(G)$. A vertex with degree one is called a \textit{pendant vertex}. The distance between the vertices $u$ and $v$ in graph $G$ is denoted by $d_G(u,v)$. A cycle $C$ is called \textit{chordless} if $C$ has no \textit{cycle chord} (that is, an edge not in the edge set of $C$ whose endpoints lie on the vertices of $C$). The \textit{induced subgraph} on vertex set $S$ is denoted by $\langle S\rangle$. A path that starts in $v$ and ends in $u$ is denoted by $\stackrel\frown{v u}$. A \textit{traceable} graph is a graph that possesses a Hamiltonian path. In a graph $G$, we say that a cycle $C$ is \textit{formed by the path} $Q$ if $ | E(C) \setminus E(Q) | = 1 $. So every vertex of $C$ belongs to $V(Q)$. In 2011 the following conjecture was proposed: \begin{conjecture}(Hoffmann-Ostenhof \cite{hoffman}) Let $G$ be a connected cubic graph. Then $G$ has a decomposition into a spanning tree, a matching and a family of cycles. \end{conjecture} Conjecture \theconjecture$\,$ also appears in Problem 516 \cite{cameron}. There are a few partial results known for Conjecture \theconjecture. Kostochka \cite{kostocha} noticed that the Petersen graph, the prisms over cycles, and many other graphs have a decomposition desired in Conjecture \theconjecture. Ozeki and Ye \cite{ozeki} proved that the conjecture holds for 3-connected cubic plane graphs. Furthermore, it was proved by Bachstein \cite{bachstein} that Conjecture \theconjecture$\,$ is true for every 3-connected cubic graph embedded in the torus or Klein bottle. Akbari, Jensen and Siggers \cite[Theorem 9]{akbari} showed that Conjecture \theconjecture$\,$ is true for Hamiltonian cubic graphs. In this paper, we show that Conjecture \theconjecture$\,$ holds for traceable cubic graphs. Summarize the preceding context in 5 sentences. Do not try to create questions or answers for your summarization.
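The conjectured decomposition is easy to verify on small examples. The sketch below (my own illustration on $K_4$, which is cubic; the example is not from the paper) checks that a proposed edge partition really consists of a spanning tree, a matching, and a family of cycles:

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    """n - 1 edges that connect all n vertices form a spanning tree."""
    if len(edges) != n - 1:
        return False
    reach = {0}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if (u in reach) != (v in reach):
                reach.update((u, v))
                changed = True
    return len(reach) == n

def is_matching(edges):
    """No vertex is covered twice."""
    covered = [v for e in edges for v in e]
    return len(covered) == len(set(covered))

def is_cycle_family(edges):
    """In a simple graph, every incident vertex having degree exactly 2
    is equivalent to the edge set being a disjoint union of cycles."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return all(d == 2 for d in deg.values())

# K4: spanning star at vertex 0, empty matching, and the triangle on {1, 2, 3}
k4 = list(combinations(range(4), 2))
tree = [(0, 1), (0, 2), (0, 3)]
matching = []
cycles = [(1, 2), (1, 3), (2, 3)]
```

Such a checker is handy for experimenting with the conjecture on other small cubic graphs, e.g. the Petersen graph mentioned in the text.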
\section{Introduction} \label{sec:Introduction} Topological insulators (TIs) form a class of materials with unique properties, associated with a non-trivial topology of their quasiparticle band structure (for a review, see Refs.~\cite{Zhang:rev, Hasan-Kane:rev, Hasan-Moore:rev, Ando:rev}). The key feature of two-dimensional (2D) and three-dimensional (3D) TIs is the existence of special gapless edge and surface states, respectively, while the bulk states of those materials are gapped. The hallmark property of the surface states is their topological protection. Mathematically, the nontrivial topological properties of time-reversal (TR) invariant TIs are generally described \cite{Moore:2007} by multiple copies of the $Z_2$ invariants found by Kane and Mele \cite{Kane-Mele}. This implies that the energy band gap should close at the boundary between topological and trivial insulator (e.g., vacuum) giving rise to the occurrence of the gapless interface states and the celebrated bulk-boundary correspondence. The discovery of the $Z_2$ topology in TIs is an important breakthrough because it showed that nontrivial topology can be embedded in the band structure and that the presence of an external magnetic field is not mandatory for the realization of topological phases. Another distinctive feature of the 3D TIs is a relativistic-like energy spectrum of the surface states, whose physical origin is related to a strong spin-orbit coupling \cite{Hsieh:2009}. Indeed, the surface states on each of the surfaces are described by 2D massless Dirac fermions in an irreducible 2$\times$2 representation, with a single Dirac point in the reciprocal space. For comparison, quasiparticles in graphene demonstrate similar properties, but have four inequivalent Dirac cones due to a spin and valley degeneracy \cite{Castro:2009} that makes certain aspects of their physics very different from those of the surface states in TIs. 
In our study below, we will concentrate only on the case of the strong 3D TIs whose surface states are protected by the topology of the bulk bands in combination with the TR symmetry. This leads to the locking of the momentum and spin degrees of freedom and, consequently, to the formation of a helical Dirac (semi)metal state \cite{Hsieh:2009}. Such a state is characterized by electron antilocalization and the absence of backscattering. The phenomenon of antilocalization has deep mathematical roots and is usually explained by an additional Berry's phase $\pi$ that is acquired when an electron circles a Dirac point. From the physical viewpoint, when scattering on an impurity, an electron must change its spin in order to preserve its chirality. Such a process is possible only in the case of magnetic impurities, which break the TR symmetry explicitly. Experimentally, a linear relativistic-like dispersion law of the surface states is observed in Bi$_{1-x}$Sb$_x$, Bi$_2$Se$_3$, Bi$_2$Te$_3$, Sb$_2$Te$_3$, Bi$_2$Te$_2$Se, and other materials by using angle-resolved photoemission spectroscopy (ARPES) \cite{Hsieh:2008, Zhang:2009, Hsieh:2009, Chen:2009, Cava-Hasan}. Furthermore, scanning tunneling microscopy and scanning tunneling spectroscopy provide additional information about the topological nature of the surface states, such as the quasiparticle interference patterns around impurities and defects. The Fourier analysis of these patterns has shown that the backscattering between $\mathbf{k}$ and $-\mathbf{k}$ is highly suppressed in Bi$_{1-x}$Sb$_x$ \cite{Roushan:2009} and Bi$_2$Te$_3$ \cite{Zhang-Cheng:2009}, in accord with the TR symmetry protection.
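The Berry phase $\pi$ mentioned above can be verified numerically. The sketch below (our illustration, not taken from any cited work) discretizes a loop around the Dirac point of $H(\mathbf{k}) = k_x\sigma_x + k_y\sigma_y$ and accumulates the gauge-invariant phase of the product of nearest-neighbor overlaps of the lower-band eigenstates:

```python
import numpy as np

def berry_phase(radius=1.0, steps=400):
    """Discrete Berry phase of the lower band of the 2D massless Dirac
    Hamiltonian H(k) = kx*sx + ky*sy, around a loop enclosing k = 0."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    thetas = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)
    states = []
    for t in thetas:
        kx, ky = radius * np.cos(t), radius * np.sin(t)
        _, vecs = np.linalg.eigh(kx * sx + ky * sy)
        states.append(vecs[:, 0])  # lower-band eigenstate
    # Closed product of overlaps: arbitrary eigenvector phases cancel.
    prod = 1.0 + 0.0j
    for i in range(steps):
        prod *= np.vdot(states[i], states[(i + 1) % steps])
    return -np.angle(prod)
```

Up to the sign convention, the result is $\pm\pi$, consistent with the antilocalization argument.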
\section{Introduction} Optical remote sensing has made significant advances in recent years. Among these has been the deployment and wide-spread use of hyperspectral imagery on a variety of platforms (including manned and unmanned aircraft and satellites) for a wide variety of applications, ranging from environmental monitoring, ecological forecasting, and disaster relief to applications pertaining to national security. With rapid advancements in sensor technology, and the resulting reduction of size, weight and power requirements of the imagers, it is also now common to deploy multiple sensors on the same platform for multi-sensor imaging. As a specific example, it is appealing for a variety of remote sensing applications to acquire hyperspectral imagery and Light Detection and Ranging (LiDAR) data simultaneously --- hyperspectral imagery offers a rich characterization of object-specific properties, while LiDAR provides topographic information that complements hyperspectral imagery \cite{dalponte2008fusion,YP2015,brennan2006object,shimoni2011detection,pedergnana2011fusion}. Modern LiDAR systems provide the ability to record entire waveforms for every return signal as opposed to providing just the point cloud. This enables a richer representation of surface topography. While feature reduction is an important preprocessing step in the analysis of single-sensor high-dimensional passive optical imagery (particularly hyperspectral imagery), it becomes particularly important with multi-sensor data, where each sensor contributes to high-dimensional raw features. A variety of feature projection approaches have been used for feature reduction, including classical approaches such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and their many variants, and manifold learning approaches such as Supervised and Unsupervised Locality Preserving Projections \cite{lunga2014manifold}.
Several of these methods are implemented in both the input (raw) feature space and the Reproducing Kernel Hilbert Space (RKHS) for data that are nonlinearly separable. Further, most traditional approaches to feature extraction are designed for single-sensor data --- a unique problem with multi-sensor data is that the feature spaces corresponding to each sensor often have different statistical properties, and a single feature projection may hence be sub-optimal. It is hence desirable to have a projection for feature reduction that preserves the underlying information from each sensor in a lower dimensional subspace. More recently, we developed a feature projection approach, referred to as Angular Discriminant Analysis (ADA) \cite{CP2015_ADA_JSTSP,PC2013Asilomar_Sparse,PrasadCuiICASSP2013}, that was optimized for hyperspectral imagery and demonstrated robustness to spectral variability. Specifically, the approach sought a lower dimensional subspace where classes were maximally separated in an angular sense, preserving important spectral-shape-related characteristics. We also developed a local variant of the algorithm (LADA) that preserved angular locality in the subspace. In this paper, we propose a composite kernel implementation of this framework and demonstrate it for feature projection in multi-sensor settings. Specifically, by utilizing a composite kernel (a dedicated kernel for each sensor) and ADA (or LADA) for each sensor, the resulting projection is highly suitable for classification. The proposed approach serves as a very effective feature reduction algorithm for sensor fusion --- it optimally fuses multi-sensor data and projects it to a lower dimensional subspace. A traditional classifier can be employed following this, for supervised learning.
We validate the method with the University of Houston multi-sensor dataset, comprising hyperspectral and LiDAR data, and show that the proposed method significantly outperforms other approaches to feature fusion. The outline of the remainder of this paper is as follows. In sec. \ref{sec:related}, we review related work. In sec. \ref{sec:proposed}, we describe the proposed approach for multi-sensor feature extraction.
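The ADA/LADA objective itself is not reproduced here; the sketch below (function names are ours, for illustration only) shows just the composite-kernel construction: one RBF kernel per sensor, combined by a convex weighting before any subsequent projection or classification.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def composite_kernel(sensor_blocks, gammas, weights):
    """Convex combination of per-sensor kernels: K = sum_s w_s * K_s.
    Each sensor contributes its own feature block and bandwidth."""
    return sum(w * rbf_kernel(X, X, g)
               for X, g, w in zip(sensor_blocks, gammas, weights))
```

With weights summing to one, the composite kernel remains symmetric and positive semi-definite, so any kernel-based projection or classifier can be applied on top of it.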
A friend of mine recently convinced me to watch "Loose Change", a documentary about the alleged conspiracy and cover-up of the 9/11 terrorist attacks by the U.S. government. I'm not a big fan of conspiracy theories, and I knew little of the specific theories surrounding 9/11, but I watched the film with the most open mind I could muster. I found the film to be very engaging, and though I didn't buy the film's conspiracy and cover-up hypotheses, it did make me question whether something important was being kept secret. Seeing the conspiracy theories laid out so confidently and so sensationalistically also helped me to understand why one-third to one-half of Americans believe that our government either was somehow involved in the attacks or covered up information about them. One reason I generally have trouble accepting conspiracy theories is that they're usually based on far-fetched claims that are nearly impossible to disprove, or prove. My skepticism is further strengthened by the fact that we humans have an assortment of biases that can distort our judgments and allow us to maintain beliefs despite overwhelming evidence to the contrary. Some of these biases include the tendency to see patterns where none exist, and to interpret new information and recall old information in ways that confirm our expectations and beliefs. However, most of the time we're unaware of these biases and overly confident that our perceptions represent the objective truth. This is not to say that conspiracies never happen, or that I'm immune from engaging in my own conspiracy-like thinking sometimes. It just means that one of my own biases is to doubt these sorts of theories. Rather than speculating about the existence of specific conspiracies, I find a far more intriguing topic to be the psychology behind conspiracy thinking. Fortunately, an excellent book called Empire of Conspiracy by Tim Melley explores this issue.
Melley seeks to explain why conspiracy theories have become so pervasive in American culture in recent decades. He discusses some of the paranoia behind our obsessions with political assassinations, race relations, stalkers, mind control, bureaucracies, and the power of corporations and governments. Melley proposes that conspiracy thinking arises from a combination of two factors: when someone 1) holds strong individualist values and 2) lacks a sense of control.
Some Arizona lawmakers want the state to change how their electoral delegates vote for the president. If they’re successful, Arizona would be the first Republican-leaning state to back electing presidents through a country-wide popular vote. Arizona Public Radio’s Justin Regan reports. The National Popular Vote Compact is a coalition of 10 states and the District of Columbia. They include California, New York and Illinois – states that traditionally back Democrats and went blue in the last election. The goal of the pact is to accumulate 270 votes – enough to elect a president – among the members during an election. Officials in those states would then order their electoral delegates to vote for the presidential candidate who wins the national popular vote. Northern Arizona University political science professor Fred Solop says the potential of Arizona joining the pact could represent a big achievement for the popular vote movement. “Arizona is unique as a red state to consider this. It’s one of the first red states to give serious thought to this. And it represents a change, it’s a broadening of this movement. So it gives it a little more momentum for the future,” said Solop. Solop says the popular vote could increase voter turnout nationwide. Advocates for the Electoral College say it guarantees all states have a say in the election instead of candidates focusing on only high population areas. A bill to join the National Popular Vote Coalition passed the Arizona house, and it’s now being reviewed by the Senate. If the legislature approves the measure, the Compact would have 176 electoral votes among its members.
At the University of Iowa, the College Republicans sent an email this week to the entire college community about an event they termed the "Conservative Coming Out Week." Planned events include an "Animal Rights BBQ" and an opportunity for students to "pick up your Doctors' Notice to miss class for 'sick of being stressed', just like the Wisconsin public employees during the union protests." No doubt this email was intended to be provocative. But few expected this: Ellen Lewin, a professor of Anthropology and Gender, Women's & Sexuality Studies, wrote back: "FUCK YOU, REPUBLICANS." Natalie Ginty, Chairwoman of the Iowa Federation of College Republicans, wrote in an email to one of Lewin's supervisors: We understand that as a faculty member she has the right to express her political opinion, but by leaving her credentials at the bottom of the email she was representing the University of Iowa, not herself alone. In response, Lewin issued something of an apology on Monday, where she wrote: I admit the language was inappropriate, and apologize for any affront to anyone's delicate sensibilities. I would really appreciate your not sending blanket emails to everyone on campus, especially in these difficult times. As though that apology was not scant enough, the following day Lewin appeared to get more and more incensed over the issue, and tempered her apology even further on Tuesday: I should note that several things in the original message were extremely offensive, nearly rising to the level of obscenity. Despite the Republicans' general disdain for LGBT rights you called your upcoming event "conservative coming out day," appropriating the language of the LGBT right movement. Your reference to the Wisconsin protests suggested that they were frivolous attempts to avoid work. And the "Animal Rights BBQ" is extremely insensitive to those who consider animal rights an important cause. Then, in the email that Ms.
Ginty sent complaining about my language, she referred to me as Ellen, not Professor Lewin, which is the correct way for a student to address a faculty member, or indeed, for anyone to refer to an adult with whom they are not acquainted. I do apologize for my intemperate language, but the message you all sent out was extremely disturbing and offensive. While Lewin makes several good points regarding why the College Republicans' email got under her skin, it is of course unacceptable for a professor to curse out students via mass email. It's a shame for Lewin that her original response to the mass email was not the email above. After the University's president issued a statement condemning "intolerant and disrespectful discord," Lewin conceded by emailing the College Republicans' faculty advisor: "I have been sufficiently chastened by this incident that I can assure you it will not happen again."
\section{Introduction} Spectral embedding methods are based on analyzing Markov chains on a high-dimensional data set $\left\{x_i\right\}_{i=1}^{n} \subset \mathbb{R}^d$. There are a variety of different methods, see e.g. Belkin \& Niyogi \cite{belk}, Coifman \& Lafon \cite{coif1}, Coifman \& Maggioni \cite{coif2}, Donoho \& Grimes \cite{donoho}, Roweis \& Saul \cite{rs}, Tenenbaum, de Silva \& Langford \cite{ten}, and Sahai, Speranzon \& Banaszuk \cite{sahai}. A canonical choice for the weights of the graph is to declare the probability $p_{ij}$ of moving from point $x_j$ to $x_i$ to be $$ p_{ij} = \frac{ \exp\left(-\frac{1}{\varepsilon}\|x_i - x_j\|^2_{\ell^2(\mathbb{R}^d)}\right)}{\sum_{k=1}^{n}{ \exp\left(-\frac{1}{\varepsilon}\|x_k - x_j\|^2_{\ell^2(\mathbb{R}^d)}\right)}},$$ where $\varepsilon > 0$ is a parameter that needs to be suitably chosen. This Markov chain can also be interpreted as a weighted graph that arises as the natural discretization of the underlying `data manifold'. Seminal results of Jones, Maggioni \& Schul \cite{jones} justify considering the solutions of $$ -\Delta \phi_n = \lambda_n^2 \phi_n$$ as measuring the intrinsic geometry of the weighted graph. Here we always assume Neumann boundary conditions whenever such a graph approximates a manifold. \begin{figure}[h!]
% [Figure: TikZ diagram of a graph (original vertices and edges) together with a derived graph on the edge midpoints; the drawing is truncated in the source.]
\end{figure}
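As a minimal numerical sketch (ours, not from the cited works), the Markov matrix defined above can be assembled directly; the columns sum to one since the normalization runs over the source point $x_j$:

```python
import numpy as np

def transition_matrix(X, eps):
    """p_ij = exp(-||x_i - x_j||^2 / eps) / sum_k exp(-||x_k - x_j||^2 / eps):
    the probability of moving from x_j to x_i (column-stochastic matrix)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / eps)
    return W / W.sum(axis=0, keepdims=True)
```

The spectrum of this matrix (equivalently, of the associated graph Laplacian) is what the embedding methods above analyze.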
\section{Introduction}\label{Intro} The original 1970 conception of Mastermind by Mordechai Meirowitz was a sequence reconstruction game. One player (the \emph{codemaker}) would construct a hidden sequence of four pegs, each peg being one of six colors, and the other player (the \emph{codebreaker}) would make guesses of the same form, receiving feedback after each guess regarding how close they were to the hidden sequence. In 1963, Erd\H os and R\'enyi \cite{ER63} studied the two-color variant of this game, and after the release of Mastermind, Knuth showed that a minimax strategy guarantees guessing the hidden vector in no more than 5 turns \cite{DK76}. Many authors have since studied algorithms to minimize the number of guesses required in the worst case \cite{DS13,VC83,KT86,MG12,OS13,JP11,CC96,JP15,WG03,WG04}, and almost all of these results will be introduced and discussed at relevant points in this paper. We note here that the work of J{\"a}ger and Peczarski \cite{JP11,JP15} and Goddard \cite{WG03,WG04} deals with finding explicit optimal bounds for small numbers of colors and pegs, whereas we deal with asymptotics when both of these quantities are large. The variants of Mastermind which we study are defined by the following parameters: \begin{enumerate}[label=(\roman*)] \item ($k$) \textit{Size of Alphabet.} \item ($n$) \textit{Length of Sequence.} The hidden vector and all guess vectors will be elements of $[k]^n$. \item ($\Delta$) \textit{Distance Function.} $\Delta$ takes as inputs two vectors in $[k]^n$. The output may, for example, be a single integer, but this will not always be the case. Most research studies the two following distance functions: \begin{enumerate}[label=\alph*.]
\item\textit{``Black-peg and white-peg.''} Informally, a black peg denotes ``the correct color in the correct spot,'' and a white peg denotes ``the correct color in an incorrect spot.'' For two vectors $Q_t$ and $H$, the black-peg and white-peg distance function is the ordered pair $\Delta(Q_t, H) := (b(Q_t, H), w(Q_t, H))$, where \begin{equation*}\label{blackHitsDefinition} b(Q_t, H) = \left|\{i\in [1,n] \mid q_i = h_i\}\right|, \end{equation*} and \begin{equation*}\label{whiteHitsDefinition} w(Q_t, H) = \max_{\sigma}~b(\sigma(Q_t), H) - b(Q_t, H), \end{equation*} where $\sigma$ iterates over all permutations of $Q_t$. This variant is the distance function used in the original game of Mastermind. \item\textit{``Black-peg-only.''} This is simply $\Delta(Q_t, H) := b(Q_t, H)$, where $b$ is defined as above. \end{enumerate} \item($R$) \textit{Repetition.} A commonly studied variant of the game introduces the restriction that the guesses and the hidden vector cannot have repeated components, i.e., they are vectors $v$ with $v_i \neq v_j$ whenever $i \neq j$. \item($A$) \textit{Adaptiveness.} \end{enumerate}
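For illustration (our sketch, not from the paper), the two counts can be computed without iterating over permutations, using the standard identity that $\max_\sigma b(\sigma(Q_t), H)$ equals the multiset color overlap of the two vectors:

```python
from collections import Counter

def black_white(guess, hidden):
    """Return (b, w): black pegs (right color, right spot) and white pegs
    (right color, wrong spot), for equal-length sequences."""
    b = sum(g == h for g, h in zip(guess, hidden))
    # Multiset intersection counts every color match regardless of position;
    # subtracting the black pegs leaves the white pegs.
    overlap = sum((Counter(guess) & Counter(hidden)).values())
    return b, overlap - b
```

For example, `black_white("AABB", "ABAB")` yields two black pegs (positions 1 and 4) and two white pegs.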
Trump’s administration is acting quickly to dangerously reimagine reality. It has deleted all specific mentions of “climate change” and “global warming,” and removed an entire page dedicated to the subject at the http://www.whitehouse.gov/energy/climate-change URL, which is no longer an active link. UPDATE: While it’s typical to archive previous administration pages under new leadership, as Snopes.com points out, the fact remains that the new White House page removes mention of “climate change” and promotes a new energy policy page which specifically states it will pursue the elimination of key environmental policies like the Climate Action Plan. Motherboard noticed the change when it occurred at noon ET, when the official changeover took place. A screenshot of how it appeared under President Obama is included below for reference, and you can view a live version courtesy of the Wayback Machine. The White House official website also used to list Climate Change as a “Top Issue,” but now the closest equivalent is the “America First Energy Plan,” which leads to a statement about Trump’s broad policy goals in terms of U.S. energy policies, focusing on protectionist resource usage measures. The page also includes this troubling passage, indicating in no uncertain terms Trump’s goals as they pertain to the issue his new administration website won’t even call out by name: For too long, we’ve been held back by burdensome regulations on our energy industry. President Trump is committed to eliminating harmful and unnecessary policies such as the Climate Action Plan and the Waters of the U.S. rule. Lifting these restrictions will greatly help American workers, increasing wages by more than $30 billion over the next 7 years.
As of this writing, the EPA.gov website still lists “Climate Change” as one of its most popular environmental topic pages, so the official government agency overseeing environmental issues at least acknowledges the existence of the world’s most pressing ecological issue. That may change, however. Motherboard reports insiders are skeptical that the EPA’s web presence will survive untouched under the new administration, and the new White House policy overview on energy policies says it will reset the EPA’s focus on protecting clean air and water domestically. These kinds of dangerous and irresponsible changes to institutional repositories of public knowledge on serious issues agreed upon by the vast majority of credible experts are unlikely to be isolated. They could also have a profound impact on the availability of environmental data, and of course policy changes that follow from these immediate positional shifts are likely to have considerable effect on startups focused on green and clean tech projects. The official White House page for LGBTQ rights was also removed when the turnover occurred.
Efforts to terminate the North American Free Trade Agreement (NAFTA) aren’t going very well for the Trump administration. Talks between Mexico, the United States, and Canada stalled on Tuesday as trade negotiators clashed and exchanged barbs over steep U.S. demands. Both Mexico and Canada have indicated U.S. requests are too extreme, with Canadian Foreign Minister Chrystia Freeland accusing the United States of bringing a “winner-take-all mindset” to negotiations. U.S. Trade Representative Robert Lighthizer said by contrast that he was “surprised and disappointed by the resistance to change” displayed by his counterparts. That back and forth isn’t going away. Plans for a December deadline have been scrapped, and negotiations are now likely to stretch through March 2018. But it’s unclear whether any of that will help officials reach a consensus. Here’s why. How we got here All three North American countries entered into NAFTA in January 1994 with the aim of easing barriers to trade and investment across the region. The landmark agreement — which involved years of negotiations — took over a decade to reach its full effect and its impact on the North American economy is indisputable. Along with dispute resolution mechanisms and intellectual property protections, agriculture, textiles, and car manufacturing were all major components of the agreement, with environmental and labor safeguards agreed on the side. It also allows for all three countries to sell goods without imposing tariffs on one another. Since NAFTA was introduced, trade between Canada, the United States, and Mexico has more than tripled. Mexico sends around 80 percent of its exports to its northern neighbor and a number of U.S. factories are housed in Mexico, which in turn churn out items for the U.S. market.
\section{Introduction} A popular version of the third law of thermodynamics is that the entropy density of a physical system tends to zero in the $T \to 0$ limit\cite{mandl}. However, there is a class of theoretical models that violate this law\cite{fowler33,pauling,nagle66,lieb67,chow87,bramwell01,castelnovo08}:\ models in this class exhibit a ground-state degeneracy which grows exponentially with the system size, leading to a non-zero entropy density even at $T=0$. Nor can these be easily dismissed as theorists' abstractions, since one also sees ample evidence in experiment\cite{giauque36,harris97,ramirez99,higashinaka03} that there are systems in which the entropy plateaus at a non-zero value over a large range of temperature. In many such cases it is suspected that it eventually falls to zero at a much lower temperature scale, though recent theoretical work on skyrmion magnets suggests that this intuition may not always be reliable \cite{moessner}. Whatever the ultimate low-temperature fate of these materials, it is clear that over a broad range of temperatures they exhibit physics which is well captured by models with a non-zero residual entropy density. One important class of these are so-called ice models, in which the ground-state manifold consists of all configurations which satisfy a certain local `ice rule' constraint\cite{siddharthan99,denhertog00,isakov05}. The first such model was Pauling's model for the residual configurational entropy of water ice\cite{pauling}. Here the local constraint is that two of the four hydrogens neighboring any given oxygen should be chemically bonded to it to form a water molecule. Similar models were subsequently discovered to apply to the orientations of spins along local Ising axes in certain rare-earth pyrochlores\cite{siddharthan99,bramwell01}, which by analogy were dubbed `spin ice' compounds. 
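For orientation, Pauling's counting argument (standard textbook material underlying Ref.~\cite{pauling}, not a result of this paper) gives $W \approx (3/2)^N$ ground states and hence a residual entropy density of $k_B \ln(3/2) \approx 0.405\, k_B$ per site:

```python
import math

def pauling_residual_entropy():
    """Pauling's estimate for water ice: of the 2^4 = 16 proton arrangements
    around an oxygen, 6 satisfy the two-in/two-out ice rule. With two bonds
    per site, W ~ (2^2 * 6/16)^N = (3/2)^N, so s / k_B = ln(3/2) per site."""
    allowed_fraction = 6 / 16
    return math.log(2**2 * allowed_fraction)  # = ln(3/2) ≈ 0.4055
```

This non-zero value is precisely the $T = 0$ entropy plateau referred to in the text.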
Such models develop power-law spin-spin correlations at low temperatures, with characteristic `pinch points' in the momentum representation of the spin-spin correlation function\cite{bramwell01a}, but they do not order. Their low-temperature state is often referred to as a `co-operative paramagnet' \cite{villain79}. One interesting feature of such co-operative paramagnets is their response to an applied magnetic field. The configurations that make up the ice-rule manifold usually have different magnetizations; thus an applied field, depending on its direction, may either reduce\cite{higashinaka03,hiroi03,tabata06} or entirely eliminate\cite{fukazawa02,jaubert08} the degeneracy. In the latter case, further interesting physics may arise when the system is heated, especially if the ice-rule constraints do not permit the thermal excitation of individual flipped spins. In such cases the lowest-free-energy excitation may be a {\it string\/} of flipped spins extending from one side of the system to the other. A demagnetization transition mediated by such excitations is known as a {\it Kasteleyn transition}\cite{kasteleyn,fennell07,jaubert08}. In spin ice research to date, insight has often been gained from the study of simplified models where the dimensionality is reduced or the geometry simplified while retaining the essential physics\cite{mengotti11,chern12,wan}. In that spirit, we present in this paper a two-dimensional ice model which exhibits a Kasteleyn transition in an applied magnetic field. The model is especially interesting since, unlike its three-dimensional counterparts, it has the same Ising quantization axis for every spin. This raises the possibility that it could be extended to include a transverse magnetic field, thereby allowing the exploration of quantum Kasteleyn physics. The remainder of this paper is structured as follows. 
In section \ref{sec:model}, we present our spin ice model, along with some analytical and numerical results on its thermodynamic properties in the absence of an applied magnetic field.
American tourist gives Nazi salute in Germany, is beaten up BERLIN (AP) — Police say a drunken American man was punched by a passer-by as he gave the stiff-armed Nazi salute multiple times in downtown Dresden. Dresden police said Sunday the 41-year-old, whose name and hometown weren’t given for privacy reasons, suffered minor injuries in the 8:15 a.m. Saturday assault. Police say the American, who is under investigation for violating Germany’s laws against the display of Nazi symbols or slogans, had an extremely high blood alcohol level. His assailant fled the scene, and is being sought for causing bodily harm. It’s the second time this month that tourists have gotten themselves into legal trouble for giving the Nazi salute. On August 5 two Chinese tourists were caught taking photos of themselves making the gesture in front of Berlin’s Reichstag building.
\section{Introduction} \label{sec:1} The influence of social groups in pedestrian dynamics, especially in evacuation scenarios, is an area of recent interest, see e.g. \cite{mueller2,mueller} and other contributions in these proceedings. The situations that are considered are widespread and well-known in everyday life. For example, many people visit concerts or soccer matches not alone, but together with family and friends in so-called social groups. In case of emergency, these groups will try to stay together during an evacuation. The strength of this cohesion depends on the composition of the social group. Several adult friends would form a loose group that is mainly connected via eye contact, whereas a mother would take her child's hand and form a strong or even fixed bond. In addition, even the size of the social groups could have an effect on the evacuation behaviour. In order to consider these phenomena in a more detailed way, a cooperation of researchers of the universities of Cologne and Wuppertal and the Forschungszentrum J\"ulich has performed several experiments aiming at the determination of the general influence of inhomogeneities on pedestrian dynamics. They contained two series of experiments with pupils of different ages in two schools in Wuppertal. The first series focussed on the determination of the fundamental diagram of inhomogeneous groups, i.e. pedestrians of different size. The second series of experiments considered evacuation scenarios.
\section{Introduction} Ultra-cold molecules can play important roles in studies of many-body quantum physics~\cite{pupillo2008cold}, quantum logic operations~\cite{demille2002quantum}, and ultra-cold chemistry~\cite{ni2010dipolar}. In our recent studies of LiRb, motivated largely by its large permanent dipole moment~\cite{aymar2005calculation}, we have explored the generation of these molecules in a dual species MOT~\cite{dutta2014formation,dutta2014photoassociation}. In particular, we have found that the rate of generation of stable singlet ground state molecules and first excited triplet state molecules through photoassociation, followed by spontaneous emission decay, can be very large~\cite{v0paper,Adeel,lorenz2014formation}. There have been very few experimental studies of triplet states in LiRb~\cite{Adeel}, in part because they are difficult to access in thermally-distributed systems. Triplet states of bi-alkali molecules are important to study for two reasons: first, Feshbach molecules, which are triplet in nature, provide an important association gateway for the formation of stable molecules~\cite{marzok2009feshbach}; also, photoassociation (PA) of trapped colliding atoms is often strongest for triplet scattering states. Mixed singlet--triplet states are usually required to transfer these molecules to deeply bound singlet states. \begin{figure}[t!] \includegraphics[width=8.6cm]{PEC.png}\\ \caption{(Color on-line) Energy level diagram of the LiRb molecule, showing relevant PECs from Ref.~\protect\cite{Korek}.
Vertical lines show the various optical transitions, including {\bf (a)} photoassociation of atoms to molecular states below the D$_1$ asymptote; {\bf (b)} spontaneous decay of excited state molecules leading to the $a \: ^3 \Sigma ^+$ state; {\bf (c)} RE2PI to ionize LiRb molecules, ($\nu_{c}$ used later in this paper is the frequency of this laser source); and {\bf (d)} state-selective excitation of the $a \: ^3 \Sigma ^+$ state for depletion of the RE2PI signal (with laser frequency $\nu_{d}$). The black dashed line represents our PA states. The inset shows an expanded view of the different $d \ ^3\Pi$ spin-orbit split states as well as the perturbing neighbor $D \ ^1\Pi$.} \label{fig:PEC} \end{figure} \begin{figure*} [t!] \includegraphics[width=\textwidth]{v11Progression.png}\\ \caption{(Color on-line) Subsection of the RE2PI spectra. The PA laser is tuned to the $2(0^-) \ v=-11 \ J=1$ resonance, from which spontaneous decay is primarily to the $a \ ^3\Sigma^+ \ v^{\prime \prime}=11$ state. Most of these lines are $d \ ^3\Pi_{\Omega} \ v^{\prime} \leftarrow a \ ^3\Sigma^+ \ v^{\prime \prime}=11$ transitions, where $v^\prime$ is labeled on individual lines. From top to bottom: black solid lines label transitions to $\Omega=2$, blue dashed lines label transitions to $\Omega=1$, green dot-dashed lines label transitions to $\Omega=0$. Also shown (red dotted lines) are three $ D \ ^1\Pi \ v^\prime \leftarrow a \ ^3\Sigma^+ \ v^{\prime \prime}=11$ transitions.} \label{fig:v11progression} \end{figure*} We show an abbreviated set of potential energy curves (PEC), as calculated in Ref.~\cite{Korek}, in Fig.~\ref{fig:PEC}. The d$^3 \Pi$ - D$^1 \Pi$ complex in LiRb, asymptotic to the Li 2p $^2P_{3/2, 1/2}$ + Rb 5s $^2S_{1/2}$ free atom state, has several features that can promote its utility in stimulated-Raman-adiabatic-passage (STIRAP) and photoassociation. 
First, the \textit{ab initio} calculations of Ref.~\cite{Korek} predict mixing between low vibrational levels of the $d \ ^3\Pi_1$ and the D$^1 \Pi$ states. Second, both legs of a STIRAP transfer process from loosely bound triplet-character Feshbach molecules to the rovibronic ground state can be driven with commercially-available diode lasers. And third, similar deeply bound $^3 \Pi$ resonances have been successfully used for short-range PA in RbCs~\cite{RbCs1,RbCs2,RbCs3}. While an interesting discovery on its own, spontaneous decay of these states after PA can populate the $a \ ^3\Sigma^+ \ v^{\prime \prime}=0$ state; one RbCs team~\cite{RbCs2} found spontaneous decay of these states even populated the $X \ ^1\Sigma^+ \ v^{\prime \prime}=0$ state.
\section{Principle of nano strain-amplifier} \begin{figure*}[t!] \centering \includegraphics[width=5.4in]{Fig1} \vspace{-0.5em} \caption{Schematic sketches of nanowire strain sensors. (a)(b) Conventional non-released and released NW structure; (c)(d) The proposed nano strain-amplifier and its simplified physical model.} \label{fig:fig1} \vspace{-1em} \end{figure*} Figures \ref{fig:fig1}(a) and (b) show the concept of the conventional structures of piezoresistive sensors. The piezoresistive elements are either released from, or kept on, the substrate. The sensitivity ($S$) of the sensors is defined as the ratio of the relative resistance change ($\Delta R/R$) of the sensing element to the strain applied to the substrate ($\varepsilon_{sub}$): \begin{equation} S = (\Delta R/R)/\varepsilon_{sub} \label{eq:sensitivity} \end{equation} In addition, the relative resistance change $\Delta R/R$ can be calculated from the gauge factor ($GF$) of the material used to make the piezoresistive elements: $\Delta R/R = GF \varepsilon_{ind}$, where $\varepsilon_{ind}$ is the strain induced in the piezoresistor. In most conventional strain gauges, as shown in Fig. \ref{fig:fig1}(a,b), the thickness of the sensing layer is typically below a few hundred nanometers, which is much smaller than that of the substrate. Therefore, the strain induced in the piezoresistive elements is approximately the same as that of the substrate ($\varepsilon_{ind} \approx \varepsilon_{sub}$). Consequently, to improve the sensitivity of strain sensors (e.g. by enlarging $\Delta R/R$), electrical approaches which can enlarge the gauge factor ($GF$) are required. Nevertheless, as aforementioned, the existence of a large gauge factor in nanowires, due to quantum confinement or surface states, is still considered controversial. It is also evident from Eq.
\ref{eq:sensitivity} that the sensitivity of strain sensors can also be improved using a mechanical approach, which enlarges the strain induced into the piezoresistive element.
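The two relations above can be illustrated with a short numerical sketch. This is not from the paper; the gauge factor and the amplification ratio `k` are hypothetical values chosen only to show how mechanically concentrating strain into the sensing element, so that the induced strain is `k` times the substrate strain with `k > 1`, multiplies the sensitivity of Eq. \ref{eq:sensitivity} by `k`:

```python
# Sketch of the two relations in the text:
#   Delta R / R = GF * eps_ind      (gauge-factor relation)
#   S = (Delta R / R) / eps_sub     (sensitivity definition)
# All numerical values below are hypothetical, for illustration only.

def relative_resistance_change(gauge_factor, eps_induced):
    """Delta R / R of a piezoresistor under induced strain."""
    return gauge_factor * eps_induced

def sensitivity(delta_r_over_r, eps_substrate):
    """Sensitivity S with respect to the substrate strain."""
    return delta_r_over_r / eps_substrate

gf = 30.0        # hypothetical gauge factor of the piezoresistive material
eps_sub = 1e-4   # strain applied to the substrate

# Conventional gauge: eps_ind ~ eps_sub, so S reduces to GF itself.
s_conventional = sensitivity(relative_resistance_change(gf, eps_sub), eps_sub)

# Mechanical amplification: eps_ind = k * eps_sub with k > 1
# scales the sensitivity by the same factor k.
k = 5.0
s_amplified = sensitivity(relative_resistance_change(gf, k * eps_sub), eps_sub)
```

Under these assumptions `s_amplified / s_conventional` equals the amplification ratio `k`, which is the mechanical route to higher sensitivity that the text contrasts with enlarging $GF$ electrically.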
\section*{Supplemental Material} In this Supplemental Material, we provide more numerical data for the ground-state entanglement entropy and entanglement spectrum. \subsection*{Ground-state entanglement entropy} In the main text, we have discussed the ground-state entanglement entropy $S(\overline{\rho})$ obtained by averaging the density matrices of the three ground states, i.e., $\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$. Now we compute the corresponding result $S(|\Psi_i\rangle)$ and its derivative $dS(|\Psi_i\rangle)/dW$ for the three individual states. The sample-averaged results are shown in Fig.~\ref{Spsi}. The data of the three individual states show some differences, but are qualitatively the same: for all of them, the entanglement decreases with $W$, and the derivative with respect to $W$ has a single minimum that becomes deeper for larger system sizes. For the finite systems that we have studied, the location of the minimum does depend somewhat on the individual states, but the value does not deviate much from $W=0.6$. To incorporate the effects of all three states, we compute the mean $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$. This is an alternative averaging method to the one ($\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$) that we use in the main text. The sample-averaged results are shown in Fig.~\ref{Sbar}. The minimum of $\langle d\overline{S}/dW\rangle$ is located at $W\approx0.6$.
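The two averaging procedures compared above can be mimicked numerically. The sketch below uses random two-qubit pure states in place of the actual ground states (an assumption made purely for illustration); `entropy` is the von Neumann entanglement entropy computed from a half-system reduced density matrix:

```python
import numpy as np

def reduced_density_matrix(psi, d_a, d_b):
    """Trace out subsystem B of a pure state on H_A (x) H_B."""
    m = psi.reshape(d_a, d_b)
    return m @ m.conj().T

def entropy(rho, eps=1e-12):
    """Von Neumann entropy -Tr(rho log rho) from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > eps]
    return float(-np.sum(evals * np.log(evals)))

# Three stand-in "ground states" (random, normalized; d_a = d_b = 2).
rng = np.random.default_rng(0)
states = []
for _ in range(3):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    states.append(v / np.linalg.norm(v))

# Main-text method: average the density matrices first, then reduce.
rho_bar = sum(np.outer(v, v.conj()) for v in states) / 3
rho_bar_a = rho_bar.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
s_of_average = entropy(rho_bar_a)

# Alternative method discussed here: average the individual entropies.
s_mean = np.mean([entropy(reduced_density_matrix(v, 2, 2)) for v in states])
```

For a maximally entangled state such as $(|00\rangle + |11\rangle)/\sqrt{2}$ the entropy is $\ln 2$; by concavity of the von Neumann entropy, `s_of_average` is never smaller than `s_mean`, which is why the two averaging methods can give quantitatively different curves.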
Matt Cain makes startling revelation about his pitching arm Milwaukee -- Matt Cain made a startling revelation about his right arm: He said he has not been able to extend it fully since high school and has learned to throw without full extension. The bone chips in his elbow, which will be removed in surgery Monday, have worsened the condition. Cain provided a demonstration in the visiting clubhouse Wednesday at Miller Park. He extended his left arm fully, but his right arm goes only so far. "I've never had full range of motion in my arm," he said. "I still won't after the surgery. It won't be good for my elbow." Cain finally agreed to surgery because the pain would not allow him to pitch. "I lost too much range of motion in my arm," he said. "I probably would have lost stuff, too. It was just time to do it." The 29-year-old right-hander firmly denied that the bone chips caused his steadily rising ERA over the past several years, saying, "I don't believe that's right. A lot of it was making bad pitches and not executing them. I didn't have a problem (with the chips) until just before the All-Star break." Cain said the bone chips bothered him mainly between starts, and he found a way to "manipulate" them out of harm's way. Now they have shifted in a way that he cannot maneuver them. San Francisco Giants pitcher Matt Cain has not been able to extend his right arm fully since high school. Bone chips in his elbow, which will be removed in surgery, have worsened the condition.
Photo: Darron Cummings, Associated Press He is unsure if he will travel with the team after the surgery but promised to be there for home games, saying, "I'm going to try to keep these guys as loose as possible." Briefly: Angel Pagan was supposed to arrive in Milwaukee at 10 p.m. local time, but manager Bruce Bochy had not heard from him. Pagan is expected to report Thursday and start in center field.
Following an on-air rant this morning about Man Booker Prize winner Eleanor Catton, in which he called her a “traitor” and an “ungrateful hua,” radio host Sean Plunket has pre-emptively taken himself to MediaWorks head offices in Auckland, where he intends to remain until such time as he would’ve been called there anyway. A spokesperson for MediaWorks said Plunket entered the offices at about 12:11pm this afternoon, about ten minutes after finishing his morning show on Radio Live. The secretary greeted him. “Yes,” said Plunket. “I’m here for, uhm, you know–” “Yes,” she nodded, before directing him to the office of Wendy Palmer, chief executive of MediaWorks radio. Palmer is currently on holiday, and won’t be back until next week, but according to several sources, Plunket remains in the office, snacking on complimentary mints, and has no intention of leaving. “He’s very quiet,” said Julie Langsford, who works near Palmer’s office. “Every hour or so, he just says something to himself under his breath, like ‘oh shit’, or ‘fuck me.’ Not upset, really. Quite peaceful, actually. I think he’s just accepted it, and doesn’t want to fight anymore.” MediaWorks said that the fire department has been alerted, and is on standby to rescue Plunket if he runs out of mints.
\section{Outline} \maketitle \tocless\section{Introduction}{} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} There is a vast literature on the theory of {\em Newton polyhedra}, initiated by V.\ I.\ Arnold's hypothesis that `reasonable' invariants of objects (e.g.\ singularities, varieties, etc) associated to a `typical' (system of) analytic function(s) or polynomial(s) should be computable in terms of their {\em Newton diagrams} or {\em Newton polytopes} (\cref{Newton-definition}). In this article we revisit two of the original questions that shaped this theory, namely the question of computing the Milnor number\footnote{\label{milnor-footnote}Let $f \in \ensuremath{\mathbb{K}}[x_1, \ldots, x_n]$, where $\ensuremath{\mathbb{K}}$ is an algebraically closed field. Then the {\em Milnor number} $\mu(f)$ of $f$ at the origin is the dimension (as a vector space over $\ensuremath{\mathbb{K}}$) of the quotient ring of $\ensuremath{\mathbb{K}}[[x_1, \ldots, x_n]]$ modulo the ideal generated by partial derivatives of $f$ with respect to $x_j$'s.} of the singularity at the origin of the hypersurface determined by a generic polynomial or power series, and the question of computing the number (counted with multiplicity) of isolated zeroes of $n$ generic polynomials in $n$ variables. The first question was partially solved in a classical work of Kushnirenko \cite{kush-poly-milnor} and a subsequent work of Wall \cite{wall}; Bernstein \cite{bern}, following the work of Kushnirenko \cite{kush-poly-milnor}, solved the second question for the `torus' $(\kk^*)^n$ (where $\ensuremath{\mathbb{K}}$ is an algebraically closed field and $\ensuremath{\mathbb{K}}^* := \ensuremath{\mathbb{K}} \setminus \{0\}$), and many other authors (including Khovanskii \cite{khovanus}, Huber and Sturmfels \cite{hurmfels-bern}, Rojas \cite{rojas-toric}) gave partial solutions for the case of the affine space $\ensuremath{\mathbb{K}}^n$. 
Extending the approach from Bernstein's proof in \cite{bern} of his theorem, we give a complete solution to the first problem, and complete the program of extending Bernstein's theorem to $\ensuremath{\mathbb{K}}^n$ (or more generally, to the complement of a union of coordinate subspaces of $\ensuremath{\mathbb{K}}^n$). \\ In \cite{kush-poly-milnor} Kushnirenko gave a beautiful expression for a lower bound on the generic Milnor number and showed that a polynomial (or power series) attains this bound in the case that its Newton diagram is {\em convenient}\footnote{Kushnirenko used the term {\em commode} in French; `convenient' is also widely used, see e.g.\ \cite{boubakri-greuel-markwig}.} (which means that the Newton diagram contains a point on each coordinate axis), and it is {\em Newton non-degenerate}, i.e.\ the following is true (see \cref{inndefinition} for a precise formulation): \begin{savenotes} \begin{align} \parbox{.84\textwidth}{% for each {\em weighted order}\footnote{A {\em weighted order} corresponding to weights $(\nu_1, \ldots, \nu_n) \in \ensuremath{\mathbb{Z}}^n$ is the map $\nu:\ensuremath{\mathbb{K}}[x_1, \ldots, x_n] \to \ensuremath{\mathbb{Z}}$ given by $\nu(\sum a_\alpha x^\alpha ) := \min\{\sum_{k=1}^n \alpha_k\nu_k: a_\alpha \neq 0\}$.} $\nu$ on $\ensuremath{\mathbb{K}}[x_1, \ldots, x_n]$ with positive weights, the partial derivatives of the corresponding {\em initial form}\footnote{Given a weighted order $\nu$, the {\em initial form} of $f = \sum_\alpha a_\alpha x^\alpha \in \ensuremath{\mathbb{K}}[x_1, \ldots, x_n]$ is the sum of all $a_\alpha x^\alpha$ such that $\nu(x^\alpha) = \nu(f)$
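The weighted order and initial form defined in the footnotes above admit a direct computational sketch. Here a polynomial is represented as a dictionary mapping exponent tuples to coefficients (an encoding chosen for illustration, not taken from the paper):

```python
# A polynomial f = sum a_alpha x^alpha is stored as
# {alpha (tuple of exponents): a_alpha (nonzero coefficient)}.

def weighted_order(poly, weights):
    """nu(f) = min over monomials of sum_k alpha_k * nu_k."""
    return min(sum(a * w for a, w in zip(alpha, weights))
               for alpha in poly)

def initial_form(poly, weights):
    """The sum of the terms a_alpha x^alpha with nu(x^alpha) = nu(f)."""
    nu = weighted_order(poly, weights)
    return {alpha: c for alpha, c in poly.items()
            if sum(a * w for a, w in zip(alpha, weights)) == nu}

# Example: f = x^2 + x*y + y^3 with weights (1, 1):
# nu(f) = 2, and the initial form is x^2 + x*y.
f = {(2, 0): 1, (1, 1): 1, (0, 3): 1}
```

Newton non-degeneracy then imposes, for every choice of positive weights, a condition on the common zeros of the partial derivatives of this initial form; the precise condition is the one referenced as \cref{inndefinition} in the paper.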
\section{Introduction}\label{sec:intro} The precise manipulation of nano and sub-nanoscale physical systems lies at the heart of the ongoing quantum revolution, by which new communication and information technologies are expected to emerge \cite{bib:natphot2009,bib:nature2010}. In this context, an amazing progress has been made in the study of non-equilibrium dynamics of many-body quantum systems, both theoretically and experimentally \cite{bib:rev_mbody2008,bib:rev_neq2011}. A wide range of different phenomena has been closely studied in recent years, such as many-body localization \cite{bib:mbl2010,bib:mbl2015}, relaxation \cite{bib:rigol2007,bib:eisert2008,bib:wisn2015}, thermalization \cite{bib:rigol2008,bib:santos2011,bib:fazio2011}, quantum phase transitions \cite{bib:qpt2011}, among others.\\ Understanding the dynamics of such complex quantum systems is the first step towards the ultimate goal: the ability to engineer its complete time-evolution using a small number of properly tailored control fields. To tackle this problem, optimal control theory (OCT) \cite{bib:tannor1993,bib:rabitz1998} emerges as the natural tool. Routinely used in various branches of science \cite{bib:krotov1996}, optimization techniques allows to derive the required shape for a control field $\epsilon(t)$ that optimizes a particular dynamical process for a quantum system described by a Hamiltonian $H(\epsilon)$. For example, a typical goal in quantum control is to connect a given initial $\Ket{\psi_0}$ and target states $\Ket{\psi_f}$ in some evolution time $T$. In recent years, optimal control has been applied with great success in systems of increasing complexity, with applications including state control of many-boson dynamics \cite{bib:sherson2013,bib:calarco2015}, the crossing of quantum-phase transitions \cite{bib:doria2011}, generation of many-body entangled states \cite{bib:mintert2010,bib:caneva2012} and optimization of quantum thermodynamic cycles \cite{bib:montangero2016}. 
A lot of attention has also been devoted to investigating the fundamental limitations of OCT, above all in connection with the study of the so-called quantum speed limit \cite{bib:caneva2009,bib:murphy2010,bib:hegerfeldt2013,bib:nos_qsl2013,bib:nos_qsl2015}. In a recent work, OCT has even been used in a citizen-science scenario, allowing an investigation of the power of gamification techniques in solving quantum control problems \cite{bib:sherson2016}.\\ In this work, we investigate the connection between the complexity of a quantum system and its controllability. To this end, we study optimal control protocols on a spin-1/2 chain with short-range interactions, both in the few- and many-body regimes. By using this model, we are able to tune the physical complexity of the system in two different ways: (a) by adding excitations to the chain, we can increase the system space dimension; (b) by tuning the interparticle coupling, we can drive the system through a transition from a regular energy spectrum to a chaotic one. We perform an unconstrained optimization in order to obtain the control fields needed to drive various physical processes, and define two figures of merit based on the frequency spectrum of the fields: the spectral bandwidth, associated with the maximum frequency present in the field, and the spectral inverse participation ratio (sIPR), related to the signal complexity. We find that the spectral bandwidth is strongly connected to the structure of the control Hamiltonian. In the common scenario where the control is applied locally on any site of the chain, we find that the bandwidth is independent of the state space dimension, for various processes. On the other hand, the complexity of the signal grows with the dimension, due to the increase in the number of energy levels.
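The two spectral figures of merit can be sketched for a toy signal. The sIPR definition below, as the inverse participation ratio of the normalized power spectrum, is a plausible reading of the description above rather than necessarily the paper's exact formula, and the threshold in the bandwidth estimate is an illustrative choice:

```python
import numpy as np

def spectral_ipr(signal):
    """Inverse participation ratio of the normalized power spectrum.
    A single-frequency signal gives sIPR = 1; power spread over many
    frequencies gives a larger value (a more complex signal)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    return 1.0 / np.sum(p ** 2)

def spectral_bandwidth(signal, dt, threshold=1e-3):
    """Highest frequency carrying a non-negligible fraction of power."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    significant = freqs[power / power.max() > threshold]
    return float(significant.max())

t = np.linspace(0.0, 1.0, 512, endpoint=False)
simple = np.sin(2 * np.pi * 8 * t)                        # one frequency
complex_sig = sum(np.sin(2 * np.pi * f * t) for f in (5, 17, 31, 44))
```

Here `spectral_ipr(simple)` is essentially 1 while the four-tone signal gives roughly 4, and `spectral_bandwidth` picks out the highest significant tone, mirroring how the two figures of merit separate how fast a control field oscillates from how complicated it is.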
\section{Introduction} The combinatorial structure treated in this paper is a $2 \rightarrow 1$ directed hypergraph defined as follows. \begin{definition} A \emph{$2 \rightarrow 1$ directed hypergraph} is a pair $H = (V,E)$ where $V$ is a finite set of \emph{vertices} and the set of \emph{edges} $E$ is some subset of the set of all pointed $3$-subsets of $V$. That is, each edge is three distinct elements of $V$ with one marked as special. This special vertex can be thought of as the \emph{head} vertex of the edge while the other two make up the \emph{tail set} of the edge. If $H$ is such that every $3$-subset of V contains at most one edge of $E$, then we call $H$ \emph{oriented}. For a given $H$ we will typically write its vertex and edge sets as $V(H)$ and $E(H)$. We will write an edge as $ab \rightarrow c$ when the underlying $3$-set is $\{a,b,c\}$ and the head vertex is $c$. \end{definition} For simplicity from this point on we will always refer to $2 \rightarrow 1$ directed hypergraphs as just \emph{graphs} or sometimes as \emph{$(2 \rightarrow 1)$-graphs} when needed to avoid confusion. This structure comes up as a particular instance of the model used to represent definite Horn formulas in the study of propositional logic and knowledge representation \cite{angluin1992, russell2002}. Some combinatorial properties of this model have been recently studied by Langlois, Mubayi, Sloan, and Gy. Tur\'{a}n in \cite{langlois2009} and \cite{langlois2010}. In particular, they looked at the extremal numbers for a couple of different small graphs. Before we can discuss their results we will need the following definitions. \begin{definition} Given two graphs $H$ and $G$, we call a function $\phi:V(H) \rightarrow V(G)$ a homomorphism if it preserves the edges of $H$: \[ab \rightarrow c \in E(H) \implies \phi(a)\phi(b) \rightarrow \phi(c) \in E(G).\] We will write $\phi:H \rightarrow G$ to indicate that $\phi$ is a homomorphism. 
\end{definition} \begin{definition} Given a family $\mathcal{F}$ of graphs, we say that a graph $G$ is \emph{$\mathcal{F}$-free} if no injective homomorphism $\phi:F \rightarrow G$ exists for any $F \in \mathcal{F}$. If $\mathcal{F} = \{F\}$ we will write that $G$ is $F$-free. \end{definition} \begin{definition} Given a family $\mathcal{F}$ of graphs, let the \emph{$n$th extremal number} $\text{ex}(n,\mathcal{F})$ denote the maximum number of edges that any $\mathcal{F}$-free graph on $n$ vertices can have. Similarly, let the \emph{$n$th oriented extremal number} $\text{ex}_o(n,\mathcal{F})$ be the maximum number of edges that any $\mathcal{F}$-free oriented graph on $n$ vertices can have. Sometimes we will call the extremal number the \emph{standard} extremal number or refer to the problem of determining the extremal number as the \emph{standard version} of the problem to distinguish these concepts from their oriented counterparts. As before, if $\mathcal{F} = \{F\}$, then we will write $\text{ex}(n,F)$ or $\text{ex}_o(n,F)$ for simplicity. \end{definition} These are often called Tur\'{a}n-type extremal problems after Paul Tur\'{a}n due to his important early results and conjectures concerning forbidden complete $r$-graphs \cite{turan1941, turan1954, turan1961}. Tur\'{a}n problems for uniform hypergraphs make up a large and well-known area of research in combinatorics, and the questions are often surprisingly difficult. Extremal problems like this have also been considered for directed graphs and multigraphs (with bounded multiplicity) in \cite{brown1973} and \cite{brown1969} and for the more general directed multi-hypergraphs in \cite{brown1984}. In \cite{brown1969}, Brown and Harary determined the extremal numbers for several types of specific directed graphs. In \cite{brown1973}, Brown, Erd\H{o}s, and Simonovits determined the general structure of extremal sequences for every forbidden family of digraphs analogous to the Tur\'{a}n graphs for simple graphs. 
The model of directed hypergraphs studied in \cite{brown1984} has $r$-uniform edges in which the vertices of each edge are given a linear ordering.
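The definitions above (edges $ab \rightarrow c$, homomorphisms, and $\mathcal{F}$-freeness) translate directly into a brute-force computational sketch; encoding an edge as a frozen tail set plus a head vertex is an illustrative choice, not taken from the papers cited:

```python
from itertools import permutations

# An edge "ab -> c" is encoded as (frozenset({a, b}), c):
# the unordered tail set {a, b} and the head vertex c.

def is_homomorphism(phi, h_edges, g_edges):
    """phi preserves edges: ab -> c in H implies phi(a)phi(b) -> phi(c) in G."""
    return all(
        (frozenset(phi[v] for v in tail), phi[head]) in g_edges
        for tail, head in h_edges
    )

def is_free(f_vertices, f_edges, g_vertices, g_edges):
    """True iff no injective homomorphism F -> G exists (brute force)."""
    f_list = list(f_vertices)
    for image in permutations(g_vertices, len(f_list)):
        phi = dict(zip(f_list, image))
        if is_homomorphism(phi, f_edges, g_edges):
            return False
    return True

# F is a single edge xy -> z; G contains the edge 12 -> 3.
f_edges = {(frozenset({"x", "y"}), "z")}
g_edges = {(frozenset({1, 2}), 3)}
```

With these definitions, `is_free(["x", "y", "z"], f_edges, [1, 2, 3], g_edges)` is `False` (map x to 1, y to 2, z to 3), while an edgeless graph is trivially $F$-free; the extremal number $\text{ex}(n, F)$ is the maximum of $|E(G)|$ over $F$-free $G$ on $n$ vertices, though brute force is only feasible for tiny $n$.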
SUNRISE, Fla. — The marriage between the Rangers and Martin St. Louis has ended, with several sources confirming the 40-year-old impending free agent winger will not return to Broadway. The Post has been told St. Louis, who suffered through a deep second-half slump that continued through the playoffs, does not intend to retire. It is believed the Devils and Penguins have at least preliminary interest in St. Louis that is reciprocal, though Pittsburgh is believed to be in the market for Blackhawks winger Patrick Sharp. St. Louis, eligible for a one-year, over-35, bonus-laden contract, recorded seven goals over the final 35 games of the season before scoring just once in the Blueshirts’ 19 playoff games as his game disintegrated. His 21 goals tied with Chris Kreider for second on the team behind Rick Nash’s 42. The winger, obtained at the 2014 trade deadline in exchange for Ryan Callahan and two first-round draft picks, recorded 60 points (22-38) in 93 games as a Ranger, adding 22 points (9-13) in 44 playoff matches as an integral part of the 2014 Stanley Cup finalists and the 2015 Presidents’ Trophy winners. Callahan, meanwhile, has recorded 65 points (30-35) in 97 games for Tampa Bay, adding eight points (2-6) in 19 playoff contests for the Lightning, who advanced to the final this year after knocking out the Blueshirts in the seven-game Eastern final. The Rangers would have had a very difficult job fitting St. Louis under the cap even if there had been mutual interest in an encore — which there was not by either party, we’re told, with the Hall of Fame-bound winger apparently unhappy over the way he was used as the playoffs evolved. The club will now have to find a top-nine replacement for him, either internally or from the outside, with Mats Zuccarello and J.T. Miller penciled in as the Blueshirts’ top two right wings.
It's become the stuff of legend — lying deep in the depths of Lake Ontario are the remains of nine Avro Arrow models launched in the mid-1950s. Today a renewed search was announced to find and bring the model planes to a home at the Canada Aviation and Space Museum in Ottawa and the National Air Force Museum of Canada in Trenton, Ont. The Arrow, a sleek jet interceptor developed in Malton, Ont., in the 1950s, had the potential to propel Canada to the forefront in military aviation. When the program was abruptly cancelled in 1959 by Prime Minister John Diefenbaker, more than 30,000 employees lost their jobs — and the planes were ordered to be destroyed. The Arrow was built to intercept Soviet bombers that might have entered North American airspace over the North Pole during the Cold War. (Avro Museum) "Cut up with torches, hammered down with steel balls like they destroy buildings," described a photographer who rented a plane to take footage of the destruction, since media were not allowed in the facility. It's believed that nine three-metre long, or one-eighth scale, models of the Arrow fitted with sensors were strapped onto rockets and fired over the lake. Today, with the help of equipment that assisted the successful Franklin expedition in 2016, the details of a search for those models were outlined. "We're not trying to rewrite the history of what happened to the Avro program, this is a search and ideally recovery," said John Burzynski, the president and CEO of Osisko Mining — the man who will lead the search team. April 1959: all five Arrows are cut to pieces. Nobody in Ottawa will own up to the decision. Burzynski said the idea has been a work in progress for the last year and a half and his group has recently acquired all the necessary permits to conduct the search and possible recovery.
The mission, a collaborative effort by several private companies in assistance with the Canadian Coast Guard and the Royal Canadian Military Institute, will begin next week. John Burzynski will lead the search team, which starts work in Lake Ontario next week. (Makda Ghebreslassie/CBC) A Newfoundland company, Kraken Sonar Systems, was awarded the $500,000 contract which will involve deploying its state-of-the-art ThunderFish underwater vehicle and AquaPix sonar system to capture high-quality images of the lake bed. This won't be the first search for the models, but Burzynski hopes it will be the first successful search. Theories about location abound "There's a lot of different stories about where we think they could be," said David Shea, vice-president of engineering at Kraken, the company that created the sonar equipment to be used. David Shea, with Kraken Sonar Systems, will provide an unmanned, untethered automated underwater vehicle to assist in the search. (Makda Ghebreslassie/CBC) They know the models took off from Point Petre in Prince Edward County, more than 200 kilometres away from Toronto. The search grid covers water ranging in depth from five metres closer to shore and 100 metres farther out in the lake, Shea said. The mission will run the underwater sonar equipment for eight hours a day, after which the data will be downloaded and analyzed by the team of scientists, which will also include archeologists. They expect to search an area about half the size of Vancouver, or 64 square kilometres. 
"Thunderfish is in the water, no tethers (it's worth more than $2 million so you don't want to lose it) #cbcnl" —@PeterCBC A 1980 CBC report says after the destruction of the existing Arrow planes — created based on the models now in Lake Ontario — pieces were sold to a Hamilton junk dealer, for 6.5 cents per pound.
\section*{References}} \usepackage{graphicx} \usepackage{amsmath} \usepackage{natbib} \usepackage{amssymb} \usepackage{amsthm} \usepackage{lineno} \usepackage{subfig} \usepackage{enumerate} \usepackage{fullpage} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{xcolor} \usepackage[colorinlistoftodos]{todonotes} \usepackage{hyperref} \biboptions{sort,compress} \usepackage{xcolor} \newcommand*\patchAmsMathEnvironmentForLineno[1]{% \expandafter\let\csname old#1\expandafter\endcsname\csname #1\endcsname \expandafter\let\csname oldend#1\expandafter\endcsname\csname end#1\endcsname \renewenvironment{#1}% {\linenomath\csname old#1\endcsname}% {\csname oldend#1\endcsname\endlinenomath} \newcommand*\patchBothAmsMathEnvironmentsForLineno[1]{% \patchAmsMathEnvironmentForLineno{#1}% \patchAmsMathEnvironmentForLineno{#1*}}% \AtBeginDocument{% \patchBothAmsMathEnvironmentsForLineno{equation}% \patchBothAmsMathEnvironmentsForLineno{align}% \patchBothAmsMathEnvironmentsForLineno{flalign}% \patchBothAmsMathEnvironmentsForLineno{alignat}% \patchBothAmsMathEnvironmentsForLineno{gather}% \patchBothAmsMathEnvironmentsForLineno{multline}% } \usepackage{color,soul} \usepackage{enumitem} \usepackage{mathtools} \usepackage{booktabs} \definecolor{lightblue}{rgb}{.90,.95,1} \definecolor{darkgreen}{rgb}{0,.5,0.5} \newcommand\assignment[1]{\todo[inline,color=red!10,size=\normalsize]{#1}} \newcommand\guide[2]{\sethlcolor{lightblue}\hl{#2}\todo[color=lightblue,size=\tiny]{#1}} \newcommand\guidenoa[1]{\sethlcolor{lightblue}\hl{#1}} \definecolor{lightgreen}{rgb}{.90,1,0.90} \newcommand\bcon[1]{\todo[color=lightgreen,size=\tiny]{$\downarrow\downarrow\downarrow$ #1}} \newcommand\econ[1]{\todo[color=lightgreen,size=\tiny]{$\uparrow\uparrow\uparrow$ #1}} \newcommand\commofA[2]{\todo[color=red!50,size=\small,inline]{{\bf \color{blue} {#1}'s comments}: #2}} \newcommand\commofB[2]{\todo[color=blue!50,size=\small,inline]{{\bf \color{blue} {#1}'s comments}: #2}} 
\newcommand\commofC[2]{\todo[color=purple!50,size=\small,inline]{{\bf \color{blue} {#1}'s comments}: #2}} \newcommand{\boldsymbol{\tau}}{\boldsymbol{\tau}} \newcommand{\tilde{\boldsymbol{\tau}}^{rans}}{\tilde{\boldsymbol{\tau}}^{rans}} \newcommand{\bs}[1]{\boldsymbol{#1}} \newcommand{\textbf{[ref]}}{\textbf{[ref]}} \usepackage{changes} \definechangesauthor[name={Reviewer 1}, color = red]{Editor} \usepackage{array} \newcolumntype{P}[1]{>{\centering\arraybackslash}m{#1}} \newcommand{\delete}[1]{\textcolor{gray}{\sout{#1}}} \graphicspath{ {./figs/} } \linespread{1.5} \journal{Flow Turbulence Combust} \begin{document} \begin{frontmatter} \clearpage \title{A~Priori Assessment of Prediction Confidence for Data-Driven Turbulence Modeling} \author[vt]{Jin-Long Wu} \ead{jinlong@vt.edu} \author[vt]{Jian-Xun Wang} \author[vt]{Heng Xiao\corref{corxh}} \cortext[corxh]{Corresponding author. Tel: +1 540 231 0926} \ead{hengxiao@vt.edu} \author[snl]{Julia Ling} \address[vt]{Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA 24060, USA} \address[snl]{Thermal/Fluid Science and Engineering, Sandia National Laboratories, Livermore, California 94551, USA} \begin{abstract} Although Reynolds-Averaged Navier--Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. However, the prediction performance of data-driven models inevitably depends on the choices of training flows. 
This work aims to identify a quantitative measure for \textit{a priori} estimation of prediction confidence in data-driven turbulence modeling. This measure represents the distance in feature space between the training flows and the flow to be predicted. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique are used as metrics to quantify the distance between flow data sets in feature space. To examine the relationship between these two extrapolation metrics and the machine learning model prediction performance, the flow over periodic hills at $Re=10595$ is used as the test set and seven flows with different configurations are individually used as training sets. The results show that the prediction error of the Reynolds stress anisotropy is positively correlated with both the Mahalanobis distance and the KDE distance, demonstrating that both extrapolation metrics can be used to estimate the prediction confidence \textit{a priori}. A quantitative comparison using correlation coefficients shows that the Mahalanobis distance is less accurate in estimating the prediction confidence than the KDE distance. The extrapolation metrics introduced in this work and the corresponding analysis provide an approach to aid in the choice of data source and to assess the prediction performance for data-driven turbulence modeling.
\end{abstract}

\begin{keyword}
turbulence modeling\sep Mahalanobis distance\sep kernel density estimation\sep random forest regression\sep extrapolation\sep machine learning
\end{keyword}

\end{frontmatter}

\section{Introduction}
\label{sec:intro}

Even with the rapid growth of available computational resources, numerical models based on Reynolds-Averaged Navier--Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows.
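The two extrapolation metrics named in the abstract can be illustrated with a short sketch (a minimal, self-contained illustration assuming a Gaussian kernel; the function names are hypothetical and this is not the authors' implementation):

```python
import numpy as np

def mahalanobis_distance(x, train):
    """Distance of a query feature vector x from the mean of the
    training-flow feature distribution, scaled by its covariance."""
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def kde_density(x, train, bandwidth=0.5):
    """Gaussian kernel density estimate of the training distribution
    at x; a low density suggests extrapolation away from the data."""
    n, d = train.shape
    u = (train - x) / bandwidth
    kernels = np.exp(-0.5 * np.sum(u * u, axis=1))
    return float(kernels.sum() / (n * (2.0 * np.pi) ** (d / 2) * bandwidth ** d))
```

A query point near the training cloud gets a small Mahalanobis distance and a high KDE density; a far point gets the opposite, which is the sense in which both serve as a priori extrapolation metrics.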
However, the development of turbulence models has stagnated--the most widely used general-purpose turbulence models (e.g., $k$-$\varepsilon$ models, $k$-$\omega$ models, and the S--A model) were all developed decades ago. These models are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature~\cite{Craft}. This lack of accuracy in complex flows has diminished the utility of RANS as a predictive simulation tool for use in engineering design, analysis, optimization, and reliability assessments. Recently, data-driven turbulence modeling has emerged as a promising alternative to traditional modeling approaches. While data-driven methods come in many formulations and with different assumptions, the basic idea is that a model or correction term is determined based on data Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
A large number of Indian workers in Saudi Arabia who have lost their jobs and cannot even buy food due to severe financial hardship will be brought back home, External Affairs Minister Sushma Swaraj said, asserting that not one of them will go hungry. In a statement in Parliament amid concerns by members in Lok Sabha and Rajya Sabha, Swaraj said her deputy V K Singh is leaving for Saudi Arabia to oversee the evacuation process. She said the Indian embassy in the Gulf nation was running five camps to feed the affected people. “Not one worker of ours will go hungry. This is my assurance to the country through Parliament… We will bring all of them back to India,” Swaraj said. Issues like logistics and modalities of a possible repatriation of the workers who want to return to India will be worked out during Singh’s visit. Official sources said approximately 10,000 Indian workers have been affected by the economic slowdown in the Gulf and the situation was “fluid and dynamic”. They said the situation varied from company to company. Sources said 3,172 Indian workers in Riyadh have not been paid their salary dues for several months but are getting regular rations. Separately, 2,450 Indian workers belonging to the Saudi Oger Company are housed in five camps in Jeddah, Mecca and Taif. Since July 25, the company had stopped providing meals to the workers besides defaulting on their salaries, the sources said. The Indian Consulate in Jeddah, with the assistance of the diaspora, has provided rations to the workers which should be sufficient for the next 8–10 days, they said. The government, Swaraj said, was in touch with the foreign and labour offices in Saudi Arabia to ensure early evacuation of affected Indians.
Swaraj noted that the law there does not permit an emergency exit visa without a no-objection certificate (NOC) from the employers who, she said, have shut their factories and left the country, leaving these employees stranded. The government has requested the Saudi authorities to give the workers exit visas without an NOC from employers, and has also urged them to clear the dues of workers who have not been paid for months, whenever they settle the accounts with the companies concerned.
Summarize the preceding context in 7 sentences. Do not try to create questions or answers for your summarization.
A Hezbollah fighter stands at attention in an orange field near the town of Naqura on the Lebanese-Israeli border on April 20, 2017 (AFP Photo/JOSEPH EID) Naqura (Lebanon) (AFP) - Lebanon's Hezbollah sought Thursday to show that Israel is building up defences in anticipation of another conflict, after a string of statements from Israeli officials warning of a potential confrontation. The powerful Shiite group, which fought a devastating war with the Jewish state in 2006, brought dozens of journalists on a rare and highly-choreographed trip to the demarcation line between Lebanon and Israel. "This tour is to show the defensive measures that the enemy is taking," said Hezbollah spokesman Mohamed Afif, on a hilltop along the so-called Blue Line. A military commander identified as Haj Ihab, dressed in digital camouflage and sunglasses, said the Israeli army was erecting earth berms up to 10 metres (30 feet) high, as well as reinforcing a military position near the Israeli border town of Hanita. "Because their position is directly by the border and the enemy fears that the resistance will advance on it, they have constructed a cliff and additional earth berms and put up concrete blocks," he said. "The Israeli enemy is undertaking these fortifications and building these obstacles in fear of an advance" by Hezbollah, he said. As he spoke, an Israeli military patrol of two armoured cars and a white bus wended their way along a road behind a fence, as two yellow bulldozers moved earth nearby. There has been rising speculation about the possibility of a new war between Israel and Hezbollah, a powerful Lebanese paramilitary organisation, more than a decade after their last direct confrontation. The 34-day conflict in 2006 led to the deaths of 1,200 people in Lebanon, mainly civilians, and 160 Israelis, mostly soldiers. 
Israel's army chief warned recently that in a "future war, there will be a clear address: the state of Lebanon and the terror groups operating in its territory and under its authority." There have been periodic skirmishes along the UN-monitored demarcation line between Israel and Lebanon, longtime adversaries which are technically still at war with each other. - 'We don't fear war' - Israel withdrew its forces from southern Lebanon in 2000, after a 22-year occupation. Thursday's tour sought to paint Israel as afraid of a new conflict, while depicting Hezbollah as ready for war despite having committed thousands of its fighters to bolstering Syria's President Bashar al-Assad. Journalists were taken from the southern Lebanese town of Naqura, with Hezbollah fighters in full military regalia stationed along the route alongside the group's yellow flag -- despite an official ban on any armed paramilitary presence in southern Lebanon. Faces smeared with black and green camouflage, they stood silently holding guns and RPG launchers. On the demarcation line, officially patrolled by the Lebanese army and the UN peacekeeping force known as UNIFIL, there was little sign of tension. The scents of wild thyme and yellow gorse mingled in the air, the landscape peaceful beyond the noise produced by the sudden scrum of visitors. While eager to discuss the measures they say Israel has been taking, Hezbollah officials refused to be drawn on their own preparations for war, beyond insisting on their ability to fight if one comes.
Summarize the preceding context in 4 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction} The noise assessment in physical measurements time series is an important measure of its statistical characteristics and overall quality. Among the most effective approaches to analyzing measurement noise (scatter) is Allan variance (AVAR), which was originally introduced to estimate the frequency standards instability \cite{Allan1966}. Later, AVAR has proved to be a powerful statistical tool for time series analysis, particularly, for the analysis of geodetic and astronomical observations. AVAR has been used for quality assessment and improvement of the celestial reference frame (CRF) \cite{Feissel2000a,Gontier2001,Feissel2003a,Sokolova2007,Malkin2008j,LeBail2010a,LeBail2014a,Malkin2013b,Malkin2015b}, the time series analysis of station position and baseline length \cite{Malkin2001n,Roberts2002,LeBail2006,Feissel2007,LeBail2007,Gorshkov2012b,Malkin2013b,Khelifa2014}, and studies on the Earth rotation and geodynamics \cite{Feissel1980,Gambis2002,Feissel2006a,LeBail2012,Malkin2013b,Bizouard2014a}. AVAR estimates of noise characteristics have important advantages over classical variance estimates such as standard deviation (STD) and weighted root-mean-square (WRMS) residual. The latter cannot distinguish the different significant types of noise, which is important in several astro-geodetic tasks. Another advantage of AVAR is that it is practically independent of the long-term systematic components in the investigated time series. AVAR can also be used to investigate the spectral characteristics of the signal \cite{Allan1981,Allan1987} that is actively used for analysis of astrometric and geodetic data \cite{Feissel2000a,Feissel2003a,Feissel2006a,Feissel2007}. However, the application of original AVAR to the time series analysis of astro-geodetic measurements may not yield satisfactory results. Unlike clock comparison, geodetic and astrometric measurements mostly consist of data points with unequal uncertainties. 
This requires a proper weighting of the measurements during the data analysis. Moreover, one often deals with multi-dimensional quantities in geodesy and astronomy. For example, the station coordinates $X$, $Y$, and $Z$ form the 3D vector of a geocentric station position (although this example is more complicated because the vertical and horizontal station displacements caused by geophysical reasons may have different statistical characteristics, including AVAR estimates, see \cite{Malkin2001n,Malkin2013c} and references therein). The coordinates of a celestial object, right ascension and declination, also form a 2D position vector. To analyze such data types, AVAR modifications were proposed in \cite{Malkin2008j}, including weighted AVAR (WAVAR), multi-dimensional AVAR (MAVAR), and weighted multi-dimensional AVAR (WMAVAR). These modifications should be distinguished from the classical modified AVAR introduced in \cite{Allan1981}.

The rest of the paper is organized as follows. Section~\ref{sect:overview} introduces AVAR and its modifications, and gives several practical illustrations of their basic features. In Section~\ref{sect:results}, a brief overview is provided of the works that employ AVAR in geodesy and astrometry, and basic results obtained with the AVAR technique are presented. Additional details and discussion on the use of AVAR in space geodesy and astrometry can be found in \cite{LeBail2004t,Malkin2011c,Malkin2013b}.

\section{Overview of AVAR and its modifications}
\label{sect:overview}

The classical time-domain AVAR applied to the time series $y_i, i=1, \dots, n$ is given by \cite{Allan1966}
\begin{equation}
AVAR = \frac{1}{2(n-1)}\sum_{i=1}^{n-1}(y_i-y_{i+1})^2\,.
\label{eq:AVAR}
\end{equation}
Allan deviation ADEV = $\sqrt{\rm{AVAR}}$ is used as a noise characteristic in many data analysis applications. Both AVAR and ADEV estimates will be used throughout the paper depending on the context.
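The classical AVAR formula translates directly into code; a minimal sketch in plain Python (the helper names `avar` and `adev` are illustrative, not from the paper):

```python
import math

def avar(y):
    """Classical Allan variance of the series y_1..y_n:
    AVAR = 1/(2(n-1)) * sum_{i=1}^{n-1} (y_i - y_{i+1})^2."""
    n = len(y)
    if n < 2:
        raise ValueError("at least two samples are required")
    return sum((y[i] - y[i + 1]) ** 2 for i in range(n - 1)) / (2.0 * (n - 1))

def adev(y):
    """Allan deviation, ADEV = sqrt(AVAR)."""
    return math.sqrt(avar(y))
```

A constant series gives zero, and a slow drift contributes only through its per-sample increment, reflecting the insensitivity to long-term systematic components noted above.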
In metrology, the analyst is normally interested in computing not only the parameter under investigation but also its uncertainty, as a measure of the reliability of the obtained result. A method for estimating the AVAR confidence interval was proposed in \cite{Howe1981} and later applied in \cite{LeBail2006} to the analysis of geodetic time series.
Summarize the preceding context in 4 sentences. Do not try to create questions or answers for your summarization.
Hasan Mustafa Following a trend among coalescing opposition brigades in Syria, a new Islamic muhajireen formation has been declared, named Jamaat Ahadun Ahad (Group of The One and Only, in reference to the strict monotheism in Islam). Extremely little is known about the group, aside from whatever bits of information can be gleaned from social media accounts. Jamaat Ahadun Ahad formed some time back, but has only been announced recently.
https://twitter.com/Fulan2weet/status/492970910404870144
Figure 1: The flag of Jamaat Ahadun Ahad
What is known is that Jamaat Ahadun Ahad is a smaller jihadist group consisting of several anonymous and independent muhajireen (foreign fighter) brigades. A number of Ansar (local Syrian) brigades have also joined the formation. Most of the constituent groups are unknown and not affiliated with either Jabhat al-Nusra or the Islamic State, or even the recently formed Jabhat Ansar al-Din. However, they do share the ideological goal of these groups, which includes “making the word of Allah the highest” (instituting Islamic governance). A fighter with the group has also stated that they are neutral in regards to the infighting plaguing the Syrian jihad. It must be noted that foreign fighter battalions operating in Latakia had always shared a closer relationship with Jabhat al-Nusra than with the Islamic State. Jamaat Ahadun Ahad is active mostly in the Latakia countryside, and its constituent brigades were involved in the 2014 Al-Anfal Offensive in Northern Latakia. Indeed, the groups of Jamaat Ahadun Ahad were among the last to leave the town of Kessab when it fell to the Syrian Arab Army. The overall commander of Jamaat Ahadun Ahad is a man named Al Bara Shishani, further highlighting the prominent role Chechen foreign fighters have played in this conflict. It is unclear whether or not Jamaat Ahadun Ahad shares any sort of relation with the Caucasus Emirate, but it is unlikely as the group is not solely a Northern Caucasian formation.
This group shares a number of similarities with Muslim al-Shishani’s Junud al-Sham, such as the fact that both are led by a Chechen, both operate heavily in Latakia, and both have attempted to stay neutral and independent in regards to the jihadist infighting. As a mostly foreign fighter brigade, Jamaat Ahadun Ahad boasts many Chechens, Turks, Arabs, Europeans, and even several former members of the Taliban. Based on social media activity on Twitter, many of the group’s supporters are Turkish. On July 26th, it was announced on social media that Jamaat Ahadun Ahad would be launching its first combat operation, most likely against Syrian regime targets, titled “Laylat-ul-Qadr Operation.” The operation is named after Laylat-ul-Qadr (The Night of Power), the holiest night in Islam, which falls on an unknown date sometime in the last ten days of Ramadan. Jamaat Ahadun Ahad has also set up a Twitter account where they tweet in Arabic, English, Russian and Turkish.
https://twitter.com/Fulan2weet/status/493101760647790592
Figure 2: Soldiers of Jamaat Ahadun Ahad waiting for the Laylat-ul-Qadr Operation. The forested terrain in the background indicates this group is active in the coastal region.
Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
\newcommand{\subsect}[1]{\subsection{#1}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\font\mbn=msbm10 scaled \magstep1
\font\mbs=msbm7 scaled \magstep1
\font\mbss=msbm5 scaled \magstep1
\newfam\mbff
\textfont\mbff=\mbn \scriptfont\mbff=\mbs \scriptscriptfont\mbff=\mbss
% The name of this macro was lost in extraction; it is reconstructed here
% because the text below uses {\mbf R}, {\mbf N}, etc.
\def\mbf{\fam\mbff}
\newcommand{\cH}{{\mathcal H}}
\newcommand{\cP}{{\mathcal P}}
\newtheorem{Th}{Theorem}[section]
\newtheorem{Lm}[Th]{Lemma}
\newtheorem{C}[Th]{Corollary}
\newtheorem{D}[Th]{Definition}
\newtheorem{Proposition}[Th]{Proposition}
\newtheorem{R}[Th]{Remark}
\newtheorem{Problem}[Th]{Problem}
\newtheorem{E}[Th]{Example}
\newtheorem*{P1}{Problem 1}
\newtheorem*{P2}{Problem 2}
\newtheorem*{P3}{Problem 3}

\begin{document}

\title[On Properties of Geometric Preduals of ${\mathbf C^{k,\omega}}$ Spaces]{On Properties of Geometric Preduals of ${\mathbf C^{k,\omega}}$ Spaces}
\author{Alexander Brudnyi}
\address{Department of Mathematics and Statistics\newline
\hspace*{1em} University of Calgary\newline
\hspace*{1em} Calgary, Alberta\newline
\hspace*{1em} T2N 1N4}
\email{abrudnyi@ucalgary.ca}
\keywords{Predual space, Whitney problems, Finiteness Principle, linear extension operator, approximation property, dual space, Jackson operator, weak$^*$ topology, weak Markov set}
\subjclass[2010]{Primary 46B20; Secondary 46E15}
\thanks{Research supported in part by NSERC}
\date{}
\begin{abstract}
Let $C_b^{k,\omega}({\mbf R}^n)$ be the Banach space of $C^k$ functions on ${\mbf R}^n$ bounded together with all derivatives of order $\le k$ and with derivatives of order $k$ having moduli of continuity majorated by $c\cdot\omega$, $c\in{\mbf R}_+$, for some
$\omega\in C({\mbf R}_+)$. Let $C_b^{k,\omega}(S):=C_b^{k,\omega}({\mbf R}^n)|_S$ be the trace space to a closed subset $S\subset{\mbf R}^n$. The geometric predual $G_b^{k,\omega}(S)$ of $C_b^{k,\omega}(S)$ is the minimal closed subspace of the dual $\bigl(C_b^{k,\omega}({\mbf R}^n)\bigr)^*$ containing evaluation functionals of points in $S$. We study geometric properties of spaces $G_b^{k,\omega}(S)$ and their relations to the classical Whitney problems on the characterization of trace spaces of $C^k$ functions on ${\mbf R}^n$. \end{abstract} \maketitle \section{Formulation of Main Results} \subsection{Geometric Preduals of ${\mathbf C^{k,\omega}}$ Spaces} In what follows we use the standard notation of Differential Analysis. In particular, $\alpha=(\alpha_1,\dots,\alpha_n)\in \mathbb Z^n_+$ denotes a multi-index and $|\alpha|:=\sum^n_{i=1}\alpha_i$. Also, for $x=(x_1,\dots, x_n)\in\mathbb R^n$, \begin{equation}\label{eq1} x^\alpha:=\prod^n_{i=1}x^{\alpha_i}_i \ \ \text{ and} \ \ D^\alpha:=\prod^n_{i=1}D^{\alpha_i}_i,\quad {\rm where}\quad D_i:=\frac{\partial}{\partial x_i}. \end{equation} Let $\omega$ be a nonnegative function on $(0,\infty)$ (referred to as {\em modulus of continuity}) satisfying the following conditions: \begin{enumerate} \item[(i)] $\omega(t)$ and $\displaystyle \frac {t}{\omega( t)}$ are nondecreasing functions on $(0,\infty)$;\medskip \item[(ii)] $\displaystyle \lim_{t\rightarrow 0^+}\omega(t)=0$. 
\end{enumerate} \begin{D}\label{def1} $C^{k,\omega}_b(\mathbb R^n)$ is the Banach subspace of functions $f\in C^k(\mathbb R^n)$ with norm \begin{equation}\label{eq3} \|f\|_{C^{k,\omega}_b(\mathbb R^n)}:=\max\left(\|f\|_{C^k_b(\mathbb R^n)}, |f|_{C^{k,\omega}_b(\mathbb R^n)}\right) , \end{equation} where \begin{equation}\label{eq4} \|f\|_{C^k_b(\mathbb R^n)}:=\max_{|\alpha|\le k}\left\{\sup_{x\in\mathbb R^n}|D^\alpha f(x)|\right\} \end{equation} and \begin{equation}\label{eq5} |f|_{C^{k,\omega}_b(\mathbb R^n)}:=\max_{|\alpha|=k}\left\{\sup_{x,y\in\mathbb R^n,\, x\ne y}\frac{|D^\alpha f(x)-D^\alpha f(y)|}{\omega(\|x-y\|)}\right\}. \end{equation} Here $\|\cdot\|$ is the Euclidean norm of $\mathbb R^n$. \end{D} If $S\subset\mathbb R^n$ is a closed subset, then by $C_b^{k,\omega}(S)$ we denote the trace space of functions $g\in C_b^{k,\omega}(\mathbb R^n)|_{S}$ equipped with the quotient norm \begin{equation}\label{eq1.5} \|g\|_{C_b^{k,\omega}(S)}:=\inf\{\|\tilde g\|_{C_b^{k,\omega}(\mathbb R^n)}\, :\, \tilde g\in C_b^{k,\omega}({\mbf R}^n),\ \tilde g|_{S}=g\}. \end{equation} Let $\bigl(C_b^{k,\omega}(\mathbb R^n)\bigr)^*$ be the dual of $C_b^{k,\omega}(\mathbb R^n)$ Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
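A classical example satisfying conditions (i) and (ii), not spelled out in the excerpt above but standard, is the Hölder modulus $\omega(t)=t^\alpha$ with $0<\alpha\le 1$:

```latex
% Example (for illustration): \omega(t) = t^\alpha, 0 < \alpha \le 1.
% Condition (i): t^\alpha is nondecreasing, and t/\omega(t) = t^{1-\alpha}
% is nondecreasing since 1-\alpha \ge 0.
% Condition (ii): \lim_{t\to 0^+} t^\alpha = 0.
% With this choice the seminorm |f|_{C^{k,\omega}_b} becomes the Hölder seminorm
\[
  |f|_{C^{k,\omega}_b(\mathbb R^n)}
  = \max_{|\beta|=k}\,\sup_{x,y\in\mathbb R^n,\, x\ne y}
    \frac{|D^{\beta} f(x)-D^{\beta} f(y)|}{\|x-y\|^{\alpha}},
\]
% so that C^{k,\omega}_b(\mathbb R^n) is the classical Hölder space.
```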
\section{Introduction}

Many problems, particularly in combinatorics, reduce to asking whether some graph with a given property exists, or alternatively, asking how many such non-isomorphic graphs exist. Such graph search and graph enumeration problems are notoriously difficult, in no small part due to the extremely large number of symmetries in graphs. In practical problem solving, it is often advantageous to eliminate these symmetries, which arise naturally due to graph isomorphism: typically, if a graph $G$ is a solution then so is any other graph $G'$ that is isomorphic to $G$.

General approaches to graph search problems typically involve either: \emph{generate and test}, explicitly enumerating all (non-isomorphic) graphs and checking each for the given property, or \emph{constrain and generate}, encoding the problem for some general-purpose discrete satisfiability solver (e.g.\ SAT, integer programming, constraint programming), which does the enumeration implicitly.

In the explicit approach, one typically iterates, repeatedly applying an extend-and-reduce approach: first \emph{extend} the set of all non-isomorphic graphs with $n$ vertices, in all possible ways, to graphs with $n+1$ vertices; and then \emph{reduce} the extensions to their non-isomorphic (canonical) representatives.

In the constraint-based approach, one typically first encodes the problem and then applies a constraint solver in order to produce solutions. The (unknown) graph is represented in terms of Boolean variables describing it as an adjacency matrix $A$. The encoding is a conjunction of constraints that constitute a model, $\varphi_A$, such that any satisfying assignment to $\varphi_A$ is a solution to the graph search problem. Typically, symmetry breaking constraints~\cite{Crawford96,CodishMPS13} are added to the model to reduce the number of isomorphic solutions, while maintaining the correctness of the model.
It remains unknown whether a polynomial-time algorithm exists to decide the graph isomorphism problem. Nevertheless, finding good graph isomorphism algorithms is critical when exploring graph search and enumeration problems. Recently an algorithm was published by \citeN{Babai15} which solves the graph isomorphism problem in time $O\left(\exp\left(\log^c n\right)\right)$, for some constant $c>1$. Nevertheless, top-of-the-line graph isomorphism tools use different methods, which are, in practice, faster.

\citeN{nauty} introduces an algorithm for graph canonization, and its implementation, called \texttt{nauty}\ (which stands for \emph{no automorphisms, yes?}), is described in \cite{nauty_impl}. In contrast to earlier works, where the canonical representation of a graph was typically defined to be the smallest graph isomorphic to it (in the lexicographic order), \texttt{nauty}\ introduced a notion which takes structural properties of the graph into account. For details on how \texttt{nauty}~defines canonicity and for the inner workings of the \texttt{nauty}\ algorithm see~\cite{nauty,nauty_impl,hartke_nauty,nautyII}. In recent years \texttt{nauty}~has gained a great deal of popularity and success. Other, similar tools are \textsf{bliss}~\cite{bliss} and \textsf{saucy}~\cite{saucy}.

The \texttt{nauty}\ graph automorphism tool consists of two main components: (1) a C library, \texttt{nauty}, which may be linked to at runtime, that contains functions applicable to finding the canonical labeling of a graph, and (2) a collection of applications, \texttt{gtools}, that implement an assortment of common tasks that \texttt{nauty}\ is typically applied to.
Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
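The "smallest isomorphic graph in the lexicographic order" notion of canonicity mentioned above can be made concrete with a brute-force sketch (exponential in the number of vertices and only viable for tiny graphs, which is exactly why structure-aware tools like \texttt{nauty} matter; the names here are illustrative):

```python
from itertools import permutations

def canonical_form(n, edges):
    """Lexicographically smallest relabeling of an undirected graph on
    vertices 0..n-1: try every vertex permutation and keep the smallest
    sorted edge tuple. Brute force over n! permutations."""
    edge_set = {frozenset(e) for e in edges}
    best = None
    for perm in permutations(range(n)):
        relabeled = tuple(sorted(tuple(sorted((perm[u], perm[v])))
                                 for u, v in edge_set))
        if best is None or relabeled < best:
            best = relabeled
    return best

def isomorphic(n, edges1, edges2):
    """Two graphs on n vertices are isomorphic iff their canonical forms agree."""
    return canonical_form(n, edges1) == canonical_form(n, edges2)
```

Reducing a set of generated graphs to those with distinct canonical forms is the "reduce" step of the explicit extend-and-reduce approach.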
Features:
- Solid Forged Aluminum
- Mil-Spec Dimensions
- Standard, Pre-Cut Magazine Well
- Receiver Extension & Grip Screw Hole Drilled & Threaded
- In-The-White or Black Anodized
- Not an FFL Item - Requires Machining to Complete

The AR-STONER™ 80% Lower Receiver is forged from 7075-T6 aluminum for a precise, high-quality build. 80% Lower Receivers are available in-the-white or anodized black and require machining by the user to complete. Since final machining is needed, no FFL is required for purchase. The operations required to complete the 80% Lower Receiver include:
- Milling the space for the fire control group
- Drilling the Trigger Pin hole
- Drilling the Hammer Pin hole
- Milling the Trigger slot
- Drilling the Safety Selector hole (safety selector detent hole is already drilled)

Note: No milling/drilling jig, drill bits or instructions included. Significant machining will be required for completion.
Summarize the preceding context in 4 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction}\label{introduction} \setcounter{equation}{0} \numberwithin{equation}{section} In the present paper, we consider the equation \begin{equation}\label{1.1} -y''(x)+q(x)y(x )=f(x),\quad x\in \mathbb R \end{equation} where $f\in L_p^{\loc}(\mathbb R)$, $p\in[1,\infty)$ and \begin{equation}\label{1.2} 0\le q \in L_1^{\loc}(\mathbb R). \end{equation} Our general goal is to determine a space frame within which equation \eqref{1.1} always has a unique stable solution. To state the problem in a more precise way, let us fix two positive continuous functions $\mu(x)$ and $\theta(x),$ $x\in\mathbb R,$ a number $p\in[1,\infty)$, and introduce the spaces $L_p(\mathbb R,\mu)$ and $L_p(\mathbb R,\theta):$ \begin{align} &L_p(\mathbb R,\mu)=\left\{f\in L_p^{\loc}(\mathbb R):\|f\|_{L_p(\mathbb R ,\mu)}^p=\int_{-\infty}^\infty|\mu(x)f(x)|^pdx<\infty\right\}\label{1.3}\\ &L_p(\mathbb R,\theta)=\left\{f\in L_p^{\loc}(\mathbb R):\|f\|_{L_p(\mathbb R ,\theta)}^p=\int_{-\infty}^\infty|\theta(x)f(x)|^pdx<\infty\right\}.\label{1.4} \end{align} For brevity, below we write $L_{p,\mu}$ and $L_{p,\theta},$ \ $\|\cdot\|_{p,\mu}$ and $\|\cdot\|_{p,\theta}$, instead of $L_p(\mathbb R,\mu),$ $L_p(\mathbb R,\theta)$ and $\|\cdot\|_{L_p(\mathbb R,\mu)}$, $\|\cdot\|_{L_p(\mathbb R,\theta)},$ respectively (for $\mu=1$ we use the standard notation $L_p$ $(L_p:=L_p(\mathbb R))$ and $\|\cdot\|_p$ $(\|\cdot\|_p:=\|\cdot\|_{L_p}).$ In addition, below by a solution of \eqref{1.1} we understand any function $y,$ absolutely continuous together with its derivative and satisfying equality \eqref{1.1} almost everywhere on $\mathbb R$. 
Let us introduce the following main definition (see \cite[Ch.5, \S50-51]{12}):
\begin{defn}\label{defn1.1}
We say that the spaces $L_{p,\mu}$ and $L_{p,\theta}$ make a pair $\{L_{p,\mu},L_{p,\theta}\}$ admissible for equation \eqref{1.1} if the following requirements hold:

I) for every function $f\in L_{p,\theta}$ there exists a unique solution $y\in L_{p,\mu}$ of \eqref{1.1};

II) there is a constant $c(p)\in (0,\infty)$ such that regardless of the choice of a function $f\in L_{p,\theta}$ the solution $y\in L_{p,\mu}$ of \eqref{1.1}
Summarize the preceding context in 8 sentences. Do not try to create questions or answers for your summarization.
UPDATED 8:20 a.m. Sunday | Denny's Doughnuts and Bakery posted an update Sunday on its Facebook page about the success of its #sheetcaking. "Thanks to those who stopped by yesterday and picked up a cake. We sold 132 cakes and you raised $1,077.35 for Boys & Girls Club of Bloomington-Normal. A huge shout out to our awesome decorators, Anne Nicholl and Brooklynn Reed for coming in before dawn and getting things rolling. Thank you all! Original story: A local bakery is heeding comedian Tina Fey’s advice to turn their angst about white supremacists into a love of sheet cakes. Yes, sheet cakes. During an appearance on “Weekend Update” Thursday night on NBC, Fey responded to last weekend’s violence in Charlottesville. Instead of engaging in the streets with white supremacists, Fey had some less traditional advice. "A lot of us are feeling anxious and are asking ourselves, 'What can I do? I'm just one person,' so I would urge people this Saturday, instead of participating in the screaming matches and potential violence, find a local business to support – maybe a Jewish-run bakery or an African-American-run bakery – order a cake with the American flag on it, and just eat it,” Fey said. Denny’s Doughnuts and Bakery responded with a Facebook post Friday announcing an American flag sheet cake special, going on sale Saturday morning. “It's official. TOMORROW, these sheet cakes are only $7.95 each! Limited quantity. First come, first serve,” Denny’s said in its Facebook post Summarize the preceding context in 7 sentences. Do not try to create questions or answers for your summarization.
- A pair of parents from Maryland is under fire for pulling pranks on their children and posting them onto their YouTube page. After a recent video titled “INVISIBLE INK PRANK!” was posted on the DaddyOFive YouTube channel on April 12, many FOX 5 viewers have contacted our newsroom saying the parents’ antics are child abuse and child protective services should step in. In the video, the parents play a practical joke on their son, Cody, which ends up leaving him crying hysterically during the stunt. His mother explains at the beginning of the six-minute video that their son had previously spilled ink on the carpet in their home, and to prank their son, they would stage another spill on a bedroom carpet using disappearing ink -- placing the blame on him in a profanity-laced and screaming faux tirade. Many people were upset after watching the video and reached out to the news media as well as posting their outrage in the comment section of the YouTube video. The YouTube channel, which has over 700,000 subscribers and over 175 million total video views, is believed to be run by Mike and Heather Martin of Damascus, Maryland. Several days later, the family posted another YouTube video to respond to the negative posts about the invisible ink prank while explaining that their children were not abused. The caption for this video says: “The family responds to the HATE they received about the invisible ink prank. NO CHILDREN ARE OR HAVE EVER BEEN abused in any makings of our videos. They have the final say of weather a video gets aired or not. TO OUR FANS thank you and we truly do love you. TO THE HATERS we are blocking you...” The family has taken so much criticism that they released a statement on Twitter on Monday saying: “We have had a family meeting and reviewed many comments and concerns. We discussed different alternatives for our future videos and ways we can improve. We deeply apologize for your feelings of concern. 
We DO NOT condone child abuse in any way, shape, or form. As many of our friends and family would tell you we are a loving, close knit family and all enjoy making YouTube videos and having fun together. Thank you for your love and support Summarize the preceding context in 7 sentences. Do not try to create questions or answers for your summarization.
New kits from the Italian manufacturer. Hi! This report is dedicated to the Italian company Italeri, which had its own stand at the recent Spielwarenmesse 2016. Let's check what they displayed. First of all, Italeri keeps pushing its World of Tanks series, and 2016 will bring more kits under this label. All tanks are molded in 1/35 scale and might be interesting for those who want a simple model copying their virtual vehicle. In 2016, the manufacturer plans to bring out the Pz.Kpfw.VI Tiger, Pz.Kpfw.V Panther, M4 Sherman and Cromwell.
\section{Introduction}\label{sec:introduction} The use of GPS for localizing sensor nodes in a sensor network is considered to be excessively expensive and wasteful, and in some cases intractable \cite{bul:00,bis:06}. Instead, many solutions for the localization problem tend to use inter-sensor distance or range measurements. In such a setting, the localization problem is to find the unknown locations of, say, $N$ sensors using existing noisy distance measurements among them and to sensors with known locations, also referred to as anchors. This problem is known to be NP-hard \cite{mor:97}, and there have been many efforts to solve it approximately \cite{kim:09,bis:04,wan:06,bis:06,gho:13,nar:14,cha:09,soa:15,sch:15,sri:08}. One of the major approaches to approximating the localization problem has been the use of convex relaxation techniques, namely semidefinite, second-order and disk relaxations; see e.g.\ \cite{kim:09,bis:06,bis:04,wan:06,sri:08,gho:13,soa:15}. Although centralized algorithms based on these approximations reduce the computational complexity of solving the localization problem, they are still not scalable to large problems. Centralized algorithms are also generally communication intensive and, more importantly, lack robustness to failures. Furthermore, the use of these algorithms can become impractical due to certain structural constraints resulting from, e.g., privacy requirements and physical separation. These constraints generally prevent us from forming the localization problem in a centralized manner. One approach to evading such issues is the use of scalable and/or distributed algorithms for solving large localization problems. These algorithms enable us to solve the problem through collaboration and communication among several computational agents, which could correspond to sensors, without the need for a centralized computational unit.
The design of distributed localization algorithms is commonly done by first reformulating the problem, by exploiting or imposing structure on it, and then employing efficient optimization algorithms for solving the reformulated problem; see e.g.\ some recent papers \cite{sim:14,gho:13,soa:15,sri:08}. For instance, the authors in \cite{sri:08} put forth a solution for the localization problem based on minimizing the discrepancy between the squared distances and the range measurements. They then propose a second-order cone relaxation for this problem and apply a Gauss-Seidel scheme to the resulting problem. This enables them to solve the problem distributedly. The proposed algorithm does not come with convergence guarantees, and at each iteration each agent is required to solve a second-order cone program (SOCP), which can potentially be expensive. Furthermore, due to the considered formulation of the localization problem, the resulting algorithm is prone to amplifying measurement errors and is sensitive to outliers. In \cite{sim:14}, the authors consider an SDP relaxation of the maximum likelihood formulation of the localization problem. They further relax the problem to an edge-based formulation as suggested in \cite{wan:06}. This then allows them to devise a distributed algorithm for solving the reformulated problem using the alternating direction method of multipliers (ADMM). Even though this algorithm has convergence guarantees, each agent is required to solve an SDP at every iteration of the algorithm. In order to alleviate this, the authors in \cite{gho:13} and \cite{soa:15} consider a disk relaxation of the localization problem, which corresponds to an under-estimator of the original problem.
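To make the underlying non-convex problem concrete, the following toy sketch (ours, not the algorithm of any of the cited papers; all positions and ranges are made up) recovers one unknown sensor from noise-free ranges to three anchors by gradient descent on the least-squares "stress" objective $f(x)=\sum_k (\|x-a_k\|-d_k)^2$:

```python
import numpy as np

# Hypothetical toy example: one unknown 2-D sensor, three fixed anchors,
# noise-free range measurements. Gradient descent on the non-convex
# stress objective f(x) = sum_k (||x - a_k|| - d_k)^2.
def localize(anchors, dists, iters=500, lr=0.1):
    x = anchors.mean(axis=0)                  # start at the anchor centroid
    for _ in range(iters):
        grad = np.zeros(2)
        for a, d in zip(anchors, dists):
            diff = x - a
            r = np.linalg.norm(diff) + 1e-12  # guard against division by zero
            grad += 2.0 * (r - d) * diff / r  # gradient of one squared term
        x = x - lr * grad
    return x

anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_pos = np.array([1.0, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # exact ranges
est = localize(anchors, dists)
```

The convex relaxations discussed above (SOCP, SDP and disk relaxations) replace exactly this kind of non-convex objective with a tractable approximation, which is what makes scalable and distributed solvers possible.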
\section{Introduction} \label{sec:intro} The Type~Ia supernova (SN~Ia) SN~2011fe was discovered on 2011 August 24, just 11~hr after explosion \citep{nugent11}. It is among the nearest ($\sim 6.9$~Mpc) and youngest ($\sim 11$~hr) SNe~Ia ever discovered. Extensive spectroscopic and photometric studies of SN~2011fe indicate that it is ``normal'' in nearly every sense: in luminosity, spectral and color evolution, abundance patterns, etc. \citep{parrent12,richmond12,roepke12,vinko12,munari13,pereira13}. Its unremarkable nature coupled with the wealth of observations made over its lifetime render it an ideal laboratory for understanding the physical processes which govern the evolution of normal SNe~Ia. Indeed, these data have allowed observers to place numerous and unprecedented constraints on the progenitor system of a particular SN~Ia \citep[e.g.,][]{li11,nugent11,bloom12,chomiuk12,horesh12,margutti12}. Equally as information-rich as observations taken at early times are those taken much later, when the supernova's photosphere has receded and spectrum formation occurs deep in the SN core. For example, \citet{shappee13} used late-time spectra to further constrain the progenitor system of SN~2011fe, namely that the amount of hydrogen stripped from the putative companion must be $< 0.001~M_\odot$. \citet{mcclelland13} found that the luminosity from SN~2011fe in the 3.6~$\mu$m channel of \textit{Spitzer}/IRAC fades almost twice as quickly as in the 4.5~$\mu$m channel, which they argue is a consequence of recombination from doubly ionized to singly ionized iron peak elements. In addition, \citet{kerzendorf14} used photometric observations near 930~d post-maximum light to construct a late-time quasi-bolometric light curve, and showed that the luminosity continues to trace the radioactive decay rate of $^{56}$Co quite closely, suggesting that positrons are fully trapped in the ejecta, disfavoring a radially combed or absent magnetic field in this SN. 
\citet{graham15} presented an optical spectrum at 981~d post-explosion and used constraints on both the mass of hydrogen as well as the luminosity of the putative secondary star as evidence against a single-degenerate explosion mechanism. \citet{taubenberger15} presented an optical spectrum at 1034~d post-explosion, and speculated about the presence of [\ion{O}{1}] lines near 6300~\AA, which, if confirmed, would provide strong constraints on the mass of unburned material near the center of the white dwarf progenitor of SN~2011fe. Non-detections of the H$\alpha$ line at both of these very late epochs also strengthened the constraints on the presence of hydrogen initially posed by \citet{shappee13}. Finally, \citet{mazzali15} used spectrum synthesis models of SN~2011fe from 192 to 364 days post-explosion to argue for a large central mass of stable iron and a small mass of stable nickel -- about 0.23~$M_\odot$ and 0.01~$M_\odot$, respectively. We complement these various late-time analyses with a series of radiative transfer models corresponding to a series of optical and ultraviolet (UV) spectra of SN~2011fe. \section{Observations} \label{sec:obs} \begin{table} \begin{tabular}{lll} UT Date & Phase & Telescope \\ & (days) & $+$Instrument \\ 2011 Dec 19 & $+$100 & WHT$+$ISIS \\ 2012 Apr 2 & $+$205 & Lick 3-m$+$KAST \\ 2012 Jul 17 & $+$311 & Lick 3-m$+$KAST \\ 2012 Aug 23 & $+$349 & Lick 3-m$+$KAST \\ 2013 Apr 8 & $+$578 & Lick 3-m$+$KAST \end{tabular} \caption{Observing log of spectra that appear here for the first time. The phase is with respect to maximum light.} \label{tab:obs} \end{table} We obtained optical spectra of SN~2011fe at days +100, +205, +311, +349, and +594 (Dec 19, 2011, Apr 2, 2012, Jul 17, 2012, Aug 23, 2012, Mar 27, 2013); the observations are shown in Figure~\ref{fig:all_optical_spectra_11fe} and described in Table~\ref{tab:obs}.
\section{Introduction} \label{sec:introduction} In recent years there has been a resurgence of interest in the properties of metastable states, due mostly to the studies of the jammed states of hard sphere systems; see for reviews Refs. \onlinecite{charbonneau16, baule16}. There are many topics to study, including for example the spectrum of small perturbations around the metastable state, i.e. the phonon excitations and the existence of a boson peak, and whether the Edwards hypothesis works for these states. In this paper we shall study some of these topics in the context of classical Heisenberg spin glasses, both in the presence and absence of a random magnetic field. Here the metastable states which we study are just the minima of the Hamiltonian, and so are well-defined outside the mean-field limit. It has been known for some time that there are strong connections between spin glasses and structural glasses~\cite{tarzia2007glass,fullerton2013growing, moore06}. It has been argued in very recent work~\cite{baity2015soft} that the study of the excitations in classical Heisenberg spin glasses provides the opportunity to contrast with similar phenomenology in amorphous solids~\cite{wyart2005geometric, charbonneau15}. The minima and excitations about the minima in Heisenberg spin glasses have been studied for many years \cite{bm1981, yeo04, bm1982}, but only in the absence of external fields. In Sec. \ref{sec:models} we define the models to be studied as special cases of the long-range one-dimensional $m$-component vector spin glass, where the exchange interactions $J_{ij}$ decrease with the distance between the spins at sites $i$ and $j$ as $1/r_{ij}^{\sigma}$. The spin $\mathbf{S}_i$ is an $m$-component unit vector. $m=1$ corresponds to the Ising model, $m=2$ corresponds to the XY model and $m=3$ corresponds to the Heisenberg model.
By tuning the parameter $\sigma$, one can gain access to the Sherrington-Kirkpatrick (SK) model and, upon dilution, to the Viana-Bray (VB) model, and indeed to a range of universality classes from mean-field type to short-range type \cite{leuzzi2008dilute}, although in this paper only two special cases are studied: the SK model and the Viana-Bray model.
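For concreteness, the Hamiltonian implicit in this description can be sketched as follows (our notation, not quoted from the paper; the field term $\mathbf{h}_i$ is the random magnetic field that is switched on or off):

```latex
% Sketch (our notation): m-component unit spins on a 1D chain with random
% long-range couplings and an optional random external field.
\begin{equation}
  \mathcal{H} = -\sum_{i<j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j
                - \sum_{i} \mathbf{h}_i \cdot \mathbf{S}_i ,
  \qquad |\mathbf{S}_i| = 1 ,
\end{equation}
```

where the random couplings $J_{ij}$ fall off with the distance $r_{ij}$ between sites $i$ and $j$ as $1/r_{ij}^{\sigma}$, as stated above.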
\section{I. \ Objective function used to correct the ZZ-error channel} Thanks to the isomorphism between SU(2) generators, $\sigma_X,\sigma_Y, \sigma_Z$, and the subgroup of SU(4) generators, $\sigma_{ZZ},\sigma_{ZX},\sigma_{IY}$, the right-hand side of Eq. (11) can be expressed in a more tractable way as \begin{equation}\label{eq:appendix_1} \left(\prod_{j=4}^{1}\exp\left[i\frac{\psi_j}{2} \sigma_{Z}\right]\left[U\right]^{n_j}\exp\left[-i\frac{\psi_j}{2} \sigma_{Z}\right]\right)U, \end{equation} where $U=\exp\left[-i \frac{5\theta_0}{2}(1+\delta)\sigma_{X}\right]$ and $\theta_0=\arccos\left[\frac{1}{4}\left(\sqrt{13}-1\right)\right]$. The error component of Eq. \eqref{eq:appendix_1} is then isolated by expanding the sequence to first order in $\delta$. The resulting unperturbed matrix and first-order error matrix, $A+\delta B$, are expressed in terms of the SU(2) generators as $A=\Lambda_1\sigma_I + i \Lambda_2\sigma_X + i \Lambda_3 \sigma_Y + i \Lambda_4\sigma_Z$ and $B=\Delta_1\sigma_I + i \Delta_2\sigma_X + i \Delta_3 \sigma_Y + i \Delta_4\sigma_Z$, where the $\Delta_i$'s and $\Lambda_i$'s are functions of the $\psi_j$'s, $n_j$'s and $\theta_0$. Their closed forms are too long to include here, but can be easily obtained with any symbolic computation program.\\ \indent Making use again of the isomorphism between SU(2) and a subgroup of SU(4), we express the local invariants corresponding to $\mathcal{U}^{(6k)}$ in Eq. (11) in terms of the elements of the matrix $A$: \begin{equation} \begin{aligned} G_1(\mathcal{U}^{(6k)})&=(\Lambda_1^2+\Lambda_4^2 - \Lambda_2^2 - \Lambda_3^2)^2\\ G_2(\mathcal{U}^{(6k)})&=3 \Lambda_4^4 + 3 \Lambda_1^4 - 2 \Lambda_1^2 (\Lambda_2^2 + \Lambda_3^2) + 3 (\Lambda_2^2 + \Lambda_3^2)^2 + \Lambda_4^2 (6 \Lambda_1^2 - 2 (\Lambda_2^2 + \Lambda_3^2)).
\end{aligned} \end{equation} \indent With the above expressions and the terms that make up the matrix $B$, we construct our objective function such that the error matrix $B$ is canceled and the local invariants of the sequence and target operation are as close as possible. Accordingly, the objective function is given by \begin{equation}\label{eq:objective_function} f=\Delta_1^2+ \Delta_2^2 + \Delta_3^2 + \Delta_4^2 +[G_1(\mathcal{U}^{(6k)})- G_1(\mathfrak{U})]^2 +[G_2(\mathcal{U}^{(6k)})- G_2(\mathfrak{U})]^2, \end{equation} where $G_i(\mathfrak{U})$ are the local invariants of the target operation.\\ \indent The values of the solutions found by numerically minimizing the objective function while targeting a {\sc cnot} operation are: \begin{equation} \begin{aligned} \psi_1=& 1.135268,\\ \psi_2=& -0.405533,\\ \psi_3=& -1.841855,\\ \psi_4=& 0.191753. \end{aligned} \end{equation} Moreover, the angles of the local operations needed to transform $\mathcal{U}^{(6k)}_{\text{\sc cnot}}$ into {\sc cnot}, Eq. (12), are \begin{equation} \begin{aligned} \phi_1=& -1.607820,\\ \phi_2=& 0.234035. \end{aligned} \end{equation} \indent Similarly, the solutions found with the numerical minimization of Eq. \eqref{eq:objective_function} that yield a corrected rotation equivalent to $(5\theta_0/k)_{ZZ}$, for $k=\{5,10,20\}$ respectively, are \begin{equation} \begin{aligned} \psi_1=\{&-0.183589,-0.103032,-0.0522225\},\\ \psi_2=\{&-3\ldots \end{aligned} \end{equation}
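As a hedged numerical sketch (ours, not the authors' code), the quoted closed forms for $G_1$ and $G_2$ can be evaluated for any SU(2) matrix by extracting the coefficients $\Lambda_i$ via Pauli traces; per these formulas, local $z$-rotations leave the invariants at the identity values $(1, 3)$:

```python
import numpy as np

# Pauli matrices and identity (standard definitions).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2_coeffs(A):
    """Lambda_i in A = L1*I + i*L2*sx + i*L3*sy + i*L4*sz (real for SU(2))."""
    l1 = np.trace(A) / 2
    l2 = np.trace(sx @ A) / 2j
    l3 = np.trace(sy @ A) / 2j
    l4 = np.trace(sz @ A) / 2j
    return np.real([l1, l2, l3, l4])

def local_invariants(A):
    """G1, G2 using the closed forms quoted in the text above."""
    l1, l2, l3, l4 = su2_coeffs(A)
    g1 = (l1**2 + l4**2 - l2**2 - l3**2) ** 2
    g2 = (3*l4**4 + 3*l1**4 - 2*l1**2*(l2**2 + l3**2)
          + 3*(l2**2 + l3**2)**2 + l4**2*(6*l1**2 - 2*(l2**2 + l3**2)))
    return g1, g2
```

A minimizer of Eq. \eqref{eq:objective_function} (e.g. any standard numerical optimizer) would then drive these invariants toward those of the target operation while canceling the $\Delta_i$ terms.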
Dave Wilton, Monday, February 02, 2009 This adjective, meaning joyful or light-hearted, is of uncertain origin. The English word comes from the French gai, but where this French word comes from is uncertain. There are cognates in other Romance languages, notably Provençal, Old Spanish, Portuguese, and Italian, but no likely Latin candidate for a root exists. The word is probably Germanic in origin, with the Old High German gāhi, fast or fleeting, suggested as a likely progenitor.1 The word is first recorded in English c.1325, with the meaning of beautiful, in a poem titled Blow, Northerne Wind, which appears in the manuscript British Library MS Harley 2253 (As an aside, Harley 2253 is a very important manuscript. It is a treasure-trove of early English lyric poetry, containing early and unique copies of many poems.): Heo is dereworþe in day, graciouse, stout, ant gay, gentil, iolyf so þe iay. (She is precious in day, gracious, stout, and gay, gentle, jolly as the jay.)2 Over the next few decades, the meaning of the word evolved from beautiful to bright, showy, and finely dressed. By the end of the 14th century, the modern sense of light-hearted and carefree had appeared. From Chaucer’s Troilus & Criseyde, Book II, lines 921-22, written c.1385: Peraunter in his briddes wise a lay Of love, that made hire herte fressh and gay. (By chance, in his bird’s manner [sang] a song Of love, that made her heart fresh and gay.)3 In recent years, however, this traditional sense of gay has been driven out of the language by the newer sense meaning homosexual. Many believe this new sense of gay to be quite recent, when in fact it dates at least to the 1920s and perhaps even earlier. This early existence is as a slang and self-identifying code word among homosexuals, only entering the mainstream of English in the late 1960s. So how did this word meaning joyful come to refer to homosexuality?
There are two, not necessarily mutually exclusive, commonly proffered explanations that are plausible. Perhaps the most commonly touted one is that the modern use of gay comes from a clipping of gaycat, a slang term among hobos and itinerants meaning a boy or young man who accompanies an older, more experienced tramp, with the implication of sexual favors being exchanged for protection and instruction. The term was often used disparagingly and dates to at least 1893, when it appears in the November issue of Century magazine: The gay-cats are men who will work for “very good money,” and are usually in the West in the autumn to take advantage of the high wages offered to laborers during the harvest season. The disparaging sense can be seen in this citation from the 10 August 1895 issue of Harper’s Weekly: The hobo is an exceedingly proud fellow, and if you want to offend him, call him a “gay cat” or a “poke-outer.” And from Jack London’s The Road, published in 1907, but this passage is a reference to 1892: In a more familiar parlance, gay-cats are short-horns, chechaquos, new chums, or tenderfeet.
\section{Introduction} Relativity theory, quantum theory, and information theory are fundamental blocks of theoretical physics \cite{peres2004}. The goal of theoretical physics is to describe and, to a certain extent, understand natural phenomena. Unfortunately, complex difficulties arise when one attempts to merge general relativity theory and quantum theory. For example, in classical mechanics it is often said that gravity is a purely geometric theory, since the mass does not appear in the usual Newtonian equation of a particle trajectory \begin{equation} m\frac{d^{2}\vec{x}}{dt^{2}}=-m\vec{\nabla}_{\vec{x}}\Phi_{\text{gravity}}\Leftrightarrow\frac{d^{2}\vec{x}}{dt^{2}}=\vec{g}\,. \end{equation} This is a direct consequence of the equality of the gravitational and inertial masses. In quantum mechanics, the situation is rather different. As a matter of fact, the Schr\"{o}dinger quantum-mechanical wave equation is given by \cite{sakurai} \begin{equation} \left[ -\frac{\hbar^{2}}{2m}\vec{\nabla}_{\vec{x}}^{2}+m\Phi_{\text{gravity}}\right] \psi\left( \vec{x},t\right) =i\hbar\frac{\partial}{\partial t}\psi\left( \vec{x},t\right)\,. \end{equation} The mass $m$ no longer cancels and, instead, it appears in the combination $\hbar/m$ (where $\hbar\overset{\text{def}}{=}h/2\pi$ and $h$ denotes the Planck constant). Therefore, in an instance where $\hbar$ appears, $m$ is also expected to appear \cite{colella1975}. It seems evident that there is the possibility that such difficulties are not simply technical and mathematical, but rather conceptual and fundamental. This viewpoint was recently presented in \cite{brukner14}, where the idea was advanced that \emph{quantum causality} might shed some light on foundational issues related to the general relativity-quantum mechanics problem.
When describing natural phenomena at the quantum scale, the interaction between the mechanical object under investigation and the observer (or observing equipment) is not negligible and cannot be predicted \cite{bohr35, bohr37, bohr50}. This fact leads to the impossibility of unambiguously distinguishing between the object and the measuring instruments. This, in turn, is logically incompatible with the classical notion of causality; the possibility of sharply distinguishing between the subject and the object is essential to the ideal of causality. In his attempt to bring consistency to science, Bohr proposed to replace the classical ideal of causality with a more general viewpoint termed \emph{complementarity}. Roughly speaking, anyone can understand that one cannot bow in front of somebody without showing one's back to somebody else.
\section{Introduction} For decades, the microscopic process which causes a linear-in-temperature term in the electrical resistivity of pure ferromagnetic metals (Fe, Co and Ni) at low temperatures---which is clearly observed around liquid-helium temperatures \cite{Campbell,Volkenshtein}---has remained unclear. In this temperature region, the $T^2$ dependence of the electrical resistivity characteristic of the transition metals at low temperatures, due to the $s$-$d$ exchange interaction \cite{Kasuya2,Goodings,Mannari} and inter-electronic collisions, \cite{Baber} ceases to be the only dominant contribution. The best-known intrinsic mechanism giving a linear term in the resistivity is the spin-orbit interaction between the orbits of the $4s$ conduction electrons and the spins of the nearly localized $3d$ ferromagnetic electrons. \cite{Turov,Turov2,Turov3} However, this predicts a linear coefficient which is about a thousand times smaller than observed. \cite{Turov2,Goodings,Taylor} Although other mechanisms have been proposed \cite{Volkenshtein} to explain this anomalous behavior, including e.g. electron-magnon scattering taking into account the electronic spin polarization, and scattering of the conduction electrons by 2D spin-wave excitations on the magnetic domain walls, it is believed, \cite{Campbell,Volkenshtein} based on a series of experiments, that the anomaly is caused by the scattering of conduction electrons by the internal magnetic induction present in the ferromagnetic metals, observed as an internal magnetoresistance effect. However, no explanation of this fact has been given so far using quantum mechanics. In this article, I propose a simple picture of an internal magnetoresistance effect in the ferromagnetic metals which predicts the correct magnitude of the linear coefficient.
This is realized as the contribution to the electrical resistivity coming from electronic spin-flip transitions in the conduction band---which is Zeeman-split by the internal magnetic induction---and mediated by the isotropic spin-phonon interaction of the conduction-electron spins with the orbital \emph{contact} (hyperfine) field these electrons produce at the ionic positions. This mechanism, which accounts for the observed spin-lattice relaxation times of pure ferromagnetic metals at room temperatures, complements the existing theories of spin relaxation of conduction electrons in metals, \cite{Overhauser,Fabian,Boross,Mokrousov} which do not deal with the ferromagnetic case. The electronic spin-flip transitions introduced here portray phonons as carriers of angular momentum. The macroscopic consequences of this were first discussed by Zhang and Niu\cite{Zhang} in their consideration of the Einstein-de Haas effect in a magnetic crystal, leading to the envisioning of \emph{chiral} phonons\cite{ZhangNiu2} as lattice modes supporting left-handed and right-handed excitations and spin\cite{Garanin,Holanda}. Chiral phonons have been observed directly only very recently\cite{SKim,HZhu}; however, as far as I know, the role played by them in the electrical resistivity of a metal has not been considered before. \section{Description of the model} Consider a system of itinerant and interacting electrons \emph{magnetically} coupled to the localized ions of the material. The Hamiltonian of this system is \begin{equation}\label{Hked} H_e = \sum_{\bm{k} s}E_k n_{\bm{k} s}+\dfrac{1}{2}\sum_{\bm{k} \neq 0}J(\bm{k})\rho_{\bm{k}}\rho_{-\bm{k}}+H_{\tm{dd}}. \end{equation} Here, the first term represents the kinetic energy of these electrons, which have wave number $\bm{k}$ and spin index $s$, with $E_k=\hbar^2k^2/2m$ and $n_{\bm{k} s}=\hat{c}_{\bm{k} s}^{\dagger}\hat{c}_{\bm{k} s}$ being the electron number operator.
The second term represents the electron-electron Coulomb interactions, with $\rho_{\bm{k}}=\sum_{\bm{l}s}\hat{c}_{\bm{l}+\bm{k},s}^{\dagger}\hat{c}_{\bm{l} s}$ being a Fourier component of the electronic density, and $J(\bm{k})$ being the Fourier transform of the Coulomb electric potential. The third term in \eqref{Hked} represents the magnetic dipole-dipole interactions between electron pairs and between electron-ion pairs. This is given by \begin{equation}\label{dd} H_{\tm{dd}}=-\sum_{r}\bm{\mu}_r\cdot\bm{B}(\bm{x}_r)=-\sum_{rq}\bm{\mu}_r\cdot \bm{D}(\bm{x}_r-\bm{x}_q)\cdot\bm{\mu}_q, \end{equation} where $\bm{\mu}_r$ is the magnetic moment of the $r^{\tm{th}}$ dipole at position $\bm{x}_r$, which interacts with the magnetic dipole field $\bm{B}(\bm{x}_r)=\sum_q \bm{D}(\bm{x}_r-\bm{x}_q)\cdot\bm{\mu}_q$ generated by the other dipoles. Here $\bm{D}(\bm{x}_r-\bm{x}_q)$ is a dyad representing the dipole kernel \begin{equation}\label{dk} \bm{D}(\bm{x}_r-\bm{x}_q) = \dfrac{3\,\hat{\bm{x}}_{rq}\hat{\bm{x}}_{rq}-\bm{1}}{|\bm{x}_r-\bm{x}_q|^3}+\dfrac{8\pi}{3}\delta(\bm{x}_r-\bm{x}_q)\bm{1}, \end{equation} with $\hat{\bm{x}}_{rq}$ a unit vector from $\bm{x}_r$ to $\bm{x}_q$---note that the second term in \eqref{dk} is necessary to account for the volume integral of the magnetic dipole field $\bm{B}(\bm{x})$ over a region containing all the dipoles.\cite{Jackson} Let me now divide the magnetic dipoles into two classes: those belonging to the ions, in which case the label is changed to $r_i$, and those belonging to itinerant electrons, in which case the label is changed to $r_e$.
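With that relabeling, the double sum in \eqref{dd} splits naturally into electron-electron, electron-ion and ion-ion contributions; since the kernel \eqref{dk} is even in its argument, the two cross sums are equal (this grouping is ours, sketched for clarity):

```latex
% Our grouping (sketch): splitting the dipole sums of \eqref{dd} over the
% two classes of labels; the factor 2 collects the two equal cross terms.
\begin{equation}
H_{\tm{dd}} = -\sum_{r_e q_e}\bm{\mu}_{r_e}\cdot\bm{D}(\bm{x}_{r_e}-\bm{x}_{q_e})\cdot\bm{\mu}_{q_e}
              -2\sum_{r_e q_i}\bm{\mu}_{r_e}\cdot\bm{D}(\bm{x}_{r_e}-\bm{x}_{q_i})\cdot\bm{\mu}_{q_i}
              -\sum_{r_i q_i}\bm{\mu}_{r_i}\cdot\bm{D}(\bm{x}_{r_i}-\bm{x}_{q_i})\cdot\bm{\mu}_{q_i}.
\end{equation}
```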
“To vote for this is to vote for job creation,” said Councilman Chris Anderson. Drink samples made with Dr. Thacher's syrup are included in the tour at Chattanooga Whiskey Stillhouse. Photo by Angela Lewis Foster / Times Free Press. Caleb Warren mixes a drink at Chattanooga Whiskey Stillhouse on Market Street. Photo by Angela Lewis Foster / Times Free Press. Chattanooga Whiskey Co. has received the City Council's blessing to build a larger downtown distillery facility. Plans call for the company to re-purpose the former Newton Chevrolet dealership site on West M…
TOTW 1 ○ Last year, EA released their first TOTW on the 21st of September. This was one day after the release of the web app. This year, it doesn’t appear that there will be any platform up and running at the launch of this potential TOTW (web app or EA Access). While this doesn’t necessarily mean EA can’t release this TOTW for FIFA 18, it does make it a bit awkward. In the same vein, it would be a bit awkward to not have a TOTW in packs for EA Access and the start of the Icon and Ronaldo release. Launch of Squad Battles ○ Under the same reasoning as the challenges, our guess is that EA will have Squad Battles ready to try in some fashion for EA Access. The only tricky thing here could be the reward system. Squad Battles is essentially a single-player version of FUT Champs, and we are not yet sure how long the ranking window will be (equivalent to the weekend we had this past year to play 40 games). While it would certainly be a bummer for EA Access members if they weren’t able to try out one of this year’s headline new features, EA might prefer to kick things off at the full launch.
\section{Introduction} The inflationary universe scenario \cite{starobinsky,guth,linde}, in which the early universe undergoes a rapid expansion, has been generally accepted as a solution to the horizon problem and some other related problems of the standard big-bang cosmology. The origin of the field that drives inflation is still unknown and is subject to speculation. Among the many models of inflation, a popular class comprises tachyon inflation models \cite{fairbairn,feinstein,shiu1,kofman,sami,shiu2,cline,steer,campo,li,tachyon}. These models are of particular interest as inflation in them is driven by the tachyon field originating in string theory. The tachyon potential is derived from string theory and has to satisfy some definite properties to describe tachyon condensation and other requirements in string theory. However, Kofman and Linde have shown \cite{kofman} that the slow-roll conditions are not compatible with a string coupling much smaller than one and a compactification length scale much larger than the Planck length. This leads to the density fluctuations produced during inflation being incompatible with the observational constraint on the amplitude of the scalar perturbations. This criticism is based on the string-theory-motivated values of the parameters in the tachyon potential, i.e., the brane tension and the parameters in the four-dimensional Newton constant obtained via conventional string compactification. Of course, if one relaxes the string theory constraints on the above-mentioned parameters, the effective tachyon theory will naturally lead to a type of inflation which slightly deviates from the conventional inflation based on canonical scalar field theory. Steer and Vernizzi \cite{steer} have noted a deviation from standard single-field inflation in the second-order consistency relations. Based on their analysis they concluded that tachyon inflation could not be ruled out by the then available observations.
It seems that the present observations \cite{planck2015} could perhaps discriminate between different tachyon models and disfavor or rule out some of them (for a recent discussion of the phenomenological constraints imposed by Planck 2015 see, e.g., Ref. \cite{pirtskhalava}). A simple tachyon model can be analyzed in the framework of the second Randall-Sundrum (RSII) model \cite{randall2}. The RSII model was originally proposed as a possible mechanism for localizing gravity on the 3+1 universe embedded in a 4+1 dimensional spacetime without compactification of the extra dimension. The model is a 4+1 dimensional Anti de Sitter (AdS$_5$) universe containing two 3-branes with opposite tensions separated in the fifth dimension: observers reside on the positive tension brane, and the negative tension brane is pushed off to infinity. The Planck mass scale is determined by the curvature of the AdS spacetime rather than by the size of the fifth dimension. The fluctuation of the interbrane distance along the extra dimension implies the existence of the so-called {\em radion} -- a massless scalar field that causes a distortion of the bulk geometry. In this regard, a stabilization mechanism for the interbrane distance has been proposed \cite{goldberger} by assuming the presence of scalar fields in the bulk. The stabilization mechanism is relevant for the RSI model, where the interbrane distance is kept finite. In the RSII model, as the negative tension brane is pushed off to infinity, the radion disappears. However, it has been shown by Kim, Tupper, and Viollier \cite{kim} that the disappearance of the radion in RSII is an artifact of linear theory and hence, when going beyond linear theory, the radion remains a dynamical field in the RSII model. Moreover, owing to the radion, the distance between branes remains finite in the RSII limit of infinite coordinate bulk even though the coordinate position of the second brane is infinite.
The presence of the radion may have interesting physical implications.
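For reference, a form commonly taken as the starting point in the tachyon inflation literature (the text above does not display it explicitly, so this is our addition, with sign conventions for a mostly-plus metric) is the Dirac-Born-Infeld-type effective action for the tachyon field $T$:

```latex
% Standard DBI-type tachyon effective action (our addition for reference;
% conventions: metric signature (-,+,+,+)).
\begin{equation}
  S = -\int d^{4}x\, \sqrt{-g}\, V(T)\,
      \sqrt{1 + g^{\mu\nu}\,\partial_{\mu}T\,\partial_{\nu}T}\,,
\end{equation}
```

where $V(T)$ is the tachyon potential whose string-theoretic form is constrained as described above.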
The TNR essay is here, prompted by the publication of Angus Burgin’s The Great Persuasion: Reinventing Free Markets Since the Great Depression. Excerpt: The MPS was no more influential inside the economics profession. There were no publications to be discussed. The American membership was apparently limited to economists of the Chicago School and its scattered university outposts, plus a few transplanted Europeans. “Some of my best friends” belonged. There was, of course, continuing research and debate among economists on the good and bad properties of competitive and noncompetitive markets, and the capacities and limitations of corrective regulation. But these would have gone on in the same way had the MPS not existed. It has to be remembered that academic economists were never optimistic about central planning. Even discussion about the economics of some conceivable socialism usually took the form of devising institutions and rules of behavior that would make a socialist economy function like a competitive market economy (perhaps more like one than any real-world market economy does). Maybe the main function of the MPS was to maintain the morale of the free-market fellowship. Solow neglects to mention that Milton Friedman turned out to be right on most of the issues he discussed (though targeting money doesn’t work), that MPS economists shaped at least two decades of major and indeed beneficial economic reforms across the world, or that some number of the economists at MIT envied the growth performance of the Soviet Union and that such remarks were found in the most popular economics textbook in the profession. You can consider this essay a highly selective, error-laden, and disappointing account of a topic which could in fact use more serious scrutiny. By the way, if you read Solow’s own 1962 review of Maurice Dobb on economic planning (JSTOR gate), it shows very little understanding of Hayek’s central points on these topics, which by then were decades old. 
Arguably it shows “negative understanding” of Hayek. Or to see how important Friedman’s work on money and also expectations was, try comparing it with…um…the Solow and Samuelson 1960 piece on the Phillips Curve (JSTOR), which Friedman pretty much refuted point by point. Here are the closing two sentences of that piece: We have not here entered upon the important question of what feasible institutional reforms might be introduced to lessen the degree of disharmony between full employment and price stability. These could of course involve such wide-ranging issues as direct price and wage controls, antiunion and antitrust legislation, and a host of other measures hopefully designed to move the American Phillips’ curves downward and to the left. And Solow wonders why the Mont Pelerin Society and monetarism were needed. Solow should have started his piece with a sentence like “Milton Friedman was not right about everything, but most of his criticisms of my earlier views have been upheld by subsequent economic theory and practice….” Greg Ransom…telephone! For the pointer I thank Peter Boettke. Summarize the preceding context in 4 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction}\label{sec:intro} A major challenge towards self-organizing networks (SON) is the joint optimization of multiple SON use cases by coordinately handling multiple configuration parameters. Widely studied SON use cases include coverage and capacity optimization (CCO), mobility load balancing (MLB) and mobility robustness optimization (MRO) \cite{3GPP36902}. However, most of these works study an isolated single use case and ignore the conflicts or interactions between the use cases \cite{giovanidis2012dist,razavi2010self}. In contrast, this paper considers a joint optimization of two strongly coupled use cases: CCO and MLB. The objective is to achieve a good trade-off between coverage and capacity performance, while ensuring a load-balanced network. The SON functionalities are usually implemented at the network management layer and are designed to deal with \lq\lq long-term\rq\rq \ network performance. Short-term optimization of individual users is left to lower layers of the protocol stack. To capture long-term global changes in a network, we consider a cluster-based network scenario, where users served by the same base station (BS) with similar SINR distribution are adaptively grouped into clusters. Our objective is to jointly optimize the following variables: \begin{itemize} \item Cluster-based BS assignment and power allocation. \item BS-based antenna tilt optimization and power allocation. \end{itemize} The joint optimization of assignment, antenna tilts, and powers is an inherently challenging problem. The interference and the resulting performance measures depend on these variables in a complex and intertwined manner. Such a problem, to the best of the authors' knowledge, has been studied in only a few works.
For example, in \cite{klessig2012improving} a problem of jointly optimizing antenna tilt and cell selection to improve the spectral and energy efficiency is stated; however, the solution derived by a structured searching algorithm may not be optimal. In this paper, we propose a robust algorithmic framework built on a utility model, which enables fast and near-optimal uplink solutions and sub-optimal downlink solutions by exploiting three properties: 1) the monotonicity and fixed-point property of monotone and strictly subhomogeneous (MSS) functions\footnote{Much of the literature uses the term {\it interference function} for functions satisfying three conditions: positivity, monotonicity and scalability \cite{yates95}. Positivity is shown to be a consequence of the other two properties \cite{leung2004convergence}, and we use the term {\it strictly subhomogeneous} in place of scalable from a contraction mapping point of view, in keeping with some related literature \cite{nuzman2007contraction}.}, 2) the decoupled property of the antenna tilt and BS assignment optimization in the uplink network, and 3) uplink-downlink duality. The first property admits a globally optimal solution via fixed-point iteration for two specific problems: utility-constrained power minimization and power-constrained max-min utility balancing \cite{vucic2011fixed,stanczak2009fundamentals,schubert2012interference,yates95}. The second and third properties enable decomposition of the high-dimensional optimization problem, such as the joint beamforming and power control proposed in \cite{BocheDuality06,schubert2005iterative,huang2013joint,he2012multi}. Our distinct contributions in this work can be summarized as follows:\\ 1) We propose a max-min utility balancing algorithm for capacity-coverage trade-off optimization over a joint space of antenna tilts, BS assignments and powers.
The utility, defined as a convex combination of the average SINR and the worst-case SINR, implies a balanced performance of capacity and coverage. Load balancing is improved as well due to a uniform distribution of the interference among the BSs.\\ 2) The proposed utility is formulated based on the MSS functions, which allows us to find the optimal solution by applying fixed-point iterations.\\ 3) Since antenna tilts are BS-specific variables while assignments are cluster-specific, we develop two optimization problems with the same objective function, formulated either as a problem of per-cluster variables or as a problem of per-BS variables. We propose a two-step optimization algorithm in the uplink to iteratively optimize the per-BS variables (antenna tilts and BS power budgets) and the cluster-based variables (assignments and cluster powers). Since both problems aim at optimizing the same objective function, the algorithm is shown to be convergent.\\ 4) The decoupled property of antenna tilt and assignment in the uplink decomposes the high-dimensional optimization problem and enables a more efficient optimization algorithm. We then analyze the uplink-downlink duality by using the Perron-Frobenius theory \cite{meyer2000matrix}, and propose an efficient sub-optimal solution in the downlink by utilizing the optimized variables in the dual uplink. \section{System Model}\label{sec:Model} We consider a multicell wireless network composed of a set of BSs $\set{N}:=\{1,\ldots, N\}$ and a set of users $\set{K}:=\{1,\ldots, K\}$. Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
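The fixed-point machinery invoked above can be illustrated with a minimal sketch — not the paper's algorithm, but the classic power-control iteration on a standard (monotone, scalable) interference function in the sense of Yates (1995). The gain matrix, noise level and SINR targets below are invented toy values.

```python
import numpy as np

# Toy three-user uplink: G[k, j] is the gain from user j to user k's BS.
G = np.array([[1.00, 0.10, 0.05],
              [0.10, 1.00, 0.10],
              [0.05, 0.10, 1.00]])   # link gains (invented)
noise = 1e-2                          # receiver noise power
gamma = np.array([1.0, 1.5, 0.8])    # per-user SINR targets (invented)

def I(p):
    """Standard interference function: the power each user needs to
    meet its SINR target given the others' current powers p."""
    interf = G @ p - np.diag(G) * p + noise   # interference plus noise
    return gamma * interf / np.diag(G)

p = np.zeros(3)
for _ in range(200):
    p = I(p)   # monotone iteration; converges to the unique fixed point

# At the fixed point every user meets its SINR target exactly.
sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
```

Monotonicity and scalability of `I` guarantee that, whenever the target vector is feasible, the iteration converges to the unique fixed point from any starting point — the same structural property the MSS framework exploits for the max-min utility balancing problem.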
\section{\label{intro}Introduction} The ``instanton calculus'' is a common approach for studying the non-perturbative semiclassical effects in gauge theories and sigma models. One of the first and perhaps the best known illustrations of this approach is the $O(3)$ Non-Linear Sigma Model (NLSM) in two dimensions, where multi-instanton configurations admit a simple analytic form \cite{Polyakov:1975yp}. It is less known that the $O(3)$ NLSM provides an opportunity to explore a mechanism of exact summation of the instanton configurations in the path integral. In order to explain the purpose of this paper, we start with a brief overview of the main ideas behind this summation. The instanton contributions in the $O(3)$ NLSM were calculated in a semiclassical approximation in the paper \cite{Fateev:1979dc}. It was shown that the effect of instantons with positive topological charge can be described in terms of a non-interacting theory of Dirac fermions. Moreover, every instanton has its anti-instanton counterpart with the same action and opposite topological charge. Thus, neglecting the instanton-anti-instanton interaction, one arrives at a theory with two non-interacting fermions. Although the classical equation has no solutions containing instanton-anti-instanton configurations, such configurations must still be taken into account. In ref.\,\cite{Bukhvostov:1980sn} Bukhvostov and Lipatov (BL) found that the weak instanton-anti-instanton interaction is described by means of a theory of two Dirac fermions, $\psi_\sigma \ (\sigma=\pm)$, with the Lagrangian \begin{eqnarray}\label{Lagr1} {\cal L}= \sum_{\sigma=\pm }{\bar \psi}_\sigma \big({\rm i} \gamma^\mu\partial_\mu-M\big){ \psi}_\sigma- g\, \big({\bar \psi}_+\gamma^\mu{ \psi}_+\big) \big({\bar \psi}_- \gamma_\mu{ \psi}_-\big)\ . \end{eqnarray} The perturbative treatment of \eqref{Lagr1} leads to ultraviolet (UV) divergences and requires renormalization.
The renormalization can be performed by adding to the Lagrangian the following counterterms, which preserve the invariance w.r.t. two independent $U(1)$ rotations $\psi_\pm\mapsto \mbox{e}^{{\rm i}\alpha_\pm} \, \psi_\pm$, as well as the permutation $\psi_+\leftrightarrow\psi_-$: \begin{eqnarray}\label{Lagr2} {\cal L}_{\rm BL}={\cal L}-\sum_{\sigma=\pm}\Big(\,\delta M\, {\bar \psi}_\sigma{ \psi}_\sigma+ \frac{g_1}{2}\, \big({\bar \psi}_\sigma\gamma^\mu{ \psi}_\sigma\big)^2\Big)\ . \end{eqnarray} In fact, the cancellation of the UV divergences leaves one of the counterterm couplings undetermined. It is possible to use a renormalization scheme where the renormalized mass $M$, the bare mass $M_0=M+\delta M$ and the UV cut-off energy scale $\Lambda_{\rm UV}$ obey the relation \begin{eqnarray}\label{aoisasosa} \frac{M}{M_0}=\bigg(\frac{M}{\Lambda_{\rm UV}}\bigg)^\nu\ , \end{eqnarray} where the exponent $\nu$ is a renormalization group invariant parameter, as is the dimensionless coupling $g$. For $\nu=0$ the fermion mass does not require renormalization and the only divergent quantity is the zero point energy. The theory, in a sense, turns out to be UV finite in this case. Then the specific {\it logarithmic} divergence of the zero point energy can be interpreted as a ``small-instanton'' divergence in the context of the $O(3)$ NLSM. Recall that the standard lattice description of the $O(3)$ sigma model has problems -- for example, the lattice topological susceptibility does not obey naive scaling laws. L\"uscher has shown \cite{Luscher:1981tq} that this is because of the so-called ``small instantons'' -- field configurations such as the winding of the $O(3)$-field around plaquettes of lattice size, giving rise to a spurious contribution to quantities related to the zero point energy. Summarize the preceding context in 8 sentences. Do not try to create questions or answers for your summarization.
\section{Introduction}\label{intro} Gas has a fundamental role in shaping the evolution of galaxies, through its accretion on to massive haloes, cooling and subsequent fuelling of star formation, to the triggering of extreme luminous activity around super massive black holes. Determining how the physical state of gas in galaxies changes as a function of redshift is therefore crucial to understanding how these processes evolve over cosmological time. The standard model of the gaseous interstellar medium (ISM) in galaxies comprises a thermally bistable medium (\citealt*{Field:1969}) of dense ($n \sim 100$\,cm$^{-3}$) cold neutral medium (CNM) structures, with kinetic temperatures of $T_{\rm k} \sim 100$\,K, embedded within a lower-density ($n \sim 1$\,cm$^{-3}$) warm neutral medium (WNM) with $T_{\rm k} \sim 10^{4}$\,K. The WNM shields the cold gas and is in turn ionized by background cosmic rays and soft X-rays (e.g. \citealt{Wolfire:1995, Wolfire:2003}). A further hot ($T_{\rm k} \sim 10^{6}$\,K) ionized component was introduced into the model by \cite{McKee:1977}, to account for heating by supernova-driven shocks within the inter-cloud medium. In the local Universe, this paradigm has successfully withstood decades of observational scrutiny, although there is some evidence (e.g. \citealt{Heiles:2003b}; \citealt*{Roy:2013b}; \citealt{Murray:2015}) that a significant fraction of the WNM may exist at temperatures lower than expected for global conditions of stability, requiring additional dynamical processes to maintain local thermodynamic equilibrium. Since atomic hydrogen (\mbox{H\,{\sc i}}) is one of the most abundant components of the neutral ISM and readily detectable through either the 21\,cm or Lyman $\alpha$ lines, it is often used as a tracer of the large-scale distribution and physical state of neutral gas in galaxies. The 21\,cm line has successfully been employed in surveying the neutral ISM in the Milky Way (e.g. 
\citealt{McClure-Griffiths:2009,Murray:2015}), the Local Group (e.g. \citealt{Kim:2003,Bruns:2005,Braun:2009,Gratier:2010}) and low-redshift Universe (see \citealt{Giovanelli:2016} for a review). However, beyond $z \sim 0.4$ (\citealt{Fernandez:2016}) \mbox{H\,{\sc i}} emission from individual galaxies becomes too faint to be detectable by current 21\,cm surveys and so we must rely on absorption against suitably bright background radio (21\,cm) or UV (Lyman-$\alpha$) continuum sources to probe the cosmological evolution of \mbox{H\,{\sc i}}. The bulk of neutral gas is contained in high-column-density damped Lyman-$\alpha$ absorbers (DLAs, $N_{\rm HI} \geq 2 \times 10^{20}$\,cm$^{-2}$; see \citealt*{Wolfe:2005} for a review), which at $z \gtrsim 1.7$ are detectable in the optical spectra of quasars. Studies of DLAs provide evidence that the atomic gas in the distant Universe appears to be consistent with a multi-phase neutral ISM similar to that seen in the Local Group (e.g. \citealt*{Lane:2000}; \citealt*{Kanekar:2001c}; \citealt*{Wolfe:2003b}). However, there is some variation in the cold and warm fractions measured throughout the DLA population (e.g. \citealt*{Howk:2005}; \citealt{Srianand:2005, Lehner:2008}; \citealt*{Jorgenson:2010}; \citealt{Carswell:2011, Carswell:2012, Kanekar:2014a}; \citealt*{Cooke:2015}; \citealt*{Neeleman:2015}). The 21-cm spin temperature affords us an important line-of-enquiry in unraveling the physical state of high-redshift atomic gas. This quantity is sensitive to the processes that excite the ground-state of \mbox{H\,{\sc i}} in the ISM (\citealt{Purcell:1956,Field:1958,Field:1959b,Bahcall:1969}) and therefore dictates the detectability of the 21\,cm line in absorption. 
In the CNM the spin temperature is governed by collisional excitation and so is driven to the kinetic temperature, while the lower densities in the WNM mean that the 21\,cm transition is not thermalized by collisions between the hydrogen atoms, and so photo-excitation by the background Ly $\alpha$ radiation field becomes important. Consequently, the spin temperature in the WNM is lower than the kinetic temperature, in the range $\sim$1000 -- 5000\,K depending on the column density and number of multi-phase components (\citealt{Liszt:2001}). Summarize the preceding context in 6 sentences. Do not try to create questions or answers for your summarization.
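As a minimal illustration of how the spin temperature enters such 21-cm absorption work, the standard optically-thin relation $N_{\rm HI} = 1.823\times10^{18}\,T_{\rm s}\int\tau\,{\rm d}v$ (with $N_{\rm HI}$ in cm$^{-2}$, $T_{\rm s}$ in K and $v$ in km\,s$^{-1}$) can be rearranged to infer a profile-averaged spin temperature from a known column density and an observed optical-depth profile. The sketch below uses invented numbers, not data from any survey discussed above.

```python
import numpy as np

def spin_temperature(N_HI, tau, v):
    """Profile-averaged spin temperature (K) from a column density
    N_HI (cm^-2) and an optical-depth profile tau(v) with v in km/s."""
    # trapezoidal integral of tau over velocity
    integral = np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(v))
    return N_HI / (1.823e18 * integral)

# Toy Gaussian absorption line: peak tau = 0.5, FWHM ~ 10 km/s.
v = np.linspace(-50.0, 50.0, 1001)            # velocity axis, km/s
tau = 0.5 * np.exp(-0.5 * (v / 4.25) ** 2)
T_s = spin_temperature(2e20, tau, v)          # DLA-threshold column density
```

For multi-phase sightlines this quantity is the column-density-weighted harmonic mean of the component spin temperatures, which is why deep absorption against a DLA of known $N_{\rm HI}$ constrains the cold gas fraction.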
The term comes from the dead man’s hand, the fabled poker hand containing two black aces and two black eights. As the legend goes, the Old West gunfighter, lawman and gambler Wild Bill Hickok was holding the hand when he was murdered in 1876. Eddie Shore, an N.H.L. star in the 1920s and ’30s and later an A.H.L. owner and coach, is believed to have brought the term to hockey, adopting it to refer to players working their way back into the lineup. The distinction has taken on new meaning in the modern N.H.L. as a promotion that tends to involve plenty of watching and waiting. “Typically, the playoffs in the N.H.L. are so long that usually there’s a couple of guys who get banged up. So there’s always a good chance,” Carpenter said. “That’s pretty much what they told us. Just to be ready. You’ve got to look at it as a really good thing and a great opportunity.” As remote as the opportunity may seem, there is a precedent for Black Aces getting the call in the Stanley Cup finals. Earlier this season, Carpenter and Goldobin did not need to look too far to find a player who could attest as much. Ben Smith spent much of this season playing with the Barracudas before being traded to the Toronto Maple Leafs on Feb. 27. Summarize the preceding context in 8 sentences. Do not try to create questions or answers for your summarization.
Beginning in February 2017, mobile phone users in India started receiving messages informing them that the government had made it mandatory for mobile phone numbers to be linked to Aadhaar card numbers failing which the phone number would be deactivated. But unlike the order that made it mandatory to link your PAN with your Aadhaar, most people did not sense any urgency to this order since the deadline notified was February 2018. There was equally a lull on the part of telephone service providers in following up on the notification, but in the past two weeks, there seems to be renewed interest, with customers receiving numerous reminders from their service providers. It's curious that this increasing insistence comes in the aftermath of the Supreme Court's recent ruling in the privacy judgment where a nine-judge bench has unequivocally affirmed that the right to privacy is a fundamental right in India. While the privacy judgment itself does not say anything about Aadhaar, it has to be remembered that the judgment came out of a reference made during the hearing of cases challenging the constitutional validity of Aadhaar. The implementation of Aadhaar was challenged in 2012, and in the course of arguments, there were serious disagreements between the petitioners and the government, with the latter claiming that there was no fundamental right to privacy in India (on the basis of the M.P. Sharma and Kharak Singh eight- and six-judge benches, respectively) while subsequent judgments by smaller benches had held privacy to be a fundamental right. In August 2015, a three-judge bench referred the question of whether privacy is a fundamental right to a higher bench resulting in the Puttaswamy decision of August 2017. The affirmation of the decision provides us a good sense of the standards that will be used to decide the primary Aadhaar case as well as the challenges to the incidental policies arising from it. 
In addition to people who have deferred their decision to link their mobile numbers with their Aadhaar number for reasons of convenience, there are many others who were awaiting the decision of the Supreme Court in the privacy judgment to see if it would have any bearing on the implementation of Aadhaar. In light of the privacy judgment, citizens concerned about the arbitrary manner in which Aadhaar is being tied to every single transaction of the government have good reason to believe that some of these policies of the government will not withstand a constitutional challenge. It seems reasonable to conclude that the renewed urgency with which service providers are pressuring customers to link mobile phone numbers to Aadhaar is at the behest of the government. By exploiting the fact that there exists a legal vacuum about the overall legality of Aadhaar, the government is effectively enforcing a policy whose legal validity remains in grave doubt. While it's not illegal per se, their insistent messaging certainly smacks of illegitimacy, and at the very least, in the aftermath of the privacy judgment, one would have expected the government to suspend the extension of Aadhaar into domains such as mobile phones. Consumers cannot be faulted for succumbing to the combined pressures of the state and their service providers, given the level of inconvenience involved in having your number deactivated. But the policy of executing Aadhaar through the logic of a fait accompli reveals, following Hegel's memorable phrase, a cunning of reason of the state. It is worrying that rights that are formally granted to citizens run the risk of being undone by policies that target them not as vulnerable citizens but as vulnerable consumers.
At a time when it appears that the judiciary remains one of the last bastions against the caprice of the state and its willing accomplices (the private service providers in this case), it is likely that this compulsory linking will be challenged in the courts. One hopes that the judiciary will uphold its own judgments and end the uncertainty that citizen-consumers face, since what is at stake is not just a choice of toothpastes but the fundamental right to communicate, and to do so with one's privacy intact. Summarize the preceding context in 8 sentences. Do not try to create questions or answers for your summarization.
