diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlipz" "b/data_all_eng_slimpj/shuffled/split2/finalzzlipz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlipz" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nFor more than a decade experiments at LEP (CERN) and SLC (SLAC) \ngathered a wealth of high precision high energy hadronic data\nfrom electron-positron annihilation at a range of centre-of-mass \nenergies~\\cite{ALEPH-qcdpaper,lep,sld}. \nThis data provides one of the \n cleanest\nways of probing our quantitative understanding of QCD. \nThis is particularly so because the strong interactions occur only in \nthe final state and are not entangled with the parton density functions associated \nwith beams of hadrons.\nAs the understanding of the strong interaction, and the capability of \nmaking more precise theoretical predictions, develops, \nmore and more stringent comparisons of theory and experiment are possible,\nleading to improved measurements\nof fundamental quantities such as the strong \ncoupling constant~\\cite{expreview}.\n\nIn addition\nto measuring multi-jet production rates, more specific information about the\ntopology of the events can be extracted. To this end, many variables have been\nintroduced which characterise the hadronic structure of an event. \nWith the precision data from LEP and SLC, experimental\ndistributions for such event shape variables have been extensively studied and\nhave been compared with theoretical calculations based on next-to-leading order\n(NLO) parton-level event generator programs~\\cite{ERT,kunszt,event}, \n improved by\nresumming kinematically-dominant leading and next-to-leading logarithms\n(NLO+NLL)~\\cite{ctwt} and by the inclusion of \nnon-perturbative models of power-suppressed hadronisation\neffects~\\cite{power}. \n\nThe precision of the strong coupling constant \ndetermined from event shape data has been limited up to now \nlargely by the scale\nuncertainty of the perturbative NLO calculation. We report here on the \nfirst calculation of NNLO corrections to event shape variables, and discuss \ntheir phenomenological impact.\n\n\n\\section{Event shape variables}\n\\label{sec:shapes}\n\nIn order to characterise hadronic final states in electron-positron\nannihilation, a variety of event shape variables have been proposed in \nthe literature, for a review see e.g.~\\cite{QCDbooks}. 
These variables can be categorised \ninto different classes, \naccording to the minimal number of final-state particles required for them \nto be non-vanishing: in the following we shall only consider three-particle final states, which are thus closely related to three-jet final states.\n\nAmong those shape variables,\nsix~\\cite{shapes}\n were studied in great detail: the thrust $T$, the\nnormalised heavy jet mass $\\rho$, \nthe wide and total jet\nbroadenings $B_W$ and $B_T$, \nthe $C$-parameter and the transition from three-jet to \ntwo-jet final states in the Durham jet algorithm $Y_3$.\n\n\nThe perturbative expansion for the distribution of a \ngeneric observable $y$ up to NNLO at $e^+e^-$ centre-of-mass energy $\\sqrt{s}$, \nfor a renormalisation scale $\\mu^2$, is given by\n\\begin{eqnarray}\n\\frac{1}{\\sigma_{{\\rm had}}}\\, \\frac{\\hbox{d}\\sigma}{\\hbox{d} y} (s,\\mu^2,y) &=& \n\\left(\\frac{\\alpha_s{}(\\mu^2)}{2\\pi}\\right) \\frac{\\hbox{d} \\bar A}{\\hbox{d} y} +\n\\left(\\frac{\\alpha_s{}(\\mu^2)}{2\\pi}\\right)^2 \\left( \n\\frac{\\hbox{d} \\bar B}{\\hbox{d} y} + \\frac{\\hbox{d} \\bar A}{\\hbox{d} y} \\beta_0 \n\\log\\frac{\\mu^2}{s} \\right)\n\\nonumber \\\\ &&\n+ \\left(\\frac{\\alpha_s{}(\\mu^2)}{2\\pi}\\right)^3 \n\\bigg(\\frac{\\hbox{d} \\bar C}{\\hbox{d} y} + 2 \\frac{\\hbox{d} \\bar B}{\\hbox{d} y}\n \\beta_0\\log\\frac{\\mu^2}{s}\n\\nonumber \\\\ &&\n\\hspace{24mm} + \\frac{\\hbox{d} \\bar A}{\\hbox{d} y} \\left( \\beta_0^2\\,\\log^2\\frac{\\mu^2}{s}\n+ \\beta_1\\, \\log\\frac{\\mu^2}{s} \\right)\\bigg)+ {\\cal O}(\\alpha_s^{4}) \\;.\n\\label{eq:NNLOmu} \n\\end{eqnarray}\nThe dimensionless \nperturbative coefficients $\\bar A$, $\\bar B$ and $\\bar C$ depend only \non the event shape variable $y$. They are computed by a fixed-order \nparton-level calculation, which includes final states with three partons \nat LO, up to four partons at NLO and up to five partons at NNLO. \nLO and NLO corrections to event shapes have long been \navailable~\\cite{ERT,kunszt,event}. \n\n The calculation of the NNLO corrections is carried out using \na newly developed\nparton-level event generator programme {\\tt EERAD3}, which contains \nthe relevant \nmatrix elements with up to five external partons~\\cite{3jme,muw2,V4p,tree5p}. \nBesides explicit infrared divergences from the loop integrals, the \nfour-parton and five-parton contributions yield infrared divergent \ncontributions if one or two of the final-state partons become collinear or \nsoft. In order to extract these infrared divergences and combine them with \nthe virtual corrections, the antenna subtraction method~\\cite{ant} \nwas extended to NNLO level~\\cite{ourant} and implemented\nfor $e^+e^- \\to 3\\,\\mathrm{jets}$ and related event-shape variables~\\cite{eerad3}. The analytical cancellation of all \ninfrared divergences serves as a very strong check on the implementation. \n{\\tt EERAD3} yields the perturbative $A$, $B$ and $C$ coefficients as \nhistograms for all infrared-safe event-shape variables \nrelated to three-particle \nfinal states at leading order. From these, \n $\\bar A$, $\\bar B$ and $\\bar C$ are computed by normalising to the total \nhadronic cross section.\nAs a cross-check, the $A$ and $B$ coefficients have also been obtained from an independent integration~\\cite{event}\nof the NLO matrix elements~\\cite{ERT}, showing excellent agreement. 
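\n\nTo make explicit how these binned coefficients enter the prediction, the scale-dependent expression of eq.\\ (\\ref{eq:NNLOmu}) can be assembled bin by bin from the $\\bar A$, $\\bar B$ and $\\bar C$ histograms. The following minimal Python sketch is purely illustrative (the function and variable names are not part of {\\tt EERAD3}); {\\tt alphas} denotes $\\alpha_s(\\mu^2)$, and {\\tt beta0}, {\\tt beta1} must be supplied in the same convention as eq.\\ (\\ref{eq:NNLOmu}):\n\\begin{verbatim}\nimport math\n\ndef nnlo_distribution(A, B, C, alphas, mu2, s, beta0, beta1):\n    # A, B, C: binned coefficients dAbar\/dy, dBbar\/dy, dCbar\/dy\n    # returns the binned NNLO prediction for (1\/sigma_had) dsigma\/dy\n    a = alphas \/ (2.0 * math.pi)\n    L = math.log(mu2 \/ s)\n    return [a * Ai\n            + a**2 * (Bi + Ai * beta0 * L)\n            + a**3 * (Ci + 2.0 * Bi * beta0 * L\n                      + Ai * (beta0**2 * L**2 + beta1 * L))\n            for Ai, Bi, Ci in zip(A, B, C)]\n\\end{verbatim}\nVarying {\\tt mu2} around the natural choice $\\mu^2=s$ in this expression gives the renormalisation scale dependence discussed below.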
\n\nFor small values of the event shape variable $y$, the fixed-order expansion, \neq.\\ (\\ref{eq:NNLOmu}), fails to converge, \nbecause the fixed-order coefficients are enhanced by powers of $\\hbox{ln}(1\/y)$.\nIn order to obtain reliable predictions\nin the region of $y \\ll 1$, it is necessary to resum entire sets of logarithmic terms at all orders in $\\alpha_s$. \nA detailed description of the predictions at next-to-leading-logarithmic approximation (NLLA) can\nbe found in Ref.\\ \\cite{as_theory-uncertainties}. \n\n\n\\section{NNLO results}\n\n\nThe precise size and shape of the NNLO corrections depend on the observable \nin question. Common to all observables is the divergent behaviour of \nthe fixed-order prediction in the two-jet limit, where soft-gluon effects \nat all orders become important, and where resummation is needed. For several \nevent shape variables \n (especially $T$ and $C$), the full kinematical range is not yet realised \nfor three partons, but attained only in the multi-jet limit. In this case,\nthe fixed-order description is also insufficient, since it is limited \nto a fixed multiplicity (five partons at NNLO). Consequently, the \nfixed-order description is expected to be reliable in a restricted \ninterval bounded by the two-jet limit on one side and the multi-jet \nlimit on the other side. \n\nIn this intermediate region, we observe that \nthe inclusion of NNLO corrections (evaluated at the $Z$-boson mass, and \nfor a fixed value of the strong coupling constant) typically increases \nthe previously available NLO prediction. \nThe magnitude of this increase differs considerably between \ndifferent observables~\\cite{ourevent}: \nit is substantial for $T$ (18\\%), $B_T$ (17\\%) and \n$C$ (15\\%), moderate for $\\rho$ and $B_W$ (both 10\\%) and small for \n$Y_3$ (6\\%). For all shape variables, we observe that the renormalisation\nscale uncertainty of the NNLO prediction is reduced by a factor of 2 or more\ncompared to the NLO prediction. \nInclusion of the NNLO corrections modifies the shape of the event shape \ndistributions. We observe that \nthe NNLO prediction describes the shape of the measured event shape \ndistributions over a wider kinematical range than the NLO prediction, both \ntowards the two-jet and the multi-jet limit. To illustrate the \nimpact of the NNLO corrections, we compare the fixed-order predictions \nfor $Y_3$ to LEP2 data obtained by the ALEPH experiment in \nFigure~\\ref{fig:y23}, which especially illustrates the improvement\nin the approach to the two-jet region (large $-\\hbox{ln}(Y_3)$). \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[angle=-90,width=10cm]{aleph.y23.ps}\n\\end{center}\n\\caption{\\small Perturbative fixed-order predictions \nfor the $Y_3$-distribution, compared to LEP2 data from ALEPH.} \n\\protect\\label{fig:y23}\n\\end{figure}\n\nThe information contained in the event shape distributions can be \nrestructured by computing individual moments. Moments of event shape \ndistributions have been studied \ntheoretically and experimentally, in particular in view of \nunderstanding non-perturbative power corrections~\\cite{power}.\nConsequently, perturbative NNLO corrections will improve the discrimination \nbetween higher perturbative orders and genuine non-perturbative effects. 
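\nIn the following, the $n$-th moment of an event shape $y$ is defined in the standard way,\n\\begin{displaymath}\n\\langle y^n \\rangle = \\frac{1}{\\sigma_{{\\rm had}}} \\int_0^{y_{\\rm max}} y^n\\, \\frac{\\hbox{d}\\sigma}{\\hbox{d} y}\\, \\hbox{d} y\\;,\n\\end{displaymath}\nwhere $y_{\\rm max}$ denotes the kinematical upper limit of $y$; each moment thus has a perturbative expansion in $\\alpha_s$ analogous to eq.\\ (\\ref{eq:NNLOmu}).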
\nFor the first moment $\\langle 1-T \\rangle$ of the thrust distribution, we find\nthe integrated coefficients\n\\begin{displaymath}\n{\\cal A} = 2.101\\,\\qquad {\\cal B} = 44.98\\,\\qquad \n{\\cal C} = 1095 \\pm 130\\;,\n\\end{displaymath}\nwhich yields for $\\sqrt{s}=\\mu=M_Z$:\n\\begin{displaymath}\n\\langle 1-T \\rangle(\\alpha_s(M_Z) = 0.1189) \n= 0.0398\\, ({\\rm LO})\\; +\\; 0.0146\\, ({\\rm NLO}) \\; \n+ \\; 0.0068 \\, ({\\rm NNLO})\\;.\n\\end{displaymath}\nWork on moments of the event shapes is ongoing.\n\n\\section{Determination of the strong coupling constant}\nUsing the newly computed NNLO corrections to event shape variables, we\nperformed~\\cite{ouras} \na new extraction of $\\alpha_s$ from data on the standard set of \nsix event shape variables, measured \n by the ALEPH\\ collaboration \\cite{ALEPH-qcdpaper}\nat centre-of-mass energies of 91.2, 133, 161, 172, 183, 189, 200 and 206 GeV.\nThe combination of \nall NNLO determinations from all shape variables yields \n\\begin{displaymath}\n \\alpha_s(M_Z) = 0.1240 \\;\\pm\\; 0.0008\\,\\mathrm{(stat)}\n \\;\\pm\\; 0.0010\\,\\mathrm{(exp)}\n \\;\\pm\\; 0.0011\\,\\mathrm{(had)}\n \\;\\pm\\; 0.0029\\,\\mathrm{(theo)} .\n \\end{displaymath}\nWe observe a clear improvement in the fit quality when going to\nNNLO accuracy. Compared to NLO, the value of $\\alpha_s$ is lowered \nby about 10\\%, but it is still higher than for NLO+NLLA~\\cite{ALEPH-qcdpaper},\n which \nshows the obvious need for a matching of NNLO+NLLA to obtain a fully reliable \nresult. \n The scatter among the\n $\\alpha_s$-values extracted from different shape variables is \nlowered considerably, and the theoretical uncertainty is decreased by \na factor of 2 (1.3) compared to NLO (NLO+NLLA). \n\nThese observations clearly illustrate the improvements gained from \nthe inclusion of the NNLO corrections, and highlight the need for \nfurther studies on the matching of NNLO+NLLA, and on the \nderivation of NNLLA resummation terms.\n\n\n\\section{Outlook}\nOur results for the NNLO corrections open up a whole \nnew range of possible \ncomparisons with the LEP data.\nThe potential of these studies is\nillustrated by the new determination of \n$\\alpha_s$ reported here, which can be \nfurther improved by the NNLO+NLLA matching currently in progress. \nSimilarly, our results will also allow a renewed study of\npower corrections, now matched to NNLO. \n\n\n\n\\bibliographystyle{JHEP}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nNeural network models have recently contributed towards a great amount of progress in natural language processing. These models typically share a common backbone: recurrent neural networks (RNNs), which have proven capable of tackling a variety of core natural language processing tasks \\cite{hochreiter1997long,elman1990finding}.\nOne such task is language modeling, in which we estimate a probability distribution over sequences of tokens that corresponds to observed sentences (\\S\\ref{sec:background}). Neural language models, especially models conditioned on a particular input, have many applications, including machine translation \\cite{bahdanau2016end}, abstractive summarization \\cite{chopra2016abstractive}, and speech processing \\cite{graves2013speech}. 
Similarly, state-of-the-art language models are almost universally based on RNNs, particularly long short-term memory (LSTM) networks \\cite{jozefowicz2016exploring,inan2016tying,merity2016pointer}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.15,trim=1.2cm 0 0 0]{newlatticelm}\n \\caption{Lattice decomposition of a sentence and its corresponding lattice language model probability calculation}\n \\vspace{-2mm}\n \\label{fig:latticelm}\n\\end{figure}\n\nWhile powerful, LSTM language models usually do not \\textit{explicitly} model many commonly-accepted linguistic phenomena. As a result, standard models lack linguistically informed inductive biases, potentially limiting their accuracy, particularly in low-data scenarios \\cite{adams2017,koehn}. In this work, we present a novel modification to the standard LSTM language modeling framework that allows us to incorporate some varieties of these linguistic intuitions seamlessly:\n\\textit{neural lattice language models} (\\S\\ref{sec:proposed}). Neural lattice language models define a lattice over possible paths through a sentence, and maximize the marginal probability over all paths that lead to generating the reference sentence, as shown in Fig. \\ref{fig:latticelm}. Depending on how we define these paths, we can incorporate different assumptions about how language should be modeled.\n\nIn the particular instantiations of neural lattice language models covered by this paper, we focus on two properties of language that could potentially be of use in language modeling: the existence of multi-word lexical units \\cite{zgusta1967multiword} (\\S\\ref{sec:multitoken}) and polysemy \\cite{ravin2000polysemy} (\\S\\ref{sec:polysemy}). Neural lattice language models allow the model to incorporate these aspects in an end-to-end fashion by simply adjusting the structure of the underlying lattices.\n\nWe run experiments to explore whether these modifications improve the performance of the model (\\S\\ref{sec:experiments}). Additionally, we provide qualitative visualizations of the model to attempt to understand what types of multi-token phrases and polysemous embeddings have been learned.\n\n\n\\section{Background}\n\\label{sec:background}\n\n\\subsection{Language Models}\n\nConsider a sequence $X$ for which we want to calculate its probability.\nAssume we have a vocabulary from which we can select a unique list of $|X|$ tokens $x_1,x_2,\\ldots,x_{|X|}$ such that $X = [x_1;x_2;\\ldots;x_{|X|}]$, i.e. 
the concatenation of the tokens (with an appropriate delimiter).\nThese tokens can be either on the character level \\cite{hwang2017character,DBLP:journals\/corr\/LingTDB15} or word level \\cite{inan2016tying,merity2016pointer}.\nUsing the chain rule, language models generally factorize $p(X)$ in the following way:\n\\begin{align}\n\\label{eq:regmarg}\np(X) &= p(x_1,x_2,\\ldots,x_{|X|}) \\nonumber \\\\\n &= \\prod_{t=1}^{|X|}p(x_t\\mid x_1,x_2,\\ldots,x_{t-1})\n\\end{align}\n\nNote that this factorization is exact only in the case where the segmentation is unique.\nIn character-level models, it is easy to see that this property is maintained, because each token is unique and non-overlapping.\nIn word-level models, this also holds, because tokens are delimited by spaces, and no word contains a space.\n\n\\subsection{Recurrent Neural Networks}\n\nRecurrent neural networks have emerged as the state-of-the-art approach to approximating $p(X)$.\nIn particular, the LSTM cell \\cite{hochreiter1997long} is a specific RNN architecture which has been shown to be effective on many tasks, including language modeling \\cite{press2016using,jozefowicz2016exploring,merity2016pointer,inan2016tying}.%\n\\footnote{In this work, we utilize an LSTM with linked input and forget gates, as proposed by \\newcite{greff2016lstm}.}\nLSTM language models recursively calculate the hidden and cell states ($h_t$ and $c_t$ respectively) given the input embedding $e_{t-1}$ corresponding to token $x_{t-1}$:\n\\begin{align}\n\\label{eqn:lstm}\nh_t, c_t = \\text{LSTM}(h_{t-1},c_{t-1},e_{t-1},\\theta),\n\\end{align}\nthen calculate the probability of the next token given the hidden state, generally by performing an affine transform parameterized by $W$ and $b$, followed by a softmax:\n\\begin{align}\n\\label{eq:softmax}\np(x_t \\mid h_t) := \\text{softmax}(W * h_t + b).\n\\end{align}\n\n\n\n\n\\section{Neural Lattice Language Models}\n\n\\subsection{Language Models with Ambiguous Segmentations}\n\\label{sec:proposed}\n\nTo reiterate, the standard formulation of language modeling in the previous section requires splitting sentence $X$ into a unique list of tokens $x_1,\\ldots,x_{|X|}$.\nOur proposed method generalizes the previous formulation to remove the requirement of uniqueness of segmentation, similar to that used in non-neural $n$-gram language models such as \\newcite{dupont1997lattice} and \\newcite{goldwater2007distributional}.\n\nFirst, we define some terminology.\nWe use the term ``token'', designated by $x_i$, to describe any indivisible item in our vocabulary that has no other vocabulary item as its constituent part.\nWe use the term ``chunk'', designated by $k_i$ or $x_i^j$, to describe a sequence of one or more tokens that represents a portion of the full string $X$, containing the unit tokens $x_i$ through $x_j$: $x_i^j = [x_i;x_{i+1};\\ldots;x_j]$.\nWe also refer to the ``token vocabulary'', which is the subset of the vocabulary containing only tokens, and to the ``chunk vocabulary'', which similarly contains all chunks.\n\nNote that we can factorize the probability of any sequence of chunks $K$ using the chain rule, in precisely the same way as sequences of tokens:\n\\begin{align}\n\\label{eq:chunkmarg}\np(K) &= p(k_1,k_2,\\ldots,k_{|K|}) \\nonumber \\\\\n &= \\prod_{t=1}^{|K|}p(k_t\\mid k_1,k_2,\\ldots,k_{t-1})\n\\end{align}\n\nWe can factorize the overall probability of a token list $X$ in terms of its chunks by using the chain rule, and marginalizing over all segmentations. 
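\nTo make concrete what this marginalization sums over, the following minimal Python sketch (purely illustrative; the names do not correspond to our actual implementation) enumerates all chunk segmentations of a token list given a chunk vocabulary:\n\\begin{verbatim}\ndef segmentations(tokens, chunk_vocab):\n    # Yield every way of splitting the token list into chunks\n    # drawn from chunk_vocab (each chunk is a tuple of tokens).\n    if not tokens:\n        yield []\n        return\n    for j in range(1, len(tokens) + 1):\n        chunk = tuple(tokens[:j])\n        if chunk in chunk_vocab:\n            for rest in segmentations(tokens[j:], chunk_vocab):\n                yield [chunk] + rest\n\\end{verbatim}\nFor instance, with a chunk vocabulary containing the four individual words plus the chunks (new, york) and (new, york, times), the token list ``the new york times'' admits three segmentations; in general the number of segmentations grows exponentially, as formalized next.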
\nFor any particular token list $X$, we define a set of valid segmentations $\\mathcal{S}(X)$, such that for every sequence $S \\in \\mathcal{S}(X)$, $X = [x_{s_0}^{s_1-1};x_{s_1}^{s_2-1};\\ldots;x_{s_{|S|-1}}^{s_{|S|}-1}]$.\nThe factorization is:\n\\small\n\\begin{align}\n\\label{eq:latmarg}\np(X) &= \\sum_S p(X, S) = \\sum_S p(X|S) p(S) = \\sum_{S \\in \\mathcal{S}(X)} p(S) \\nonumber \\\\\n &= \\sum_{S \\in \\mathcal{S}(X)}\\prod_{t=1}^{|S|}p(x_{s_{t-1}}^{s_t-1}\\mid x_{s_0}^{s_1-1},x_{s_1}^{s_2-1},\\ldots,x_{s_{t-2}}^{s_{t-1}-1})\n\\end{align}\n\\normalsize\n\nNote that, by definition, there exists a unique segmentation of $X$ such that $x_1,x_2,\\ldots$ are all tokens, in which case $|S|=|X|$.\nWhen only that one unique segmentation is allowed per $X$, $\\mathcal{S}$ contains only that one element, so the summation drops out, and therefore, for standard character-level and word-level models, Eq.~(\\ref{eq:latmarg}) reduces to Eq.~(\\ref{eq:regmarg}), as desired. \nHowever, for models that license multiple segmentations per $X$, computing this marginalization directly is generally intractable.\nFor example, consider segmenting a sentence using a vocabulary containing all words and all 2-word expressions.\nThe size of $\\mathcal{S}$ would grow exponentially with the number of words in $X$, meaning we would have to marginalize over trillions of unique segmentations for even modestly sized sentences.\n\n\\subsection{Lattice Language Models}\n\n\nTo avoid this, it is possible to re-organize the computations in a lattice, which allows us to dramatically reduce the number of computations required \\cite{dupont1997lattice,neubig2010learning}.\n\nAll segmentations of $X$ can be expressed as the edges of paths through a lattice over token-level prefixes of $X$: $x_{<1}, x_{<2}, \\ldots, X$. The infimum is the empty prefix $x_{<1}$; the supremum is $X$; an edge from prefix $x_{