page_content='For any x,y∈R^d, we propose transformations ϕ1,ψ1: R^d→R^{d+2} such that ϕ1(x) = [ (D_x^{-1}x)^⊤  0  √(1−∥D_x^{-1}x∥_2^2) ]^⊤ and ψ1(y) = [ (D_y^{-1}y)^⊤  √(1−∥D_y^{-1}y∥_2^2)  0 ]^⊤  (13). Here D_x and D_y are constants chosen so that both x/D_x and y/D_y have norms less than 1. Under these transformations, both ϕ1(x) and ψ1(y) have norm 1 and argmax_{y∈Y}⟨ϕ1(x),ψ1(y)⟩ = argmax_{y∈Y}⟨x,y⟩. Combining the transformations in Eq. (11) and Eq. (13), we obtain the query transform ϕ: R^d→R^{d+3} of the form ϕ(x)=ϕ1(ϕ0(x))' metadata={'source': 'pdfs/paper_3.pdf', 'page': 35} |
page_content='and the data transform ψ: R^d→R^{d+3} of the form ψ(y)=ψ1(ψ0(y)). Using ϕ and ψ, we transform the direction search problem in optimization into a MaxIP problem on the unit sphere. Moreover, given a set Y⊂R^d and a query x∈R^d, the solution z of (c,ϕ,ψ,τ)-MaxIP over (x,Y) has the property that ⟨z−x,∇g(x)⟩ ≤ c·min_{y∈Y}⟨y−x,∇g(x)⟩. Thus, we can approximate the direction search with an LSH-based MaxIP data structure. Note that only MaxIP problems with positive inner product values can be solved by LSH; we find that the direction search problem naturally satisfies this condition.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 35} |
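To make Eq. (13) concrete, here is a minimal numpy sketch of the asymmetric transformation. This is only a sketch under stated assumptions: the function names phi1/psi1 and the per-instance choice of D_x and D_y are illustrative, not taken from the paper's code.

```python
import numpy as np

def phi1(x, Dx):
    """Query transform of Eq. (13): scale, append 0, then the norm-completion term."""
    xs = x / Dx
    return np.concatenate([xs, [0.0], [np.sqrt(1.0 - xs @ xs)]])

def psi1(y, Dy):
    """Data transform of Eq. (13): scale, append the norm-completion term, then 0."""
    ys = y / Dy
    return np.concatenate([ys, [np.sqrt(1.0 - ys @ ys)], [0.0]])

# Toy check: both images are unit-norm and the argmax of the inner product is preserved.
rng = np.random.default_rng(0)
x, Y = rng.normal(size=4), rng.normal(size=(10, 4))
Dx = 1.1 * np.linalg.norm(x)                  # any constants making ||x/Dx||, ||y/Dy|| < 1
Dy = 1.1 * np.linalg.norm(Y, axis=1).max()
unit_scores = np.array([phi1(x, Dx) @ psi1(y, Dy) for y in Y])
assert np.isclose(np.linalg.norm(phi1(x, Dx)), 1.0)
assert (Y @ x).argmax() == unit_scores.argmax()
```

Because each appended coordinate meets a zero on the other side, the extra entries never contribute to the inner product, so ⟨ϕ1(x),ψ1(y)⟩ = ⟨x,y⟩/(D_x D_y) and the maximizer over Y is unchanged.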
page_content='We show that if g is convex, then given a set S⊂R^d, we have min_{s∈S}⟨∇g(x), s−x⟩ ≤ 0 for any x∈B(S), where B(S) is the convex hull of S. Thus, max_{y∈Y}⟨ϕ0(x),ψ0(y)⟩ is non-negative following Eq. (12). J.4 Data Structures. In this section, we present a formal statement for solving the (c,τ)-MaxIP problem on the unit sphere using LSH for (c̄,r)-ANN. Theorem J.9. Let c∈(0,1) and τ∈(0,1). Given an n-vector set Y⊂S^{d−1} on the unit sphere, there exists a data' metadata={'source': 'pdfs/paper_3.pdf', 'page': 35} |
page_content='structure with O(dn^{1+o(1)}) preprocessing time and O(n^{1+o(1)}+dn) space so that for any query x∈S^{d−1}, we take O(d·n^ρ) query time to retrieve the (c,τ)-MaxIP of x in Y with probability at least 0.97, where ρ := 2(1−τ)^2/(1−cτ)^2 − (1−τ)^4/(1−cτ)^4 + o(1). Proof. We know that ∥x−y∥_2^2 = 2−2⟨x,y⟩ for all x,y∈S^{d−1}. In this way, if we have an LSH data structure for (c̄,r)-ANN, it can be used to solve (c,τ)-MaxIP with τ = 1−0.5r^2 and c = (1−0.5c̄^2r^2)/(1−0.5r^2). Next, we write c̄^2 as c̄^2 = (1−c(1−0.5r^2))/(0.5r^2) = (1−cτ)/(1−τ).' metadata={'source': 'pdfs/paper_3.pdf', 'page': 35} |
page_content='Next, we show that if the LSH is initialized following Theorem J.3, it takes O(d·n^ρ) query time, O(n^{1+o(1)}+dn) space and O(dn^{1+o(1)}) preprocessing time to solve (c,τ)-MaxIP through solving (c̄,r)-ANN, where ρ = 2/c̄^2 − 1/c̄^4 + o(1) = 2(1−τ)^2/(1−cτ)^2 − (1−τ)^4/(1−cτ)^4 + o(1). In practice, c increases as we set the parameter τ closer to MaxIP(x,Y). There is also another LSH data structure (Andoni & Razenshteyn, 2015) with longer preprocessing time and larger space that can solve the (c,τ)-MaxIP with similar query time complexity.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 35} |
page_content='7. It is straightforward to boost the success probability from a constant to 1−δ by repeating the data structure log(1/δ) times.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 35} |
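As a quick illustration of the reduction used in the proof of Theorem J.9, the parameter conversion and the query exponent can be computed directly. A small Python sketch; the function names are ours, c_bar denotes the ANN approximation factor, and the o(1) term is dropped.

```python
import math

def maxip_to_ann(c: float, tau: float):
    """Map (c, tau)-MaxIP parameters to the (c_bar, r)-ANN parameters from the proof."""
    r = math.sqrt(2.0 * (1.0 - tau))             # tau = 1 - 0.5 r^2
    c_bar_sq = (1.0 - c * tau) / (1.0 - tau)     # c_bar^2 = (1 - c*tau) / (1 - tau)
    return c_bar_sq, r

def query_exponent(c: float, tau: float) -> float:
    """rho = 2(1-tau)^2/(1-c*tau)^2 - (1-tau)^4/(1-c*tau)^4, as stated in Theorem J.9."""
    a = (1.0 - tau) / (1.0 - c * tau)
    return 2.0 * a**2 - a**4

print(maxip_to_ann(0.9, 0.8))     # -> (c_bar^2, r) for c = 0.9, tau = 0.8
print(query_exponent(0.9, 0.8))   # ~0.76 < 1, so the O(d * n^rho) query time is sublinear in n
```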
page_content='We refer readers to Section 8.2 in (Shrivastava et al., 2021) for more details (footnote 8). Moreover, Theorem J.9 can be applied to the projected MaxIP problem. Theorem J.10. Let c∈(0,1) and τ∈(0,1). Let ϕ,ψ: R^d→R^k denote two transforms. Let T_ϕ denote the time to compute ϕ(x) and T_ψ the time to compute ψ(y). Given a set of n points Y⊂R^d with ψ(Y)⊂S^{k−1} on the sphere, one can' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='construct a data structure with O(dn^{1+o(1)}+T_ψ n) preprocessing time and O(n^{1+o(1)}+dn) space so that for any query x∈R^d with ϕ(x)∈S^{k−1}, we take O(d·n^ρ+T_ϕ) query time to solve (c,ϕ,ψ,τ)-MaxIP with respect to (x,Y) with probability at least 0.9, where ρ := 2(1−τ)^2/(1−cτ)^2 − (1−τ)^4/(1−cτ)^4 + o(1). Proof. The preprocessing phase can be decomposed into two parts. • It takes O(T_ψ n) time to transform every y∈Y into ψ(y).' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='• It takes O(dn^{1+o(1)}) time and O(n^{1+o(1)}+dn) space to index every ψ(y) into the LSH using Theorem J.9. The query phase can be decomposed into two parts. • It takes O(T_ϕ) time to transform every x∈R^d into ϕ(x). • It takes O(d·n^ρ) time to perform the query for ϕ(x) in the LSH using Theorem J.9. K Self-attention layer as a clustering algorithm. The self-attention layer in the Transformer looks like mean-shift clustering. Suppose {(x_j,v_j)} are a set of key and value' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='pairs and q is the query. Note that q=W_q x, k=W_k x and v=W_v x are computed by three projection matrices W_k, W_q and W_v from a common x. Then from self-attention we have: v = Σ_j p_j v_j = (Σ_j exp(x^⊤W_q^⊤W_k x_j) W_v x_j) / (Σ_j exp(x^⊤W_q^⊤W_k x_j)) = W_v (Σ_j exp(x^⊤W_q^⊤W_k x_j) x_j) / (Σ_j exp(x^⊤W_q^⊤W_k x_j))  (14), where sim(q,k_j) := exp(q^⊤k_j) = exp(x^⊤W_q^⊤W_k x_j) and p_j = sim(q,k_j)/Σ_j sim(q,k_j). On the other hand, mean-shift clustering looks like the following: m(x) = (Σ_j K(x_j,x) x_j) / (Σ_j K(x_j,x))  (15)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='where K(x_j,x) is a kernel that measures the similarity between x_j and x. According to the mean-shift algorithm, in the next iteration we simply replace x with m(x). So in some sense, self-attention performs a kind of clustering of the input embeddings q and k, plus a transformation of the embedding to another place. The term “projection” is due to the fact that there is a projection matrix W_v on x for the next level.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
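The correspondence between Eq. (14) and Eq. (15) is easy to check numerically. A small torch sketch (single query, single head, no 1/√d scaling or LayerNorm; all names are illustrative): the attention output equals W_v applied to one mean-shift step with kernel K(x_j, x) = exp(x^⊤W_q^⊤W_k x_j).

```python
import torch

torch.manual_seed(0)
d, n = 16, 8
Wq, Wk, Wv = (torch.randn(d, d) / d**0.5 for _ in range(3))
x, xs = torch.randn(d), torch.randn(n, d)        # query token and context tokens

# Self-attention output, Eq. (14): v = sum_j p_j W_v x_j
logits = (Wq @ x) @ (Wk @ xs.T)                  # x^T Wq^T Wk x_j for each j
p = torch.softmax(logits, dim=0)
v = (p[:, None] * (xs @ Wv.T)).sum(dim=0)

# One mean-shift step, Eq. (15), with kernel K(x_j, x) = exp(x^T Wq^T Wk x_j)
K = torch.exp(logits)
m = (K[:, None] * xs).sum(dim=0) / K.sum()

assert torch.allclose(v, Wv @ m, atol=1e-5)      # attention = Wv applied to the mean-shift update
```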
page_content='Residual connection and LayerNorm. Compared to mean-shift, a Transformer layer has a residual connection. Therefore, for single-headed attention, what you actually get is v+x, followed by a LayerNorm. For the residual connection, the mean-shift analogy already shows that the output m(x) contains an x component. The reason we need the residual connection is that the self-attention part might only model the “change” of x in the mean-shift picture, rather than the full update of x. L The role of self-attention' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='Suppose we have a vocabulary of size m and a d-dimensional embedding space. In practice, many papers in NLP have reported clustering behaviors of word embeddings: such a clustering of word embeddings naturally occurs after training. An explanation for the above phenomenon is that, by grouping these word embeddings together, we might generalize better, since similarity between words can now transfer (e.g., A linked to B, B linked to C, then A might link to C as well) and generalization follows.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='Let’s treat it as a fact and focus on how this is achieved and how self-attention plays a role here. 8. Recently, there is a line of work that uses fast MaxIP data structures to speed up iterative-type optimization algorithms (Shrivastava et al., 2021; Song & Ye, 2023; Qin et al., 2023a; Song et al., 2023a).' metadata={'source': 'pdfs/paper_3.pdf', 'page': 36} |
page_content='L.1 The capacity of the embedding layer. First let us take a look at the following pairwise distance constraints between word embeddings (e.g., some words should be close to each other, some should be far away from each other): ∥x_i−x_j∥ = D(i,j)  (16), where D(i,j) is large for i and j that should be far apart and small for i and j that are close to each other. In' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='visualization, this is called Multidimensional Scaling (MDS) (Cox & Cox, 2008).\nNote that in neural network training, the constraint (Eqn. 16) is not directly enforced during training, but the clustering\nnaturally happens. Since we talk about capacity, how we achieve Eqn. 16 doesn’t matter for now.\nIn general we cannot find a fixed low-dimensional embedding ( d≪m) to satisfy these constraints, since we only have md' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='parameters (m vectors, each with d entries), but m^2 constraints. So two vectors that are supposed to be close may not end up close enough (though hopefully they remain relatively close to each other). L.2 The role of self-attention. For this, the self-attention mechanism comes to the rescue, trading model size for additional computation. It fulfills what a (static) embedding cannot achieve: to further group the embedding vectors together in a multi-layer structure.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='Note that one sentence never covers all m words of the vocabulary. Once the words in the sentence are picked, they are grouped together via self-attention layers to collectively represent a concept that can be useful for the task. L.3 How does the clustering happen through self-attention? Now one fundamental question arises: how does the static clustering of embeddings happen during end-to-end training? In practice, no one explicitly enforces the MDS constraint (Eqn. 16).' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='Let’s start with a simple example. We have two unit embeddings x and y with the normalization condition ∥x∥_2 = 1 and ∥y∥_2 = 1, and a simple self-attention layer (without projection) which outputs z: z = (1−p)x + py  (17), where the attention map is: p = e^{x^⊤y} / (e^{x^⊤x} + e^{x^⊤y}) = 1 / (1 + e^{1−x^⊤y})  (18). Note that here we also attend to x, so 0 < p < 1/2 always; the last equality is due to the normalization condition. Now we consider the loss function L = −(1/2)∥z∥_2^2. The intuition behind it is that “for some reason, we found that z is a good' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='representation for our task, and want to make sure its length is as long as possible”. Under this context, what would be the gradient rule for x and y? Will they cluster together? The answer is yes! We can compute ∂z/∂x = (1−p)I + (∂p/∂x)(y−x)^⊤  (19) and ∂z/∂y = pI + (∂p/∂y)(y−x)^⊤  (20). Let t := 1−x^⊤y and define the following function with respect to t: f(t) := (x−y)^⊤z = (1−2p)(1−x^⊤y) > 0  (21). Therefore, we can compute the gradient for x and the gradient for y: −g_x := −∂L/∂x = −(∂z/∂x)(∂L/∂z) = (1−p)^2 x + p(1−p)(1−f(t)) y  (22), and' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='−g_y := −∂L/∂y = −(∂z/∂y)(∂L/∂z) = p^2 y + p(1−p)(1−f(t)) x  (23). Note that since x and y are kept normalized, the term (1−p)^2 x in ∂L/∂x is gone (and similarly p^2 y for g_y). So how x and y move depends on the sign of 1−f(t). With some computation, we can see that 0 < f(t) < 1 when t < 1.5424. In summary, if x^⊤y > −0.4576, then the (negative) gradient pushes x towards y and y towards x, and the clustering of static embeddings happens during training.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
page_content='Note that since both x and y are normalized, −1 ≤ x^⊤y ≤ 1, so this is a quite loose condition that is easily satisfied.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 37} |
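The claim can also be checked numerically. A toy torch sketch (ours, not the paper's code): run gradient descent on L = −½∥z∥² with z and p from Eqs. (17)-(18), renormalize x and y after every step, and watch x^⊤y grow.

```python
import torch

torch.manual_seed(0)
d = 8
x = torch.nn.functional.normalize(torch.randn(d), dim=0)
y = torch.nn.functional.normalize(x + torch.randn(d), dim=0)   # start with x^T y > -0.4576
x.requires_grad_()
y.requires_grad_()

opt = torch.optim.SGD([x, y], lr=0.1)
for _ in range(200):
    p = torch.sigmoid(x @ y - 1.0)        # Eq. (18): p = 1 / (1 + e^{1 - x^T y})
    z = (1 - p) * x + p * y               # Eq. (17)
    loss = -0.5 * (z @ z)                 # L = -1/2 ||z||^2
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                 # keep both embeddings on the unit sphere
        x /= x.norm()
        y /= y.norm()

print(float(x @ y))                       # increases over training: x and y cluster together
```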
page_content='L.4 Multiple embeddings. People might wonder what happens with multiple unit embeddings x, y_1, y_2, ..., y_K. In this case, we can similarly define the self-attention probability p_i (note that here we consider the case that every embedding attends to x): p_i := e^{x^⊤y_i} / (e^{x^⊤x} + Σ_j e^{x^⊤y_j}) = e^{x^⊤y_i} / (1 + Σ_j e^{x^⊤y_j})  (24). Define p_S := Σ_{i=1}^K p_i = 1 − 1/(1 + Σ_j e^{x^⊤y_j}) < 1, and we have: z = (1−p_S)x + Σ_i p_i y_i  (25). Let p̃_i := p_i/p_S be the (normalized) probability on y_i and ȳ := (1/p_S) Σ_i p_i y_i = Σ_i p̃_i y_i' metadata={'source': 'pdfs/paper_3.pdf', 'page': 38} |
page_content='be the weighted mean of {y_i} other than x; then we have: z = (1−p_S)x + p_S ȳ  (26). Now we can still compute the partial derivatives: ∂p_j/∂x = p_j[−p_S ȳ + y_j]  (27), ∂p_j/∂y_i = p_i[−p_j + I(i=j)]x  (28), which gives ∂z/∂x = (1−p_S)I + Σ_j (∂p_j/∂x)(y_j−x)^⊤  (29), ∂z/∂y_i = p_i I + Σ_j (∂p_j/∂y_i)(y_j−x)^⊤  (30). After some manipulation, we have: ∂z/∂x = (1−p_S)[I + p_S ȳ(ȳ−x)^⊤] + p_S Q  (31), where Q := Σ_j p̃_j (y_j−ȳ)(y_j−ȳ)^⊤ is the weighted covariance matrix of the data points {y_j}.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 38} |
page_content='Similar to the two-unit case, we want to check −g_x to see how the embedding x changes over time: −g_x = −∂L/∂x = −(∂z/∂x)(∂L/∂z)  (32) = (1−p_S)^2 x + p_S[(1−2p_S)x^⊤ȳ − (1−p_S) + p_S∥ȳ∥^2]ȳ + p_S Q z. If things are already quite clustered, then ∥ȳ∥ ≈ 1 (usually ∥ȳ∥_2 < 1 since the sphere is a convex set), Qz ≈ 0 (since Q spans the tangent space of z at the sphere and z is perpendicular to it), and we have: −g_x ≈ (1−p_S)^2 x + p_S(1−2p_S)(x^⊤ȳ−1)ȳ  (33)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 38} |
page_content='It is clear that x^⊤ȳ < 1. When p_S > 1/2, which is highly likely for large K, −g_x has a positive component along ȳ and x will move towards ȳ. On the other hand, we can also check ∂z/∂y_i = p_i[I + (1−p_S)x(ȳ−x)^⊤] + p_i x(y_i−ȳ)^⊤  (34), which gives an expression for −g_{y_i}: ···  (35). With the same argument, y_i moves towards ȳ (so all y_i will cluster together) and towards x. When there are W_k and W_q before the embedding, following the same logic, only the column subspace of W_k (or W_q) will' metadata={'source': 'pdfs/paper_3.pdf', 'page': 38} |
page_content='be clustered together. On the other hand, the value part will be different in order to enable encoding of more complicated concepts based on the co-occurrence of multiple tokens. M Link self-attention with generative models. Consider the following self-attention structure. Consider an embedding matrix X∈R^{n×d} and, for embeddings x_i and x_j, let y_ij = ϕ(x_i; x_j) := (1−β_ij)x_i + β_ij x_j, where β_ij := e^{x_i^⊤x_j} / (e^{x_i^⊤x_i} + e^{x_i^⊤x_j})  (36)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 38} |
page_content='Here ϕ(x_i; x_j) := x_i + β_ij(x_j−x_i) is the self-attention operation. More properties of this operator ϕ need to be explored. Then we want to maximize the following objective: max_{X, ∥x_i∥_2=1} Σ_{ijk} P(k|i,j) y_ij^⊤ x_k  (37), or more formally, using a softmax to avoid the trivial solution x_i ≡ x, we have: max_{X, ∥x_i∥_2=1} J := max_{X, ∥x_i∥_2=1} Σ_{ijk} P(k|i,j) log δ_ijk, where δ_ijk := e^{y_ij^⊤x_k} / Σ_k e^{y_ij^⊤x_k}  (38), which is: max_{X, ∥x_i∥_2=1} Σ_{ijk} P(k|i,j) [ y_ij^⊤x_k − log Σ_k e^{y_ij^⊤x_k} ]  (39)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 39} |
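A minimal torch sketch of Eqs. (36)-(38), with toy sizes and a random P(k|i,j) purely for illustration (the function names are ours):

```python
import torch

def pairwise_attention(X: torch.Tensor, i: int, j: int) -> torch.Tensor:
    """y_ij = (1 - beta_ij) x_i + beta_ij x_j, with beta_ij as in Eq. (36)."""
    xi, xj = X[i], X[j]
    beta = torch.sigmoid(xi @ xj - xi @ xi)   # = e^{x_i.x_j} / (e^{x_i.x_i} + e^{x_i.x_j})
    return (1 - beta) * xi + beta * xj

def objective(X: torch.Tensor, P: torch.Tensor) -> torch.Tensor:
    """J = sum_{ijk} P(k|i,j) log softmax_k(y_ij^T x_k), Eq. (38); P has shape [n, n, n]."""
    n = X.shape[0]
    J = X.new_zeros(())
    for i in range(n):
        for j in range(n):
            logits = X @ pairwise_attention(X, i, j)          # y_ij^T x_k for every k
            J = J + (P[i, j] * torch.log_softmax(logits, dim=0)).sum()
    return J

n, d = 6, 16
X = torch.nn.functional.normalize(torch.randn(n, d), dim=1)   # unit-norm embedding matrix
P = torch.softmax(torch.randn(n, n, n), dim=-1)               # a toy generative model P(k|i,j)
print(float(objective(X, P)))
```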
page_content='We can compute its gradient update. Here we assume the index k never appears as an index i or j (encoding and decoding matrices are decoupled); then by the gradient rule, we have: ẋ_k = ∂L/∂x_k = P^⊥_{x_k} Σ_{ij} P(k|i,j)(1−δ_ijk) y_ij  (40), where P^⊥_{x_k} is the projection matrix that projects a vector onto the orthogonal complement of x_k. The projection is due to the constraint ∥x_k∥_2 = 1. If the training converges (ẋ_k = 0), then we know that Σ_{ij} P(k|i,j)(1−δ_ijk) y_ij = γ x_k  (41)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 39} |
page_content='for some γ > 0 (note that γ < 0 would be an unstable stationary point). Depending on the structure of the generative model specified by P(k|i,j), we might end up learning a different embedding matrix X. The first thing we want to check is independence. Assume that for some specific tokens k and i, we have P(k|i,j) = P(k|i) for any j, which means that the frequency of token k has nothing to do with the second entry j. Furthermore, token k is not' metadata={'source': 'pdfs/paper_3.pdf', 'page': 39} |
page_content='connected with any other token i′ ≠ i, i.e., P(k|i′,j) ≡ 0. If we just let δ_ijk = δ > 0, then we have: P(k|i) Σ_j y_ij = γ′ x_k  (42), which yields P(k|i)[ n x_i + Σ_j β_ij(x_j−x_i) ] = γ′ x_k  (43). And we could possibly show that Σ_j β_ij(x_j−x_i) ≈ 0, since β_ij = 1/(1+e^{1−x_i^⊤x_j}) applies equal weights to embeddings around x_i and they cancel out. Therefore, x_k is aligned with x_i. Another thing we might want to check is the identification of two tokens. Assume that there exist two tokens j_1 and j_2 and' metadata={'source': 'pdfs/paper_3.pdf', 'page': 39} |
page_content='specific k and i, so that P(k|i,j_1) = P(k|i,j_2). For other (k,i,j) combinations P(k|i,j) ≡ 0; then we have: P(k|i,j_1) y_{ij_1} = γ_1 x_k  (44) (not sure how to continue). If we have W_q, W_k and W_v, then the formulation does not change much. The only difference is that now β_ij := e^{x_i^⊤ W_{pq} x_j} / (e^{x_i^⊤ W_{pq} x_i} + e^{x_i^⊤ W_{pq} x_j})  (45), and y_ij^⊤ x_k now becomes y_ij^⊤ W_v x_k.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 39} |
page_content='Giraffe: Adventures in Expanding Context Lengths in LLMs\nArka Pal∗, Deep Karkhanis, Manley Roberts,\nSamuel Dooley, Arvind Sundararajan, Siddartha Naidu\nAbacus.AI\nAbstract\nModern large language models (LLMs) that rely on attention mechanisms are typically trained with\nfixed context lengths which enforce upper limits on the length of input sequences that they can handle\nat evaluation time. To use these models on sequences longer than the train-time context length, one' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='might employ techniques from the growing family of context length extrapolation methods — most of\nwhich focus on modifying the system of positional encodings used in the attention mechanism to indicate\nwhere tokens or activations are located in the input sequence. We conduct a wide survey of existing\nmethods of context length extrapolation on a base LLaMA or LLaMA 2 model, and introduce some of' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='our own design as well — in particular, a new truncation strategy for modifying the basis for the position\nencoding.\nWe test these methods using three new evaluation tasks (FreeFormQA, AlteredNumericQA, and\nLongChat-Lines) as well as perplexity, which we find to be less fine-grained as a measure of long context\nperformance of LLMs. We release the three tasks publicly as datasets on HuggingFace. We discover' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='that linear scaling is the best method for extending context length, and show that further gains can be\nachieved by using longer scales at evaluation time. We also discover promising extrapolation capabilities\nin the truncated basis. To support further research in this area, we release three new 13B parameter\nlong-context models which we call Giraffe : 4k and 16k context models trained from base LLaMA-13B,' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='and a 32k context model trained from base LLaMA2-13B. We also release the code to replicate our\nresults.1\n1 Introduction\nIn recent years, transformers [1] have become the dominant neural network architecture in a variety of\nnatural language modelling tasks [2, 3], by dint of their flexibility and their amenability to being trained\non extremely large datasets [4, 5]. Subsequently, a popular term that has been adopted for these neural' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='networks is ‘Large Language Models’ (LLMs) — with the ‘Large’ referring both to the training dataset size\nas well as their parameter count (and indeed, the associated training and environmental cost).\nA key element of the standard transformer architecture is its inherent insensitivity to the ordering of\nthe input elements. Attention is naturally a set-like operation in which the position of the elements does' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='not matter [1]. However, the order of elements is crucial for many important tasks such as parsing natural\nlanguage, coding, forecasting, etc. Thus it is necessary to inject positional information into the inputs of the\nLLM, typically in the form of positional encodings.\nOne possible desideratum of a positional encoding scheme is context length extrapolation : the ability\nto use the LLM for inference on input lengths longer than those it was trained on. Due to the quadratic' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='complexity growth of the attention mechanism in transformers, it is often infeasible to train on large context lengths. The benefits of increased context length are diverse - allowing reading longer documents and papers, ∗Correspondence to arka@abacus.ai. 1 Github repo at: https://github.com/abacusai/Long-Context.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 0} |
page_content='more internal consistency in long conversations with users in LLM-powered chatbots, working on bigger\ncodebases, and so on. We can break context length extrapolation down into two main paradigms. First,\nthere is finetuned extrapolation where a model previously pretrained on shorter contexts is allowed to finetune,\nor update model weights based on the longer context length. Additionally, there is zero-shot extrapolation' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='where a model previously pretrained on short contexts is immediately evaluated on longer context lengths\nwith the same weights as the shorter context model.\nIn this paper, we focus primarily on zero-shot extrapolation and make the following key contributions :\nBenchmark of different context extrapolation schemes We conduct a survey of methods for context\nlength extrapolation with a pretrained base model, and try a few of our own inventions as well. In particular,' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='we present a new truncated basis for position encodings. The focus in this paper on pretrained models also differs from other work in the literature [6, 7], which tends to instead train from scratch with a chosen positional encoding scheme. As mentioned above, although LLMs have been successful, training them is a costly enterprise. Well-known closed-source models include GPT-4 [8] and Claude [9]. Recently the' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='open-source LLaMA [10] has been released by a team at Meta AI, and this was followed by the improved LLaMA2 [11]. In our view, the resources required to train competitive base models of this nature will remain constrained to a few large players. Therefore, it is imperative to be able to modify the models as desired for the end user, ideally with a fraction of the compute power applied. Our main findings are that: • Linear interpolation is the best context length extrapolation method.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='•All context length extrapolation methods show degradation on task accuracy, even for lengths where\nthey provide otherwise coherent output (and perplexity scores are still reasonable).\n•Further context length increase can be achieved by utilising a higher scale factor at evaluation time\nthan finetune time, but seemingly only up to a factor of 2x.\nPublic release of LLM weights and evaluation datasets We release the weights of two new 13B' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='models trained from base LLaMA with an extended context length of 16k and a context length of 4k on HuggingFace. We also release a 13B model trained to a length of 32k from base LLaMA 2. We call this family of models Giraffe. In addition, we release three datasets (LongChat-Lines, FreeFormQA and AlteredNumericQA) to evaluate the long context performance of these, and other, models. LongChat-Lines is' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='a fine-grained key-value retrieval task. FreeFormQA and AlteredNumericQA are question-answering datasets based on the Natural Questions dataset [12]. Some existing work [6, 7] focuses only on perplexity on a document corpus evaluation set as its measure of extrapolation performance. We find that perplexity scores are not as sensitive a measure of long context performance as our introduced tasks. 2 Related Work' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='RoPE In this work, we examine the efficacy of the positional encoding choice of LLaMA [10] for context\nlengths longer than the base model was trained on. The positional encoding used by LLaMA is RoPE (Rotary\nPosition Embedding) [13]. RoPE works by rotating slices of the query and key projection matrices at different\nspeeds. Thus for example even if the query and key are projected to the same encoding, they will be rotated' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='by different amounts depending on their position in the sequence. If they are subsequently unaligned, their\ndot product will be smaller relative to what it would be if they were not rotated at all. Conversely, they\n2https://huggingface.co/abacusai/Giraffe-v1-delta-13b-scaled-16\n3https://huggingface.co/abacusai/Giraffe-v1-delta-13b-scaled-4\n4https://huggingface.co/abacusai/Giraffe-v2-13b-32k\n5https://huggingface.co/datasets/abacusai/LongChat-Lines' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='6https://huggingface.co/datasets/abacusai/WikiQA-Free_Form_QA\n7https://huggingface.co/datasets/abacusai/WikiQA-Altered_Numeric_QA\n2' metadata={'source': 'pdfs/paper_1.pdf', 'page': 1} |
page_content='could become more aligned, leading to a larger dot product and attention score. In RoPE, this rotation is\nhappening at different speeds on all 2-slices of the query and key in the embedding dimension, allowing the\nmodel to build a complex function of attention scores over distances. One of the main appeals of utilising\nthe RoPE method is that it ensures mathematically that the attention score function is dependent only on' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='therelative distance between a query and a key, rather than their absolute positions. This is considered to\nbe a desirable property of LLMs [13, 14].\nALiBi Although RoPE was successful in this aim, the work on ALiBi [6] demonstrated that RoPE was not\nable to perform zero-shot context length extrapolation. The ALiBi paper showed that RoPE quickly degraded\nas it was tested on context lengths longer than the model had seen during training; it also introduced its' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='own proposed alternative that showed superior extrapolation ability on their benchmarks. However, ALiBi\nhas its own shortcomings; its use of simple linear functions for modulating the attention scores over distance\nmeans that it cannot represent as complex distance-attention-functions as the Fourier basis of RoPE. In\naddition, ALiBi uses a single such function per head, further reducing expressive power. This may explain' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='why, although ALiBi does extrapolate, models which utilize it have worse performance than RoPE-based models on benchmarks such as MMLU [2] and the LMSys arena, which measures human preferences [15]. xPos: Sun et al. [7] examine why RoPE fails to extrapolate successfully and determine that this is due to the effect of the high-frequency components causing residual noise in the attention score even when tokens' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='are long distances apart. They attempt to address this by adding an exponentially decaying amplitude\nterm to RoPE. This new method, called xPos, decays these noisy high frequency components faster than\nlow frequency components. This method shows good results on from-scratch training of LLMs [7], and the\nintuition driving it aligns with our own hypotheses on the deficiency of RoPE. However, Sun et al. do not' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='experiment in our setting of interest: taking a model pretrained with RoPE and seeing if it can be coaxed\n(via limited finetuning) to learn the xPos encoding instead. Furthermore, their experiments demonstrate\nthat Blockwise Causal Attention is necessary for them to achieve extrapolation.\nLinear Scaling/Positional Interpolation This simple but effective context length extrapolation tech-\nnique was concurrently reported by kaiokendev [16] and by a team at Meta [17]. The method that is used' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='here is to simply divide the position vector by a scaling factor which fits the input within the context length\nof the original model. The intuition of this technique is to utilize the LLM’s interpolation capability, rather\nthan relying on extrapolation. It is a well known phenomenon that neural networks tend to interpolate\nwithin a range of previously seen values better than they extrapolate outside that range (e.g. [18]). In' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='the specific case of positional encodings, [17] claim that positional interpolation avoids the risk of massive numerical explosion in attention values associated with extrapolation. We perform many experiments on this scheme and variations of it and report the results in this paper. Randomized Positional Encodings: Ruoss et al. present this method in [19]. During training, they randomly generate their position vector by drawing N samples uniformly without replacement from' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='the range [1, L], where N is the training context length and L is a large value that is greater than the\n(assumed to be known prior) maximum evaluation context length. These sampled positions are then sorted\nin increasing order and act as the position inputs that the model sees at evaluation time. During evaluation,\nthe position inputs [1, ..., M] are given to the model. The authors claim improved performance on context' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
page_content='length extrapolation. We independently arrived at a similar idea to this paper but instead randomized by drawing from sub-integer positions approximately in the range [1, N]; see Section 4 for further details. We also note that Ruoss et al. investigate the use of such a scheme for training LLMs from scratch, whereas we are primarily interested in post-hoc finetuning a pretrained LLM with randomization to enable context length extrapolation.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 2} |
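Since most of the schemes surveyed above modify RoPE, a minimal reference sketch of the rotation and of the relative-distance property described earlier may be helpful. This is an illustrative implementation (our own naming, float32, no caching), not the LLaMA code.

```python
import torch

def rope_rotate(x: torch.Tensor, pos: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate the 2-slices of x (shape [seq, dim]) by position-dependent angles (RoPE)."""
    seq, dim = x.shape
    theta = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)   # per-pair frequencies
    ang = pos[:, None] * theta[None, :]                                     # [seq, dim // 2]
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# The attention score depends only on the relative distance: rotating q at position m and
# k at position n gives the same dot product as positions m + s and n + s for any shift s.
q, k = torch.randn(1, 64), torch.randn(1, 64)
s1 = rope_rotate(q, torch.tensor([3.0])) @ rope_rotate(k, torch.tensor([10.0])).T
s2 = rope_rotate(q, torch.tensor([103.0])) @ rope_rotate(k, torch.tensor([110.0])).T
assert torch.allclose(s1, s2, atol=1e-4)
```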
page_content='3 Assessing Long Context Extrapolation. The main question posed in this paper revolves around extending the context length capacity of LLMs. To evaluate this, a commonly used metric in the literature is perplexity [6, 7, 13]. However, as we show in Section 5.3, perplexity is somewhat coarse-grained for evaluating how well the model can use longer context windows. Our intuition is that — in many natural language datasets — a reasonable perplexity score can' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='be achieved even if the model is only attending to information in a limited range (the final 512 tokens, say)\nof the context window. For example, a positional encoding scheme which simply masks out any elements\nof the context (and inner key and query activations in the attention heads) that are greater in length than\nwhat the model was trained on should succeed in achieving a reasonable perplexity score, but would fare\npoorly on the tasks we describe below.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='We expand upon existing work to look at the accuracy of a model when presented with problems which\nhave verifiable answers. In using this metric, we can evaluate how the model is using the additional contextual\ninformation in order to respond to prompts. We rely on two types of evaluation tasks to assess models’ ability\nto extract and use information from long input contexts: the first is key-value retrieval tasks and the other' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='is question answering tasks. By using these two types of tasks, we enforce the requirement of the model to\nattend to the full context in order to obtain high accuracies. We consider the retrieval task to be a more\npure test of information retrieval free of many natural language biases. However, the retrieval task is a\nsomewhat artificial construct which the LLM will likely not have seen during training, so we also include\nquestion answering to replicate more real-world tasks.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='LongChat-Lines We start with a synthetic fine-grained key-value retrieval task first proposed in [20] and\nalso used by [21]. While these works are excellent given the standard contexts of LLMs, they lack the longer\ncontext lengths that are needed to evaluate our experiments. Thus, we utilize the same task as [20, 21], but\ngenerate additional samples of longer context lengths. This task gives the model a prompt with lines of the\nform:\n•line grotesque-classmate: REGISTER CONTENT is <42527 >' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='•line imperfect-bull: REGISTER CONTENT is <3119 >\n•line supreme-inversion: REGISTER CONTENT is <13960 >\n•...\nThe model is asked to memorize the value corresponding to the REGISTER CONTENT for each line and\nis asked at the end to retrieve the value for a specific line. By varying the number of lines in the prompt,\nwe can control the context length. We release longer length versions of this task than in [20], and we also\nrelease the generation script for this task.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
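For concreteness, a tiny sketch of the kind of prompt this task builds (hypothetical word pools and helper name; this is not the released generation script, which draws from a much larger adjective-noun list):

```python
import random

ADJECTIVES = ["grotesque", "imperfect", "supreme", "nimble", "hollow"]   # illustrative pools
NOUNS = ["classmate", "bull", "inversion", "lantern", "orbit"]

def make_longchat_lines_prompt(n_lines: int, seed: int = 0):
    """Build a key-value retrieval prompt in the style of LongChat-Lines."""
    rng = random.Random(seed)
    registers, lines = {}, []
    for _ in range(n_lines):                      # more lines -> longer context
        key = f"{rng.choice(ADJECTIVES)}-{rng.choice(NOUNS)}"
        value = rng.randint(1000, 99999)
        registers[key] = value
        lines.append(f"line {key}: REGISTER CONTENT is <{value}>")
    target = rng.choice(list(registers))
    question = f"What is the REGISTER CONTENT in line {target}?"
    return "\n".join(lines) + "\n" + question, registers[target]

prompt, expected = make_longchat_lines_prompt(50)
```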
page_content='WikiQA We also create two new datasets from the Natural Questions [12] dataset with longer context\nevaluations specifically in mind which we collectively term WikiQA. In this evaluation, the prompt given\nto the LLM is in the format of a Wikipedia document followed by a question pertaining to that document;\nthe model is asked to answer the question. We ensure the answer to the question is a short answer which is' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='either a single word or a small sentence that has an exact string match in the document given to the LLM\nas input. We call this task Free Form QA (FFQA) .\nA potential issue in a Wikipedia based dataset however is that the model could perhaps correctly answer\nfrom its pretrained corpus and not specifically using the information in the context. To resolve this, we have\ncreated another “altered” dataset, which we call Altered Numeric QA (AltQA) . This dataset consists' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content='only of questions which have numerical answers. Here, we change the answer and every occurrence of the\nanswer in the document to a different number, thus ensuring that the LLM must attend to the context, and\nonly the context, in order to give a correct answer. The modification is made as follows:\n4' metadata={'source': 'pdfs/paper_1.pdf', 'page': 3} |
page_content="... is the third and final part of Dante 's Divine Comedy , following\nthe Inferno and the Purgatorio . It is an allegory telling of Dante 's\njourney through Heaven , guided by Beatrice , who symbolises\ntheology . In the poem , Paradise is depicted as a series of\nconcentric spheres surrounding the Earth , consisting of the Moon ,\nMercury , Venus , the Sun , Mars , Jupiter , Saturn , the Fixed\nStars , the Primum Mobile and finally , the Empyrean . It was\nwritten in the early 14th century ..." metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
page_content="Question: \nWho serv es as dante's guide through par adise\nReference Answer:\nBeatriceDocument:FreeForm WIkiQA\n... Greece has hosted the Summer Olympic Games on two\noccasions , the inaug ural modern Olympics in 1896 and again in\n2004 2009 . Both were held in Athens , which along with Paris and\nLos Angeles are the cities that have hosted the Olympic Games\ntwice , with London being the only city to have hosted them three\ntimes . The Greek capital also hosted the 1906 Intercalated Games" metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
page_content=', which at the time were considered to be Olympic Games by the\nInternational Olympic Committee ...\nQuestion: \nWhen w as the last time the olympics were in Greece?\nReference Answer:\n2009* \n*and not 2004Document:AlteredNumeric WIkiQAFigure 1: Example QA snippets from our WikiQA dataset.\n•If the answer is a year, which is quite frequent, (i.e. it is between 1000-2100), we change it to a different' metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
page_content='random value within +/- 10 of the original value. We treat years as a special case so as not to disrupt\nthe overall coherence of the document by having highly anachronistic date values.\n•If the answer is any other number, we change it to a different random number which has the same\nnumber of digits.\nFigure 1 highlights examples from our WikiQA dataset. Since the contexts in our application are long,' metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
page_content='the location of the answer within the context could play a significant role in the model’s ability to answer\nthe question. Therefore, we utilize both of the WikiQA tasks to conduct analysis on the performance of\nthe LLM as both the answer location moves within the document (in the beginning 10%, the last 10%, or\nrandomly anywhere else), as well as with the question given at the beginning or the end of the prompt — in\na bid to replicate the analyses of [22].' metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
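Returning to the AltQA construction, the alteration rule above amounts to something like the following (a hypothetical sketch with our own helper name, not the authors' released preprocessing code):

```python
import random
import re

def alter_numeric_answer(document: str, answer: str, rng: random.Random):
    """Replace a numeric answer, and every occurrence of it in the document, with a new number."""
    value = int(answer)
    if 1000 <= value <= 2100:                     # years: stay within +/- 10 to avoid anachronisms
        new_value = value
        while new_value == value:
            new_value = value + rng.randint(-10, 10)
    else:                                         # other numbers: same digit count, different value
        lo, hi = 10 ** (len(answer) - 1), 10 ** len(answer) - 1
        new_value = value
        while new_value == value:
            new_value = rng.randint(lo, hi)
    new_answer = str(new_value)
    altered = re.sub(rf"\b{re.escape(answer)}\b", new_answer, document)
    return altered, new_answer
```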
page_content='4 Context Length Extrapolation Techniques. We examine several context length extrapolation techniques, including existing approaches (or slight variations on them) as well as our own newly proposed approaches. 4.1 Existing Context Length Extrapolation Techniques. Several methods exist to adapt RoPE positional encodings to longer context lengths. We evaluated the following techniques. Linear Scaling/Positional Interpolation: Here, the position vector is divided by a scaling factor. Hence if the original model was trained on a range of positions [0, 1, ..., 2048], say, then the new model will instead see [0/x, 1/x, ..., 2048/x], where x is the scaling factor.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
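A short sketch of linear scaling (illustrative; the fractional positions produced here replace the integer positions fed to the RoPE rotation, e.g. the rope_rotate sketch shown earlier):

```python
import torch

def scaled_positions(seq_len: int, scale: float) -> torch.Tensor:
    """Positional interpolation: divide integer positions by a scale factor so that a longer
    sequence maps back into the position range the model saw during training."""
    return torch.arange(seq_len, dtype=torch.float32) / scale

pos = scaled_positions(8192, scale=4.0)   # an 8192-token input squeezed into [0, 2048)
assert float(pos.max()) < 2048
```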
page_content='xPos: We wanted to examine whether a checkpoint trained with the base model’s RoPE encoding scheme could be finetuned to the xPos [7] scheme. On top of the programming hurdle of patching the entire attention module to handle xPos’ unique transformation of keys and queries, the major issue presented by this sort of' metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
page_content='adaptation is xPos’ sensitivity to floating point precision. The method relies on scaling the key by numeric\nvalues with large (absolute) exponents; these later cancel in the dot product with the query. For long\ncontexts, however, the large values can actually exceed the magnitude supported by float16. We chose to\n5' metadata={'source': 'pdfs/paper_1.pdf', 'page': 4} |
page_content='work around this by performing the core attention operation in float32 at the cost of a 2X training slow\ndown.\nRandomized Position Encodings Here we randomize the distances between the position values uni-\nformly in the range [ ϵ,2] for 0 < ϵ≪1, rather than using the typical [0 ,1, ..., n ] which has fixed intervals\nof size 1. The intuition behind this approach is that by showing the model many different intra-position' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='distances at finetuning time, the model will be able to generalize to any choice of fine-grained positions at\nevaluation time, thereby allowing for an effective increase in context length by choosing smaller divisions.\nThis has some similarity to the procedure described in Ruoss et al. [19]. We set an upper bound of 2 so that\nthe model will in expectation see a final position of n (as E[X]≈1 for X∼U(ϵ,2)). We also set a positive,' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='non-zero lower bound of ϵ in order to avoid issues with position aliasing due to limited numerical precision. 4.2 Newly Proposed Context Length Extrapolation Techniques. Power Scaling: In the original RoPE, the basis that is used is given by: Θ = {θ_i = 10000^{−2(i−1)/d} | i ∈ {1, 2, ..., d/2}}  (1), where d is the embedding dimension. We instead use the basis given by: Θ* = {θ*_i = θ_i (1 − 2i/d)^k | i ∈ {1, 2, ..., d/2}}  (2)' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='where kis a parameter to be set. By applying this transformation, the high frequency (short distance)\nelements of the basis are less affected than the low frequency (long distance) elements, which are made even\nlower in frequency – see Figure 2. By doing so, our hope was that the model would have to perform less\ncomplex extrapolation for the low frequencies where it has not seen the full range of the periodic function' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='during train time, and thereby extrapolate better. A potential issue however is that the model relies on specific relationships across frequencies that a linear transform preserves but a non-linear transformation destroys. Truncated Basis: Beginning from Equation 1, we instead use the basis given by applying: θ*_i = θ_i for θ_i ≥ b; ρ for a < θ_i < b; 0 for θ_i ≤ a  (3), where ρ is a fixed value that is relatively small, and a and b are chosen cutoff values. The idea here is that we' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='wish to preserve the high frequency components of the basis but set the low frequency elements to a constant\nvalue—in this case, 0. By doing so with a judicious choice of cutoff a, the model will have experienced all\nvalues of the basis in the context length used during finetuning (due to the periodic nature of the sine and\ncosine functions) and should therefore extrapolate better to larger context lengths for evaluation. However,' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='the model still needs to be able to distinguish between distances that span the entire context it was trained\non, so we include the ρfixed frequency as well. In summary, we hope that with this basis the model can avoid\nthe issue of having to learn complicated coefficients in the entire RoPE basis by instead learning smooth\nfunctions at longer distances (as demonstrated in the paper [17]).\nIn Figure 2, we visually compare the frequencies produced by the standard RoPE basis, power scaling,' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
page_content='and truncation.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 5} |
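A numpy sketch of the three bases compared in Figure 2 (Eqs. 1-3). The cutoffs a, b and the constant ρ below use the values reported in Table 1; the function names are ours and the sketch is purely illustrative.

```python
import numpy as np

def rope_basis(d: int, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE basis, Eq. (1): theta_i = base^(-2(i-1)/d) for i = 1..d/2."""
    i = np.arange(1, d // 2 + 1)
    return base ** (-2.0 * (i - 1) / d)

def power_basis(d: int, k: float = 0.5) -> np.ndarray:
    """Power scaling, Eq. (2): theta_i * (1 - 2i/d)^k."""
    i = np.arange(1, d // 2 + 1)
    return rope_basis(d) * (1.0 - 2.0 * i / d) ** k

def truncated_basis(d: int, a: float, b: float, rho: float) -> np.ndarray:
    """Truncation, Eq. (3): keep high frequencies, clamp mid frequencies to rho, zero the rest."""
    theta = rope_basis(d)
    return np.where(theta >= b, theta, np.where(theta > a, rho, 0.0))

d = 128
a, b, rho = (1 / 8) * 2 * np.pi / 2048, 2 * np.pi / 2048, (1 / 16) * 2 * np.pi / 2048
print(power_basis(d)[:4])                    # high-frequency end is barely changed
print(truncated_basis(d, a, b, rho)[-4:])    # low-frequency end is clamped or zeroed
```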
page_content='Figure 2: Comparison of the standard RoPE basis vs the power basis and the truncated basis. The x-axis\nspans over the embedding dimension, and the y-axis is the frequency value of the sine-cosine basis.\n5 Results & Discussion\nIn the following experiments, we finetuned a base LLaMA-13B model on a portion of the RedPajama dataset\n[5] which has been modified so that each data sample has a size of exactly 4096 tokens. We trained with' metadata={'source': 'pdfs/paper_1.pdf', 'page': 6} |
page_content='each positional encoding approach until the evaluation loss roughly plateaued. Loss curves can be found in\nAppendix B.\nWe then further applied instruction finetuning (IFT) with the Vicuna dataset [23] and using LoRA [24]\non the base model. However, we discovered that although IFT did boost accuracies on LongChat-Lines, it\ndid not significantly change the range of contexts that the base model was able to deal with (see Figure 5 in' metadata={'source': 'pdfs/paper_1.pdf', 'page': 6} |
page_content='Appendix C). This we found to be a marked contrast with the WikiQA variants; there, IFT was necessary\nfor the model to produce any meaningful results at all. Hence for LongChat-Lines, we used non-IFT models;\nfor WikiQA, we performed evaluation on a subset of the more promising models with additional IFT.\n5.1 Finetuned Context Length Extrapolation\nLongChat-Lines We conducted evaluations on LongChat-Lines with the techniques described in Section' metadata={'source': 'pdfs/paper_1.pdf', 'page': 6} |
page_content='4 and report the results in Table 1. We expected all models to be able to perform with a non-zero accuracy\nuntil at least 4200 given that the model is finetuned on context lengths of 4096 and convergence is achieved\nin all cases. However, this turned out not to be the case for xPos, which was not able to perform the task\nat all. We suspect this may be because xPos is too different from the RoPE basis for the model to be able' metadata={'source': 'pdfs/paper_1.pdf', 'page': 6} |
page_content='to adapt in finetuning; as we see in Appendix B, the training and evaluation loss for xPos was not able to\nreach the same values as the other methods. This may also be a product of the numerical precision issues\nthat are encountered in the implementation of xPos.\nLinear scaling is able to achieve successful context length extrapolation. It is worth mentioning here that\nwe would expect scaling with a factor of xto achieve non-zero accuracies up to 2048 ·x, due to the base' metadata={'source': 'pdfs/paper_1.pdf', 'page': 6} |
page_content='model being trained on a context length of 2048. Although this is observed with linear scaling with a factor\nof 4, we see in Table 1 much quicker degradation as context length increases with a scaling factor of 16. By\ncontext length 17 500 it is already recording 0% accuracy even though we naively would expect reasonable\n7' metadata={'source': 'pdfs/paper_1.pdf', 'page': 6} |
page_content='Context Length | Linear Scaling (Factor=4) | Linear Scaling (Factor=16) | Power basis | Truncated basis | Randomized position | xPos
2500 | 0.7 | 0.64 | 0.96 | 0.42 | 0.54 | 0
3600 | 0.64 | 0.42 | 0.64 | 0.26 | 0.54 | 0
4200 | 0.56 | 0.56 | 0.1 | 0.18 | 0.3 | 0
4800 | 0.66 | 0.62 | 0 | 0.14 | 0.14 | 0
7100 | 0.36 | 0.4 | 0 | 0.04 | 0.16 | 0
9400 | 0 | 0.22 | 0 | 0 | 0 | 0
11800 | 0 | 0.14 | 0 | 0 | 0 | 0
14000 | 0 | 0.12 | 0 | 0 | 0 | 0
16000 | 0 | 0.1 | 0 | 0 | 0 | 0
17500 | 0 | 0 | 0 | 0 | 0 | 0
20000 | 0 | 0 | 0 | 0 | 0 | 0
22000 | 0 | 0 | 0 | 0 | 0 | 0' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='Table 1: After finetuning LLaMA-13B with a base context length of 4096, this table represents evaluations with different context extrapolation methods on LongChat-Lines. An accuracy of 1.0 would indicate perfect performance and 0.0 indicates getting every evaluation wrong. The power basis uses a parameter of k = 0.5. The truncated basis uses the following parameters: a = (1/8)·(2π/2048), b = 2π/2048, ρ = (1/16)·(2π/2048). Randomization uses a lower bound parameter of ϵ = 1/' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='16. Evaluations are all performed without additional instruction finetuning.\nperformance up to roughly 32 000 context length. We believe that this indicates that there are limits to the\ninterpolation methodology and are interested in examining this further in future work.\nThe power basis, although it performs best at the shortest context, also decays fastest and is unable to\nshow any extrapolation performance beyond 4200 context at all.' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='The randomized position approach may appear to be extrapolating based on the results in the table.\nHowever, this is likely due to how we evaluated the model. At train time, the model samples distances\nuniformly in [ ϵ,2] as described in Section 4. At evaluation time, it is unclear a priori what the best choice\nof positions is. We tried a range of different approaches: fixed distances of size 1, uniform random in [ ϵ,2]' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='and uniform random in [ ϵ,1]. We found best results for extrapolation with the latter, so we report this. We\nhoped that by reducing the upper bound further, we could coax the desired context length extrapolation\nfrom the model. However, going to [ ϵ,0.5] and below significantly degraded the performance of the model.\nOur conclusion from this is that the model cannot independently learn to represent each position without' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='knowing the other positions as well. An interesting avenue for future work would be to condition the Q-proj\nand K-proj matrices on the sampled positions during training (and evaluation).\nThe truncated basis does seem to offer true context length extrapolation, as it is able to achieve non-zero\naccuracies on context lengths outside any values it has seen before. Although the performance does degrade' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='as the length increases and the current manifestation of this is inferior in performance to linear scaling,\nwe believe that this may be a direction of investigation that can lead to better extrapolation performance.\nTruncation can also be combined with linear scaling, as we discuss in Section 5.2.\nWikiQA Variants We further conducted evaluations of the linear scaling and truncated basis approaches' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='on the WikiQA variants described in Section 4. Unlike the retrieval task, we found that models were unable\nto perform this task successfully without any instruction finetuning, so we performed this analysis on only\na few approaches of interest. The results are shown in Tables 2 and 3. They largely match the pattern seen\nin LongChat-Lines—linear scaling with scale factor 4 is able to perform the task up to 7500 context but' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |
page_content='not beyond, whilst scale factor 16 is able to surpass this cutoff but with a slope-off in accuracy. As with\nLongChat-Lines, we see that the models appear to show some degradation of accuracy as context length\nincreases. We see again that the truncated basis is able to extrapolate successfully to about the same context\nlength as in LongChat-Lines with comparable accuracies to linear scaling with scale 4, but again seemingly\ncannot go further than a context length of about 8k.\n8' metadata={'source': 'pdfs/paper_1.pdf', 'page': 7} |