Context Length    Linear scaling (Factor=4)    Linear scaling (Factor=16)    Truncated basis
2000              0.72                         0.69                          0.74
3800              0.72                         0.73                          0.73
7500              0.62                         0.70                          0.46
15000             0.00                         0.68                          0.00
24000             0.00                         0.56                          0.00
32000             0.00                         0.18                          0.00

Table 2: Accuracy of models finetuned with different context extrapolation methods on the AlteredNumericQA variant of WikiQA. Evaluations are all performed with instruction finetuning.

Context Length    Linear scaling (Factor=4)    Linear scaling (Factor=16)    Truncated basis
1900              0.44                         0.47                          0.46
3800              0.49                         0.44                          0.55
7600              0.46                         0.45                          0.36
15000             0.00                         0.48                          0.00
24000             0.00                         0.42                          0.00
32000             0.00                         0.20                          0.00

Table 3: Accuracy of models finetuned with different context extrapolation methods on the FreeFormQA variant of WikiQA. Evaluations are all performed with instruction finetuning.

5.2 Zero-Shot Linear Scaling

In the previous section, we examined the performance of linear scaling using the same scale factor at finetuning time as at evaluation time.
In this section, we instead investigate the effect of using a different scaling factor at evaluation time than the one the model was trained with. In Table 4, we show results for this strategy as evaluated on LongChat-Lines. We find in general that if the model is trained with a scale factor of x, then it can successfully be evaluated zero-shot with a scale factor of 2x (with some reduction of accuracy within the range of context lengths the model could previously handle). It also appears that at a scale factor of 16, the model is no longer able to increase its effective context length using this approach. We also find that evaluating with more than 2x leads to the model breaking and being unable to perform the task.

We show that zero-shot linear scaling can also be applied successfully after finetuning with the truncated basis.
Interestingly, whilst for linear scaling a larger scale factor at evaluation time degrades accuracy on context lengths the model could previously handle, this does not appear to be the case for the truncated basis: instead, the range of context lengths on which the model achieves non-zero accuracy increases, and accuracy improves on the context lengths the model was finetuned on.
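To make the mechanics concrete, the sketch below is a minimal illustration (not the exact training code; the function and parameter names are our own) of how linear scaling enters the RoPE computation: positions are divided by the scale factor before the rotary angles are computed, and zero-shot extrapolation simply uses a larger divisor at evaluation time than at finetuning time.

```python
import torch

def rope_cos_sin(seq_len, dim, scale=1.0, base=10000.0):
    """RoPE angle tables with positions compressed by `scale` (scale=1 is vanilla RoPE)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() / scale  # the only change versus unscaled RoPE
    angles = torch.outer(positions, inv_freq)          # shape (seq_len, dim // 2)
    return angles.cos(), angles.sin()

# Finetuned with scale 4, then evaluated zero-shot with scale 8 (2x the training factor):
train_cos, train_sin = rope_cos_sin(seq_len=4096, dim=128, scale=4.0)
eval_cos, eval_sin = rope_cos_sin(seq_len=8192, dim=128, scale=8.0)
```

Because positions are simply divided by the scale factor, evaluating with a larger divisor than was used at finetuning keeps every position inside a range the model has already seen, which is what enables the zero-shot behaviour discussed above.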
5.3 Comparing Perplexity To Tasks

In Section 3, we introduced two tasks which specifically require a long-context LLM to extract answers from throughout the entire text, arguing that these tasks may assess long context performance better than raw perplexity. To analyze how perplexity fares compared to these tasks, we report perplexities on a held-out set of the RedPajama dataset for a subset of our trained models (see Table 5). Perplexity scores do show a large increase when a context length is reached that the model is completely unable to deal with (for example, beyond 2k on the base LLaMA model, or beyond 8k on the linear scale 4 model). However, they appear less effective at showing the decrease in long-context capability within that effective range. In particular, while we observe a steep drop-off in performance on LongChat-Lines and the WikiQA variants as the context length increases (see the linear scale 4 and truncated basis columns of Table 1), this degradation is not strongly reflected in the perplexity scores at those contexts. In contrast, the linear scale 16 model does appear to have well-correlated perplexity and accuracy on our tasks.
                  Train Scaling = 1 (base model)   Train Scaling = 4            Train Scaling = 16           Truncated (Scaling = 1)
Context Length    Eval 1   Eval 2   Eval 4         Eval 4   Eval 8   Eval 16    Eval 16  Eval 32  Eval 64     Eval 1   Eval 2   Eval 4
2500              0        0.32     0              0.88     0.64     0          0.64     0.24     0           0.42     0.58     0
3600              0        0.3      0              0.8      0.58     0          0.42     0.26     0           0.26     0.44     0
4200              0        0.18     0              0.86     0.62     0          0.56     0.12     0           0.18     0.28     0
4800              0        0        0              0.86     0.62     0          0.62     0.22     0           0.14     0.26     0
7100              0        0        0              0.64     0.38     0          0.4      0.12     0           0.04     0.04     0
9400              0        0        0              0        0.32     0          0.22     0.12     0           0        0.04     0
11800             0        0        0              0        0.30     0          0.14     0.1      0           0        0.04     0
14000             0        0        0              0        0.1      0          0.12     0.04     0           0        0        0
16000             0        0        0              0        0.12     0          0.1      0.02     0           0        0        0
17500             0        0        0              0        0        0          0        0        0           0        0        0
20000             0        0        0              0        0        0          0        0        0           0        0        0
22000             0        0        0              0        0        0          0        0        0           0        0        0

Table 4: Accuracy on LongChat-Lines of models finetuned with different context extrapolation methods and then evaluated with additional linear scaling. Whenever the evaluation linear scale is greater than the training linear scale, this produces zero-shot context length extrapolation. In general, evaluating with a scale factor of 2x the training factor does double the usable context length, at an accuracy cost, for training scale factors 1 and 4 (for training scale factor 16, it does not). The truncated basis accuracy improves with 2x scaling. A more aggressive scale-up of 4x the training factor leads to apparent model failure.
Perhaps most tellingly, we see the shortcoming of perplexity for between-model comparisons. According to Table 5, the truncated basis performs best at 8k and below; however, in Tables 1, 2 and 3 the truncated basis performs significantly worse than the linear scaled models at 8k context.

Perplexity is commonly used in the literature to measure long context performance [13, 7], but we believe these results show that it is not in itself a sufficient measure of long context performance; it is best used alongside other tasks which additionally probe the capabilities of the LLM.

Context Length    LLaMA Base    Linear Scaling (Factor=4)    Linear Scaling (Factor=16)    Truncated Basis
512               4.06          4.06                         4.05                          3.79
1k                3.88          3.87                         3.86                          3.63
2k                3.79          3.75                         3.74                          3.52
4k                9022          3.66                         3.66                          3.46
8k                7198          3.79                         3.97                          3.78
16k               5141          14902                        5.43                          15793
24k               4980          21236                        8.73                          13929
32k               4408          55480                        98.12                         12534

Table 5: Perplexity scores on a held-out evaluation set of the RedPajama dataset at various context lengths on different models. The evaluation length is 256 tokens, and the prompt given to the models is the previous N - 256 tokens of the document, where N is the context length we evaluated. When the context length is too long for the model to handle effectively, the perplexity does blow up; however, within ranges the model can handle, perplexity appears to be less sensitive to context-usage degradation than the LongChat-Lines task, and does not follow the same between-model ranking as that of LongChat-Lines or WikiQA.
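As a concrete reference for this measurement procedure, the following sketch (assuming a Hugging Face causal LM; the checkpoint name and helper function are illustrative, not the authors' evaluation code) scores only the final 256 tokens while conditioning on the preceding N - 256 tokens of the document.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def last_window_perplexity(model, tokenizer, text, context_len, eval_len=256):
    """Perplexity over the last `eval_len` tokens, conditioned on the preceding context."""
    ids = tokenizer(text, return_tensors="pt").input_ids[:, :context_len]
    labels = ids.clone()
    labels[:, :-eval_len] = -100  # -100 masks the prompt tokens out of the loss
    with torch.no_grad():
        loss = model(input_ids=ids, labels=labels).loss  # mean NLL over the evaluated window
    return torch.exp(loss).item()

# Hypothetical usage with one long document:
# tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
# model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
# print(last_window_perplexity(model, tokenizer, document_text, context_len=16_000))
```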
5.4 Analysis of question and answer positioning

For the WikiQA variants, we performed a stratified analysis of the effect of the position of the answer and the question. As described in Section 4, we looked at the impact of placing the answer within the first 10% of the document, the last 10%, or elsewhere randomly. We also examined the effect of putting the question at the beginning or the end of the prompt. The results are shown in Tables 6 and 7, reported for the model finetuned with linear scaling with a factor of 16 and additional instruction finetuning.
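For illustration only (this is not the authors' released dataset code, and the helper below makes simplifying assumptions about the inputs), one way to realise these placement conditions is to move the answer-bearing paragraph into the chosen region of the document and attach the question at either end:

```python
import random

def build_stratified_prompt(paragraphs, answer_idx, question,
                            answer_loc="middle", question_loc="end"):
    """Place the answer-bearing paragraph in the first 10%, last 10%, or middle of the
    document, and put the question before or after the document text."""
    answer_par = paragraphs[answer_idx]
    rest = paragraphs[:answer_idx] + paragraphs[answer_idx + 1:]
    n = len(rest)
    tenth = max(1, n // 10)
    if answer_loc == "start":
        pos = random.randrange(0, tenth)
    elif answer_loc == "end":
        pos = random.randrange(n - tenth, n + 1)
    else:  # "middle": anywhere outside the first and last 10%
        pos = random.randrange(tenth, max(tenth + 1, n - tenth))
    document = "\n".join(rest[:pos] + [answer_par] + rest[pos:])
    if question_loc == "start":
        return f"{question}\n\n{document}"
    return f"{document}\n\n{question}"
```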
                  Answer Location               Question Location
Context Length    Start    Middle    End        Start    End
2000              0.74     0.68      0.66       0.70     0.69
3800              0.70     0.67      0.66       0.63     0.73
7500              0.69     0.63      0.65       0.61     0.70
15000             0.68     0.68      0.69       0.68     0.68
24000             0.30     0.26      0.50       0.14     0.56
32000             0.13     0.13      0.23       0.15     0.18

Table 6: Stratified accuracy analysis on the AlteredNumericQA task. For answers, “Start” refers to the first 10% of the document, “End” to the last 10%, and “Middle” to any other location. For questions, “Start” refers to placing the question before the rest of the language prompt, and “End” refers to placing the question at the end. Results are reported on LLaMA-13B finetuned with a linear scale of 16, with IFT applied.

                  Answer Location               Question Location
Context Length    Start    Middle    End        Start    End
1900              0.37     0.40      0.36       0.27     0.47
3800              0.40     0.30      0.32       0.24     0.44
7600              0.35     0.34      0.35       0.24     0.45
15000             0.44     0.43      0.45       0.40     0.48
24000             0.18     0.20      0.40       0.10     0.42
32000             0.07     0.11      0.18       0.04     0.20

Table 7: Stratified accuracy analysis on the FreeFormQA task. For answers, “Start” refers to the first 10% of the document, “End” to the last 10%, and “Middle” to any other location. For questions, “Start” refers to placing the question before the rest of the language prompt, and “End” refers to placing the question at the end. Results are reported on LLaMA-13B finetuned with a linear scale of 16, with IFT applied.
We aimed to build on a similar analysis from [22]. However, we were not able to replicate the results shown in that paper on the LongChat-13B (16K) model (to which our modeling approach is most comparable). On both FreeFormQA and AlteredNumericQA, we observed no clear trend between the location of the answer within the prompt and the model's accuracy up to 15k context length. There also did not appear to be a significant impact from the location of the question for AlteredNumericQA, but there is a noticeable impact for FreeFormQA, where placing the question at the end yields a significant improvement in accuracy. However, at 24k and 32k context lengths, both datasets clearly show that placing the answer at the end and placing the question at the end each return superior accuracy to their placements elsewhere. These results are a marked contrast to those in [22]. Our takeaway from this is that there is plausibly a great deal of task-conditional variability in how well LLMs can utilize all portions of the context; even small differences in task construction can lead to large differences in observed trends.

6 Conclusion and Limitations

In this paper we examined multiple approaches to finetuning a pretrained base LLaMA or LLaMA 2 LLM with a limited context length such that it is capable of extrapolating zero-shot to new, longer context lengths. We compared the methods using perplexity, as well as two custom tasks that probe long context performance; we find that the custom tasks offer a more fine-grained understanding of long context performance than perplexity. We showed that the method of linear interpolation performed best at context length extrapolation, and found some promise in the potential of a new basis which we termed the truncated basis. We release three models, which we call Giraffe, that extend the context length of the base LLaMA and LLaMA 2 models using the method of linear interpolation.

There is significant room for building on the work presented in this paper.
We note that all methods show a degradation in accuracy on our evaluation tasks as context length increases, even though perplexity often remains reasonable and the model can still produce coherent outputs. This is a shortcoming that would be of interest to address, and in our view addressing it is necessary for claiming ‘true’ long context extrapolation ability of a model.

A limitation of this work is that we only conducted our perplexity analysis on a single document dataset. Future work could look to replicate this analysis on other datasets. Additionally, we focused specifically on context-length extrapolation from a pretrained base model, and in particular the LLaMA and LLaMA 2 models trained with RoPE positional encodings. Future work could investigate whether the analysis herein extends to other positional encoding types and models. Future work could also address the limitations of the linear interpolation method. We see some evidence, on the LongChat-Lines task in particular, of accuracy degradation as the scale factor is increased. What is the limit on the size of the scale factor for this method? Is there a point beyond which it simply does not improve the range of contexts the model can handle? Furthermore, can the truncated basis approach, which seems to show signs of true extrapolation capability, be modified to reach parity with or surpass the linear interpolation method? We believe these are some potential future directions of interest.

References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023.

[2] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2021.
[3] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code, 2021.

[4] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling, 2020.

[5] Together Computer. RedPajama: An open source recipe to reproduce LLaMA training dataset, April 2023.

[6] Ofir Press, Noah A. Smith, and Mike Lewis. Train short, test long: Attention with linear biases enables input length extrapolation, 2022.

[7] Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer, 2022.

[8] OpenAI. GPT-4 technical report, 2023.

[9] Anthropic. Introducing Claude, 2023.

[10] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models, 2023.

[11] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023.

[12] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: A benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.

[13] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding, 2022.

[14] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations, 2018.
[15] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena, 2023.

[16] Things I'm learning while training SuperHOT, 2023.

[17] Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation, 2023.

[18] Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. How neural networks extrapolate: From feedforward to graph neural networks, 2021.

[19] Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length generalization of transformers, 2023.

[20] anadim. A little retrieval test for large language models, May 2023.

[21] Dacheng Li*, Rulin Shao*, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, and Hao Zhang. How long can open-source LLMs truly promise on context length?, June 2023.

[22] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. Lost in the middle: How language models use long contexts, 2023.

[23] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena, 2023.

[24] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models, 2021.
A LLaMA 2

As we were finalising this paper, Meta released LLaMA 2 [11]. We verified that similar results of context length extrapolation were achievable with LLaMA 2 by the linear interpolation method. We applied the same method as described in Section 4, training LLaMA 2-13B on a portion of the RedPajama dataset modified such that each data sample has a size of exactly 4096 tokens. We then also applied instruction finetuning with the Vicuna dataset. We used a scale factor of 8. Performance is shown in Tables 8 and 9.

Context Length    LLaMA 2 Linear (8)
2500              0.48
3600              0.42
4200              0.32
4800              0.74
7100              0.56
9400              0.50
11800             0.50
14000             0.42
16000             0.38
17500             0.14
20000             0.14
22000             0.08
26000             0.00
30000             0.00

Table 8: LLaMA 2 performance on LongChat-Lines with a scale factor of 8.

                  LLaMA 2 Linear (8)
Context Length    AltQA    FFQA
2k                0.72     0.56
4k                0.76     0.55
8k                0.71     0.56
16k               0.59     0.44
24k               0.36     0.28
32k               0.15     0.10

Table 9: LLaMA 2 performance on the WikiQA variants with a scale factor of 8.

We see that the model is able to achieve non-zero accuracies on LongChat-Lines up to a context length of 22000, further than any of the models we tested in the main paper. The model is also able to achieve non-zero performance on the WikiQA variants up to 32k context.
However, we do see diminishing accuracy on both tasks as the context length increases. It is also worth noting that the accuracies on both tasks are slightly lower than those of LLaMA 1 with scale 16 at the context lengths where both models produce non-zero results.
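A minimal sketch, assuming the documents have already been tokenized, of how training samples of exactly 4096 tokens can be packed from a corpus such as RedPajama (this mirrors the preprocessing described above but is not the authors' released code):

```python
def pack_fixed_length(token_streams, sample_len=4096):
    """Concatenate token-id streams and yield chunks of exactly `sample_len` tokens."""
    buffer = []
    for tokens in token_streams:
        buffer.extend(tokens)
        while len(buffer) >= sample_len:
            yield buffer[:sample_len]
            buffer = buffer[sample_len:]
    # any trailing remainder shorter than sample_len is dropped so every sample is full length
```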
B Loss Curves

Figure 3: Training loss curves of models during the initial fitting runs on 4096-token samples extracted from the RedPajama dataset.

Figure 4: Evaluation loss curves of models during the initial fitting runs on 4096-token samples extracted from the RedPajama dataset.
C Impact of IFT

As mentioned in Section 5 of the main text, we found that instruction finetuning with the Vicuna dataset did improve accuracies on LongChat-Lines, but did not change the span of non-zero contexts for a given model. Figure 5 shows this on the model with linear interpolation with scale 4.

Figure 5: Comparison between IFT and non-IFT on LongChat-Lines with linear scaling of 4 applied. Although IFT improves the accuracies, it does not extend the range of contexts on which the model obtains a non-zero accuracy.