---
datasets:
  - Open-Orca/OpenOrca
inference: false
language:
  - en
library_name: transformers
license: llama2
model_creator: Open-Orca
model_link: https://huggingface.co/Open-Orca/LlongOrca-7B-16k
model_name: LlongOrca 7B 16K
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
---

TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)


LlongOrca 7B 16K - GGML

Description

This repo contains GGML format model files for Open-Orca's LlongOrca 7B 16K.

Important note regarding GGML files.

The GGML format has now been superseded by GGUF. As of August 21st 2023, llama.cpp no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

About GGML

GGML files are for CPU + GPU inference using llama.cpp and libraries and UIs which support this format, such as:

  • text-generation-webui, the most popular web UI. Supports NVIDIA CUDA GPU acceleration.
  • KoboldCpp, a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
  • LM Studio, a fully featured local GUI with GPU acceleration on both Windows (NVIDIA and AMD) and macOS.
  • LoLLMS Web UI, a great web UI with CUDA GPU acceleration via the ctransformers backend.
  • ctransformers, a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
  • llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.

Repositories available

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
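
For reference, here is a minimal Python sketch (my own illustration, not part of the original card) that assembles a single-turn prompt string in this format; the system message and user prompt shown are placeholders:

def format_chatml(system_message: str, prompt: str) -> str:
    # Mirrors the template above; the final <|im_start|>assistant tag is left
    # open so the model generates the assistant turn.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant"
    )

print(format_chatml("You are a helpful assistant.", "Write a story about llamas"))

When generating, it is usual to stop on the <|im_end|> token so the model does not run on into another turn.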

Compatibility

These quantised GGML files are compatible with llama.cpp between June 6th (commit 2d43387) and August 21st 2023.

For support with latest llama.cpp, please use GGUF files instead.

The final llama.cpp commit with support for GGML was: dadbed99e65252d79f81101a392d0d6497b86caa

As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

Explanation of the new k-quant methods


The new methods available are:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (see the worked example after this list).
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
  • GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
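
As a sanity check on the bpw figures above, here is a small worked example in Python for GGML_TYPE_Q4_K (my own illustration, not taken from llama.cpp; the per-super-block fp16 scale and min are an assumption about the layout):

# Effective bits per weight for GGML_TYPE_Q4_K (illustrative estimate)
weights_per_superblock = 8 * 32           # 8 blocks x 32 weights = 256
weight_bits = 4 * weights_per_superblock  # 4-bit quantized weights
block_overhead = 8 * (6 + 6)              # 6-bit scale + 6-bit min per block
superblock_overhead = 16 + 16             # assumed fp16 scale + fp16 min per super-block
bpw = (weight_bits + block_overhead + superblock_overhead) / weights_per_superblock
print(bpw)  # 4.5, matching the figure quoted above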

Refer to the Provided Files table below to see what files use which methods, and how.

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| llongorca-7b-16k.ggmlv3.q2_K.bin | q2_K | 2 | 3.05 GB | 5.55 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llongorca-7b-16k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 3.12 GB | 5.62 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| llongorca-7b-16k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.45 GB | 5.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llongorca-7b-16k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.77 GB | 6.27 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llongorca-7b-16k.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original quant method, 4-bit. |
| llongorca-7b-16k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.98 GB | 6.48 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| llongorca-7b-16k.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| llongorca-7b-16k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.24 GB | 6.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| llongorca-7b-16k.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llongorca-7b-16k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.79 GB | 7.29 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| llongorca-7b-16k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.92 GB | 7.42 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| llongorca-7b-16k.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llongorca-7b-16k.ggmlv3.q6_K.bin | q6_K | 6 | 5.65 GB | 8.15 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization. |
| llongorca-7b-16k.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

How to run in llama.cpp

Make sure you are using llama.cpp from commit dadbed99e65252d79f81101a392d0d6497b86caa or earlier.

For compatibility with latest llama.cpp, please use GGUF files instead.

./main -t 10 -ngl 32 -m llongorca-7b-16k.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\nYou are a story writing assistant.<|im_end|>\n<|im_start|>user\nWrite a story about llamas<|im_end|>\n<|im_start|>assistant"

Change -t 10 to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use -t 8.

Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change -c 2048 to the desired sequence length for this model. For example, -c 4096 for a Llama 2 model. For extended-context models that use RoPE scaling, add --rope-freq-base 10000 --rope-freq-scale 0.5 for doubled context, or --rope-freq-base 10000 --rope-freq-scale 0.25 for 4x context.
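
Following that pattern (scale ≈ original context / desired context), here is a hedged estimate of the settings for this model's full 16K window, assuming Llama 2's 4096-token base context; please verify against your llama.cpp build before relying on it:

# Rough RoPE scaling estimate for the 16K window (my assumption, not from the original card)
base_context = 4096       # Llama 2 pre-training context length
desired_context = 16384   # this model's advertised window
rope_freq_scale = base_context / desired_context
print(rope_freq_scale)    # 0.25 -> e.g. -c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25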

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins

For other parameters and how to use them, please refer to the llama.cpp documentation
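
For Python users, the same settings can be reproduced with llama-cpp-python (mentioned above). This is a hedged sketch assuming a GGML-era release (pre-GGUF, e.g. 0.1.x); argument names may differ in newer versions:

from llama_cpp import Llama

# Assumes a GGML-compatible llama-cpp-python release (pre-GGUF)
llm = Llama(
    model_path="llongorca-7b-16k.ggmlv3.q4_K_M.bin",
    n_ctx=2048,       # raise for longer context, per the RoPE notes above
    n_threads=10,     # set to your physical core count
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

prompt = (
    "<|im_start|>system\nYou are a story writing assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a story about llamas<|im_end|>\n"
    "<|im_start|>assistant"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
    stop=["<|im_end|>"],
)
print(output["choices"][0]["text"])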

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp.md.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

Original model card: Open-Orca's LlongOrca 7B 16K

🐋 The First Llong Context Orca! 🐋

OpenOrca Logo

OpenOrca - LlongOrca - 7B - 16k

We have used our own OpenOrca dataset to fine-tune on top of LLongMA-2-7b-16k. This dataset is our attempt to reproduce the dataset generated for Microsoft Research's Orca Paper. We use OpenChat packing, trained with Axolotl.

This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our OpenOrcaxOpenChat-Preview2-13B model.

This release reveals that stacking our training on an existing long context fine-tuned model yields significant improvements to model performance. We measured this with BigBench-Hard and AGIEval results, finding ~134% of the base Llongma2-16k model's performance on average.

We have run extensive evaluations internally and expect this model to place #4 on the HuggingFaceH4 Open LLM Leaderboard for 7B models, while achieving >99% of the first-place model's performance, and #1 among longer-context 7B models.

We did this training as part of testing integration of OpenChat's MultiPack algorithm into the Axolotl trainer. MultiPack achieves 99.85% bin-packing efficiency on our dataset. This has significantly reduced training time, with an efficiency improvement of 3-10X over traditional methods.
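
For a loose sense of what bin-packing efficiency means here, below is a generic first-fit-decreasing sketch (my own illustration of sequence packing in general, not the actual MultiPack algorithm):

def pack_sequences(lengths, max_len):
    # Greedily pack tokenized sequence lengths into bins of at most max_len tokens.
    # Generic first-fit-decreasing; illustrative only, not MultiPack itself.
    bins = []
    for length in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + length <= max_len:
                b.append(length)
                break
        else:
            bins.append([length])
    return bins

lengths = [3100, 2000, 1200, 900, 700, 150]   # example tokenized sequence lengths
bins = pack_sequences(lengths, max_len=4096)
# Efficiency: fraction of bin capacity filled with real tokens (the rest is padding).
efficiency = sum(lengths) / (len(bins) * 4096)
print(len(bins), round(efficiency, 4))        # 2 bins, ~0.98 efficiency

Higher packing efficiency means fewer padding tokens per batch, which is where the training-time savings come from.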

Want to visualize our full (pre-filtering) dataset? Check out our Nomic Atlas Map.

Atlas Nomic Dataset Map

Many thanks to @EnricoShippole, @theemozilla, and @kaiokendev1 for the fine work on creating the LlongMA-2-7b-16k model that this was trained on top of!

We are in-process with training more models, so keep an eye on our org for releases coming soon with exciting partners.

We will also give sneak-peek announcements on our Discord, which you can find here:

https://AlignmentLab.ai

Prompt Template

We used OpenAI's Chat Markup Language (ChatML) format, with <|im_start|> and <|im_end|> tokens added to support this.

Example Prompt Exchange

<|im_start|>system
You are LlongOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
How are you now?<|im_end|>

Evaluation

We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have significantly improved upon the base long-context model. We should also place #4 among all 7B models (and #1 for a model with long context) at release time!

AGIEval Performance

We present our performance on AGI Eval in comparison to base Llama2-7B and to Llongma2-7b-16k, which we trained on top of. This demonstrates the benefits of stacking OpenOrca dataset training on existing models. Most notably, there is a very dramatic improvement of nearly 3X in the English writing performance.

LlongOrca 7B 16k AGIEval Performance

BigBench-Hard Performance

We present our performance on BigBench-Hard in comparison to base Llama2-7B and to Llongma2-7b-16k, which we trained on top of. This demonstrates the benefits of stacking OpenOrca dataset training on existing models.

LlongOrca 7B 16k BigBench-Hard Performance

HuggingFaceH4 Open LLM Leaderboard Performance

We have run our own tests using parameters matching the HuggingFaceH4 Open LLM Leaderboard evals.

We place #4 for all 7B models at release time, and #1 for long context models.

LlongOrca 7B 16k Leaderboard Internal Performance

Dataset

We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset. Further details of our curation practices will be forthcoming with our full model releases.

Training

Built with Axolotl

We trained with 8x A6000-48GB (first-gen) GPUs for 37 hours, completing 4 epochs of full fine-tuning on our dataset in one training run. Commodity cost was ~$200. Axolotl training parameters can be found in configs/oo7b.yml. We used the packing-attn branch of Axolotl during training.
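
As a rough back-of-the-envelope check on those figures (my own arithmetic, using only the numbers above):

gpus = 8
hours = 37
gpu_hours = gpus * hours            # 296 GPU-hours for the full run
approx_cost_usd = 200               # stated commodity cost
print(approx_cost_usd / gpu_hours)  # ~0.68 USD per GPU-hour implied rate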

Citation

@software{lian2023llongorca7b,
  title = {LlongOrca7B: Llama2-7B Model Instruct-tuned for Long Context on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/LlongOrca-7B-16k}},
}
@software{openchat,
  title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
  author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
  doi = {10.5281/zenodo.8105775},
  url = {https://github.com/imoneoi/openchat},
  version = {pre-release},
  year = {2023},
  month = {7},
}
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, 
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{longpre2023flan,
      title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, 
      author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
      year={2023},
      eprint={2301.13688},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
@misc{touvron2023llama,
    title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, 
    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
    year={2023},
    eprint={2307.09288},
    archivePrefix={arXiv},
}