---
inference: false
language:
- en
library_name: transformers
license: other
model_creator: augtoma
model_link: https://huggingface.co/augtoma/qCammel-70-x
model_name: qCammel 70
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
tags:
- pytorch
- llama
- llama-2
- qCammel-70
---
[Chat & support: my new Discord server](https://discord.gg/theblokeai)

[Want to contribute? TheBloke's Patreon page](https://patreon.com/TheBlokeAI)
# qCammel 70 - GGML
- Model creator: [augtoma](https://huggingface.co/augtoma)
- Original model: [qCammel 70](https://huggingface.co/augtoma/qCammel-70-x)

## Description

This repo contains GGML format model files for [augtoma's qCammel 70](https://huggingface.co/augtoma/qCammel-70-x).

GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with CUDA GPU acceleration:

* [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), version 0.1.77 and later. A Python library with LangChain support and an OpenAI-compatible API server.
* [ctransformers](https://github.com/marella/ctransformers), version 0.2.15 and later. A Python library with LangChain support and an OpenAI-compatible API server.

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/qCammel-70-x-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/qCammel-70-x-GGML)
* [augtoma's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/augtoma/qCammel-70-x)

## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```

## Compatibility

### Requires llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) or later

Or one of the other tools and libraries listed above.

To use in llama.cpp, you must add the `-gqa 8` argument. For other UIs and libraries, please check the docs.
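As a quick illustration of the same settings from Python, loading one of the GGML files below with llama-cpp-python might look like the following. This is a minimal sketch: the file name, layer count and sampling values are illustrative only, and `n_gqa=8` mirrors the `-gqa 8` requirement above.

```python
# Minimal sketch using llama-cpp-python (0.1.77 or later assumed); the model file
# is one of the GGML files listed in "Provided files" below, downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="qcammel-70-x.ggmlv3.q4_K_M.bin",
    n_ctx=4096,       # Llama 2 context length
    n_gqa=8,          # grouped-query attention: required for Llama 2 70B GGML files
    n_gpu_layers=40,  # lower this (or use 0) if you have little or no VRAM
)

# Vicuna-style prompt, as described in the "Prompt template" section above.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Write a story about llamas ASSISTANT:"
)

output = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```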
## Explanation of the new k-quant methods

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
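As a rough sanity check of the Q4_K figure quoted above, the 4.5 bpw can be reproduced from the block structure. This is a back-of-the-envelope sketch only; it assumes one fp16 scale and one fp16 min per super-block, which is not spelled out in the list above.

```python
# Back-of-the-envelope check of the GGML_TYPE_Q4_K bits-per-weight figure.
# Assumed layout: super-blocks of 8 blocks x 32 weights, a 6-bit scale and
# 6-bit min per block, plus one fp16 scale and one fp16 min per super-block.
weights = 8 * 32                   # 256 weights per super-block
quant_bits = weights * 4           # 4-bit quantized weights
block_meta_bits = 8 * (6 + 6)      # 6-bit scale + 6-bit min for each of 8 blocks
superblock_meta_bits = 2 * 16      # fp16 scale + fp16 min for the super-block

bpw = (quant_bits + block_meta_bits + superblock_meta_bits) / weights
print(bpw)  # 4.5
```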
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [qcammel-70-x.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [qcammel-70-x.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [qcammel-70-x.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| [qcammel-70-x.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| [qcammel-70-x.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
| [qcammel-70-x.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| [qcammel-70-x.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| [qcammel-70-x.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| [qcammel-70-x.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [qcammel-70-x.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| [qcammel-70-x.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| qcammel-70-x.ggmlv3.q5_1.bin | q5_1 | 5 | 51.76 GB | 54.26 GB | Original quant method, 5-bit. Higher accuracy, slower inference than q5_0. |
| qcammel-70-x.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| qcammel-70-x.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading.
If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

### q5_1, q6_K and q8_0 files require expansion from archive

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q5_1, q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they are just for storing a .bin file in two parts.
### q5_1

Please download:
* `qcammel-70-x.ggmlv3.q5_1.zip`
* `qcammel-70-x.ggmlv3.q5_1.z01`

### q6_K

Please download:
* `qcammel-70-x.ggmlv3.q6_K.zip`
* `qcammel-70-x.ggmlv3.q6_K.z01`

### q8_0

Please download:
* `qcammel-70-x.ggmlv3.q8_0.zip`
* `qcammel-70-x.ggmlv3.q8_0.z01`

Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:

```
sudo apt update -y && sudo apt install 7zip
7zz x qcammel-70-x.ggmlv3.q6_K.zip
```
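If you prefer to fetch the split archives programmatically rather than through the web UI, something like the following should work. This is a sketch assuming the `huggingface_hub` Python package; the filenames match the q6_K list above.

```python
# Sketch: download both parts of the q6_K multi-part ZIP with huggingface_hub,
# then extract them with 7zip as shown above.
from huggingface_hub import hf_hub_download

repo_id = "TheBloke/qCammel-70-x-GGML"
for filename in ("qcammel-70-x.ggmlv3.q6_K.zip", "qcammel-70-x.ggmlv3.q6_K.z01"):
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    print("downloaded:", local_path)
```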
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 40 -gqa 8 -m qcammel-70-x.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: Write a story about llamas\nASSISTANT:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many layers as you have VRAM for, on one or more GPUs.

Remember the `-gqa 8` argument, required for Llama 70B models.

If you want to have a chat-style conversation, replace the `-p` argument with `-i -ins`.

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Willem Michiel, Ajan Kanaga, Cory Kujawski, Alps Aficionado, Nikolai Manek, Jonathan Leane, Stanislav Ovsiannikov, Michael Levine, Luke Pendergrass, Sid, K, Gabriel Tamborski, Clay Pascal, Kalila, William Sang, Will Dee, Pieter, Nathan LeClaire, ya boyyy, David Flickinger, vamX, Derek Yates, Fen Risland, Jeffrey Morgan, webtim, Daniel P. Andersen, Chadd, Edmond Seymore, Pyrater, Olusegun Samson, Lone Striker, biorpg, alfie_i, Mano Prime, Chris Smitley, Dave, zynix, Trenton Dambrowitz, Johann-Peter Hartmann, Magnesian, Spencer Kim, John Detwiler, Iucharbius, Gabriel Puliatti, LangChain4j, Luke @flexchar, Vadim, Rishabh Srivastava, Preetika Verma, Ai Maven, Femi Adebogun, WelcomeToTheClub, Leonard Tan, Imad Khwaja, Steven Wood, Stefan Sabev, Sebastain Graf, usrbinkat, Dan Guido, Sam, Eugene Pentland, Mandus, transmissions 11, Slarti, Karl Bernard, Spiking Neurons AB, Artur Olbinski, Joseph William Delisle, ReadyPlayerEmma, Olakabola, Asp the Wyvern, Space Cruiser, Matthew Berman, Randy H, subjectnull, danny, John Villwock, Illia Dulskyi, Rainer Wilmers, theTransient, Pierre Kircher, Alexandros Triantafyllidis, Viktor Bowallius, terasurfer, Deep Realms, SuperWojo, senxiiz, Oscar Rangel, Alex, Stephen Murray, Talal Aujan, Raven Klaugh, Sean Connelly, Raymond Fosdick, Fred von Graf, chris gileta, Junyu Yang, Elle

Thank you to all my generous patrons and donaters!
# Original model card: augtoma's qCammel 70

# qCammel-70

qCammel-70 is a fine-tuned version of the Llama-2 70B model, trained on a distilled dataset of 15,000 instructions using QLoRA. This model is optimized for academic medical knowledge and instruction-following capabilities.

## Model Details

*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept their License before downloading this model.*

The fine-tuning process applied to qCammel-70 involves a distilled dataset of 15,000 instructions and is trained with QLoRA.

**Variations** The original Llama 2 has parameter sizes of 7B, 13B, and 70B. This is the fine-tuned version of the 70B model.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** qCammel-70 is based on the Llama 2 architecture, an auto-regressive language model that uses a decoder-only transformer architecture.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

**Research Papers**
- [Clinical Camel: An Open-Source Expert-Level Medical Language Model with Dialogue-Based Knowledge Encoding](https://arxiv.org/abs/2305.12031)
- [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
- [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
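For reference, loading the original unquantised fp16 model (linked in "Repositories available" above) with the `transformers` library might look like the following. This is a minimal sketch: it assumes you have accepted Meta's Llama 2 license on the Hub, have `accelerate` installed for `device_map="auto"`, and have enough GPU memory or offloading for a 70B model; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load augtoma/qCammel-70-x (fp16, unquantised) with transformers.
# A 70B model needs very substantial GPU memory or CPU/disk offloading.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "augtoma/qCammel-70-x"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's fp16 weights
    device_map="auto",    # spread layers across available GPUs / CPU
)

# Vicuna-style prompt, as described in the "Prompt template" section above.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Summarise the main causes of iron-deficiency anaemia. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```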