Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# cria-llama2-7b-v1.3 - GGUF
- Model creator: https://huggingface.co/davzoku/
- Original model: https://huggingface.co/davzoku/cria-llama2-7b-v1.3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [cria-llama2-7b-v1.3.Q2_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q2_K.gguf) | Q2_K | 2.36GB |
| [cria-llama2-7b-v1.3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [cria-llama2-7b-v1.3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [cria-llama2-7b-v1.3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [cria-llama2-7b-v1.3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [cria-llama2-7b-v1.3.Q3_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K.gguf) | Q3_K | 3.07GB |
| [cria-llama2-7b-v1.3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [cria-llama2-7b-v1.3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [cria-llama2-7b-v1.3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [cria-llama2-7b-v1.3.Q4_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_0.gguf) | Q4_0 | 3.56GB |
| [cria-llama2-7b-v1.3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [cria-llama2-7b-v1.3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [cria-llama2-7b-v1.3.Q4_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K.gguf) | Q4_K | 3.8GB |
| [cria-llama2-7b-v1.3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [cria-llama2-7b-v1.3.Q4_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_1.gguf) | Q4_1 | 3.95GB |
| [cria-llama2-7b-v1.3.Q5_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_0.gguf) | Q5_0 | 4.33GB |
| [cria-llama2-7b-v1.3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [cria-llama2-7b-v1.3.Q5_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K.gguf) | Q5_K | 4.45GB |
| [cria-llama2-7b-v1.3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [cria-llama2-7b-v1.3.Q5_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_1.gguf) | Q5_1 | 4.72GB |
| [cria-llama2-7b-v1.3.Q6_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q6_K.gguf) | Q6_K | 5.15GB |
| [cria-llama2-7b-v1.3.Q8_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q8_0.gguf) | Q8_0 | 6.67GB |
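Any of the files above can be run locally with `llama-cpp-python`. The snippet below is a minimal sketch, not part of this repo's official instructions; the choice of the Q4_K_M file, the context size, and the sampling settings are illustrative:

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download one of the GGUF files from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf",
    filename="cria-llama2-7b-v1.3.Q4_K_M.gguf",  # any file from the table works
    n_ctx=2048,  # context window; adjust to your memory budget
)

# The underlying model is Llama-2 chat, so use the [INST] prompt format.
out = llm("<s>[INST] What is a cria? [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_S) trade quality for memory; Q4_K_M and above are the usual balance points for a 7B model.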
Original model description:
---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
tags:
- llama-2
---
# CRIA v1.3
💡 [Article](https://walterteng.com/cria) |
💻 [Github](https://github.com/davzoku/cria) |
📔 Colab [1](https://colab.research.google.com/drive/1rYTs3qWJerrYwihf1j0f00cnzzcpAfYe), [2](https://colab.research.google.com/drive/1Wjs2I1VHjs6zT_GE42iEXsLtYh6VqiJU)
## What is CRIA?
> krē-ə plural crias. : a baby llama, alpaca, vicuña, or guanaco.
<p align="center">
<img src="https://raw.githubusercontent.com/davzoku/cria/main/assets/icon-512x512.png" width="300" height="300" alt="Cria Logo"> <br>
<i>or what ChatGPT suggests, <b>"Crafting a Rapid prototype of an Intelligent llm App using open source resources"</b>.</i>
</p>
The initial objective of the CRIA project is to build an end-to-end chatbot system, from instruction-tuning a large language model to deploying it on the web with frameworks such as Next.js.
Specifically, we have fine-tuned the `llama-2-7b-chat-hf` model with QLoRA (4-bit precision) using the [mlabonne/CodeLlama-2-20k](https://huggingface.co/datasets/mlabonne/CodeLlama-2-20k) dataset. This fine-tuned model serves as the backbone for the [CRIA chat](https://chat.walterteng.com) platform.
## 📦 Model Release
CRIA v1.3 comes in three variants:
- [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3): Merged Model
- [davzoku/cria-llama2-7b-v1.3-GGML](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGML): Quantized Merged Model
- [davzoku/cria-llama2-7b-v1.3_peft](https://huggingface.co/davzoku/cria-llama2-7b-v1.3_peft): PEFT adapter (see the loading sketch after this list)
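As a rough sketch, the PEFT adapter can be applied on top of the base chat model with `peft`. The base model id `meta-llama/Llama-2-7b-chat-hf` is an assumption inferred from the training description above, not something stated in this card:

```python
# pip install transformers peft accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base chat model, then attach the CRIA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # assumed base model; gated on Hugging Face
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "davzoku/cria-llama2-7b-v1.3_peft")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```

Calling `model.merge_and_unload()` on the result would fold the adapter weights into the base model, reproducing a merged checkpoint like the first variant.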
## 🔧 Training
The model was trained in a Google Colab notebook on a T4 GPU with a high-RAM runtime.
### Training procedure
The following `bitsandbytes` quantization config was used during training (restated as a Python config object after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
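For reference, the same settings can be restated as a `transformers` `BitsAndBytesConfig`. This is a sketch of an equivalent config, not the exact object used in the original notebook (which ran PEFT 0.4.0):

```python
# pip install transformers bitsandbytes accelerate
import torch
from transformers import BitsAndBytesConfig

# Restates the training-time quantization settings listed above:
# 4-bit NF4 quantization, fp16 compute, no double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
# Pass quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...)
# to load the base model in 4-bit before attaching LoRA adapters.
```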
### Framework versions
- PEFT 0.4.0
## 💻 Usage
```python
# pip install transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "davzoku/cria-llama2-7b-v1.3"
prompt = "What is a cria?"

# Load the tokenizer and build an fp16 text-generation pipeline,
# letting accelerate place the weights automatically.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Llama-2 chat models expect the [INST] ... [/INST] prompt format.
sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## References
We'd like to thank:
- [mlabonne](https://huggingface.co/mlabonne) for his article and resources on the implementation of instruction tuning
- [TheBloke](https://huggingface.co/TheBloke) for his script for LLM quantization