Quantization by Richard Erkhov.
cria-llama2-7b-v1.3 - GGUF
- Model creator: https://huggingface.co/davzoku/
- Original model: https://huggingface.co/davzoku/cria-llama2-7b-v1.3/
| Name | Quant method | Size |
|---|---|---|
| cria-llama2-7b-v1.3.Q2_K.gguf | Q2_K | 2.36GB |
| cria-llama2-7b-v1.3.IQ3_XS.gguf | IQ3_XS | 2.6GB |
| cria-llama2-7b-v1.3.IQ3_S.gguf | IQ3_S | 2.75GB |
| cria-llama2-7b-v1.3.Q3_K_S.gguf | Q3_K_S | 2.75GB |
| cria-llama2-7b-v1.3.IQ3_M.gguf | IQ3_M | 2.9GB |
| cria-llama2-7b-v1.3.Q3_K.gguf | Q3_K | 3.07GB |
| cria-llama2-7b-v1.3.Q3_K_M.gguf | Q3_K_M | 3.07GB |
| cria-llama2-7b-v1.3.Q3_K_L.gguf | Q3_K_L | 3.35GB |
| cria-llama2-7b-v1.3.IQ4_XS.gguf | IQ4_XS | 3.4GB |
| cria-llama2-7b-v1.3.Q4_0.gguf | Q4_0 | 3.56GB |
| cria-llama2-7b-v1.3.IQ4_NL.gguf | IQ4_NL | 3.58GB |
| cria-llama2-7b-v1.3.Q4_K_S.gguf | Q4_K_S | 3.59GB |
| cria-llama2-7b-v1.3.Q4_K.gguf | Q4_K | 3.8GB |
| cria-llama2-7b-v1.3.Q4_K_M.gguf | Q4_K_M | 3.8GB |
| cria-llama2-7b-v1.3.Q4_1.gguf | Q4_1 | 3.95GB |
| cria-llama2-7b-v1.3.Q5_0.gguf | Q5_0 | 4.33GB |
| cria-llama2-7b-v1.3.Q5_K_S.gguf | Q5_K_S | 4.33GB |
| cria-llama2-7b-v1.3.Q5_K.gguf | Q5_K | 4.45GB |
| cria-llama2-7b-v1.3.Q5_K_M.gguf | Q5_K_M | 4.45GB |
| cria-llama2-7b-v1.3.Q5_1.gguf | Q5_1 | 4.72GB |
| cria-llama2-7b-v1.3.Q6_K.gguf | Q6_K | 5.15GB |
| cria-llama2-7b-v1.3.Q8_0.gguf | Q8_0 | 6.67GB |
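The GGUF files above can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python and huggingface_hub; the repo_id is an assumption, not confirmed by this card, and the Q4_K_M pick and generation settings are illustrative.

```python
# Minimal sketch: download one of the quantized files above and run it
# with llama-cpp-python. The repo_id below is an assumption; point it at
# the repository that actually hosts these .gguf files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf",  # assumed
    filename="cria-llama2-7b-v1.3.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)

# Llama-2 chat prompt format, matching the Usage section further down.
out = llm("<s>[INST] What is a cria? [/INST]", max_tokens=200)
print(out["choices"][0]["text"])
```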
Original model description:
- inference: false
- language: en
- license: llama2
- model_type: llama
- datasets: mlabonne/CodeLlama-2-20k
- pipeline_tag: text-generation
- tags: llama-2
CRIA v1.3
Article | GitHub | Colab 1, 2
What is CRIA?
krē-ə, plural crias: a baby llama, alpaca, vicuña, or guanaco.
or, as ChatGPT suggests, "Crafting a Rapid prototype of an Intelligent llm App using open source resources".
The initial objective of the CRIA project is to develop a comprehensive end-to-end chatbot system, from instruction-tuning a large language model to deploying it on the web with frameworks such as Next.js.
Specifically, we fine-tuned the llama-2-7b-chat-hf model with QLoRA (4-bit precision) on the mlabonne/CodeLlama-2-20k dataset. This fine-tuned model serves as the backbone for the CRIA chat platform.
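For readers who want to reproduce this style of fine-tune, here is a hedged sketch wiring QLoRA up with PEFT and TRL. This is not the CRIA training script: the LoRA hyperparameters, training arguments, and the assumption that the dataset exposes a preformatted text column are all illustrative, and the SFTTrainer keywords follow older TRL releases contemporary with PEFT 0.4.0.

```python
# Illustrative QLoRA setup with PEFT + TRL; not CRIA's actual script.
# LoRA hyperparameters and training arguments are assumptions. SFTTrainer
# keywords follow older TRL releases (~0.4-0.7) matching PEFT 0.4.0.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base = "meta-llama/Llama-2-7b-chat-hf"
dataset = load_dataset("mlabonne/CodeLlama-2-20k", split="train")

# 4-bit NF4 quantization; the full config used for CRIA is listed
# under Training below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

peft_config = LoraConfig(  # illustrative values, not CRIA's
    r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM"
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumes a preformatted "text" column
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="cria-qlora",
                           per_device_train_batch_size=4),
)
trainer.train()
```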
Model Release
CRIA v1.3 comes with several variants.
- davzoku/cria-llama2-7b-v1.3: Merged Model
- davzoku/cria-llama2-7b-v1.3-GGML: Quantized Merged Model
- davzoku/cria-llama2-7b-v1.3_peft: PEFT adapter (a loading sketch follows this list)
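If you prefer to apply the adapter yourself rather than download the merged weights, here is a minimal sketch with the peft library; it assumes the base model is meta-llama/Llama-2-7b-chat-hf, per the description above.

```python
# Minimal sketch: attach the published PEFT adapter to the base model,
# then fold the LoRA weights in for adapter-free inference. Assumes the
# base model is meta-llama/Llama-2-7b-chat-hf, as described above.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "davzoku/cria-llama2-7b-v1.3_peft")
model = model.merge_and_unload()
model.save_pretrained("cria-merged")
```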
Training
The model was trained in a Google Colab notebook on a T4 GPU with high RAM.
Training procedure
The following bitsandbytes quantization config was used during training (transcribed as a BitsAndBytesConfig sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
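For reference, the list above maps directly onto a transformers BitsAndBytesConfig; the sketch below copies the values verbatim (the surrounding training script is not part of this card).

```python
# The training-time quantization config above, expressed as a
# transformers BitsAndBytesConfig. Values are copied from the list;
# load_in_8bit and llm_int8_skip_modules keep their defaults.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```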
Framework versions
- PEFT 0.4.0
Usage
```python
# pip install transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "davzoku/cria-llama2-7b-v1.3"
prompt = "What is a cria?"

tokenizer = AutoTokenizer.from_pretrained(model)

# Text-generation pipeline: load the model in fp16 and spread it
# across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the prompt in the Llama-2 chat template ([INST] ... [/INST]).
sequences = pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```