RichardErkhov committed on
Commit
c4ef68d
1 Parent(s): f348e4d

uploaded readme

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


cria-llama2-7b-v1.3 - GGUF
- Model creator: https://huggingface.co/davzoku/
- Original model: https://huggingface.co/davzoku/cria-llama2-7b-v1.3/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [cria-llama2-7b-v1.3.Q2_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q2_K.gguf) | Q2_K | 2.36GB |
| [cria-llama2-7b-v1.3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [cria-llama2-7b-v1.3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [cria-llama2-7b-v1.3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [cria-llama2-7b-v1.3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [cria-llama2-7b-v1.3.Q3_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K.gguf) | Q3_K | 3.07GB |
| [cria-llama2-7b-v1.3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [cria-llama2-7b-v1.3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [cria-llama2-7b-v1.3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [cria-llama2-7b-v1.3.Q4_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_0.gguf) | Q4_0 | 3.56GB |
| [cria-llama2-7b-v1.3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [cria-llama2-7b-v1.3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [cria-llama2-7b-v1.3.Q4_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K.gguf) | Q4_K | 3.8GB |
| [cria-llama2-7b-v1.3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [cria-llama2-7b-v1.3.Q4_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_1.gguf) | Q4_1 | 3.95GB |
| [cria-llama2-7b-v1.3.Q5_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_0.gguf) | Q5_0 | 4.33GB |
| [cria-llama2-7b-v1.3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [cria-llama2-7b-v1.3.Q5_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K.gguf) | Q5_K | 4.45GB |
| [cria-llama2-7b-v1.3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [cria-llama2-7b-v1.3.Q5_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_1.gguf) | Q5_1 | 4.72GB |
| [cria-llama2-7b-v1.3.Q6_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q6_K.gguf) | Q6_K | 5.15GB |
| [cria-llama2-7b-v1.3.Q8_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q8_0.gguf) | Q8_0 | 6.67GB |

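The table links point at the Hugging Face web viewer (`blob` URLs); the raw file for programmatic download lives at the same path with `resolve` instead of `blob`. A minimal sketch of building that URL for any row (`gguf_url` is a hypothetical helper, not part of any library):

```python
# Sketch: build the direct-download URL for a quant file from the table.
# Hugging Face serves raw files at /{repo_id}/resolve/{revision}/{filename}.
def gguf_url(repo_id: str, filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = gguf_url(
    "RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf",
    "cria-llama2-7b-v1.3.Q4_K_M.gguf",
)
print(url)
```

In practice, `huggingface_hub.hf_hub_download` with the same repo id and filename handles caching and authentication for you.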
Original model description:
---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
tags:
- llama-2
---

# CRIA v1.3

💡 [Article](https://walterteng.com/cria) |
💻 [Github](https://github.com/davzoku/cria) |
📔 Colab [1](https://colab.research.google.com/drive/1rYTs3qWJerrYwihf1j0f00cnzzcpAfYe), [2](https://colab.research.google.com/drive/1Wjs2I1VHjs6zT_GE42iEXsLtYh6VqiJU)

## What is CRIA?

> krē-ə plural crias. : a baby llama, alpaca, vicuña, or guanaco.

<p align="center">
  <img src="https://raw.githubusercontent.com/davzoku/cria/main/assets/icon-512x512.png" width="300" height="300" alt="Cria Logo"> <br>
  <i>or what ChatGPT suggests, <b>"Crafting a Rapid prototype of an Intelligent llm App using open source resources"</b>.</i>
</p>

The initial objective of the CRIA project is to develop a comprehensive end-to-end chatbot system, starting from the instruction tuning of a large language model and extending to its deployment on the web using frameworks such as Next.js.

Specifically, we fine-tuned the `llama-2-7b-chat-hf` model with QLoRA (4-bit precision) using the [mlabonne/CodeLlama-2-20k](https://huggingface.co/datasets/mlabonne/CodeLlama-2-20k) dataset. This fine-tuned model serves as the backbone for the [CRIA chat](https://chat.walterteng.com) platform.

## 📦 Model Release

CRIA v1.3 comes in several variants:

- [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3): merged model
- [davzoku/cria-llama2-7b-v1.3-GGML](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGML): quantized merged model
- [davzoku/cria-llama2-7b-v1.3_peft](https://huggingface.co/davzoku/cria-llama2-7b-v1.3_peft): PEFT adapter

## 🔧 Training

The model was trained in a Google Colab notebook with a T4 GPU and high RAM.

### Training procedure

The following `bitsandbytes` quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0

## 💻 Usage

```python
# pip install transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "davzoku/cria-llama2-7b-v1.3"
prompt = "What is a cria?"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
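The snippet above hard-codes the single-turn Llama-2 chat template. For reference, a small sketch of that template, extended with the standard `<<SYS>>` system-prompt block from the Llama-2 chat format (`format_llama2_prompt` is a hypothetical helper name):

```python
# Sketch of the Llama-2 chat prompt template used in the snippet above.
def format_llama2_prompt(user_msg: str, system_msg: str = "") -> str:
    if system_msg:
        # System instructions are wrapped in <<SYS>> tags inside [INST].
        return (
            f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"
        )
    return f"<s>[INST] {user_msg} [/INST]"

print(format_llama2_prompt("What is a cria?"))
# -> <s>[INST] What is a cria? [/INST]
```

Since the base model is `llama-2-7b-chat-hf`, prompts that follow this template will generally behave better than raw text.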

## References

We'd like to thank:

- [mlabonne](https://huggingface.co/mlabonne) for his article and resources on the implementation of instruction tuning.
- [TheBloke](https://huggingface.co/TheBloke) for his script for LLM quantization.