seyf1elislam committed
Commit dfa0333 · Parent(s): e834466
Update README.md
README.md CHANGED

````diff
@@ -1,63 +1,24 @@
 ---
 tags:
-- merge
-- mergekit
-- lazymergekit
-- SanjiWatsuki/Kunoichi-DPO-v2-7B
-- mlabonne/NeuralBeagle14-7B
+- GGUF
 base_model:
-- SanjiWatsuki/Kunoichi-DPO-v2-7B
-- mlabonne/NeuralBeagle14-7B
+- seyf1elislam/KunaiBeagle-7b
 ---
-
 # KunaiBeagle-7b
-
-[old lines 15-30 (merge description and the opening of the mergekit configuration block) were not captured in this view]
-      weight: 0.4
-      density: 0.6
-merge_method: dare_ties
-base_model: mistralai/Mistral-7B-v0.1
-parameters:
-  int8_mask: true
-dtype: bfloat16
-```
-
-## 💻 Usage
-
-```python
-!pip install -qU transformers accelerate
-
-from transformers import AutoTokenizer
-import transformers
-import torch
-
-model = "seyf1elislam/KunaiBeagle-7b"
-messages = [{"role": "user", "content": "What is a large language model?"}]
-
-tokenizer = AutoTokenizer.from_pretrained(model)
-prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
-pipeline = transformers.pipeline(
-    "text-generation",
-    model=model,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-
-outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-print(outputs[0]["generated_text"])
-```
+- Model creator: [seyf1elislam](https://huggingface.co/seyf1elislam)
+- Original model: [KunaiBeagle-7b](https://huggingface.co/seyf1elislam/KunaiBeagle-7b)
+
+<!-- description start -->
+## Description
+This repo contains GGUF format model files for [seyf1elislam's KunaiBeagle-7b](https://huggingface.co/seyf1elislam/KunaiBeagle-7b).
+
+## Provided files
+
+| Name | Quant method | Bits | Size | Max RAM required | Use case |
+| ---- | ---- | ---- | ---- | ---- | ----- |
+| [kunaibeagle-7b.Q2_K.gguf](https://huggingface.co/seyf1elislam/KunaiBeagle-7b-GGUF/blob/main/kunaibeagle-7b.Q2_K.gguf) | Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
+| [kunaibeagle-7b.Q3_K_M.gguf](https://huggingface.co/seyf1elislam/KunaiBeagle-7b-GGUF/blob/main/kunaibeagle-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
+| [kunaibeagle-7b.Q4_K_M.gguf](https://huggingface.co/seyf1elislam/KunaiBeagle-7b-GGUF/blob/main/kunaibeagle-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
+| [kunaibeagle-7b.Q5_K_M.gguf](https://huggingface.co/seyf1elislam/KunaiBeagle-7b-GGUF/blob/main/kunaibeagle-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
+| [kunaibeagle-7b.Q6_K.gguf](https://huggingface.co/seyf1elislam/KunaiBeagle-7b-GGUF/blob/main/kunaibeagle-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
+| [kunaibeagle-7b.Q8_0.gguf](https://huggingface.co/seyf1elislam/KunaiBeagle-7b-GGUF/blob/main/kunaibeagle-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |
````
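For running the GGUF files listed above, a minimal sketch using `llama-cpp-python` may help. It is not part of this commit; the choice of the Q4_K_M file, the context size, and the download step are assumptions rather than the author's documented workflow:

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and the Q4_K_M file from the table above has been downloaded, e.g.:
#   huggingface-cli download seyf1elislam/KunaiBeagle-7b-GGUF \
#       kunaibeagle-7b.Q4_K_M.gguf --local-dir .
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window (2048 is an assumption).
llm = Llama(model_path="kunaibeagle-7b.Q4_K_M.gguf", n_ctx=2048)

# Plain-text completion; sampling parameters mirror the usage example
# removed from the original card.
output = llm(
    "What is a large language model?",
    max_tokens=256,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(output["choices"][0]["text"])
```

Per the table, the Q4_K_M file needs roughly 6.87 GB of RAM; the smaller quants trade answer quality for lower memory use.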