shimmyshimmer committed

Commit 091ac00 · verified · 1 Parent(s): e40e2ce

Update README.md

Files changed (1):
  1. README.md +1 -76
README.md CHANGED
@@ -28,84 +28,9 @@ tags:
  <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
  </a>
  </div>
- <h1 style="margin-top: 0rem;">Instructions to run this model in llama.cpp:</h1>
+ <h1 style="margin-top: 0rem;">Finetune your own Reasoning model like R1 with Unsloth!</h1>
  </div>
 
- You can view more detailed instructions here: [unsloth.ai/blog/deepseekr1-dynamic](https://unsloth.ai/blog/deepseekr1-dynamic)
- 1. Remember to include the `<|User|>` and `<|Assistant|>` tokens (or use a chat template formatter), and do not forget the leading `<think>\n`! See the sketch right after this step.
-    Prompt format: `"<|User|>Create a Flappy Bird game in Python.<|Assistant|><think>\n"`
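- A minimal Python sketch of this formatting (ours, not from the original instructions; the helper name `build_prompt` is made up):
- ```python
- # Build a DeepSeek-R1-style prompt string by hand (sketch; a chat template
- # formatter that emits the same tokens is equally fine).
- def build_prompt(user_message: str) -> str:
-     # <|User|> / <|Assistant|> delimit the turns; "<think>\n" opens the reasoning trace.
-     return f"<|User|>{user_message}<|Assistant|><think>\n"
-
- print(build_prompt("Create a Flappy Bird game in Python."))
- ```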
- 2. Obtain the latest `llama.cpp` from https://github.com/ggerganov/llama.cpp. You can follow the build instructions below as well:
- ```bash
- apt-get update
- apt-get install build-essential cmake curl libcurl4-openssl-dev -y
- git clone https://github.com/ggerganov/llama.cpp
- cmake llama.cpp -B llama.cpp/build \
-   -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
- cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split
- cp llama.cpp/build/bin/llama-* llama.cpp
- ```
- 3. It's best to use `--min-p 0.05` to counteract very rare token predictions - I found this to work well, especially for the 1.58-bit model. A sketch of the idea follows this step.
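- As an illustrative sketch (ours, not llama.cpp's actual implementation), min-p keeps only tokens whose probability is at least `min_p` times the top token's probability:
- ```python
- import numpy as np
-
- def min_p_filter(probs: np.ndarray, min_p: float = 0.05) -> np.ndarray:
-     """Zero out tokens below min_p * max(probs), then renormalize (sketch)."""
-     keep = probs >= min_p * probs.max()
-     filtered = np.where(keep, probs, 0.0)
-     return filtered / filtered.sum()
-
- probs = np.array([0.90, 0.06, 0.03, 0.01])
- print(min_p_filter(probs))  # the two very rare tokens (below 0.045) are dropped
- ```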
- 4. Download the model via:
- ```python
- # pip install huggingface_hub hf_transfer
- # import os  # Optional, for faster downloading:
- # os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
-
- from huggingface_hub import snapshot_download
- snapshot_download(
-     repo_id = "unsloth/r1-1776-GGUF",
-     local_dir = "r1-1776-GGUF",
-     allow_patterns = ["*UD-Q2_K_XL*"],  # Select quant type UD-Q2_K_XL for dynamic 2-bit
- )
- ```
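- If you want to check which quant folders exist before choosing an `allow_patterns` filter, here is a small sketch of ours using the `huggingface_hub` API:
- ```python
- from huggingface_hub import list_repo_files
-
- # List every file in the repo, then keep the distinct top-level quant directories.
- files = list_repo_files("unsloth/r1-1776-GGUF")
- quants = sorted({f.split("/")[0] for f in files if "/" in f})
- print(quants)
- ```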
- 5. Run the model. The example below uses a Q4_0 K-quantized cache. **Note: `-no-cnv` disables auto conversation mode.**
- ```bash
- ./llama.cpp/llama-cli \
-     --model r1-1776-GGUF/UD-Q2_K_XL/r1-1776-UD-Q2_K_XL-00001-of-00005.gguf \
-     --cache-type-k q4_0 \
-     --threads 12 -no-cnv --prio 2 \
-     --temp 0.6 \
-     --ctx-size 8192 \
-     --seed 3407 \
-     --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|><think>\n"
- ```
- Example output:
-
- ```txt
- Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
- Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
- Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
- I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
- Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
- ```
-
- 6. If you have a GPU with 24GB of VRAM (an RTX 4090, for example), you can offload several layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers. A back-of-the-envelope sketch for picking the layer count follows.
- ```bash
- ./llama.cpp/llama-cli \
-     --model r1-1776-GGUF/UD-Q2_K_XL/r1-1776-UD-Q2_K_XL-00001-of-00005.gguf \
-     --cache-type-k q4_0 \
-     --threads 12 -no-cnv --prio 2 \
-     --n-gpu-layers 7 \
-     --temp 0.6 \
-     --ctx-size 8192 \
-     --seed 3407 \
-     --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|><think>\n"
- ```
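- A rough way to pick `--n-gpu-layers` (our sketch; every number here is an assumption, including the layer count and the uniform-layer-size approximation):
- ```python
- def estimate_gpu_layers(vram_gb: float, model_gb: float = 211.0,
-                         n_layers: int = 61, reserve_gb: float = 4.0) -> int:
-     """Assume roughly equal-sized layers and reserve headroom for the KV cache."""
-     per_layer_gb = model_gb / n_layers
-     return max(0, int((vram_gb - reserve_gb) / per_layer_gb))
-
- print(estimate_gpu_layers(24.0))  # -> 5 under these assumptions; tune upward if it fits
- ```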
- 7. If you want to merge the split weights back into a single file, use `llama-gguf-split`:
- ```bash
- ./llama.cpp/llama-gguf-split --merge \
-     r1-1776-GGUF/UD-Q2_K_XL/r1-1776-UD-Q2_K_XL-00001-of-00005.gguf \
-     merged_file.gguf
- ```
-
- | Dynamic Bits | Type       | Disk Size | Accuracy | Link | Details |
- | ------------ | ---------- | --------- | -------- | ---- | ------- |
- | 2-bit        | UD-Q2_K_XL | **211GB** | Better   | [Link](https://huggingface.co/unsloth/r1-1776-GGUF/tree/main/r1-1776-UD-Q2_K_XL) | MoE all 2.5-bit. `down_proj` in MoE mixture of 3.5/2.5-bit |
- | 3-bit        | UD-Q3_K_XL | **298GB** | Best     | [Link](https://huggingface.co/unsloth/r1-1776-GGUF/tree/main/r1-1776-UD-Q3_K_XL) | MoE Q3_K_M. Attention parts are upcasted |
- | 4-bit        | UD-Q4_K_XL | **377GB** | Best     | [Link](https://huggingface.co/unsloth/r1-1776-GGUF/tree/main/r1-1776-UD-Q4_K_XL) | MoE Q4_K_M. Attention parts are upcasted |
-
- # Finetune your own Reasoning model like R1 with Unsloth!
  We have a free Google Colab notebook for turning Llama 3.1 (8B) into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb
 
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)